\section{Introduction}\label{sec:intro}
The design of observer-based controllers has attracted significant research interest for decades because of its theoretical importance and wide applications \cite{krstic1995nonlinear,kokotovic2001constructive,khalil2017high,rajamani2017observers,borri2017luenberger}. The \emph{controller-observer separation} property, which is possessed by linear control systems, generally does not hold for nonlinear control systems \cite{praly2004relaxed,lin1995bounded,arcak2005certainty}. Examples have been given showing that a certainty-equivalence implementation of a stabilizing state feedback controller and an observer may lead to instability (e.g., see \cite{kokotovic1992joy,mazenc1994global}).
Some classes of nonlinear systems, for which the controller-observer separation is possible, have been discussed in the literature (see \cite{teel1994global,arcak2001observer,atassi2000separation} and references therein).
When external disturbances, measurement noises or parameter uncertainties are considered, the observer-based controller design becomes more challenging (e.g., see \cite{wang2017nonlinear,zemouche2017robust,dawson1992state,saberi1990observer,ankush2017state,kheloufi2013lmi}) and various observer design approaches, such as the sliding mode observer \cite{spurgeon2008sliding}, the high-gain observer \cite{prasov2013nonlinear}, and the adaptive observer \cite{marine2001robust}, have been proposed. In spite of those interesting results, many problems remain open and deserve further investigation.
The \emph{incremental quadratic constraint}, a special case of the integral quadratic constraint, is characterized by the so-called incremental quadratic inequality through incremental multiplier matrices \cite{acikmesethesis,acikmese2005observers,accikmecse2008stability,accikmecse2011observers,alto13incremental}. Many common nonlinearities, such as the globally Lipschitz nonlinearity, the sector bounded nonlinearity, the positive real nonlinearity and the polytopic Jacobian nonlinearity, are all special cases of the incremental quadratic constraint
\cite{zemouche2017robust,ankush2017state,accikmecse2011observers,arcak1999observer,fan2003observer,phanomchoeng2011nonlinear}. One special class of (control) systems that has been studied extensively in the literature is the system with globally Lipschitz nonlinearities; different observer design and observer-based control design approaches were proposed for globally Lipschitz systems, either with or without disturbances/uncertainties (e.g., see \cite{zemouche2017robust,raghavan1994observer,ekram2017observer,rajamani1998observers,chen2007robust,zemouche2013lmi} and references therein). In those approaches, quadratic Lyapunov functions were usually used for analysis and LMI-based conditions were proposed for the design purposes. For systems with nonlinearities satisfying the incremental quadratic constraints, their observer design was studied in \cite{accikmecse2011observers}, which was later generalized to the system with bounded exogenous disturbances in \cite{ankush2017state}. However, the observer-based control design for systems whose nonlinearities satisfy incremental quadratic constraints and that are affected by external disturbances is still lacking, to the best of our knowledge.
Recently, event-triggered control (ETC) has emerged as a promising control paradigm that performs sensing and/or actuation only when necessary, based on some triggering rules \cite{tabuada2007event,heemels2012introduction,postoyan2015framework,wang2011event,heemels2013periodic}. Compared with classical periodic sampled-data control, ETC is reactive and may significantly reduce the communication and actuation burden, which is beneficial for systems with limited computation or communication resources. Meanwhile, stability of the closed-loop system can be guaranteed if the triggering rules are designed appropriately. Most ETC design methods assume that full-state information is available and use it in the design of the triggering conditions. This assumption, however, is restrictive since in practice most systems only have access to their measurement outputs.
The absence of the separation principle makes the construction of observer-based ETCs difficult, because event-triggering mechanisms (ETMs) based on state feedback cannot be extended straightforwardly to the output feedback case, especially for nonlinear systems \cite{lehmann2011event,tallapragada2012event,zhang2014event,tarbouriech2016observer,donkers2012output,dolk2017output,abdelrahim2017robust}. When external disturbances are considered, the design of the event-triggering conditions becomes harder, partially because
the existence of disturbances makes the exclusion of the Zeno phenomenon (i.e., the accumulation of event times) more involved. For example, it was shown in \cite{borgers2014event} that two commonly used output-feedback-based ETC schemes, the relative and the absolute ETMs, have zero robustness for linear control systems with disturbances. For linear control systems with disturbances, stability and $\mathcal{L}_\infty$ performance were studied using dynamic output-based ETCs in \cite{donkers2012output}. For nonlinear control systems with disturbances, the $\mathcal{L}_p$ performance was investigated under different dynamic output-based ETCs in \cite{dolk2017output} and \cite{abdelrahim2017robust}, where sufficient conditions and specific design methods for the triggering conditions were given; nonetheless, almost all the sufficient conditions for nonlinear systems known in the literature are existential rather than constructive, which makes them less useful in the design process.
In this paper, we investigate the observer-based robust stabilizing controller design for a class of continuous-time nonlinear control systems whose nonlinear terms satisfy incremental quadratic constraints, in the presence of bounded external disturbances. We first consider the problem in the continuous-time domain and give sufficient conditions in the form of LMIs such that the controllers and the observers are designed simultaneously and the closed-loop system is input-to-state stable (ISS) with respect to (w.r.t.) the disturbances. Based on that, we consider the case where ETMs are implemented in the controller and observer channels, and provide conditions under which the closed-loop system is input-to-state practically stable (ISpS) w.r.t. the external disturbances and Zeno behavior is excluded.
The main contributions of the paper are summarized as follows:
(i) for two different parameterizations of the incremental multiplier matrices that characterize the nonlinear terms, we provide LMI-based sufficient conditions for the simultaneous design of the observers and the controllers in the continuous-time case; (ii) for two different ETM configurations,
we give conditions under which there exist observer-based ETCs that render the closed-loop system ISpS w.r.t. the disturbances; (iii) we explicitly design the triggering rules, which have built-in lower bounds on the inter-execution times that exclude Zeno behavior of the closed-loop system.
In particular, our theoretical results altogether provide LMI-based sufficient conditions on the co-design of the observer-based controllers and the ETMs for the incrementally quadratic nonlinear systems with disturbances.
A preliminary version of this work will appear in the
conference publication \cite{xu2018acc}. The present paper differs from
\cite{xu2018acc} in the following important ways: the incrementally quadratic nonlinear systems considered here are subject to external disturbances; the event-triggered mechanism is considered; a subsection is added to discuss the conditions of the theorems in continuous time; and all the complete proofs are included.
The remainder of the paper is organized as follows. In Section \ref{sec:formulation}, we give some background on incremental quadratic constraints and state the problems investigated in the paper. In Section \ref{sec:continuous}, we consider the continuous-time controller design case where LMI-based sufficient conditions are given respectively for two different parameterizations of the incremental multiplier matrices. In
Section \ref{sec:trigger}, we consider the event-triggered controller design where two configurations of the triggering mechanisms are discussed separately.
In Section \ref{sec:example}, a simulation example is given to illustrate the theoretical results. Finally, some conclusions are given in Section \ref{sec:conclusion}.
\emph{Notation.} Denote the set of non-negative real numbers by $\mathbb{R}_0^+$. Denote the $2$-norm of a vector $x$ by $\|x\|$ and the Frobenius norm of a matrix $P$ by $\|P\|$. Denote the minimal and maximal eigenvalues of a matrix $P$ by $\lambda_{m}(P)$ and $\lambda_{M}(P)$, respectively. Denote the identity matrix of size $n$ by $I_n$. Denote the zero matrix of size $n_1\times n_2$
by ${\bf 0}_{n_1\times n_2}$; denote the zero vector of size $n$
by ${\bf 0}_{n}$; for simplicity, the subscript will be omitted when clear from context or unimportant. For a matrix $M$, $M>0$ (resp. $M\geq 0$) means that $M$ is positive definite (resp. positive semi-definite). For symmetric matrices, we will write $*$ for entries whose values follow from symmetry.
A continuous function $f: \mathbb{R}_0^+\rightarrow \mathbb{R}_0^+$ belongs to class $\mathcal{K}$ (denoted as $f\in\mathcal{K}$) if it is strictly increasing and $f(0)=0$; $f$ belongs to class $\mathcal{K}_{\infty}$ (denoted as $f\in\mathcal{K}_{\infty}$) if $f\in\mathcal{K}$ and $f(r)\rightarrow\infty$ as $r\rightarrow \infty$. A continuous function $f: \mathbb{R}^+_0\times \mathbb{R}^+_0\rightarrow \mathbb{R}^+_0$ belongs to class $\mathcal{KL}$ (denoted as $f\in\mathcal{KL}$) if for each fixed $s$, the function $f(r,s)$ belongs to $\mathcal{K}_{\infty}$ w.r.t. $r$, and for each fixed $r$, the function $f(r,s)$ is decreasing w.r.t. $s$ with $f(r,s)\rightarrow 0$ as $s\rightarrow \infty$.
\section{Preliminaries and Problem Statement}\label{sec:formulation}
Consider the following nonlinear system
\begin{align}\label{dyn1}
\begin{cases}
\dot x=Ax+Bu+Ep(q)+E_ww,\\
y=Cx+Du+F_ww,\\
q=C_qx,
\end{cases}
\end{align}
where $x\in\mathbb{R}^{n_x}$ is the state, $u\in\mathbb{R}^{n_u}$ is the control input, $y\in\mathbb{R}^{n_y}$ is the measured output, $p:\mathbb{R}^{n_q}\rightarrow \mathbb{R}^{n_p}$ is a function representing the known nonlinearity of the system, $w\in\mathbb{R}^{n_w}$ is the unknown external disturbance or measurement noise, and $A\in\mathbb{R}^{n_x\times n_x},B\in\mathbb{R}^{n_x\times n_u},C\in\mathbb{R}^{n_y\times n_x},D\in\mathbb{R}^{n_y\times n_u},C_q\in\mathbb{R}^{n_q\times n_x},E\in\mathbb{R}^{n_x\times n_p},E_w\in\mathbb{R}^{n_x\times n_w},F_w\in\mathbb{R}^{n_y\times n_w}$ are constant matrices with proper sizes.
The characterization of the nonlinearity $p$ is based on the incremental multiplier matrix.
\begin{definition}\label{def:delQC}\cite{accikmecse2011observers}
Given a function $p:\mathbb{R}^{n_q}\rightarrow \mathbb{R}^{n_p}$, a symmetric matrix $M\in\mathbb{R}^{(n_q+n_p)\times (n_q+n_p)}$ is called an \emph{incremental multiplier matrix} ($\delta$-MM)
for $p$ if it satisfies the following \emph{incremental quadratic constraint} ($\delta$-QC) for any $q_1, q_2\in\mathbb{R}^{n_q}$:
\begin{equation}\label{eq:delQC}
\begin{pmatrix}
\delta q\\
\delta p
\end{pmatrix}^\top M
\begin{pmatrix}
\delta q\\
\delta p
\end{pmatrix}\geq 0
\end{equation}
where $\delta q=q_2-q_1$, $\delta p=p(q_2)-p(q_1)$.
\end{definition}
For a given nonlinearity $p$, its $\delta$-MM is clearly not unique. In particular, if $M$ is a $\delta$-MM for $p$, then $\lambda M$ is also a $\delta$-MM for $p$ for any $\lambda> 0$. In what follows, we will not consider the trivial case in which $M$ is positive semi-definite, since such an $M$ satisfies the condition shown in \eqref{eq:delQC} for any $p$ and thus provides no information on the nonlinearity.
In what follows, we assume that $p$ satisfies $p({\bf 0}_{n_q})={\bf 0}_{n_p}$, in which case the following useful condition holds for $M$:
\begin{equation*}
\begin{pmatrix}
q\\
p(q)
\end{pmatrix}^\top M
\begin{pmatrix}
q\\
p(q)
\end{pmatrix}\geq 0,\;\forall q\in\mathbb{R}^{n_q}.
\end{equation*}
\begin{remark}\label{remark1}
The $\delta$-QC condition \eqref{eq:delQC} includes broad classes of nonlinearities as special cases. For instance, the globally Lipschitz condition $\|p(x_2)-p(x_1)\|\leq \gamma \|x_2-x_1\|$
for some constant $\gamma\!>\!0$ can be expressed in the form of \eqref{eq:delQC} with $q=x$ and
\begin{align}\label{M1}
M=
\begin{pmatrix}
\gamma^2 I_{n_q}&{\bf 0}\\
{\bf 0}&-I_{n_p}
\end{pmatrix}.
\end{align}
The sector bounded nonlinearity $(p-K_1q)^\top S(p-K_2q)\leq 0$
for some symmetric matrix $S$ and some constant matrices $K_1,K_2$ can be expressed in the form of \eqref{eq:delQC} with
\begin{align}\label{M2}
M=
\begin{pmatrix}
-K_1^\top SK_2-K_2^\top SK_1&*\\
S(K_1+K_2)&-2S
\end{pmatrix}.
\end{align}
The positive real nonlinearity $p^\top Sq\geq 0$
for some symmetric, invertible matrix $S$ can be expressed in the form of \eqref{eq:delQC} with
\begin{align}\label{M3}
M=
\begin{pmatrix}
{\bf 0}&*\\
S&{\bf 0}
\end{pmatrix}.
\end{align}
Some other nonlinearities that can be expressed using the $\delta$-QC were discussed in \cite{accikmecse2011observers,alto13incremental}, such as the incrementally sector bounded nonlinearities, the case when the Jacobian of $p$ w.r.t. $q$ is confined in a polytope or a cone.
\end{remark}
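As a quick sanity check (not part of the formal development), the $\delta$-QC \eqref{eq:delQC} with the multiplier \eqref{M1} can be verified numerically for a concrete globally Lipschitz nonlinearity. The sketch below assumes $p(q)=\sin(q)$ applied elementwise (so $\gamma=1$) and samples random increment pairs with NumPy.

```python
import numpy as np

# Assumed example: p(q) = sin(q) elementwise is globally Lipschitz with gamma = 1,
# so M = [[gamma^2 I, 0], [0, -I]] from (M1) should be an incremental multiplier matrix.
gamma, n_q = 1.0, 3
M = np.block([[gamma**2 * np.eye(n_q), np.zeros((n_q, n_q))],
              [np.zeros((n_q, n_q)), -np.eye(n_q)]])

rng = np.random.default_rng(0)

def delta_qc(q1, q2):
    """Evaluate (dq, dp)^T M (dq, dp) for the increment between q1 and q2."""
    v = np.concatenate([q2 - q1, np.sin(q2) - np.sin(q1)])
    return v @ M @ v

samples = [delta_qc(rng.normal(size=n_q), rng.normal(size=n_q)) for _ in range(1000)]
print(min(samples) >= -1e-12)  # the sampled quadratic form is nonnegative
```

The sampled quadratic form is nonnegative over all drawn increments, consistent with Definition \ref{def:delQC}.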
Next, we introduce input-to-state practical stability and its characterization using Lyapunov functions. Consider the system
\begin{align}\label{eqnISPS}
\dot{x}=f(x,u)
\end{align}
where $f:\mathbb{R}^{n_x}\times \mathbb{R}^{n_u}\rightarrow \mathbb{R}^{n_x}$ is a locally Lipschitz function and $u:\mathbb{R}\rightarrow \mathbb{R}^{n_u}$ is a measurable essentially bounded input. Define $x(t,x_0,u)$ as the solution of \eqref{eqnISPS} with initial state $x_0$ and input $u$, which satisfies $x(0,x_0,u)=x_0$.
\begin{definition}\label{dfn:ISPS}\cite{jiang1996lyapunov}
The system \eqref{eqnISPS} is called \emph{input-to-state practically stable} (ISpS) w.r.t. $u$, if there exist functions $\beta_1\in\mathcal{KL}$, $\beta_2\in\mathcal{K}$ and a non-negative constant $d$ such that for every initial state $x_0$ and every input $u$, the trajectory $x(t,x_0,u)$ satisfies
\begin{align}\label{ISSineq3}
\|x(t,x_0,u)\|\leq \beta_1(\|x_0\|,t)+\beta_2(\|u\|_{\infty})+d,\;\forall t\geq 0
\end{align}
where $\|u\|_{\infty}:=\sup_{t\geq 0}\|u(t)\|$.
\end{definition}
When \eqref{ISSineq3} is satisfied with $d=0$, the system is said to be \emph{input-to-state stable} (ISS) w.r.t. $u$~\cite{isidori2013nonlinear}.
\begin{definition}\cite{jiang1996lyapunov}
A smooth function $V:\mathbb{R}^n\rightarrow \mathbb{R}$ is said to be an \emph{ISpS-Lyapunov function} for the system \eqref{eqnISPS} if $V$ is positive definite, radially unbounded, and there exist functions $\gamma\in\mathcal{K}_\infty,\chi\in\mathcal{K}$ and a non-negative constant $d$ such that the following condition holds:
\begin{align}\label{ISSineq}
\nabla V(x)f(x,u)\leq -\gamma(\|x\|)+\chi(\|u\|)+d.
\end{align}
\end{definition}
Instead of requiring the inequality \eqref{ISSineq}, the ISpS-Lyapunov function can also be defined equivalently as follows: a smooth, positive definite, radially unbounded function $V$ is an ISpS-Lyapunov function for the system \eqref{eqnISPS} if there exist a positive-definite function $\gamma$, a class $\mathcal{K}$ function $\chi$ and a non-negative constant $d$ such that the following condition holds \cite{jiang1996lyapunov}:
\begin{align}
\|x\|\geq \chi(\|u\|)+d\; \Rightarrow \; \nabla V(x)f(x,u)\leq -\gamma(\|x\|).\label{ISSineq4}
\end{align}
The existence of an \emph{ISpS-Lyapunov function} is a necessary and sufficient condition for the ISpS property.
\begin{proposition}\label{proISS}\cite{sontag1995characterizations}
The system \eqref{eqnISPS} is ISpS (resp. ISS) if and only if it has an ISpS- (resp. ISS-) Lyapunov function.
\end{proposition}
Particularly, if there exist a matrix $P=P^\top>0$, two constants $\alpha>0,d\geq0$ and a function $\chi\in\mathcal{K}_\infty$ such that the positive definite function $V(x)=x^\top Px$ satisfies
\begin{align}\label{ISSineq2}
\nabla V(x)f(x,u)\leq -\alpha V(x)+\chi(\|u\|)+d,
\end{align}
then $V$ is an ISpS-Lyapunov function satisfying \eqref{ISSineq} with $\gamma(\|x\|)=\alpha\lambda_{m}(P)\|x\|^2$, implying that \eqref{eqnISPS} is ISpS w.r.t. $u$.
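To make the quadratic condition \eqref{ISSineq2} concrete, the following sketch checks it pointwise for an assumed scalar example (not from the paper): for $\dot x=-ax+u$ and $V(x)=x^2$, Young's inequality $2xu\leq ax^2+u^2/a$ gives $\nabla V(x)f(x,u)\leq -aV(x)+u^2/a$, i.e., \eqref{ISSineq2} with $\alpha=a$, $\chi(s)=s^2/a$ and $d=0$.

```python
import numpy as np

# Assumed scalar system xdot = f(x,u) = -a*x + u with V(x) = x**2:
#   dV/dt = -2a x^2 + 2 x u <= -a x^2 + u^2/a   (by Young's inequality),
# which is exactly the form -alpha*V + chi(|u|) with alpha = a, chi(s) = s^2/a.
a = 2.0
rng = np.random.default_rng(1)
x, u = rng.normal(size=10000), rng.normal(size=10000)
lhs = -2 * a * x**2 + 2 * x * u      # nabla V(x) f(x,u)
rhs = -a * x**2 + u**2 / a           # -alpha V(x) + chi(|u|)
print(np.all(lhs <= rhs + 1e-9))     # -> True
```

The gap $rhs-lhs=(\sqrt{a}\,x-u/\sqrt{a})^2$ is a perfect square, so the inequality holds at every sample.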
\begin{figure}[!hbt]
\begin{center}
\subfigure[]{\includegraphics[height=3.0cm]{Config4}}
\subfigure[]{\includegraphics[height=3.5cm]{Config1}}
\subfigure[]{\includegraphics[height=3.5cm]{Config3}}
\caption{(a) Configuration of the continuous-time closed-loop system. (b) Configuration of the closed-loop system where the ETM is implemented in the controller channel. (c) Configuration of the closed-loop system where ETMs are implemented in both the output and the controller channel asynchronously.}\label{figConfig1}
\end{center}
\end{figure}
Now we state the problems that are studied in the paper.
\emph{Consider a system described by \eqref{dyn1} where the nonlinear term $p$ satisfies the $\delta$-QC inequality \eqref{eq:delQC} for some $M$.\\
i) For the configuration shown in Fig. \ref{figConfig1} (a), design a continuous-time, observer-based feedback controller such that the closed-loop system is ISS w.r.t. $w$;\\
ii) For the configuration shown in Fig. \ref{figConfig1} (b), design an observer-based feedback controller and a triggering rule for the ETM, such that the closed-loop system is ISpS w.r.t. $w$ and the Zeno behavior is avoided; \\
iii) For the configuration shown in Fig. \ref{figConfig1} (c), design an observer-based feedback controller and two triggering rules for the ETMs, such that the closed-loop system is ISpS w.r.t. $w$ and the Zeno behavior is avoided.}
\section{Continuous-time Control Design}\label{sec:continuous}
In this section, we design a continuous-time observer and a continuous-time feedback control law for the system \eqref{dyn1}, such that the closed-loop system is ISS w.r.t. $w$. LMI-based sufficient conditions will be given for the simultaneous design of the observer and the controller.
We propose an observer that has the following form:
\begin{align}\label{obser1}
\begin{cases}
\dot{\hat{x}}\!=\!A\hat x+Bu+E p(\hat q+L_1(\hat y-y))+L_2(\hat y-y),\\
\hat y\!=\!C\hat x+Du,\\
\hat q\!=\!C_q\hat x,
\end{cases}
\end{align}
where $L_1,L_2$ are matrices to be designed.
Clearly, the proposed observer contains a copy of the plant and two correction terms, the \emph{nonlinear injection term} $L_1(\hat y-y)$ and the Luenberger-type \emph{correction term} $L_2(\hat y-y)$.
Based on the observer \eqref{obser1}, we design the feedback controller $u$ as
\begin{align}\label{input1}
u&=k(\hat x)
\end{align}
where $k:\mathbb{R}^{n_x}\rightarrow \mathbb{R}^{n_u}$ is a function that
has the form of
\begin{align}\label{inputform}
k(x)&=K_1x+K_2p(C_qx)
\end{align}
with matrices $K_1\in\mathbb{R}^{n_u\times n_x}$, $K_2\in\mathbb{R}^{n_u\times n_p}$ to be designed.
Define the estimation error by $e(t)=x(t)-\hat x(t)$.
Then, \eqref{input1} can be rewritten as
$$
u=k(x)-\Delta k(x,\hat x)
$$
where $\Delta k(x,\hat x)=k(x)-k(\hat{x})$.
Recalling \eqref{inputform}, $\Delta k$ can be expressed as $\Delta k=K_1e-K_2\Delta p$
where
\begin{align}
\Delta p=p(\hat q)-p(q).\label{Deltap}
\end{align}
The closed-loop system resulting from the observer-based controller \eqref{input1} can now be expressed as
\begin{align}\label{dyn3-1}
\begin{cases}
\dot x\!=\!(A\!+\!BK_1)x\!+\!(E\!+\!BK_2)p\!-\!B\Delta k\!+\!E_ww, \\
\dot e\!=\!(A\!+\!L_2C)e\!-\!E\delta p\!+\!(E_w\!+\!L_2F_w)w,
\end{cases}
\end{align}
where
\begin{align}\label{delp}
\begin{cases}
\delta p=p(q+\delta q)-p(q),\\
\delta q=-(C_q+L_1C)e-L_1F_ww
\end{cases}
\end{align}
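The identity behind \eqref{delp} can be cross-checked numerically: since $\hat y-y=-Ce-F_ww$, the argument of $p$ in the observer \eqref{obser1} equals $q+\delta q$. The sketch below uses random placeholder matrices of assumed dimensions.

```python
import numpy as np

# Consistency check of (delp) with random placeholder data: the argument of p in the
# observer, C_q xhat + L1 (yhat - y), equals q + delta_q with
#   delta_q = -(C_q + L1 C) e - L1 F_w w,   since yhat - y = -C e - F_w w.
rng = np.random.default_rng(4)
n_x, n_y, n_q, n_w = 4, 2, 3, 2
C, C_q = rng.normal(size=(n_y, n_x)), rng.normal(size=(n_q, n_x))
F_w, L1 = rng.normal(size=(n_y, n_w)), rng.normal(size=(n_q, n_y))
x, xhat, w = rng.normal(size=n_x), rng.normal(size=n_x), rng.normal(size=n_w)
e = x - xhat

yhat_minus_y = C @ xhat - (C @ x + F_w @ w)   # the D u terms cancel
arg = C_q @ xhat + L1 @ yhat_minus_y          # argument of p in the observer
delta_q = -(C_q + L1 @ C) @ e - L1 @ F_w @ w
print(np.allclose(arg, C_q @ x + delta_q))    # -> True
```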
Let $z=
\begin{pmatrix}
x\\
e
\end{pmatrix}$. Then dynamics \eqref{dyn3-1} are expressed compactly as
\begin{align}
\dot z=A_cz+H_1p+H_2\delta p+H_3\Delta p +H_4w\label{eqthm3-1}
\end{align}
where $\Delta p$ is given in \eqref{Deltap}, $\delta p$ is given in \eqref{delp}, and
\begin{align}
A_c&=
\begin{pmatrix}
A+BK_1&-BK_1\\
{\bf 0}& A+L_2 C
\end{pmatrix},\label{eqA}
\end{align}
\begin{align}\label{H1}
\begin{cases}
H_1=
\begin{pmatrix}
E+BK_2\\
{\bf 0}
\end{pmatrix},\;H_2=
\begin{pmatrix}
{\bf 0}\\
-E
\end{pmatrix},\\
H_3=
\begin{pmatrix}
BK_2\\
{\bf 0}
\end{pmatrix},\;H_4=
\begin{pmatrix}
E_w\\
E_w+L_2F_w
\end{pmatrix}
\end{cases}
\end{align}
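The block structure of \eqref{eqA}-\eqref{H1} can be assembled directly; the sketch below uses random placeholder matrices with assumed dimensions and only illustrates the shapes of $A_c$ and $H_1,\dots,H_4$.

```python
import numpy as np

# Dimension sketch of A_c and H_1..H_4 for hypothetical sizes; all matrices are
# random placeholders, only the block structure is illustrated.
n_x, n_u, n_p, n_w, n_y = 4, 2, 3, 2, 2
rng = np.random.default_rng(2)
A, B, E  = rng.normal(size=(n_x, n_x)), rng.normal(size=(n_x, n_u)), rng.normal(size=(n_x, n_p))
E_w, F_w = rng.normal(size=(n_x, n_w)), rng.normal(size=(n_y, n_w))
C        = rng.normal(size=(n_y, n_x))
K1, K2   = rng.normal(size=(n_u, n_x)), rng.normal(size=(n_u, n_p))
L2       = rng.normal(size=(n_x, n_y))

Z = np.zeros((n_x, n_x))
A_c = np.block([[A + B @ K1, -B @ K1],
                [Z,           A + L2 @ C]])       # state/error coupling block
H1 = np.vstack([E + B @ K2, np.zeros((n_x, n_p))])
H2 = np.vstack([np.zeros((n_x, n_p)), -E])
H3 = np.vstack([B @ K2, np.zeros((n_x, n_p))])
H4 = np.vstack([E_w, E_w + L2 @ F_w])
print(A_c.shape, H1.shape, H4.shape)  # (8, 8) (8, 3) (8, 2)
```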
The following proposition shows a sufficient condition for the closed-loop system \eqref{eqthm3-1} to be ISS w.r.t. $w$.
\begin{proposition}\label{prp1}
Consider the system described by \eqref{dyn1}-\eqref{eq:delQC} with $M$ a $\delta$-MM for $p$.
If there exist matrices $L_1\in\mathbb{R}^{n_q\times n_y},L_2\in\mathbb{R}^{n_x\times n_y},K_1\in\mathbb{R}^{n_u\times n_x},K_2\in\mathbb{R}^{n_u\times n_p},P\in\mathbb{R}^{2n_x\times 2n_x}$ with $P>0$, and real numbers $\alpha_0>0,\mu>0,\sigma_1\geq 0,\sigma_2\geq 0,\sigma_3\geq 0$ such that
\begin{align}
&
\begin{pmatrix}
S_0&S_1\\
S_1^\top&{\bf 0}
\end{pmatrix}+\sigma_1S_2^{\top} MS_2+\sigma_2 S_3^{\top} MS_3\nonumber\\
&\quad\quad\quad+\sigma_3S_4^{\top} MS_4-\mu S_5^\top S_5\leq 0\label{recomLMI}
\end{align}
where
\begin{align*}
S_0&=PA_c+A_c^\top P+\alpha_0 P,\\
S_1&=(PH_1,PH_2,PH_3,PH_4),\\
S_2&=
\begin{pmatrix}
C_q,{\bf 0}_{n_q\times (n_x+3n_p+n_w)}\\
{\bf 0}_{n_p\times 2n_x},I_{n_p},{\bf 0}_{n_p\times (2n_p+n_w)}
\end{pmatrix},\\
S_3&=
\begin{pmatrix}
{\bf 0}_{n_q\times n_x},-(C_q+L_1C),{\bf 0}_{n_q\times 3n_p},-L_1F_w\\
{\bf 0}_{n_p\times (2n_x+n_p)},I_{n_p},{\bf 0}_{n_p\times (n_p+n_w)}
\end{pmatrix},\\
S_4&=
\begin{pmatrix}
{\bf 0}_{n_q\times n_x},-C_q,{\bf 0}_{n_q\times (3n_p+n_w)}\\
{\bf 0}_{n_p\times (2n_x+2n_p)},I_{n_p},{\bf 0}_{n_p\times n_w}
\end{pmatrix},\\
S_5&=({\bf 0}_{n_w\times (2n_x+3n_p) },I_{n_w}),
\end{align*}
then the closed-loop system \eqref{eqthm3-1} is ISS w.r.t. $w$. Furthermore, it holds that $\dot V\leq -\alpha_0 V+\mu w^\top w$ where $V(z)=z^\top Pz$.
\end{proposition}
The proof of the proposition is given in the appendix. Clearly, the matrix inequality \eqref{recomLMI} is not an LMI. Hence, we cannot solve for $L_1,L_2,K_1,K_2$ reliably via the \emph{interior point method} (IPM) algorithms of convex optimization. In the next two subsections, we will consider two parameterizations of the $\delta$-MM $M$ and provide sufficient LMI conditions for solving for $L_1,L_2,K_1,K_2$ simultaneously.
\subsection{Block Diagonal Parameterization}
This subsection considers a block diagonal parameterization of the $\delta$-MM for $p$.
We first make the following two assumptions on the parameterizations of $M$.
{\color{black}
\begin{assumption}\label{ass1}
There exist a set $\mathcal{N}_1$ of matrix pairs $(X_1,Y_1)$ with $X_1\in\mathbb{R}^{n_q\times n_q}$, $Y_1\in\mathbb{R}^{n_p\times n_p}$ symmetric, and an invertible matrix $T_1$ with
\begin{align}\label{eqT}
T_1=
\begin{pmatrix}
T_{11}&T_{12}\\
T_{13}&T_{14}
\end{pmatrix}
\end{align}
and $T_{14} \in\mathbb{R}^{n_p\times n_p}$ invertible, such that $M_1$ given below is a $\delta$-MM of $p$ for all $(X_1,Y_1)\in\mathcal{N}_1$:
\begin{align}
\label{eq:Mass1}
M_1&=T_1^\top \tilde M_1T_1\;\;\mbox{where}\;\;\tilde M_1=
\begin{pmatrix}
X_1&{\bf 0}\\
{\bf 0}&-Y_1
\end{pmatrix}.
\end{align}
\end{assumption}
\begin{assumption}\label{ass2}
There exist a set $\mathcal{N}_2$ of matrix pairs
$(X_2,Y_2)$ with $X_2\in\mathbb{R}^{n_q\times n_q}$, $Y_2\in\mathbb{R}^{n_p\times n_p}$ symmetric and invertible, and an invertible matrix $T_2$ with
\begin{align}\label{eqT2}
T_2=
\begin{pmatrix}
T_{21}&T_{22}\\
T_{23}&T_{24}
\end{pmatrix}
\end{align}
and $T_{24} \in\mathbb{R}^{n_p\times n_p}$ invertible, such that $M_2$ given below is a $\delta$-MM of $p$ for all $(X_2,Y_2)\in\mathcal{N}_2$:
\begin{align}
\label{eq:Mass2}
M_2&=T_2^\top \tilde M_2T_2\;\;\mbox{where}\;\;\tilde M_2=
\begin{pmatrix}
X_2^{-1}&{\bf 0}\\
{\bf 0}&-Y_2^{-1}
\end{pmatrix}.
\end{align}
\end{assumption}
}
\begin{remark}\label{remark2}
For the globally Lipschitz nonlinearity $\|p(x_2)-p(x_1)\|\leq \gamma \|x_2-x_1\|$, the matrix $M$ shown in \eqref{M1} satisfies Assumptions \ref{ass1} and \ref{ass2} if we choose
\begin{align*}
T_1\!=\!T_2\!=\!
\begin{pmatrix}
\gamma I&{\bf 0}\\
{\bf 0}&I
\end{pmatrix},\;\mathcal{N}_1\!=\!\mathcal{N}_2\!=\!\{(\lambda I,\lambda I)|\lambda>0\}.
\end{align*}
For the sector bounded nonlinearity $(p-K_1q)^\top S(p-K_2q)\leq 0$ where $S$ is symmetric and invertible, the matrix $M$ shown in \eqref{M2} satisfies Assumptions \ref{ass1} and \ref{ass2} if we choose
\begin{align*}
T_1\!=\!T_2\!=\!
\begin{pmatrix}
K_2-K_1&{\bf 0}\\
K_2+K_1&-2I
\end{pmatrix},\;
\mathcal{N}_1\!=\!\mathcal{N}_2\!=\!\{(\lambda S,\lambda S)|\lambda>0\}.
\end{align*}
For the positive real nonlinearity $p^\top Sq\geq 0$ where $S$ is symmetric and invertible, the matrix $M$ shown in \eqref{M3} satisfies Assumptions \ref{ass1} and \ref{ass2} if we choose
\begin{align*}
T_1\!=\!T_2\!=\!
\begin{pmatrix}
I&I\\
I&-I
\end{pmatrix},\;
\mathcal{N}_1\!=\!\mathcal{N}_2\!=\!\{(\lambda S,\lambda S)|\lambda>0\}.
\end{align*}
{\color{black}
$\mathcal{N}_1$ and $\mathcal{N}_2$ do not have to be the set of scalings of a matrix pair as in the examples above. For instance, for a nonlinearity whose Jacobian is confined within a polytope or a cone, the $\mathcal{N}_1$ that satisfies Assumption \ref{ass1} (or the $\mathcal{N}_2$ that satisfies Assumption \ref{ass2}) is characterized via matrix inequalities (see Section 5 in \cite{accikmecse2011observers} for more details). Furthermore, $T_1$ does not necessarily have to be chosen equal to $T_2$.}
\end{remark}
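The sector-bounded factorization in Remark \ref{remark2} can be cross-checked numerically: with $\lambda=1/2$, $T_1^\top\tilde M_1T_1$ reproduces the multiplier \eqref{M2} exactly. The sketch below uses random toy data; the matrices $K_1,K_2,S$ here are local placeholders for the sector bounds, unrelated to the controller gains of the same name.

```python
import numpy as np

# Numerical cross-check (toy data): with lam = 1/2,
#   T^T diag(lam*S, -lam*S) T  ==  M from (M2)
# for T = [[K2-K1, 0], [K2+K1, -2I]] and sector matrices K1, K2, symmetric S.
rng = np.random.default_rng(3)
n = 3
S = rng.normal(size=(n, n)); S = S + S.T          # symmetric (invertible a.s.)
K1, K2 = rng.normal(size=(n, n)), rng.normal(size=(n, n))
lam = 0.5

T = np.block([[K2 - K1, np.zeros((n, n))],
              [K2 + K1, -2 * np.eye(n)]])
Mt = np.block([[lam * S, np.zeros((n, n))],
               [np.zeros((n, n)), -lam * S]])
M = np.block([[-K1.T @ S @ K2 - K2.T @ S @ K1, (K1 + K2).T @ S],
              [S @ (K1 + K2), -2 * S]])
print(np.allclose(T.T @ Mt @ T, M))  # -> True
```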
Because $T_1$ in Assumption \ref{ass1} and $T_2$ in Assumption \ref{ass2} are invertible, the matrix $\Gamma_{i1} (i=1,2)$ defined as
\begin{align}
\Gamma_{i1}&=T_{i1}-T_{i2}T_{i4}^{-1}T_{i3}\label{gamma1}
\end{align}
is also invertible by the matrix inversion lemma. Furthermore, we define the matrix $\Gamma_{i2} (i=1,2)$ as
\begin{align}
\Gamma_{i2}=T_{i2}T_{i4}^{-1}.\label{gamma2}
\end{align}
Now we are ready to present the first result of this section. The following theorem provides sufficient conditions for the design of matrices $L_1,L_2$ in the observer \eqref{obser1} and matrices $K_1,K_2$ in the controller \eqref{input1}, when the $\delta$-MM can be parameterized in a block diagonal manner.
The proof of this theorem is given in the appendix.
\begin{theorem}\label{thm1}
{\color{black} Consider the system described by \eqref{dyn1}-\eqref{eq:delQC}. Suppose that there exist $T_1$, $\mathcal{N}_1$ such that Assumption \ref{ass1} holds, and there exist $T_2$, $\mathcal{N}_2$ such that Assumption \ref{ass2} holds with $M_2=
\begin{pmatrix}
M_{21}&M_{22}\\
M_{23}&M_{24}
\end{pmatrix}$ a $\delta$-MM for $p$ where $M_{21}\in\mathbb{R}^{n_q\times n_q}$, $M_{24}\in\mathbb{R}^{n_p\times n_p}$ and $M_{24}<0$.
Suppose that there exist positive numbers $\alpha_1,\alpha_2,\mu_1,\mu_2>0$ and matrices $R_1,R_2,R_3,R_4,P_1=P_1^\top>0,P_2=P_2^\top>0,X_1=X_1^\top>0,Y_1=Y_1^\top,X_2=X_2^\top>0,Y_2=Y_2^\top>0$, such that $(X_1,Y_1)\in\mathcal{N}_1$, $(X_2,Y_2)\in\mathcal{N}_2$ and
the following \eqref{LMI1}-\eqref{LMI2} hold:}
\begin{align}
\text{\rm (observer ineq.)}\; &
\begin{pmatrix}
\Phi-\varphi^\top Y_1\varphi & \phi^\top\\
\phi & -X_1
\end{pmatrix}\leq 0, \label{LMI1}\\
\text{\rm (controller ineq.)} \; &
\begin{pmatrix}
\Psi-\varphi^\top Y_2\varphi & \psi^\top\\
\psi & -X_2
\end{pmatrix}\leq 0, \label{LMI2}
\end{align}
where
\begin{align}
\Phi&=
\begin{pmatrix}
\Phi_0 & -P_1\tilde E_1 & P_1 E_w+R_1 \\
* & {\bf 0} & {\bf 0} \\
* &*&-\mu_1I
\end{pmatrix},\label{Phi}\\
\Phi_0&=\tilde A_1^\top P_1+P_1\tilde A_1+ C^\top R_1^\top+R_1 C+\alpha_1 P_1,\label{Phi0}\\
\Psi&=
\begin{pmatrix}
\Psi_0& \tilde E_2Y_2 +BR_4 & E_w \\
* & {\bf 0} &{\bf 0} \\
* & *&-\mu_2I
\end{pmatrix},\label{Psi}\\
\Psi_0&=\tilde A_2 P_2+P_2\tilde A_2^\top+ B R_3+R_3^\top B^\top+\alpha_2 P_2,\label{Psi0}\\
\phi&=(-(X_1 \Gamma_{11} C_q+R_2 C) , X_1 \Gamma_{12}, -R_2F_w),\label{phi}\\
\varphi&=({\bf 0}_{n_p\times n_x}, I_{n_p}, {\bf 0}_{n_p\times n_w}),\label{varphi}\\
\psi&=(\Gamma_{21} C_q P_2 , \Gamma_{22}Y_2 ,{\bf 0}_{n_q\times n_w}),\\
\tilde A_i&=A-ET_{i4}^{-1}T_{i3}C_q, \;i=1,2,\label{tilA}\\
\tilde E_i&=ET_{i4}^{-1},\;i=1,2. \label{tilE}
\end{align}
with $\Gamma_{i1}(i=1,2)$ given in \eqref{gamma1} and $\Gamma_{i2}(i=1,2)$ given in \eqref{gamma2}.
If $K_1,K_2$ and $L_1,L_2$ are chosen as
\begin{align}\label{L1}
\begin{cases}
L_1=\Gamma_{11}^{-1}X_1^{-1}R_2,\\%\label{L1}\\
L_2=P_1^{-1}R_1+ET_{14}^{-1}T_{13}L_1,\\%\label{L2}\\
K_1=R_3P_2^{-1} + K_2T_{24}^{-1}T_{23}C_q,\\%\label{eqK1}\\
K_2=R_4 Y_2^{-1}T_{24}
\end{cases}
\end{align}
then
the closed-loop system \eqref{dyn3-1} is ISS w.r.t. $w$.
\end{theorem}
If $\alpha_1$ is fixed, then \eqref{LMI1} is an LMI in the decision variables $\mu_1,P_1,R_1,R_2,X_1,Y_1$; if $\alpha_2$ is fixed, then \eqref{LMI2} is an LMI in the decision variables $\mu_2,P_2,R_3,R_4,X_2,Y_2$. Hence, the synthesis of the observer gains $L_1,L_2$ and the controller gains $K_1,K_2$ is decoupled, indicating a controller-observer separation.
\begin{remark}\label{rem-recompute}
The proof of Theorem \ref{thm1} indicates that
larger $\alpha_1,\alpha_2$ result in a larger function $\gamma(\cdot)\in\mathcal{K}_\infty$ in \eqref{ISSineq}, which in turn indicates a faster convergence rate for the system \eqref{eqthm3-1}. Line searches can be used to optimize $\alpha_1,\alpha_2$ in \eqref{LMI1}-\eqref{LMI2}.
The convergence rate given in the proof of Theorem \ref{thm1} can be improved by re-computing the ISS-Lyapunov function. Note that the closed-loop system satisfies $\dot V\leq -\alpha_0 V+\mu\|w\|^2$ for an ISS-Lyapunov function $V=z^\top Pz$ where $P=P^\top>0$ and $\alpha_0,\mu>0$ are given explicitly in the proof of Theorem \ref{thm1}. The matrix $P$ can be re-computed via Proposition \ref{prp1} such that $V$ results in better performance guarantees, i.e., better convergence rate and smaller ultimate bound.
Specifically, the matrix $P$ does not need to be the diagonal matrix shown in \eqref{LyaFun}; instead, after the matrix gains $K_1,K_2,L_1,L_2$ are obtained, we can solve for $P,\alpha_0,\mu$ satisfying \eqref{recomLMI} and try to maximize $\alpha_0$ and/or minimize $\mu$.
\end{remark}
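The line search mentioned in Remark \ref{rem-recompute} can be illustrated on a toy decoupled example: for a fixed Hurwitz $A_{\rm cl}$ and Lyapunov matrix $P$ (assumed data, here $A_{\rm cl}={\rm diag}(-1,-2)$ and $P=I$, not from the paper), bisection finds the largest $\alpha$ such that $A_{\rm cl}^\top P+PA_{\rm cl}+\alpha P\leq 0$.

```python
import numpy as np

# Toy sketch of the scalar line search over the decay rate: bisect for the largest
# alpha with A_cl^T P + P A_cl + alpha P <= 0 (largest eigenvalue nonpositive).
A_cl, P = np.diag([-1.0, -2.0]), np.eye(2)

def feasible(alpha):
    Q = A_cl.T @ P + P @ A_cl + alpha * P
    return np.max(np.linalg.eigvalsh(Q)) <= 1e-10

lo, hi = 0.0, 10.0
for _ in range(50):                      # bisection on the scalar alpha
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
print(round(lo, 6))  # -> 2.0
```

For this example $A_{\rm cl}^\top P+PA_{\rm cl}+\alpha P={\rm diag}(\alpha-2,\alpha-4)$, so the search converges to $\alpha=2$, the slowest mode's decay rate.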
\subsection{Block Anti-Triangular Parameterization}
In this subsection, we consider a block anti-triangular parameterization of the $\delta$-MM for $p$. The following assumption on the parameterization of $M$ is given first.
{\color{black}
\begin{assumption}\label{ass3}
There exist a set $\mathcal{N}$ of matrix pairs $(X,Y)$ with $X\in\mathbb{R}^{n_q\times n_p}$, $Y\in\mathbb{R}^{n_p\times n_p}$, and an invertible matrix $T\in\mathbb{R}^{(n_p+n_q)\times (n_p+n_q)}$, such that $M$ given below is a $\delta$-MM of $p$ for all $(X,Y)\in\mathcal{N}$:
\begin{align}
M=T^\top \tilde M T\;\mbox{where}\;\tilde M=
\begin{pmatrix}
{\bf 0}&X\\
X^\top&Y
\end{pmatrix}.\label{eq:Mass3}
\end{align}
\end{assumption}
}
The following theorem provides sufficient conditions for the design of matrices $L_1,L_2,K_1,K_2$ when the $\delta$-MM $M$ can be parameterized in a block anti-triangular manner. The proof of the theorem is given in the appendix.
\begin{theorem}\label{thm2}
Consider the system described by \eqref{dyn1}-\eqref{eq:delQC}. Suppose that Assumption \ref{ass3} holds for some $T_1,\mathcal{N}_1$ and $T_2,\mathcal{N}_2$, respectively, where $T_1$ and $T_2$ are partitioned as in \eqref{eqT} and in \eqref{eqT2}, respectively, with $T_{14}$, $T_{24}$ invertible.
Suppose that there exist positive constants $\alpha_1,\alpha_2,\mu_1,\mu_2$ and matrices $R_1,R_2,R_3,R_4,X_1,Y_1,X_2,Y_2,P_1=P_1^\top>0,P_2=P_2^\top>0$, such that $(X_1,Y_1)\in\mathcal{N}_1$, $(X_2,Y_2)\in\mathcal{N}_2$, $X_1$ is of full column rank, and the following \eqref{LMI3}, \eqref{LMI4}, \eqref{LMI5} hold:
\begin{align}
&\Phi+\Upsilon_1^\top \tilde M_1\Upsilon_1+\Upsilon_2^\top \Upsilon_1+\Upsilon_1^\top \Upsilon_2\leq 0,\label{LMI3}\\
&\Psi+\Upsilon_3^\top \tilde M_2\Upsilon_3+\Upsilon_4^\top \Upsilon_3+\Upsilon_3^\top \Upsilon_4\leq 0,\label{LMI4}\\
&\Gamma_{12}^\top X_1+X_1^\top\Gamma_{12}+Y_1<0,\label{LMI5}
\end{align}
where
\begin{align}
\tilde M_1&=
\begin{pmatrix}
{\bf 0}&X_1\\
X_1^\top&Y_1
\end{pmatrix},\;\tilde M_2=
\begin{pmatrix}
{\bf 0}&X_2\\
X_2^\top&Y_2
\end{pmatrix},\\
\Psi&=
\begin{pmatrix}
\Psi_0& \tilde E_2+BR_4 & E_w \\
* & {\bf 0} &{\bf 0} \\
* & *&-\mu_2I
\end{pmatrix},
\end{align}
\begin{align}\label{upsilon1}
\begin{cases}
\Upsilon_1=
\begin{pmatrix}
-\Gamma_{11} C_q& \Gamma_{12}& {\bf 0}_{n_q\times n_w} \\
{\bf 0}_{n_p\times n_x}& I_{n_p}& {\bf 0}_{n_p\times n_w}
\end{pmatrix},\\%\label{upsilon1}\\
\Upsilon_2=
\begin{pmatrix}
{\bf 0}_{n_q\times n_x} & {\bf 0}_{n_q\times n_p} & {\bf 0}_{n_q\times n_w} \\
-R_2 C& {\bf 0}_{n_p} & -R_2F_w \\
\end{pmatrix},\\%\label{upsilon2}\\
\Upsilon_3=
\begin{pmatrix}
{\bf 0}_{n_q\times n_x} & \Gamma_{22}& {\bf 0}_{n_q\times n_w} \\
{\bf 0}_{n_p\times n_x} & I_{n_p}& {\bf 0}_{n_p\times n_w} \\
\end{pmatrix},\\%\label{upsilon3}\\
\Upsilon_4=
\begin{pmatrix}
{\bf 0}_{n_q\times n_x} & {\bf 0}_{n_q\times n_p} & {\bf 0}_{n_q\times n_w} \\
X_2\Gamma_{21}C_qP_2& {\bf 0}_{n_p} & {\bf 0}_{n_p\times n_w} \\
\end{pmatrix}
\end{cases}
\end{align}
with $\Gamma_{i1}(i=1,2)$ given in \eqref{gamma1}, $\Gamma_{i2}(i=1,2)$ given in \eqref{gamma2}, $\Phi$ given in \eqref{Phi}, $\Phi_0$ given in \eqref{Phi0}, $\Psi_0$ given in \eqref{Psi0}, $\varphi$ given in \eqref{varphi}, $\tilde A_i(i=1,2)$ given in \eqref{tilA} and $\tilde E_i(i=1,2)$ given in \eqref{tilE}.
If $L_2$, $K_1$ are given by \eqref{L1}, and $L_1,K_2$ are chosen as
\begin{align}\label{L1v2}
\begin{cases}
L_1=\Gamma_{11}^{-1}X_1^{\dagger}R_2,\\%\label{L1v2}\\
K_2=R_4 T_{24}
\end{cases}
\end{align}
where $X_1^{\dagger}$ is the pseudoinverse of $X_1$, then
the closed-loop system \eqref{dyn3-1} is ISS w.r.t. $w$.
\end{theorem}
Note that \eqref{LMI3} is an LMI in the decision variables $\mu_1,P_1,R_1,R_2$ when $\alpha_1$ is fixed, \eqref{LMI4} is an LMI in the decision variables $\mu_2,P_2,R_3,R_4$ when $\alpha_2$ and $X_2$ are fixed, and \eqref{LMI5} is an LMI in the decision variables $X_1,Y_1$. Hence, we can fix $\alpha_1,\alpha_2,X_2$ and solve \eqref{LMI3}-\eqref{LMI5}.
As discussed in Remark \ref{rem-recompute}, when $L_1,L_2,K_1,K_2$ are obtained, a re-computation for $P,\alpha_0,\mu$ using Proposition \ref{prp1} may result in better performance guarantees. It is also worth mentioning that Theorem \ref{thm1} and Theorem \ref{thm2} provide LMI-based sufficient conditions to render the closed-loop system globally exponentially stable if there is no disturbance (i.e., $w\equiv 0$ or $E_w=F_w={\bf 0}$).
\subsection{Discussions}\label{subsecdiscuss}
In Theorem \ref{thm1}, the condition $M_{24}<0$ is used to show the boundedness of $\|\Delta k\|/\|e\|$. On the one hand, it is clear that this condition is not always satisfiable. For instance, the nonlinearity $p(q)=x|x|$ with $q=x$ is a positive real nonlinearity that
satisfies $(x_1|x_1|-x_2|x_2|)(x_1-x_2)\geq 0$ for any $x_1,x_2\in\mathbb{R}$. As shown in Remark \ref{remark1}, $p$
can be expressed in the form of \eqref{eq:delQC} with $M=
\begin{pmatrix}
0&1\\
1&0
\end{pmatrix}$
whose sub-matrix $M_{24}$ does not satisfy the condition $M_{24}<0$.
On the other hand, if $p$ is globally Lipschitz with Lipschitz constant $\ell$, then $\|\Delta k\|\leq
(\|K_1\|+\ell\|K_2\|\|C_q\|)\|e\|$, implying the boundedness of $\|\Delta k\|/\|e\|$. Hence, in such a case, the condition $M_{24}<0$ in Theorem \ref{thm1} is no longer needed.
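As an illustrative spot-check of this bound (our construction: we take $p=\sin$ applied elementwise, so $\ell=1$, and read the certainty-equivalence mismatch as $\Delta k=K_1e+K_2[p(C_q\hat x)-p(C_qx)]$, up to sign conventions):

```python
import numpy as np

rng = np.random.default_rng(0)

# p(q) = sin(q) elementwise: globally Lipschitz with constant ell = 1.
ell = 1.0
p = np.sin

for _ in range(1000):
    K1 = rng.normal(size=(2, 4))
    K2 = rng.normal(size=(2, 3))
    Cq = rng.normal(size=(3, 4))
    x, xhat = rng.normal(size=4), rng.normal(size=4)
    e = xhat - x
    # Certainty-equivalence input mismatch (our reading of Delta k).
    dk = K1 @ e + K2 @ (p(Cq @ xhat) - p(Cq @ x))
    # Spectral norms give the bound (||K1|| + ell ||K2|| ||Cq||) ||e||.
    bound = (np.linalg.norm(K1, 2)
             + ell * np.linalg.norm(K2, 2) * np.linalg.norm(Cq, 2)) \
            * np.linalg.norm(e)
    assert np.linalg.norm(dk) <= bound + 1e-9
```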
The following assumption gives a condition weaker than global Lipschitz continuity.
\begin{assumption}\label{ass4}
Given a function $p:\mathbb{R}^{n_q}\rightarrow \mathbb{R}^{n_p}$, there exist a $\mathcal{K}$ function $g_1$ and a non-decreasing function $g_2:[0,\infty)\rightarrow[0,\infty)$ such that
\begin{align}
\|p(C_q(x+\Delta x))-p(C_qx)\|\leq g_1(\|\Delta x\|)\|C_qx\|\label{ineqass4}
\end{align}
for all $x,\Delta x$ that satisfy $\|C_qx\|\geq g_2(\|\Delta x\|)$.
\end{assumption}
A useful condition, under which Assumption \ref{ass4} can be verified, is the existence of two functions $\hat g_1\in\mathcal{K},\hat g_2\in\mathcal{K}$, such that $\|p(C_q(x+\Delta x))-p(C_qx)\|\leq \hat g_1(\|\Delta x\|) + \hat g_2(\|\Delta x\|)\|C_qx\|$ for all $x,\Delta x$.
Indeed, this inequality implies Assumption \ref{ass4} with $g_1(\|\Delta x\|)=\hat g_1(\|\Delta x\|)+\hat g_2(\|\Delta x\|)\in\mathcal{K}$ and $g_2(\|\Delta x\|)\equiv 1$. For the nonlinearity $p(q)=x|x|$ discussed above, it is easy to verify that it satisfies such an inequality, and therefore, Assumption \ref{ass4}.
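A numerical spot-check of this claim for the scalar case $q=x$ (so $C_q=1$); the candidate functions $\hat g_1(s)=2s^2$ and $\hat g_2(s)=2s$ are our choices, justified by the mean-value bound $|p(a)-p(b)|\leq 2\max(|a|,|b|)\,|a-b|$ for $p(q)=q|q|$:

```python
import numpy as np

rng = np.random.default_rng(1)

def p(q):                     # the scalar nonlinearity p(q) = q|q|
    return q * abs(q)

# Candidate comparison functions (our choice, not from the text):
g1hat = lambda s: 2.0 * s**2
g2hat = lambda s: 2.0 * s

for _ in range(10000):
    x, dx = rng.uniform(-10, 10), rng.uniform(-10, 10)
    lhs = abs(p(x + dx) - p(x))
    rhs = g1hat(abs(dx)) + g2hat(abs(dx)) * abs(x)
    assert lhs <= rhs + 1e-9  # the assumed inequality holds on this sample
```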
Using Assumption \ref{ass4}, we can state the following corollary that shows the globally exponential stability of the closed-loop system, without the need for $M_{24}<0$.
\begin{corollary}\label{cor1}
Consider a system described by \eqref{dyn1}-\eqref{eq:delQC} where $E_w=F_w={\bf 0}$. Suppose that $p$ satisfies Assumption \ref{ass4}, and that all the conditions of Theorem \ref{thm1} except $M_{24}<0$ hold. If $L_1,L_2,K_1,K_2$ are given by \eqref{L1}, then
the feedback controller \eqref{input1} with the observer \eqref{obser1} renders the closed-loop system \eqref{dyn3-1} globally exponentially stable.
\end{corollary}
Corollary \ref{cor1} can be proved by following the proof of Theorem \ref{thm1} and using Corollary 10.3.3 of \cite{isidori2013nonlinear}. A similar argument can also be found in Theorem 2 of \cite{arcak2001observer} for the certainty-equivalence feedback control implementation, where an inequality similar to \eqref{ineqass4} was assumed.
Another way to eliminate the need for $M_{24}<0$ is to use a simpler form of the controller $u$. Specifically, suppose that the observer-based, feedback control $u$ has the following form
\begin{align}\label{input2}
u(t)&=K_1\hat x(t)
\end{align}
where $K_1\in\mathbb{R}^{n_u\times n_x}$ is a constant matrix to be designed. Then, with the controller \eqref{input2}, sufficient conditions without the condition $M_{24}<0$ can be given to render the closed-loop system ISS w.r.t. $w$, as shown in the following corollary.
\begin{corollary}\label{cor2}
Consider a system described by \eqref{dyn1}-\eqref{eq:delQC}. Suppose that all the conditions of Theorem \ref{thm1} except $M_{24}<0$ hold, with $R_4$ chosen as $R_4={\bf 0}$.
If $L_1,L_2,K_1$ are given by \eqref{L1}, then the feedback controller \eqref{input2} with the observer \eqref{obser1} renders the closed-loop system \eqref{dyn3-1} ISS w.r.t. $w$.
\end{corollary}
The proof of Corollary \ref{cor2} directly follows from the proof of Theorem \ref{thm1}.
Note that although $M_{24}<0$ is no longer needed in Corollary \ref{cor2}, the LMI \eqref{LMI2} in the sufficient conditions is less likely to be satisfied when $R_4$ is fixed as ${\bf 0}$.
We also point out that the condition \eqref{LMI5} in Theorem \ref{thm2} is used to establish the boundedness of $\|\Delta k\|/\|e\|$. Corollaries similar to
Corollary \ref{cor1} and Corollary \ref{cor2} can be given without requiring \eqref{LMI5} for the block anti-triangular parameterization.
\section{Event-triggered Control Design}\label{sec:trigger}
In this section, we discuss adapting ETMs within the observer-based controller for the two configurations shown in Figure \ref{figConfig1} (b) and (c). We consider the system described by \eqref{dyn1}-\eqref{eq:delQC}, where the nonlinearity $p$ is assumed to be globally Lipschitz, and
provide conditions under which the plant with the designed event-triggered controller is globally ISpS w.r.t. the disturbance $w$.
We point out that it is not hard to extend our results to incrementally quadratic nonlinearities that are Lipschitz on compact sets, such that the closed-loop system is ISpS w.r.t. the disturbance $w$ in a semi-global sense (also refer to the discussions in \cite{tabuada2007event,borgers2014event}). Furthermore, for certain incrementally quadratic nonlinearities that imply global Lipschitz continuity (such as the sector bounded nonlinearity and nonlinearities with Jacobians in polytopes \cite{accikmecse2011observers}), using their corresponding incremental matrix characterizations, instead of the matrix characterization of global Lipschitz continuity, makes the associated LMIs in the design procedure less conservative, while retaining the Lipschitz property needed for the upcoming ETM-related results to hold.
\subsection{Configuration I: The Controller Is Implemented By ETM}
In this subsection, we discuss the configuration shown in Figure \ref{figConfig1} (b), where dynamics of the plant are described by \eqref{dyn1}-\eqref{eq:delQC}, the observer is given in \eqref{obser1}, the continuous-time feedback controller is given in \eqref{input1}, and the ETM only has access to $\hat x$, the state of the observer. Throughout this subsection and the next, we assume that $\|w\|_\infty\leq \omega_0$, where $\omega_0$ is an arbitrary positive number.
The feedback controller $u(t)$ is implemented by an ETM such that it is only updated at certain time instants $t_1,t_2,...$, where $t_k<t_{k+1}$ for any $k\geq 0$. Define $t_0=0$ and the piecewise constant signal $\hat x_s$ as
\begin{align}
\hat x_s(t)=\hat x(t_k),\;\forall t\in[t_k,t_{k+1}).\label{hatxs}
\end{align}
Then the input $u(t)$ is given by
\begin{align}\label{inputform2}
u(t)=K_1\hat x_s(t)+K_2 p(C_q\hat x_s(t))
\end{align}
where $K_1,K_2$ are matrices to be designed.
That is, the input $u(t)$ has the same form as in \eqref{inputform}, but it is only updated at the triggering instants $t=t_k$.
The triggering times $t_1,t_2,\dots$ are determined by the following type of triggering rule:
\begin{align}
&t_{k+1}=\inf\{t\mid t\geq t_k+\tau,\;\|\hat x_e(t)\|> \sigma \|\hat x(t)\|+\epsilon\}\label{triggercon1}
\end{align}
where $\hat x_e$ is defined as $\hat x_e(t)=\hat x_s(t)-\hat x(t)$ and $\tau,\sigma,\epsilon$ are all positive numbers to be specified. The time-updating rule
\eqref{triggercon1} guarantees that the inter-execution times $\{t_{k+1}-t_{k}\}$ are lower bounded by the built-in positive constant $\tau$, which means that the Zeno phenomenon (i.e., infinitely many executions in a finite amount of time) will not occur \cite{tabuada2007event}.
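A minimal sketch of how the triggering rule \eqref{triggercon1} can be implemented on a sampled observer trajectory; the trajectory, step size, and parameter values below are illustrative placeholders, not from the example:

```python
import numpy as np

def etm_updates(xhat_traj, dt, tau, sigma, eps):
    """Return indices k at which the sampled state xhat_s is refreshed,
    following the dwell-time + mixed-threshold triggering rule."""
    triggers = [0]                       # t_0 = 0: the first sample is sent
    xhat_s = xhat_traj[0]
    for i in range(1, len(xhat_traj)):
        t, t_last = i * dt, triggers[-1] * dt
        xhat_e = xhat_s - xhat_traj[i]   # sampling-induced error
        if t >= t_last + tau and \
           np.linalg.norm(xhat_e) > sigma * np.linalg.norm(xhat_traj[i]) + eps:
            triggers.append(i)
            xhat_s = xhat_traj[i]
    return triggers

# A decaying trajectory triggers only finitely often and never faster
# than the dwell time tau:
traj = [np.array([np.exp(-0.1 * k), 0.0]) for k in range(1000)]
ks = etm_updates(traj, dt=0.01, tau=0.02, sigma=0.05, eps=0.01)
```

Note that the dwell time $\tau$ enforces a minimum of two sampling steps between consecutive updates here, which is the code-level counterpart of the Zeno exclusion argument.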
The closed-loop system that combines the system \eqref{dyn1}-\eqref{eq:delQC} and the event-triggered controller \eqref{inputform2} with the observer \eqref{obser1} is expressed compactly as
\begin{align}\label{closedtrigger}
\dot z=A_cz+H_1p+H_2\delta p+H_3\delta \hat p+H_4w+H_5\hat x_e
\end{align}
where $\delta p,\delta q$ are given in \eqref{delp}, $A_c$ is given in \eqref{eqA}, $H_1,H_2,H_3,H_4$ are given in \eqref{H1}, and
\begin{align}
\delta \hat p&=p(C_q\hat x_s)-p(C_qx),\label{deltahatp}\\
H_5&=
\begin{pmatrix}
BK_1\\
{\bf 0}
\end{pmatrix}.\label{H5}
\end{align}
The following theorem is the first result of this subsection, which shows conditions for
the closed-loop system in Figure \ref{figConfig1} (b) to be ISpS w.r.t. $w$, for any initial condition $x(0),\hat x(0)$. The proof of the theorem is given in the appendix.
\begin{theorem}\label{thm3}
Consider the configuration shown in Figure \ref{figConfig1} (b) where the plant is described by \eqref{dyn1}-\eqref{eq:delQC} and $\|w\|_\infty\leq \omega_0$ with $\omega_0$ an arbitrary positive number. Suppose that $p$ is globally Lipschitz continuous and
there exist positive numbers $\alpha_0>0,\mu>0$, and matrices $P>0,K_1,K_2,L_1,L_2$ such that the closed-loop system \eqref{eqthm3-1} with the controller \eqref{input1} and the observer \eqref{obser1} satisfies $\dot V\leq -\alpha_0 V+ \mu\|w\|^2$ where $V=z^\top Pz$.
Then,
$\tau,\sigma,\epsilon$ in \eqref{triggercon1} can be chosen such that
the closed-loop system \eqref{closedtrigger} is ISpS w.r.t. $w$.
\end{theorem}
When the output $y$ has no measurement noise, i.e., $F_w={\bf 0}$ in \eqref{dyn1},
the following theorem relaxes the conditions of Theorem \ref{thm3} for the closed-loop system \eqref{closedtrigger} to be ISpS w.r.t. $w$: it only requires that \eqref{eqthm3-1} satisfy $\dot V\leq -\alpha_0 V$ when $w\equiv 0$, instead of $\dot V\leq -\alpha_0 V+ \mu\|w\|^2$. The proof of the theorem is given in the appendix.
\begin{theorem}\label{corthm3}
Consider the configuration shown in Figure \ref{figConfig1} (b) where the plant is described by \eqref{dyn1}-\eqref{eq:delQC} with $F_w={\bf 0}$ and $\|w\|_\infty\leq \omega_0$ with $\omega_0$ an arbitrary positive number. Suppose that $p$ is globally Lipschitz continuous and there exist constants $\alpha_0>0,\mu>0$, and matrices $P>0,K_1,K_2,L_1,L_2$ such that the closed-loop system \eqref{eqthm3-1} with the controller \eqref{input1} and the observer \eqref{obser1} satisfies $\dot V\leq -\alpha_0 V$ when $w\equiv 0$ where $V=z^\top Pz$.
Then, $\tau,\sigma,\epsilon$ in \eqref{triggercon1} can be chosen such that
the closed-loop system \eqref{closedtrigger} is ISpS w.r.t. $w$.
\end{theorem}
In the proofs of Theorem \ref{thm3} and Theorem \ref{corthm3}, the equations for $\tau$ are given explicitly, but any $\tau'\in(0,\tau]$ also makes the proofs valid.
The parameter $\sigma$ can be chosen in an interval, but there is a trade-off in choosing $\sigma$: a larger $\sigma$ makes the triggering condition relatively harder to satisfy, and therefore, increases the inter-execution times, while a smaller $\sigma$ reduces the ultimate bound of the state.
The parameter $\epsilon$ can be chosen arbitrarily, but there is also a trade-off in choosing $\epsilon$: firstly, the value of $d$ in the inequality \eqref{ISSineq} or \eqref{ISSineq2} increases as $\epsilon$ increases, meaning that the ultimate bound for $x$ increases as $\epsilon$ increases; secondly, the explicit equation of $\tau$ depends on $\epsilon$, with $\tau$ decreasing to $0$ when $\epsilon$ approaches $0$; thirdly, smaller $\epsilon$ makes the triggering condition easier to satisfy, and therefore, reduces the inter-execution times.
Hence, parameters $\tau,\sigma,\epsilon$ in the triggering condition should be chosen appropriately to balance the execution times and the performance.
Sufficient conditions to satisfy $\dot V\leq -\alpha_0 V+ \mu\|w\|^2$ in Theorem \ref{thm3} or $\dot V\leq -\alpha_0 V$ in Theorem \ref{corthm3} are given by Theorem \ref{thm1} and Theorem \ref{thm2}.
Therefore, those theorems together
provide a way to design the observer-based controller and the triggering rule simultaneously for the configuration shown in Figure \ref{figConfig1} (b).
We also point out that because $p$ is assumed to be globally Lipschitz in Theorem \ref{thm3} and Theorem \ref{corthm3}, the condition $M_{24}<0$ in Theorem \ref{thm1} or the LMI \eqref{LMI5} in Theorem \ref{thm2} is not needed, as discussed in Subsection \ref{subsecdiscuss}.
The triggering rule given in \eqref{triggercon1} only depends on the local information $\hat x$ and $\hat x_e$, which is available from the designed observer. A triggering rule of the form \eqref{triggercon1} is termed a \emph{``mixed ETM''} in the literature; it is distinguished from the so-called ``relative ETM'' and ``absolute ETM'' by the choice of $\sigma$ and $\epsilon$: in the mixed ETM, $\sigma$ and $\epsilon$ are both strictly positive; in the relative ETM, $\sigma>0,\epsilon=0$; and in the absolute ETM, $\sigma=0,\epsilon>0$ \cite{donkers2012output,borgers2014event}. It has been shown that when external disturbances exist, the Zeno phenomenon may occur if the relative or the absolute ETM is used \cite{abdelrahim2017robust,borgers2014event}.
\subsection{Configuration II: The Controller and Output Are Both Implemented By ETMs}
In this subsection, we discuss the configuration shown in Figure \ref{figConfig1} (c) where the ETM for the output is triggered by the information of $y$ and the ETM for the input is triggered by the information of $\hat x$, in an asynchronous manner.
Consider a system described by \eqref{dyn1}-\eqref{eq:delQC}.
The observer in the configuration of Figure \ref{figConfig1} (c) only has sampled information $y_s(t)$ of the output $y(t)$, where $y_s(t)$ is updated at time instants $t_1^y,t_2^y,...$ by
\begin{align}\label{outputupdate1}
y_s(t)=y(t_k^y),\;\forall t\in[t_k^y,t^y_{k+1}).
\end{align}
Here, $t_0^y=0$ and the triggering times $t_1^y,t_2^y,\dots$ are determined by the following triggering rule:
\begin{align}
&t_{k+1}^y=\inf\{t\mid t\geq t_k^y+\tau_y,\;\|y_e(t)\|> \sigma_y \|y(t)\|+\epsilon_y\}\label{triggeroutput1}
\end{align}
where $y_e(t)=y_s(t)-y(t)$ and $\tau_y,\sigma_y,\epsilon_y$ are all positive numbers to be specified.
With the sampled information $y_s(t)$, the observer now has the following form:
\begin{align}\label{trobser1}
\begin{cases}
\dot{\hat{x}}\!=\!A\hat x\!+\!Bu\!+\!E_p p(\hat q\!+\!L_1(\hat y\!-\!y_s))\!+\!L_2(\hat y\!-\!y_s)\\%\label{trobser1}\\
\hat y\!=\!C\hat x\!+\!Du,\\
\hat q\!=\!C_q\hat x,\\%\label{trobser3}
\end{cases}
\end{align}
where $L_1,L_2$ are matrices to be designed.
The observer-based feedback controller $u(t)$ has the form shown in \eqref{inputform2}, where $\hat x_s(t)$ is updated at time instants $t_1^u,t_2^u,...$ by
\begin{align}
\hat x_s(t)=\hat x(t_k^u),\;\forall t\in[t_k^u,t^u_{k+1}).\label{hatxs2}
\end{align}
Here, $t_0^u=0$ and the triggering times $t_1^u,t_2^u,\dots$ are determined by the following triggering rule:
\begin{align}
&t_{k+1}^u=\inf\{t\mid t\geq t_k^u+\tau_u,\;\|\hat x_e(t)\|> \sigma_u \|\hat x(t)\|+\epsilon_u\}\label{triggerinput1}
\end{align}
where $\hat x_e(t)=\hat x_s(t)-\hat x(t)$ and $\tau_u,\sigma_u,\epsilon_u$ are all positive numbers to be specified. Note that the information of $\hat x$ and $\hat x_e$ is available from the designed observer.
The time-updating rule \eqref{triggeroutput1} (resp. \eqref{triggerinput1}) provides a built-in positive lower bound $\tau_y$ (resp. $\tau_u$) for the inter-execution times $\{t_{k+1}^y-t_{k}^y\}$ (resp. $\{t_{k+1}^u-t_{k}^u\}$), implying that the Zeno phenomenon will not occur. Although there is no guaranteed lower bound on the time between an output-triggering instant $t_{k}^y$ and an input-triggering instant $t_{j}^u$, this causes no problem since the two ETMs are implemented separately.
Since $y_e(t)=y_s(t)-y(t)$, the closed-loop system that combines the system \eqref{dyn1}-\eqref{eq:delQC} and the event-triggered controller \eqref{inputform2} with the observer \eqref{trobser1} is expressed compactly as
\begin{align}\label{closedtrigger3}
\dot z\!=\!A_cz\!+\!H_1p\!+\!H_2\delta \tilde p\!+\!H_3\delta\hat p\!+\!H_4w\!+\!H_5\hat x_e\!+\!H_6y_e
\end{align}
where $A_c$, $H_1$, $H_2$, $H_3$, $H_4$, $H_5$ are given in \eqref{eqA}, \eqref{H1}, \eqref{H5}, respectively, $\delta\hat p$ is given in \eqref{deltahatp}, $\delta \tilde p=p(q+\delta \tilde q)-p(q)$, $\delta \tilde q=-(C_q+L_1C)e-L_1F_ww-L_1y_e$, and $H_6=
\begin{pmatrix}
{\bf 0}\\
L_2
\end{pmatrix}$.
The following theorem provides conditions
for the closed-loop system in Figure \ref{figConfig1} (c) to be ISpS w.r.t. $w$ for any initial condition $x(0),\hat x(0)$. The proof of the theorem is given in the appendix.
\begin{theorem}\label{thm4}
Consider the configuration shown in Figure \ref{figConfig1} (c) where the plant is described by \eqref{dyn1}-\eqref{eq:delQC} with $D={\bf 0}$ and $\|w\|_\infty\leq \omega_0$ with $\omega_0$ an arbitrary positive number. Suppose that $p$ is globally Lipschitz continuous and there exist constants $\alpha_0>0,\mu>0$, and matrices $P>0,K_1,K_2,L_1,L_2$ such that the closed-loop system \eqref{eqthm3-1} with the controller \eqref{input1} and the observer \eqref{obser1} satisfies $\dot V\leq -\alpha_0 V+ \mu\|w\|^2$ where $V=z^\top Pz$.
Then, $\tau_y,\sigma_y,\epsilon_y$ in \eqref{triggeroutput1} and $\tau_u,\sigma_u,\epsilon_u$ in \eqref{triggerinput1} can be chosen such that
the closed-loop system \eqref{closedtrigger3} is ISpS w.r.t. $w$.
\end{theorem}
The equations for $\tau_u,\tau_y$ are given explicitly in the proof of Theorem \ref{thm4}, while $\sigma_u,\sigma_y$ can be chosen in two given intervals, respectively, and $\epsilon_u,\epsilon_y$ can be chosen arbitrarily. Similar to the discussion in the last subsection, there are also several trade-offs among those parameters (e.g., smaller $\epsilon_u,\epsilon_y$ reduce the ultimate bounds but also decrease the inter-execution times).
The following theorem provides less restrictive conditions than Theorem \ref{thm4} for the closed-loop system \eqref{closedtrigger3} to be ISpS w.r.t. $w$ when $F_w={\bf 0}$ (i.e., the output has no measurement noise). The sketch of the proof is given in the appendix.
\begin{theorem}\label{corthm4}
Consider the configuration shown in Figure \ref{figConfig1} (c) where the plant is described by \eqref{dyn1}-\eqref{eq:delQC} with $D={\bf 0},F_w={\bf 0}$, and $\|w\|_\infty\leq \omega_0$ with $\omega_0$ an arbitrary positive number. Suppose that $p$ is globally Lipschitz continuous and there exist constants $\alpha_0>0,\mu>0$, and matrices $P>0,K_1,K_2,L_1,L_2$ such that the closed-loop system with the controller \eqref{input1} and the observer \eqref{obser1} satisfies $\dot V\leq -\alpha_0 V$ when $w\equiv 0$.
Then, $\tau_y,\sigma_y,\epsilon_y$ in \eqref{triggeroutput1} and $\tau_u,\sigma_u,\epsilon_u$ in \eqref{triggerinput1} can be chosen such that
the closed-loop system \eqref{closedtrigger3} is ISpS w.r.t. $w$.
\end{theorem}
Theorem \ref{thm1} (or Theorem \ref{thm2}) together with Theorem \ref{thm4} (or Theorem \ref{corthm4})
provide us a way to design the observer-based controller and the triggering rules simultaneously for the configuration shown in Figure \ref{figConfig1} (c). Discussions at the end of the preceding subsection also apply for this configuration.
\section{Simulation Example}\label{sec:example}
In this section, we use a single-link robot arm example and the configuration of Figure \ref{figConfig1} (c) to illustrate the theoretical results developed above.
Dynamics of the single-link robot arm are expressed as \cite{abdelrahim2017robust}:
\begin{align*}
\dot x_1&=x_2,\\
\dot x_2&=-\sin(x_1)+u+w,\\
y&=x_1,
\end{align*}
where $x=(x_1,x_2)^\top$ is the state representing the angle and the rotational velocity, $u$ is the input representing the torque, and $w$ is the external disturbance. The system can be written in the form of \eqref{dyn1} with $A=
\begin{pmatrix}
0&1\\
0&0
\end{pmatrix}$, $B=
\begin{pmatrix}
0\\
1
\end{pmatrix}$, $C=(1,0)$, $D=0$, $E_p=
\begin{pmatrix}
0\\
-1
\end{pmatrix}$, $E_w=
\begin{pmatrix}
0\\
1
\end{pmatrix}$, $F_w=0$, $C_q=(1,0)$ and $p(q)=\sin(q)$. The nonlinearity $p$ satisfies the $\delta$-QC \eqref{eq:delQC} with $M=
\begin{pmatrix}
1&0\\
0&-1
\end{pmatrix}$. Recalling Remark \ref{remark2}, $p$ satisfies Assumptions \ref{ass1} and \ref{ass2} with $T_1=T_2=
\begin{pmatrix}
\gamma I&{\bf 0}\\
{\bf 0}&I
\end{pmatrix}$ and $X_1=Y_1=X_2=Y_2=I$.
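As a quick numerical sanity check (our addition), the $\delta$-QC for $p(q)=\sin(q)$ with this $M$ reduces to $(\delta q)^2-(\delta p)^2\geq 0$, i.e., incremental $1$-Lipschitz continuity of $\sin$:

```python
import numpy as np

rng = np.random.default_rng(2)
M = np.array([[1.0, 0.0], [0.0, -1.0]])

# delta-QC: (dq, dp)^T M (dq, dp) = dq^2 - dp^2 >= 0 for p(q) = sin(q).
for _ in range(10000):
    q1, q2 = rng.uniform(-10, 10, size=2)
    dq, dp = q1 - q2, np.sin(q1) - np.sin(q2)
    v = np.array([dq, dp])
    assert v @ M @ v >= -1e-12
```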
Additionally, the corresponding $M_{24}=-1<0$. By letting $\alpha_1=\alpha_2=1$, $\mu_1=\mu_2=0.1$, the LMIs \eqref{LMI1}-\eqref{LMI2} are feasible, from which we can obtain the matrix gains $L_1=-1$, $L_2=
\begin{pmatrix}
-5.1294\\
-18.0352
\end{pmatrix}$, $K_1=(-7.3936,-3.9937)$, $K_2=1$. The observer is given in \eqref{trobser1} with $L_1,L_2$ above, and the controller is given in \eqref{inputform2} with $K_1,K_2$ above. We then let $\alpha_0=0.25$, $\omega_0=0.02$ and recompute $P$ via \eqref{recomLMI} with the objective of minimizing the condition number of $P$.
With $\varrho=0.8$, $a_1=a_2=0.5$, $\epsilon_u=\epsilon_y=0.005$, we can calculate that $\sigma_y=0.0017$, $\sigma_u=0.0023$, and $\tau_y\geq 1.07\times 10^{-4}$ s, $\tau_u\geq 7.68\times 10^{-5}$ s. In the simulations, we suppose that the disturbance $w$ is uniformly sampled from $[-\omega_0,\omega_0]$, and the initial conditions of the plant and the observer are $(0.1, -0.15)$ and $(-0.1, 0.05)$, respectively. The simulation results are shown in Figure \ref{figstate} through Figure \ref{figinput}.
Figure \ref{figstate} and Figure \ref{figerror} show trajectories of the state $x$ and the estimation error $e$, respectively. Both $x$ and $e$ eventually enter a small neighborhood of the origin as expected.
Figure \ref{figtimehaty} shows the inter-execution times $\{t_{k+1}^y-t_{k}^y\}$ in the output ETM \eqref{triggeroutput1}, and Figure \ref{figtimehatx} shows the inter-execution times $\{t_{k+1}^u-t_{k}^u\}$ in the controller ETM \eqref{triggerinput1}. Figure \ref{figinput} shows the trajectory of the piecewise constant input $u(t)$ that is fed into the plant. It is readily seen that the control input $u(t)$ updates its values at each sampling time $t=t_k^u$, which is determined by the triggering rule \eqref{triggerinput1}.
Let $\tau^{min}_{[T_1,T_2]}$ and $\tau^{avg}_{[T_1,T_2]}$ denote the minimal and average inter-execution times during the time interval $[T_1,T_2]$, respectively. The values of $\tau^{min}_{[0,20]}$, $\tau^{avg}_{[0,20]}$, $\tau^{min}_{[3,20]}$, $\tau^{avg}_{[3,20]}$ for the output ETM and the controller ETM are summarized in Table \ref{tab1}. We notice that after 3 seconds, the controller input is updated about every 0.36 seconds on average, and the plant output is updated about every 1.09 seconds on average, which shows the effectiveness of our control design.
{\renewcommand{\arraystretch}{1.5}%
\begin{table}[!hbt]\caption{Minimal and average inter-execution times for the output ETM and the controller ETM}\label{tab1}
\centering
\begin{tabular}{c|c|c|c|c}
\hline
& $\tau^{min}_{[0,20]}$ & $\tau^{avg}_{[0,20]}$ & $\tau^{min}_{[3,20]}$ & $\tau^{avg}_{[3,20]}$ \\
\hline
output ETM & 0.0106 s & 0.1945 s& 0.2104 s& 1.0977 s\\
\hline
controller ETM & 0.0013 s & 0.0663 s & 0.0903 s& 0.3665 s\\
\hline
\end{tabular}
\end{table}
}
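The closed-loop behavior reported above can be approximated with a simple Euler integration; the system matrices and gains are those of the example, while the step size, the disturbance realization, and the one-step dwell times are our simplifications:

```python
import numpy as np

rng = np.random.default_rng(3)
A  = np.array([[0.0, 1.0], [0.0, 0.0]])
B  = np.array([0.0, 1.0])
Ep = np.array([0.0, -1.0])
Ew = np.array([0.0, 1.0])
C  = np.array([1.0, 0.0])            # y = x1 (no measurement noise)
Cq = np.array([1.0, 0.0])
L1, L2 = -1.0, np.array([-5.1294, -18.0352])
K1, K2 = np.array([-7.3936, -3.9937]), 1.0
w0 = 0.02

dt, T = 1e-3, 20.0
sig_y, sig_u, eps_y, eps_u = 0.0017, 0.0023, 0.005, 0.005
tau_y = tau_u = dt                   # dwell times collapsed to one step here

x    = np.array([0.1, -0.15])
xh   = np.array([-0.1, 0.05])
y_s  = C @ x                         # sampled output, refreshed by output ETM
xh_s = xh.copy()                     # sampled observer state, input ETM
t_y = t_u = 0.0

for k in range(int(T / dt)):
    t = k * dt
    y = C @ x
    # Output ETM and input ETM, each with a one-step dwell time:
    if t >= t_y + tau_y and abs(y_s - y) > sig_y * abs(y) + eps_y:
        y_s, t_y = y, t
    if t >= t_u + tau_u and \
       np.linalg.norm(xh_s - xh) > sig_u * np.linalg.norm(xh) + eps_u:
        xh_s, t_u = xh.copy(), t

    u = K1 @ xh_s + K2 * np.sin(Cq @ xh_s)   # event-triggered controller
    w = rng.uniform(-w0, w0)                 # disturbance realization
    yh = C @ xh
    # Euler steps of the plant and of the observer:
    x  = x  + dt * (A @ x + B * u + Ep * np.sin(Cq @ x) + Ew * w)
    xh = xh + dt * (A @ xh + B * u
                    + Ep * np.sin(Cq @ xh + L1 * (yh - y_s))
                    + L2 * (yh - y_s))
```

Running this sketch, both the plant state and the estimation error settle into a small neighborhood of the origin, consistent with Figures \ref{figstate} and \ref{figerror}.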
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.8\columnwidth]{figstates.pdf}
\caption{Trajectory of the plant state $x$.
}\label{figstate}
\end{figure}
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.8\columnwidth]{figerror.pdf}
\caption{Trajectory of the estimation error $e=x-\hat x$.
}\label{figerror}
\end{figure}
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.8\columnwidth]{figtimey.pdf}
\caption{Inter-execution times $\{t_{k+1}^y-t_{k}^y\}$ in the output ETM \eqref{triggeroutput1}. }\label{figtimehaty}
\end{figure}
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.8\columnwidth]{figtimehatx.pdf}
\caption{Inter-execution times $\{t_{k+1}^u-t_{k}^u\}$ in the controller ETM \eqref{triggerinput1}.}\label{figtimehatx}
\end{figure}
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.8\columnwidth]{figinput.pdf}
\caption{Trajectory of the input $u(t)$.}\label{figinput}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
We studied the observer-based, robust stabilizing controller design for a class of nonlinear control systems that are affected by disturbances and have nonlinearities satisfying incremental quadratic constraints. We proposed LMI-based sufficient conditions for the simultaneous design of the observer and the controller in the continuous-time domain for two parameterizations of the $\delta$-MM. Building on these results, we investigated ETMs within the observer-based controller setting for two configurations. The conditions for the event-triggered controller design are constructive: the triggering rules are given explicitly and exclude Zeno behavior.
The simulation example showed the effectiveness of the proposed controller and triggering rule designs. In future work, we aim to optimize the parameters in the triggering conditions in order to improve the bounds on the inter-execution times. Furthermore, we will investigate periodic event-triggered control and self-triggered control for incrementally quadratic nonlinear control systems.
\section*{APPENDIX}\label{appex}
{\bf \emph{Proof of Proposition \ref{prp1}}.}
Since $p$ satisfies \eqref{eq:delQC}, the $\delta$-MM $M$ satisfies $\begin{pmatrix}
q\\
p
\end{pmatrix}^\top M
\begin{pmatrix}
q\\
p
\end{pmatrix}\geq 0,
\begin{pmatrix}
\delta q\\
\delta p
\end{pmatrix}^\top M
\begin{pmatrix}
\delta q\\
\delta p
\end{pmatrix}\geq 0,
\begin{pmatrix}
\Delta q\\
\Delta p
\end{pmatrix}^\top M
\begin{pmatrix}
\Delta q\\
\Delta p
\end{pmatrix}\geq 0$
where $\delta p,\delta q$ are given in \eqref{delp}, $\Delta p$ is given in \eqref{Deltap}, and
\begin{align}\label{Deltaq}
\Delta q=C_q\hat x-C_qx=-C_qe.
\end{align}
Define $\xi=(x^\top,e^\top,p^\top,\delta p^\top,\Delta p^\top,w^\top)^\top$. Then, it is clear that $
\begin{pmatrix}
q\\
p
\end{pmatrix}=S_2\xi$, $
\begin{pmatrix}
\delta q\\
\delta p
\end{pmatrix}=S_3\xi$, $
\begin{pmatrix}
\Delta q\\
\Delta p
\end{pmatrix}=S_4\xi.$
Pre- and post-multiplying \eqref{recomLMI} with $\xi^\top$ and $\xi$, respectively, and noting that
\begin{align*}
\xi^\top S_2^\top MS_2 \xi\geq 0,\;\xi^\top S_3^\top MS_3 \xi\geq 0 ,\;\xi^\top S_4^\top MS_4 \xi\geq 0,
\end{align*}
and $\sigma_1\geq 0,\sigma_2\geq 0,\sigma_3\geq 0$, we have
\begin{align}
\xi^\top
\begin{pmatrix}
S_0&S_1\\
S_1^\top&{\bf 0}
\end{pmatrix}\xi-\mu \xi^\top S_5^\top S_5\xi\leq 0.\label{ineqpro1}
\end{align}
Define a positive definite function $V(z)=z^\top Pz$. Then, it is easy to check that $\dot V+\alpha_0 V-\mu w^\top w$ equals the left-hand side of \eqref{ineqpro1}, where $\dot V$ is the derivative of $V$ along the trajectories of \eqref{eqthm3-1}. Therefore, $V$ is an ISS-Lyapunov function since $\dot V\leq -\alpha_0 V+\mu \| w\|^2$. The conclusion follows from Proposition \ref{proISS}.
\hfill$\Box$
{\bf \emph{Proof of Theorem \ref{thm1}}.} The proof proceeds in five steps.
1) Firstly, we derive the dynamics of the system under transformations of variables $q$ and $p$ via $T_1$ and $T_2$.
Since $M_1$ (resp. $M_2$) satisfies Assumption \ref{ass1} (resp. Assumption \ref{ass2}) with an invertible matrix $T_1$ (resp. $T_2$),
we introduce variable transformations from $(q, p)$ to $(\tilde q_i,\tilde p_i)$ as follows:
\begin{align}\label{tilq}
\begin{pmatrix}
\tilde q_i\\
\tilde p_i
\end{pmatrix}=T_i
\begin{pmatrix}
q\\
p
\end{pmatrix}.
\end{align}
Since
$
\tilde{p}_i=T_{i3}q+T_{i4}p
$
and $T_{i4}$ is invertible, we have $p=T_{i4}^{-1}\tilde{p}_i-T_{i4}^{-1}T_{i3}q$
and
\begin{align}
\tilde{q}_i&=\Gamma_{i1}q+\Gamma_{i2}\tilde p_i,\label{tran2}
\end{align}
for $i=1,2$, where $\Gamma_{i1},\Gamma_{i2}$ are given in \eqref{gamma1},\eqref{gamma2}.
Note that $\Gamma_{i1}$ is invertible since $T_i$ is invertible.
Substituting $p=T_{24}^{-1}\tilde{p}_2-T_{24}^{-1}T_{23}q$ into \eqref{dyn3-1}, we have
\begin{align}
\dot x&=(\tilde A_2+B\tilde{K}_1)x+(\tilde E_2+B\tilde{K}_2)\tilde p_2-B\Delta k+E_ww,\label{dyn3-2}
\end{align}
where $\tilde{A}_2$ is given in \eqref{tilA}, $\tilde E_2$ is given in \eqref{tilE},
\begin{align}
\tilde{K}_1 &= K_1 -K_2T_{24}^{-1}T_{23}C_q,\qquad \tilde{K}_2 = K_2T_{24}^{-1},\label{tildeK}
\end{align}
and
\begin{equation}
\Delta k=\tilde{K}_1e-\tilde{K}_2\Delta \tilde p,\label{Delk}
\end{equation}
where
\begin{align}
\Delta \tilde{p}&=\tilde{p}_2 (C_q\hat{x})-\tilde{p}_2(C_qx).\label{Delp}
\end{align}
Substituting $p=T_{14}^{-1}\tilde{p}_1-T_{14}^{-1}T_{13}q$ into \eqref{delp}, we have
\begin{align}
\delta p&=T_{14}^{-1}(\tilde p_1(q+\delta q)-\tilde p_1(q))-T_{14}^{-1}T_{13}\delta q\nonumber\\
&=T_{14}^{-1}\delta \tilde p_1+T_{14}^{-1}T_{13}[(C_q+L_1C)e+L_1F_ww]\label{delp2}
\end{align}
where
\begin{align}
\delta \tilde p_1&=\tilde p_1(q+\delta q)-\tilde p_1(q).\label{deltilp1}
\end{align}
Substituting \eqref{delp2} into \eqref{dyn3-1}, we have
\begin{align}
\dot e&=(\tilde A_1+\tilde L_2C)e-\tilde E_1\delta\tilde p_1+(E_w+\tilde L_2F_w)w,\label{dyn4-2}
\end{align}
where $\tilde{A}_1$ is given in \eqref{tilA}, $\tilde E_1$ is given in \eqref{tilE}, and $\tilde L_2$ is defined as
\begin{align}
\tilde L_2&=L_2-ET_{14}^{-1}T_{13}L_1.\label{tilL2}
\end{align}
Equations \eqref{dyn3-2} and \eqref{dyn4-2} are the dynamics of the closed-loop system after transformations of variables via $T_1$ and $T_2$, respectively.
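The variable-transformation algebra in this step can be spot-checked numerically with random data; the Schur-complement-type expressions for $\Gamma_{1}$ and $\Gamma_{2}$ below are our reconstruction of the quantities in \eqref{gamma1}-\eqref{gamma2}, whose definitions in the paper govern:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
# Random invertible T partitioned into n-by-n blocks T1..T4, T4 invertible.
while True:
    T = rng.normal(size=(2 * n, 2 * n))
    T1, T2 = T[:n, :n], T[:n, n:]
    T3, T4 = T[n:, :n], T[n:, n:]
    if abs(np.linalg.det(T)) > 1e-3 and abs(np.linalg.det(T4)) > 1e-3:
        break

q, p = rng.normal(size=n), rng.normal(size=n)
qt = T1 @ q + T2 @ p                 # tilde q
pt = T3 @ q + T4 @ p                 # tilde p

# Inverting the second block row gives p, and substituting back gives
# qt = Gamma1 q + Gamma2 pt (our reconstruction of the Gamma expressions):
T4inv = np.linalg.inv(T4)
Gamma1 = T1 - T2 @ T4inv @ T3
Gamma2 = T2 @ T4inv

assert np.allclose(p, T4inv @ pt - T4inv @ T3 @ q)
assert np.allclose(qt, Gamma1 @ q + Gamma2 @ pt)
```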
2) Second, we establish the observer design based on \eqref{LMI1}.
From \eqref{L1} we have $R_1=P_1\tilde L_2$, where $\tilde L_2$ is given in \eqref{tilL2}, and $R_2=X_1\Gamma_{11}L_1$. Plugging $R_1$ into $\Phi$ in \eqref{Phi}, we have $\Phi_0=P_1(\tilde A_1+\tilde L_2C)+(\tilde A_1+\tilde L_2C)^\top P_1+\alpha_1 P_1$, and the $(1,3)$ entry of $\Phi$ becomes $P_1 E_w+R_1F_w=P_1(E_w+\tilde L_2F_w)$. Plugging $R_2$ into $\phi$ in \eqref{phi}, we have $\phi=X_1\phi_0$ where $\phi_0:=(-\Gamma_{11} (C_q+L_1 C) , \Gamma_{12}, -\Gamma_{11}L_1F_w).$
Recalling $\varphi$ given in \eqref{varphi} and applying Schur's complement to \eqref{LMI1}, we have
\begin{align}
&\Phi+
\begin{pmatrix}
\phi_0 \\
\varphi
\end{pmatrix}^\top\tilde M_1
\begin{pmatrix}
\phi_0 \\
\varphi
\end{pmatrix}\leq 0.\label{ineqV1}
\end{align}
Define $\xi_1=(e^\top,\delta\tilde p_1^\top,w^\top)^\top$. Pre- and post-multiplying the inequality \eqref{ineqV1} by $\xi_1^\top$ and $\xi_1$, respectively, we have
\begin{align}
\xi_1^\top \Phi\xi_1+\xi_1^\top
\begin{pmatrix}
\phi_0 \\
\varphi
\end{pmatrix}^\top\tilde M_1
\begin{pmatrix}
\phi_0 \\
\varphi
\end{pmatrix}\xi_1\leq 0.\label{ineqV11}
\end{align}
From \eqref{tran2}, it is clear that $\delta \tilde q_1=\Gamma_{11}\delta q+\Gamma_{12}\delta\tilde p_1=-\Gamma_{11}(C_q+L_1C)e+\Gamma_{12}\delta\tilde p_1-\Gamma_{11}L_1F_ww.$
Since $\begin{pmatrix}
\delta\tilde q_1\\
\delta\tilde p_1
\end{pmatrix}^\top \tilde M_1
\begin{pmatrix}
\delta\tilde q_1\\
\delta\tilde p_1
\end{pmatrix}\geq 0$,
we have $\xi_1^\top
\begin{pmatrix}
\phi_0 \\
\varphi
\end{pmatrix}^\top \tilde M_1
\begin{pmatrix}
\phi_0 \\
\varphi
\end{pmatrix}\xi_1\geq 0$.
Thus, $\xi_1^\top \Phi\xi_1\leq 0$ from \eqref{ineqV11}, which is equivalent to $2e^\top P_1[(\tilde A_1+\tilde L_2C)e-\tilde E_1 \delta\tilde p_1+(E_w+\tilde L_2F_w)w]
+\alpha_1 e^\top P_1e-\mu_1\|w\|^2\leq 0.$
Define $V_1(e)=e^\top P_1e$. Then the derivative of $V_1$ along the trajectory of \eqref{dyn4-2} satisfies $\dot V_1=2e^\top P_1[(\tilde A_1+\tilde L_2C)e-\tilde E_1 \delta\tilde p_1+(E_w+\tilde L_2F_w)w]\leq -\alpha_1 e^\top P_1e+\mu_1\|w\|^2.$
3) We now prove that $\|\Delta k\|/\|e\|$ is bounded where $\Delta k$ is given in \eqref{Delk}.
Since $M_{24}=T_{22}^\top X_2^{-1}T_{22}-T_{24}^\top Y_2^{-1}T_{24}<0$ and $T_{24}$ is invertible, we have
\begin{align}
\Gamma_{22}^\top X_2^{-1} \Gamma_{22}-Y_2^{-1}=T_{24}^{-\top}M_{24}T_{24}^{-1}<0.\label{Mpf1}
\end{align}
Recall that $\Delta q=-C_qe$ in \eqref{Deltaq} and define
$\Delta \tilde q:=\tilde q_2(C_q\hat x)-\tilde q_2(C_qx)$.
Then, $\Delta \tilde q=-\Gamma_{21}C_qe+\Gamma_{22}\Delta\tilde p$ where $\Delta\tilde p$ is given in \eqref{Delp}. Define $\zeta=(e^\top,\Delta\tilde p^\top)^\top$. Therefore,
\begin{align}
&\zeta^\top
\begin{pmatrix}
-\Gamma_{21}C_q & \Gamma_{22} \\
{\bf 0} & I
\end{pmatrix}^\top\tilde M_2
\begin{pmatrix}
-\Gamma_{21}C_q & \Gamma_{22} \\
{\bf 0} & I
\end{pmatrix}\zeta\nonumber\\
=&
\begin{pmatrix}
-\Gamma_{21} C_qe+\Gamma_{22}\Delta\tilde p\\
\Delta\tilde p
\end{pmatrix}^\top \tilde M_2
\begin{pmatrix}
-\Gamma_{21} C_qe+\Gamma_{22}\Delta\tilde p\\
\Delta\tilde p
\end{pmatrix}\nonumber\\
=&
\begin{pmatrix}
\Delta q\\
\Delta p
\end{pmatrix}^\top T_2^\top\tilde M_2 T_2
\begin{pmatrix}
\Delta q\\
\Delta p
\end{pmatrix}\geq 0,\nonumber
\end{align}
where the last equality is from \eqref{eq:Mass2} in Assumption \ref{ass2}. Hence, $e^\top C_q^\top\Gamma_{21}^\top X_2^{-1}\Gamma_{21}C_qe-2e^\top C_q^\top\Gamma_{21}^\top X_2^{-1}\Gamma_{22}\Delta\tilde p+\Delta\tilde p^\top(\Gamma_{22}^\top X_2^{-1} \Gamma_{22}-Y_2^{-1})\Delta\tilde p\geq 0.$
From \eqref{Mpf1}, the inequality above implies that $\kappa_1 \|e\|^2+\kappa_2\|e\|\|\Delta\tilde p\|-\kappa_3\|\Delta\tilde p\|^2\geq 0$
where $\kappa_1=\lambda_{\max}(C_q^\top\Gamma_{21}^\top X_2^{-1}\Gamma_{21}C_q)$, $\kappa_2=2\|C_q^\top\Gamma_{21}^\top X_2^{-1}\Gamma_{22}\|$, $\kappa_3=\lambda_{\min}(Y_2^{-1}-\Gamma_{22}^\top X_2^{-1} \Gamma_{22})$. Clearly, $\kappa_1,\kappa_3>0$, $\kappa_2\geq 0$. Therefore, we have
$\|\Delta\tilde p\|\leq \kappa \|e\|$
where $\kappa:=(\kappa_2+\sqrt{\kappa_2^2+4\kappa_1\kappa_3})/(2\kappa_3)>0$.
Since $\Delta k=\tilde{K}_1e-\tilde{K}_2\Delta \tilde p$ by \eqref{Delk}, we have
\begin{align}\label{delk}
\|\Delta k\| \leq \hat \kappa\|e\|
\end{align}
for all $x,e$, where $\hat{\kappa}=\|\tilde{K}_1\|+\|\tilde{K}_2\|\kappa>0$, which bounds $\|\Delta k\|/\|e\|$.
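The bound $\|\Delta\tilde p\|\leq\kappa\|e\|$ is just the quadratic formula: whenever $\kappa_1\|e\|^2+\kappa_2\|e\|\|\Delta\tilde p\|-\kappa_3\|\Delta\tilde p\|^2\geq 0$ with $\kappa_1,\kappa_3>0$, the ratio $t=\|\Delta\tilde p\|/\|e\|$ cannot exceed the positive root of $\kappa_3t^2-\kappa_2t-\kappa_1=0$. A minimal numerical sketch of this fact, with illustrative values of $\kappa_1,\kappa_2,\kappa_3$ (not taken from any system in the paper):

```python
import math

# Illustrative constants: any kappa1, kappa3 > 0 and kappa2 >= 0 work.
k1, k2, k3 = 2.0, 0.5, 1.5

# kappa is the positive root of k3*t^2 - k2*t - k1 = 0 (the kappa of the proof).
kappa = (k2 + math.sqrt(k2**2 + 4.0 * k1 * k3)) / (2.0 * k3)

# Every ratio t = ||Delta p~|| / ||e|| satisfying the quadratic inequality
# k1 + k2*t - k3*t^2 >= 0 must lie below kappa.
feasible = [t / 1000.0 for t in range(0, 5001)
            if k1 + k2 * (t / 1000.0) - k3 * (t / 1000.0) ** 2 >= 0.0]
assert max(feasible) <= kappa + 1e-9
```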
4) Next, we establish the controller design by using \eqref{LMI2}.
From \eqref{L1} we have $R_3=\tilde K_1P_2$ and $R_4=\tilde K_2Y_2$, where $\tilde K_1,\tilde K_2$ are given in \eqref{tildeK}. Plugging $R_3,R_4$ into \eqref{Psi}, we have $\Psi_0=(\tilde A_2+B\tilde K_1)P_2+P_2(\tilde A_2+B\tilde K_1)^\top+\alpha_2P_2$, and the $(1,2)$ entry of $\Psi$ becomes $(\tilde E_2+B\tilde K_2)Y_2$. Pre- and post-multiplying the inequality \eqref{LMI2} by the matrix $diag(I_n,Y_2^{-1},I_{n_w},I_{n_q})$, and then applying Schur's complement, we have
\begin{align}
&\tilde\Psi+
\begin{pmatrix}
\psi_1 \\
\varphi
\end{pmatrix}^\top\tilde M_2
\begin{pmatrix}
\psi_1 \\
\varphi
\end{pmatrix}\leq 0,\label{eqPsi1}
\end{align}
where
\begin{align*}
\tilde\Psi&=
\begin{pmatrix}
\Psi_0& \tilde E_2+B\tilde K_2 & E_w \\
* & {\bf 0} &{\bf 0} \\
* & *&-\mu_2I
\end{pmatrix},
\end{align*}
and $\psi_1=(\Gamma_{21} C_q P_2 , \Gamma_{22} ,{\bf 0}_{n_q\times n_w})$, $\Psi_0$ is shown above, and $\varphi$ is given in \eqref{varphi}.
Let $P_3=P_2^{-1}$ and pre- and post-multiply the inequality \eqref{eqPsi1} by $diag(P_3,I_{n_p},I_{n_w})$ and its transpose, respectively. This results in
\begin{align}
&\hat\Psi+
\begin{pmatrix}
\psi_0 \\
\varphi
\end{pmatrix}^\top\tilde M_2
\begin{pmatrix}
\psi_0 \\
\varphi
\end{pmatrix}\leq 0,\label{eqPsi2}
\end{align}
where $\psi_0=(\Gamma_{21} C_q, \Gamma_{22} ,{\bf 0}_{n_q\times n_w})$ and
\begin{align}
\hat\Psi&=
\begin{pmatrix}
\hat\Psi_0& P_3(\tilde E_2+B\tilde K_2) & P_3E_w \\
* & {\bf 0} &{\bf 0} \\
* & *&-\mu_2I
\end{pmatrix},\label{hatpsi}\\
\hat\Psi_0&=P_3(\tilde A_2+B\tilde K_1)+(\tilde A_2+B\tilde K_1)^\top P_3+\alpha_2P_3.\label{hatpsi0}
\end{align}
Define $\xi_2=(x^\top,\tilde p_2^\top,w^\top)^\top$. Pre- and post-multiplying the inequality \eqref{eqPsi2} by $\xi_2^\top$ and $\xi_2$, respectively, we have
\begin{align}
\xi_2^\top \hat\Psi\xi_2+\xi_2^\top
\begin{pmatrix}
\psi_0 \\
\varphi
\end{pmatrix}^\top\tilde M_2
\begin{pmatrix}
\psi_0 \\
\varphi
\end{pmatrix}\xi_2\leq 0.\label{ineqV21}
\end{align}
By \eqref{eq:Mass2} and \eqref{tilq}, we have
\begin{equation}\label{eqdiag3}
\begin{pmatrix}
\tilde q_2\\
\tilde p_2
\end{pmatrix}^\top \tilde M_2
\begin{pmatrix}
\tilde q_2\\
\tilde p_2
\end{pmatrix}\geq 0.
\end{equation}
Since $\tilde{q}_2=\Gamma_{21}q+\Gamma_{22}\tilde p_2=\Gamma_{21}C_qx+\Gamma_{22}\tilde p_2$ by \eqref{tran2}, inequality \eqref{eqdiag3} implies that $\xi_2^\top
\begin{pmatrix}
\psi_0 \\
\varphi
\end{pmatrix}^\top\tilde M_2
\begin{pmatrix}
\psi_0 \\
\varphi
\end{pmatrix}\xi_2\geq 0.$
Thus, $\xi_2^\top \hat\Psi\xi_2\leq 0$ from \eqref{ineqV21}, which is equivalent to $2x^\top P_3[(\tilde A_2+B\tilde K_1)x+(\tilde E_2+B\tilde K_2)\tilde p_2+E_ww]+\alpha_2 x^\top P_3x-\mu_2\|w\|^2\leq 0.$
Let $V_2(x)=x^\top P_3x$. Then the derivative of $V_2$ along the trajectory of \eqref{dyn3-2} satisfies $\dot V_2 =2x^\top P_3[(\tilde A_2+B\tilde{K}_1)x+(\tilde E_2+B\tilde{K}_2)\tilde p_2-B\Delta k+E_ww]
\leq -\alpha_2 x^\top P_3x+\mu_2\|w\|^2+2\|P_3B\|\|x\|\|\Delta k\|.$
Recalling \eqref{delk}, we have $\dot{V}_2 \leq -\alpha_2 x^\top P_3x+\mu_2\|w\|^2+\theta\|x\|\|e\|$
where
\begin{align}
\theta=2\|P_3B\|\hat{\kappa}.\label{gam}
\end{align}
5) Finally, we prove that the closed-loop system expressed by \eqref{dyn3-2} and \eqref{dyn4-2} is ISS with respect to $w$.
Choose two constants $c_1,c_2$ as
$c_1=\alpha_1\lambda_m(P_1)/\lambda_M(P_1),\;c_2=\alpha_2\lambda_m(P_3)/\lambda_M(P_3).$
Since $c_1>0,c_2>0$, we can choose two constants $\alpha_0>0,\beta_0>0$ such that
\begin{align*}
\alpha_0&<\min\{c_1,c_2\},\\
\beta_0&\geq \frac{\theta^2}{4\lambda_M(P_1)\lambda_M(P_3)(c_1-\alpha_0)(c_2-\alpha_0)},
\end{align*}
where $\theta$ is given in \eqref{gam}.
Then, it is easy to check that the matrix $P_0:=
\begin{pmatrix}
\tilde P_0&\theta/2\\
\theta/2&\hat P_0
\end{pmatrix}$
is negative semi-definite where $\tilde P_0=-\alpha_2\lambda_m(P_3)+\alpha_0 \lambda_M(P_3)$ and $\hat P_0=\beta_0(-\alpha_1\lambda_m(P_1)+\alpha_0 \lambda_M(P_1))$.
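For a $2\times 2$ symmetric matrix, negative semi-definiteness is equivalent to nonpositive diagonal entries together with a nonnegative determinant, and the choice of $\alpha_0,\beta_0$ above is exactly what makes the determinant of $P_0$ nonnegative. A numerical sketch with illustrative spectral data (the values below are assumptions, not taken from the paper):

```python
# Illustrative spectral data: lm_* = lambda_min, lM_* = lambda_max.
lm_P1, lM_P1 = 1.0, 2.0
lm_P3, lM_P3 = 0.5, 1.0
alpha1, alpha2, theta = 1.0, 1.0, 0.8

c1 = alpha1 * lm_P1 / lM_P1
c2 = alpha2 * lm_P3 / lM_P3
alpha0 = 0.25  # any alpha0 < min{c1, c2}
beta0 = theta**2 / (4 * lM_P1 * lM_P3 * (c1 - alpha0) * (c2 - alpha0))

tP0 = -alpha2 * lm_P3 + alpha0 * lM_P3            # tilde P0
hP0 = beta0 * (-alpha1 * lm_P1 + alpha0 * lM_P1)  # hat P0

# [[tP0, theta/2], [theta/2, hP0]] is negative semi-definite iff both
# diagonal entries are <= 0 and the determinant is >= 0.
det = tP0 * hP0 - (theta / 2.0) ** 2
assert tP0 <= 0 and hP0 <= 0 and det >= -1e-12
```

With $\beta_0$ taken at its lower bound, the determinant is exactly zero, which is the boundary case of negative semi-definiteness.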
Define a matrix $P$ as
\begin{align}\label{LyaFun}
P&=
\begin{pmatrix}
P_3&{\bf 0}\\
{\bf 0}&\beta_0P_1
\end{pmatrix}.
\end{align}
Clearly, $P$ is positive definite. Letting $z=(x^\top,e^\top)^\top$, we can verify that the candidate Lyapunov function $V(x,e):=z^\top Pz$
satisfies $V(x,e)=\beta_0V_1(e)+ V_2(x)$, and its derivative along the trajectory of \eqref{dyn3-2} and \eqref{dyn4-2} satisfies
\begin{align*}
\dot V+\alpha_0 V
\leq& -\alpha_1 \beta_0 e^\top P_1e -\alpha_2 x^\top P_3x+\theta\|x\|\|e\|+\alpha_0 V\\
\leq&(\|x\|,\|e\|) P_0(\|x\|,\|e\|)^\top+(\mu_1\beta_0+\mu_2)\|w\|^2\\
\leq &(\mu_1\beta_0+\mu_2)\|w\|^2.
\end{align*}
Therefore, the closed-loop system \eqref{dyn3-2} and \eqref{dyn4-2}, or equivalently \eqref{dyn3-1}, satisfies \eqref{ISSineq} with $\mathcal{K}_\infty$ functions $\gamma(\|(x,e)\|)=\alpha_0\lambda_{m}(P)\|(x,e)\|^2$ and $\chi(\|w\|)=(\mu_1\beta_0+\mu_2)\|w\|^2$. This completes the proof.\hfill $\Box$
{\bf \emph{Proof of Theorem \ref{thm2}}.}
As shown in \eqref{dyn3-2} and \eqref{dyn4-2}, dynamics of the closed-loop system under transformations can be described as
\begin{align*}
\dot x&=(\tilde A_2+B\tilde{K}_1)x+(\tilde E_2+B\tilde{K}_2)\tilde p_2-B\Delta k+E_ww,\\
\dot e&=(\tilde A_1+\tilde L_2C)e-\tilde E_1\delta\tilde p_1+(E_w+\tilde L_2F_w)w,
\end{align*}
where $\tilde A_1,\tilde A_2$ are given in \eqref{tilA}, $\tilde E_1,\tilde E_2$ are given in \eqref{tilE}, $\tilde{K}_1,\tilde{K}_2$ are given in \eqref{tildeK}, $\Delta k$ is given in \eqref{Delk}, $\delta\tilde p_1$ is given in \eqref{deltilp1}, $\tilde p_2$ is given in \eqref{tilq}, and $\tilde L_2$ is given in \eqref{tilL2}.
From \eqref{L1v2}, we have $R_1=P_1\tilde L_2$ and $R_2=X_1\Gamma_{11}L_1$.
We claim that \eqref{LMI3} is equivalent to
\begin{align}
&\Phi+Q_1^\top\tilde M_1Q_1\leq 0\label{ineqV3}
\end{align}
and \eqref{LMI4} is equivalent to
\begin{align}
&\Psi+Q_2^\top\tilde M_2Q_2\leq 0\label{ineqV5}
\end{align}
where
\begin{align*}
Q_1&=
\begin{pmatrix}
-\Gamma_{11} (C_q+L_1C)& \Gamma_{12}& -\Gamma_{11}L_1F_w\\
{\bf 0}_{n_p\times n_x}& I_{n_p}& {\bf 0}_{n_p\times n_w}
\end{pmatrix},\\
Q_2&=
\begin{pmatrix}
\Gamma_{21} C_qP_2& \Gamma_{22} &{\bf 0}_{n_q\times n_w} \\
{\bf 0}_{n_p\times n_x}& I_{n_p}& {\bf 0}_{n_p\times n_w}
\end{pmatrix}.
\end{align*}
Indeed, $Q_1$ can be written as $Q_1=\Upsilon_1+\hat \Upsilon_2$ where $\Upsilon_1$ is given in \eqref{upsilon1} and
\begin{align*}
\hat\Upsilon_2&=
\begin{pmatrix}
-\Gamma_{11}L_1 C& {\bf 0}_{n_q\times n_p}& -\Gamma_{11}L_1F_w \\
{\bf 0}_{n_p\times n_x}& {\bf 0}_{n_p\times n_p}& {\bf 0}_{n_p\times n_w}
\end{pmatrix}.
\end{align*}
It is easy to verify that $\Upsilon_2=\tilde M_1 \hat\Upsilon_2$ and $\hat\Upsilon_2^\top \tilde M_1\hat\Upsilon_2={\bf 0}$.
Therefore,
\begin{align*}
Q_1^\top\tilde M_1Q_1&=(\Upsilon_1+\hat\Upsilon_2)^\top\tilde M_1(\Upsilon_1+\hat\Upsilon_2)\\
&=\Upsilon_1^\top \tilde M_1\Upsilon_1+\hat\Upsilon_2^\top \tilde M_1\Upsilon_1+\Upsilon_1^\top \tilde M_1\hat\Upsilon_2+\hat\Upsilon_2^\top \tilde M_1\hat\Upsilon_2\\
&=\Upsilon_1^\top \tilde M_1\Upsilon_1+\Upsilon_2^\top \Upsilon_1+\Upsilon_1^\top \Upsilon_2.
\end{align*}
Similarly, $Q_2$ can be written as $Q_2=\Upsilon_3+\hat \Upsilon_4$ where $\Upsilon_3$ is given in \eqref{upsilon1} and
\begin{align*}
\hat\Upsilon_4&=
\begin{pmatrix}
\Gamma_{21}C_qP_2& {\bf 0}_{n_q\times n_p}& {\bf 0}_{n_q\times n_w} \\
{\bf 0}_{n_p\times n_x}& {\bf 0}_{n_p\times n_p}& {\bf 0}_{n_p\times n_w}
\end{pmatrix}.
\end{align*}
It is easy to verify that $\Upsilon_4=\tilde M_2 \hat\Upsilon_4$ and $\hat\Upsilon_4^\top \tilde M_2\hat\Upsilon_4={\bf 0}$.
Therefore, $Q_2^\top\tilde M_2Q_2=(\Upsilon_3+\hat\Upsilon_4)^\top\tilde M_2(\Upsilon_3+\hat\Upsilon_4)
=\Upsilon_3^\top \tilde M_2\Upsilon_3+\Upsilon_3^\top \Upsilon_4+\Upsilon_4^\top \Upsilon_3.$
Hence, our claim is proved.
Plugging $R_1$ into $\Phi_0$ and $\Phi$, we have $\Phi_0=P_1(\tilde A_1+\tilde L_2C)+(\tilde A_1+\tilde L_2C)^\top P_1+\alpha_1 P_1$, and the $(1,3)$ entry of $\Phi$ is $P_1(E_w+\tilde L_2F_w)$.
Define $\xi_1=(e^\top,\delta\tilde p_1^\top,w^\top)^\top$. Pre- and post-multiplying \eqref{ineqV3} by $\xi_1^\top$ and $\xi_1$, respectively, we have $\xi_1^\top \Phi\xi_1+\xi_1^\top Q_1^\top\tilde M_1Q_1\xi_1\leq 0.$
Since $Q_1\xi_1=
\begin{pmatrix}
\delta\tilde q_1\\
\delta\tilde p_1
\end{pmatrix}=T_1
\begin{pmatrix}
\delta q \\
\delta p
\end{pmatrix}$ and $M_1$ satisfies Assumption \ref{ass3},
we have $\xi_1^\top Q_1^\top\tilde M_1Q_1\xi_1\geq 0$,
which implies that $\xi_1^\top \Phi\xi_1\leq 0$. Hence, $2e^\top P_1[(\tilde A_1+\tilde L_2C)e-\tilde E_1 \delta\tilde p_1+(E_w+\tilde L_2F_w)w]+\alpha_1 e^\top P_1e-\mu_1\|w\|^2\leq 0.$
Define $V_1(e)=e^\top P_1e$. Then,
we have $\dot V_1\leq -\alpha_1 e^\top P_1e+\mu_1\|w\|^2$.
Define $\Delta q=C_q\hat x-C_qx$ and $\Delta \tilde q:=\tilde q_1(C_q\hat x)-\tilde q_1(C_qx)$. Then, $\Delta q=-C_qe$ and $\Delta \tilde q=-\Gamma_{11}C_qe+\Gamma_{12}\Delta\tilde p$ where $\Delta \tilde{p}=\tilde{p}_1 (C_q\hat{x})-\tilde{p}_1(C_qx)$. Define $\zeta=(e^\top,\Delta\tilde p^\top)^\top$. Therefore,
\begin{align*}
&\zeta^\top
\begin{pmatrix}
-\Gamma_{11}C_q & \Gamma_{12} \\
{\bf 0} & I
\end{pmatrix}^\top\tilde M_1
\begin{pmatrix}
-\Gamma_{11}C_q & \Gamma_{12} \\
{\bf 0} & I
\end{pmatrix}\zeta\nonumber\\
=&
\begin{pmatrix}
\Delta q\\
\Delta p
\end{pmatrix}^\top T_1^\top\tilde M_1T_1
\begin{pmatrix}
\Delta q\\
\Delta p
\end{pmatrix}\geq 0,
\end{align*}
where the last equality is from Assumption \ref{ass3}. Hence, $-2e^\top C_q^\top\Gamma_{11}^\top X_1 \Delta\tilde p+\Delta\tilde p^\top(\Gamma_{12}^\top X_1+X_1^\top \Gamma_{12}+Y_1)\Delta\tilde p\geq 0.$
From \eqref{LMI5}, the inequality above implies that
$\kappa_1 \|e\|\|\Delta\tilde p\|-\kappa_2\|\Delta\tilde p\|^2\geq 0$,
where $\kappa_1=2\|C_q^\top\Gamma_{11}^\top X_1\|$ and $\kappa_2=-\lambda_{\max}(\Gamma_{12}^\top X_1+X_1^\top \Gamma_{12}+Y_1)$. Noticing that $\kappa_1\geq 0,\kappa_2>0$, we have $\|\Delta\tilde p\|\leq \frac{\kappa_1}{\kappa_2} \|e\|$.
Noting that $\Delta k=\hat{K}_1e+\hat{K}_2\Delta \tilde p$ with $\Delta \tilde{p}=\tilde{p}_1 (C_q\hat{x})-\tilde{p}_1(C_qx)$, $\hat{K}_1= K_1 -K_2T_{14}^{-1}T_{13}C_q$, $\hat{K}_2 = K_2T_{14}^{-1}$, we have
\begin{align}\label{delk2}
\|\Delta k\| \leq \hat \kappa\|e\|
\end{align}
for all $x,e$, where $\hat{\kappa}=\|\hat{K}_1\|+\|\hat{K}_2\|\kappa_1/\kappa_2\geq 0$.
From \eqref{L1v2} we have $R_3=\tilde K_1P_2$ and $R_4=\tilde K_2$, where $\tilde K_1,\tilde K_2$ are defined in \eqref{tildeK}. Plugging $R_3,R_4$ into $\Psi_0$ and $\Psi$, we have $\Psi_0=(\tilde A_2+B\tilde K_1)P_2+P_2(\tilde A_2+B\tilde K_1)^\top+\alpha_2P_2$, and the $(1,2)$ entry of $\Psi$ is $\tilde E_2+B\tilde K_2$.
Let $P_3=P_2^{-1}$ and pre- and post-multiply \eqref{ineqV5} by $diag(P_3,I_{n_p},I_{n_w})$ and its transpose, respectively. This results in
\begin{align}
&\hat\Psi+Q_3^\top\tilde M_2Q_3\leq 0,\label{eqPsi3}
\end{align}
where $Q_3=
\begin{pmatrix}\Gamma_{21} C_q& \Gamma_{22} &{\bf 0}_{n_q\times n_w}\\
{\bf 0}_{n_p\times n_x}& I_{n_p}& {\bf 0}_{n_p\times n_w} \end{pmatrix}$
and $\hat\Psi$ is given in \eqref{hatpsi} with $\hat\Psi_0$ given in \eqref{hatpsi0}. Define $\xi_2=(x^\top,\tilde p_2^\top,w^\top)^\top$. Pre- and post-multiplying \eqref{eqPsi3} by $\xi_2^\top$ and $\xi_2$, respectively, we have $\xi_2^\top \hat\Psi\xi_2+\xi_2^\top Q_3^\top\tilde M_2Q_3\xi_2\leq 0$.
Since $Q_3\xi_2=
\begin{pmatrix}
\tilde q_2\\
\tilde p_2
\end{pmatrix}=T_2
\begin{pmatrix}
q \\
p
\end{pmatrix}$ and $M_2$ satisfies Assumption \ref{ass3}, it follows that $\xi_2^\top Q_3^\top\tilde M_2Q_3\xi_2\geq 0$.
Hence, we have $\xi_2^\top \hat\Psi\xi_2\leq 0$, which is equivalent to $2x^\top P_3[(\tilde A_2+B\tilde K_1)x+(\tilde E_2+B\tilde K_2)\tilde p_2+E_ww]+\alpha_2 x^\top P_3x-\mu_2\|w\|^2\leq 0.$
Let $V_2(x)=x^\top P_3x$. Then, we have $\dot V_2=2x^\top P_3[(\tilde A_2+B\tilde{K}_1)x+(\tilde E_2+B\tilde{K}_2)\tilde p_2-B\Delta k+E_ww]\leq -\alpha_2 x^\top P_3x+\mu_2\|w\|^2+2\|P_3B\|\|x\|\|\Delta k\|.$
Recalling \eqref{delk2}, we have
$\dot{V}_2 \leq -\alpha_2 x^\top P_3x+\mu_2\|w\|^2+\theta\|x\|\|e\|$
where $\theta=2\|P_3B\|\hat{\kappa}$. The rest of the proof proceeds as in part 5) of the proof of Theorem \ref{thm1}.\hfill$\Box$
{\bf \emph{Proof of Theorem \ref{thm3}}.}
Since the derivative of $V$ along the trajectory of the closed-loop system \eqref{eqthm3-1} satisfies $\dot V\leq -\alpha_0 V+ \mu\|w\|^2$, the derivative of $V$ along the trajectory of the closed-loop system \eqref{closedtrigger} satisfies $\dot V\leq -\alpha_0 V+ \mu\|w\|^2+2z^\top P[H_5\hat x_e+H_3(\delta \hat p-\Delta p)]
\leq -\alpha_0\lambda_{m}(P)\|z\|^2+ \mu\|w\|^2+2\|z\|(\|PH_5\|\|\hat x_e\|
+\|PH_3\|\|\delta \hat p-\Delta p\|).$
Since $p$ is assumed to be globally Lipschitz continuous, there exists a constant $\ell>0$ such that $\|p(r)-p(s)\|\leq \ell\|r-s\|$ for any $r,s\in\mathbb{R}^n$. Hence, $\|\delta \hat p-\Delta p\|=\|p(C_q\hat x_s)-p(C_q\hat x)\|\leq \ell\|C_q(\hat x_s-\hat x)\|\leq \ell\|C_q\|\|\hat x_e\|$. Then, we have
\begin{align}
\dot V&\leq -\alpha_0\lambda_{m}(P)\|z\|^2+ \mu\|w\|^2+2s\|z\|\|\hat x_e\|\nonumber\\
&\leq -(1-\varrho)\alpha_0\lambda_{m}(P)\|z\|^2 + \mu\|w\|^2\nonumber\\
&\quad \quad + \|z\|\Big[2s\|\hat x_e\|-\varrho\alpha_0\lambda_{m}(P)\|z\|\Big]\label{pf3dotv}
\end{align}
where $\varrho$ is a constant satisfying $0<\varrho<1$, and
\begin{align}
s=\|PH_5\|+\ell\|PH_3\|\|C_q\|.\label{eqns}
\end{align}
Choose $\sigma>0$ in \eqref{triggercon1} as
\begin{align*}
\sigma&=\frac{\varrho\alpha_0\lambda_m(P)}{2\sqrt{2}s}.
\end{align*}
For any $x,e$, we have $\|z\|=\sqrt{\|x\|^2+\|x-\hat x\|^2}=\sqrt{\|\hat x\|^2+2\|x\|^2-2x^\top\hat x}\geq \|\hat x\|/\sqrt{2}$, meaning that $\|\hat x\|\leq \sqrt{2}\|z\|$. Therefore, the condition
\begin{align}\label{triggercon}
\|\hat x_e\|\leq \sigma \|\hat x\|+\epsilon
\end{align}
implies
\begin{align}
\|\hat x_e\|\leq \sqrt{2}\sigma \|z\|+\epsilon,\label{pf3dotvcon}
\end{align}
which is equivalent to the inequality
$2s\|\hat x_e\|-\varrho\alpha_0\lambda_{m}(P)\|z\|\leq 2s\epsilon.$
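The equivalence above is a direct rescaling: with $\sigma=\varrho\alpha_0\lambda_m(P)/(2\sqrt{2}s)$ we have $\sqrt{2}\sigma=\varrho\alpha_0\lambda_m(P)/(2s)$, so multiplying \eqref{pf3dotvcon} through by $2s$ gives the stated inequality. A quick numerical sanity check with illustrative constants (assumed values, not from the paper):

```python
import random

random.seed(0)
# Illustrative constants.
rho, alpha0, lmP, s, eps = 0.5, 1.0, 2.0, 3.0, 0.1
sigma = rho * alpha0 * lmP / (2.0 * 2**0.5 * s)

# sqrt(2)*sigma equals rho*alpha0*lmP/(2s), so the triggering bound
# ||x_e|| <= sqrt(2)*sigma*||z|| + eps rescales (multiply by 2s) to
# 2s*||x_e|| - rho*alpha0*lmP*||z|| <= 2s*eps.
assert abs(2**0.5 * sigma - rho * alpha0 * lmP / (2.0 * s)) < 1e-12
for _ in range(1000):
    xe, z = random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)
    if xe <= 2**0.5 * sigma * z + eps:
        assert 2 * s * xe - rho * alpha0 * lmP * z <= 2 * s * eps + 1e-9
```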
Choose a constant $c$ such that $0<c<(1-\varrho)\alpha_0\lambda_{m}(P)$. Then, as long as \eqref{pf3dotvcon} holds, from \eqref{pf3dotv} we have
\begin{align}
\dot V
&\leq-[(1-\varrho)\alpha_0\lambda_{m}(P)-c]\|z\|^2+\mu\|w\|^2+ \frac{s^2\epsilon^2}{c}.\label{triggerLayu}
\end{align}
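The constant term $s^2\epsilon^2/c$ in \eqref{triggerLayu} comes from Young's inequality $2s\epsilon\|z\|\leq c\|z\|^2+s^2\epsilon^2/c$, valid for any $c>0$. A one-line numerical check with illustrative constants (assumed values):

```python
import random

random.seed(2)
# Young's inequality: 2ab <= c*a^2 + b^2/c for any c > 0,
# applied with a = ||z|| and b = s*eps.
s, eps, c = 3.0, 0.1, 0.4  # illustrative values
for _ in range(1000):
    z = random.uniform(0.0, 10.0)
    assert 2 * s * eps * z <= c * z**2 + (s * eps) ** 2 / c + 1e-12
```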
Recalling that $p({\bf 0})={\bf 0}$ and $\ell$ is the Lipschitz constant of $p$, we have $\|p\|\leq \ell\|C_q\|\|x\|$, $\|\delta p\|\leq \ell \|\delta q\|\leq \ell(\|C_q+L_1C\|\|e\|+\|L_1F_w\|\|w\|)$, and $\|\delta \hat p\|\leq \ell \|C_q\|(\|\hat x_e\|+\|e\|)$.
Therefore, from \eqref{closedtrigger} we have
\begin{align*}
\|\dot z\|
&\leq \|A_c\|\|z\|+\|H_5\|\|\hat x_e\|+\ell\|H_1\|\|C_q\|\|x\|\\
&\quad\quad +\ell\|H_2\|(\|C_q+L_1C\|\|e\|+\|L_1F_w\|\|w\|)\\
&\quad\quad +\ell\|H_3\|\|C_q\|(\|\hat x_e\|+\|e\|)+\|H_4\|\|w\|\\
&\leq \eta_1\|z\|+\eta_2\|\hat x_e\|+\eta_3\|w\|
\end{align*}
where
\begin{align}\label{eta1}
\begin{cases}
\eta_1=\|A_c\|+\ell\sqrt{b_1^2+b_2^2},\\%\label{eta1}\\
\eta_2=\|H_5\|+\ell\|H_3\|\|C_q\|,\\%\label{eta2}\\
\eta_3=\ell\|H_2\|\|L_1F_w\|+\|H_4\|,\\%\label{eta3}\\
b_1=\|H_1\|\|C_q\|,\\
b_2=\|H_2\|\|C_q+L_1C\|+\|H_3\|\|C_q\|.
\end{cases}
\end{align}
Note that the second inequality above follows from the Cauchy--Schwarz inequality $b_1\|x\|+b_2\|e\|\leq \sqrt{b_1^2+b_2^2}\|z\|$.
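This step is the Cauchy--Schwarz inequality applied to the vectors $(b_1,b_2)$ and $(\|x\|,\|e\|)$, using $\|z\|=\sqrt{\|x\|^2+\|e\|^2}$; a quick randomized check:

```python
import random

random.seed(1)
# b1*||x|| + b2*||e|| <= sqrt(b1^2 + b2^2) * ||z||, with
# ||z||^2 = ||x||^2 + ||e||^2, is Cauchy-Schwarz for (b1, b2) and (||x||, ||e||).
for _ in range(1000):
    b1, b2 = random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)
    nx, ne = random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)
    nz = (nx**2 + ne**2) ** 0.5
    assert b1 * nx + b2 * ne <= (b1**2 + b2**2) ** 0.5 * nz + 1e-9
```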
Because $\|\dot z\|=\sqrt{\|\dot x\|^2+\|\dot e\|^2}=\sqrt{\|\dot x\|^2+\|\dot x-\dot{\hat{x}}\|^2}=\sqrt{\|\dot {\hat{x}}\|^2+2\|\dot x\|^2-2\dot x^\top\dot{\hat{x}}}\geq \|\dot{\hat{x}}\|/\sqrt{2}$
and $\|\dot{\hat{x}}_e\|=\|\dot{\hat{x}}\|$, we have $\|\dot{\hat{x}}_e\|\leq \sqrt{2}\|\dot z\|$.
Let
\begin{equation*}
v(t) = \frac{\|\hat x_e(t)\|}{\sqrt{2}\sigma\|z(t)\|+\epsilon}.
\end{equation*}
Then for any $h>0$,
\begin{align}
&v(t+h)-v(t)=
\frac{\|\hat x_e(t+h)\|}{\sqrt{2}\sigma\|z(t+h)\|+\epsilon}-\frac{\|\hat x_e(t)\|}{\sqrt{2}\sigma\|z(t)\|+\epsilon}
\nonumber\\
&=
\frac{ \|\hat x_e(t+h)\|(\sqrt{2}\sigma\|z(t)\|+\epsilon)-\|\hat x_e(t)\|(\sqrt{2}\sigma\|z(t+h)\|+\epsilon)}{(\sqrt{2}\sigma\|z(t+h)\|+\epsilon)(\sqrt{2}\sigma\|z(t)\|+\epsilon)}
\nonumber\\
&=
\frac{( \|\hat x_e(t+h)\| -\|\hat x_e(t)\|)(\sqrt{2}\sigma\|z(t)\|+\epsilon)}{(\sqrt{2}\sigma\|z(t+h)\|+\epsilon)(\sqrt{2}\sigma\|z(t)\|+\epsilon)}
\nonumber\\
&-\frac{\sqrt{2}\sigma \|\hat x_e(t)\|(\|z(t+h)\|-\|z(t)\|)}{(\sqrt{2}\sigma\|z(t+h)\|+\epsilon)(\sqrt{2}\sigma\|z(t)\|+\epsilon)}
\nonumber
\end{align}
and hence
\begin{align}
D^+v(t) &= \limsup_{h \rightarrow 0^+}\frac{v(t+h) -v(t)}{h}
\nonumber\\
&=\frac{D^+\|\hat{x}_e(t)\|}{\sqrt{2}\sigma\|z(t)\|+\epsilon}
-\frac{\sqrt{2}\sigma\|\hat{x}_e(t)\|D^+\|z(t)\|}{(\sqrt{2}\sigma\|z(t)\|+\epsilon)^2}.
\label{eq:D+v}
\end{align}
When $z(t) \neq0$, $D^+\|z(t)\| =\frac{z(t)^T\dot{z}(t)}{\|z(t)\|}$ and therefore $\lvert D^+\|z(t)\|\rvert\le \|\dot{z}(t)\|$.
When $z(t)= 0$,
\begin{align}
D^+\|z(t)\| &= \limsup_{h \rightarrow 0^+} \frac{ \|z(t+h)\| -\|z(t)\| }{h}
\nonumber\\
&= \limsup_{h \rightarrow 0^+} \left\|\frac{z(t+h)}{h}\right\|
=\| \dot{z}(t)\|.
\nonumber
\end{align}
Thus, in all cases $\lvert D^+\|z(t)\|\rvert\le \|\dot{z}(t)\|$.
Similarly, $\lvert D^+\|\hat{x}_e(t)\|\rvert\le \|\dot{\hat x}_e(t)\|$.
Dropping the argument $t$,
it now follows from \eqref{eq:D+v} that
\begin{align}
D^+v &\le
\frac{\|\dot{\hat x}_e\|}{\sqrt{2}\sigma\|z\|+\epsilon}
+\frac{\sqrt{2}\sigma\|\hat{x}_e\|\|\dot{z}\|}{(\sqrt{2}\sigma\|z\|+\epsilon)^2}
\nonumber\\
&\leq \frac{\sqrt{2}\|\dot{z}\|}{\sqrt{2}\sigma\|z\|+\epsilon}+\frac{\sqrt{2}\sigma\|\hat x_e\|\|\dot z\|}{(\sqrt{2}\sigma\|z\|+\epsilon)^2}\nonumber\\
&= \frac{\sqrt{2}\|\dot z\|}{\sqrt{2}\sigma\|z\|+\epsilon}(1+\frac{\sigma\|\hat x_e\|}{\sqrt{2}\sigma\|z\|+\epsilon})
\nonumber\\
&\leq \sqrt{2}(\eta_4+\eta_2\frac{\|\hat x_e\|}{\sqrt{2}\sigma\|z\|+\epsilon})
(1+\sigma\frac{\|\hat x_e\|}{\sqrt{2}\sigma\|z\|+\epsilon})
\nonumber\\
&=\sqrt{2}\left(\eta_4 +\eta_2v\right)\left(1 + \sigma v\right)
\label{dotode}
\end{align}
where
\begin{align}
\eta_4=\frac{\eta_1}{\sqrt{2}\sigma}+\frac{\eta_3\omega_0}{\epsilon}\label{eta4}
\end{align}
and the following facts are used to derive the last inequality:
\begin{align*}
\frac{\eta_1\|z\|}{\sqrt{2}\sigma\|z\|+ \epsilon}\leq \frac{\eta_1}{\sqrt{2}\sigma},\;\frac{\eta_3\|w\|}{\sqrt{2}\sigma\|z\|+\epsilon}\leq \frac{\eta_3\omega_0}{\epsilon}.
\end{align*}
Since $v(t_k) =0$, it now follows from the comparison lemma that $v(t)\leq \phi(t-t_k)$
where $\phi$ is the solution of the following ODE with $\phi(0) = 0$:
\begin{align}
&\dot \phi=\sqrt{2}(\eta_4+\eta_2\phi)(1+\sigma\phi).\label{odephi}
\end{align}
Let $\tau>0$ in \eqref{triggercon1} be the solution of the equation $\phi(\tau)=1$.
Then the time it takes for $v$ to evolve from $0$ to $1$ is lower bounded by $\tau$.
Therefore, \eqref{pf3dotvcon} holds during the time interval $[t_k,t_k+\tau]$.
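Since \eqref{odephi} does not depend on the state, $\tau$ can be computed offline by integrating the ODE until $\phi$ reaches $1$. A minimal forward-Euler sketch with illustrative $\eta_2,\eta_4,\sigma$ (assumed values, not from the paper):

```python
# phi' = sqrt(2)*(eta4 + eta2*phi)*(1 + sigma*phi), phi(0) = 0;
# tau is the first time at which phi reaches 1.
eta2, eta4, sigma = 1.0, 0.5, 0.2  # illustrative constants
h, phi, t = 1e-5, 0.0, 0.0
while phi < 1.0:
    phi += h * (2**0.5) * (eta4 + eta2 * phi) * (1.0 + sigma * phi)
    t += h
tau = t  # numerical estimate of the inter-event lower bound
assert tau > 0.0
```

In practice one would use a guaranteed integrator, or subtract a small safety margin, so that the computed value does not exceed the exact hitting time.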
For any $k\geq 0$, if $t_{k+1}=t_k+\tau$, then \eqref{pf3dotvcon} holds during the interval $[t_k,t_{k+1})$ as shown above; if $t_{k+1}>t_k+\tau$, then, during the interval $[t_k+\tau, t_{k+1})$, condition \eqref{triggercon} holds, which implies that \eqref{pf3dotvcon} holds. Therefore, \eqref{pf3dotvcon} holds during any interval $[t_k, t_{k+1})$ for any $k\geq 0$, i.e., it holds for any $t\geq 0$.
Since satisfaction of \eqref{pf3dotvcon} implies the inequality \eqref{triggerLayu}, we conclude that the function $V$ is an ISpS-Lyapunov function since it satisfies \eqref{ISSineq} for any $t\geq 0$ with $\gamma(\|z\|)=[(1-\varrho)\alpha_0\lambda_{m}(P)-c]\|z\|^2\in\mathcal{K}_\infty$, $\chi(\|w\|)=\mu\|w\|^2\in\mathcal{K}$ and $d=s^2\epsilon^2/c>0$. The conclusion follows by Proposition \ref{proISS}. \hfill$\Box$
{\bf \emph{Proof of Theorem \ref{corthm3}}.}
The closed-loop system that combines the system \eqref{dyn1}-\eqref{eq:delQC} and the continuous-time controller \eqref{input1} when $E_w=F_w={\bf 0}$ can be expressed as
\begin{align}
\dot z=A_cz+H_1p+H_2\delta p+H_3\Delta p.\label{eqthm3-2}
\end{align}
Since the derivative of $V$ along the trajectory of the closed-loop system \eqref{eqthm3-2} satisfies $\dot V\leq -\alpha_0 V$, the derivative of $V$ along the trajectory of the closed-loop system \eqref{closedtrigger} satisfies
\begin{align}
\dot V&\leq -\alpha_0 V+2z^\top P[H_5\hat x_e+H_3(\delta \hat p-\Delta p)+H_4w]\nonumber\\
&\leq -(1-\varrho)\alpha_0\lambda_{m}(P)\|z\|^2 -\|z\|\Big[\varrho\alpha_0\lambda_{m}(P)\|z\|\nonumber\\
&\quad \quad -2s\|\hat x_e\|-2\|PH_4\|\|w\|\Big]\label{pf3dotv2}
\end{align}
where $s=\|PH_5\|+\ell\|PH_3\|\|C_q\|$ as defined in \eqref{eqns}, and $\varrho$ is a constant satisfying $0<\varrho<1$.
Choose $\sigma$ in \eqref{triggercon1} as
\begin{align}
0<\sigma&<\frac{\varrho\alpha_0\lambda_m(P)}{2\sqrt{2}s}.\label{eqsigma2}
\end{align}
Since $\|\hat x\|\leq \sqrt{2}\|z\|$ as shown in the proof of Theorem \ref{thm3}, the condition
$\|\hat x_e\|\leq \sigma \|\hat x\|+\epsilon$ given in \eqref{triggercon} implies the condition $\|\hat x_e\|\leq \sqrt{2}\sigma \|z\|+\epsilon$ given in \eqref{pf3dotvcon}.
If \eqref{pf3dotvcon} holds and
\begin{align}
\|z\|&\geq \frac{1}{1-\frac{2\sqrt{2}s}{\varrho\alpha_0\lambda_m(P)}\sigma}\Big[\frac{2\|PH_4\|}{\varrho\alpha_0\lambda_{m}(P)}\|w\|+\frac{2s}{\varrho\alpha_0\lambda_{m}(P)}\epsilon\Big],\label{ineqz1}
\end{align}
we have
\begin{align}
\|z\|&\geq \frac{2\|PH_4\|}{\varrho\alpha_0\lambda_{m}(P)}\|w\|+\frac{2s}{\varrho\alpha_0\lambda_{m}(P)}\|\hat x_e\|.\label{ineqz2}
\end{align}
Furthermore, as long as \eqref{ineqz2} holds, the following inequality holds by \eqref{pf3dotv2}:
\begin{align}
\dot V\leq -(1-\varrho)\alpha_0\lambda_{m}(P)\|z\|^2.\label{dotV1}
\end{align}
Using the same argument as in the proof of Theorem \ref{thm3}, we can show that \eqref{dotode} holds with the same $\eta_1,\eta_2,\eta_3,\eta_4$ given in \eqref{eta1}-\eqref{eta4} and $\sigma$ given in \eqref{eqsigma2}.
Let $\tau$ be the solution of the equation $\phi(\tau,0)=1$ where $\phi(t,0)$ satisfies the ODE shown in \eqref{odephi}. Then, the time it takes for $\|\hat x_e\|$ to evolve from $0$ to $\sqrt{2}\sigma\|z\|+\epsilon$ is lower bounded by $\tau$.
Furthermore, condition \eqref{pf3dotvcon} holds during any interval $[t_k, t_{k+1})$ for any $k\geq 0$, i.e., it holds for any $t\geq 0$.
Then, it follows that \eqref{dotV1} holds as long as \eqref{ineqz1} holds. Thus, the function $V$ is an ISpS-Lyapunov function since it satisfies \eqref{ISSineq4} for any $t\geq 0$ with $\gamma(\|z\|)=(1-\varrho)\alpha_0\lambda_{m}(P)\|z\|^2$, $\chi(\|w\|)=\frac{2\|PH_4\|}{\varrho\alpha_0\lambda_{m}(P)-2\sqrt{2}s\sigma}\|w\|$ and $d=\frac{2s}{\varrho\alpha_0\lambda_{m}(P)-2\sqrt{2}s\sigma}\epsilon$.
The conclusion follows by Proposition \ref{proISS}.
\hfill$\Box$
{\bf \emph{Proof of Theorem \ref{thm4}}.}
If the derivative of $V$ along the trajectory of \eqref{eqthm3-1} satisfies $\dot V\leq -\alpha_0 V+ \mu\|w\|^2$, then the derivative of $V$ along the trajectory of the closed-loop system \eqref{closedtrigger3} satisfies
\begin{align}
\dot V&\leq -\alpha_0 V+ \mu\|w\|^2+2z^\top P\Big[H_2(\delta \tilde p-\delta p)\nonumber\\
&\qquad\qquad+H_3(\delta\hat p-\Delta p)+H_5\hat x_e+H_6y_e\Big]\nonumber\\
&\leq -(1-\varrho)\alpha_0\lambda_{m}(P)\|z\|^2+ \mu\|w\|^2+\|z\|\Big(2s_1\|\hat x_e\|\nonumber\\
&\; +2s_2\|y_e\|-\varrho\alpha_0\lambda_{m}(P)\|z\|\Big)\label{dotVthm4}
\end{align}
where
\begin{align}\label{s1}
\begin{cases}
s_1=\|PH_5\|+\ell\|PH_3\|\|C_q\|,\\%\label{s1}\\
s_2=\|PH_6\|+\ell\|PH_2\|\|L_1\|,\\%\label{s2}
\end{cases}
\end{align}
$\varrho$ is a constant satisfying $0<\varrho<1$, and $\ell$ is the Lipschitz constant of $p$. The following facts are used in the second inequality above:
$\|\delta \tilde p-\delta p\|\leq \ell\|\delta\tilde q-\delta q\|\leq \ell \|L_1\|\|y_e\|$,
$\|\delta \hat p-\Delta p\|\leq \ell\|C_q\|\|\hat x_e\|$.
Let $a_1,a_2$ be two constants satisfying $0< a_1,a_2< 1$ and $a_1+a_2=1$. As $\|z\|\geq \|\hat x\|/\sqrt{2}$ and $\|z\|\geq \|x\|\geq \|y\|/\|C\|$, we have
\begin{align}
\|z\|\geq \frac{a_1\|\hat x\|}{\sqrt{2}}+\frac{a_2\|y\|}{\|C\|}.\label{znorm}
\end{align}
From \eqref{dotVthm4} and \eqref{znorm} we have
\begin{align}
\dot V&\leq -(1-\varrho)\alpha_0\lambda_{m}(P)\|z\|^2 + \mu\|w\|^2\nonumber\\
&\quad \quad + \|z\|\Big[2s_1\|\hat x_e\|-\frac{a_1\varrho \alpha_0\lambda_{m}(P)}{\sqrt{2}}\|\hat x\|\Big]\nonumber\\
&\quad \quad+ \|z\|\Big[2s_2\|y_e\|-\frac{a_2\varrho \alpha_0\lambda_{m}(P)}{\|C\|}\|y\|\Big].\label{pf3dotv3}
\end{align}
Choose $\sigma_y$ in \eqref{triggercon1} and $\sigma_u$ in \eqref{triggeroutput1} as follows:
\begin{align}
\sigma_y&=\frac{a_2\varrho \alpha_0\lambda_{m}(P)}{2\|C\|s_2},\;\;
\sigma_u=\frac{a_1\varrho \alpha_0\lambda_{m}(P)}{2\sqrt{2}s_1}.\label{eqsigmau}
\end{align}
The condition $\|\hat x_e\|\leq \sigma_u \|\hat x\|+\epsilon_u$
implies
\begin{align}
\|\hat x_e\|\leq \sqrt{2}\sigma_u \|z\|+\epsilon_u,\label{triggerconu2}
\end{align}
and the condition $\|y_e\|\leq \sigma_y \|y\|+\epsilon_y$
implies
\begin{align}
\|y_e\|\leq \sigma_y\|C\| \|z\|+\epsilon_y.\label{triggercony2}
\end{align}
As long as \eqref{triggerconu2} and \eqref{triggercony2} hold, we have
\begin{align}
\dot V
&\leq -[(1-\varrho)\alpha_0\lambda_{m}(P)-c]\|z\|^2+\mu\|w\|^2+ \frac{\epsilon_0^2}{4c}\label{triggerLayu2}
\end{align}
where $\epsilon_0=2(s_1\epsilon_u+s_2\epsilon_y)$, and $c$ is a constant satisfying $0<c<(1-\varrho)\alpha_0\lambda_{m}(P)$.
Since $\|p\|\leq \ell\|C_q\|\|x\|$, $\|\delta \hat p\|\leq \ell \|C_q\|(\|\hat x_e\|+\|e\|)$, and $\|\delta \tilde p\|\leq \ell(\|C_q+L_1C\|\|e\|+\|L_1F_w\|\|w\|+\|L_1\|\|y_e\|)$, from \eqref{closedtrigger3} we have $\|\dot z\|\leq \eta_1\|z\|+\eta_2\|\hat x_e\|+\eta_3\|w\|+\eta_4\|y_e\|$
where $\eta_1,\eta_2,\eta_3$ are given in \eqref{eta1}, and $\eta_4=\ell\|H_2\|\|L_1\|+\|H_6\|$.
Similar to the argument in the proof of Theorem \ref{thm3}, it is not hard to show that the following inequality holds when $\|\hat x_e\|\neq 0$ and $\|z\|\neq 0$:
\begin{align}
&\frac{\diff }{\diff t}(\frac{\|\hat x_e\|}{\sqrt{2}\sigma_u\|z\|+\epsilon_u})\leq \sqrt{2}(1+\frac{\sigma_u\|\hat x_e\|}{\sqrt{2}\sigma_u\|z\|+\epsilon_u})\times \nonumber\\
& (\eta_5+\frac{\eta_2\|\hat x_e\|}{\sqrt{2}\sigma_u\|z\|+\epsilon_u}+\frac{\eta_4\|y_e\|}{\sqrt{2}\sigma_u\|z\|+\epsilon_u})
\label{dotode2}
\end{align}
where
\begin{align*}
\eta_5&=\frac{\eta_1}{\sqrt{2}\sigma_u}+\frac{\eta_3\omega_0}{\epsilon_u}.
\end{align*}
Choosing $d_1=\max\{\frac{\epsilon_y}{\epsilon_u},\frac{\sigma_y\|C\|}{\sqrt{2}\sigma_u}\}$, it is easy to verify that
$
\frac{\eta_4\|y_e\|}{\sqrt{2}\sigma_u\|z\|+\epsilon_u}\leq d_1\frac{\eta_4\|y_e\|}{\sigma_y\|C\|\|z\|+\epsilon_y}.
$
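The claimed bound holds because $d_1$ dominates both the ratio of the offsets and the ratio of the slopes, so $\sigma_y\|C\|\|z\|+\epsilon_y\leq d_1(\sqrt{2}\sigma_u\|z\|+\epsilon_u)$ for all $\|z\|\geq 0$. A numerical sketch with illustrative constants (assumed values, not from the paper):

```python
import random

random.seed(3)
# d1 = max{eps_y/eps_u, sigma_y*||C||/(sqrt(2)*sigma_u)} gives
# sigma_y*||C||*z + eps_y <= d1*(sqrt(2)*sigma_u*z + eps_u) for all z >= 0,
# which lets one denominator be replaced by the other.
sig_u, sig_y, normC, eps_u, eps_y = 0.3, 0.2, 1.5, 0.05, 0.08
d1 = max(eps_y / eps_u, sig_y * normC / (2**0.5 * sig_u))
for _ in range(1000):
    z = random.uniform(0.0, 100.0)
    assert sig_y * normC * z + eps_y <= d1 * (2**0.5 * sig_u * z + eps_u) + 1e-9
```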
Hence, from \eqref{dotode2} we have
\begin{align*}
&\frac{\diff }{\diff t}(\frac{\|\hat x_e\|}{\sqrt{2}\sigma_u\|z\|+\epsilon_u})\leq \sqrt{2}(1+\frac{\sigma_u\|\hat x_e\|}{\sqrt{2}\sigma_u\|z\|+\epsilon_u})\times \nonumber\\
& (\eta_5+\frac{\eta_2\|\hat x_e\|}{\sqrt{2}\sigma_u\|z\|+\epsilon_u}+d_1\frac{\eta_4\|y_e\|}{\sigma_y\|C\|\|z\|+\epsilon_y}).
\end{align*}
When $\|\hat x_e\|= 0$ or $\|z\|= 0$, the upper right-hand derivative of $\frac{\|\hat x_e\|}{\sqrt{2}\sigma_u\|z\|+\epsilon_u}$ can be calculated similarly to the proof of Theorem \ref{thm3}, and it is still captured by the inequality above.
Since $\|\dot y_e\|=\|\dot y\|\leq \|C\|\|\dot x\|\leq \|C\|\|\dot z\|$, we can similarly show that the following inequality holds:
\begin{align*}
&\frac{\diff }{\diff t}(\frac{\|y_e\|}{\sigma_y\|C\|\|z\|+\epsilon_y})\leq \|C\|(1+\frac{\sigma_y\|y_e\|}{\sigma_y\|C\|\|z\|+\epsilon_y})\times \nonumber\\
& (\eta_6+\frac{\eta_4\|y_e\|}{\sigma_y\|C\|\|z\|+\epsilon_y}+d_2\frac{\eta_2\|\hat x_e\|}{\sqrt{2}\sigma_u\|z\|+\epsilon_u})
\end{align*}
where $\eta_6=\frac{\eta_1}{\sigma_y\|C\|}+\frac{\eta_3\omega_0}{\epsilon_y},\;d_2=\max\{\frac{\epsilon_u}{\epsilon_y},\frac{\sqrt{2}\sigma_u}{\sigma_y\|C\|}\}$, and the discussion on using the upper right-hand
derivative is omitted since it proceeds similarly to the proof of Theorem \ref{thm3}.
Let $\tau_u>0$ be the solution of $\phi_1(\tau_u,0)=1$ where $\phi_1(t,x_0)$ is the solution of the following ODE with initial state $x_0$: $\dot \phi_1=\sqrt{2}(1+\sigma_u\phi_1)(\eta_5+\eta_2\phi_1+d_1\eta_4).$
Let $\tau_y>0$ be the solution of $\phi_2(\tau_y,0)=1$ where $\phi_2(t,x_0)$ is the solution of the following ODE with the initial state $x_0$: $\dot \phi_2=\|C\|(1+\sigma_y\phi_2)(\eta_6+\eta_4\phi_2+d_2\eta_2).$
Then it is not hard to show that the time it takes for $\|\hat x_e\|$ (resp. $\|y_e\|$) to evolve from $0$ to $\sqrt{2}\sigma_u\|z\|+\epsilon_u$ (resp. $\sigma_y\|C\|\|z\|+\epsilon_y$) is lower bounded by $\tau_u$ (resp. $\tau_y$), which implies that \eqref{triggerconu2} holds during $[t_k^u,t_k^u+\tau_u)$, and \eqref{triggercony2} holds during $[t_k^y,t_k^y+\tau_y)$, for any $k\geq 0$. Recalling that $\|\hat x_e\|\leq \sigma_u \|\hat x\|+\epsilon_u$ implies \eqref{triggerconu2} and $\|y_e\|\leq \sigma_y \|y\|+\epsilon_y$ implies \eqref{triggercony2}, the triggering rules \eqref{triggeroutput1} and \eqref{triggerinput1} guarantee that \eqref{triggerconu2} holds during the interval $[t_k^u, t_{k+1}^u)$ for any $k\geq 0$, and \eqref{triggercony2} holds during the interval $[t_k^y, t_{k+1}^y)$ for any $k\geq 0$. Hence, \eqref{triggerLayu2} holds for any $t\geq 0$, implying that the function $V$ is an ISpS-Lyapunov function since it satisfies \eqref{ISSineq} with $\gamma(\|z\|)=[(1-\varrho)\alpha_0\lambda_{m}(P)-c]\|z\|^2\in\mathcal{K}_\infty$, $\chi(\|w\|)=\mu\|w\|^2\in\mathcal{K}$ and $d=\epsilon_0^2/4c>0$. The conclusion follows by Proposition \ref{proISS}. \hfill$\Box$
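As a numerical illustration outside the proof, the lower bound $\tau_u$ (and analogously $\tau_y$) can be obtained by integrating the scalar comparison ODE for $\phi_1$ from $0$ until it reaches $1$. A minimal sketch, where the constants standing in for $\sigma_u$, $\eta_2$, and $\eta_5+d_1\eta_4$ are hypothetical placeholders:

```python
import numpy as np
from scipy.integrate import solve_ivp

def inter_event_lower_bound(scale, sigma, c0, c1):
    """Integrate dphi/dt = scale*(1 + sigma*phi)*(c0 + c1*phi)
    from phi(0) = 0 and return the time tau at which phi(tau) = 1."""
    hit_one = lambda t, phi: phi[0] - 1.0
    hit_one.terminal = True      # stop integration at the event
    hit_one.direction = 1        # phi is monotonically increasing
    rhs = lambda t, phi: [scale * (1.0 + sigma * phi[0]) * (c0 + c1 * phi[0])]
    sol = solve_ivp(rhs, (0.0, 1e3), [0.0], events=hit_one, max_step=1e-3)
    return sol.t_events[0][0]

# hypothetical constants: c0 plays the role of eta_5 + d_1*eta_4, c1 of eta_2
tau_u = inter_event_lower_bound(scale=np.sqrt(2), sigma=0.5, c0=1.3, c1=2.0)
```

Since the right-hand side is strictly positive, $\phi_1$ reaches $1$ in finite time, so $\tau_u>0$ is well defined.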
{\bf \emph{Proof of Theorem \ref{corthm4}}.}
As shown in \eqref{eqthm3-2}, the closed-loop system that combines the system \eqref{dyn1}-\eqref{eq:delQC} and the continuous-time controller \eqref{input1} when $E_w=F_w={\bf 0}$ can be expressed as $\dot z=A_cz+H_1p+H_2\delta p+H_3\Delta p$, where $\delta p$ is given in \eqref{delp} and $\Delta p$ is given in \eqref{Deltap}. Since the derivative of $V$ along the trajectory of the closed-loop system \eqref{eqthm3-2} satisfies $\dot V\leq -\alpha_0 V$ when $E_w=F_w={\bf 0}$, the derivative of $V$ along the trajectory of the closed-loop system \eqref{closedtrigger3} satisfies $\dot V\leq -\alpha_0 V+ 2z^\top P\Big[H_2(\delta \tilde p-\delta p)+H_3(\delta\hat p-\Delta p)+H_4w+H_5\hat x_e+H_6y_e\Big]
\leq -(1-\varrho)\alpha_0\lambda_{m}(P)\|z\|^2+\|z\|\Big[2s_1\|\hat x_e\|
+2s_2\|y_e\|+2\|PH_4\|\|w\|-\varrho\alpha_0\lambda_{m}(P)\|z\|\Big]$
where $\varrho$ satisfies $0<\varrho<1$, and $s_1,s_2$ are given in \eqref{s1}. Then, following the proofs of Theorem \ref{corthm3} and Theorem \ref{thm4}, it can be shown that $V$ is an ISpS-Lyapunov function that satisfies \eqref{ISSineq4} for any $t\geq 0$. The details are omitted here due to space limitations. The conclusion can be obtained by Proposition \ref{proISS}. \hfill$\Box$
\bibliographystyle{IEEEtran}
\section{Introduction}
The observed spectrum of a single pixel is determined by illumination, reflectance
and shading.
The shading image encodes the illumination condition and geometry information, while the reflectance image captures the color and material reflectance properties, which are invariant to lighting conditions and shadow effects.
The decomposition problem has long been studied in areas such as computer graphics and computer vision. For instance, shape-from-shading algorithms could benefit from an image with only shading effects, while image segmentation would be easier in a world without cast shadows.
Obviously, intrinsic image decomposition is an ill-posed problem, since there are more unknowns than observations. To address this, many works~\cite{shen2008intrinsic, shen2013intrinsic, chen2017intrinsic} focus on spatially sparse representations, but such sparsity does not hold for images in general. This paper addresses the recovery of reflectance and shading from a single multispectral image, namely, the Intrinsic Image Decomposition problem for a whole multispectral image captured under general spectral illumination, hereafter referred to as the IID problem. This problem is worth exploring since geometry and color information are both useful under certain circumstances, yet each interferes with the detection of the other. Unfortunately, the growing dimensionality of the data makes this problem harder to cope with.
The low rank constraint that we propose is based on the low rank prior, or low rank nature, of both shading and reflectance images. According to the inherent nature of the multispectral image, we derive the shading basis from knowledge of the illumination condition and derive the reflectance subspace bases by means of principal component analysis (PCA) of the Munsell color board. Assuming that Retinex theory continues to hold in the multispectral domain, we propose a low-rank-based model so that deriving reflectance and shading from a multispectral image can be cast as a convex optimization problem. In a significant departure from the conventional approaches which operate in the logarithmic domain, we operate directly in the image domain to avoid introducing additional noise or breaking the low rank structure. The flowchart of our proposed algorithm for LRIID is shown in Fig.~\ref{fig_pipe}.
To address the lack of ground-truth shading and reflectance data, we provide a ground-truth dataset for multispectral intrinsic images, which offers a principled way to judge the quality of decomposition results. Quantitative and qualitative experiments on our dataset demonstrate that our method outperforms prior work in the multispectral domain. Our work can benefit multiple applications, such as recolorization, relighting, scene reconstruction and image segmentation.
Our major contributions can be summarized as follows: (1) we extend the Retinex model to the multispectral image intrinsic decomposition problem, and propose a low rank constraint to handle the ill-posedness of the problem; (2) we provide a ground-truth dataset for multispectral intrinsic images, which can facilitate future evaluation and comparison of multispectral image intrinsic decomposition algorithms; (3) the proposed method achieves promising results on a variety of images both quantitatively and qualitatively.
\begin{figure*}[htbp]
\centering
\subfigure{
\includegraphics[width=0.9\linewidth]{flowchart.pdf}
}
\caption{The flowchart of our proposed LRIID algorithm.}
\label{fig_pipe}
\end{figure*}
\section{Related Work}
\paragraph{Intrinsic Image Decomposition.} The problem of Intrinsic Image Decomposition (IID) was first introduced by Barrow and Tenenbaum~\cite{barrow1978computer}. The reflectance describes the illumination-invariant albedo of the surface, while the shading contains surface geometric information and illumination condition.
Assorted methods take advantage of additional information, including image sequences~\cite{weiss2001deriving, laffont2013rich} and videos~\cite{lee2012estimation}, to avoid shadow effects under poor lighting conditions. With the improvement of sensing devices such as the Kinect, depth cues~\cite{barron2013intrinsic, chen2013simple, lee2012estimation} or surface normals~\cite{newcombe2011kinectfusion} have been applied to strengthen the underlying assumptions. More recently, Bousseau \etal.~\cite{bousseau2009user} proposed a user-assisted method to further improve the separation results.
Many methods with a single input image have also been proposed for the separation task. Bell \etal.~\cite{bell2014intrinsic} developed a dense conditional random field (CRF) based intrinsic image algorithm for images in the wild. Barron \etal.~\cite{barron2015shape} introduced the “shape, illumination and reflectance from shading” (SIRFS) model, which performs well on images of segmented objects. Bi \etal.~\cite{bi20151} proposed an L1 image transform model for scene-level intrinsic decomposition. The entropy method~\cite{finlayson2004intrinsic} raised by Finlayson \etal. offered a new viewpoint on this problem. With the abundance and availability of datasets and the development of computational resources, training-based models~\cite{ bell2001learning,tappen2003recovering,tappen2006estimating,zhou2015learning} have been built to derive reflectance and shading from images.
An especially well-known and widely employed model, Retinex~\cite{land1971lightness}, assumes that large chromatic changes are generally caused by changes in reflectance rather than shading. With Retinex theory, we are able to pinpoint where the reflectance changes in a local area. Horn, and Funt and Drew~\cite{ho1990separating}, analyzed local derivatives to distinguish image variations due to shading from those due to reflectance. However, such local analysis neglects the connection between pixels sharing the same neighborhood. On the basis of Retinex theory, we follow the work of Chen \etal.~\cite{chen2017intrinsic} to handle the intrinsic image decomposition task in the multispectral domain.
\paragraph{Sparse Representation.} Researchers have also extended the trichromatic color constancy model to multispectral images in order to separate reflectance and shading in higher spectral dimensions. Many attempts have been made to explore this area. For example, Ikari \etal.~\cite{ikari2008separating} demonstrated the possibility of separating dozens of multispectral signals. Huynh \etal.~\cite{huynh2010solution} assumed that the scene can be segmented into several homogeneous surface patches, and were able to estimate the illumination and reflectance spectra under the dichromatic reflectance model. In the remote sensing area, Kang \etal.~\cite{kang2015intrinsic} fit multispectral data into the trichromatic model to extract features.
To overcome the drawbacks of ambiguity in local analysis, much research has been done to reduce the ambiguity of both reflectance and shading. Shen \etal.~\cite{shen2008intrinsic} proposed a global optimization algorithm which combines Retinex theory and a non-texture constraint to obtain global consistency of image structures. Shen~\cite{shen2013intrinsic} further applied a sparse representation of reflectance as a global constraint. Material cues~\cite{nadian2016intrinsic} have also been introduced. As for multispectral images, Chen \etal.~\cite{chen2017intrinsic} used super-pixels to cut down the number of unknown parameters in this underdetermined problem. Unlike the approaches above, we assume that both shading and reflectance live in low dimensional subspaces. The low rank nature of the shading space is widely acknowledged and exploited in prior work~\cite{ho1990separating}, and the low-dimensional subspace model of reflectance was introduced by~\cite{maloney1986evaluation, parkkinen1989characteristic, zheng2015illumination}. With the help of training data of reflectance bases and illumination spectra, we can solve this problem effectively.
\paragraph{Dataset.} As for establishing ground truth for intrinsic images,
Tappen \etal.~\cite{tappen2003recovering} created small sets of both computer-generated
and real intrinsic images. The computer-generated images
consisted of shaded ellipsoids with piecewise-constant reflectance. The real images were created using green marker on crumpled paper~\cite{tappen2006estimating}. Grosse \etal.~\cite{grosse2009ground} provided a comprehensive dataset which is widely used in subsequent analyses. Bell \etal.~\cite{bell2014intrinsic} also introduced Intrinsic Images in the Wild, a large-scale, public dataset for evaluating intrinsic image decompositions of indoor scenes. In the multispectral field, \cite{yasuma2010generalized, chakrabarti2011statistics} provided sets of multispectral images of various objects without ground truth, and Chen \etal.~\cite{chen2017intrinsic} built a dataset, but the number, spectral resolution and diversity of its images are limited.
There have been no other attempts to establish ground truth for multispectral intrinsic images.
\section{Our Model}
We assume the object surface is Lambertian and hence exhibits diffuse reflection.
In most prior work on intrinsic image decomposition, the captured luminance
spectrum at every point, $l_p$, is modelled as the product of the Lambertian reflectance spectrum $r_p$ and the shading spectrum $s_p$, where $s_p$ characterizes the combined effect of object geometry, illumination, occlusion and shadowing. Mathematically, this model can be expressed as
\begin{equation}
l_p = s_p.*r_p
\end{equation}
where $l_p$, $r_p$ and $s_p$ are all vectors with dimension equal to the number of spectral bands of the captured image, and “$.*$” denotes element-wise multiplication. The problem is to derive $s_p$ and $r_p$ from the observed multispectral luminance vector $l_p$. In this work, we focus on recovering the reflectance spectrum using this model. Once $r_p$ is determined, the shading image can be derived by point-wise division.
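As a minimal numerical sketch of this pointwise model (the number of bands $K$ and the spectra below are hypothetical), the decomposition ambiguity at a pixel disappears once $r_p$ is known:

```python
import numpy as np

K = 31                           # number of spectral bands (hypothetical)
rng = np.random.default_rng(0)
r_p = rng.uniform(0.1, 1.0, K)   # reflectance spectrum at pixel p
s_p = rng.uniform(0.1, 1.0, K)   # shading spectrum at pixel p

l_p = s_p * r_p                  # observed luminance: l_p = s_p .* r_p

# once r_p is recovered, the shading follows by point-wise division
s_recovered = l_p / r_p
```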
Different from the conventional approaches which operate in the logarithmic domain, we formulate the problem directly in the image domain. This overcomes numerical problems caused by the logarithmic transformation of the image values, where noise in pixels with low intensity values can lead to large variations. Besides, although there is substantial evidence of the low rank nature of the reflectance space, it is not clear whether the logarithmically transformed reflectance space is still low rank, which makes it hard to incorporate the low rank prior in formulations based on log-transformed images.
\subsection{Estimate Reflectance or Shading Independently}
The Retinex model makes the following two important observations:
\begin{enumerate}[1)]
\item When there is significant reflectance change between two adjacent pixels $p$ and $q$, the shading is typically constant. This leads to the relation $l_p./l_q$ = $r_p./r_q$, where “$./$” denotes element-wise division;
\item When the expected reflectance difference between two pixels is small, the recovered reflectance difference between the two pixels should be small.
\end{enumerate}
First we examine whether we can recover reflectance and shading from the measured luminance signal independently. Take reflectance for example. By recognizing two adjacent pixels which have the same shading, the ratio relationship can be written as $l_p.*r_q = l_q.*r_p$, or $L_pr_q = L_qr_p$, where $L_p$ is a diagonal matrix consisting of the spectral elements of $l_p$. We formulate the energy functions in terms of the reflectance vectors directly:
\begin{equation}
\begin{array}{c}
{Esc = \sum\limits_{p, q \in \mathcal{N}_{sc}}\lVert w_{p,q}(L_pr_q-L_qr_p) \rVert^d} \\
\\
{Erc = \lVert v_{p,q}(r_p-r_q) \rVert^d}
\end{array}
\label{EscErc}
\end{equation}
where $\mathcal{N}_{sc}$ and $\mathcal{N}_{rc}$ denote neighborhood pair sets and $w_{p, q}$ and $v_{p, q}$ denote weights. $w_{p, q}$ is large but $v_{p, q}$ is small when the expected reflectance difference between two adjacent pixels $p$ and $q$ is large, and vice versa. To make the formulation general, we use $d$ to indicate the error norm, with $d = 2$ for the L2 norm and $d = 1$ for the L1 norm. The L1 norm is more difficult to solve, but the solution can be more robust to outliers.
If we directly solve for $r_p$, the above energy function can be written as the sum of $K$ terms, one for each spectral component, and each term can be minimized separately. With a little exercise, it can be shown that the minimum is achieved exactly when $r_p = l_p$. This is due to the inherent ambiguity of the problem when no other constraints are imposed on $r_p$. We reduce the ambiguity by exploiting the fact that the reflectance spectra of typical object surfaces live in a low dimensional subspace of $R^K$, so that any reflectance vector can be written as a linear combination of $J_r$ basis vectors, with $J_r < K$.
Let $B_r$ represent the $K \times J_r$ basis matrix for representing the reflectance vector; then $r_p$ can be written as $r_p = B_r\widetilde{r}_p$. The energy function in Eq.(\ref{EscErc}) now becomes:
\begin{equation}
\begin{array}{c}
{Esc = \sum\limits_{p, q \in \mathcal{N}_{sc}}\lVert w_{p,q}(L_pB_r\widetilde{r}_q-L_qB_r\widetilde{r}_p) \rVert^d} \\
\\
{Erc = \lVert v_{p,q}(B_r\widetilde{r}_p-B_r\widetilde{r}_q) \rVert^d}
\end{array}
\label{sparseEscErc}
\end{equation}
The combined energy can be represented in a matrix form as:
\begin{equation}
E = \lVert W_{L, B_r}\widetilde{R} \rVert^d + \lambda_1 \lVert V_{B_r}\widetilde{R} \rVert^d
\end{equation}
where $\widetilde{R}$ consists of $\widetilde{r}_p$ for all pixels stacked into a single vector. The matrix $W_{L, B_r}$ depends on the neighborhood $\mathcal{N}_{sc}$
considered, the weights $w_{p, q}$, the reflectance basis $B_r$ used, and, importantly, the luminance data $l_p$; whereas the matrix $V_{B_r}$ depends on the neighborhood $\mathcal{N}_{rc}$ considered, the weights $v_{p,q}$ and the reflectance basis
$B_r$ used. Therefore, with this non-logarithmic formulation, we encode the constraint due to the measured luminance data in the matrix $W_{L, B_r}$.
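To make the structure of $W_{L, B_r}$ concrete, a minimal sketch of the block of rows contributed by a single neighbor pair $(p,q)$ follows (sizes and data are hypothetical; the full matrix stacks one such block per pair):

```python
import numpy as np

K, J_r = 31, 8                      # bands, reflectance subspace dim (hypothetical)
rng = np.random.default_rng(1)
B_r = rng.normal(size=(K, J_r))     # reflectance basis, e.g. from PCA

def pair_block(l_p, l_q, w_pq, B_r):
    """K x 2J_r block of W_{L,B_r} for pair (p, q): applied to the stacked
    coefficients [r~_p; r~_q] it yields w_pq * (L_p B_r r~_q - L_q B_r r~_p)."""
    L_p, L_q = np.diag(l_p), np.diag(l_q)
    return w_pq * np.hstack([-L_q @ B_r, L_p @ B_r])

# the residual vanishes when both pixels share the same shading spectrum s
r_p, r_q = rng.normal(size=J_r), rng.normal(size=J_r)
s = rng.uniform(0.1, 1.0, K)
block = pair_block(s * (B_r @ r_p), s * (B_r @ r_q), 1.0, B_r)
residual = block @ np.concatenate([r_p, r_q])
```

The final check reflects the constant-shading relation: when $l_p = s.*r_p$ and $l_q = s.*r_q$, the term $L_p B_r\widetilde{r}_q - L_q B_r\widetilde{r}_p$ is exactly zero.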
The ambiguity with respect to a scaling factor is inherent in all intrinsic image decomposition problems, since only the product of reflectance and shading is known. To circumvent this ambiguity, we also explored another solution, where we express a generic constraint on the coefficient sum as $M\widetilde{R} = C$ and augment the original energy function to enforce this constraint:
\begin{equation}
\begin{aligned}
E_{\text{refl}} = \lVert W_{L, B_r}\widetilde{R} \rVert^d + \lambda_1 \lVert V_{B_r}\widetilde{R} \rVert^d + \lambda_2 \lVert M_r\widetilde{R}-C \rVert^d
\end{aligned}
\label{E_ref}
\end{equation}
Similarly, the low rank nature of the shading space is also widely acknowledged and exploited in prior work~\cite{ho1990separating}. Shading is inherently low rank, because there are usually only a few lighting sources with different illumination spectra acting in each captured scene, and the shading effect due to geometry and shadowing only modifies the spectra by a location-dependent scalar. If there is a single illumination source and its spectrum is known or can be identified by the method of~\cite{zheng2015illumination}, we use this spectrum (after normalization) as the only shading basis vector ($J_s = 1$ and $B_s$ equals this normalized spectrum). Likewise, the problem can be formulated as minimizing
\begin{equation}
\small
\begin{aligned}
E_{\text{shad}}
& = \lVert W_{B_s}\widetilde{S} \rVert^d + \lambda_1 \lVert V_{L, B_s}\widetilde{S} \rVert^d + \lambda_2 \lVert M_s\widetilde{S}-C \rVert^d
\end{aligned}
\label{E_shad}
\end{equation}
\subsection{Simultaneous Recovery of Reflectance and Shading}
Based on the formulations that solve reflectance and shading separately, we propose an optimization algorithm that solves for both shading and reflectance simultaneously. We assume that the low rank subspaces of the shading and reflectance are known, represented
by basis matrices $B_s$ and $B_r$, respectively, so that $s_p = B_s\widetilde{s}_p$ and $r_p = B_r\widetilde{r}_p$. We use $\widetilde{S}$ to denote the
long vector consisting of the shading coefficient vectors $\widetilde{s}_p$ at all pixels, and $\widetilde{R}$ the long vector consisting of
the reflectance coefficient vectors $\widetilde{r}_p$. We propose to solve for $\widetilde{s}_p$ and $\widetilde{r}_p$, or equivalently $\widetilde{S}$ and $\widetilde{R}$, by minimizing a weighted average of the following energy terms.
When the shading is expected to be similar at pixels $p$ and $q$, we have $s_p \approx s_q $ and $l_p.*r_q \approx l_q.*r_p$, or $L_pr_q \approx L_qr_p$, where $L_p$ is a diagonal matrix consisting of the spectral elements of $l_p$. We formulate the energy functions directly:
\begin{equation}
\begin{aligned}
E_{sc} &= \sum\limits_{p, q \in \mathcal{N}_{sc}}\lVert w_{p,q}(L_pr_q-L_qr_p) \rVert^d + \lVert w_{p,q}(s_p-s_q) \rVert^d \\
&=\lVert W_{L, B_r} \widetilde{R} \rVert^d + \lVert W_{B_s} \widetilde{S} \rVert^d
\end{aligned}
\end{equation}
When the reflectance is expected to be similar at pixels $p$ and $q$, we have $r_p \approx r_q $ and $l_p.*s_q \approx l_q.*s_p$, leading to a regularization energy
\begin{equation}
\begin{aligned}
E_{rc} &= \sum_{p, q \in \mathcal{N}_{rc}}\lVert v_{p,q}(L_ps_q-L_qs_p) \rVert^d + \lVert v_{p,q}(r_p-r_q) \rVert^d\\
&= \lVert V_{L, B_s} \widetilde{S} \rVert^d + \lVert V_{B_r} \widetilde{R} \rVert^d
\end{aligned}
\end{equation}
The inherent data constraint $l_p = s_p.*r_p$ leads to another energy function:
\begin{equation}
\begin{aligned}
E_{\text{data}}&= \sum_{p}\lVert s_p.*r_p- l_p\rVert^d = \lVert Q_{\widetilde{S}, B_s, B_r} \widetilde{R} - L \rVert^d \\
&= \lVert Q_{\widetilde{R}, B_r, B_s} \widetilde{S} - L \rVert^d
\end{aligned}
\label{data_cons}
\end{equation}
where $Q_{\widetilde{S}, B_s, B_r}$ is a block diagonal matrix that depends on the current solution for $\widetilde{S}$ and the basis matrices $B_s$ and $B_r$ (likewise for $Q_{\widetilde{R}, B_r, B_s}$).
The problem is to find $\widetilde{S}$ and $\widetilde{R}$
that minimizes a weighted average of the three energy functions:
\begin{equation}
\small
\begin{aligned}
E &= E_{sc}+\lambda_{1}E_{rc} + 2\lambda_{\text{data}} E_{\text{data}} \\
& = \lVert W_{L, B_r} \widetilde{R} \rVert^d + \lVert W_{B_s} \widetilde{S} \rVert^d +\lambda_{1} \left(\lVert V_{L, B_s} \widetilde{S} \rVert^d + \lVert V_{B_r} \widetilde{R}\rVert^d \right) \\
&+ \lambda_{\text{data}} \lVert Q_{\widetilde{S}, B_s, B_r} \widetilde{R} - L \rVert^d + \lambda_{\text{data}} \lVert Q_{\widetilde{R}, B_r, B_s} \widetilde{S} - L \rVert^d\\
\end{aligned}
\label{Etotal}
\end{equation}
Directly solving the above problem for $s_p$ and $r_p$ simultaneously is hard because of the bilinear nature of the data term. We apply an iterative solution, where we solve for $\widetilde{R}$ and $\widetilde{S}$ alternately using alternating projection. As the dimension of the shading subspace is likely to be smaller than the dimension of the reflectance subspace, we solve for the shading $\widetilde{S}$ first. Also, there
are typically more subregions in an image with similar reflectance, where it is easier to use the constant-reflectance constraint to resolve the ambiguity about shading.
The remaining issue is that we need initial estimates of $\widetilde{S}$ and $\widetilde{R}$ in order to effectuate the data constraint in Eq.(\ref{data_cons}). With the generic constraint in Section 3.1, Eq.(\ref{E_ref}) and Eq.(\ref{E_shad}), we first obtain an initial estimate of $\widetilde{S}$, then update $\widetilde{R}$ with the reflectance basis matrix, and finally estimate $\widetilde{R}$ and $\widetilde{S}$ alternately using the data constraint in Eq.(\ref{data_cons}).
More specifically, the whole recovery algorithm is summarized in Algorithm~\ref{algo}:
\begin{algorithm}[]
\label{algo}
\caption{LRIID algorithm}
\setcounter{AlgoLine}{0}
\textbf{Step 1}: Assign constant-shading weights $w_{p,q}$ and constant-reflectance weights $v_{p,q}$.\\
\textbf{Step 2}: Solve an initial low rank estimate of the shading image $\widetilde{S}$ using a generic constraint.\\
\textbf{Step 3}: Solve an initial low rank reflectance estimate $\widetilde{R}$ using a generic constraint and the data constraint defined by the previous shading estimate.\\
\Repeat{until the solution for $\widetilde{S}$ and $\widetilde{R}$ converge}{
\textbf{Step 4}: Solve $\widetilde{S}$ using the data constraint defined by the previous reflectance estimate.\\
\textbf{Step 5}: Solve $\widetilde{R}$ using the data constraint defined by the previous shading estimate.
}
\textbf{Step 6}: Reconstruct $S$ and $R$ to get the refined shading and reflectance.
\end{algorithm}
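The alternating scheme in Steps 4-5 can be sketched minimally by keeping only the bilinear data term $\lVert s_p.*r_p - l_p\rVert^2$ (the regularizers $W$, $V$ and the generic constraint are omitted for brevity; all sizes and data below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
K, J_s, J_r, N = 31, 1, 8, 40                 # bands, subspace dims, pixels
B_s = np.abs(rng.normal(size=(K, J_s)))       # shading basis (illumination spectrum)
B_r = rng.normal(size=(K, J_r))               # reflectance basis

# synthesize observations l_p = s_p .* r_p from ground-truth coefficients
S_true = np.abs(rng.normal(size=(J_s, N)))
R_true = rng.normal(size=(J_r, N))
L = (B_s @ S_true) * (B_r @ R_true)

S_hat = np.ones((J_s, N))                     # initial shading estimate
R_hat = np.zeros((J_r, N))
for _ in range(10):                           # alternating least squares
    for p in range(N):                        # fix shading, solve reflectance
        A = np.diag(B_s @ S_hat[:, p]) @ B_r
        R_hat[:, p] = np.linalg.lstsq(A, L[:, p], rcond=None)[0]
    for p in range(N):                        # fix reflectance, solve shading
        A = np.diag(B_r @ R_hat[:, p]) @ B_s
        S_hat[:, p] = np.linalg.lstsq(A, L[:, p], rcond=None)[0]

data_residual = np.linalg.norm((B_s @ S_hat) * (B_r @ R_hat) - L)
```

Note that the per-pixel scaling ambiguity is absorbed between $\widetilde{S}$ and $\widetilde{R}$ here; in the full method it is resolved by the generic constraint and the smoothness regularizers.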
\section{Details}
\subsection{Weight Choice}
Images captured under poor lighting conditions may contain shadow areas, which in turn introduce spurious edges that confuse the algorithm. Various methods have been used to determine the weights $w_{p, q}$ and $v_{p, q}$, including pixel gradients~\cite{funt1992recovering, kimmel2003variational, land1971lightness}, hue~\cite{zhao2012closed}, correlation between vectors~\cite{jiang2010correlation} and learning~\cite{tappen2003recovering}. We propose an illumination-robust and computationally efficient distance, the normalized cosine distance, to quantify the difference between the spectra of pixels in a neighborhood. This distance can be formulated as
\begin{equation}
d_{p,q \in \mathcal{N}_{sc}} = 1-\frac{l_p' l_q}{\lvert l_p \rvert \cdot \lvert l_q \rvert}
\end{equation}
$d_{p, q}$ approaches 0 when pixels $p$ and $q$ have the same spectrum, and departs from 0 when the spectra of $p$ and $q$ differ. To derive the weight $w_{p, q}$, we further magnify the difference between homogeneous and heterogeneous pixels and make it more robust to noise. In our implementation,
\begin{equation}
\begin{array}{ccc}
{w_{p, q} = \frac{1}{1+e^{\alpha(d_{p, q}-\beta)}}} & &
{v_{p, q} = 1-w_{p, q}}
\end{array}
\end{equation}
$\alpha$ and $\beta$ are parameters of the sigmoid function. To set $\alpha$ and
$\beta$, we sample 20 values of $\alpha$ within [1000, 10000] and 50 values of $\beta$ within [$10^{-5}$, $10^{-2}$] and choose the pair that performs best.
Fig.~\ref{fig_weight} shows different separation results as $\beta$ changes, with the local mean squared error (LMSE) reported for three settings. If $\beta$ is too small, the shading tends to be blurred; if $\beta$ is too large, the reflectance is blurred.
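A sketch of the weight computation for one pixel pair; the default $\alpha$ and $\beta$ below are the grid-searched values reported above:

```python
import numpy as np

def cosine_distance(l_p, l_q):
    """Normalized cosine distance between two pixel spectra."""
    return 1.0 - np.dot(l_p, l_q) / (np.linalg.norm(l_p) * np.linalg.norm(l_q))

def pair_weights(l_p, l_q, alpha=5000.0, beta=0.0032):
    """Sigmoid weighting of the distance: returns (w_pq, v_pq), v_pq = 1 - w_pq."""
    d = cosine_distance(l_p, l_q)
    w = 1.0 / (1.0 + np.exp(alpha * (d - beta)))
    return w, 1.0 - w
```

Because the cosine distance depends only on spectral shape, two pixels with proportional spectra (same reflectance under different shading intensity) have $d_{p,q}=0$.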
\begin{figure*}[htbp]
\centering \subfigure[LMSE = 0.050, $\alpha = 1000, \ \beta = 0.01$]{
\begin{minipage}{5.5cm}
\centering \includegraphics[width=1\linewidth]{aa.pdf}
\end{minipage}}
\subfigure[LMSE = 0.026, $\alpha = 5000, \ \beta = 0.0032$]{
\begin{minipage}{5.5cm}
\centering \includegraphics[width=1\linewidth]{bb.pdf}
\end{minipage}}
\subfigure[LMSE = 0.031, $\alpha = 8000, \ \beta = 7.9e-4$]{
\begin{minipage}{5.5cm}
\centering \includegraphics[width=1\linewidth]{cc.pdf}
\end{minipage}}
\caption{Results for different thresholds $\alpha$ and $\beta$. (b) achieves a good result, while shading and reflectance clearly overlap in (a) and (c).}
\label{fig_weight}
\end{figure*}
\subsection{Low Rank Constraint}
An important step in formulating the low rank constraint is to derive the low rank bases. With the help of multispectral imaging systems such as PMIS~\cite{cao2011prism} and CASSI~\cite{arce2014compressive}, we can obtain ground-truth illumination spectra. There is also plenty of work on extracting the illumination from the image itself; for example, \cite{zheng2015illumination} can be applied in the multispectral domain and performs well in practice. In order not to complicate our method, we simplify the process of obtaining the normalized illumination spectrum $B_s$ by using the ground-truth illumination data.
As for the low rank basis of reflectance, the authors of~\cite{maloney1986evaluation,parkkinen1989characteristic} found $J_r$ to be around 8 to reach the best trade-off between expressive power and noise resistance when fitting reflectance spectra. We set $J_r = 8$, use the matrix introduced by~\cite{maloney1986evaluation}, and perform principal component analysis (PCA) to derive $B_r$ from it.
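A sketch of deriving $B_r$ via PCA; the random matrix below is only a hypothetical stand-in for the measured Munsell reflectance spectra:

```python
import numpy as np

def reflectance_basis(samples, J_r=8):
    """PCA of an (n_samples x K) matrix of reflectance spectra;
    returns the K x J_r matrix B_r of leading principal directions."""
    X = samples - samples.mean(axis=0)                 # center the spectra
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # rows of Vt: principal axes
    return Vt[:J_r].T

rng = np.random.default_rng(3)
munsell_stand_in = rng.uniform(0.0, 1.0, size=(1269, 31))  # hypothetical data
B_r = reflectance_basis(munsell_stand_in)                   # shape (31, 8)
```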
\subsection{Initial Estimation}
In our implementation, only horizontally and vertically adjacent pixels are considered in the neighborhoods $N_{sc}$ and $N_{rc}$. Suppose there are $N$ pixels in the image. The sizes of the matrices $W_{L, B_r}$, $V_{B_r}$ and $\widetilde{R}$ in Eq.(\ref{sparseEscErc}) are $4NK \times NJ_r$, $4NK \times NJ_r$ and $NJ_r \times 1$, respectively. Similarly, the sizes of the matrices $W_{B_s}$, $V_{L, B_s}$ and $\widetilde{S}$ in Eq.(\ref{E_shad}) are $4NK \times NJ_s$, $4NK \times NJ_s$ and $NJ_s \times 1$, respectively.
To avoid ambiguity, we further require that the shading image deviate little from the input image. In Eq.(\ref{E_ref}), we let $M_r$ be an identity matrix and $C$ be a long vector concatenating all the pixels of the original image.
We use the L2 norm for all terms, so that solving Eq.(\ref{E_ref}) or Eq.(\ref{E_shad}) is a quadratic programming problem that can be solved efficiently using the conjugate gradient method. In Algorithm \ref{algo}, the solution to the unconstrained optimization problem in Step 2 satisfies the following linear equation
\begin{equation}
\footnotesize
\begin{aligned}
\lambda_2 M_s^T C = \left( W_{B_s}^T W_{B_s} + \lambda_1 V_{L, B_s}^T V_{L, B_s} + \lambda_2 M_s^T M_s \right)\widetilde{S} = Q_s\widetilde{S}
\end{aligned}
\end{equation}
In Step 3, a data constraint defined by the shading estimate needs to be added to Eq.(\ref{E_ref}), so that the linear equation can be written as
\begin{equation}
\footnotesize
\begin{aligned}
&\lambda_{\text{data}} Q_{\widetilde{S}}^T L + \lambda_2 M_r^T C = \\
& \left( W_{L, B_r}^T W_{L, B_r} + \lambda_1 V_{B_r}^T V_{B_r} + \lambda_2 M_r^T M_r +\lambda_{\text{data}} Q_{\widetilde{S}}^T Q_{\widetilde{S}}\right)\widetilde{R} = Q_r\widetilde{R}
\end{aligned}
\end{equation}
Because the matrices $Q_r$ and $Q_s$ are self-adjoint and sparse, we can solve these equations iteratively, and the iterations typically converge very fast. Here, $\lambda_1$ and $\lambda_2$ are positive weights for combining the different objective terms. In our implementation, we set $\lambda_1 = 2$, $\lambda_2 = 0.01$ and $\lambda_{\text{data}} = 1$ empirically.
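A sketch of the sparse self-adjoint solve; the matrix below is a hypothetical stand-in for $Q_s$ or $Q_r$, built to be symmetric positive-definite:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

rng = np.random.default_rng(4)
n = 500
A = sp.random(n, n, density=0.01, random_state=4)
Q = (A.T @ A + sp.identity(n)).tocsr()    # sparse, symmetric positive-definite
b = rng.normal(size=n)

x, info = cg(Q, b)                        # conjugate gradient; info == 0 on success
```

The conjugate gradient method only requires sparse matrix-vector products with $Q$, which is what makes the large $NJ_r \times NJ_r$ systems above tractable.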
\subsection{Iteration Performance}
We use alternating projection to obtain the refined shading and reflectance. As stated in Steps 4 and 5 of Algorithm \ref{algo}, we update the shading first and then the reflectance. In each round of iteration, the gradient descent method is applied. The minimization of Eq.~(\ref{Etotal}) terminates when it reaches the maximum number of iterations (1000) or when $\nabla E < 0.01$.
Fig.~\ref{iter_curve} demonstrates the iteration performance of our algorithm. Both the cost function and the LMSE decrease over the iterations. Before the iterations, some shadows that should belong to the shading image remain in the reflectance, while the brightness of the reflectance image becomes more uniform after the iterations.
\begin{figure}[htbp]
\includegraphics[width=1\linewidth]
{curve3.pdf}
\caption{Iteration performance. The algorithm converges within 50 iterations. The two images show the reflectance generated before and after the iterations.}
\label{iter_curve}
\end{figure}
\iffalse
\begin{figure}[htbp]
\centering
\subfigure[Before Iteration]{
\begin{minipage}{3.5cm}
\centering \includegraphics[width=1\linewidth]{before.pdf}
\end{minipage}}
\subfigure[After Iteration]{
\begin{minipage}{3.5cm}
\centering \includegraphics[width=1\linewidth]{after.pdf}
\end{minipage}}
\caption{Reflectance of BOX}
\end{figure}
\fi
\section{Experimental Results}
In this section, we provide extensive experimental validation of the proposed method. For better visualization, we show the results in pseudo-RGB and linearly normalize the images to the range [0, 1]. We first show the performance of our algorithm, followed by results of our method on an online dataset. Finally, we test on our dataset with ground truth and compare our method with~\cite{chen2017intrinsic}.
\subsection{Experiments on the proposed dataset}
We provide a benchmark dataset with ground truth for the performance evaluation of the multispectral image intrinsic decomposition problem. Following~\cite{grosse2009ground}, we use the local mean squared error (LMSE) from the ground truth to measure the shading and reflectance images. We also compare with~\cite{chen2017intrinsic}, which proposed an intrinsic image decomposition algorithm in the multispectral domain.
A benchmark dataset with ground-truth illumination, shading, reflectance and specularity was presented in~\cite{chen2017intrinsic} for the performance evaluation of multispectral image intrinsic decomposition. Inspired by their work, we build a new multispectral intrinsic ground-truth dataset of 12 scenes captured under the same environmental conditions. We use an updated mobile multispectral imaging camera to acquire the scenes, which provides higher spectral resolution, ranging from 450nm to 700nm with 118 spectral channels. Compared with the dataset of~\cite{chen2017intrinsic}, ours is similar in construction but richer in diversity, with more detailed and bumpy scenarios, which makes the dataset useful for further applications in other vision research (e.g. IRSS, segmentation and recognition).
Here, we evaluate our algorithm on the proposed dataset and use the LMSE from the ground truth to validate it quantitatively. Compared with the ground truth, the decomposition results we achieve are desirable both in terms of the LMSE score (0.018 on average over the entire dataset) and the visual quality of the decomposed reflectance and shading results.
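The LMSE of~\cite{grosse2009ground} can be sketched as a windowed, locally scale-invariant squared error; the window size, stride and normalization below are simplified assumptions rather than the exact published protocol:

```python
import numpy as np

def lmse(pred, gt, win=20, step=10):
    """Local mean squared error (simplified sketch). For each
    overlapping window, the prediction is rescaled by the least-squares
    factor a = <p, g>/<p, p> before comparing with the ground truth,
    so the metric is invariant to local scaling."""
    h, w = gt.shape[:2]
    err, norm = 0.0, 0.0
    for i in range(0, h - win + 1, step):
        for j in range(0, w - win + 1, step):
            p = pred[i:i + win, j:j + win].ravel()
            g = gt[i:i + win, j:j + win].ravel()
            a = p.dot(g) / max(p.dot(p), 1e-12)   # best local scale
            err += np.sum((a * p - g) ** 2)
            norm += np.sum(g ** 2)
    return err / max(norm, 1e-12)

# Predictions correct up to a local scale give (near-)zero error.
gt = np.random.default_rng(0).random((40, 40))
err_scaled = lmse(0.5 * gt, gt)
```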
We display 4 examples from our dataset. For all the input diffuse images, which contain multispectral data free of illumination effects, the corresponding visualized RGB images of reflectance and shading are listed in Fig.~\ref{fig_by_our} together with the LMSE results. Our method clearly produces visually pleasing decompositions of multispectral images. The generic constraint in~\cite{chen2017intrinsic} forces the sum of a group of pixels to be a certain constant $C$, while in our method each pixel of the shading image approximates the pixel value of the original image at the same position. Our generic constraint matrix is therefore sparser than that of~\cite{chen2017intrinsic}, so our algorithm converges more quickly. We emphasize that all image processing and LMSE computations were done on 30 spectral channels down-sampled from the 118 (because~\cite{chen2017intrinsic} failed to process more bands within our 32 GB of memory), but all multispectral images shown above are visualized using the corresponding synthesized RGB data.
\begin{figure}[htbp]
\centering
\subfigure{
\begin{minipage}{8cm}
\centering
\includegraphics[width=1\linewidth]{visual2.pdf}
\end{minipage}}
\caption{Results on sample images from the benchmark dataset. We show only the reflectance and shading color images synthesized from the spectral data of 4 examples.}
\label{fig_by_our}
\end{figure}
\begin{table}[htbp]
\caption{Performance statistics on the dataset images}
\centering
\begin{tabular}{c|c|c|c|c}
\hline
\hline
& \multicolumn{2}{c|}{Time(s)} & \multicolumn{2}{c}{LMSE} \\
\hline & SIID & LRIID & SIID & LRIID \\
\hline
box & 9.221 & 1.351 & 0.032 & \textbf{0.023} \\
cup & 7.610 & 0.675 & 0.016 & \textbf{0.012} \\
car1 & 14.771 & 0.876 & 0.025 & \textbf{0.012} \\
bottle1 & 11.275 & 0.666 & 0.062 & \textbf{0.031} \\
bottle2 & 8.342 & 2.709 & \textbf{0.005} & 0.008 \\
bottle3 & 7.993 & 2.337 & \textbf{0.009} & \textbf{0.009} \\
bus & 13.667 & 2.669 & \textbf{0.030} & 0.031 \\
car2 & 7.643 & 1.807 & 0.030 & \textbf{0.024} \\
dinosaur & 13.465 & 2.306 & \textbf{0.021} & 0.023 \\
minion & 17.794 & 2.517 & 0.020 & \textbf{0.018} \\
plane & 10.763 & 2.125 & 0.024 & \textbf{0.015} \\
train & 9.108 & 0.832 & 0.017 & \textbf{0.015} \\
\hline
Avg. & 10.971 & \textbf{1.739} & 0.024 & \textbf{0.018} \\
\hline
\hline
\end{tabular}%
\label{tab:1}%
\end{table}%
We display the computation time for the examples in Fig.~\ref{fig_by_our}. Here SIID denotes the spectral intrinsic image decomposition of the latest Retinex-based method in~\cite{chen2017intrinsic}, and LRIID denotes the low-rank multispectral image intrinsic decomposition of our algorithm. We compare the computation results obtained with and without the low-rank constraint. The low-rank constraint improves both the computation time and the decomposition results.
In Table \ref{tab:1}, we report the performance statistics for the dataset images; the last row gives the averages over the ground-truth dataset. On average, SIID requires almost 11 seconds to process a multispectral image and produces results with an LMSE of 0.024, while our LRIID takes 1.739 seconds and produces results with an LMSE of 0.018. Although the average LMSE improves only slightly, from 0.024 to 0.018, the computation is substantially more efficient than that of SIID, with a running time approaching that of the RGB case in~\cite{shen2008intrinsic}. Moreover, our method is memory-friendly and able to process larger images with more spectral bands.
For the reflectance, which is not visualized from the spectral perspective, we compare the spectral curves of the reflectance from the ground truth and from our algorithm on patches chosen in several scenes of our ground truth. As Fig.~\ref{fig_curve} shows, our reflectance matches the ground truth well, meaning that we obtain accurate spectral reflectance with better computational performance.
\begin{figure}[htbp]
\centering
\subfigure[GT]{
\begin{minipage}{2.1cm}
\centering
\includegraphics[width=1\linewidth]{box_gt_2.pdf}
\end{minipage}}
\subfigure[Ours]{
\begin{minipage}{2.3cm}
\centering
\includegraphics[width=1\linewidth]{box_ours.pdf}
\end{minipage}}
\subfigure[Spectra Curve]{
\begin{minipage}{2.9cm}
\centering
\includegraphics[width=1\linewidth]{box_curve.pdf}
\end{minipage}}
\subfigure[GT]{
\begin{minipage}{2.25cm}
\centering
\includegraphics[width=1\linewidth]{cup_gt.pdf}
\end{minipage}}
\subfigure[Ours]{
\begin{minipage}{2.25cm}
\centering
\includegraphics[width=1\linewidth]{cup_ours.pdf}
\end{minipage}}
\subfigure[Spectra Curve]{
\begin{minipage}{2.6cm}
\centering
\includegraphics[width=1\linewidth]{cup_curve.pdf}
\end{minipage}}
\caption{(a) and (d) are ground truth from our dataset, (b) and (e) are the reflectance images of our results, and (c) and (f) are the spectral curves of the marked areas (solid red: ours; dotted black: ground truth).}
\label{fig_curve}
\end{figure}
\begin{figure*}[!ht]
\centering
\subfigure{
\begin{minipage}{17cm}
\centering
\includegraphics[width=1\linewidth]{nayar.pdf}
\end{minipage}}
\caption{Comparison of decomposition without and with the low-rank constraint. (a) A single image for five different scenarios in Nayar dataset~\cite{yasuma2010generalized}. (b)-(c) and (d)-(e) show the reflectance and shading components computed by our LRIID solution without and with low-rank constraint, respectively. Note that both the original multispectral images (a) and multispectral reflectance (d) are integrated into 3-channel images by using the response curves of RGB sensors for visualization.}
\label{fig_nayar}
\end{figure*}
\subsection{Experiments on the Nayar multispectral image database~\cite{yasuma2010generalized}}
A great variety of problems assume a low-dimensional subspace structure and have been solved by adding a low-rank constraint, and so does our method. To demonstrate the benefits of this constraint, we compare results with and without it in Fig.~\ref{fig_nayar} using the Nayar Multispectral Image Database~\cite{yasuma2010generalized}. The original images are shown in Fig.~\ref{fig_nayar}(a); the decomposed reflectance and shading without and with the low-rank constraint are shown in Fig.~\ref{fig_nayar}(b)-(e), respectively. The comparison shows that the low-rank constraint helps to maintain global structures and improves the decomposition results.
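A low-rank constraint of this kind is commonly enforced by projecting the pixels-by-bands reflectance matrix onto the set of rank-$k$ matrices via a truncated SVD (the Eckart--Young theorem); the rank and matrix sizes below are illustrative assumptions, not the exact configuration of our solver:

```python
import numpy as np

def project_low_rank(M, k):
    """Project M onto the closest rank-k matrix in Frobenius norm
    (Eckart--Young) by truncating its SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[k:] = 0.0                        # keep only the top-k singular values
    return (U * s) @ Vt

# A 100-pixel x 30-band matrix built from 3 spectral basis vectors,
# perturbed by small noise, is recovered by a rank-3 projection.
rng = np.random.default_rng(0)
R = rng.random((100, 3)) @ rng.random((3, 30))
R_proj = project_low_rank(R + 1e-3 * rng.standard_normal(R.shape), 3)
```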
\section{Conclusion}
We have addressed the recovery of reflectance and shading from a single multispectral image captured under general spectral illumination. We applied a low-rank constraint to the multispectral image intrinsic decomposition problem, which significantly reduces the ambiguity. Gradient descent gives the initial estimates of reflectance and shading, and the alternating projection method solves the bilinear problem. Experiments on our dataset demonstrate that our method outperforms prior work in the multispectral domain.
Our work leaves out depth information. In fact, Retinex theory fails when both shading and reflectance change extensively in a local area. Shading depends on the object surface geometry, which can be derived from depth information. In future work, we hope to make more accurate hypotheses about shading variation using depth and surface normal information.
{\small
\bibliographystyle{ieee}
\section{Introduction}
Short-lived radioisotopes (SLRs) -- ${}^{10}\mbox{Be}$, ${}^{26}\mbox{Al}$, ${}^{36}\mbox{Cl}$, ${}^{41}\mbox{Ca}$, ${}^{53}\mbox{Mn}$, ${}^{60}\mbox{Fe}$, ${}^{107}\mbox{Pd}$, ${}^{129}\mbox{I}$, ${}^{182}\mbox{Hf}$ and ${}^{244}\mbox{Pu}$ -- are radioactive elements with half-lives ranging from 0.1 Myr to more than 15 Myr that existed in the early Solar system (e.g., \citealt{Adams2010}). They were incorporated into meteorites' primitive components such as calcium-aluminum-rich inclusions (CAIs), which are the oldest solids in the Solar protoplanetary disc, or chondrules, which formed $\sim$ 1 Myr after CAI formation. The radioactive decay of these SLRs fundamentally shaped the thermal history and interior structure of planetesimals in the early Solar system, and thus is of central importance for core-accretion planet formation models. The SLRs, particularly ${}^{26}\mbox{Al}$, were the main heating sources for the earliest planetesimals and planetary embryos from which terrestrial planets formed \citep{GrimmMcSween1993, JohansenEtAl2015}, and are responsible for the differentiation of the parent bodies of magmatic meteorites in the first few Myrs of the Solar system \citep{GreenwoodEtAl2005, ScherstenEtAl2006, SahijpalSoniGupta2007}. The SLRs are, moreover, potential high-precision and high-resolution chronometers for the formation events of our Solar system due to their short half-lives \citep{KitaEtAl2005, KrotEtAl2008, AmelinEtAl2010, BouvierWadhwa2010, ConnellyEtAl2012}.
Detailed analyses of meteorites show that the early Solar system contained significant quantities of SLRs. The presence of ${}^{26}\mbox{Al}$ in the early Solar system was first identified in CAIs from the primitive meteorite Allende in 1976, defining a canonical initial $^{26}\mbox{Al} / {}^{27}\mbox{Al}$ ratio of $\sim 5 \times 10^{-5}$ \citep{LeeEtAl1976, LeeEtAl1977, JacobsenEtAl2008}, far higher than the ratio of $^{26}\mbox{Al} / {}^{27}\mbox{Al}$ in the interstellar medium (ISM) as estimated from continuous galactic nucleosynthesis models \citep{MeyerClayton2000} and $\gamma$-ray observations measuring the in-situ decay of ${}^{26}\mbox{Al}$ \citep{DiehlEtAl2006}.
Compared to $^{26}\mbox{Al} / {}^{27}\mbox{Al}$, the initial ratio of $^{60}\mbox{Fe} / {}^{56}\mbox{Fe}$ is still somewhat uncertain; analyses of bulk samples of different meteorite types produced a low initial ratio of $^{60}\mbox{Fe} / {}^{56}\mbox{Fe}$ $\sim 1.15 \times 10^{-8}$ \citep{TangDauphas2012, TangDauphas2015}, while other studies of chondrules using in situ measurements found initial ratios of $^{60}\mbox{Fe} / {}^{56}\mbox{Fe}$ $\sim 5-13 \times 10^{-7}$, well above the ISM ratio (e.g., \citealt{MishraGoswami2014}). \citet{TelusEtAl2016} found that the bulk sample estimates were skewed toward low initial $^{60}\mbox{Fe} / {}^{56}\mbox{Fe}$ ratios because of fluid transport of Fe and Ni during aqueous alteration on the parent body and/or during terrestrial weathering, and \citet{TelusEtAl2018} have found initial ratios of $^{60}\mbox{Fe} / {}^{56}\mbox{Fe}$ as high as $\sim 0.85-5.1 \times 10^{-7}$, although the initial $^{60}\mbox{Fe} / {}^{56}\mbox{Fe}$ value is still a matter of debate. If estimates in the middle or high end of the plausible range prove to be correct, they would imply a $^{60}\mbox{Fe} / {}^{56}\mbox{Fe}$ ratio well above the interstellar average as well.
It has been long debated how the early Solar System came to have SLR abundances well above the ISM average. The isotopes $^{26}\mbox{Al}$ and $^{60}\mbox{Fe}$, on which we focus in this paper, are of particular interest because they are synthesised only in the late stages of massive stellar evolution, followed by injection into the ISM by stellar winds and supernovae (SNe) \citep{HussEtAl2009}. Other SLRs (e.g., ${}^{10}\mbox{Be}$, ${}^{36}\mbox{Cl}$ and ${}^{41}\mbox{Ca}$) can be produced in situ by irradiation of the protoplanetary disc by the young Sun \citep{HeymannDziczkaniec1976, ShuShangLee1996, LeeEtAl1998, ShuEtAl2001, GounelleEtAl2006}.\footnote{Small amounts of $^{26}\mbox{Al}$ can also be produced by this mechanism, but much too little to explain the observed $^{26}\mbox{Al} / {}^{27}\mbox{Al}$ ratio \citep{DupratTatischeff2007}.} Explaining the origin site of the $^{26}\mbox{Al}$ and $^{60}\mbox{Fe}$, and how they travelled from this site to the primitive Solar System before decaying, is an outstanding problem.
One possible origin site is asymptotic giant branch (AGB) stars \citep{WasserburgEtAl1994, BussoGallinoWasserburg1999, WasserburgEtAl2006}. However, because AGB stars only provide SLRs at the end of their lives, and because their main-sequence lifetimes are long ($> 1$ Gyr), the probability of a chance encounter between an AGB star and a star-forming region is very low \citep{KastnerMyers1994}. For these reasons, the supernovae and stellar winds of massive stars, which yield SLRs much more quickly after star formation, are thought to be the most likely origin of ${}^{26}\mbox{Al}$ and ${}^{60}\mbox{Fe}$. Proposed mechanisms by which massive stars could enrich the infant Solar System fall into three broad scenarios: (1) supernova triggered collapse of pre-solar dense cloud core, (2) direct pollution of an already-formed proto-solar disc by supernova ejecta and (3) sequential star formation events in a molecular cloud.
The first scenario, supernova triggered collapse of a pre-solar dense cloud core, was proposed by \citet{CameronTruran1977} just after the first discovery of ${}^{26}\mbox{Al}$ in Allende CAIs by \citet{LeeEtAl1976}. In this scenario, a nearby Type II supernova injects SLRs and triggers the collapse of the early Solar nebula. Many authors have simulated this scenario \citep{Boss1995, FosterBoss1996, BossEtAl2010, GritschnederEtAl2012, LiFrankBlackman2014, BossKeiser2014, Boss2017} and shown that it is in principle possible. A single supernova shock that encounters an isolated marginally stable prestellar core can compress it and trigger gravitational collapse while at the same time generating Rayleigh-Taylor instabilities at the surface that mix SLRs into the collapsing gas. However, these simulations have also demonstrated that this scenario requires severe fine-tuning. If the shock is too fast then it shreds and disperses the core rather than triggering collapse, and if it is too slow then mixing of SLRs does not occur fast enough to enrich the gas before collapse. Only a very narrow range of shock speeds is consistent with what we observe in the Solar System, and even then the SLR injection efficiency is low \citep{GritschnederEtAl2012, BossKeiser2014, Boss2017}. A possible solution to overcome the mixing barrier problem is the injection of SLRs via dust grains. However, only grains with radii larger than 30 $\rm \mu m$, which is much larger than the typical sizes of supernova grains (< 1 $\rm \mu m$), can penetrate the shock front and inject SLRs into the core \citep{BossKeiser2010}. Furthermore, analysis of Al and Fe dust grains in supernova ejecta constrains their sizes to be less than 0.01 $\rm \mu m$ \citep{BocchioEtAl2016}. \citet{DwarkadasEtAl2017} proposed triggered star formation inside the shell of a Wolf-Rayet bubble, and found that the probability of this scenario is $0.01 - 0.16$.
The second scenario is a direct pollution: the Solar system's SLRs were injected directly into an already-formed protoplanetary disc by supernova ejecta within the same star-forming region \citep{Chevalier2000, HesterEtAl2004}. Hydrodynamical simulations of a protoplanetary disc have shown that the edge-on disc can survive the impact of a supernova blast wave, but that in this scenario only a tiny fraction of the supernova ejecta that strike the disc are captured and thus available to explain the SLRs we observe \citep{OuelletteDeschHester2007, ClosePittard2017}. \citet{OuelletteDeschHester2007} suggests that dust grains might be a more efficient mechanism for injecting SLRs into the disc, and simulations by \citet{OuelletteDeschHester2010} show that about 70 per cent of material in grains larger than 0.4 $\rm \mu m$ can be captured by a protoplanetary disc. However, extreme fine-tuning is still required to make this scenario work quantitatively. One can explain the observed SLR abundances only if SN ejecta are clumpy, the Solar nebula was struck by a clump that was unusually rich in ${}^{26}\mbox{Al}$ and ${}^{60}\mbox{Fe}$, and the bulk of these elements had condensed into large dust grains before reaching the Solar System. The probability that all these conditions are met is very low, $10^{-3} - 10^{-2}$. Moreover, the required dust size of 0.4 $\rm \mu m$ is still a factor of 40 larger than the value of 0.01 $\rm \mu m$ obtained by detailed study of dust grain properties by \citet{BocchioEtAl2016}.
The third scenario is sequential star formation events and self-enrichment in a giant molecular cloud (GMC) \citep{GounelleEtAl2009, GaidosEtAl2009, GounelleMeynet2012, Young2014, Young2016}. \citet{GounelleMeynet2012} proposed a detailed picture of this scenario; in a first star formation event, supernovae from massive stars inject ${}^{60}\mbox{Fe}$ to the GMC, and the shock waves trigger a second star formation event. This second star formation event also contains massive stars, and the stellar winds inject ${}^{26}\mbox{Al}$ and collect ISM gas to build a dense shell surrounding an H \textsc{ii} region. In the already enriched dense shell, a third star formation event occurs where the Solar system forms. \citet{VasileiadisEtAl2013} and \citet{KuffmeierEtAl2016} have modelled the evolution of a GMC by hydrodynamical simulations and shown that SN ejecta trapped within a GMC can enrich the GMC gas to abundance ratios of $^{26}\mbox{Al} / {}^{27}\mbox{Al}$ $\sim 10^{-6} - 10^{-4}$ and $^{60}\mbox{Fe} / {}^{56}\mbox{Fe}$ $\sim 10^{-7} - 10^{-5}$, comparable to or higher than any meteoritic estimates. However, this scenario requires that the bulk of the SLRs that are produced be captured within their parent GMCs. This is enforced by fiat in the simulations (by the use of periodic boundary conditions), but it is far from clear if this requirement can be met in reality. In the simulations the required enrichment levels are not reached for $\sim 15$ Myr, but observed young star clusters are always cleared of gas by ages of $\lesssim 5$ Myr \citep[e.g.,][]{hollyhead15a}. Moreover, the observed distribution of ${}^{26}\mbox{Al}$ has a scale height significantly larger than that of GMCs, which would seem hard to reconcile with the idea that most ${}^{26}\mbox{Al}$ remains confined to the GMC where it was produced \citep{BouchetEtAl2015}.
The literature contains a number of other proposals \citep[e.g.,][]{TatischeffDupratdeSereville2010, GoodsonEtAl2016}, but what they have in common with the three primary scenarios outlined above is that they require an unusual and improbable conjunction of circumstances (e.g., a randomly-passing WR star, SN-produced grains much larger than observations suggest) that would render the Solar System an unusual outlier in its abundances, or that they are not consistent with the observed distribution of $^{26}\mbox{Al}$ in the Galaxy.
Here we present an alternative scenario, motivated by two observations. First, ${}^{26}\mbox{Al}$ is observed to extend to a significant height above and below the Galactic disc, suggesting that regions contaminated by SLRs must be at least kpc-scale \citep{BouchetEtAl2015}. Second, there is no a priori reason why one should expect star formation to produce a SLR distribution with the same mean as the ISM as a whole, because star formation does not sample from the ISM at random. Instead, star formation and SLR production are both highly correlated in space and time \citep[e.g.,][]{EfremovElmegreen1998, GouliermisEtAl2010, GouliermisEtAl2015, GouliermisEtAl2017, GrashaEtAl2017a, GrashaEtAl2017b}; the properties of GMCs are also correlated on Galactic scales \citep[e.g.,][]{FujimotoEtAl2014, FujimotoEtAl2016, Colombo14a}. That both SLRs and star formation are correlated on kpc scales suggests that it is at these scales that we should search for a solution to the origin of SLRs in the early Solar System.
In this paper, we will study the galactic-scale distributions of ${}^{26}\mbox{Al}$ and ${}^{60}\mbox{Fe}$ produced in stellar winds and supernovae, and propose a new contamination scenario: contamination due to Galactic-scale correlated star formation. In \autoref{Methods}, we present our numerical model of a Milky-Way like galaxy, along with our treatments of star formation and stellar feedback. In \autoref{Results}, we describe global evolution of the galactic disc and the abundance ratios of the stars that form in it. In \autoref{Discussion} we discuss the implications of our results, and based on them we propose a new scenario for SLR deposition. We summarise our findings in \autoref{Conclusions}.
\section{Methods}
\label{Methods}
We study the abundances of $^{60}\textrm{Fe}$ and $^{26}\textrm{Al}$ in newly-formed stars by performing a high-resolution chemo-hydrodynamical simulation of the interstellar medium (ISM) of a Milky-Way like galaxy.
The simulation includes hydrodynamics, self-gravity, radiative cooling, photoelectric heating, stellar feedback in the form of photoionisation, stellar winds and supernovae to represent the dynamical evolution of the turbulent multi-phase ISM, and a fixed axisymmetric logarithmic potential to represent the gravity of old stars and dark matter, which produces the galactic-scale shear motion of the ISM in a flat rotation curve. In the simulation, when self-gravity causes the gas to collapse beyond our ability to resolve it, we insert ``star particles'' that represent stochastically-generated stellar populations drawn star-by-star from the initial mass function (IMF). Each massive star in these populations evolves individually until it produces a mass-dependent yield of $^{60}\mbox{Fe}$ and $^{26}\mbox{Al}$ at the end of its life. We subsequently track the transport and decay of these isotopes, and their incorporation into new stars. Further details on our numerical method are given in the following subsections.
We carry out all analysis and post-processing of the simulation outputs, and produce all simulation visualisations, using the \textsc{yt} software package \citep{TurkEtAl2011}.
\subsection{Chemo-hydrodynamical simulation}
\label{Chemo-hydrodynamical simulation}
Our simulations follow the evolution of a Milky-Way type galaxy using the adaptive mesh refinement code \textsc{enzo} \citep{BryanEtAl2014}. We use a piecewise parabolic mesh hydrodynamics solver to follow the motion of the gas. Since the $\sim 200\ \mathrm{km\ s^{-1}}$ circular velocity of the galaxy necessitates strongly supersonic flows in the galactic disc, we make use of the dual energy formalism implemented in the \textsc{enzo} code, in order to avoid spurious temperature fluctuations due to floating point round-off error when the kinetic energy is much larger than the internal energy. We treat isotopes as passive scalars that are transported with the gas, and that decay with half-lives of 2.62 Myr for $^{60}\mbox{Fe}$ and 0.72 Myr for $^{26}\mbox{Al}$ \citep{RugelEtAl2009, NorrisEtAl1983}.
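Treating each isotope as a passive scalar, its decay over a timestep reduces to scaling the tracer field by $2^{-\Delta t / t_{1/2}}$; a minimal sketch:

```python
import numpy as np

# Half-lives in Myr, as quoted in the text.
T_HALF = {"Fe60": 2.62, "Al26": 0.72}

def decay(field, isotope, dt_myr):
    """Advance a passive-scalar isotope field by dt (in Myr),
    applying exponential radioactive decay exp(-dt ln2 / t_half)."""
    return field * np.exp(-np.log(2.0) * dt_myr / T_HALF[isotope])

# Advancing a uniform Al-26 field by one half-life halves it.
rho_al = np.full(8, 1.0)
rho_al = decay(rho_al, "Al26", 0.72)
```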
The gas cools radiatively to 10 K using a one-dimensional cooling curve created from the \textsc{cloudy} package's cooling table for metals and \textsc{enzo}'s non-equilibrium cooling rates for atomic species of hydrogen and helium \citep{AbelEtAl1997, FerlandEtAl1998}. This is implemented as tabulated cooling rates as a function of density and temperature \citep{JinEtAl2017}. In addition to radiative cooling, the gas can also be heated via diffuse photoelectric heating in which electrons are ejected from dust grains via FUV photons. This is implemented as a constant heating rate of $8.5 \times 10^{-26}\ \mathrm{erg\ s^{-1}}$ per hydrogen atom uniformly throughout the simulation box. This rate is chosen to match the expected heating rate assuming a UV background consistent with the Solar neighbourhood value \citep{Draine2011}. Self-gravity of the gas is also implemented.
We do not include dust grain physics because the typical drift velocity of the small dust ($\sim 0.1 \mu \rm m$) relative to gas at sub-parsec scale in the galactic disc is only $7.5 \times 10^{-4}\ \rm km/s$, much smaller than the typical turbulent velocity of the ISM ($\sim 10\ \rm km/s$) \citep{WibkingThompsonKrumholz2018}. Furthermore, analysis of Al and Fe dust grains in supernova ejecta constrains their sizes to be less than 0.01 $\rm \mu m$ \citep{BocchioEtAl2016}. Therefore, the dust grains and gas are very well coupled at the spatial scale we resolve in this simulation.
\subsection{Galaxy model}
\label{Galaxy model}
The galaxy is modelled in a three-dimensional simulation box of $(128\ \rm kpc)^3$ with isolated gravitational boundary conditions and periodic fluid boundaries. The root grid is $128^3$ with an additional 7 levels of refinement, producing a minimum cell size of 7.8125 pc. We refine a cell if the Jeans length, $\lambda_{\rm J} = c_{\rm s} \sqrt{\pi/(G\rho)}$, drops below 8 cell widths, comfortably satisfying the \citet{TrueloveEtAl1998} criterion. In addition, to ensure that we resolve stellar feedback, we require that any computational zone containing a star particle be refined to the maximum level. To keep the Jeans length resolved after collapse has reached the maximum refinement level, we employ a pressure floor such that the Jeans length is resolved by at least 4 cells on the maximum refinement level. In addition to the static root grid, we impose 5 additional levels of statically refined regions enclosing the whole galactic disc of 14 kpc radius and 2 kpc height. This guarantees that the circular motion of the gas in the galactic disc is well resolved, with a maximum cell size of 31.25 pc.
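The Jeans-based refinement rule can be sketched as a per-cell check (cgs units; the example densities, sound speeds and mean molecular weight are illustrative assumptions):

```python
import numpy as np

G = 6.674e-8            # gravitational constant, cm^3 g^-1 s^-2
PC = 3.086e18           # parsec in cm

def jeans_length(c_s, rho):
    """lambda_J = c_s sqrt(pi / (G rho)), all quantities in cgs."""
    return c_s * np.sqrt(np.pi / (G * rho))

def needs_refinement(c_s, rho, dx, n_jeans=8, has_star=False):
    """Refine if the Jeans length is resolved by fewer than n_jeans
    cell widths, or if the cell hosts a star particle (which is
    always kept at the maximum level)."""
    return has_star or jeans_length(c_s, rho) < n_jeans * dx

# Cold dense gas (c_s ~ 0.3 km/s, n ~ 100 cm^-3, mu = 1.4 m_H assumed)
# in a 7.8 pc cell triggers refinement.
rho_cold = 100 * 2.34e-24
flag = needs_refinement(3.0e4, rho_cold, 7.8125 * PC)
```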
We use initial conditions identical to those of \citet{TaskerTan2009}. These are tuned to the Milky Way in its present state, but the Galaxy's bulk properties were not substantially different when the Solar system formed 4.567 Gyr ago ($z \sim 0.4$). The simulated galaxy is set up as an isolated disc of gas orbiting in a static background potential which represents both dark matter and a stellar disc component. The form of the background potential is
\begin{equation}
\Phi (r, z) = \frac{1}{2} {v_{c, 0}^2} \ln \left[\frac{1}{{r_c^2}} \left( {r_c^2} + r^2 + \frac{z^2}{{q_{\phi}^2}} \right)\right],
\end{equation}
where $v_{c, 0}$ is the constant circular velocity at large radii, here set equal to 200 $\rm km\ s^{-1}$, $r$ and $z$ are the radial and vertical coordinates, the core radius is $r_c = 0.5\ \rm kpc$, and the axial ratio of the potential is $q_{\phi} = 0.7$. The corresponding circular velocity is
\begin{equation}
v_c = \frac{v_{c, 0} r}{\sqrt{{r_c^2} + r^2}}.
\end{equation}
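As a consistency check, the circular velocity follows from the potential via $v_c^2 = r\,\partial\Phi/\partial r$ at $z = 0$; a short numerical sketch (units of km/s and kpc):

```python
import numpy as np

V0, RC, QPHI = 200.0, 0.5, 0.7      # km/s, kpc, axis ratio

def phi(r, z):
    """Logarithmic background potential of the halo + stellar disc."""
    return 0.5 * V0**2 * np.log((RC**2 + r**2 + (z / QPHI)**2) / RC**2)

def v_circ(r, dr=1e-6):
    """v_c = sqrt(r dPhi/dr) at z = 0, via a central difference."""
    dphi = (phi(r + dr, 0.0) - phi(r - dr, 0.0)) / (2 * dr)
    return np.sqrt(r * dphi)

r = np.array([1.0, 4.0, 8.0, 12.0])                 # kpc
v_analytic = V0 * r / np.sqrt(RC**2 + r**2)          # closed form
```

The numerical derivative reproduces the closed-form rotation curve, which flattens at $v_{c,0}$ for $r \gg r_c$.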
The initial gas density distribution is
\begin{equation}
\rho (r, z) = \frac{\kappa \sqrt{{c_s^2} + {\sigma_{\rm 1D}^2}}}{2 \pi G Q z_h} {\rm sech}^2 \left( \frac{z}{z_h} \right),
\end{equation}
where $\kappa$ is the epicyclic frequency, $c_s$ is the sound speed, here set equal to $6\ \rm km\ s^{-1}$, $\sigma_{\rm 1D}$ is the one-dimensional velocity dispersion of the gas motions in the plane of the disc after the subtraction of the circular velocity, $Q$ is the Toomre stability parameter, and $z_h$ is the vertical scale height, which is assumed to vary with galactocentric radius following the observed radially-dependent H \textsc{i} scale height of the Milky Way. Our disc is initialised with $\sigma_{\rm 1D} = 0$.
The initial disc profile is divided radially into three parts. In our main region, between radii of $r = 2-13\ \rm kpc$, $\rho$ is set so that $Q = 1$. The other regions of the galaxy, from 0 to 2 kpc and from 13 to 14 kpc, are initialised with $Q = 20$. Beyond 14 kpc, the disc is surrounded by a static, very low density medium. We set the initial abundances of $^{60}\mbox{Fe}$ and $^{26}\mbox{Al}$ to $10^{-12}$, though this choice has no practical effect since the initial abundances decay rapidly. In total, the initial gas mass is $8.6 \times 10^9\ M_{\odot}$, and the initial $^{60}\mbox{Fe}$ and $^{26}\mbox{Al}$ masses are each set to $8.6 \times 10^{-3}\ M_{\odot}$.
Note that we do not include explicit spiral perturbations in our gravitational potential, but that flocculent spiral structure nonetheless forms spontaneously in our simulation as a result of gas self-gravity (see \autoref{Evolution of the Disc}). Similarly, we do not have a live model of the stellar bulge, but we implicitly include its effects on the gas via our potential, which has a bulge-like flattening at small radii. However, our simulation does not include the effects of a galactic bar, nor does it include the effects of cosmological inflow or tidal interactions with satellite galaxies. The influence of these effects should be addressed in a future work.
\subsection{Star formation}
\label{Star formation}
Implementations of star formation in galaxy-scale simulations such as ours are generally parameterised by two choices: a threshold density at which star formation begins, and an efficiency of star formation in cells above that threshold. In isolated galaxy simulations such as the one we perform, numerical experiments \citep[e.g.,][]{HopkinsEtAl2013} have shown that observed galaxies are best reproduced when the star formation threshold is set based on a criterion of gravitational boundedness, i.e., star formation should occur only in fluid elements that are gravitationally bound, or nearly so, at the highest available numerical resolution. In a grid simulation such as ours, the criterion of boundedness is most conveniently expressed in terms of the ratio of the local Jeans length $\lambda_J$ to the local cell size $\Delta x$. We set our star formation threshold such that gas is star-forming if $\lambda_J / \Delta x < 4$ for $\Delta x$ at the maximum allowed refinement level \citep{TrueloveEtAl1997}; note that this choice guarantees that star formation occurs only in cells that have been refined to the highest allowed level. Rather than calculating the sound speed on the fly, it is more convenient to note that, at the densities at which we apply this condition, the gas is always very close to the thermal equilibrium defined by the balance between photoelectric heating and radiative cooling (\autoref{Chemo-hydrodynamical simulation}). Consequently, we can reduce the condition for gas to be star-forming to a simple resolution-dependent density threshold by setting the sound speed from the equilibrium temperature as a function of density. Doing so and plugging in the various resolutions we use in this paper (see \autoref{Results}) yields number density thresholds for star formation of 12 $\rm cm^{-3}$ for a resolution of $\Delta x = 31$ pc, 25.4 $\rm cm^{-3}$ for $\Delta x = 15$ pc and 57.5 $\rm cm^{-3}$ for $\Delta x = 8$ pc.
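Setting $\lambda_J = 4\Delta x$ and solving for the density gives the resolution-dependent threshold; the equilibrium temperature is an input assumption in this sketch (with $T \approx 1500$ K it roughly reproduces the quoted 12 $\rm cm^{-3}$ at 31 pc):

```python
import numpy as np

G, K_B, M_H = 6.674e-8, 1.381e-16, 1.673e-24   # cgs
PC, MU = 3.086e18, 1.4                          # parsec; mean mol. weight

def n_threshold(T, dx_pc, n_cells=4):
    """Number density above which lambda_J < n_cells * dx, i.e.
    rho_th = pi c_s^2 / (G (n_cells dx)^2), returned in cm^-3.
    T is the (density-dependent) equilibrium temperature, treated as
    an input assumption here."""
    c_s2 = K_B * T / (MU * M_H)                 # isothermal sound speed^2
    rho_th = np.pi * c_s2 / (G * (n_cells * dx_pc * PC) ** 2)
    return rho_th / (MU * M_H)

n31 = n_threshold(1500.0, 31.0)    # illustrative equilibrium temperature
```

Because $\rho_{\rm th} \propto c_s^2 / \Delta x^2$, halving the cell size at fixed temperature quadruples the threshold density.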
The second parameter in our star formation recipe characterises the star formation rate in gas that exceeds the threshold. We express the star formation rate density in cells that exceed the threshold as
\begin{equation}
\frac{d\rho_*}{dt} = \epsilon_{\rm ff} \frac{\rho}{t_{\rm ff}}.
\end{equation}
Here $\rho$ is the gas density of the cell, $t_{\rm ff} = \sqrt{3\pi / 32 G \rho}$ is the local dynamical time, and $
\epsilon_{\rm ff}$ is our rate parameter. Fortunately the value of $\epsilon_{\rm ff}$ is very well constrained by both observations and numerical experiments. For observations, one can measure $\epsilon_{\rm ff}$ directly by a variety of methods, and the consensus result from most techniques is that $\epsilon_{\rm ff} \approx 0.01$, with relatively little dispersion \citep[e.g.,][though see \citealt{Lee16a} for a contrasting view]{Krumholz07a, Krumholz12a, Evans14a, Heyer16a, Vutisalchavakul16a, Leroy17a, Onus18a}. From the standpoint of numerical experiments, a number of authors have shown that only simulations that fix $\epsilon_{\rm ff} \approx 0.01$ yield ISM density distributions consistent with observational constraints \citep[e.g.,][]{Hopkins13c, Semenov18a}. Given these constraints, we adopt $\epsilon_{\rm ff} = 0.01$ for this work.
To avoid creating an extremely large number of star particles whose mass is insufficient to have a well sampled stellar population, we impose a minimum star particle mass, $m_{\rm sf}$, and form star particles stochastically rather than spawn particles in every cell at each timestep. In this scheme, a cell forms a star particle of mass $m_{\rm sf} = 300\ M_{\odot}$ with probability
\begin{equation}
P = \left(\epsilon_{\rm ff} \frac{\rho}{t_{\rm ff}} \Delta x^3 \Delta t \right) / m_{\rm sf},
\end{equation}
where $\Delta x$ is the cell width, and $\Delta t$ is the simulation timestep. In practice, all star particles in our simulation are created via this stochastic method with masses equal to $m_{\rm sf}$. Note that the choice of a star particle mass of $300\ M_{\odot}$ does not affect the total star formation rate in the simulated galaxy, as shown in Figure 1 of \citet{GoldbaumKrumholzForbes2015}, and we show in \aref{Resolution} that our star particles are small enough that we resolve the characteristic size scale on which star formation is clustered extremely well, so that our choice of star particle mass does not affect the clustering of star formation either. Star particles are allowed to form in the main region of the disc between $2 < r < 14$ kpc.
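The star formation law and stochastic spawning probability above can be sketched as follows (a minimal illustration in CGS units; the function names are ours, not \textsc{enzo}'s):

```python
import math
import random

G = 6.6743e-8            # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33         # solar mass [g]
EPS_FF = 0.01            # star formation efficiency per free-fall time
M_SF = 300.0 * M_SUN     # minimum star particle mass

def free_fall_time(rho):
    """t_ff = sqrt(3 pi / 32 G rho), in seconds, for rho in g cm^-3."""
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho))

def spawn_probability(rho, dx, dt):
    """Probability that a cell of density rho [g cm^-3] and width dx [cm]
    forms a 300 Msun star particle during a timestep dt [s]."""
    sfr_density = EPS_FF * rho / free_fall_time(rho)   # d(rho_*)/dt
    return sfr_density * dx**3 * dt / M_SF

def maybe_form_star(rho, dx, dt, rng=random):
    """Stochastically decide whether this cell spawns a particle."""
    return rng.random() < spawn_probability(rho, dx, dt)
```

Because the probability is linear in $\Delta t$, the expected stellar mass formed per unit time matches the target rate $\epsilon_{\rm ff}\rho/t_{\rm ff}$ regardless of the timestep, provided $P \ll 1$.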
\subsection{Stellar feedback}
\label{Stellar feedback}
Here we describe a subgrid model for star formation feedback that includes the effects of ionising radiation from young stars, the momentum and energy released by individual SN explosions, and gas and isotope injections from stellar winds and SNe. The inclusion of multiple forms of feedback is critical for producing results that agree with observations in high-resolution simulations such as ours \citep[e.g.,][]{Hopkins11a, AgertzEtAl2013, Stinson13a, Renaud13a}. In particular, simulations with enough resolution to capture the $\approx 5$ Myr delay between the onset of star formation and the first supernova explosions require non-supernova feedback in order to avoid overproducing stars (compared to what is observed) before supernovae have time to disperse star-forming gas. We pause here to note that this means that implementations of feedback are inevitably tuned to the resolution of the simulations being carried out, with simulations that go to higher resolution requiring the inclusion of more physical processes to replace the artificial softening of gravity that occurs at lower resolution. The feedback implementation we use here is tuned to the $\sim 10$ pc resolution we achieve, and is very similar to that of other authors who run simulations at similar resolution.
All star particles form with a uniform initial mass of 300 $M_{\odot}$. Within each of these particles we expect there to be a few stars massive enough to produce SN explosions. We model this using the \textsc{slug} stellar population synthesis code \citep{da-Silva12a, KrumholzEtAl2015}. This stellar population synthesis method is used dynamically in our simulation; each star particle spawns an individual \textsc{slug} simulation that stochastically draws individual stars from the initial mass function, tracks their mass- and age-dependent ionising luminosities, determines when individual stars explode as SNe, and calculates the resulting injection of $^{60}\mbox{Fe}$ and $^{26}\mbox{Al}$. In the \textsc{slug} calculations we use a Chabrier initial mass function \citep{Chabrier2005} with \textsc{slug}'s Poisson sampling option, Padova stellar evolution tracks with Solar metallicity \citep{GirardiEtAl2000}, \textsc{starburst99} stellar atmospheres \citep{LeithererEtAl1999}, and Solar metallicity yields from \citet{SukhboldEtAl2016}.
We include stellar feedback from photoionisation and SNe, following \citet{GoldbaumKrumholzForbes2016}, though our numerical implementation is very similar to that used by a number of previous authors \citep[e.g.,][]{Renaud13a}. For the former, we use the total ionising luminosity $S$ from each star particle calculated by \textsc{slug} to estimate the Str\"{o}mgren volume $V_s = S/\alpha_{\rm B} n^2$, and compare with the cell volume, $V_c$. Here $\alpha_{\rm B} = 2.6\times 10^{-13}$ cm$^3$ s$^{-1}$ is the case B recombination rate coefficient, $n = \rho/\mu m_{\rm H}$ is the number density, and $\mu = 1.27$ and $m_{\rm H} = 1.67 \times 10^{-24}$ g are the mean particle mass and the mass of an H nucleus, respectively. If $V_s < V_c$, the cell is heated to $10^4 (V_s/V_c)$ K. If $V_s > V_c$, the cell is heated to a temperature of $10^4$ K, and then we calculate the luminosity $S_{\rm esc} = S - \alpha_{\rm B} n^2 V_c$ that escapes the cell. We distribute this luminosity evenly over the neighbouring 26 cells, and repeat the procedure.
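The per-cell photoionisation update described above can be sketched as below (our own minimal illustration, assuming CGS inputs; the distribution of the escaping luminosity over the 26 neighbours and the repetition of the procedure are omitted):

```python
ALPHA_B = 2.6e-13     # case B recombination coefficient [cm^3 s^-1]
MU = 1.27             # mean particle mass in units of m_H
M_H = 1.67e-24        # mass of an H nucleus [g]

def apply_photoionisation(S, rho, V_c):
    """Heat one cell given ionising photon rate S [s^-1], gas density
    rho [g cm^-3], and cell volume V_c [cm^3].

    Returns (T_cell, S_esc): the temperature the cell is set to and the
    ionising luminosity escaping to the neighbours (0 if fully absorbed).
    """
    n = rho / (MU * M_H)                  # number density [cm^-3]
    V_s = S / (ALPHA_B * n**2)            # Stroemgren volume
    if V_s < V_c:
        # Ionised region smaller than the cell: partial heating
        return 1.0e4 * (V_s / V_c), 0.0
    # Cell fully ionised: heat to 1e4 K, pass on the unabsorbed photons
    S_esc = S - ALPHA_B * n**2 * V_c
    return 1.0e4, S_esc
```

In the full scheme, any non-zero `S_esc` would be split evenly over the 26 neighbouring cells and the same update applied to each.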
For SN feedback, a critical challenge in high resolution simulations such as ours is that the Sedov-Taylor radius for supernova remnants may or may not be resolved, depending on the ambient density in which the supernova explodes. In this regime several authors have carried out numerical experiments showing that the feedback recipes that best reproduce the results of high-resolution simulations are those that switch smoothly from injecting pure radial momentum in cases where the Sedov-Taylor radius is unresolved to adding pure thermal energy in cases where it is resolved \citep[e.g.,][]{Kimm15a, Hopkins18c}. Our scheme, which is identical to that used in \citet{GoldbaumKrumholzForbes2016}, is motivated by this consideration. We identify particles that will produce SNe in any given time step. For each SN that occurs, we add a total momentum of $3 \times 10^5\ M_{\odot}\ \rm km\ s^{-1}$, directed radially outward in the 26 neighbouring cells. This momentum budget is consistent with the expected deposition from single supernovae \citep{GentryEtAl2017}. The total net increase in kinetic energy in the cells surrounding the SN host cell is then deducted from the available budget of $10^{51}$ erg, and the balance of the energy is deposited in the SN host cell as thermal energy. This scheme meets the requirement of smoothly switching from momentum to energy injection depending on the ambient density: if the explosion occurs in an already-evacuated region such that the gas density is low, the kinetic energy added in the process of depositing the radially outward momentum will be $\ll 10^{51}$ erg, and the bulk of the supernova energy will be injected as pure thermal energy. In a dense region, on the other hand, little thermal energy will remain, and only the radial momentum deposited will matter.
In the higher resolution phases of the simulation ($\Delta x =$ 15 pc, 8 pc), we increase the momentum budget to $5 \times 10^5\ M_{\odot}\ \rm km\ s^{-1}$ in order to maintain approximately the same total star formation rate; given that the actual momentum budget is uncertain by a factor of $\approx 10$ due to the effects of clustering \citep{GentryEtAl2017}, this value is still well within the physically plausible range.
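The energy bookkeeping for a single SN can be sketched as follows. For simplicity we assume the neighbouring cells are initially at rest; the actual scheme computes the net kinetic energy change from the real cell velocities:

```python
M_SUN = 1.989e33              # solar mass [g]
KM_S = 1.0e5                  # km/s in cm/s
E_SN = 1.0e51                 # total energy budget per SN [erg]
P_SN = 3.0e5 * M_SUN * KM_S   # radial momentum budget [g cm/s]
N_NEIGHBOURS = 26

def sn_energy_split(cell_masses):
    """Split one SN's 10^51 erg between radial momentum and thermal energy.

    cell_masses: masses [g] of the 26 neighbouring cells, assumed
    initially at rest for this sketch. Returns (kinetic, thermal) in erg.
    """
    p_per_cell = P_SN / N_NEIGHBOURS
    # Kinetic energy gained by each cell when given momentum p at rest
    kinetic = sum(p_per_cell**2 / (2.0 * m) for m in cell_masses)
    # Whatever remains of the budget goes into the host cell as heat
    thermal = max(E_SN - kinetic, 0.0)
    return kinetic, thermal
```

For massive (dense) neighbour cells the kinetic term is small and nearly the full $10^{51}$ erg is deposited thermally; for very low-mass cells the momentum kick can consume the entire budget, leaving no thermal remainder.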
We include gas mass injection from stellar winds and SNe into each star particle's host cell each time step. The mass loss rate of each star particle is calculated by the \textsc{slug} stellar population synthesis code. Note that we do not include energy injection from stellar winds; this will be included in future work. However, even though the simulation does not include this effect, the total star formation rate in the simulated galaxy is consistent with observations.
We include isotope injection from stellar winds and SNe, which is calculated from the mass-dependent yield tables of \citet{SukhboldEtAl2016}. The explosion model for massive stars is one-dimensional, of a single metallicity (solar) and does not include any effects of stellar rotation. The chemical yields are deposited into the host cell. As discussed in \citet{SukhboldEtAl2016}, their nucleosynthesis model overpredicts\footnote{\citet{SukhboldEtAl2016} compared their ejected mass ratio of $^{60}\mbox{Fe}/{}^{26}\mbox{Al}$ (= 0.9) with the observed steady-state mass ratio of 0.34 \citep{WangEtAl2007}, and stated that their yield should be corrected by a factor of three. However, the steady-state mass ratio, not the ejected mass ratio, should be used for comparison with the observed mass ratio. The steady-state mass ratio can be obtained by multiplying by the ratio of half-lives, as $0.9 \times (2.62\ {\rm Myr}/0.72\ {\rm Myr}) = 3.3$. This steady-state ratio is roughly ten times larger than the observed value. That is why we modify their tables by reducing the $^{60}\mbox{Fe}$ yield by a factor of five and doubling the $^{26}\mbox{Al}$ yield.} the ratio of $^{60}\mbox{Fe}$ to $^{26}\mbox{Al}$ compared to that determined from $\gamma$-ray line observations \citep{WangEtAl2007}.
They note that the discrepancy might have to do with errors in poorly-known nuclear reaction rates, especially for $^{26}\mbox{Al}(n, p)^{26}\mbox{Mg}$, $^{26}\mbox{Al}(n, \alpha)^{23}\mbox{Na}$, $^{59, 60}\mbox{Fe}(n, \gamma)^{60, 61}\mbox{Fe}$, or with uncertainties in stellar mixing parameters such as the strength of convective overshoot. Rotational mixing is another possible effect that is not considered in their chemical yields \citep{ChieffiLimongi2013, LimongiChieffi2018}.
To ensure that our $^{60}\mbox{Fe}/{}^{26}\mbox{Al}$ ratio is consistent with observations, we modify their tables slightly by reducing the $^{60}\mbox{Fe}$ yield by a factor of five and doubling the $^{26}\mbox{Al}$ yield. This brings our Galaxy-averaged ratios of $^{60}\mbox{Fe}/{}^{26}\mbox{Al}$, $^{60}\mbox{Fe}/\mbox{SFR}$, and $^{26}\mbox{Al}/\mbox{SFR}$ into good agreement with observations. Although uncertainties in the chemical yields might affect our results, we expect the effect to be at most a factor of ten, not orders of magnitude, since this is the current level of discrepancy between the numerical results and the observations. It would be worthwhile repeating our simulations in the future with other models of chemical yields \citep{EkstromEtAl2012, LimongiChieffi2006, LimongiChieffi2018, ChieffiLimongi2013, NomotoEtAl2006, NomotoKobayashiTominaga2013, PignatariEtAl2016}.
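The arithmetic behind this correction, using the half-lives and the observed steady-state ratio quoted in the footnote above, can be checked directly:

```python
# Ejected 60Fe/26Al mass ratio from Sukhbold et al. (2016)
ejected_ratio = 0.9
# Convert to a steady-state mass ratio using the half-lives
# of 60Fe (2.62 Myr) and 26Al (0.72 Myr)
steady_state = ejected_ratio * (2.62 / 0.72)   # ~3.3
observed = 0.34                                # Wang et al. (2007)

discrepancy = steady_state / observed          # ~10
# Reducing the 60Fe yield by 5 and doubling the 26Al yield
# lowers the ratio by a combined factor of 10
corrected = steady_state / 5.0 / 2.0
print(steady_state, discrepancy, corrected)
```

The corrected ratio of $\approx 0.33$ is consistent with the observed 0.34.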
\section{Simulation Results}
\label{Results}
\subsection{Evolution of the Disc}
\label{Evolution of the Disc}
\begin{figure}
\includegraphics[width=\columnwidth]{time_evolution}
\caption{The time evolution of SFR and isotope mass. The black solid line shows the total SFR in the galactic disc. The blue dotted and red dashed lines show the total mass of $\rm ^{60}Fe$ and $\rm ^{26}Al$ respectively. The sharp features at 600 and 660 Myr are transients caused when we increase the resolution.}
\label{fig:time_evolution}
\end{figure}
\begin{figure*}
\includegraphics[width=\hsize]{density_projection}
\caption{The morphology of the galactic disc. Panels show the gas (left), $\rm ^{60}Fe$ (middle) and $\rm ^{26}Al$ (right) surface densities of the face-on disc at $t = $ 750 Myr. Each image is 28 kpc across. The galactic disc rotates anticlockwise. The two circles indicate Galactocentric radii of 7.5 kpc and 8.5 kpc, roughly bounding the Solar annulus.}
\label{fig:density_projection}
\end{figure*}
\begin{figure*}
\includegraphics[width=\hsize]{density_projection_zoom}
\caption{Same as \autoref{fig:density_projection}, but zoomed in on a spot near the Solar Circle. Panels show the gas (top-left), star formation rate (top-right), $\rm ^{60}Fe$ (bottom-left) and $\rm ^{26}Al$ (bottom-right) surface densities at $t = $ 750 Myr. The two arcs show Galactocentric radii of 7.5 and 8.5 kpc, bounding the Solar annulus.}
\label{fig:density_projection_zoom}
\end{figure*}
To determine the equilibrium distributions of isotopes in newly-formed stars, we use a relaxation strategy to allow the simulated galaxy to settle into statistical equilibrium at high resolution. We first run the simulation at a resolution of 31 pc for 600 Myr, corresponding to two rotation periods at 10 kpc from the galactic centre. This time is sufficient to allow the disc to settle into statistical steady state, as we illustrate in \autoref{fig:time_evolution}, which shows the time evolution of the total star formation rate (SFR) and total $^{60}\textrm{Fe}$ and $^{26}\textrm{Al}$ masses within the Galaxy. We then increase the resolution from 31 pc to 15 pc and allow the disc to settle back to steady state at the new resolution, which takes until 660 Myr. At that point we increase the resolution again, to 8 pc. These refinement steps are visible in \autoref{fig:time_evolution} as sudden dips in the SFR, which occur because it takes some time after we increase the resolution for gas to collapse past the new, higher star formation threshold, followed by sudden bursts as a large mass of gas simultaneously reaches the threshold. However, feedback then pushes the system back into equilibrium. In the equilibrium state the SFR is $1-3$ $M_\odot$ yr$^{-1}$, consistent with the observed Milky-Way star formation rate \citep{ChomiukPovich2011}. Similarly, the total SLR masses in the equilibrium state are 0.7 $\rm M_\odot$ for $\rm ^{60}Fe$ and 2.1 $\rm M_\odot$ for $\rm ^{26}Al$, respectively, consistent with masses determined from $\gamma$-ray observations \citep{Diehl2017, WangEtAl2007}. Note that, as we change the resolution, the steady-state SFR and SLR abundances vary at the factor of $\approx 2$ level. This is not surprising, because our stellar feedback model operates on a stencil of $3^3$ cells around each star particle, and thus the volume over which we inject feedback changes with the resolution.
However, we note that the variations in equilibrium SFR and SLR mass with resolution are well within the observational uncertainties on these quantities.
\autoref{fig:density_projection} shows the global distributions of gas and isotopes in the galactic disc at $t = $ 750 Myr, when the maximum resolution is 8 pc and the galactic disc is in a quasi-equilibrium state. \autoref{fig:density_projection_zoom} shows the same data, zoomed in on a $3.5$ kpc-region centred on the Solar Circle\footnote{Simulation movies are available at \url{https://sites.google.com/site/yusuke777fujimoto/data}}. The Figures show that the disc is fully fragmented, and has produced GMCs and star-forming regions.
The distributions of $\rm ^{60}Fe$ and $\rm ^{26}Al$ are strongly correlated with the star-forming regions, which correspond to the highest-density regions (reddish colours) visible in the gas plot.
This is as expected, since these isotopes are produced by massive stars, which, because of their short lifetimes, do not have time to wander far from their birth sites.
However, there are important morphological differences between the distributions of $\rm ^{60}Fe$, $\rm ^{26}Al$, and star formation. The $^{60}\mbox{Fe}$ distribution is the most extended, with the typical region of $^{60}\mbox{Fe}$ enrichment exceeding 1 kpc in size, compared to $\sim 100$ pc or less for the density peaks that represent star-forming regions. The $^{26}\mbox{Al}$ distribution is intermediate, with enriched regions typically hundreds of pc in scale. The larger extent of $^{60}\mbox{Fe}$ compared to $^{26}\mbox{Al}$ is due to its longer half-life (2.62 Myr versus 0.72 Myr for $^{26}$Al) and its origin solely in fast-moving SN ejecta (as opposed to pre-SN winds, which contribute significantly to $^{26}$Al).
In addition to the comparison between SLRs and star formation, it is interesting to compare SLRs to the distribution of hot gas produced by supernovae (defined here as gas with temperature $T>10^6$ K), which we show in \autoref{fig:hot_gas_projection_zoom}. We see that, as expected, regions of $^{60}\mbox{Fe}$ and $^{26}\mbox{Al}$ enrichment correlate well with bubbles of hot gas. However, it is interesting to note that the outer edges of the $^{60}\mbox{Fe}$ or $^{26}\mbox{Al}$ bubbles seen in \autoref{fig:hot_gas_projection_zoom} extend significantly further than the bubbles of hot ISM. This could be a result either of cooling of the hot gas on timescales shorter than the decay of SLRs, or of rapid mixing of SLRs into cooler regions. Regardless, our finding that regions of SLR enrichment are generally larger in extent than regions of hot gas may be testable in the future as higher resolution observations of $\gamma$-ray emission from SLRs observed \textit{in situ} in the ISM become available.
\begin{figure*}
\includegraphics[width=\hsize]{hot_gas_projection_comparison.pdf}
\caption{Same as \autoref{fig:density_projection_zoom}, but showing hot gas ($> 10^6\ \mathrm{K}$) on the top-right panel.}
\label{fig:hot_gas_projection_zoom}
\end{figure*}
\subsection{Abundance ratios in newborn stars}
To investigate abundance ratios of isotopes in newborn stars, whenever a star particle forms in our simulations, we record the abundances of $\rm ^{60}Fe$ and $\rm ^{26}Al$ in the gas from which it forms, since these should be inherited by the resulting stars. We do not add any additional decay, because our stochastic star formation prescription does not immediately convert gas to stars as soon as it crosses the density threshold, and instead accounts for the finite delay between gravitational instability and final collapse.
\autoref{fig:abundance_ratios} shows the probability distribution functions (PDFs) for the abundance ratios $^{60}\mbox{Fe} / {}^{56}\mbox{Fe}$ and $^{26}\mbox{Al} / {}^{27}\mbox{Al}$; we derive the masses of the stable isotopes ${}^{56}\mbox{Fe}$ and ${}^{27}\mbox{Al}$ from the observed abundances of those species in the Sun \citep{AsplundEtAl2009}, and we measure the PDFs for star particles that form between 740 and 750 Myr in the simulation, at galactocentric radii from $7.5 - 8.5$ kpc (i.e., within $\approx 0.5$ kpc of the Solar Circle). However, the results do not strongly vary with galactocentric radius, as shown in \aref{Radial dependence}. We also show that the PDFs are converged with respect to spatial resolution at their high-abundance ends (though not on their low-abundance tails) in \aref{Resolution}.
\begin{figure}
\includegraphics[width=\columnwidth]{abundance_ratios}
\caption{
The abundance ratios of short-lived isotopes in newly-formed stars. The central panel shows the joint PDF of $\rm ^{60}Fe/^{56}Fe$ and $\rm ^{26}Al/^{27}Al$ from our simulations, with colours showing probability density and black points showing individual stars in sparse regions. The top and right panels show the PDFs of $\rm ^{60}Fe/^{56}Fe$ and $\rm ^{26}Al/^{27}Al$ individually, with simulations shown in blue. All simulation data are for stars formed from $740 - 750$ Myr, at Galactocentric radii from 7.5 - 8.5 kpc. Green bands show the uncertainty range of Solar System meteoritic abundances \citep{LeeEtAl1976, MishraGoswami2014, TangDauphas2015, TelusEtAl2018}; for $^{60}\mbox{Fe}$, due to the wide range of values reported in the literature, we also show three representative individual measurements as indicated in the legend.
}
\label{fig:abundance_ratios}
\end{figure}
In \autoref{fig:abundance_ratios} we also show meteoritic estimates for these abundance ratios \citep{LeeEtAl1976, MishraGoswami2014, TangDauphas2015, TelusEtAl2018}.
The PDF of $\rm ^{60}Fe$ peaks near $^{60}\mbox{Fe} / {}^{56}\mbox{Fe} \sim 3\times10^{-7}$, but is $\sim 2$ orders of magnitude wide, placing all the meteoritic estimates well within the range covered by the simulated PDF. The $^{26}\mbox{Al}$ abundance distribution is similarly broad, but the measured meteoritic value sits very close to its peak, at $^{26}\mbox{Al} / {}^{27}\mbox{Al} \sim 5\times10^{-5}$.
Clearly, the abundance ratios measured in meteorites are fairly typical of what one would expect for stars born near the Solar Circle, and thus the Sun is not atypical.
\section{Discussion}
\label{Discussion}
Our simulations suggest a mechanism by which the SLRs came to be in the primitive Solar System that is quite different from those proposed in earlier work based on smaller-scale simulations or analytic models. We call this new contamination scenario ``inheritance from Galactic-scale correlated star formation". Our scenario differs substantially from the triggered collapse or direct injection scenarios in that both of these require unusual circumstances -- the core that forms the Sun is either at just the right distance from a supernova to be triggered into collapse yet still well-mixed, or the protoplanetary disc was hit by supernova ejecta and managed to capture them without being destroyed. In either case stars with SLR abundances like those of the Solar System should be rare outliers, whereas we find that the Sun's abundances are typical.
\begin{figure*}
\includegraphics[width=\hsize]{l_b_plot.png}
\caption{Distributions of gas, star formation, and SLRs in Galactic coordinates, as viewed from the position of the Sun (i.e., a point 8 kpc from the Galactic centre). Panels show the gas, star formation rate, $\rm ^{60}Fe$ and $\rm ^{26}Al$ distributions (from top to bottom) in Galactic coordinates. Note that, although the absolute scales on the colour bars in each panel differ, all panels use the same dynamic range, and thus the distributions are directly comparable. The scalloping pattern that is visible at high latitudes and toward the outer galaxy is an artefact due to aliasing between the Cartesian grid and the angular coordinates in regions where the resolution is low.}
\label{fig:l_b_plot}
\end{figure*}
However, the scenario illustrated in our simulations is also very different from the GMC confinement hypothesis. To see why, one need only examine \autoref{fig:density_projection_zoom}. Observed GMCs, and those in our simulations, are at most $\sim 100$ pc in size, whereas in \autoref{fig:density_projection_zoom} we clearly see that regions of $^{60}\mbox{Fe}$ and $^{26}\mbox{Al}$ contamination are an order of magnitude larger. This difference between our simulations and the GMC confinement hypothesis is also visible in the distribution of $^{26}\mbox{Al}$ on the sky as seen from Earth. \autoref{fig:l_b_plot} shows all-sky maps of the gas, star formation rate, $^{60}\mbox{Fe}$, and $^{26}\mbox{Al}$ as viewed from a point 8 kpc from the Galactic Centre (i.e., at the location of the Sun). We should not regard \autoref{fig:l_b_plot} as an exact prediction of the $\gamma$-ray sky as seen from Earth, since we have not taken care to replicate the Sun's placement relative to spiral arms, nor have we tried to match the sky positions of local structures such as the Sco-Cen association that may have a large impact on what we observe from Earth. However, it is nonetheless interesting to examine the large-scale qualitative behaviour of the map shown in \autoref{fig:l_b_plot}, and its implications. If SLRs are confined by GMCs, then $\gamma$-rays from $^{26}$Al decay should have an angular thickness on the sky comparable to that of star-forming regions. \autoref{fig:l_b_plot} clearly shows that this is not the case in our simulations: $^{60}\mbox{Fe}$ and $^{26}\mbox{Al}$ extend to galactic latitude $b = 4^{\circ} - 5^{\circ}$, while star forming regions are confined to $b < 2^{\circ}$. The difference in scale heights we find is consistent with observations. 
The Galactic CO survey of \citet{DameEtAl2001} finds that most emission is confined to Galactic latitudes $b < 2^{\circ}$, while the $\gamma$-ray emission maps of $^{26}\mbox{Al}$ \citep{PluschkeEtAl2001, BouchetEtAl2015} show a thick disc with $b \approx 5^{\circ}$. Our simulation successfully reproduces the observed difference in $^{26}\mbox{Al}$ and CO angular distribution.
\begin{figure*}
\includegraphics[width=\hsize]{phase_plots.png}
\caption{Mass distributions with respect to gas temperature versus density. Left is the gas, middle is $\rm ^{60}Fe$ and right is $\rm ^{26}Al$ at $t = $ 750 Myr.}
\label{fig:phase_plots}
\end{figure*}
We can make this discussion more quantitative by examining the distribution of $^{60}\mbox{Fe}$ and $^{26}\mbox{Al}$ and their correlation with the gas and star formation properties of the galaxy. We first examine the distribution of the SLRs with respect to gas density and temperature, as illustrated in \autoref{fig:phase_plots}. We find that only 30\% of the $^{60}\mbox{Fe}$ and 56\% of the $^{26}\mbox{Al}$ by mass are found in GMCs (defined as gas with a density above 100 H cm$^{-3}$), compared to a total GMC mass fraction of 16\%; thus $^{60}\mbox{Fe}$ is overabundant in GMCs compared to the bulk of the ISM by less than a factor of 2, and $^{26}\mbox{Al}$ by less than a factor of 3.5. These modest enhancements are inconsistent with the hypothesis that SLR abundances are high in the Solar System because SLRs are trapped within long-lived GMCs.
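The enhancement factors quoted above follow directly from the GMC mass fractions:

```python
# Mass fractions in GMC gas (n > 100 H cm^-3), from our phase-space analysis
f_gas_gmc = 0.16    # fraction of total gas mass in GMCs
f_fe_gmc = 0.30     # fraction of 60Fe mass in GMCs
f_al_gmc = 0.56     # fraction of 26Al mass in GMCs

# Overabundance relative to a uniformly mixed ISM
enh_fe = f_fe_gmc / f_gas_gmc   # < 2
enh_al = f_al_gmc / f_gas_gmc   # 3.5
print(enh_fe, enh_al)
```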
We can also reach a similar conclusion by examining the spatial correlation of star formation with SLRs. For any two-dimensional fields $f(\vec{r})$ and $g(\vec{r})$ defined as a function of position $\vec{r}$ within the galactic disc, we can define the normalised spatial cross-correlation function $(f * g) (r)$ as
\begin{equation}
\label{eq:correlation_function}
(f * g) (r) = \frac{\left\langle \int f(\vec{r}') g(\vec{r}'-\vec{r})\, d\vec{r}' \right\rangle}{\int f(\vec{r}') g(\vec{r}')\, d\vec{r}'}
\end{equation}
where $r = |\vec{r}|$, and the angle brackets indicate an average over all possible angles of the displacement vector $\vec{r}$. In practice we can compute the correlation numerically using projected images such as those shown in \autoref{fig:density_projection} for two quantities $f$ and $g$. The denominator is simply the sum over pixels of the product of the two images, while we can obtain the integral in the numerator for a displacement vector $\vec{r}$ by shifting one of the images by $\vec{r}$, multiplying the shifted and unshifted images, and summing the product of the two. We then compute the average over angle by averaging the numerator over shifts of the same magnitude $r = |\vec{r}|$. We show the spatial cross-correlation between star formation and element abundance ratios in \autoref{fig:cross_correlation}. As one can see from the figure, star formation is correlated with $^{60}\mbox{Fe}$ abundance on scales of 1 kpc and with $^{26}\mbox{Al}$ abundance on scales of hundreds of pc, much larger than an individual GMC or star-forming complex. The difference in correlation scale between $^{60}\mbox{Fe}$ and $^{26}\mbox{Al}$ comes from their different half-lives (2.62 Myr versus 0.72 Myr) and the fact that $^{60}\mbox{Fe}$ is added to the ISM only through fast-moving SN ejecta, while $^{26}\mbox{Al}$ has contributions from both supernovae and pre-SN stellar winds. This is consistent with the different morphological distributions of $^{60}\mbox{Fe}$ and $^{26}\mbox{Al}$ shown in \autoref{fig:density_projection_zoom}. The results do not strongly vary with galactocentric radius, as shown in \aref{Radial dependence}.
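A possible numerical implementation of \autoref{eq:correlation_function} for periodic 2D maps is sketched below; it evaluates the numerator for every shift at once with FFTs and then averages over displacements of equal magnitude. The function name and binning choices are ours, and the actual analysis pipeline may differ in such details:

```python
import numpy as np

def cross_correlation(f, g, n_bins=20):
    """Angle-averaged normalised cross-correlation (f * g)(r) of two
    2D maps, assuming periodic boundaries."""
    ny, nx = f.shape
    denom = np.sum(f * g)                     # zero-shift normalisation
    # Numerator for all shifts at once: sum_x f(x) g(x - r)
    corr = np.fft.ifft2(np.fft.fft2(f) * np.conj(np.fft.fft2(g))).real / denom
    # Displacement magnitude of each shift, accounting for wrap-around
    dy = np.minimum(np.arange(ny), ny - np.arange(ny))[:, None]
    dx = np.minimum(np.arange(nx), nx - np.arange(nx))[None, :]
    r = np.hypot(dy, dx)
    # Average over shifts of equal |r| (in pixels)
    bins = np.linspace(0.0, r.max(), n_bins + 1)
    which = np.minimum(np.digitize(r.ravel(), bins) - 1, n_bins - 1)
    profile = np.array([corr.ravel()[which == i].mean()
                        if np.any(which == i) else np.nan
                        for i in range(n_bins)])
    return 0.5 * (bins[:-1] + bins[1:]), profile
```

By construction the correlation equals 1 at zero shift, so an autocorrelation profile starts at unity and decays on the field's characteristic scale.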
\begin{figure}
\includegraphics[width=\columnwidth]{cross_correlation_disc.pdf}
\caption{Normalised spatial cross-correlations $(f * g) (r)$ between the SFR surface density and the surface densities of $^{60}\mbox{Fe}$ and $^{26}\mbox{Al}$ divided by the gas surface density.}
\label{fig:cross_correlation}
\end{figure}
The overall picture that emerges from our simulations is that SLR abundances in newborn stars are large because star formation is highly correlated in time and space \citep{EfremovElmegreen1998, GouliermisEtAl2010, GouliermisEtAl2015, GouliermisEtAl2017, GrashaEtAl2017a, GrashaEtAl2017b}. SN ejecta are not confined to individual molecular clouds, and instead deposit radioactive isotopes in the atomic gas up to $\sim 1$ kpc from their parent molecular clouds. However, because star formation is correlated, and because molecular clouds are not closed boxes but instead continually accrete atomic gas during their star-forming lives \citep{FukuiKawamura2010, GoldbaumEtAl2011, Zamora-AvilesVazquez-SemadeniColin2012}, the pre-enriched atomic gas within $\sim 1$ kpc of a molecular cloud stands a far higher chance of being incorporated into a molecular cloud and thence into stars within a few Myr than does a random portion of the ISM at similar density and temperature. Conversely, the atomic gas in a galaxy that will be incorporated into a star a few Myr in the future does not represent an unbiased sampling of all the atomic gas in the galaxy. Instead, it is preferentially the atomic gas that is close to sites of current star formation, and thus is far more likely than average to have been contaminated with SLRs. This galactic-scale correlation of star formation is the key physical mechanism that produces high SLR abundances in the primitive Solar System and other young stars.
\section{Conclusions}
\label{Conclusions}
Short-lived radioisotopes (SLRs) such as $^{60}\mbox{Fe}$ and $^{26}\mbox{Al}$ are radioactive elements with half-lives less than 15 Myr that studies of meteorites have shown to be present at the time when the most primitive Solar System bodies condensed. The most likely origin site for the $^{60}\mbox{Fe}$ and $^{26}\mbox{Al}$ in meteorites is nucleosynthesis in massive stars, but the exact delivery mechanism by which these elements entered the Solar System's protoplanetary disc is still debated.
To address this question, we have performed the first chemo-hydrodynamical simulation of the entire Milky-Way Galaxy (\autoref{fig:density_projection}), including stochastic star formation and stellar feedback in the form of H \textsc{ii} regions, supernovae, and element injection. Our simulations have enough resolution to capture individual supernovae, so that we can properly measure the full range of variation in SLR abundances that results from the stochastic nature of element production and transport. From our simulations we measure the expected distribution of $^{60}\mbox{Fe} / {}^{56}\mbox{Fe}$ and $^{26}\mbox{Al} / {}^{27}\mbox{Al}$ ratios for all stars in the Galaxy (\autoref{fig:abundance_ratios}). We find that the Solar abundance ratios inferred from meteorites are well within the normal range for Milky-Way stars; contrary to some models for the origins of SLRs, the Sun's SLR abundances are not atypical.
Our results lead us to propose a new enrichment scenario: SLR enrichment via Galactic-scale correlated star formation. We find that GMCs are at most 100 pc in size and their star forming regions are much smaller, while regions of $^{60}\mbox{Fe}$ and $^{26}\mbox{Al}$ contamination due to supernovae are an order of magnitude larger (\autoref{fig:density_projection_zoom}). The extremely broad distribution of $^{26}\mbox{Al}$ produced in our simulations is consistent with the observed distribution on the sky, which shows an angular scale height that is close to twice that of the molecular gas and star formation in the Milky-Way (\autoref{fig:l_b_plot}). The SLRs are not confined to the molecular clouds in which they are born (\autoref{fig:phase_plots}). However, SLRs are nonetheless abundant in newborn stars because star formation is correlated on galactic scales (\autoref{fig:cross_correlation}). Thus, although SLRs are not confined, they in effect pre-enrich a halo of atomic gas around existing GMCs that is very likely to be subsequently accreted or to form another GMC, so that new generations of stars preferentially form in patches of the Galaxy contaminated by previous generations of stellar winds and supernovae.
In future work, we will extend our simulations to include other SLRs such as $^{41}\mbox{Ca}$ and $^{53}\mbox{Mn}$, which also have been claimed to place severe constraints on the birth environment of the Solar system \citep{HussEtAl2009}.
\section*{Acknowledgements}
The authors would like to thank the referee, Roland Diehl, for his careful reading and helpful suggestions. Simulations were carried out on the Cray XC30 at the Center for Computational Astrophysics (CfCA) of the National Astronomical Observatory of Japan and the National Computational Infrastructure (NCI), which is supported by the Australian Government. Y.F. and M.R.K.~acknowledge support from the Australian Government through the Australian Research Council's \textit{Discovery Projects} funding scheme (project DP160100695). Computations described in this work were performed using the publicly available \textsc{enzo} code (\citealt{BryanEtAl2014}; \url{http://enzo-project.org}), which is the product of a collaborative effort of many independent scientists from numerous institutions around the world. Their commitment to open science has helped make this work possible. We acknowledge extensive use of the \textsc{yt} package (\citealt{TurkEtAl2011}; \url{http://yt-project.org}) in analysing these results and the authors would like to thank the \textsc{yt} development team for their generous help.
\bibliographystyle{mnras}
\section{Introduction}
The last decade has seen a great increase in the use of simulation-based
inference, where numerical approximations are based on either Markov
chain Monte Carlo or sequential Monte Carlo sampling. These approaches
have become popular, in part, because of increasing computational
power and the emergence of efficient stochastic optimization algorithms.
In the Bayesian paradigm, particle Markov chain Monte Carlo has been
introduced and popularized by Doucet and collaborators \cite{andrieu10,andrieu2015convergence,pitt2012some}.
Similar ideas had been developed previously \cite{leggetter1995maximum,doucet02,gaetan03,jacquier07},
but in contexts other than simulation-based inference. In the
frequentist paradigm, \cite{ionides06-pnas,ionides11} introduced an original
approach to simulation-based parameter inference in POMP models
that combines stochastic gradient approximation and particle filtering.
In this paper, we focus on improving one of the most popular
algorithms of this class, namely iterated filtering (IF). Iterated
filtering uses an estimate of the gradient of the log-likelihood
computed from particle filters, while proposing artificial perturbation
moves to update the parameters. This class of algorithms is attractive
because it enables routine simulation-based parameter inference in
general POMP models, even when the likelihood is intractable.
Owing to its attractive theoretical properties \cite{ionides11,ionides15,nguyen2016another},
it has been applied in fields as varied as biology, ecology,
economics and engineering \cite{lindstrom12,lindstrom2013tuned,laneri10,bhadra10,breto11,breto2014idiosyncratic}.
The theory of iterated filtering was later developed by \citet{ionides11}.
Recently, \citet{lindstrom12} extended it to improve numerical
performance, while \citet{doucet2013derivative} extended it to
filtering/smoothing with quite attractive theoretical properties.
\citet{ionides15} generalized \citet{lindstrom12}'s approach and
combined the idea with data cloning \citep{lele07}, developing a Bayes
map iterated filtering with an entirely different theoretical approach.
\citet{nguyenis215} revisited the approach of \citet{doucet2013derivative},
using a different perturbation noise and computing both the gradient
and the Hessian. In the same spirit of handling intractable likelihoods
in iterated filtering, \citet{Poyiadjis-etal:2009, nemeth2014sequential, doucet2013derivative}
showed that gradient and Hessian information can also be computed from
a particle filter. Along the same lines, the manifold Metropolis-adjusted
Langevin algorithm (mMALA) \cite{girolami2011riemann} exploits Hessian
information to simplify tedious tuning while improving the convergence
rate. However, this relies on the rather strong assumption that the
gradient and Hessian of the transition density and the observation density
can be evaluated, which is quite unrealistic in many real-world applications.
We therefore follow the former approaches, based solely on the very weak
assumptions of being able to sample from the transition density and to
evaluate the observation density. Motivated by the fact that gradient and
Hessian information can be approximated using the first and second moments
\cite{ionides11, doucet2013derivative}, we propose to use such approximations
in the context of accelerated iterated filtering. \citet{ionides11} uses
only the score vector, while \citet{doucet2013derivative} also includes
Hessian information, but for independent white-noise perturbations, which
is not very useful in the context of iterated filtering with its natural
random walk noise. \citet{nguyenis215} proposed to approximate the
gradient and Hessian using random walk noise to efficiently explore the
mode of the likelihood. Rather than exploiting approximations of the
Hessian under weak assumptions, we choose an alternative approach: we
apply the accelerated gradient method to the approximation of the
gradient of the log-likelihood, yielding an effective estimation approach.
The key contributions of this paper are threefold. First, we develop
an accelerated iterated filtering algorithm and show that it converges
using a general non-increasing step size with a biased approximation of
the gradient. It is simple, elegant, and generalizable to faster algorithms.
Second, we prove that it attains a higher convergence rate under general
convex and non-convex conditions on the objective log-likelihood. Finally,
we show substantial improvements of the method on a toy problem and
on a real-world challenge problem, a vivax malaria model, compared
to previous simulation-based inference approaches.
The paper is organized as follows. In the next section we introduce
some notation and develop the framework of accelerated iterated
filtering. In Section \ref{sec:AIF}, we state the convergence of
this approximation method to the true maximum likelihood estimate,
obtained by iterating and accelerating noisy gradients of the log-likelihood.
We validate the proposed methodology on a toy example and on a challenging
inference problem, fitting a malaria transmission model to time
series data, in Section \ref{sec:5Experiments}, showing substantial
gains for our methods over current alternatives. We conclude in Section
\ref{sec:5Conclusion} with suggestions for future work.
The proofs are postponed to the Appendix.
\section{Background of simulation-based inferences}
We are interested in a general latent variable model, since this is a ubiquitous
model in the applied sciences. Let $\mathcal{X}$ be a latent state space
with a density $q_{\theta}(x)$ parameterized by $\theta\in\Theta=\mathbb{R}^{d}$,
and let $\mathcal{Y}$ be an observation space equipped with a conditional
density $f_{\theta}(y|x)$. The observation $y\in\mathcal{Y}$ is
considered fixed, and we write the log-likelihood of
the data as $\ell(\theta)\overset{\triangle}{=}\log\int q_{\theta}(x)f_{\theta}(y|x)dx$.
We work with the maximum likelihood estimator, $\hat{\theta}=\arg\max\ell(\theta)$,
where $\ell(\theta)$ is intractable but $f_{\theta}(y|x)$ can be
evaluated, resorting to samples when $f_{\theta}(y|x)$
is also intractable. This process often uses first-order stochastic
approximation \cite{kushner78}, which involves a Monte Carlo approximation
to the difference equation $\theta_{m}=\theta_{m-1}+\gamma_{m}\nabla\ell(\theta_{m-1})$,
where $\theta_{0}\in\Theta$ is an arbitrary initial estimate and
$\{\gamma_{m}\}_{m\geq1}$ is a sequence of step sizes with ${\sum_{m\geq1}\gamma_{m}=\infty}$
and ${\sum_{m\geq1}\gamma_{m}^{2}<\infty}$. The algorithm converges
to a local maximum of $\ell(\theta)$ under regularity conditions.
The term $\nabla\ell(\theta)$, also called the score function, is shorthand
for the $\mathbb{R}^{d}$-valued vector of partial derivatives, $\nabla\ell(\theta)=\frac{\partial\ell(\theta)}{\partial\theta}$.
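As an illustrative sketch (not part of the formal development), the stochastic approximation recursion above can be run on a hypothetical scalar Gaussian log-likelihood $\ell(\theta)=-\tfrac{1}{2}(y^{*}-\theta)^{2}$, whose score is $y^{*}-\theta$; the step sizes $\gamma_{m}=1/m$ satisfy the two summability conditions:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_score(theta, y_star=2.0):
    # Score of the toy log-likelihood ell(theta) = -0.5*(y* - theta)^2,
    # corrupted by zero-mean noise standing in for Monte Carlo error.
    return (y_star - theta) + 0.1 * rng.standard_normal()

theta = 0.0  # arbitrary initial estimate theta_0
for m in range(1, 5001):
    gamma_m = 1.0 / m  # sum gamma_m = inf, sum gamma_m^2 < inf
    theta = theta + gamma_m * noisy_score(theta)

# theta should now be close to the maximizer y* = 2.0
```

The noise is averaged out by the decaying step sizes, which is exactly the mechanism the regularity conditions above formalize.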
Sequential Monte Carlo (SMC) approaches have previously been developed
to estimate the score function \citep{Poyiadjis-etal:2009,nemeth2013particle,DahlinLindstenSchon2015a}.
However, under the simulation-based setting, which does not require the
ability to evaluate transition densities and their derivatives, these
approaches are not applicable. As a result, \cite{ionides11}, \cite{doucet2013derivative}
used an artificial dynamics approach to estimate the derivatives. Specifically,
\cite{nguyenis215} considers a parametric model consisting of a
density $p_{Y}(y;\theta)$ with the log-likelihood of the data $y^{*}\in\mathcal{Y}$
given by $\ell(\theta)=\log p_{Y}(y^{*};\theta)$. A stochastically
perturbed model corresponding to a pair of random variables $(\breve{\Theta},\breve{Y})$
having a joint probability density on $\mathbb{R}^{d}\times\mathcal{Y}$
can be defined as $p_{\breve{\Theta},\breve{Y}}(\breve{\vartheta},\ y;\theta,\ \tau)=\tau^{-d}\kappa\left\{ \tau^{-1}(\breve{\vartheta}-\theta)\right\} p_{Y}(y;\breve{\vartheta}).$
Suppose that the following regularity conditions, identical to the assumptions
of \cite{doucet2013derivative}, hold:
\begin{assumption}\label{ass1} There exists $C<\infty$ such that
for any integer $k\geq1,1\leq i_{1},\ \ldots,\ i_{k}\leq d$ and $\beta_{1},\ \ldots,\ \beta_{k}\geq1$,
$\int\left|u_{i_{1}}^{\beta_{1}}u_{i_{2}}^{\beta_{2}}\cdots u_{i_{k}}^{\beta_{k}}\right|\kappa(u)\ du\leq C,$
where $\kappa$ is a symmetric probability density on $\mathbb{R}^{d}$
with respect to Lebesgue measure and $\Sigma=(\sigma_{i,j})_{i,j=1}^{d}$
is the non-singular covariance matrix associated to $\kappa$. \end{assumption}
\begin{assumption}\label{ass2} There exist $\gamma,\ \delta,\ M>0,$
such that for all $u\in\mathbb{R}^{d}$, $|u|>M\Rightarrow\kappa(u)<e^{-\gamma|u|^{\delta}}.$
\end{assumption}
\begin{assumption}\label{ass3} $\ell$ is four times continuously
differentiable, with $\delta$ defined as in Assumption \ref{ass2}.
For all $\theta\in\mathbb{R}^{d}$, there exists $0<\eta<\delta,\ \epsilon,\ D>0,$
such that for all $u\in\mathbb{R}^{d}$, $\mathcal{L}(\theta+u)\leq De^{\epsilon|u|^{\eta}},$
where $\mathcal{L}$ : $\mathbb{R}^{d}\rightarrow\mathbb{R}$ is the
associated likelihood function $\mathcal{L}=\exp\ell$. \end{assumption}
Under these regularity assumptions, \cite{doucet2013derivative}
show that
\begin{equation}
\left|\tau^{-2}\Sigma^{-1}{\mathbb{E}}\left(\breve{\Theta}-\theta\left|\breve{Y}=y^{*}\right.\right)-\nabla\ell\left(\theta\right)\right|<C\tau^{2}.\label{eq:4.1}
\end{equation}
These approximations are useful for latent variable models, where
the log-likelihood of the model consists of marginalizing over a latent
variable, $X$,
$$\ell(\theta)=\log\int{p_{{X},{Y}}(x,y^{*};\theta)\, dx}.$$
In this case, the expectation in equation~(\ref{eq:4.1}) can be approximated
by Monte Carlo importance sampling, as proposed by \cite{ionides11}
and \cite{doucet2013derivative}.
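A minimal numerical sketch of this approximation, with made-up values: take a scalar Gaussian model $p_{Y}(y;\vartheta)=\mathcal{N}(y;\vartheta,1)$ and kernel $\kappa=\mathcal{N}(0,1)$, so $\Sigma=1$ and the exact score is $\nabla\ell(\theta)=y^{*}-\theta$. The rescaled posterior mean $\tau^{-2}\Sigma^{-1}{\mathbb{E}}(\breve{\Theta}-\theta|\breve{Y}=y^{*})$, computed by importance sampling from the perturbation kernel, recovers the score up to the $O(\tau^{2})$ bias of (\ref{eq:4.1}):

```python
import numpy as np

rng = np.random.default_rng(1)

theta, y_star, tau = 0.0, 1.0, 0.2
n = 200_000

# Draw perturbed parameters breve_Theta ~ N(theta, tau^2)  (Sigma = 1).
vartheta = theta + tau * rng.standard_normal(n)

# Importance weights proportional to the likelihood p_Y(y*; vartheta).
w = np.exp(-0.5 * (y_star - vartheta) ** 2)
w /= w.sum()

# tau^{-2} Sigma^{-1} E(breve_Theta - theta | Y = y*), cf. equation (4.1).
score_hat = np.sum(w * (vartheta - theta)) / tau**2

true_score = y_star - theta  # exact gradient of ell(theta) = log N(y*; theta, 1)
```

Here the deviation of `score_hat` from `true_score` is dominated by the $O(\tau^{2})$ bias predicted by the theory, plus Monte Carlo noise.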
In \cite{nguyenis215}, the POMP model is a specific latent variable
model with ${X}=X_{0:N}$ and ${Y}=Y_{1:N}$. A perturbed POMP model
is defined to have a similar construction to our perturbed latent
variable model with ${\breve{X}}=\breve{X}_{0:N}$, $\breve{Y}=\breve{Y}_{1:N}$
and $\breve{\Theta}=\breve{\Theta}_{0:N}$. \citet{ionides11} perturbed
the parameters by setting $\breve{\Theta}_{0:N}$ to be a random walk
starting at $\theta$, whereas \citet{doucet2013derivative} took
$\breve{\Theta}_{0:N}$ to be independent additive white noise perturbations
of $\theta$. We take advantage of the asymptotic developments of
\citet{doucet2013derivative} while maintaining some practical advantages
of random walk perturbations for finite computations, so we construct
$\breve{\Theta}_{0:N}$ as in \cite{nguyenis215}, as follows.
Let $Z_{0},\ \ldots,\ Z_{N}$ be $N+1$ independent draws from a density
$\psi$. \cite{nguyenis215} introduces $N+2$ perturbation parameters,
$\tau$ and $\tau_{0},\ldots,\tau_{N}$, and construct a process $\breve{\Theta}_{0:N}$
by setting ${\breve{\Theta}_{n}=\theta+\tau\sum_{i=0}^{n}\tau_{i}Z_{i}}$ for $0\le n\le N$.
We later consider a limit where $\tau_{0:N}$
is held fixed and the scale factor $\tau$ decreases toward zero, and subsequently
another limit where $\tau_{0}$ is fixed but $\tau_{1:N}$ decrease
toward zero together with $\tau$. Let $p_{\breve{\Theta}_{0:N}}(\breve{\vartheta}_{0:N};\theta,\ \tau,\ \tau_{0:N})$
be the probability density of $\breve{\Theta}_{0:N}$. We define
the artificial random variables $\breve{\Theta}_{0:N}$ via their
density,
\begin{multline*}
p_{\breve{\Theta}_{0:N}}(\breve{\vartheta}_{0:N};\theta,\ \tau,\ \tau_{0:N})=\\
(\tau\tau_{0})^{-d}\psi\left\{ (\tau\tau_{0})^{-1}(\breve{\vartheta}_{0}-\theta)\right\}\times\prod_{n=1}^{N}(\tau\tau_{n})^{-d}\psi\left\{ (\tau\tau_{n})^{-1}(\breve{\vartheta}_{n}-\breve{\vartheta}_{n-1})\right\}.
\end{multline*}
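The random walk construction $\breve{\Theta}_{n}=\theta+\tau\sum_{i=0}^{n}\tau_{i}Z_{i}$ underlying this density is simply a cumulative sum of scaled draws; a sketch with illustrative values for $\theta$, $\tau$ and $\tau_{0:N}$ (here $\psi=\mathcal{N}(0,1)$):

```python
import numpy as np

rng = np.random.default_rng(2)

theta, tau, N = 0.5, 0.1, 10_000
tau_seq = np.full(N + 1, 0.3)      # tau_0, ..., tau_N (here all equal)
Z = rng.standard_normal(N + 1)     # N+1 independent draws from psi = N(0, 1)

# breve_Theta_n = theta + tau * sum_{i=0}^{n} tau_i * Z_i,  0 <= n <= N
breve_theta = theta + tau * np.cumsum(tau_seq * Z)

# Each increment breve_Theta_n - breve_Theta_{n-1} = tau * tau_n * Z_n,
# so its standard deviation is tau * tau_n = 0.03.
increments = np.diff(breve_theta)
```

The increments match the factors $\psi\{(\tau\tau_{n})^{-1}(\breve{\vartheta}_{n}-\breve{\vartheta}_{n-1})\}$ in the density above.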
We define the stochastically perturbed model with a Markov process
$\{(\breve{X}_{n},\breve{\Theta}_{n}),\ 0\leq n\leq N\}$, observation
process $\breve{Y}_{1:N}$ and parameter $(\theta,\ \tau,\ \tau_{0:N})$
by the factorization of their joint probability density
\begin{multline*}
p_{\breve{X}_{0:N},\breve{Y}_{1:N},\breve{\Theta}_{0:N}}(x_{0:N},y_{1:N},\breve{\vartheta}_{0:N};\theta,\ \tau,\ \tau_{0:N})\\
=p_{\breve{\Theta}_{0:N}}(\breve{\vartheta}_{0:N};\theta,\ \tau,\ \tau_{0:N})p_{\breve{X}_{0:N},\breve{Y}_{1:N}|\breve{\Theta}_{0:N}}(x_{0:N},\ y_{1:N}|\breve{\vartheta}_{0:N}),
\end{multline*}
where
\begin{multline*}
p_{\breve{X}_{0:N},\breve{Y}_{1:N}|\breve{\Theta}_{0:N}}(x_{0:N},y_{1:N}|\breve{\vartheta}_{0:N};\theta,\tau,\ \tau_{0:N})=\\
\mu(x_{0};\breve{\vartheta}_{0})\prod_{n=1}^{N}f_{n}(x_{n}|x_{n-1};\breve{\vartheta}_{n})\prod_{n=1}^{N}g_{n}(y_{n}|x_{n};\breve{\vartheta}_{n}).
\end{multline*}
This extended model can be used to define a perturbed parameter log-likelihood
function, defined as
\begin{equation}
\breve{\ell}(\breve{\vartheta}_{0:N})=\log p_{\breve{Y}_{1:N}|\breve{\Theta}_{0:N}}(y_{1:N}^{*}|\breve{\vartheta}_{0:N};\theta,\tau,\tau_{0:N}).\label{eq:eloglik-1}
\end{equation}
Here, the right-hand side does
not depend on $\theta$, $\tau$ or $\tau_{0:N}$. We have designed
(\ref{eq:eloglik-1}) so that, setting $\breve{\vartheta}^{[N+1]}=(\theta,\theta,\dots,\theta)\in\mathbb{R}^{d(N+1)},$ the log-likelihood of the unperturbed model can be written as
$\ell(\theta)=\breve{\ell}(\breve{\vartheta}^{[N+1]}).$
For the perturbed likelihood, we need an additional assumption,
an extended version of Assumption \ref{ass3}.
\begin{assumption}\label{ass5-1} $\breve{\ell}$ is four times continuously
differentiable. For all $\theta\in\mathbb{R}^{d}$, there exist $\epsilon>0$,
$D>0$ and $\delta$ defined as in Assumption \ref{ass2}, such that
for all $0<\eta<\delta$ and $u_{0:N}\in\mathbb{R}^{d(N+1)}$, $\breve{\mathcal{L}}(\breve{\vartheta}^{[N+1]}+u_{0:N})\leq De^{\epsilon\sum_{n=1}^{N}|u_{n}|^{\eta}},$
where $\breve{\mathcal{L}}(\breve{\vartheta}_{0:N})=\exp\{\breve{\ell}(\breve{\vartheta}_{0:N})\}$
is the perturbed likelihood. \end{assumption} Let $\breve{\mathbb{E}}_{\theta,\tau,\tau_{0:N}}$,
${\mathrm{\breve{C}ov}}_{\theta,\tau,\tau_{0:N}}$, ${\mathrm{\breve{V}ar}}_{\theta,\tau,\tau_{0:N}}$
denote the expectation, covariance and variance with respect to the
associated posterior, $p_{\breve{\Theta}_{0:N}|\breve{Y}_{1:N}}(\breve{\vartheta}_{0:N}|y_{1:N}^{*};\theta,\ \tau,\tau_{0:N}).$
Writing $\breve{\mathbb{E}}$, ${\mathrm{\breve{C}ov}}$, ${\mathrm{\breve{V}ar}}$
for $\breve{\mathbb{E}}_{\theta,\tau,\tau_{0:N}}$, ${\mathrm{\breve{C}ov}}_{\theta,\tau,\tau_{0:N}}$,
${\mathrm{\breve{V}ar}}_{\theta,\tau,\tau_{0:N}}$ respectively, a
theorem similar to Theorem 4 of \cite{doucet2013derivative}, but
for random walk noise instead of independent white noise, can be derived.
\begin{theorem} \label{thm1-1} [Theorem 2 of \cite{nguyenis215}] Suppose Assumptions \ref{ass1}, \ref{ass2}
and \ref{ass5-1} hold. Then there exists a constant $C$, independent of $\tau,\tau_{1},\ldots,\tau_{N}$,
such that
\[
\left|\nabla\ell\left(\theta\right)-\tau^{-2}\Psi^{-1}\left\{ \tau_{0}^{-2}\breve{\mathbb{E}}\left(\breve{\Theta}_{0}-\theta|\breve{Y}_{1:N}=y_{1:N}^{*}\right)\right\} \right|<C\tau^{2},
\]
where $\Psi$ is the non-singular covariance matrix associated to
$\psi$. \end{theorem}
Theorem \ref{thm1-1} formally allows an approximation of $\nabla{\ell}\left(\theta\right)$.
\cite{nguyenis215} also presents alternative variations on these
results, which lead to more stable Monte Carlo estimation. \begin{theorem}
\label{thm3}[Theorem 3 of \cite{nguyenis215}] Suppose Assumptions
\ref{ass1}, \ref{ass2} and \ref{ass5-1} hold. In addition, assume
that $\tau_{n}=O(\tau^{2})$ for all $n=1,\ldots,N$. Then the following
holds:
\begin{equation}
\left|\nabla\ell\left(\theta\right)-\frac{1}{N+1}\tau^{-2}\tau_{0}^{-2}\Psi^{-1}\sum_{n=0}^{N}\left\{ \breve{\mathbb{E}}\left(\breve{\Theta}_{n}-\theta|\breve{Y}_{1:N}=y_{1:N}^{*}\right)\right\} \right|=O(\tau^{2}).
\end{equation}
\end{theorem}
These theorems are useful for our approach because they allow us to
approximate the gradient of the log-likelihood of the extended model
to second order in $\tau$, which, as we show later, fits well with our
accelerated simulation-based setup.
\section{Proposed accelerated iterated filtering} \label{sec:AIF}
Our motivation comes from the literature on accelerated gradient methods
for smooth nonlinear stochastic programming. By using an approximation of the
score function, it is possible to use an accelerated gradient method
in the spirit of Nesterov's acceleration scheme from the optimization
literature. One issue with the accelerated gradient approach is that it
is not clear how the technique can be used in situations where both the
likelihood and the gradient are intractable. Such examples are common
in scientific applications of state space models, where the state process
is a diffusion process or an ordinary differential equation (ODE)
with stochastic coefficients. However, in this family
of iterated filtering approaches, the score function can be approximated
with controlled noise without affecting the convergence rate. Specifically,
applying an accelerated inexact gradient algorithm within the iterated
filtering approach can attain an optimal rate of convergence.
In this paper, $\epsilon_{k}$ denotes the error in the approximation
of the gradient. Using the same notation as \cite{ghadimi2016accelerated},
we denote by $\left\{ \left\Vert \epsilon_{k}\right\Vert \right\} $ the
sequence of magnitudes of the errors in the gradient approximations.
We make the following assumptions: \begin{assumption}\label{ass6} The function
$\ell$ : $\Theta\rightarrow\mathbb{R}$ is differentiable, bounded
from above and has an $L$-Lipschitz-continuous gradient, i.e., for all
$\theta,\ \vartheta\in\Theta$, $\left\Vert \nabla\ell(\theta)-\nabla\ell(\vartheta)\right\Vert \leq L\left\Vert \theta-\vartheta\right\Vert ,$
where $\nabla\ell$ denotes the gradient of $\ell$. The function
$\ell$ attains its maximum at a certain $\theta^{*}\in\Theta$. \end{assumption}
In the sequel, $\Theta$ denotes a finite-dimensional Euclidean space
with norm $\left\Vert \cdot\right\Vert $ and inner product $\left\langle \cdot,\cdot\right\rangle $.
It can be shown (see, e.g., \cite{nesterov2005smooth}) that Assumption \ref{ass6}
implies
\begin{equation}
\left|\ell(\vartheta)-\ell(\theta)-\left\langle \nabla\ell(\theta),\mathrm{\vartheta}-\theta\right\rangle \right|\leq\frac{L}{2}\left\Vert \vartheta-\theta\right\Vert ^{2},\;{\displaystyle \forall\theta,\ \vartheta\in\Theta}\label{eq:2.1}
\end{equation}
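Inequality \eqref{eq:2.1} can be checked numerically on a hypothetical example: for the concave quadratic $\ell(\theta)=-\frac{1}{2}\theta^{\top}A\theta$ with $A$ symmetric positive definite, the gradient is $L$-Lipschitz with $L=\Vert A\Vert_{2}$, and the quadratic bound holds at every pair of points:

```python
import numpy as np

rng = np.random.default_rng(3)

A = np.array([[2.0, 0.5], [0.5, 1.0]])  # symmetric positive definite (toy choice)
L = np.linalg.norm(A, 2)                # Lipschitz constant of the gradient

def ell(t):
    return -0.5 * t @ A @ t

def grad(t):
    return -A @ t

# Check |ell(v) - ell(t) - <grad(t), v - t>| <= (L/2) ||v - t||^2 at random points.
ok = True
for _ in range(1000):
    t, v = rng.standard_normal(2), rng.standard_normal(2)
    lhs = abs(ell(v) - ell(t) - grad(t) @ (v - t))
    ok = ok and lhs <= 0.5 * L * np.sum((v - t) ** 2) + 1e-12
```

For this quadratic the left-hand side equals $\frac{1}{2}(\vartheta-\theta)^{\top}A(\vartheta-\theta)$, so the bound is tight along the leading eigenvector of $A$.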
It is well-known that the gradient descent method converges for a
general non-convex optimization problem but it does not achieve the
optimal rate of convergence, in terms of the functional optimality
gap, when $\ell(\cdot)$ is convex \cite{ghadimi2016accelerated}.
In contrast, the accelerated gradient method in \cite{nesterov2013introductory}
is optimal for solving convex optimization problems, but does not
necessarily converge for solving nonconvex optimization problems.
\cite{ghadimi2016accelerated} proposed a modified accelerated gradient
method which can converge in both convex and non-convex optimization
problems. However, they assumed an unbiased estimate of the gradient,
which is not available for most simulation-based inferences. Below,
we extend the approach of \cite{ghadimi2016accelerated} to an accelerated
inexact gradient (AIG) method in the context of accelerated iterated
filtering. That is, we allow bias in the gradient approximation
by properly specifying the step-size policy. We prove that it not only
achieves the same optimal rate of convergence for both convex and non-convex optimization,
but also exhibits the best-known rate of convergence for simulation-based
inference problems.
\begin{algorithm}[H]
\caption{Accelerated Inexact Gradient (AIG)}
\label{alg0}
\begin{algorithmic}[1]
\Statex
\INPUT
\Statex $\theta_{0}\in\Theta.$
\Statex $\left\{ \beta_{\mathrm{k}}>0\right\}$, $\left\{ \lambda_{k}>0\right\}$ for any $k\geq2$.
\Statex $\left\{ \alpha_{k}\right\} \in\left(0,1\right)$ for $k>1$ and $\alpha_{1}=1$.
\newline
\State $\theta_{0}^{ag}=\theta_{0}$. \Comment Initialize
\For {$k$ in $1...N$}
\State \begin{equation}\theta_{k}^{md}=(1-\alpha_{k})\theta_{k-1}^{ag}+\alpha_{k}\theta_{k-1}\label{eq:2.2}\end{equation}
\State \begin{equation}\theta_{k}=\theta_{k-1}-\lambda_{k}\left(\widehat{\nabla\ell(\theta_{k}^{md})}\right)\label{eq:2.3}\end{equation}
\State \begin{equation}\theta_{k}^{ag}=\theta_{k}^{md}-\beta_{k}\left(\widehat{\nabla\ell(\theta_{k}^{md})}\right)\label{eq:2.4}\end{equation} \Comment where $\widehat{\nabla\ell(\theta_{k}^{md})}$ is an estimate of
$\nabla\ell(\theta_{k}^{md})$ with error $\epsilon_{k}$.
\EndFor
\end{algorithmic}
\end{algorithm}
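To make the recursions \eqref{eq:2.2}--\eqref{eq:2.4} concrete, here is a minimal sketch on a hypothetical quadratic objective. As written in the algorithm, the updates move against the inexact gradient, so the sketch minimizes a smooth function $f$ (playing the role of $-\ell$); the choices $\alpha_{k}=2/(k+1)$, $\beta_{k}=1/(2L)$, $\lambda_{k}=\beta_{k}$ and a gradient error decaying like $O(1/k)$ are one admissible selection, not the only one:

```python
import numpy as np

rng = np.random.default_rng(4)

Lc = 2.0                   # Lipschitz constant of grad f, with f(t) = 0.5*Lc*t^2
theta = theta_ag = 5.0     # theta_0 = theta_0^ag
N_iter = 300

def inexact_grad(t, k):
    # grad f(t) plus a noise term epsilon_k of magnitude O(1/k)
    return Lc * t + rng.standard_normal() * 0.1 / k

for k in range(1, N_iter + 1):
    alpha = 2.0 / (k + 1) if k > 1 else 1.0  # alpha_1 = 1
    beta = 1.0 / (2.0 * Lc)                  # beta_k = 1/(2L)
    lam = beta                               # lambda_k in [beta_k, (1+1/k)beta_k]
    theta_md = (1 - alpha) * theta_ag + alpha * theta   # eq (2.2)
    g = inexact_grad(theta_md, k)
    theta = theta - lam * g                             # eq (2.3)
    theta_ag = theta_md - beta * g                      # eq (2.4)

# theta_ag should approach the minimizer 0 of f
```

Despite the biased, decaying gradient noise, the aggregated iterate $\theta^{ag}_{k}$ converges, which is the behaviour Theorems \ref{thm4} and \ref{thm5} quantify.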
In addition to Assumption \ref{ass6}, we assume a noise control condition
for Algorithm \ref{alg0}.
\begin{assumption}\label{ass7}
$\Theta$ is bounded. There exists an $A<\infty$ such that $\sum_{k=1}^{N}\lambda_{k}\left\Vert \epsilon_{k}\right\Vert <A.$
\end{assumption}
Given some mild conditions often satisfied by controlling the artificial
noises, we have the following result.
\begin{theorem}\label{thm4}
(Extension of Theorem 1 of \cite{ghadimi2016accelerated}).\\
Suppose Assumptions \ref{ass6} and \ref{ass7} hold. In addition, let $\{\theta_{k},\ \theta_{k}^{ag}\}$ $k\geq1$ be computed by Algorithm \ref{alg0}.\\
a) If sequences $\left\{ \alpha_{k}\right\} ,\left\{ \beta_{k}\right\} $,
$\left\{ \lambda_{k}\right\} $ and $\left\{ \Gamma_{k}\right\} $
satisfy
\begin{equation}
\Gamma_{k}:=\begin{cases}
1 & k=1\\
(1-\alpha_{k})\Gamma_{k-1} & k\geq2
\end{cases},\ \label{eq:2.6}
\end{equation}
\begin{equation}
C_{k}:=1-L{\displaystyle \lambda_{k}-\frac{L(\lambda_{k}-\beta_{k})^{2}}{2\lambda_{k}\alpha_{k}\Gamma_{k}}\left(\sum_{\tau=k}^{N}\frac{1}{\Gamma_{\tau}}\right)>0, \mbox{ for}\ 1\leq k\leq N}\label{eq:2.7},
\end{equation}
then for any $N\geq1$, we have for some $B<\infty$,
\begin{equation}
{\displaystyle \min_{k=1,...,N}\left\Vert \nabla\ell(\theta_{k}^{md})+\epsilon_{k}\right\Vert ^{2}\leq\frac{\ell(\theta_{0})-\ell^{*}+B}{\sum_{k=1}^{N}\lambda_{k}C_{k}}}.\label{eq:2.8}
\end{equation}
b) Suppose that $\ell(\cdot)$ is convex. If sequences $\left\{ \alpha_{k}\right\} ,\left\{ \beta_{k}\right\} $,$\left\{ \lambda_{k}\right\} $
and $\left\{ \Gamma_{k}\right\} $ satisfy
\begin{equation}
\alpha_{k}\lambda_{k}\leq\beta_{k}<\frac{1}{L}\label{eq:2.9},
\end{equation}
\begin{equation}
\frac{\alpha_{1}}{\lambda_{1}\Gamma_{1}}\geq\frac{\alpha_{2}}{\lambda_{2}\Gamma_{2}}\geq\ldots\label{eq:2.10},
\end{equation}
then for any $N\geq1$, we have
\begin{multline}
\min_{k=1,...,N}\left\Vert \nabla\ell(\theta_{k}^{md})+\epsilon_{k}\right\Vert ^{2}\\
\leq2\frac{\frac{\Vert \theta^{*}-\theta_{0}\Vert^{2}}{2\lambda_{1}}+\sum_{k=1}^{N}\Gamma_{k}^{-1}\left[\beta_{k}\left\Vert \epsilon_{k}\right\Vert \left\Vert \nabla\ell(\theta_{k}^{md})+\epsilon_{k}\right\Vert +\alpha_{k}\left\Vert \epsilon_{k}\right\Vert \left\Vert \theta_{k-1}-\theta_0\right\Vert \right]}{\sum_{k=1}^{N}\Gamma_{k}^{-1}\beta_{k}(1-L\beta_{k})},\label{eq:2.11}
\end{multline}
\begin{multline}
\ell(\theta_{N}^{ag})-\ell(\theta^{*})\\
\leq\Gamma_{N}\left[\frac{\left\Vert \theta_{0}-\theta^{*}\right\Vert ^{2}}{\lambda_{1}}+\sum_{k=1}^{N}\Gamma_{k}^{-1}\left[\beta_{k}\left\Vert \epsilon_{k}\right\Vert \left\Vert \nabla\ell(\theta_{k}^{md})+\epsilon_{k}\right\Vert +\alpha_{k}\left\Vert \epsilon_{k}\right\Vert \left\Vert \theta_{k-1}-\theta_0\right\Vert \right]\right]\label{eq:2.12}.
\end{multline}
\end{theorem}
There are various options for selecting $\left\{ \alpha_{k}\right\} ,\left\{ \beta_{k}\right\} $,$\left\{ \lambda_{k}\right\} $,$\left\{ \Gamma_{k}\right\}$.
By controlling error $\epsilon_{k}$, we can provide some of these
selections below which guarantee the optimal convergence rate of the
AIG algorithm for both convex and nonconvex problems.
\begin{theorem}\label{thm5}
Suppose Assumptions \ref{ass6} and \ref{ass7} hold. In addition, suppose that $\left\{ \beta_{k}\right\} $ in the accelerated
gradient method are set to $\beta_{k}=\frac{1}{2L}$.
a) If sequences $\left\{ \alpha_{k}\right\}$ and $\left\{ \lambda_{k}\right\}$
satisfy
\begin{equation}
\lambda_{k}\in\left[\beta_{k},(1+\frac{1}{k})\beta_{k}\right], \mbox{ for }{\displaystyle \;\forall k\geq1}\label{eq:2.28},
\end{equation}
then for any $N\geq1$, we have
\begin{equation}
{\displaystyle \min_{k=1,\ldots, N}\Vert\nabla\ell(\theta_{k}^{md})+\epsilon_{k}\Vert^{2}\leq O\left(\frac{1}{N}\right)}\label{eq:2.29}.
\end{equation}
If $\epsilon_{k}=O\left(\tau^{2}\right)\leq O(\frac{1}{k})$,
then the AIG method can find a solution $\bar{\theta}$ such
that $\left\Vert \nabla\ell(\bar{\theta})\right\Vert ^{2}\leq\epsilon$
in at most $O(1/\epsilon^{2})$ iterations.
b) Suppose that $\ell(\cdot)$ is convex and $\epsilon_{k}=O\left(\tau^{2}\right)\leq O(\frac{1}{k^{2+\delta+\delta_{1}}})$
for some $\delta_{1}>0$. If $\left\{ \lambda_{k}\right\} $ satisfies
\begin{equation}
{\displaystyle \lambda_{k}=\left(k^{1+\delta}-\left(k-1\right)^{1+\delta}\right)\:\forall k\geq1},\label{eq:2.30}
\end{equation}
then for any $N\geq1$, we have
\begin{equation}
\min_{k=1,...,N}\left\Vert \nabla\ell(\theta_{k}^{md})+\epsilon_{k}\right\Vert ^{2}{\displaystyle \leq}O\left(\frac{1}{N^{2+\delta}}\right),\label{eq:2.31}
\end{equation}
\begin{equation}
{\displaystyle \ell(\theta_{N}^{ag})-\ell(\theta^{*})\leq O\left(\frac{1}{N^{1+\delta}}\right)},\label{eq:2.32}
\end{equation}
Consequently, the AIG method can find a solution $\bar{\theta}$ such
that $\left\Vert \nabla\ell(\bar{\theta})\right\Vert ^{2}\leq\epsilon$
in at most $O\left(1/\epsilon^{\frac{1}{2+\delta}}\right)$ iterations.
\end{theorem}
\begin{algorithm}[H]
\caption{Accelerated Iterated Filtering (AIF)}
\label{alg1}
\begin{algorithmic}[1]
\Statex
\INPUT
\Statex Starting parameter, $\theta_0=\theta^{ag}_0$ , sequences, $\alpha_n, \beta_n, \lambda_n, \Gamma_n$
\Statex simulator for $f_{X_0}(x_0;\theta)$, $f_{X_n|X_{n-1}}(x_n|x_{n-1};\theta)$, evaluator for $f_{Y_n|X_n}(y_n|x_{n};\theta)$
\Statex data, $y^*_{1:N}$, labels designating IVPs, $I\subset\{1,\dots,p\}$, initial scale multiplier, $C>0$
\Statex number of particles, $J$, number of iterations, $M$, cooling rate, $0<a<1$, perturbation scales, $\sigma_{1:p}$
\Ensure
\Statex Maximum likelihood estimate $\theta_{MLE}$
\newline
\State $\theta^{md}_0=\theta_0$ \Comment Initialize \label{alg1:init:perturb}
\State $[\Theta^F_{0,j}]_i \sim N\left([\theta^{md}_0]_i, (C a^{m-1} \sigma_i)^2\right)$ for $i$ in $1..p$, $j$ in $1... J$. \Comment Initialize filter mean for parameters
\State simulate $X_{0,j}^F \sim f_{X_0}\big(\cdot;{\Theta^F_{0,j}}\big)$ for $j$ \textrm{in} $1..J$. \Comment Initialize states \label{alg1:initstates}
\For {$m$ in $1...M$}
\State $\theta^{md}_m= (1-\alpha_m)\theta^{ag}_{m-1}+\alpha_{m}\theta_{m-1}$.
\For {$n$ \textrm{in} $1... N$}
\State $\big[\Theta_{n,j}^{P}\big]_{i}\sim\mathcal{N}\big(\big[\Theta^F_{n-1,j}\big]_{i},(a^{m-1}\sigma_{i})^{2}\big)$ for $i\notin I$, $j$ in $1:J$.\;\Comment Perturb \label{alg1:perturb}
\State ${X}_{n,j}^{P}\sim{f}_n\big({x}_{n}|{X}_{n-1,j}^{F};{\Theta_{n,j}^{P}}\big)$ for $j$ in $1:J$.\; \Comment Simulate prediction particles \label{alg1:sim}
\State $w(n,j)=g_n(y_{n}^{*}|X_{n,j}^{P};\Theta_{n,j}^{P})$ for $j$ in $1:J$.\; \Comment Evaluate weights \label{alg1:weights}
\State $\breve{w}(n,j)=w(n,j)/\sum_{u=1}^{J}w(n,u)$.\; \Comment Normalize weights \label{alg1:normalize}
\State $k_{1:J}$ with $P\left\{ k_{u}=j\right\} =\breve{w}\left(n,j\right)$.\label{alg1:syst}\; \Comment Apply systematic resampling to select indices
\State $X_{n,j}^{F}=X_{n,k_{j}}^{P}$ and $\Theta_{n,j}^{F}=\Theta_{n,k_{j}}^{P}$ for $j$ in $1:J$.\; \Comment Resample particles \label{alg1:resample}
\EndFor
\State $S_{m}=a^{-2(m-1)}\Psi^{-1}\sum_{n=1}^N\left(\bar{\theta}_{n}-\theta^{md}_{m-1}\right)$, where $\bar{\theta}_{n}=\frac{1}{J}\sum_{j=1}^{J}\Theta^F_{n,j}$ \Comment Update parameters \label{alg1:updateivps}
\State $\big[\theta_{m}\big]_i=\theta_{m-1}-\lambda_{m-1}\big[S_{m}\big]_{i}$ for $i \notin I$.\;
\State $\big[\theta^{ag}_{m}\big]_i=\theta^{md}_{m-1}-\beta_{m-1}\big[S_{m}\big]_{i}$ for $i \notin I$.\;
\State $\big[\theta_{m}\big]_i=\frac{1}{J}\sum_{j=1}^J\big[\Theta^F_{L,j}\big]_i$ for $i \in I$.\;
\EndFor
\end{algorithmic}
\end{algorithm}
We now add a few remarks about the extension results obtained in Theorem \ref{thm5}. First,
if the problem is convex, by choosing more aggressive stepsizes
$\{\lambda_{k}\}$ in \eqref{eq:2.30}, the AIG method exhibits the optimal
rate of convergence in \eqref{eq:2.32}. It is also worth noting that
with such a selection of $\{\lambda_{k}\}$, the AIG method can find
a solution $\bar{\theta}$ such that $\left\Vert \nabla\ell(\bar{\theta})\right\Vert ^{2}\leq\epsilon$
in at most $O\left(1/\epsilon^{\frac{1}{2+\delta}}\right)$ iterations. The latter result has been shown by \cite{nesterov2005smooth}
and \cite{ghadimi2016accelerated}, but only for the accelerated gradient method with unbiased gradients. Second,
observe that $\left\{ \lambda_{k}\right\} $ in \eqref{eq:2.28} for
general nonconvex problems is of order $O(1/L)$,
while the one in \eqref{eq:2.30} for convex problems is more aggressive
(of order $O(k/L)$). The convergence rate is optimized at $\delta=1$;
however, this choice may not be optimal for the computational cost of
controlling the noise. Finally, we note that the stepsize policy in \eqref{eq:2.28}
can be applied to general inexact gradient problems, for both convex and nonconvex optimization.
The sequential Monte Carlo filter can approximate the exact filter arbitrarily well
by choosing a sufficiently large number of particles \citep{ionides11}.
It follows that we can choose the perturbation sequence so that the gradient noise satisfies
the conditions of Theorem \ref{thm4}. For completeness, we present the pseudo-code of the proposed algorithm in Algorithm \ref{alg1}.
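The gradient estimates used by Algorithm \ref{alg1} are built on a bootstrap particle filter. As a self-contained sanity check of that building block (not of the full AIF algorithm), the sketch below estimates the log-likelihood of a scalar linear-Gaussian model, for which the Kalman filter gives the exact answer; all model values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

a, q, r, T, J = 0.8, 0.5, 1.0, 50, 5000  # AR coef., variances, length, particles

# Simulate data from x_n = a x_{n-1} + N(0, q), y_n = x_n + N(0, r), x_0 = 0.
x, ys = 0.0, []
for _ in range(T):
    x = a * x + np.sqrt(q) * rng.standard_normal()
    ys.append(x + np.sqrt(r) * rng.standard_normal())

# Exact log-likelihood via the Kalman filter.
m, P, ll_kf = 0.0, 0.0, 0.0
for y in ys:
    m, P = a * m, a * a * P + q             # predict
    S = P + r                               # innovation variance
    ll_kf += -0.5 * (np.log(2 * np.pi * S) + (y - m) ** 2 / S)
    K = P / S
    m, P = m + K * (y - m), (1 - K) * P     # update

# Bootstrap particle filter estimate of the same log-likelihood.
particles, ll_pf = np.zeros(J), 0.0
for y in ys:
    particles = a * particles + np.sqrt(q) * rng.standard_normal(J)  # simulate
    w = np.exp(-0.5 * (y - particles) ** 2 / r) / np.sqrt(2 * np.pi * r)
    ll_pf += np.log(w.mean())
    particles = rng.choice(particles, size=J, p=w / w.sum())         # resample
```

This mirrors the simulate/weight/resample steps of Algorithm \ref{alg1} (multinomial rather than systematic resampling, for brevity), and the agreement with the Kalman filter is the same verification strategy used for the toy example in Section \ref{sec:5Experiments}.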
\section{Numerical examples\label{sec:5Experiments}}
To measure the performance of the new inference algorithm, we evaluate
our accelerated iterated filtering on some benchmark examples and compare
it to existing simulation-based approaches. We make use
of well-tested and maintained R \cite{amanual} packages such
as \texttt{pomp} \cite{king15pomp}. Specifically, models are coded using C
snippet declarations \cite{king15pomp}. The new algorithm is implemented in
the R package \texttt{is2}, which provides user-friendly interfaces in R and efficient
matrix operations via the highly optimized \texttt{Rcpp} \cite{eddelbuettel2011rcpp}.
All the simulation-based approaches mentioned above use the sequential
Monte Carlo (SMC) algorithm, implemented as a bootstrap filter.
Experiments were carried out on a cluster of 32-core Intel
Xeon E5-2680 2.7 GHz machines with 256 GB of memory. For a fair comparison,
we use the same setup and assessment for every inference method.
A public GitHub repository containing scripts for reproducing our
results may be found at \url{https://github.com/nxdao2000/AIFcomparisons}.
\subsection{Toy example: A linear, Gaussian model}
In this subsection, we compare our accelerated iterated filtering algorithm
to the original iterated filtering algorithm (IF1) \cite{ionides06-pnas},
Bayes map iterated filtering (IF2) \cite{ionides15}, and second-order
iterated smoothing (IS2) \cite{nguyenis215}. It has been shown in \cite{nguyenis215} and \cite{ionides15} that
second-order iterated smoothing with white noise (IS1) \cite{doucet2013derivative} and particle Markov chain Monte Carlo (PMCMC) \cite{andrieu10}
do not perform as well as Bayes map iterated filtering, so we leave them out.
Being computationally convenient, simple models provide an opportunity
to test the basic features of inference algorithms. Therefore, we
first consider a bivariate discrete-time Gaussian autoregressive process,
a relatively simple mechanistic model. This model is chosen so that
the Monte Carlo calculations can be verified using a Kalman filter.
For this example, there are some alternatives to the iterated filtering
class; EM and MCMC algorithms, for instance, would be practical in this
case, although they do not scale well to large dynamic models, so we
do not include them here. The model is given in state space form by
$X_{n}|X_{n-1}=x_{n-1}\sim\mathcal{N}(\alpha x_{n-1},\sigma^{\top}\sigma),$
$Y_{n}|X_{n}=x_{n}\sim\mathcal{N}(x_{n},I_{2})$, where $\alpha$
and $\sigma$ are $2\times2$ matrices and $I_{2}$ is the $2\times2$ identity
matrix. The data are simulated from the following parameters:
\[
{\displaystyle \alpha=\left[\begin{array}{cc}
\alpha_{1} & \alpha_{2}\\
\alpha_{3} & \alpha_{4}
\end{array}\right]=\left[\begin{array}{cc}
0.8 & -0.5\\
0.3 & 0.9
\end{array}\right],\ \sigma=\left[\begin{array}{cc}
3 & 0\\
-0.5 & 2
\end{array}\right].}
\]
The number of time points $N$ is set to $100$ and the initial state
to $X_{0}=\left(-3,4\right)$. For each method mentioned above,
we estimate the parameters $\alpha_{2}$ and $\alpha_{3}$ for this model
using $J=1000$ particles, running the estimation for $M=25$ iterations.
We start the initial search uniformly on a large rectangular region
$[-1,1]\times[-1,1]$. As can be seen from Fig.~\ref{fig:toy2}, all of the distributions
of estimated maximized log likelihoods touch the true MLE (computed
from the Kalman filter) at the vertical broken line, implying that they all
successfully converged. The results show that AIF is the most efficient
of these methods: its estimates have a higher mean and smaller
variance than those of the other approaches, indicating a higher empirical
convergence rate. Algorithmically, AIF has a computational cost similar
to the first-order approaches IF1 and IF2, and is cheaper than the
second-order approach IS2. Indeed, the average computational
time over twenty independent runs of each approach is given in Table~\ref{table:1}.
The additional overhead of estimating the score makes the computation time
of AIF slightly larger than that of IF2. However,
with complex models and a large enough number of particles, this overhead
becomes negligible, and the computational time of AIF approaches that of the
other first-order methods. The fact that AIF attains a second-order convergence
rate at first-order computational cost makes
it a very promising algorithm. In addition, the results also imply that AIF is robust to the initial
guesses.
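The Kalman filter benchmark for this model is simple enough to sketch directly; the code below (an illustrative Python re-implementation, not the R code used in our experiments) simulates the model with the parameters above and evaluates the exact log-likelihood.

```python
import numpy as np

alpha = np.array([[0.8, -0.5], [0.3, 0.9]])
sigma = np.array([[3.0, 0.0], [-0.5, 2.0]])
Q = sigma.T @ sigma  # process noise covariance sigma^T sigma
R = np.eye(2)        # observation noise covariance I_2

def simulate(N=100, x0=(-3.0, 4.0), rng=None):
    """Simulate N observations from the bivariate AR model."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, float)
    ys = []
    for _ in range(N):
        x = alpha @ x + rng.multivariate_normal(np.zeros(2), Q)
        ys.append(x + rng.standard_normal(2))
    return np.array(ys)

def kalman_loglik(y, x0=(-3.0, 4.0)):
    """Exact log-likelihood of the linear Gaussian model via the Kalman filter."""
    m = np.asarray(x0, float)
    P = np.zeros((2, 2))  # X_0 is known exactly
    ll = 0.0
    for y_n in y:
        m_pred = alpha @ m
        P_pred = alpha @ P @ alpha.T + Q
        S = P_pred + R       # innovation covariance
        v = y_n - m_pred     # innovation
        ll += -0.5 * (2 * np.log(2 * np.pi) + np.log(np.linalg.det(S))
                      + v @ np.linalg.solve(S, v))
        K = P_pred @ np.linalg.inv(S)
        m = m_pred + K @ v
        P = P_pred - K @ P_pred
    return ll
```

Maximizing `kalman_loglik` over $\alpha_{2}$ and $\alpha_{3}$ gives the exact MLE against which the simulation-based estimates are compared.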
\begin{table}
\begin{center}
\caption{Computation times, in seconds, for the toy example.}
\label{table:1}
\vspace{2mm}
\begin{tabular}{lrrr}
\hline
& \rule[-1.5mm]{0mm}{7mm}\hspace{6mm}$J=100$
& \hspace{6mm}$J=1000$ & \hspace{6mm} $J=10000$ \\
\hline
IF1 & 1.656 & 5.251 & 62.632 \\
IF2 & 1.591 & 5.156 & 61.072 \\
IS2 & 2.530 & 10.198 & 135.248 \\
AIF & 2.729 & 10.278 & 132.016\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\vspace{-0.5cm}
\includegraphics[width=7cm,height=7cm]{ou2-compareAIF}
\caption{Comparison of estimators for the linear, Gaussian toy example, showing
the densities of the MLEs estimated by the IF1, IF2, AIF and
IS2 methods. The parameters $\alpha_{2}$ and $\alpha_{3}$ were
estimated, starting from 200 initial values drawn uniformly over the
large rectangular region $[-1,1]\times[-1,1]$. }
\label{fig:toy2}
\end{figure}
\begin{figure}
\includegraphics[width=12cm,height=12cm]{ou2-compareAIFb.pdf}
\caption{Comparison of different estimators. The likelihood surface for the
linear model is shown, with the location of the MLE marked by a green cross.
The crosses show final points from 40 Monte
Carlo replications of the estimators: (A) original iterated filtering (IF1); (B)
Bayes map iterated filtering (IF2); (C) accelerated iterated filtering
(AIF); (D) second-order iterated smoothing (IS2). Each method was
started uniformly over the rectangle shown, with $M=25$ iterations,
$J=1000$ particles, and a random walk standard deviation decreasing
geometrically from $0.02$ to $0.011$ for both $\alpha_{2}$ and
$\alpha_{3}$. }
\label{fig:sup:1}
\end{figure}
To see how the final MLEs cluster around the true MLE, we show only
$40$ Monte Carlo replications for this toy example. As can be observed
from Fig.~\ref{fig:sup:1}, for the AIF approach most of the replications cluster near
the true MLE, while none of them stays in a lower-likelihood
region. Fig.~\ref{fig:toy2} can be interpreted as a statistical summary
of this behavior, with $200$ Monte Carlo replications. These
results indicate that AIF is clearly the best of the investigated
methods for this test. We also checked how the results of each method compare
when given additional computational resources.
Specifically, we set $M=100$ iterations and $J=10000$ particles,
with the random walk standard deviation decreasing geometrically from
$0.02$ down to $0.0018$ for each method. In this situation, we
confirm that AIF remains the best among IF1, IF2 and IS2. All methods
have comparable computational demands for given $M$ and $J$.
\begin{figure}
\vspace{-0.5cm}
\includegraphics[width=7cm,height=7cm]{ou2-compareAIFSupp2}
\caption{Comparison of estimators for the linear, Gaussian toy example, showing
the densities of the MLEs estimated by the IF1, IF2, AIF and
IS2 methods using $M=100$ iterations and $J=10000$ particles. The parameters $\alpha_{2}$ and $\alpha_{3}$ were
estimated, starting from 200 initial values drawn uniformly over the
large rectangular region $[-1,1]\times[-1,1]$. }
\label{fig:toy3}
\end{figure}
\subsection{Malaria benchmark}
Many real-world dynamic systems are highly nonlinear, partially observed,
and even weakly identifiable. To demonstrate the capabilities of accelerated
iterated filtering in such situations, we apply it to evaluate the
likelihood in the stochastic differential equation model of vivax malaria
of \citet{roy12}. We choose this challenging model
because it provides a rigorous performance benchmark.
The model we consider, $SEIH^{3}QS$, splits up the study population
of size $P(t)$ into seven classes: susceptible individuals, $S(t)$,
exposed individuals, $E(t)$, infected individuals, $I(t)$, dormant classes $H_{1}(t)$,
$H_{2}(t)$, $H_{3}(t)$, and recovered individuals, $Q(t)$. This
strain of malaria is characterized by relapse following initial recovery
from symptoms \cite{nguyenis215}. Therefore, the last $S$ in
the model name indicates the possibility that a recovered person can
return to the class of susceptible individuals. The data, denoted
by $y_{1:N}^{*}$, are a monthly time series of malaria morbidity
counts over a 20-year period. Here $\delta$ denotes the
mortality rate, $\kappa(t)$ a delay stage, $\mu_{SE}(t)$ the current
force of infection, and $\tau_{D}$ the mean latency time. The state process
is
\[
X(t)=\big(S(t),E(t),I(t),Q(t),H_{1}(t),H_{2}(t),H_{3}(t),\kappa(t),\mu_{SE}(t)\big),
\]
where the transition rates from stage $H_{1}$ to $H_{2}$, $H_{2}$ to
$H_{3}$, and $H_{3}$ back to $I$ are all specified to be $3\mu_{HI}$, while
the transition rate from the infected population to dormancy is $\mu_{IH}$.
The model satisfies the following stochastic differential equation
system
\begin{eqnarray*}
dS/dt & = & \delta P+\mathrm{d}P/dt+\mu_{IS}I+\mu_{QS}Q\\
& & \hspace{5mm}+a\mu_{IH}I+b\mu_{EI}E-\mu_{SE}(t)S-\delta S,\\
dE/dt & = & \mu_{SE}(t)S-\mu_{EI}E-\delta E,\\
dI/dt & = & (1-b)\mu_{EI}E+3\mu_{HI}H_{3}-(\mu_{IH}+\mu_{IS}+\mu_{IQ})I-\delta I,\\
dH_{1}/dt & = & (1-a)\mu_{IH}I-3\mu_{HI}H_{1}-\delta H_{1},\\
dH_{i}/dt & = & 3\mu_{HI}H_{i-1}-3\mu_{HI}H_{i}-\delta H_{i}\hspace{5mm}\mbox{for \ensuremath{i\in\{2,3\}}},\\
dQ/dt & = & \mu_{IQ}I-\mu_{QS}Q-\delta Q.
\end{eqnarray*}
In addition, the malaria pathogen reproduction within the mosquito
vector is given by
\begin{eqnarray*}
\mathrm{d}\kappa/\mathrm{d}t & = & [\lambda(t)-\kappa(t)]/\tau_{D},\\
\mathrm{d}\mu_{SE}/\mathrm{d}t & = & [\kappa(t)-\mu_{SE}(t)]/\tau_{D},
\end{eqnarray*}
where $\lambda(t)$ is the latent force of infection, and $\kappa(t)$
and $\mu_{SE}(t)$ satisfy
\begin{equation}
\mu_{SE}(t)=\int_{-\infty}^{t}\gamma(t-s)\lambda(s)\mathrm{d}s,
\end{equation}
with $\gamma(s)=\frac{(2/\tau_{D})^{2}s^{2-1}}{(2-1)!}\exp(-2s/\tau_{D})$,
a gamma density with shape parameter $2$. Since the latent force
of infection is driven by the rainfall covariate $R(t)$ and
Gamma white noise, from \citet{roy12} we have:
\[
{\lambda(t)=\left(\frac{I+qQ}{P}\right)\times\exp\left\{ \sum_{i=1}^{N_{s}}b_{i}s_{i}(t)+b_{r}R(t)\right\} {\times\left[\frac{\mathrm{d}\Gamma(t)}{\mathrm{d}t}\right]}}.
\]
In this equation, $q$ denotes a reduced infection risk from humans
in the $\mathrm{Q}$ class and $\{s_{i}(t),i=1,\dots,N_{s}\}$ is
a periodic cubic B-spline basis, with $N_{s}=6$. The observation
model for $Y_{n}$ is a negative binomial distribution with mean $M_{n}$
and variance $M_{n}+M_{n}^{2}\sigma_{\mathrm{obs}}^{2}$, where $M_{n}=\rho\int_{t_{n-1}}^{t_{n}}[\mu_{EI}E(s)+3\mu_{HI}H_{3}(s)]ds$
is the number of new cases observed from time $t_{n-1}$ to time $t_{n}$
and $\rho$ is the reporting rate. The coupled system of stochastic differential
equations is solved using the Euler-Maruyama scheme \citep{kloeden99}
with a time step of $1/20$ month.
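For a generic scalar SDE $dX=f(X)\,dt+g(X)\,dW$, one Euler-Maruyama step adds the drift $f(X)\,dt$ and a diffusion increment $g(X)\sqrt{dt}\,Z$ with $Z\sim\mathcal{N}(0,1)$. A minimal sketch (illustrative only, not the solver used in our implementation):

```python
import numpy as np

def euler_maruyama(f, g, x0, t_end, dt=1.0 / 20.0, rng=None):
    """Integrate dX = f(X) dt + g(X) dW with the Euler-Maruyama scheme.

    Each step adds the drift f(X)*dt plus the diffusion increment
    g(X)*sqrt(dt)*Z with Z ~ N(0, 1), matching the scaling of a
    Brownian increment over a step of length dt.
    """
    rng = np.random.default_rng(rng)
    n_steps = int(round(t_end / dt))
    x = float(x0)
    path = [x]
    for _ in range(n_steps):
        x = x + f(x) * dt + g(x) * np.sqrt(dt) * rng.standard_normal()
        path.append(x)
    return np.array(path)
```

Setting $g\equiv 0$ reduces the scheme to explicit Euler, which is a convenient correctness check.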
\begin{figure}
\vspace{-0.5cm}
\includegraphics[width=7cm,height=7cm]{compareAIFmalaria.pdf}
\caption{The density of the maximized log likelihood approximations
estimated by IF1, IF2, IS2, RIS1 and AIF for the malaria model when using
$J=1000$ and $M=50$. The log likelihood at a previously computed
MLE is shown as a dashed vertical line. }
\label{fig:malaria}
\end{figure}
Given the data obtained from the National Institute of Malaria Research
\citep{roy12}, we carried out simulation-based inference via the
original iterated filtering (IF1), the perturbed Bayes map iterated filtering
(IF2), second-order iterated smoothing (IS2), and the new accelerated
iterated filtering (AIF). The inference goal used to assess all of these methods is to find
high-likelihood parameter values starting from values drawn randomly
in a large hyperrectangle. In the presence of the possible
multi-modality, weak identifiability, and considerable Monte~Carlo
error of this model, we start $200$ random searches. The random walk
standard deviation is initially set to $0.1$ for the estimated parameters,
while the cooling rate $c$ is set to $0.1^{0.02}\approx0.95$. The
corresponding quantities for the initial-value parameters are $2$ and
$0.1^{0.02}$, respectively, but they are applied only at time zero.
We run our experiment on a computer cluster with $M=50$ iterations
and $J=1000$ particles. We choose these values because increasing
the number of iterations to $100$ and the number
of particles to $10000$ does not improve the results much but
takes significantly longer. Figure~\ref{fig:malaria} shows the
distribution of the MLEs estimated by IF1, IF2, IS2 and AIF. All distributions
touch the global maximum as expected, and the higher means and smaller
variances of the IF2 and AIF estimates clearly demonstrate that they are
considerably more effective than IF1. Note that the computational
times for IF1, IF2, IS2 and AIF are 44.86, 43.92, 53.10 and 52.25 minutes, respectively, confirming
that accelerated iterated filtering has essentially the same computational
cost as the first-order methods IF1 and IF2, and is slightly cheaper than IS2, for a given Monte Carlo sample
size and number of iterations. In this hard problem, while IF1 reveals
its limitations, we have shown that IF2 and AIF still offer
a substantial improvement. A natural heuristic idea for further improvement
is to hybridize IF2 and AIF, but we leave this for future
work.
\section{Conclusion\label{sec:5Conclusion}}
In this paper, we have proposed a novel class of iterated filtering algorithms based on
an accelerated inexact gradient approach. We have shown that carefully choosing the
perturbation sequence and the number of particles yields an
algorithm that is both statistically and computationally efficient. The approach is also
fruitful in that it extends to a more general class of algorithms
based on proximal theory. Previous proofs for the iterated filtering class
require conditions that are difficult to verify; in this article, we use only
standard gradient conditions. This opens the way to
a more systematic treatment that generalizes readily to state-of-the-art
algorithms from the optimization literature. The convergence rate is stated explicitly
and improves on the standard theory, which is of theoretical interest in its own right.
In addition, from a practical point of view, we have provided an efficient
framework applicable to a general class of nonlinear, non-Gaussian,
non-standard POMP models, which is especially suitable for feedback
control systems. Many such systems are not
well handled by currently available modeling frameworks. We also
present our open-source software package is2 to
serve the needs of the community; the performance of the new
approach surpasses that of the other frameworks by a large margin.
It may be surprising that this simple accelerated inexact gradient approach has the
needed convergence properties and can easily be generalized, at least
in some asymptotic sense. The accelerated inexact proximal gradient iterated filtering theory
can be adapted to iterated smoothing, with either independent
white-noise or random-walk perturbations, and our empirical results continue to show strong evidence of improvement.
In principle,
different simulation-based inference methods can readily be hybridized
to build on the strongest features of multiple algorithms. Our results
could also be used to develop other simulation-based methodologies
that take advantage of the proximal map.
For example, it may be possible to use our approach to help design efficient
proposal distributions for particle Markov chain Monte Carlo algorithms.
The theoretical and algorithmic innovations of this paper will help to build a new direction
for future developments on this frontier.
Applying this approach to methodologies such as approximate Bayesian computation
(ABC), the Liu-West particle filter (LW-PF), and particle Markov chain Monte Carlo (PMCMC),
with different sampling schemes
(e.g., the forward-backward particle filter, forward smoothing, or forward-backward smoothing),
is a foreseeable extension.
\section{Introduction}
\input{tex/intro.tex}
\section{Related Works}
\input{tex/related.tex}
\section{The AdobeIndoorNav Dataset}
\input{tex/dataset.tex}
\section{Experiments}
\input{tex/exps.tex}
\section{Discussion~\label{sec:discussion}}
\input{tex/discussion.tex}
\section{CONCLUSIONS}
\input{tex/conclusion.tex}
\addtolength{\textheight}{-8cm}
\bibliographystyle{IEEEtran}
\subsection{Data Acquisition Pipeline}
\begin{figure*}[h]
\centering
\includegraphics[width=\linewidth]{fig/pipeline_overview2.pdf}
\caption{The Pipeline to Collect the AdobeIndoorNav Dataset and the Robot Setup. (a) A 3D reconstruction of the scene is obtained with the Tango device; (b) a 2D obstacle map is generated from the 3D point cloud, indicating the area where the robot can navigate; (c) a 2D laser-scan map is generated from the 3D point cloud and is later used by the robot for localization; (d) densely sampled grid locations are generated on the 2D obstacle map; (e) the robot runs in the real scenes and captures RGB-D and panoramic images at all grid locations; (f) our TurtleBot, equipped with an RGB-D camera, a 360 panoramic camera and a set of laser scanners.}
\label{fig:dataset_pipeline}
\end{figure*}
The detailed pipeline is shown in Figure~\ref{fig:dataset_pipeline}.
First, we manually scan the scene with a Lenovo Phab 2 Tango phone to obtain the 3D reconstructed scene as a point cloud (Figure~\ref{fig:dataset_pipeline} (a)). This can be readily done using the out-of-the-box solution from Tango without prior training. Other 3D reconstructed scene datasets, such as Stanford 2D-3D-S~\cite{armeni2017joint} and Matterport3D~\cite{chang2017matterport3d}, use Matterport cameras to do the scanning, which requires professional operation as well as a significant budget. Compared with their acquisition process, our solution is much more portable, lower in cost, and easily reproducible.
The 3D reconstructed scene then yields two 2D maps: an obstacle map (Figure~\ref{fig:dataset_pipeline} (b)) and a laser-scan map (Figure~\ref{fig:dataset_pipeline} (c)). The obstacle map is generated by aggregating the 3D occupancy maps within the height range of the robot. It identifies the free space where the robot can move without collision in the entire scene, and it is also used later to sample the grid locations at which visual inputs are collected. The laser-scan map is a 2D map that summarizes the 3D occupancy map at the height of the RGB-D sensor on our robot, which is later leveraged for localization.
Next, we generate grid locations on the obstacle map with a simple depth-first search (DFS) algorithm. The starting position is specified by the user. We take a grid size of $0.5$m$\times0.5$m in most scenes and reduce it to $0.4$m$\times0.4$m for smaller scenes.
After these manual steps, the most time-consuming part of data collection is done automatically by the robot. The depth channel from the RGB-D sensor is used as a laser scan for the robot to localize itself with respect to the laser-scan map~\cite{bailey2006simultaneous}. The robot then walks over the grid locations following the generated DFS path to collect data, stopping at each grid location to take the 360-view image. It also carries a set of range sensors to avoid collisions with unexpected obstacles.
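The grid-location sampling step can be sketched as a standard depth-first search over free cells of the obstacle map (an illustrative re-implementation, not our exact collection code):

```python
def sample_grid_locations(obstacle_map, start, step):
    """Depth-first search over grid locations on a 2D obstacle map.

    obstacle_map[r][c] is True where the robot can stand; `step` is the
    grid spacing in map cells (e.g. 0.5 m divided by the map resolution).
    Returns the visiting order, which doubles as the robot's DFS path.
    """
    rows, cols = len(obstacle_map), len(obstacle_map[0])
    stack, visited = [start], {start}
    order = []
    while stack:
        r, c = stack.pop()
        order.append((r, c))
        # expand to the four axis-aligned neighbors one grid step away
        for dr, dc in ((step, 0), (-step, 0), (0, step), (0, -step)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and obstacle_map[nr][nc] and (nr, nc) not in visited:
                visited.add((nr, nc))
                stack.append((nr, nc))
    return order
```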
\subsection{Dataset Statistics}
The AdobeIndoorNav dataset currently contains 24 indoor rooms: 15 offices, 5 conference rooms, 1 storage room, 1 kitchen and 2 open spaces, all collected inside Adobe buildings. There are 3,544 densely sampled locations in total; on average, each scene has 148 locations for robots to navigate. We split the dataset into a train split (15 scenes) and a test split (9 scenes).
\setlength{\tabcolsep}{3pt}
\begin{table}[]
\centering
\caption{The AdobeIndoorNav Dataset Statistics}
\begin{tabular}{l|c|ccc}
\hline
\textbf{Scene Name} & \textbf{Split} & \textbf{Total Locs}& \textbf{Target Locs}& \textbf{Featureful Locs}\\
\hline
et07-imagination-lab& train & 284& 7 &267\\
et12-corner-1&test & 156 &6 &131\\
et12-corner-2&train & 388& 8& 337\\
et07-cr-galgary &test &164& 8& 99\\
et12-cr-hamburg &train &176& 6& 111\\
et12-cr-helsinki&train & 168 &6& 105\\
et12-cr-hongkong&train & 260 &8& 175\\
et12-cr-honolulu&test & 252 &7& 149\\
et12-kitchen&test & 424 &10& 396\\
et12-office-104&train & 40 &5& 32\\
et12-office-108&train & 76& 5& 60\\
et12-office-110&train & 60& 5& 52\\
et12-office-111&train & 68& 5& 55\\
et12-office-112&train & 72& 5& 63\\
et12-office-113&train & 96& 5& 61\\
et12-office-114&train & 84& 4& 63\\
et12-office-115&train & 72& 4& 53\\
et12-office-117&train & 68& 5& 39\\
et12-office-132&train & 232& 7& 144\\
et07-office-114&test & 48& 4& 39\\
et07-office-419&test & 68& 5& 57\\
et07-office-420&test & 76& 5& 49\\
et07-office-423&test & 100& 5& 73\\
et07-office-424&test & 112& 6& 81\\
\hline
\textbf{total} & & 3,544& 141& 2,691\\
\hline
\end{tabular}
\label{tab:dataset_stats}
\end{table}
Table~\ref{tab:dataset_stats} summarizes the statistics of the 24 scenes in our dataset. To provide a benchmark for the community, we identify a set of landmark target locations for each scene (5$\sim$10 per scene), as suggested in \cite{zhu2017target}. We also detect SIFT key points~\cite{lowe2004distinctive} in all visual inputs to identify targets with distinctive image features. Figure~\ref{fig:dataset_feature_less} shows a sample featureful target and a feature-less one. The featureful targets include all interesting targets for navigation and are supersets of the landmark ones. The \textit{Target Locs} and \textit{Featureful Locs} columns give the numbers of landmark targets and featureful targets, respectively.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{fig/dataset_feature_less2.pdf}
\caption{Sample Landmark and Feature-less Image Views. (a) Sample featureful target; (b) Sample feature-less target.}
\label{fig:dataset_feature_less}
\end{figure}
With the visual inputs collected at all grid locations, it is straightforward to use our dataset to train and test robot visual navigation. In addition, with the densely collected 360-view images, our dataset can potentially be used to study view selection. We will release the dataset to the public, and we hope it can facilitate research in multiple areas.
\subsection{Dense-target Training}
Despite its exceptional capability to navigate to the training targets, the A3C model fails dramatically on unseen testing targets. This over-fitting problem suggests that the agent only learns to memorize the right action to take at certain locations given one target, instead of understanding the spatial relationship between the current state and the target and the consequence of taking each of the four actions.
We argue that the issue stems from the fact that only 5$\sim$10 sparsely spread targets per scene were used in training in~\cite{zhu2017target}. To avoid over-fitting and encourage the agent to learn more generalizable knowledge, we need to let the agent work on more targets during training.
To learn a navigation model that generalizes to all targets in the scenes, we propose to densely sample training targets and train the navigation A3C models on all of them.
We validate this dense-target training idea on the \textit{et12-kitchen} scene, the largest scene in our dataset. We randomly select 23 of the 396 distinctive views for testing. We avoid using texture-less views as targets, since they can introduce confusion in learning. Apart from the selected testing targets, all the rest are used for training. We train three A3C models with different numbers of training targets: (a) trained on 10 sparsely sampled targets, (b) trained on 100 sampled targets, and (c) trained on the remaining 373 locations. We train all three models for about 20 million frames. Figure~\ref{fig:exp_kitchen_dense_train} shows the episode length statistics. In general, when trained with more targets, the robot navigates better to testing targets; the model trained on 10 sparsely sampled targets ends up with the most failed episodes. Similar behavior has been observed on other scenes, as reported in Table~\ref{tab:exp_dense_train_table}. However, the gain comes at the cost of longer training time, as shown in Figure~\ref{fig:exp_dense_train_curve}.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{fig/exp_kitchen_dense_train.pdf}
\caption{Episode Length Statistics for the Testing Targets in the \textit{et12-kitchen} Scene. We report the evaluation of the models that are trained on 10 (sparse), 100 (middle), and 373 (dense) samples. We run 10 episodes with different random starting locations for each one of 23 testing targets. The maximum episode length is set to 10,000.}
\label{fig:exp_kitchen_dense_train}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{fig/out.pdf}
\caption{The A3C Model Training Progress Curves: dense-target training takes longer to converge.}
\label{fig:exp_dense_train_curve}
\end{figure}
\setlength{\tabcolsep}{2.5pt}
\begin{table}[h]
\centering
\caption{Evaluation of Dense Target Training}
\begin{tabular}{c|c|cccc}
\hline
& \textbf{All} & \textbf{Office} & \textbf{Conf}& \textbf{Open} & \textbf{Storage}\\
\hline
\#scenes& 15 & 10 & 3 & 1 & 1 \\
\hline
Random & 249.56 & 190.21 & 366.99 & 401.21 & 339.19 \\
Shortest-path & 7.18 & 5.44 & 11.33 & 11.27 & 7.99 \\
\hline
Train on Sparse Targets & 4359.36 & 3841.52 & 5409.82 & 5770.88 & 4974.86 \\
Train on Dense Targets & 197.92 & 7.10 & 487.56 & 1401.07 & 34.10 \\
\hline
\end{tabular}
\label{tab:exp_dense_train_table}
\end{table}
\subsection{Spatial-aware Feature}
By carefully examining the failure cases, we observed many near-target failures, as shown in Figure~\ref{fig:method_spatial_aware_fail}. Although the robot reaches a location near the target, it fails to output the right action to move to the target.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{fig/method_spatial_aware_fail.pdf}
\caption{Near target failure cases: In each image pair, the left one shows the current observation and the right image is the target view. In all these examples, the agent fails to reach the target even though the target is just nearby.}
\label{fig:method_spatial_aware_fail}
\end{figure}
A very common scenario in testing is that, after starting from a random location, the agent quickly navigates to the area around the testing target in a few steps. This behavior shows that the model successfully learns to navigate toward the target, probably by learning a rough room layout. However, after reaching the nearby area, the agent begins to vacillate around the target location and finally fails to arrive at the exact target view. This suggests that the robot cannot differentiate between two nearby, similar views.
To address this issue, we propose to use spatial-aware features to represent the visual inputs. Instead of using the 2,048-dim feature extracted from the last layer of a pre-trained CNN~\cite{he2016deep}, we can leverage features extracted from earlier layers, which encode more spatial information.
The 2,048-dim feature lacks spatial information over the image plane because of the preceding global average pooling operation. We therefore choose the $7\times7\times2,048$ feature maps right before the last global average pooling layer as the representation.
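A tiny numerical illustration of the information lost by pooling: two toy $7\times7$ single-channel maps with the same activation in different corners become indistinguishable after global average pooling, while the unpooled maps remain clearly distinct (synthetic arrays, not actual ResNet features).

```python
import numpy as np

# Two toy 7x7 single-channel "feature maps": the same activation blob,
# located in opposite corners of the spatial grid.
a = np.zeros((7, 7))
b = np.zeros((7, 7))
a[0, 0] = 1.0  # response in the top-left
b[6, 6] = 1.0  # response in the bottom-right

# Global average pooling collapses the spatial axes: both views map to
# the same pooled value, so the pooled descriptor cannot tell them apart.
assert np.isclose(a.mean(), b.mean())

# The unpooled spatial maps, in contrast, remain clearly distinguishable.
assert np.abs(a - b).sum() > 0
```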
We design a diagnostic experiment to validate this choice. Given a pair of images that share a large portion of their views and are only one step apart (i.e., go forward, go backward, move to the left, move to the right), we train a simple siamese-style network on the extracted image features to predict the action that matches the two views. The four types of testing image pairs are illustrated in Figure~\ref{fig:exp_spatial_aware_examples}.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{fig/exp_spatial_aware_examples2.pdf}
\caption{The four types of near-by target testing: the left image shows the current view and the right image shows the target view, which can be reached by (a) going forward; (b) going backward; (c) moving to the left; and (d) moving to the right.}
\label{fig:exp_spatial_aware_examples}
\end{figure}
We first train two models, using the 2,048-dim input features and the proposed spatial-aware features, on image pairs from all 24 scenes in the AdobeIndoorNav dataset. The image pairs for training and testing are different, but are all taken from the same scenes. In Table~\ref{tab:diag_spatial_aware}, we observe a classification accuracy improvement from 83.5\% to 87.5\% when we replace the original 2,048-dim feature with the spatial-aware feature. We then train the two models on image pairs from the 15 training scenes and test on the 9 held-out scenes, observing a larger performance improvement, from 53.0\% to 72.6\%. This experiment validates that the spatial-aware feature encodes more spatial information, which can be used to resolve the near-target failures.
\begin{table}[h]
\centering
\caption{Diagnostic Experiments on Spatial-aware Feature}
\begin{tabular}{c|cc}
\hline
& \textbf{2,048-dim Feature} & \textbf{Spatial-aware Feature} \\
\hline
On the same scenes & 83.5\% & 87.5\% \\
On different scenes & 53.0\% & 72.6\% \\
\hline
\end{tabular}
\label{tab:diag_spatial_aware}
\end{table}
\subsection{Future Work}
The target-driven visual navigation setting is very difficult even for humans: imagine being dropped in a new environment and given an image of a target view; it is largely impossible even for a human to figure out the shortest path. We believe one promising future direction is to combine map-building methods with DRL to construct a map, either implicitly or explicitly, for navigation. It is not clear how to formulate everything within a DRL framework, but we believe it is an interesting direction for future work.
Another direction for exploring DRL for robot navigation is to design a task simpler than the target-driven setting. For example, a more intuitively feasible task could be unknown-environment exploration. In this case, DRL can be trained more efficiently, since there is a fixed goal independent of the scene.
The navigation target can also be specified as a relative position to the robot. This paradigm would be easier to address as well: the relative position implicitly suggests a shortest path, which would serve as a strong regularization in DRL training.
Despite the problems we observed when evaluating this DRL-based visual navigation algorithm on our dataset, we still believe DRL is promising for real-world visual navigation.
\subsection{Evaluation on the AdobeIndoorNav Dataset}
To briefly review their method, Zhu et al.~\cite{zhu2017target} use a siamese A3C model that takes the most recent four frames of visual observations and a target view as inputs to predict the action to execute. The action is one of four pre-defined movements: moving forward, moving backward, turning left, and turning right. In training, the robot is given a large goal-reaching reward upon task completion and a small step penalty to encourage short trajectories. We evaluate the proposed method and a number of its variants on our dataset. We also include the results from a random policy and the ground-truth shortest-path length for comparison. The methods we evaluate are the following:
\begin{itemize}
\item \textbf{Random}: the agent uniformly chooses a random action to take at every timestep.
\item \textbf{Shortest-path}: the agent has an oracle that tells it the shortest path from the starting location to the target.
\item \textbf{A3C Four-frame}: the model uses four history frames of observations as input. This is the design by Zhu et al.~\cite{zhu2017target}.
\item \textbf{A3C One-frame}: the model only uses the current single frame as input.
\item \textbf{A3C LSTM}: the model uses an LSTM network to keep a memory of history observations.
\end{itemize}
We report the navigation performance on all scenes (24 scenes) as well as five categories of scenes including the office scenes (15 office rooms), the conference room scenes (5 conference rooms), the open area scenes (\textit{et12-corner-1} and \textit{et12-corner-2}), the kitchen scene (\textit{et12-kitchen}) and the storage room scene (\textit{et07-imagination-lab}).
All the models are implemented in TensorFlow~\cite{abadi2016tensorflow} and trained with 100 CPU threads. Each thread trains on one scene and is assigned one landmark target. In each training episode, the agent starts from a random position and tries to navigate to the given target. The episode ends when the robot reaches the target or exceeds 500 steps. All the models are trained for about 40 million frames (total length of trajectories).
In evaluating the learned policy, the maximum episode length is set to 10,000. For episodes in which the robot eventually fails to reach the target, we use 10,000 as the episode length. To avoid the agent getting stuck or looping, we add a 5\% chance of exploration by executing a random policy. For each navigation target, we randomly select 10 different starting positions and report the average episode length.
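The evaluation protocol above can be sketched as a simple loop. The grid environment and policy below are hypothetical stand-ins for the dataset's scene graphs and the trained A3C model; the 10,000-step cap and the 5\% exploration rate are the values stated in the text.

```python
import random

def evaluate_episode(policy, step_fn, start, target,
                     n_actions=4, max_steps=10_000, explore=0.05):
    """Run one evaluation episode and return its length.

    `policy(state, target)` and `step_fn(state, action)` are hypothetical
    stand-ins for the trained A3C model and the scene's transition graph.
    Failed episodes are counted as `max_steps`, as in the text.
    """
    state = start
    for t in range(1, max_steps + 1):
        if random.random() < explore:           # 5% random exploration
            action = random.randrange(n_actions)
        else:
            action = policy(state, target)
        state = step_fn(state, action)
        if state == target:
            return t
    return max_steps                            # failure counted as 10,000
```

Averaging this length over 10 random starting positions per target gives the numbers reported in the tables.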
\setlength{\tabcolsep}{2.5pt}
\begin{table}[h]
\centering
\caption{Evaluation on the AdobeIndoorNav Dataset for training targets: how the learned policy works on the targets observed in training.}
\begin{tabular}{c|c|ccccc}
\hline
& \textbf{All} & \textbf{Office} & \textbf{Conf}& \textbf{Open}& \textbf{Kitchen} & \textbf{Storage}\\
\hline
\#scenes& 24 & 15 & 5 & 2 & 1 & 1 \\
\hline
Random & 258.50 & 183.80 & 369.38 & 397.83 & 409.88 & 394.73 \\
Shortest-path & 7.53 & 5.19 & 11.20 & 11.37 & 15.60 & 8.56\\
\hline
A3C One-frame& 56.88 & 6.14 & 14.12 & 586.02 & 17.21 & 13.23 \\
A3C Four-frame& 35.31 & 6.04 & 14.60 & 303.83 & 19.13 & 56.94 \\
A3C LSTM & 9.23 & 6.19 & 13.26 & 14.85 & 20.19 & 12.44 \\
\hline
\end{tabular}
\label{tab:eval_on_train}
\end{table}
As shown in Table~\ref{tab:eval_on_train},
the target-driven A3C models demonstrate successful navigation to the targets seen during training. Giving the robot more history frames appears to help it learn a better policy for navigating to training targets.
To evaluate the target generalization of these models, we randomly select 5$\sim$20 featureful targets that the robot has never seen during training and run the learned policy to navigate to these new targets. Table~\ref{tab:eval_on_test} shows the average episode length for the testing targets. In contrast to their success on the training targets, all the A3C models fail to generalize to new targets. The performance is even worse than that of the random policy.
\begin{table}[h]
\centering
\caption{Evaluation on the AdobeIndoorNav Dataset for testing targets: how the learned policy generalizes to unseen targets.}
\begin{tabular}{c|c|ccccc}
\hline
& \textbf{All} & \textbf{Office} & \textbf{Conf}& \textbf{Open}& \textbf{Kitchen} & \textbf{Storage}\\
\hline
\#scenes& 24 & 15 & 5 & 2 & 1 & 1 \\
\hline
Random & 256.05 & 187.34 & 370.89 & 371.08 & 399.19 & 339.19 \\
Shortest-path & 7.37 & 5.37 & 11.00 & 9.97 & 13.23 & 7.99 \\
\hline
A3C One-frame& 5543.02 & 4861.32 & 7198.17 & 6100.12 & 7278.01 & 4643.45 \\
A3C Four-frame& 4468.67 & 3630.33 & 6296.34 & 4751.94 & 6832.84 & 4974.86 \\
A3C LSTM & 4390.89 & 3843.10 & 5315.40 & 4785.22 & 7552.55 & 4034.85\\
\hline
\end{tabular}
\label{tab:eval_on_test}
\end{table}
Besides their weak target generalization, these A3C methods are also weak at generalizing across scenes. As reported by Zhu et al.~\cite{zhu2017target}, fine-tuning a learned policy to a new scene requires 10 million frames, which is far from practical in real-world scenarios.
We test the scene generalization issue on our dataset. We train an A3C Four-frame navigation network on 15 training scenes for around 30 million frames and then fine-tune the network on 4 exemplar test scenes: two office rooms (\textit{et07-office-114} with 48 locations and \textit{et07-office-424} with 112 locations) and two conference rooms (\textit{et07-cr-galgary} with 164 locations and \textit{et12-cr-hongkong} with 260 locations). For each test scene, we train a new scene-specific layer from scratch.
We observe that the network converges after fine-tuning for 3 million frames, which takes about six hours on our machine with 40 Intel-i7 CPUs. Table~\ref{tab:eval_on_test_finetune} reports the navigation performance evaluated on the landmark targets for the four test scenes (\textbf{Finetune}). For comparison, we also include the results of a model trained from scratch on the entire 24 scenes for 40 million frames (\textbf{Scratch}), a model trained with unified scene layer on 15 train scenes and directly tested on the four test scenes (\textbf{Unified}), the random policy (\textbf{Random}) and the shortest-path lengths (\textbf{Shortest}).
\setlength{\tabcolsep}{1pt}
\begin{table}[h]
\centering
\caption{Evaluation of Scene Generalization: how the learned policy fine-tunes to unseen scenes.}
\begin{tabular}{c|cccc}
\hline
& et07-cr-galgary & et12-cr-hongkong & et07-office-114 & et07-office-424 \\
\hline
Random & 376.39 & 359.94 & 106.63 & 179.08 \\
Shortest & 10.98 & 12.50 & 3.60 & 5.07\\
Unified&1697.96&2514.66&97.92&1385.70\\
\hline
Scratch&14.64&16.18&4.73&6.83\\
Finetune&15.19 & 2515.44 & 5.10 & 5.97 \\
\hline
\end{tabular}
\label{tab:eval_on_test_finetune}
\end{table}
Even though 3 million frames of fine-tuning successfully adapt the learned policy to the small scenes (i.e., \textit{et07-cr-galgary}, \textit{et07-office-114} and \textit{et07-office-424}), it fails to generalize to the large scene (i.e., \textit{et12-cr-hongkong}). Moreover, given that training a model on all 24 scenes from scratch takes only about 10 million frames to converge, as illustrated by the blue curve in Figure~\ref{fig:exp_dense_train_curve}, 3 million frames of fine-tuning is evidently costly.
\subsection{Dense-target Training}
Despite their strong capability to navigate to the training targets, the A3C models proposed in \cite{zhu2017target} fail to generalize to unseen test targets. In their experiments, they train the model on 8 landmark targets and test its performance on navigating to test targets that are a few steps away from the training targets. They set the maximum number of steps per episode to 500. Experiments show that the success rate quickly drops from near 100\% on the training targets to around 70\%, 40\%, 34\%, and below 29\% on test targets that are 1, 2, 4 and 8 steps away, respectively.
This reveals that the A3C method of \cite{zhu2017target} suffers from a target generalization issue: it learns an observation feature embedding that overfits to the training targets and fails to generalize to targets unseen during training. To address the issue, we propose to sample dense training targets from all targets that have distinctive image features and train A3C models on all the selected targets. For each scene, instead of training the agent to navigate to only $5\sim10$ selected landmark target views, we train a model on densely sampled target locations across the entire scene. With dense-target training, the models are expected to learn a better holistic room-layout embedding and avoid overfitting to the landmark targets. Experiments show that the resulting model generalizes to unseen targets much better.
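A minimal sketch of dense-target selection, assuming each grid location comes with a precomputed feature vector. The distinctiveness test used here (each location's feature must differ from all of its neighbors' features by a minimum Euclidean distance) and the threshold are our illustrative choices, not the paper's exact criterion.

```python
import math

def select_dense_targets(features, neighbors, min_dist=0.1):
    """Keep every location whose view is distinctive enough to serve as
    a navigation target, instead of a handful of landmarks.

    features:  {loc: feature vector (list of floats)}
    neighbors: {loc: list of adjacent locations}
    A location qualifies if its feature differs from each neighbor's
    feature by at least `min_dist` (Euclidean distance).
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    targets = []
    for loc, feat in features.items():
        if all(dist(feat, features[n]) >= min_dist for n in neighbors[loc]):
            targets.append(loc)
    return targets
```

Locations whose views are nearly identical to a neighbor's (e.g., facing a blank wall) are filtered out, and the remaining dense set replaces the $5\sim10$ landmarks during training.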
\subsection{Spatial-aware Feature}
Experiments show that dense-target training significantly improves the target generalization of the A3C models. However, it slows down the training procedure because the A3C models are trained on substantially more targets than the selected landmarks, which makes training harder to converge. In this section, we study the common navigation failure cases and propose to use spatial-aware features to speed up training.
Fig.~\ref{fig:method_spatial_aware_fail} shows common failure cases that we obtain from training an A3C model on \textit{et12-kitchen} and testing it on unseen targets. We train the model on sparsely sampled landmark targets only and evaluate on the other locations with distinctive image features. In all four examples, the agent ultimately fails to reach the target even though it does reach nearby locations with similar views.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{fig/method_spatial_aware_fail.pdf}
\caption{Navigation Failure Examples. For each pair of images, the left one shows the current observation and the right one is the target view. In these examples, the agent ultimately does not reach the target even though the current view is very similar to the target view.}
\label{fig:method_spatial_aware_fail}
\end{figure}
We observe that the learned model behaves in two clear stages. Starting from a random location, the agent quickly navigates to the area around the test target in a few steps. This behavior shows that the model successfully learns a rough room layout and uses it for navigation. However, after reaching the nearby area, the agent begins to vacillate around the target location and ultimately fails to locate the exact target view. A likely reason for this behavior is that the room-layout embedding is trained on the sparse training targets only and is inaccurate for unseen targets. In addition, the network lacks the ability to perform image-feature matching and to discriminate the local spatial relationship between similar views.
To address this issue, we propose to use spatial-aware features for image representation. Spatial-aware features encode more of the spatial information that is crucial for matching two similar views and for local spatial reasoning in the underlying 3D scene. In \cite{zhu2017target}, an ImageNet-pretrained ResNet \cite{he2016deep} is used to extract features for the observations. Each image is represented by a 2,048-dim feature extracted from the second-to-last layer, right before the final classification layer.
The 2,048-dim feature lacks spatial information about the image space because the preceding pooling layer removes it. In our method, we instead use the $7\times7\times2{,}048$ spatial-aware feature map right before the last pooling layer to represent the image. Experiments show that the proposed spatial-aware features encode more spatial information, which is crucial for image matching. Moreover, with these better image features as inputs, the A3C models converge significantly faster than models trained with 2,048-dim features.
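The difference between the two representations can be illustrated with plain arrays; the $7\times7\times2{,}048$ map below is filled with random dummy values rather than actual ResNet-50 activations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dummy stand-in for ResNet-50 activations right before the last pooling
# layer: a 7x7 grid of 2,048-dim local descriptors (random values here).
feat_map = rng.random((7, 7, 2048))

# The 2,048-dim feature used by Zhu et al. is the global average over the
# spatial grid: all 49 positions collapse into a single vector, so the
# information about *where* a pattern appears in the image is lost.
pooled = feat_map.mean(axis=(0, 1))     # shape (2048,)

# The spatial-aware feature keeps the full 7x7x2048 map: each cell still
# describes a distinct image region, which is what enables matching and
# local spatial reasoning between two similar views.
spatial = feat_map                      # shape (7, 7, 2048)
```

Any rearrangement of the 49 spatial cells leaves the pooled vector unchanged, which is exactly the layout information the spatial-aware feature preserves.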
\subsection{Indoor Robot Visual Navigation}
Robot visual navigation has a long history~\cite{giralt1979multi, moravec1980obstacle, nilsson1984shakey, moravec1983stanford, chatila1985position}. Vision-based mapping and localization have been investigated since as early as the 1980s. Indoor scenarios are especially challenging due to the GPS-denied environment. Existing methods along this line can generally be categorized into two groups: map-based navigation and map-less navigation. With an a priori constructed map, map-based methods allow the robot to plan a path ahead of time and localize itself with respect to the map using visual sensors. Usually the robot maintains a dynamic obstacle map to avoid obstacles while executing the planned path~\cite{moravec1981rover}. When navigating in an environment without a map, the robot can either incrementally build a map while navigating~\cite{gupta2017cognitive} or simply follow a reactive policy that navigates based on the current state and target state~\cite{zhu2017target}.
\subsection{Deep Reinforcement Learning}
Ever since the success of the seminal Deep Q-Network (DQN) algorithm in playing Atari games~\cite{mnih2015human} and beating human Go players~\cite{silver2016mastering}, DRL has received a lot of attention in the research community. Recent years have also seen great progress in applying DRL to robotics. To name a few examples, Kohl et al.~\cite{kohl2004policy} introduce a policy-gradient RL method for locomotion of a four-legged robot. Peters et al.~\cite{peters2008reinforcement} use RL to learn motor primitives. Silver et al.~\cite{silver2014deterministic} extend RL methods to handle MDPs with continuous actions. Kahn et al.~\cite{kahn2017uncertainty} introduce an uncertainty-aware RL method that learns to navigate an a priori unknown environment while avoiding collisions, with applications to quadrotors and RC cars. The potential of learning a model-free optimal policy end-to-end with minimal human supervision makes DRL a highly promising direction to pursue.
\subsection{Datasets}
DRL, as a data-driven method, demands a proper dataset to tune its neural networks for indoor robot visual navigation. To collect a sufficient amount of trajectories to train the policy, online learning on a real robot in a real-world scenario is still far from realistic at this stage. A feasible and common practice is to leverage a simulator to generate visual inputs to the robot based on its position in a specific scene, which relies on 3D scene datasets.
Synthetic 3D scenes provide plentiful annotations almost for free and have been widely used in research on scene understanding. Most of them can readily be leveraged to render visual inputs at a given position to enable training DRL for robot navigation.
SceneNet RGB-D~\cite{handa2015scenenet, mccormac2016scenenet} and AI2-THOR~\cite{zhu2017target, kolve2017ai2} are two recently proposed synthetic 3D scene datasets. As shown in Figure~\ref{fig:dataset_overview} (d), the rendered visual inputs are of impressively high quality. Compared with prior work, they provide photo-realistic textures and diversified scenes. While they narrow the domain gap between synthetic and real textures, they are not actually captured in real-world scenarios. Even though they may serve as good resources for training a DRL policy, they are not suitable for evaluating the learned policy in real-world scenes.
Capturing the real-world scenes in 3D has been well explored in computer vision. Most of the existing datasets captured from real-world scenes contain certain formats of 3D reconstructed scenes, such as point cloud and 3D mesh. To use them to train DRL based robot navigation, the visual inputs can be rendered from the 3D reconstructions.
ScanNet~\cite{dai2017scannet} designs an easy-to-use RGB-D capture system to collect 1,500 real-world scene scans, along with many semantic annotations. The Stanford 2D-3D-Semantics Dataset~\cite{armeni2017joint} includes real-world 3D scenes that span entire building floors. The Matterport3D dataset~\cite{chang2017matterport3d} introduces a large-scale RGB-D dataset containing 90 building-scale scenes and 194,400 RGB-D images. Despite the large number of scenes in these datasets, when rendering visual inputs from the point clouds or 3D meshes, we observe holes and artifacts, as shown in Figure~\ref{fig:dataset_overview} (e). The quality of the rendered images cannot match what the robot would actually see at a given location.
The dataset we present in this paper fills the gap between the above two categories. We design a semi-automatic pipeline to collect 360-view images at densely sampled grid locations in a total of 24 scenes. To train DRL with our dataset, one can easily find the 360-view image corresponding to a given location and crop out the desired visual input for the navigating robot. The low-cost setup makes the pipeline reproducible, so others can use it to collect more real-world scene data for their own purposes. The collected dataset can serve both as training data for DRL and as a benchmark for real-world robot visual navigation.
\section*{Introduction}
\indent Flocking of self-propelled particles (SPPs) is a ubiquitous phenomenon in nature. The size of these flocks ranges from a few
microns to the order of a few kilometers, e.g., bacterial colonies, the cytoskeleton, shoals of fish, and animal herds, where the individual
constituents show systematic movement at the cost of their free energy. Since the seminal work by Vicsek {\it et al.} \cite{vicsek1995},
numerous works have been carried out to understand the flocking phenomena of SPPs
\cite{revmarchetti2013, revtoner2005, revramaswamy2010, revvicsek2012, revcates2012}. One of the interesting features of these
kinds of out-of-equilibrium systems is the realization of true long-range order (LRO) even in two dimensions (2D) \cite{tonertu1995, tonertu1998}.
Most of the previous analytical and numerical studies of SPPs were restricted to homogeneous, or clean, systems
\cite{tonertu1995, tonertu1998, chate2008, shradha2010, vicsek1995}. However, natural systems in general contain some kind of
inhomogeneity. Therefore, some recent studies focus on the effects of different kinds of inhomogeneities present in such systems
\cite{morin2017, chepizhko2013, marchetti2017, quint2015, sandor2017}. The study in Ref.~\cite{morin2017} shows the breakdown
of the flocking state of artificially designed SPPs in the presence of randomly placed circular obstacles. In
Ref.~\cite{chepizhko2013}, Chepizhko {\it et al.} model obstacles such that the SPPs avoid them. They note a surprising non-monotonicity
in the isotropic-to-flocking transition of the SPPs in the presence of the obstacles. They also report a transition from an LRO to a
quasi-long-range-order (QLRO) state at some nonzero but finite density of obstacles. Commenting on these studies, the
authors of Ref.~\cite{reichhardt2017} stress the importance of understanding flocking phenomena in the presence of different kinds
of inhomogeneities. In the same spirit, we study the effect of rotator-type obstacles on the nature of ordering in polar SPPs. Moreover,
we propose a minimal model for SPPs in an inhomogeneous medium, the results of which can easily be compared with its well-studied
equilibrium counterpart \cite{imry1975, grinstein1976}.
In this Rapid Communication, we consider a Vicsek-like model \cite{vicsek1995} of polar SPPs in the presence of obstacles in the medium.
The obstacles are modeled as random quenched rotators which rotate the orientation of neighboring SPPs by an angle determined by
the intrinsic orientations of the rotators. The model can be visualized as a large moving crowd, amid which some random ``road signs'' have been
placed. Each road sign dictates that the neighboring people turn by a certain angle from their direction of motion.
The specific issue we address here is how correlated this collective motion remains in the presence of these random road signs.
In the limit of zero self-propulsion speed, our model reduces to the $XY$ model \cite{chaikin} with random
quenched obstacles. In the $XY$ model, any finite amount of quenched randomness is enough to destroy the orientationally ordered state
in dimension $d \leq 4$ \cite{imry1975, grinstein1976}. Therefore in 2D, an equilibrium system with quenched obstacles
does not have any ordered state. Analogously, we show that in a two-dimensional self-propelled system, quenched rotators destroy
the LRO usually found in clean polar SPPs.
In our numerical study, we note that a small density of quenched rotators leads the system to a QLRO state. In this state, the absolute
value of the average normalized velocity ${\rm V}$ decreases algebraically with the system size, and the fluctuation in the orientations
of the SPPs increases logarithmically with system size. Moreover, below a critical density of rotators $c_{rc}$, both ${\rm V}$
and the orientational fluctuation show a nice scaling collapse with scaled system size. However, with further increase in the density
of rotators $c_r$, the system shows a continuous QLRO-to-disorder (QLRO-disorder) transition.
We also write hydrodynamic equations of motion for density and velocity fields of the SPPs in the presence of quenched inhomogeneities.
A linearized study of these equations predicts an anisotropic divergence of ${\mathcal O}(1/q^4)$ in the equal-time, spatially Fourier-transformed
correlations of the hydrodynamic fields at small $q$. However, the neglected nonlinear terms presumably suppress these
fluctuations and make QLRO possible in the system.
We consider a collection of $N_s$ polar SPPs distributed over a 2D square substrate. Each particle moves with a fixed
speed ${v}_s$ along its orientation $\phi$. An individual SPP tries to reorient itself along the mean orientation of
all the neighboring SPPs (including itself) within an interaction radius $R_s$. However, ambient noise leads to orientational
perturbations. Moreover, there are $N_r$ immobile rotators randomly distributed on the substrate. Each rotator possesses an
intrinsic orientation $\varphi$, which can take any random value in the range $[-\pi,\pi]$ and remains fixed. Therefore, the rotators
are quenched in time, and we call these random quenched rotators (RQRs). Each RQR rotates the orientations of the SPPs within
an interaction radius $R_r$ by an angle determined by $\varphi$ and SPP-RQR interaction strength $\mu$.
The update rules governing position ${\bm r}_i$ and orientation $\phi_i$ of the $i^{th}$ SPP are as follows:
\begin{eqnarray}
{\bm r}_i\left(t+1\right) &=& {\bm r}_i\left(t\right) + {\bm v}_i\left(t\right), \label{rupdate} \\
\phi_i\left(t+1\right) &=& \langle\phi_j\left(t\right)\rangle_{j \in R_s} + \mu\langle\varphi_j\rangle_{j \in R_r} + \Delta\psi, \label{thetaupdate}
\end{eqnarray}
where ${\bm v}_i\left(t\right) = {v}_s \left(\cos\phi_i\left(t\right),\sin\phi_i\left(t\right)\right)$ is the velocity of the
particle $i$ at time $t$, and $\langle\phi\rangle_{R_s}$ and $\langle\varphi\rangle_{R_r}$ represent the mean orientation of all the SPPs
and the RQRs, respectively, within the interaction radii. Fluctuations in the orientation of the SPPs due to ambient noise are represented
by an additive noise term $\Delta\psi$ distributed within $\eta\left[-\pi,\pi\right]$, where the noise strength $\eta \in \left[0,1\right]$.
We call this model ``active model with quenched rotators (AMQR)," which reduces to the celebrated Vicsek model \cite{vicsek1995}
for $\mu=0$ or in the clean system, i.e., $N_r=0$.
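One synchronous update of Eqs.~(\ref{rupdate}) and (\ref{thetaupdate}) can be sketched as follows. This is an illustrative $O(N^2)$ implementation with placeholder parameter values; the SPP neighbor average is taken as a circular mean (the standard Vicsek convention), while the rotator term uses the arithmetic mean $\mu\langle\varphi_j\rangle$ as written in Eq.~(\ref{thetaupdate}).

```python
import numpy as np

def amqr_step(pos, phi, rot_pos, rot_phi, L,
              vs=1.0, mu=1.0, eta=0.1, Rs=1.0, Rr=1.0, rng=None):
    """One synchronous AMQR update (sketch; O(N^2) neighbor search).

    pos, phi        : SPP positions (N,2) and orientations (N,)
    rot_pos, rot_phi: quenched-rotator positions and intrinsic angles
    Reduces to the Vicsek model for mu=0 or when there are no rotators.
    """
    if rng is None:
        rng = np.random.default_rng()

    def torus_d2(a, b):                 # squared distances on the torus
        d = np.abs(a[:, None, :] - b[None, :, :])
        d = np.minimum(d, L - d)
        return (d ** 2).sum(-1)

    # SPP-SPP neighbors (including self); circular mean of orientations
    W = (torus_d2(pos, pos) <= Rs ** 2).astype(float)
    mean_phi = np.arctan2(W @ np.sin(phi), W @ np.cos(phi))

    # quenched-rotator contribution: mu * mean intrinsic angle within Rr
    rot_term = np.zeros_like(phi)
    if len(rot_pos):
        R = (torus_d2(pos, rot_pos) <= Rr ** 2).astype(float)
        cnt = R.sum(1)
        has = cnt > 0
        rot_term[has] = mu * (R @ rot_phi)[has] / cnt[has]

    noise = eta * rng.uniform(-np.pi, np.pi, size=phi.shape)
    new_phi = mean_phi + rot_term + noise
    new_pos = (pos + vs * np.c_[np.cos(new_phi), np.sin(new_phi)]) % L
    return new_pos, new_phi
```

With `mu=0` or an empty rotator set, the update is exactly a Vicsek step, matching the limit stated above.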
\begin{figure}[b]
\includegraphics[width=0.98\linewidth]{fig1.pdf}
\caption{ ${\rm V}$ versus $1/N_s$ plots in the (a) ordered and (b) disordered state for $\eta=0.10$.
The error bars indicate standard error in mean. The solid lines show the respective algebraic fits.
(c) Plot of ${\rm V}$ versus scaled system size $N_s/N_s^\gamma$ on log-log scale, where $\gamma$ is a function of $c_r$.
The data show good scaling for $0<c_r\le0.0125$, but deviate for $c_r>0.0125$.
}
\label{figVNs}
\end{figure}
We numerically simulate the collection of $N_s$ SPPs spread over an $L \times L$ ($L \in [50, 300]$) 2D substrate with periodic
boundary conditions. Initially the particles are assigned random velocities of constant speed ${v}_s$. The densities
of the SPPs and the RQRs are defined as $c_s=N_s/L^2$ and $c_r=N_r/L^2$, respectively. We distribute the rotators
uniformly on the substrate and randomly assign intrinsic orientations $\varphi \in [-\pi, \pi]$. In this system, the positions and
orientations of all the SPPs are updated simultaneously following Eqs.~(\ref{rupdate}) and (\ref{thetaupdate}). At every time step, we use
the OpenMP Application Program Interface for a parallel update of all the SPPs.
In this Rapid Communication, we consider $c_s=1.0$, $v_s=1.0$, and $\mu=1.0$. Moreover, we take $R_s=R_r=1$ for simplicity. In the absence of the
rotators \cite{vicsek1995}, the system shows disorder to order transition with decreasing noise strength $\eta$.
The ordering in the system is measured in terms of the conventional absolute value of the average normalized velocity
\begin{equation}
{\rm V} = \left\langle \frac{1}{N_s {v}_s} \left|\sum_{i=1}^{N_s}{\bm v}_i\right| \right\rangle
\label{defvel}
\end{equation}
of the entire system \cite{vicsek1995}. Here $\langle \cdot \rangle$ indicates an average over many realizations and time in the steady state.
${\rm V}$ varies from zero to unity across the disorder-to-order transition. For the reported data, we start averaging the
observables after $3\times10^5$ updates to ensure that the steady state is reached, and the averaging is done over the next $5\times10^5$ updates.
Up to $30$ realizations are used for better averaging.
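Computing Eq.~(\ref{defvel}) for a single configuration is a one-liner; the reported ${\rm V}$ additionally averages this quantity over time and realizations.

```python
import numpy as np

def order_parameter(phi, vs=1.0):
    """Instantaneous V = |sum_i v_i| / (N_s * vs) for one snapshot of
    SPP orientations `phi`; fully aligned particles give V = 1, and
    an isotropic configuration gives V near 0."""
    vx, vy = vs * np.cos(phi), vs * np.sin(phi)
    return np.hypot(vx.sum(), vy.sum()) / (len(phi) * vs)
```

Because all particles share the same speed $v_s$, the speed cancels and ${\rm V}$ depends only on the orientations.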
\begin{figure}[t]
\includegraphics[width=0.98\linewidth]{fig2.pdf}
\caption{ Steady-state snapshots are shown for $\eta=0.10$, $L=150$ and different $c_r$ as indicated on the respective
panels. The color bar indicates orientation of the SPPs. The rotators with random intrinsic orientation are not
shown for the clarity of the figure. }
\label{figsnap}
\end{figure}
For a fixed $\eta$, we calculate ${\rm V}$ for different $c_r$, and study its variation with system size.
As shown for $\eta=0.1$ in Fig.~\ref{figVNs}(a), in the clean system, ${\rm V}$ does not change with system size; consequently, the
system possesses a nonzero ${\rm V}$ in the thermodynamic limit. Therefore, the clean system remains in the LRO state, which is a
well-known phenomenon \cite{tonertu1998}. However, in the presence of the RQRs, ${\rm V}$ decreases algebraically with $N_s$ following
the relation
\begin{equation}
{\rm V} = {\cal A}(c_r) N_s^{-\nu(c_r)},
\label{algebreln}
\end{equation}
as shown in Figs.~\ref{figVNs}(a) and \ref{figVNs}(b). Here both ${\cal A}$ and $\nu$ are functions of $c_r$ for a fixed $\eta$. Therefore, in the
thermodynamic limit, ${\rm V}$ of the system with RQRs reduces to zero. We stress that for small $c_r$ the system remains in a QLRO
state, beyond which the AMQR shows a continuous QLRO-disorder state transition, as we will see shortly. In Fig.~\ref{figsnap}, we
show snapshots of the orientation and the local density of the SPPs for $\eta=0.1$ and different $c_r$. For $c_r=0$, all the particles
are in highly ordered state. RQRs perturb the LRO flocking as shown for $c_r=0.005, 0.01$. For high density $c_r = 0.02$, the SPPs
remain highly disordered.
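The exponent $\nu(c_r)$ in Eq.~(\ref{algebreln}) can be estimated by a linear least-squares fit in log-log space. The data below are synthetic, generated with a known amplitude and exponent purely to illustrate the procedure; they are not the measured values.

```python
import numpy as np

def fit_power_law(Ns, V):
    """Fit V = A * Ns^(-nu) by least squares on log V vs log Ns;
    returns (A, nu)."""
    slope, intercept = np.polyfit(np.log(Ns), np.log(V), 1)
    return np.exp(intercept), -slope

# synthetic data with A = 0.9, nu = 0.05 (illustrative values only)
Ns = np.array([2.5e3, 1e4, 4e4, 9e4])
V = 0.9 * Ns ** -0.05
A, nu = fit_power_law(Ns, V)
```

A nonzero fitted $\nu$ signals the algebraic decay of ${\rm V}$ with system size that distinguishes QLRO from the size-independent ${\rm V}$ of the clean LRO state.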
We further study the fluctuation in the orientation of the SPPs. The width of the normalized distribution $P(\phi)$ of the
orientations of the SPPs provides a measure of this fluctuation. It is calculated by averaging the distributions at every time step in
the steady state, and also over many realizations. While averaging, we set the mean orientation of all the distributions to $\phi=0$.
\begin{figure}[b]
\includegraphics[width=0.98\linewidth]{fig3.pdf}
\caption{ (a) Distribution $P(\phi)$ of the orientation of the SPPs is shown for $\eta=0.10$ and $c_r=0.005$.
The curves are zoomed into the range $\phi \in [-\pi/2,\pi/2]$ for better visibility. The solid lines show the respective
fits with Voigt profile.
(b) Plot of the FWHM $f$ of $P({\phi})$ versus $N_s$. In the presence of quenched rotators, $f$ increases logarithmically
with $N_s$. The dashed lines show respective fits.
(c) Plot of shifted FWHM $f-g_1(c_r)$ with scaled system size $N_s/N_s^{\Gamma}$, where both $g_1$ and $\Gamma$ are
functions of $c_r$. The scaling holds good for $c_r \le 0.0125$.
}
\label{figoriflucdir}
\end{figure}
We note that $P(\phi)$ widens with the increasing density of RQRs. This is quite intuitive since the degree of disorder increases
with $c_r$. We fit these distributions with a Voigt profile, which is defined as the convolution of the Gaussian and the Lorentzian
functions \cite{voigt}. A brief discussion of the Voigt profile and the procedure used to fit $P(\phi)$ with it are provided in
Appendix~\ref{appVoigt}. From the respective fits, we calculate the full width at half maximum (FWHM) $f$ of the
distributions.
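For context, the FWHM of a Voigt profile has no closed form, but the widely used Olivero-Longbothum approximation, $f_V \approx 0.5346\,f_L + \sqrt{0.2166\,f_L^2 + f_G^2}$, combines the Lorentzian and Gaussian component widths to about $0.02\%$ accuracy. This is our illustrative addition, not the fitting procedure of Appendix~\ref{appVoigt}.

```python
import math

def voigt_fwhm(f_G, f_L):
    """Olivero-Longbothum approximation to the Voigt FWHM, given the
    FWHMs of its Gaussian (f_G) and Lorentzian (f_L) components."""
    return 0.5346 * f_L + math.sqrt(0.2166 * f_L ** 2 + f_G ** 2)
```

In the pure-Gaussian limit ($f_L = 0$) the formula returns $f_G$ exactly, and in the pure-Lorentzian limit it returns $f_L$ to within the stated accuracy.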
We note that, in the clean system, $P(\phi)$ does not change with system size. However, for any fixed $c_r>0$, $P(\phi)$ widens
with increasing system size, as shown in Fig.~\ref{figoriflucdir}(a) for $(\eta, c_r)=(0.10, 0.005)$
(also see Appendix~\ref{appVoigt}). In Fig.~\ref{figoriflucdir}(b), we show the
variation of $f$ with system size for different $c_r$. For $c_r=0$, $f$ does not change with $N_s$. Therefore, in the clean system,
the fluctuation in the orientation of the SPPs does not depend on the system size, and the system is in the LRO state. However,
for $c_r>0$, FWHM of $P(\phi)$ follows the relation $f=g_1(c_r)+g_2(c_r) \ln (N_s)$, where both $g_1$ and $g_2$ are functions of
$c_r$.
Since $g_2 \ge 0$, $f$ increases logarithmically with $N_s$, which further confirms the QLRO in the AMQR.
\begin{figure}[t]
\includegraphics[width=0.98\linewidth]{fig4.pdf}
\caption{ (a) Variation of ${\rm V}$ with $c_r$ shown for different system sizes and $\eta=0.10$.
(b) Variance $\chi$ of ${\rm V}$ plotted with $c_r$. The peaks in the curves indicate the critical density of the rotators
$c_{rc}(L)$ for the QLRO-disorder transition for the respective system sizes.
}
\label{figchi}
\end{figure}
In Fig.~\ref{figVNs}(c), we plot ${\rm V}$ versus scaled system size $N_s/N_s^{\gamma(c_r)}$ for $\eta=0.1$ and
different $c_r$. Here $\gamma(c_r) \simeq 1-kc_r$, where $k$ is a positive constant. Moreover, $\nu = z (1-\gamma)$, where $z$ is a
nonmonotonic function of $\eta$. We note nice scaling collapse for $c_r \le 0.0125$. This predicts that, for $c_r \le 0.0125$, the system
can be divided into sub-systems of size $N_s^{\gamma(c_r)}$ within which the SPPs remain ordered. Since $\gamma=1$ for $c_r=0$, ${\rm V}$
does not depend on system size, and therefore the clean system remains in the LRO state. However, in the presence of RQRs, the
system remains in the QLRO state. Moreover, the scaling predicts self-similarity of the system for different $c_r \le 0.0125$. As shown
in Fig.~\ref{figoriflucdir}(c), we also find nice scaling collapse of $f-g_1(c_r)$ with scaled system size $N_s/N_s^{\Gamma(c_r)}$ for
different $c_r \le 0.0125$, where $\Gamma = 1 - g_2$ that varies linearly with $c_r$, for small $c_r$.
Similar scaling holds for other $\eta$ values in the QLRO state.
In Fig.~\ref{figchi}(a), we show the variation of ${\rm V}$ with $c_r$ for $\eta=0.1$ and different system sizes.
Starting from the value of ${\rm V}$ close to $1$ for small $c_r$, ${\rm V}$ shows a transition to smaller values with increasing $c_r$.
Therefore, with increasing $c_r$, a QLRO-disorder transition occurs in the system. We further calculate the variance $\chi$ of ${\rm V}$
for different system sizes and plot it as a function of $c_r$ in Fig.~\ref{figchi}(b). The data show a systematic variation of $\chi$ with
$c_r$, and a peak appears at $c_r = c_{rc}(L)$, where the fluctuation in ${\rm V}$ is largest. This suggests a continuous
QLRO-disorder state transition in the AMQR. We consider $c_{rc}(L)$ to be the critical density for the QLRO-disorder transition for
system size $L$. The position of the peak shifts from $c_r= 0.016$ to $0.0125$ as $L$ is increased from $100$ to $300$. However, we note
that the variation of $c_{rc}(L)$ flattens with increasing $L$ for all $\eta$ values. Using the extrapolated values $c_{rc}(L \rightarrow \infty)$, we
construct a phase diagram in the $\eta$--$c_r$ plane. We stress that in the presence of RQRs, the system remains in the QLRO below the
phase boundary shown in Fig.~\ref{figphasediag}.
\begin{figure}[b]
\includegraphics[width=0.98\linewidth]{fig5.pdf}
\caption{ Phase diagram in the noise-strength versus rotator-density plane. For small $c_r$, the QLRO state
prevails, beyond which the system continuously goes to the disordered state. }
\label{figphasediag}
\end{figure}
Long-distance and long-time properties of the SPPs with quenched obstacles can also be characterized using a hydrodynamic description of
the model. The relevant hydrodynamic variables for this model are (i) SPP density $\rho({{\bm r}, t})$ which is a globally conserved
quantity and (ii) velocity ${\bm v}({{\bm r}, t})$ which is a broken-symmetry parameter in the ordered state. These variables can
be obtained by suitable coarsening of corresponding discrete variables in the microscopic model
\cite{tonertu1995, tonertu1998, aparna2008, bertin2006, bertin2009, ihle2011}.
Following the phenomenology of the system, we write the hydrodynamic equations of motion for the density and the
velocity fields as
\begin{eqnarray}
\partial_t \rho &+& \nabla \cdot ({\bm v}\rho) = D_{\rho} \nabla^2 \rho, \label{eqmden} \\
\partial_t {\bm v} &+& \lambda_1 ({\bm v} \cdot \nabla) {\bm v} + \lambda_2 (\nabla \cdot {\bm v}) {\bm v} + \lambda_3 \nabla({v}^2) \notag \\
&=& (\alpha_1 - \alpha_2 {v}^2) {\bm v} - \nabla P + D_B \nabla (\nabla \cdot {\bm v}) \notag \\
&& \quad + D_T \nabla^2 {\bm v} + D_2 ({\bm v} \cdot \nabla)^2 {\bm v} + \frac{\rho_o}{\rho} {\bm \zeta} + {\bm f}. \label{eqmvel}
\end{eqnarray}
${\bm f}$ represents the annealed noise term that provides a random driving force. We assume this to be a white Gaussian noise with the correlation
\begin{equation}
\langle f_i({\bm r}, t) f_j({\bm r}^\prime, t^\prime) \rangle = \Delta \delta_{ij} \delta({\bm r}-{\bm r}^\prime) \delta(t-t^\prime),
\label{fcorr}
\end{equation}
where $\Delta$ is a constant, and dummy indices $i, j$ denote Cartesian components. The effect of obstacles is contained in the term
$\frac{\rho_o}{\rho} {\bm \zeta}$ in Eq.~(\ref{eqmvel}), where $\rho_o$ represents obstacle density, and ${\bm \zeta}({\bm r},t)$
signifies the obstacle field. We assume the correlation
\begin{equation}
\langle \zeta_i({\bm r}, t) \zeta_j({\bm r}^\prime, t^\prime) \rangle = \zeta^2 \delta_{ij} \delta({\bm r}-{\bm r}^\prime),
\label{zetacorr}
\end{equation}
which contains no time dependence, and therefore represents a quenched noise. Equations~(\ref{eqmden})-(\ref{zetacorr}) represent the
Toner-Tu \cite{tonertu1998} model for $\zeta =0$.
We check whether a broken-symmetry state of the SPPs in the presence of the obstacle field survives small fluctuations in the
hydrodynamic fields. In the hydrodynamic limit, a linearized study of Eqs.~(\ref{eqmden}) and (\ref{eqmvel})
gives spatially Fourier transformed equal-time correlation functions for the density
\begin{eqnarray}
C_{\rho\rho}({\bm q},t) =
\frac{1}{q^2} \left\{ \frac{\zeta^2 \rho_o^2 a_{\rho}(\theta)}{b(\theta)q^2+d(\theta)} + \Delta A_{\rho}(\theta) \right\}
\label{Crhorho2}
\end{eqnarray}
and the velocity
\begin{eqnarray}
C_{{vv}}({\bm q},t) =
\frac{1}{q^2} \left\{ \frac{\zeta^2 \rho_o^2 a_{v}(\theta)}{b(\theta)q^2+d(\theta)} + \Delta A_{v}(\theta) \right\}.
\label{Cvv2}
\end{eqnarray}
The parameters $a_{\rho, {v}}$, $A_{\rho, {v}}$, $b$, and $d$ depend on the specific microscopic model and the angle $\theta$ between the wave
number ${\bm q}$ and the flocking direction. A detailed calculation for Eqs.~(\ref{Crhorho2}) and (\ref{Cvv2}) is given in
Appendix~\ref{appA}. Our result matches the earlier prediction by Toner and Tu \cite{tonertu1998} for $\zeta = 0$, where the two
structure factors diverge as $1/q^2$ for small $q$.
However, the linearized theory suggests $C_{\rho\rho, vv} \sim 1/q^4$ for $\zeta \neq 0$, provided $d(\theta)=0$. In general
for a Vicsek-like model as our AMQR, $d(\theta)$ vanishes for certain directions $\theta = \theta_c$ or $\pi - \theta_c$, where
$\theta_c$ depends on the model parameters. We stress that although the quenched inhomogeneities increase fluctuation in the
system as compared to the clean case, the neglected nonlinearities suppress these higher order fluctuations so that a QLRO state
can prevail. Although an exact nonlinear calculation is {\it not practically feasible} for the 2D polar flock \cite{toner2018},
presumption of convective nonlinearities as relevant terms offers a way out \cite{revtoner2005, tonertu1998}. A nonlinear calculation
\cite{toner2018} following this presumption renormalizes diffusivities as $1/q$ so that the term $b(\theta)q^2$ in
Eqs.~(\ref{Crhorho2}) and (\ref{Cvv2}) approaches a finite value, and therefore, a QLRO state exists in the system. This explanation is
consistent with the giant number fluctuation \cite{revramaswamy2010} in the AMQR. We have checked that inclusion of the RQRs increases
the fluctuation in the system as compared to the clean case. This enhanced fluctuation destroys the usual LRO of the clean system.
However, we note that the fluctuation decreases with further increase in $c_r$ which disagrees with Eq.~(\ref{Crhorho2}), as the
linearized hydrodynamics prescribes an increase in the effect of quenched inhomogeneity with $\rho_o$. Therefore, the neglected
nonlinearity indeed plays a pivotal role in stabilizing the QLRO state in the system. A detailed discussion of these phenomenologies
is given in Appendix~\ref{appnonlinear}.
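The $q$-scaling discussed above can be made concrete numerically. A minimal sketch (Python; the angular coefficients $a$, $A$, $b$, $d$ are set to illustrative constants, not values derived from the model): for $d(\theta)=0$ the quenched part of Eq.~(\ref{Crhorho2}) diverges as $1/q^4$, while the annealed part keeps the Toner-Tu $1/q^2$ scaling.

```python
import numpy as np

def structure_factor(q, zeta, rho_o, Delta, a=1.0, A=1.0, b=1.0, d=1.0):
    """Linearized equal-time correlation of Eq. (Crhorho2)/(Cvv2):
    C(q) = (1/q^2) * [zeta^2 rho_o^2 a / (b q^2 + d) + Delta A]."""
    return (zeta**2 * rho_o**2 * a / (b * q**2 + d) + Delta * A) / q**2

# Quenched part with d = 0: decreasing q by 10 increases C by 10^4 (~1/q^4).
quenched = structure_factor(0.01, 1.0, 1.0, 0.0, d=0.0) / \
           structure_factor(0.1, 1.0, 1.0, 0.0, d=0.0)

# Clean (annealed) limit zeta = 0: the usual 1/q^2 divergence.
clean = structure_factor(0.01, 0.0, 1.0, 1.0) / structure_factor(0.1, 0.0, 1.0, 1.0)
```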
In summary, we have studied the effect of random quenched rotators on the flocking state of polar SPPs. These rotators are a kind of
obstacle that rotates the orientation of the SPPs. We find that, for small density of the rotators, the usual LRO of the clean polar
SPPs is destroyed, and a QLRO state prevails. With further increase in density of the rotators, a continuous QLRO to disorder state
transition takes place in the system. Our linearized hydrodynamic analysis predicts an anisotropic higher order fluctuation which destroys
the usual LRO of the clean SPPs. However, the neglected nonlinearities suppress these fluctuations yielding a QLRO in the system. In
equilibrium systems with random quenched obstacles, an ordered state does not exist below four dimensions \cite{imry1975,grinstein1976}.
However, as compared to the equilibrium systems, in our model for polar SPPs with quenched rotators, we find QLRO in two dimensions.
Our prediction of the QLRO in the polar SPPs in the presence of quenched obstacles agrees with recent observations \cite{toner2018, chepizhko2013}.
In contrast to the LRO and the QLRO reported in Ref.~\cite{chepizhko2013}, we note QLRO only, because of the basic difference in the
nature of obstacles. The SPP-obstacle interaction in Ref.~\cite{chepizhko2013} depends on the angle between their relative position
vector and the orientation of the SPP. Therefore, this force is a continuous function of the orientation distribution of the SPPs. On
the contrary, the quenched force offered by the obstacles in our model is random and discrete. However, similar to their results,
we note the existence of an optimal noise for which the system attains the maximum ordering in the presence of quenched rotators
(see Appendix~\ref{appoptnoise}). Our model can be applied in natural systems like a shoal of fish moving in the sea in
the presence of vortices. An experiment on a collection of fish living in a shallow-water pool
\cite{jolles2017, couzin2008, krause2018, puckett2018}, in the presence of uncorrelated artificial vortices, may verify our predictions.
S.M. acknowledges Sriram Ramaswamy for pointing out an important correction in the hydrodynamic calculation and Sanjay Puri for useful discussions.
The authors thank John Toner for his useful comments and suggestions. S.M. also thanks S. N. Bose National Centre for Basic Sciences, Kolkata
for providing kind hospitality, and the Department of Science and Technology, India for financial support. M.K. acknowledges financial support
from the Department of Science and Technology, India under the Ramanujan Fellowship.
\section{Introduction} \label{sec:introduction}
Since the IceCube detector \citep{2017JInst..12P3012A} provided evidence for a high-energy, extragalactic neutrino flux \citep{2014PhRvL.113j1101A}, it has been one of the major goals of high-energy astrophysics to connect the observed flux to known sources. Although photohadronic models for Blazars have evolved for a long time \citep{1993A&A...269...67M}, the small number of observed high-energy neutrinos makes the detection of a source or even a source population challenging. In addition to stacking analyses \citep{2017ApJ...835...45A}, time-dependent searches were carried out \citep{2015ApJ...805L...5A,2015ApJ...807...46A}, both of which have been unsuccessful so far. In either case, in order to derive a discovery potential from the photon spectrum or interpret a possible non-detection, a robust model is required. However, often extremely simplified models for the relation of the photon and neutrino flux are employed \citep[e.g.][]{2016arXiv160202012K,2016ApJ...831...12H}. Without a proper understanding of the validity of the employed assumptions, an interpretation of such models is rather difficult.
Furthermore, with the advances in computational power, much more complex models can produce results on single nodes within times of the order of minutes. While we acknowledge the difficulties in analyzing large samples of sources, bright and strongly variable sources in particular should be analyzed more rigorously. The work at hand aims to show that such an analysis is indeed easily possible and highly necessary. For this purpose we compare fluxes and neutrino rates, computed consistently within our model, with predictions of models recently used in the literature, using the modeled spectral energy distribution (SED) as input.
The model used for this work is fully time-dependent and reflects the complex interplay of electron and proton synchrotron emission, photo meson production as well as the subsequent decays and pair-cascades. It is described in section~\ref{sec:model}.
In section~\ref{sec:results} first an approximate fit to the SED of \textit{3C 279} is presented, which serves as a starting point for the parameter scan, whose results are presented subsequently. Even simple hadronic AGN models make use of a large number of parameters. It is computationally not possible to scan the entire high-dimensional parameter space. Therefore we focus on the three parameters that presumably are the most important ones. Section~\ref{sec:discussion} will discuss the obtained results and discrepancies between the analyzed models. A summary of our conclusions and a brief outlook on how complex modeling can become more mainstream are presented in section~\ref{sec:conclusion}.
\section{Model}
\label{sec:model}
The characteristic double hump structure found in the SEDs of Blazars can often be explained elegantly by the SSC paradigm. However, it was shown that the SEDs of low-peaked BL Lacs and FSRQs can often also be explained by a combination of electron synchrotron emission, proton synchrotron emission and the photo-hadronic reaction chain, provided the magnetic fields are high enough to confine the high-energy protons~\citep[e.g.][]{2013EPJWC..6105009W}. These so-called hybrid models allow an unbiased modeling of many sources, from SSC dominated to strongly hadronic. They are also ideal tools for studies identifying parameter regions with a high neutrino efficiency, i.e. a large ratio of produced neutrinos over emitted photons.
Historically the model at hand emerges from a combination of the models described in \cite{2013EPJWC..6105009W} and \cite{2016ApJ...829...56R}. During this process the two independently developed code bases were also reviewed against each other. For an efficient computation we choose a homogeneous description adapting the two-zone geometry introduced by \cite{2013EPJWC..6105009W}, consisting of a spherical radiation zone with a radius $R_{\text{rad}}$ and a smaller, nested acceleration zone with radius $R_{\text{acc}}$\footnote{It should be noted that in time-independent analyses, this approach is equivalent to a homogeneous one zone model into which a fully developed power-law is injected.}. Monoenergetic protons and electrons are injected into the acceleration zone and gain energy until the synchrotron losses balance any further acceleration gains. Particles eventually escape into the radiation zone, where they undergo all implemented radiation and scattering processes, introducing further species, namely pions, muons, positrons and neutrinos. Except for the extremely short lived pions, all species are treated as time-dependent particle distributions.
The routines computing the leptonic radiation processes are taken from \cite{2016ApJ...829...56R}, dropping all spatial dependencies. A time-dependent implementation of the photo-hadronic processes follows the model \textit{Sim-B} of \cite{2010ApJ...721..630H}. All internal sources of radiation contribute to the target photon field, that is electron-, proton- and muon synchrotron, $\pi_0$-decay and the synchrotron pair-cascades. External photon fields are not considered.
The particle acceleration used here is closely related to the statistical approach of \cite{2016ApJ...829...56R}. Due to the omitted spatial dependency, a simplified approach was developed, closely following \cite{2004PASA...21....1P}. Diffusion of particles across a shock front is assumed to happen within the acceleration zone, leading to Fermi-I acceleration. The diffusion parameter $\eta$ is defined per particle species and is assumed to scale linearly with the mass. This allows one to compute the escape timescale for the acceleration zone
\begin{equation}
T_{\text{esc}}=\frac{3}{4}\frac{\eta R_{\text{acc}}}{c}\quad.
\end{equation}
The probabilities for a particle to either return to the shock ($P_{\text{acc}}$) or escape further downstream and hence into the radiation zone ($P_{\text{esc}}$) are computed as~\citep{2004PASA...21....1P}
\begin{align}
P_{\text{esc}}&=\frac{4V_\mathrm S}{r}\quad,\\
P_{\text{acc}}&=1-P_{\text{esc}}\quad,
\end{align}
with the shock speed $V_\mathrm S$ and the compression ratio $r$. The energy gain by shock crossing therefore happens on a time scale
\begin{equation}
T_{\text{acc}}=T_{\text{esc}}\frac{P_{\text{esc}}}{P_{\text{acc}}}
\end{equation}
with an average energy gain of ~\citep{2004PASA...21....1P}
\begin{equation}
\frac{\Delta E}{E}=1+\frac{4}{3}\frac{r-1}{r}V_\mathrm S\quad.
\end{equation}
The acceleration is then implemented as explicit scatterings between energy bins. In comparison to an analytic acceleration term as part of the Fokker-Planck equation, this ansatz avoids the artificial cut-off at a set energy $\gamma_{\text{max}}$ (and the accompanying numerical difficulties) and correctly reflects the exponential decay of the particle distribution at $\gamma>\gamma_{\text{max}}$ due to the stochastic nature of the assumed acceleration process.
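For concreteness, the timescales and per-cycle gain of the Fermi-I scheme above can be evaluated directly from the quoted formulas (a Python sketch in cgs units; function and variable names are ours, not from the code described in the paper):

```python
C_LIGHT = 2.998e10  # speed of light in cm/s (cgs)

def fermi1_timescales(eta, R_acc, V_S, r):
    """Escape/acceleration timescales and per-cycle gain of the two-zone
    Fermi-I scheme; V_S is the shock speed in units of c, r the compression
    ratio, R_acc the acceleration-zone radius in cm."""
    T_esc = 0.75 * eta * R_acc / C_LIGHT             # T_esc = (3/4) eta R_acc / c
    P_esc = 4.0 * V_S / r                            # downstream escape probability
    P_acc = 1.0 - P_esc                              # return-to-shock probability
    T_acc = T_esc * P_esc / P_acc                    # timescale of energy gain
    gain = 1.0 + (4.0 / 3.0) * (r - 1.0) / r * V_S   # Delta E / E per cycle
    return {"T_esc": T_esc, "P_esc": P_esc, "P_acc": P_acc,
            "T_acc": T_acc, "gain": gain}
```

With the fit values of Table~\ref{tab:parameters} ($\eta=1$, $R_{\text{acc}}=10^{14}$\,cm, $V_\mathrm S=0.1\,c$, $r=3$) this gives $T_{\text{esc}}\approx 2.5\times10^3$\,s and $P_{\text{esc}}\approx 0.13$.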
Eventually the photon and neutrino fluxes can be computed from the photon density $N_{\tilde\nu}(\tilde\nu)$ and neutrino density $M_{\tilde E}(\tilde E)$, respectively\footnote{In order to clearly distinguish the differential densities and fluxes of photons and neutrinos, we here use the uncommon variables $M$ and $G$ for neutrinos. Please note that for absolute numbers used in section \ref{sec:results}, we keep the variables $N_\gamma$ and $N_\nu$, respectively.}, employing
\begin{equation}
\nu F_\nu(\nu)=\frac{\mathcal{D}^4}{1+\mathcal Z}\frac{h\tilde\nu^2c}{4}\frac{R_{\text{rad}}^2}{d_l^2}N_{\tilde\nu}(\tilde\nu)\quad\quad E\cdot G_E(E)=\frac{\mathcal{D}^4}{1+\mathcal Z}\frac{\tilde E^2c}{4}\frac{R_{\text{rad}}^2}{d_l^2}M_{\tilde E}(\tilde E)\quad,
\end{equation}
with the Doppler factor $\mathcal{D}$, redshift $\mathcal Z$ and luminosity distance $d_l$. The neutrino density is boosted the same way. The frequency and energy, respectively, at which the flux is observed has to be boosted according to $\nu=\mathcal{D}\tilde\nu/(1+\mathcal Z)$ and $E=\mathcal{D}\tilde E/(1+\mathcal Z)$.
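The photon half of the boosting formula can be sketched as follows (Python, cgs; a minimal illustration of the equation above, not the actual model code):

```python
H_PLANCK = 6.626e-27  # Planck constant in erg s (cgs)
C_LIGHT = 2.998e10    # speed of light in cm/s

def observed_photon_flux(nu_com, N_nu, D, Z, R_rad, d_l):
    """Observer-frame (nu, nu F_nu) from the comoving photon density
    N_nu(nu_com), following the boosting formula in the text."""
    nu_obs = D * nu_com / (1.0 + Z)  # boosted, redshifted frequency
    flux = (D**4 / (1.0 + Z)) * (H_PLANCK * nu_com**2 * C_LIGHT / 4.0) \
           * (R_rad**2 / d_l**2) * N_nu
    return nu_obs, flux
```

The $\mathcal{D}^4$ dependence makes the Doppler factor the single most sensitive parameter here: doubling $\mathcal{D}$ raises the observed flux by a factor of 16 while shifting the observed frequency by a factor of 2.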
In summary the model requires 10 input parameters, summarized in Table~\ref{tab:parameters}. For the sake of simplicity the spatial diffusion parameter is fixed to $\eta=1$. It should be noted that, as is common to any lepto-hadronic model \citep{1993A&A...269...67M,2001APh....15..121M}, the electron and proton densities describe solely the high-energy population and charge neutrality is given when low-energy populations are accounted for.
\begin{table}[h]
\centering
\caption{Summary of parameters.}
\begin{tabular}{clcc}
\hline\hline
\thead{parameter} & \thead{description} & \thead{fit} & \thead{range}\\
\hline
\\[-3mm]
$R_{\text{acc}}$ & radius of acceleration zone & $\SI{1E14}{cm}$ & -\\
$R_{\text{rad}}$ & radius of radiation zone & $\SI{1E16}{cm}$ & -\\
$B$ & magnetic field strength & $\SI{5}{G}$ & $\{1,5,25\}\,\SI{}{G}$\\
$\gamma_{\text{inj,p}}$ & injection energy for protons & $\SI{100}{}$ & -\\
$\gamma_{\text{inj,el}}$ & injection energy for electrons & $\SI{100}{}$ & -\\
$N_{\text{inj,p}}$ & injection rate for protons & $\SI{1E46}{s^{-1}}$ & $\SI{2e42}{}-\SI{2e46}{s^{-1}}$ \\
$N_{\text{inj,el}}$ & injection rate for electrons & $\SI{5E44}{s^{-1}}$ & $\SI{1e42}{}-\SI{1e46}{s^{-1}}$\\
$V_\mathrm S$ & shock speed & $\SI{0.1}{c}$ & -\\
$r$ & shock compression ratio & $3$ & -\\
$\mathcal{D}$ & Doppler factor & $40$ & -\\
\hline
\end{tabular}
\label{tab:parameters}
\end{table}
\section{Results}
\label{sec:results}
The starting point for the parameter scan is an approximate fit to the SED of \textit{3C 279}. The SED is presented in Fig.~\ref{fig:base_sed} and the fit parameters are shown in Table~\ref{tab:parameters}. The fit was done ``by eye'' and only aims to qualitatively model the distinct features of the SED, namely the flux scaling, spectral indices and energy cutoffs.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{base_sed.pdf}
\caption{An approximate fit of the hybrid model to the SED of \textit{3C 279}. Data taken from \citet{2011A&A...530A...4A}. We use the de-absorbed data as given in the paper.}
\label{fig:base_sed}
\end{figure}
The integral over the computed neutrino fluxes yields an IceCube detection rate\footnote{Neutrino absorption by matter can be ignored completely in this scenario: the approximate grammage is of the order of 1 g/cm$^2$, yielding no neutrino interactions.} of $dN_{IC}/dt=\SI{3.08E-8}{s^{-1}}$ or $\SI{2.6}{}$ events above $\SI{60}{TeV}$ in $988$ days. Considering that even the base SED data used here represents a state with unusually high flux, this is in line with the fitted number of events $n_s=1{.}1$ from \citet{2016ApJ...823...65A}. The expected background of events with $E_{dep}>\SI{60}{TeV}$ from the declination of \textit{3C 279} is around one event in $988$ days~\citep{2014PhRvL.113j1101A}.\footnote{The largest observed flare of the source~\citep{2015ApJ...808L..48P} reaches a flux of several $\SI{E-9}{erg\,cm^{-2} s^{-1}}$ in the Fermi range. This can be reproduced in our model by a fit with $B=\SI{25}{G}$ and a proton injection rate of $N_{inj,p}=\SI{2e46}{s^{-1}}$, yielding a neutrino rate of $285$ IceCube events in $988$ days above $\SI{60}{TeV}$. In order to derive a meaningful number, one would have to know the duty cycle of \textit{3C 279}.}
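The quoted rate follows from folding the predicted muon-neutrino flux with the detector response, $dN_{IC}/dt=\int A_{\text{eff}}(E)\,\mathrm{d}\Phi/\mathrm{d}E\ \mathrm{d}E$. A minimal quadrature sketch (Python; the grids and effective-area values passed in would be placeholders, not the tabulated IceCube response):

```python
import numpy as np

def detection_rate(E, dPhi_dE, A_eff):
    """Detection rate via trapezoidal quadrature.

    E: energy grid, dPhi_dE: differential muon-neutrino flux at the
    detector, A_eff: effective area evaluated on the same grid."""
    y = A_eff * dPhi_dE
    return 0.5 * np.sum((y[1:] + y[:-1]) * (E[1:] - E[:-1]))
```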
Subsequent to obtaining this fit, the electron and proton densities are altered over four orders of magnitude on a grid of size $30\times30$. This is done for three different values of the magnetic field strength $B=\{1,5,25\}\,\mathrm{G}$. Geometric parameters like jet bending or Doppler boosting are not altered in this model, as they affect photons and neutrinos in the same way, yielding no change in their ratio.
\subsection{Verification}
\label{sec:verification}
\begin{figure*}[t!]
\centering
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[height=1.7in]{verification_k-factor_delta.pdf}
\caption{Factor $K$ from Eq.~\ref{eq:factor_k} with $N_\gamma$ the number of photons produced in $\pi_0$-decay and $N_\nu$ the number of neutrinos produced via the $\Delta$-resonance.}
\end{subfigure}
~
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[height=1.7in]{verification_k-factor.pdf}
\caption{Factor $K$ from Eq. \ref{eq:factor_k} with $N_\gamma$ the number of photons produced in $\pi_0$-decay and $N_\nu$ the number of neutrinos produced in all photohadronic reaction chains.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[height=1.7in]{verification_k-factor_delta_wrong.pdf}
\caption{Like panel (a), but using the definition from Eq.~(1) of \citet{2016ApJ...831...12H}.}
\end{subfigure}
\caption{Comparison of different definitions of the factor $K$ at $B=\SI{1}{G}$.}
\label{fig:verification}
\end{figure*}
Numerical solutions are always prone to conceptual and implementation errors. Therefore thorough testing is required. Although the full photohadronic reaction chain is implemented in the model used for this study, it is possible to investigate individual contributions. This allows us to test our implementation and the correct scaling of the computed fluxes. For this purpose we adopt the relation presented in Eq.~(1) of \citet{2016ApJ...831...12H}, with $K$ being computed from our simulations.
We would also like to point out what we believe is an error in the expression used in~\cite{2016ApJ...831...12H}, namely that the particle densities should be compared, not the fluxes.
Figure~\ref{fig:verification} panels a) and b) show the value $K$, computed from the photon current density $N_\gamma=F_\gamma/E_\gamma$ and the neutrino current density $N_\nu=F_\nu/E_\nu$
\begin{equation}
\label{eq:factor_k}
\int \frac{\mathrm{d} N_\gamma}{\mathrm{d} E_\gamma}\ \mathrm{d} E_\gamma=K\cdot\int \frac{\mathrm{d} N_\nu}{\mathrm{d} E_\nu}\ \mathrm{d} E_\nu\quad,
\end{equation}
the integrals being over the entire energy domain. For panel c), we use the same definition as~\cite{2016ApJ...831...12H}.
The expected value for $K$ in case of the $\Delta(1232)$-resonance can be computed from the branching ratio into $\pi^+\ (1/3)$ and $\pi^0\ (2/3)$ and the decay chain of $\pi^+$ \citep[e.g.][]{2010ApJ...721..630H}
\begin{equation}
\pi^+\rightarrow(e^++\nu_e+\bar{\nu}_\mu)+\nu_\mu\quad.
\end{equation}
When only considering the $\Delta$-resonance, the theoretical value $K=4/3$ should be reproduced by the model in the entire parameter space. In panel a) this can be seen almost perfectly. The next panel shows that even for multi-pion production, there is no dependence of $K$ on the particle densities, as might be expected. The number of neutrinos produced per photon from photo-hadronic interactions more than triples, resulting in a factor $K\approx0.4$. However, this number will in general depend on other parameters. Also note the deviation from the doubling of the neutrino production rate assumed by~\cite{2016ApJ...831...12H}. Moreover, their comparison of the fluxes breaks down for certain changes in the parameters, as can be seen in panel c). Here an increase in electron density leads to a shift of the target photon population. This in turn shifts the energy dependent branching ratios of the decay of secondary particles, namely the $\pi^+$.
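The check performed in panels a) and b) reduces to integrating the two differential densities and taking their ratio; a sketch (Python, with toy flat spectra standing in for the simulated ones):

```python
import numpy as np

def k_factor(E_gamma, dN_gamma, E_nu, dN_nu):
    """Ratio K of Eq. (factor_k): total photon number over total neutrino
    number, both integrated over the full energy domain (trapezoidal rule)."""
    integ = lambda y, x: 0.5 * np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1]))
    return integ(dN_gamma, E_gamma) / integ(dN_nu, E_nu)

# Toy check of the Delta-resonance expectation: 4/3 photons per neutrino.
E = np.array([0.0, 1.0])
K = k_factor(E, np.array([4.0, 4.0]), E, np.array([3.0, 3.0]))
```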
\subsection{Parameter Scan}
In this section we investigate the complex connection between the photon flux in the Fermi range, which serves as a baseline for the majority of studies on candidates of neutrino sources, and the neutrino flux observable by the IceCube detector.
The probably most interesting number for a neutrino candidate source currently is the expected IceCube detection rate. For our model source this quantity is shown in Fig.~\ref{fig:icecube_detection_rate}, for which we only considered muon neutrinos and used the IceCube effective area taken from~\cite{2014ApJ...796..109A}.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{icecube_detection_rate.pdf}
\caption{Expected detection rate (counts\,s$^{-1}$) from a model source.}
\label{fig:icecube_detection_rate}
\end{figure}
For the fit parameters of \textit{3C 279} ($B=\SI{5}{G}$, $N_{\text{inj,p}}=\SI{1E46}{s^{-1}}$, $N_{\text{inj,el}}=\SI{5E44}{s^{-1}}$) a rate of approximately $\SI{e-8}{s^{-1}}$ is computed, equaling one neutrino per three years as might be expected from an extremely bright source like \textit{3C 279}.
Of further interest is the change in the dependency on the particle densities. For low magnetic fields both electron and proton densities almost equally influence the resulting neutrino rate, as would be expected from photohadronic interactions. However, with increasing $B$, the electron density loses its influence, resulting in almost vertical lines of equal neutrino rate at $B=\SI{25}{G}$. Here the proton synchrotron emission is supplying the system with a sufficient amount of seed photons for the photohadronic interactions. However, electron synchrotron emission might still be needed to model the low-energy emission of a given source.
An often cited quantity is the number of neutrinos emitted per high-energy photon. Either this value is set to a fixed rate, assuming all Fermi photons originate from $\pi_0$-decay, or power-law spectra with a fixed ratio are integrated over the Fermi and IceCube energy range, respectively.
Due to the superposition of various processes (see Fig.~\ref{fig:base_sed}) such approaches might lead to large errors with non-linear dependencies on the source parameters.
Figure~\ref{fig:icecube_nu_count_over_fermi_photon_count} shows this ratio for our model source.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{icecube_nu_count_over_fermi_photon_count.pdf}
\caption{Neutrino count in the energy range of IceCube (both muon and electron neutrinos) over the photon count in the Fermi energy range.}
\label{fig:icecube_nu_count_over_fermi_photon_count}
\end{figure}
It is important to note that, in contrast to the $K$ factor, the values stay several orders of magnitude below unity, which, in the case of \textit{3C 279}, can be attributed to the proton synchrotron emission dominating the Fermi range. Nevertheless this parameter remains a good proxy for the hadronness of a source, although its value will strongly depend on the dominance and shape of the proton synchrotron emission.
When plotting the ratio of energy fluxes instead of current densities (Fig.~\ref{fig:icecube_nu_flux_over_fermi_photon_flux}) the influence of the magnetic field is much more pronounced.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{icecube_nu_flux_over_fermi_photon_flux.pdf}
\caption{Neutrino energy flux in the energy range of IceCube (both muon and electron neutrinos) over the photon energy flux in the Fermi energy range.}
\label{fig:icecube_nu_flux_over_fermi_photon_flux}
\end{figure}
For low $B$ and proton density as well as high electron density the source becomes dominated by inverse Compton scattering, leading to a higher photon flux in the Fermi range without altering the production of neutrinos (compare upper left corner of the left panel of Fig.~\ref{fig:icecube_nu_flux_over_fermi_photon_flux}). With an increasing magnetic field the flux ratio plot approaches the form of the previous plot. The increase of the color scales is due to the energy range of IceCube being at much higher values than the Fermi range.
Of importance more from a theorists rather than an observers point of view is the ratio of IceCube neutrinos over all high-energy photons, shown in Fig.~\ref{fig:icecube_nu_flux_over_he_photon_flux}.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{icecube_nu_flux_over_he_photon_flux.pdf}
\caption{Neutrino energy flux in the energy range of IceCube (both muon and electron neutrinos) over the photon energy flux between the minimum of the SED (between the two peaks) and the highest energies.}
\label{fig:icecube_nu_flux_over_he_photon_flux}
\end{figure}
These plots largely follow Fig.~\ref{fig:icecube_nu_flux_over_fermi_photon_flux}, especially for high magnetic fields. This underlines the assessment of the Fermi range being a good proxy for the neutrino production rate in otherwise similar sources. If the source does not hold a large proton density and has only a moderate number of primary electrons, this plot will show a much more structured picture due to the complex interaction of processes in the high-energy regime.
Finally, Figs.~\ref{fig:icecube_nu_flux_over_fermi_pizero_flux} and~\ref{fig:fermi_photon_flux_over_fermi_pizero_flux} emphasize that the assumption of $\pi_0$ decay playing an important role in Blazars does not hold.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{icecube_nu_flux_over_fermi_pizero_flux.pdf}
\caption{Neutrino energy flux in the energy range of IceCube (both muon and electron neutrinos) over the photon energy flux originating from the decay of $\pi_0$s in the Fermi energy range.}
\label{fig:icecube_nu_flux_over_fermi_pizero_flux}
\end{figure}
Naturally the production rate of $\pi_0$-decay photons is strongly correlated with the neutrino rate; this, however, is irrelevant, as the former is not an observable.
On the contrary, if one were to attribute the Fermi flux to the $\pi_0$-decay alone, the resulting error in the estimated neutrino flux would be at least four orders of magnitude.
This can be seen from the comparison of the color-bar scales of Figs.~\ref{fig:icecube_nu_flux_over_fermi_photon_flux} and~\ref{fig:icecube_nu_flux_over_fermi_pizero_flux}.
This is largely because of the small real contribution of the $\pi_0$ photons to the Fermi flux.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{fermi_photon_flux_over_fermi_pizero_flux.pdf}
\caption{Ratio of the total photon energy flux and the energy flux originating from $\pi_0$ decay integrated over the Fermi energy range.}
\label{fig:fermi_photon_flux_over_fermi_pizero_flux}
\end{figure}
Even for the largest densities the contributions do not go beyond $10^{-3}$ (Fig.~\ref{fig:fermi_photon_flux_over_fermi_pizero_flux}) and show very little dependence on the magnetic field.
\section{Discussion}
\label{sec:discussion}
\subsection{Model restrictions}
As mentioned in section~\ref{sec:model} the chosen two-zone geometry is equivalent to a one-zone geometry in the steady-state case. On the other hand, a more advanced spatially-resolved model will only affect energies well below the high-energy regime, most notably the radio-regime~\citep{2016ApJ...829...56R}.
The most severe restriction of our model is the absence of proton cooling caused by photo-hadronic interactions. In a large parameter regime, where losses are dominated by synchrotron radiation, this will have no effect on the validity of our results. However, if, for extremely high injection rates, the photo-hadronic interactions produce a sufficient amount of photons to self-propel the process, a closed loop will form. In this case, energy conservation is no longer obeyed and the model would become unstable.
\subsection{Interpretation of results}
The analysis of the performed parameter scan shows a diverse, but still very clear picture.
The similarity of Figs.~\ref{fig:icecube_nu_flux_over_fermi_photon_flux} and~\ref{fig:icecube_nu_flux_over_he_photon_flux} both in structure and overall scale show that the Fermi-flux is still a representative observable for the high energy emissions of a particular source.
However, the relation between photons and neutrinos is driven very differently than often assumed:
Although many simplified assumptions on the photohadronic chain hold even for multi-pion production, the simple fact of other photon emission processes having a significant contribution to the Fermi flux will result in large deviations from predictions based on those models.
As can be seen from the SED in Fig.~\ref{fig:base_sed}, the $\pi_0$ photons are usually produced at much higher energies, leaving the Fermi range dominated by either proton synchrotron or pair cascade emission, rendering the photon flux in that energy range highly parameter dependent.
This crucial difference will lead to a more complex dependency on parameters, mainly those influencing the proton synchrotron emission.
The resulting errors for the neutrino-flux estimates can easily reach several orders of magnitude.
However, at least for large magnetic fields, once there is an estimate for the high-energy proton content of the source, a reliable estimate for the neutrino flux can be computed without a full fit of the SED (compare the vertical symmetry in the right panel of Fig.~\ref{fig:icecube_nu_flux_over_fermi_photon_flux}).
\section{Conclusion}
\label{sec:conclusion}
The work presented shows that there is no parameter-independent, bijective mapping between the high-energy photon and neutrino properties of a given source. Especially variations of the injected proton density will directly influence the ratio of the emitted neutrino flux to the high-energy photon flux, with even more complex dependencies at lower magnetic fields.
This raises concerns about the reliability of many neutrino flux estimates, especially those involving a large number of sources. Since stacking analyses cannot easily extract any source parameters, they may suffer from significant systematic errors that are usually neglected.
In contrast, for the analysis of a small number of sources it would be feasible to get much more precise estimates for the neutrino flux from the shape of the SED. Even a simple modeling of the main contributors would yield parameters like the magnetic field strength and the proton density, which in turn could be used to derive neutrino fluxes using for example the plots presented in this work.
Such an approach would at least require measurements of the spectral indices, fluxes and cut-off frequencies for both peaks of the SED.
Precise models, however, would require flux and slope in five different wave bands to yield constraints for all 10 parameters of the model.
Judging from our experience, reasonable model parameters can be calculated when the electron synchrotron emission can be constrained and when sufficient data for the HE flux slope is available.
For strong and highly variable sources it might also be worthwhile to employ time-dependent models. Photohadronic processes will result in distinct variability patterns, as was shown by \cite{2015A&A...573A...7W}, and also make it possible to investigate the underlying physical processes leading to the acceleration of particles in the first place.
\section*{Acknowledgments}
F.S. acknowledges support from NRF through the MWL program. This work is based upon research supported by the National Research Foundation and Department of Science and Technology. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the authors and therefore the NRF and DST do not accept any liability in regard thereto.
\bibliographystyle{elsarticle-num}
\section*{List of Acronyms}
\begin{flushleft}
\noindent \begin{tabular}[h]{p{0.2\linewidth} p{0.72\linewidth}}
BBHT-QSA & Boyer-Brassard-H{\o}yer-Tapp Quantum Search Algorithm\\
BF & Brute Force\\
BER & Bit Error Ratio\\
CDP & Classical Dynamic Programming\\
CF(E) & Cost Function (Evaluation)\\
DHA & D{\"u}rr-H{\o}yer Algorithm\\
DN & Destination Node\\
EQPO & Evolutionary Quantum Pareto Optimization\\
HP & Hardware Parallelism\\
MODQO & Multi-Objective Decomposition Quantum Optimization\\
\end{tabular}
\end{flushleft}
\begin{flushleft}
\begin{tabular}[h]{p{0.2\linewidth} p{0.72\linewidth}}
MO-ACO & Multi-Objective Ant Colony Optimization\\
NDQO & Non-Dominated Quantum Optimization\\
(P-)NDQIO & (Preinitialized) Non-Dominated Quantum Iterative Optimization\\
NSGA-II & Non-dominated Sorting Genetic Algorithm II\\
OF & Objective Function\\
OPF-SR & Optimal Pareto Front Self-Repair \\
(O)PF & (Optimal) Pareto Front\\
QoS & Quality-of-Service\\
QP & Quantum Parallelism\\
RN & Relay Node\\
SN & Source Node\\
UV & Utility Vector\\
UF & Utility Function \\
WMHN & Wireless MultiHop Network
\end{tabular}
\end{flushleft}
\section{Introduction}
\IEEEPARstart{T}{he} concept of \emph{Wireless Multihop Networks}~(WMHN) \cite{jiao2016backpressure} enables the communication of remote nodes by forwarding the transmitted packets through a cloud of mobile relays. Naturally, the specific choice of the relays plays a significant role in the performance of WMHNs \cite{alawieh2009mwh}, thus bringing their routing optimization in the limelight. Explicitly, optimal routing relies on a fragile balance of diverse and often conflicting \emph{Quality-of-Service}~(QoS) requirements \cite{chen2011fundamental}, such as the route's overall \emph{Bit-Error-Ratio}~(BER) or \emph{Packet Loss Ratio}~(PLR), its total power consumption, its end-to-end delay, the route's achievable rate, the entire system's sum-rate and its ``lifetime'' \cite{yetgin2015network}.
For the sake of taking into account multiple QoS requirements, several studies consider single-component \emph{Objective Functions}~(OF) as their optimization objectives. In this context, the metric of \emph{Network Lifetime}~(NL) \cite{Abdulla:HYMN,yetgin2015network} has been utilized, which involves the routes' power consumption in conjunction with the nodes' battery levels. Additionally, the so-called \emph{Network Utility}~(NU) \cite{tan2015utility} also constitutes a meritorious single-component optimization OF. Apart from the aforementioned QoS requirements, NU also takes into account the routes' achievable rate~\cite{shi2008cross}. In conjunction with the construction of aggregate functions, the authors of \cite{caleffi2012opera,banirazi2014heat} also incorporate QoS as constraints, thus providing a more holistic view of the routing problem. In this context, Banirazi \emph{et al.}~\cite{banirazi2014heat} optimized an aggregate function of the Dirichlet routing cost as well as the average network delay at specific operating points that maximize the network throughput.
The beneficial properties of \emph{dynamic programming} \cite{dasgupta2006algorithms} have been exploited for the sake of identifying the optimal routes, while relying on single-component aggregate functions. In this context, Dijkstra's algorithm~\cite{ramirez2013optimal,zuo2014cross,luo2014green} has been employed, since it is capable of approaching the optimal routes at the cost of imposing a complexity on the order of $O(E^3)$, where $E$ corresponds to the number of edges in the network's graph. Additionally, the appropriately modified Viterbi decoding algorithm \cite{you2012anear,wang2013dynamic} has also been utilized for solving single-component routing optimization problems, where the route exploration process can be viewed as a \emph{trellis graph} and thus the routing problem is transformed into a decoding problem. Explicitly, this transformation is reminiscent of the famous \emph{Bellman-Ford} algorithm~\cite{yao2016secure}.
The aforementioned approaches fail to identify the potential discrepancies among the QoS requirements, but they can be unified by the concept of \emph{Pareto Optimality}~\cite{deb2005mo}. However, the search-space of multi-component optimization is inevitably expanded due to combining the single-component OFs. Furthermore, the complexity is on the order of $O(N^2)$, where $N$ corresponds to the total number of eligible routes. Additionally, since $N$ increases exponentially as the relay nodes proliferate~\cite{Yetgin:NSGA_2}, the Pareto-optimal routing problem is classified as \emph{Non-deterministic Polynomial hard}~(NP-hard) \cite{alanis2014ndqo}. This escalating complexity can be partially mitigated by identifying a single Pareto-optimal solution. For instance, Gurakan \emph{et al.} \cite{gurakan2016optimal} conceived an optimal iterative routing scheme for identifying a single Pareto-optimal solution in terms of the sum rate and the energy consumption of wireless energy-transfer-enabled networks. However, in our application we are primarily interested in identifying the entire set of Pareto-optimal solutions, since it provides fruitful insights into the underlying trade-offs \cite{deb2005mo}. In this context, multi-objective evolutionary algorithms \cite{Yetgin:NSGA_2,Camelo:NSGA, Martins:DCCP} have been employed for addressing the escalating complexity. In particular, Yetgin \emph{et al.} \cite{Yetgin:NSGA_2} used both the \emph{Non-dominated Sorting Genetic Algorithm II}~(NSGA-II) and the \emph{Multi-Objective Differential Evolution Algorithm}~(MODE) for optimizing the transmission routes in terms of their end-to-end delay and power dissipation. While considering a similar context, Camelo \emph{et al.} \cite{Camelo:NSGA} invoked the NSGA-II for optimizing the same QoS requirements for both the ubiquitous \emph{Voice over Internet Protocol}~(VoIP) and for file transfer.
Additionally, the so-called \emph{Multi-Objective Ant Colony Optimization}~(MO-ACO) algorithm \cite{lopez2012theauto} has been employed in \cite{alanis2014ndqo} for the sake of addressing the multi-objective routing problem in WMHNs.
Quantum computing provides a powerful framework \cite{grover1996fast, PROP:PROP493, durr1996quantum} for the sake of rendering Pareto-optimal routing problems tractable by exploiting the so-called \emph{Quantum Parallelism}~(QP) \cite{nielsen2010quantum}. Explicitly, in \cite{wang2016quantum} \emph{Quantum Annealing}~\cite{wang2016differential} has been invoked for the sake of optimizing the activation of the wireless links in wireless networks, while maintaining the maximum throughput and minimum interference as well as providing a substantial complexity reduction w.r.t. its classical counterpart, namely simulated annealing. In terms of Pareto optimal routing using \emph{universal quantum computing}~\cite{nielsen2010quantum}, the so-called \emph{Non-Dominated Quantum Optimization}~(NDQO) algorithm proposed in \cite{alanis2014ndqo} succeeded in identifying the entire set of Pareto-optimal routes at the expense of a complexity, which is on the order of $O(N\sqrt{N})$, relying on QP. As an improvement, the so-called \emph{Non-Dominated Quantum Iterative Optimization}~(NDQIO) algorithm was proposed in \cite{alanis2015ndqio}. Explicitly, the NDQIO algorithm is also capable of identifying the entire set of Pareto-optimal routes, while imposing a parallel complexity and a sequential complexity defined\footnote{We define the \emph{parallel complexity} as the complexity imposed while taking into account the degree of parallelism. By contrast, the sequential complexity does not consider any kind of parallelism. In \cite{alanis2015ndqio}, they are referred to as \emph{normalized execution time} and \emph{normalized power consumption}, respectively.} in \cite{alanis2015ndqio}, which is on the order of $O(N_\text{OPF}\sqrt{N})$ and $O(N^2_\text{OPF}\sqrt{N})$, respectively, by relying on the beneficial synergy between QP and \emph{Hardware Parallelism}~(HP). Note that $N_\text{OPF}$ corresponds to the number of Pareto-optimal routes.
Despite the substantial complexity reduction offered both by the NDQO and the NDQIO algorithms, the multi-objective problem still remains intractable, when the network comprises an excessively high number of nodes due to the escalating complexity. Explicitly, Zalka~\cite{zalka1999grover} has demonstrated that the complexity order of $O(\sqrt{N})$ is the minimum possible, as long as the database values are uncorrelated. By contrast, when the formation of the Pareto-optimal route-combinations becomes correlated owing to socially-aware networking \cite{alanis2016modqo}, a further complexity reduction can be achieved. Based on this specific observation, we will design a novel algorithm, namely the \emph{Evolutionary Quantum Pareto Optimization}~(EQPO), in order to exploit the correlations exhibited by the individual Pareto-optimal routes by appropriately constructing trellis graphs that guide the search process in the same fashion as in Viterbi decoding. Furthermore, we will also exploit the synergies between QP and HP for the sake of achieving an additional complexity reduction by considering as low a fraction of the database entries as possible, while still guaranteeing a near-full-search-based performance.
Our contributions are summarized as follows:
\emph{
\begin{enumerate}
\item[\emph{1)}] In Section~\ref{sec:mordp}, we develop a novel multi-objective dynamic programming framework for generating potentially Pareto-optimal routes relying on the correlations of the specific links constituting the Pareto-optimal routes, hence substantially reducing the total number of routes considered. Explicitly, this framework is a multi-objective extension of the popular single-objective Bellman-Ford algorithm.
\item[\emph{2)}] In Section~\ref{sec:eqpo}, we propose a novel quantum-assisted algorithm, namely the \emph{Evolutionary Quantum Pareto Optimization} algorithm, which jointly exploits our novel dynamic programming framework as well as the synergies between the QP and the HP for the sake of solving the multi-objective routing problem of WMHNs.
\item[\emph{3)}] In Section~\ref{sec:accVScomp}, we also characterize the performance versus complexity of the EQPO algorithm and demonstrate that it achieves both a parallel and a sequential complexity reduction of at least an order of magnitude for a 9-node WMHN, when compared to that of the NDQIO algorithm.
\end{enumerate}
}
The rest of this paper is organized as follows. In Section~\ref{sec:netSec}, we will briefly discuss the specifics of the network model considered in our case study. In Section~\ref{sec:mordp}, we will present a dynamic programming framework, which is optimal in terms of its heuristic accuracy. In Section~\ref{sec:eqpo}, we will relax the optimal framework of Section~\ref{sec:mordp} for the sake of striking a better accuracy versus complexity trade-off with the aid of our EQPO algorithm. Subsequently, in Section~\ref{subsec:complexity} we will analytically characterize the EQPO algorithm's complexity and in Section~\ref{subsec:accuracy} we will evaluate its performance.
\section{Network Specifications}\label{sec:netSec}
In the context of this treatise, the model of the networks considered both in \cite{alanis2014ndqo} and in \cite{alanis2015ndqio} has been adopted. To elaborate further, the WMHN considered is a fully connected network and it consists of a single \emph{Source Node}~(SN), a single \emph{Destination Node}~(DN) and a cloud of \emph{Relay Nodes}~(RN). The SN and the DN are located in the opposite corners of a (100$\times$100) m$^2$ square-block area, which is the WMHN coverage area considered. By contrast, the RNs are considered to be roaming within the coverage area having random locations, which obey the uniform distribution within the WMHN coverage area. A WMHN topology is exemplified in Fig.~\ref{fig:network-topology} for a WMHN consisting of 5 nodes in total. Additionally, a cluster-head equipped with a quantum computer, which is responsible for collecting all the required WMHN information, such as the nodes' geolocations and their interference levels, is considered to be present at the DN side. Therefore, we should point out that this treatise is focused on a centralized protocol.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.8]{network-topology}
\caption{Exemplified WMHN topology associated with 5 nodes. The presence of a cluster-head in possession of a quantum computer is considered at the DN side as in \cite{alanis2014ndqo} and in \cite{alanis2015ndqio}. The interference levels experienced by each node are presented in the legend.\label{fig:network-topology}}
\end{figure}
Based on the network information gathered, the WMHN cluster-head has to identify the optimal routes emerging from the SN to the DN based on certain \emph{Utility Functions} (UF). Similar to \cite{alanis2014ndqo} and \cite{alanis2015ndqio}, we have jointly taken into account the route's overall delay, its overall power consumption and its overall \emph{Bit Error Ratio} (BER). Before delving into the UFs, let us define a legitimate route of our WMHN consisting of $N_\text{nodes}$ nodes, as $x_r = [\text{SN},\dots,\text{DN}]$, which contains each RN only once for the sake of limiting the total number $N$ of legitimate routes, while at the same time avoiding routes associated with excessive power consumption and delay. Note that we have associated the SN and the DN with the node indices 1 and $N_\text{nodes}$, respectively, in the context of this treatise. Additionally, these legitimate routes are mapped to a specific index $x$ under lexicographic ordering using \emph{Lehmer Encoding}\footnote{Lehmer Encoding maps a specific permutation to an index in the \emph{factoradic basis} \cite{lehmer1960teaching}.} \cite{lehmer1960teaching}. The route's overall delay $D(x)$ is considered as one of our UFs, which is quantified in terms of the number of hops established by the route. This is formally formulated as follows:
\begin{equation}\label{eq:delay}
D(x) = \abs{x_r}-1,
\end{equation}
where the operator $\abs{\cdot}$ corresponds to the number of nodes along the route $x_r$ including the SN and DN. Moving on to the $x$-th route's overall power consumption $L(x)$, it is proportional to the sum of path-losses incurred by each of the individual links constituting the route. Explicitly, the path-loss $L_\text{dB}(i,j)$ quantified in dB for a single link between the $i$-th and the $j$-th nodes is equal to \cite{alanis2015ndqio}:
\begin{equation}
L_\text{dB}(i,j) = P_{Tx,ij} - P_{Rx,ij}= 10\alpha\log_{10}\left(\frac{4 \pi d_{i,j}}{\lambda_c}\right),
\end{equation}
where $\alpha$ corresponds to the path-loss exponent, $d_{i,j}$ is the distance between the two nodes and $\lambda_c$ denotes the carrier's wavelength. In our case-study we have set $\alpha=3$ and $\lambda_c\simeq 0.125$ m corresponding to a frequency of $f_c=2.4$ GHz. Consequently, the second UF is formulated as follows:
\begin{equation}\label{eq:power}
L(x) = \sum\limits_{i=1}^{\abs{x_r}-1}10^{L_\text{dB}(x_{r}^{(i)},x_{r}^{(i+1)})/10}.
\end{equation}
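As a minimal illustration of the path-loss model and the power-consumption UF above, the two may be sketched as follows; the function and variable names are our own, and the node positions are assumed to be given as planar coordinates in metres:

```python
import math

ALPHA = 3          # path-loss exponent alpha used in this case study
LAMBDA_C = 0.125   # carrier wavelength in metres (f_c = 2.4 GHz)

def path_loss_db(d):
    """Path loss in dB for a link of distance d metres:
    10 * alpha * log10(4 * pi * d / lambda_c)."""
    return 10 * ALPHA * math.log10(4 * math.pi * d / LAMBDA_C)

def power_uf(route, pos):
    """Power-consumption UF L(x): sum of the per-link path losses
    converted to the linear scale.

    route -- list of node indices, e.g. [1, 3, 5]
    pos   -- dict mapping node index to (x, y) coordinates in metres
    """
    total = 0.0
    for a, b in zip(route, route[1:]):
        d = math.dist(pos[a], pos[b])
        total += 10 ** (path_loss_db(d) / 10)
    return total
```

Summing in the linear rather than the dB domain mirrors the conversion $10^{L_\text{dB}/10}$ used in the UF above.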
Moving on to the final UF, namely the BER, let us first elaborate on the interference levels experienced by the nodes. In our specific scenario, there is only a single pair of source and destination nodes, resulting in a single route being active. Additionally, we have assumed that the WMHN has a sufficient number of orthogonal spreading codes and sub-carriers for the sake of efficiently separating the routes as in \cite{alanis2016modqo}. In this context, there is no interference stemming from the WMHN itself; however, we have assumed that a sufficiently high number of users access the channel, hence the resultant interference can be treated as \emph{Additive White Gaussian Noise} (AWGN), owing to the \emph{Central Limit Theorem} (CLT) \cite{steele1999mobile}. Therefore, the interference is modeled by a random Gaussian process, with its mean set to -90 dBm and its standard deviation set to 10 dB, while the transmission power is set to $P_{Tx}=20$~dBm. Additionally, the nodes transmit their messages using the uncoded QPSK scheme \cite{hanzo2004single} over uncorrelated Rayleigh fading channels and utilize \emph{Decode-and-Forward} relaying \cite{yang2015isthe} for forwarding the respective messages. Based on these assumptions, we can readily use the closed-form BER performance of the adopted scheme versus the received \emph{Signal-to-Noise Ratio} (SNR), while the overall route's BER $P_e(x)$ can be calculated using the following recursive formula \cite{alanis2014ndqo}:
\begin{equation}\label{eq:ber_rec}
P_{e,tot}=P_{e,1}+P_{e,2}-2P_{e,1}P_{e,2},
\end{equation}
which corresponds to the output BER $P_{e,tot}$ of a two-stage \emph{Binary Symmetric Channel} (BSC) \cite{alanis2014ndqo}, where $P_{e,1}$ and $P_{e,2}$ represent the BER associated with the first and the second stage, respectively.
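Applying this two-stage recursion hop by hop yields the end-to-end BER of an arbitrarily long route; a minimal sketch (the function name is ours):

```python
def route_ber(link_bers):
    """End-to-end BER of a route modelled as a cascade of binary
    symmetric channels, applying p_tot = p1 + p2 - 2*p1*p2 per hop."""
    p_tot = 0.0
    for p in link_bers:
        p_tot = p_tot + p - 2.0 * p_tot * p
    return p_tot
```

For instance, two hops with BERs of 0.1 and 0.2 yield an end-to-end BER of $0.1 + 0.2 - 2 \cdot 0.1 \cdot 0.2 = 0.26$.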
Having described the UFs considered, let us now proceed by defining our optimization problem. Explicitly, we will jointly consider the UFs in the form of a \emph{Utility Vector}~(UV) $\mathbf{f}(x)$, which is defined as follows:
\begin{equation}\label{eq:uv}
\mathbf{f}(x) = \left[ P_e(x), L(x), D(x) \right],
\end{equation}
where $D(x)$ and $L(x)$ correspond to the $x$-th route's delay and power consumption defined in Eqs.~(\ref{eq:delay}) and (\ref{eq:power}), while $P_e(x)$ denotes the $x$-th route's end-to-end BER, which is recursively evaluated using Eq.~(\ref{eq:ber_rec}). Explicitly, we opt for jointly minimizing the entire set of UFs considered by the UV of Eq.~(\ref{eq:uv}). Therefore, for the evaluation of the fitness of the UVs we will utilize the concept of \emph{Pareto Optimality}\footnote{The readers should refer to \cite{alanis2016modqo} for a more detailed tutorial on Pareto optimality.}~\cite{deb2005mo}, which is encapsulated by Definitions~\ref{dfn:pd} and \ref{dfn:po}.
\begin{dfn}
\label{dfn:pd} {\bf Pareto Dominance} ~\cite{deb2005mo}: A particular route $x_i$ associated with the UV $\mathbf{f}(x_i) = [f_1(x_i),\dots,$ $f_K(x_i)]$, where $K$ is the number of the UFs considered, is said to strongly dominate another route $x_j$ associated with the UV $\mathbf{f}(x_j) = [f_1(x_j),\dots, f_K(x_j)]$, denoted by $\mathbf{f}(x_i)\succ\mathbf{f}(x_j)$, iff we have $f_k(x_i)<f_k(x_j)$, $\forall{k}\in\{1,\dots,K\}$. Equivalently, the route $x_i$ is said to weakly dominate another route $x_j$, denoted by $\mathbf{f}(x_i)\succeq\mathbf{f}(x_j)$, iff we have $f_k(x_i)\leq f_k(x_j)$, $\forall{k}\in\{1,\dots,K\}$ and $\exists k^\prime\in\{1,\dots,K\}$, so that we have $f_{k^\prime}(x_i)<f_{k^\prime}(x_j)$.
\end{dfn}
\begin{dfn}
\label{dfn:po} {\bf Pareto Optimality}~\cite{deb2005mo}: A particular route $x_i$ associated with the UV $\mathbf{f}(x_i)$ is Pareto-optimal, iff there is no route that dominates $x_i$, i.e. we have $\nexists{x_j}$ so that $\mathbf{f}(x_j)\succ\mathbf{f}(x_i)$ is satisfied. Equivalently, the route $x_i$ is strongly Pareto-optimal iff there is no route that weakly dominates $x_i$, i.e. we have $\nexists{x_j}$, so that $\mathbf{f}(x_j)\succeq\mathbf{f}(x_i)$ is satisfied.
\end{dfn}
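The two dominance relations of Definition~\ref{dfn:pd} translate directly into predicates over UVs; a minimal sketch (names are ours), treating UVs as equal-length sequences of UF values to be minimized:

```python
def strongly_dominates(f_i, f_j):
    """Definition of strong dominance: every component of f_i is
    strictly smaller than the corresponding component of f_j."""
    return all(a < b for a, b in zip(f_i, f_j))

def weakly_dominates(f_i, f_j):
    """Definition of weak dominance: no component of f_i is worse,
    and at least one is strictly better."""
    return (all(a <= b for a, b in zip(f_i, f_j))
            and any(a < b for a, b in zip(f_i, f_j)))
```

Under Definition~\ref{dfn:po}, a route is then Pareto-optimal iff no other route's UV strongly dominates its UV.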
Explicitly, Definition~\ref{dfn:pd} provides us with the criterion for evaluating the fitness of a specific route with respect to another reference route, while Definition~\ref{dfn:po} outlines the condition of the specific route's optimality. Based on the number of routes dominating a specific route, it is possible to group the routes into so-called \emph{Pareto Fronts}~(PF). Explicitly, the PF comprising the Pareto-optimal routes, which are dominated by no other routes according to Definition~\ref{dfn:po}, is often referred to as the \emph{Optimal Pareto Front}~(OPF).
In our application, our aim is to identify the entire set of weakly Pareto-optimal routes for the sake of gaining insight into the routing trade-offs associated with the UFs considered. Naturally, for the sake of identifying a specific route as Pareto-optimal we have to perform precisely $(N-1)$ Pareto-dominance comparisons, where $N$ corresponds to the total number of legitimate routes. Therefore, the complexity imposed by the exhaustive search aimed at identifying the entire set of routes belonging to the OPF is on the order of $O(N^2)$. Explicitly, the total number $N$ of legitimate routes increases exponentially as the number $N_\text{nodes}$ of nodes increases \cite{alanis2014ndqo}, hence rendering the multi-objective routing problem NP-hard. Thus, sophisticated methods are required for finding all of the solutions.
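The exhaustive search just described may be sketched as a double loop over the legitimate routes' UVs, where the $O(N^2)$ cost is visible directly; helper and variable names are ours:

```python
def exhaustive_opf(uvs):
    """Brute-force OPF identification: O(N^2) pairwise weak-dominance
    comparisons over a list of utility vectors; returns the indices of
    the routes that no other route weakly dominates."""
    def weakly_dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    # A UV never weakly dominates itself, so self-comparison is harmless.
    return [i for i, f in enumerate(uvs)
            if not any(weakly_dominates(g, f) for g in uvs)]
```

For five routes with two-component UVs $[1,3]$, $[2,2]$, $[3,1]$, $[2,3]$ and $[3,3]$, only the first three are mutually non-dominated and survive.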
Let us now proceed by elaborating on our novel dynamic framework designed for efficiently exploring the search space.
\section{Multi-Objective Routing Dynamic Programming Framework}\label{sec:mordp}
Before delving into the analysis of our multi-objective dynamic programming framework, which is specifically tailored for our routing problem, we will express each of the UFs considered in the UV of Eq.~(\ref{eq:uv}) as a weighted sum of the specific UFs associated with the individual links constituting a particular route. Explicitly, the power consumption has already been expressed in this form based on Eq.~(\ref{eq:power}). As for the delay, which we have defined as the number of hops, it may be redefined as follows:
\begin{equation}\label{eq:delay_convex}
D(x) = \sum\limits_{i=1}^{\abs{x_r}-1}\left(1-\delta_{x_r^{(i)},x_r^{(i+1)}}\right),
\end{equation}
where $\delta_{i,j}$ corresponds to the \emph{Kronecker delta} function \cite{abramowitz1966handbook}, while $x_r$ and $x$ represent the route and its associated index, respectively. As for the route's overall BER, the recursive formula of Eq.~(\ref{eq:ber_rec}) may be approximated as follows:
\begin{equation}\label{eq:ber_convex}
P_e(x) = \sum\limits_{i=1}^{\abs{x_r}-1}P_{e,x_r^{(i)},x_r^{(i+1)}}-\epsilon(x)\thickapprox \sum\limits_{i=1}^{\abs{x_r}-1}P_{e,x_r^{(i)},x_r^{(i+1)}},
\end{equation}
where $P_{e,k,l}$ represents the BER of the specific link established between the $k$-th and the $l$-th nodes, while $\epsilon(x)$ is the approximation error, which is on the order of:
\begin{equation}\label{eq:ber_approx_error}
\epsilon(x) = O\left( \sum\limits_{i=1}^{\abs{x_r}-1}\sum\limits_{\scriptsize\begin{array}{c}
j=1\\ j\neq i
\end{array}}^{\abs{x_r}-1}P_{e,x_r^{(i)},x_r^{(i+1)}}P_{e,x_r^{(j)},x_r^{(j+1)}}\right).
\end{equation}
Since the sum of the pairwise products of the links' BERs is typically several orders of magnitude lower than the sum of the BERs themselves, the approximation error of Eq.~(\ref{eq:ber_convex}) may be deemed negligible.
Having expressed the UFs considered as a weighted sum of the UFs associated with their links, we may now proceed by exploiting this specific property for the sake of achieving a further complexity reduction. In fact, it is possible to transform our composite multi-objective routing problem into a series of smaller subproblems, thus arriving at a dynamic programming structure. This transformation is performed with the aid of Definition~\ref{def:route_generation} in conjunction with Proposition~\ref{prop:route_optimality}.
\begin{dfn}\label{def:route_generation}
A specific route $x=\{SN{\rightarrow}\bar{R}_i{\rightarrow}DN\}$ is said to generate another route $x_g^{(j)}$ by inserting a single RN $R_j$ between the last RN and the DN. Explicitly, the resultant route $x_g^{(j)}$ is $x^{(j)}_g=\{SN{\rightarrow}\bar{R}_i{\rightarrow}R_j{\rightarrow}DN\}$, $\forall j \in \{1,\dots,N_\text{nodes}-2\}$.
\end{dfn}
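Under our node-indexing convention (SN${}=1$, DN${}=N_\text{nodes}$, RNs in between), the generation rule of Definition~\ref{def:route_generation} may be sketched as follows; the function name is ours:

```python
def generate_routes(route, n_nodes):
    """Generation rule: produce one new route per unused RN by inserting
    it between the last RN (or the SN) and the DN."""
    relays = set(range(2, n_nodes))        # RN indices 2 .. n_nodes - 1
    unused = sorted(relays - set(route))   # each RN may appear only once
    return [route[:-1] + [r, route[-1]] for r in unused]
```

For instance, in a 5-node WMHN the direct route $[1, 5]$ generates $[1, 2, 5]$, $[1, 3, 5]$ and $[1, 4, 5]$, while $[1, 2, 5]$ generates only $[1, 2, 3, 5]$ and $[1, 2, 4, 5]$.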
\begin{prop}\label{prop:route_optimality}
Let us consider a specific route $x=\{SN{\rightarrow}\bar{R}_i{\rightarrow}DN\}$ associated with the UV $\mathbf{f}(x)= [f_1(x),\dots,f_K(x)]$ and its sub-route $x^\prime=\{SN{\rightarrow}\bar{R}_i\}$ associated with the UV $\mathbf{f}(x^\prime)= [f_1(x^\prime),\dots,f_K(x^\prime)]$. Let us assume furthermore that each component $f_k(x)$ of the UV associated with the route $x$ has a positive value and that it can be expressed as a sum of the respective UFs of its individual links $x_{i,i+1}$, i.e. we have:
\begin{equation}\label{eq:uf_prot}
f_k(x)=\sum\limits^{\abs{x}-1}_{i=1}f_k(x_{i,i+1}),
\end{equation}
with ${f}_k(x_{i,i+1})>0,~\forall~k,i,x:~k\in\{1,...,K\},~ i\in\{1,...,|x|-1\},~x\in S$, where $K$ and $S$ are the number of optimization objectives and the set of legitimate routes, respectively. The route $x$ cannot generate any Pareto-optimal routes using the rule of Definition~\ref{def:route_generation} if there is a route $x_d=\{SN{\rightarrow}\bar{R}_j{\rightarrow}DN\}$ from the SN to the DN associated with $\bar{R}_j\neq \bar{R}_i$ that weakly dominates the sub-route $x^{\prime}$, i.e. if we have $\exists x_d\in S: \mathbf{f}(x_d)\succeq \mathbf{f}(x^\prime)$. The respective proof is presented in Appendix~\ref{app:proof}.
\end{prop}
Explicitly, Proposition~\ref{prop:route_optimality} guarantees that a specific route $x=\{SN{\rightarrow}\bar{R}_i{\rightarrow}DN\}$ comprising the sub-route $x^\prime=\{SN{\rightarrow}\bar{R}_i\}$ cannot generate Pareto-optimal routes by adding an intermediate RN to $x$ between its last RN and the DN, if the sub-route $x^\prime$ is weakly dominated by any of the legitimate routes. Explicitly, should its sub-route $x^\prime$ be sub-optimal, the respective route $x$ will be sub-optimal as well, since we have $\exists x_d\in S: \mathbf{f}(x_d)\succeq \mathbf{f}(x^\prime)\succ \mathbf{f}(x)$, based on Proposition~\ref{prop:route_optimality}. Note that the opposite of this statement does not apply, since there exist sub-optimal routes, whose sub-routes are indeed Pareto-optimal.
\begin{figure*}[thb]
\begin{center}
\includegraphics[width=\linewidth]{optimal-trellis}
\end{center}
\caption{Irregular trellis graph designed for guided search-space exploration for the 5-node WMHN of Fig.~\ref{fig:network-topology} using the optimal dynamic programming framework, encapsulated by Definition~\ref{def:route_generation} and Proposition~\ref{prop:route_optimality}. Note that the UVs of each route are presented in Table~\ref{tab:tut_uvs}.\label{fig:optimal-trellis}}
\end{figure*}\hfill\nopagebreak
\begin{table*}[h]
\begin{center}
\caption{Utility Vectors of the legitimate routes and of their respective sub-routes for the 5-node WMHN topology of Fig.~\ref{fig:network-topology}.\label{tab:tut_uvs}}
\begin{tabular}{c|c|c|c|c}
\hline
Route $x$ & Route UV & Sub-route UV & Optimal Route & Optimal Sub-route \\
\hline
$\{1~5\}$ & $[4.52~10^{-4},74.15,1]$ & $[\infty,\infty,\infty]$ & \checkmark & \checkmark\\
\hline
$\{1~2~5\}$ & $[2.52~10^{-4},73.10,2]$ & $[2.52~10^{-4},73.10,1]$ & \checkmark & \checkmark\\
$\{1~3~5\}$ & $[2.35~10^{-4},70.89,2]$ & $[3.13~10^{-5},57.30,1]$ & \checkmark & \checkmark \\
$\{1~4~5\}$ & $[1.43~10^{-2},71.76,2]$ & $[1.41~10^{-2},67.50,1]$ & \checkmark & \checkmark \\
\hline
$\{1~2~3~5\}$ & $[9.49~10^{-4},76.09,3]$ & $[7.45~10^{-4},74.61,2]$ & &\\
$\{1~2~4~5\}$ & $[1.91~10^{-2},75.72,3]$ & $[1.89~10^{-2},74.46,2]$ & &\\
$\{1~3~2~5\}$ & $[1.36~10^{-4},69.55,3]$ & $[1.36~10^{-4},69.54,2]$ & \checkmark & \checkmark\\
$\{1~3~4~5\}$ & $[1.29~10^{-2},71.74,3]$ & $[1.28~10^{-2},67.46,2]$ & & \checkmark \\
$\{1~4~2~5\}$ & $[1.42~10^{-2},71.19,3]$ & $[1.42~10^{-2},71.19,2]$ & & \checkmark \\
$\{1~4~3~5\}$ & $[1.46~10^{-2},73.50,3]$ & $[1.44~10^{-2},70.27,2]$ & & \checkmark \\
\hline
$\{1~2~3~4~5\}$ & $[1.36~10^{-2},76.36,4]$ & $[1.34~10^{-2},75.30,3]$ & &\\
$\{1~2~4~3~5\}$ & $[1.94~10^{-2},76.50,4]$ & $[1.92~10^{-2},75.18,3]$ & &\\
$\{1~3~2~4~5\}$ & $[1.90~10^{-2},74.13,4]$ & $[1.88~10^{-2},72.18,3]$ & &\\
$\{1~3~4~2~5\}$ & $[1.28~10^{-2},71.18,4]$ & $[1.28~10^{-2},71.17,3]$ & &\\
$\{1~4~2~3~5\}$ & $[1.49~10^{-2},75.23,4]$ & $[1.47~10^{-2},73.35,3]$ & &\\
$\{1~4~3~2~5\}$ & $[1.45~10^{-2},72.82,4]$ & $[1.45~10^{-2},72.81,3]$ & &\\
\hline
\end{tabular}
\end{center}
\end{table*}
This specific property can be exploited for the sake of reducing the search-space size required for identifying the entire OPF. To elaborate further, we can devise an \emph{irregular trellis graph}~\cite{wenbo2015irrtrellis} for the sake of guiding the search-space exploration, as portrayed in Fig.~\ref{fig:optimal-trellis} for the 5-node WMHN of Fig.~\ref{fig:network-topology}. Note, however, that this specific trellis graph is different from those used for channel coding in \cite{wenbo2015irrtrellis}, since in the latter there are only as many legitimate paths as legitimate symbols, whereas here all transitions represent legitimate routes. Additionally, we rely on Definition~\ref{def:route_generation} for the sake of determining the possible trellis-node transitions. For instance, observe in Fig.~\ref{fig:optimal-trellis} that a trellis-path emerging from the trellis-node associated with the generator route $\{1\rightarrow 2 \rightarrow 5\}$ is only capable of visiting the nodes associated with the routes $\{1\rightarrow 2 \rightarrow 3 \rightarrow 5\}$ and $\{1\rightarrow 2 \rightarrow 4 \rightarrow 5\}$, since a single RN is inserted before the DN into the generator route based on Definition~\ref{def:route_generation}. Moving on to the next trellis stages, during the $i$-th trellis stage the following three steps are carried out:
\subsubsection{Generated Routes} The set $S^{\text{gen}}_{(i)}$ of generated routes is constructed based on the set $S^{\text{surv}}_{(i-1)}$ of surviving routes of the previous stage, relying on Definition~\ref{def:route_generation}.
\subsubsection{Pareto-Optimal Routes} The set $S^{\text{OPF}}_{(i)}$ of Pareto-optimal routes is identified based on the following optimization problem:
\begin{equation}\label{eq:sopf_i_optim}
\begin{array}{rl}
S^{\text{OPF}}_{(i)} =& \mathop{\text{argmin}}\limits_{x\in{S^{\text{gen}}_{(i)}\cup S^{\text{OPF}}_{(i-1)}}}\{\mathbf{f}(x)\},\\
{}& s.t.~ \nexists j\in{S^{\text{gen}}_{(i)}\cup S^{\text{OPF}}_{(i-1)}}: \mathbf{f}(j)\succ\mathbf{f}(x).
\end{array}
\end{equation}
Note that the optimization problem of Eq.~\eqref{eq:sopf_i_optim} considers the joint search space constituted by all the routes $S^{\text{gen}}_{(i)}$ generated at the $i$-th trellis stage as well as by the Pareto-optimal routes $S^{\text{OPF}}_{(i-1)}$ of the previous stage. Using recursion, we can readily observe that the set $S^{\text{OPF}}_{(i-1)}$ of Pareto-optimal routes of the previous stage contains the Pareto-optimal routes across all stages up to the $(i-1)$-st one. This property is beneficial for our dynamic programming framework, since it eliminates the need for backward propagation, thus only requiring a feed-forward method for the identification of the entire OPF.
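To make the filtering of Eq.~\eqref{eq:sopf_i_optim} concrete, a minimal classical Python sketch of the Pareto-front update over the joint set is given below; the route labels and UVs are hypothetical placeholders and all UFs are assumed to be minimized:

```python
def dominates(fa, fb):
    """Strict Pareto dominance (minimization): fa is no worse than fb in
    every UF and strictly better in at least one."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def pareto_front(candidates, f):
    """S_OPF(i): the routes of the joint set S_gen(i) U S_OPF(i-1) that are
    not dominated by any other route of that set."""
    return [x for x in candidates
            if not any(dominates(f[j], f[x]) for j in candidates if j != x)]

# Hypothetical joint set with two-component UVs (e.g. delay, power):
f = {"1-5": (4.0, 1.0), "1-2-5": (2.0, 3.0), "1-2-3-5": (5.0, 4.0)}
front = pareto_front(list(f), f)  # the route "1-2-3-5" is dominated by "1-5"
```

Note that this exhaustive filter imposes a quadratic number of dominance comparisons; the quantum-assisted search of Section~\ref{sec:eqpo} is employed precisely for mitigating this cost.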
\subsubsection{Surviving Routes}The set $S^{\text{surv}}_{(i)}$ of surviving routes is identified based on the following optimization problem:
\begin{equation}\label{eq:ssurv_i_optim}
\begin{array}{rl}
S^{\text{surv}}_{(i)} =& \mathop{\text{argmin}}\limits_{x\in{S^{\text{gen}}_{(i)}}}\{\mathbf{f}(x)\},\\
{}& s.t.~ \nexists j\in{S^{\text{gen}}_{(i)}\cup S^{\text{OPF}}_{(i-1)}}: \mathbf{f}(j)\succeq\mathbf{f}(x^\prime),
\end{array}
\end{equation}
where $x^\prime$ corresponds to the particular sub-route of $x$, having all the links of $x$, except for the last hop, as detailed in Proposition~\ref{prop:route_optimality}.
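The sub-route test of Eq.~\eqref{eq:ssurv_i_optim} may likewise be sketched classically; in the snippet below the routes, their UVs \texttt{f} and their sub-route UVs \texttt{f\_sub} are hypothetical placeholders, the UFs are assumed to be minimized, and $\succeq$ is read as weak dominance:

```python
def weakly_dominates(fa, fb):
    """Weak Pareto dominance (minimization): fa no worse than fb in every
    UF, with the two UVs differing in at least one component."""
    return fa != fb and all(a <= b for a, b in zip(fa, fb))

def surviving_routes(gen, prev_opf, f, f_sub):
    """S_surv(i): generated routes whose sub-route (the route without its
    last hop) is not weakly dominated by any route of the joint set."""
    pool = list(gen) + [r for r in prev_opf if r not in gen]
    return [x for x in gen
            if not any(weakly_dominates(f[j], f_sub[x]) for j in pool)]

# Hypothetical stage: two generated routes plus the previously found OPF.
f = {"1-2-5": (2.0, 3.0), "1-3-5": (3.0, 2.0), "1-5": (4.0, 1.0)}
f_sub = {"1-2-5": (1.0, 2.0), "1-3-5": (5.0, 5.0)}  # UVs of sub-routes
surv = surviving_routes(["1-2-5", "1-3-5"], ["1-5"], f, f_sub)
```

Here the route \texttt{"1-3-5"} is pruned, since its sub-route's UV is weakly dominated and hence, by Proposition~\ref{prop:route_optimality}, it cannot generate Pareto-optimal routes in later stages.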
The optimization process proceeds to the next trellis stage as long as there exist surviving routes, i.e. as long as we have $S^{\text{surv}}_{(i)}\neq \varnothing$, and the maximum affordable number of trellis stages - which is equal to the maximum number of hops of the legitimate routes - has not been exhausted. Otherwise, the optimization process terminates by exporting the hitherto identified OPF.
Let us now proceed by elaborating on the route exploration process using the 5-node WMHN example of Fig.~\ref{fig:network-topology}. Its respective trellis is portrayed in Fig.~\ref{fig:optimal-trellis}, while the UVs of the routes and of their respective sub-routes are shown in Table~\ref{tab:tut_uvs}. Initially, the optimization process considers the set $S^{\text{gen}}_{(1)}$ of routes, which is constituted by all the legitimate routes having either a single hop or two hops, namely the routes $\{1\rightarrow 5\}$, $\{1\rightarrow 2 \rightarrow 5\}$, $\{1\rightarrow 3 \rightarrow 5\}$ and $\{1\rightarrow 4 \rightarrow 5\}$, as portrayed in the $1^{st}$ trellis stage of Fig.~\ref{fig:optimal-trellis}. Based on Table~\ref{tab:tut_uvs}, all the routes considered are Pareto-optimal and thus the respective set is equal to $S^{\text{OPF}}_{(1)}=S^{\text{gen}}_{(1)}$. Subsequently, the set of surviving routes is constructed. Explicitly, the direct route is not considered in this case, since its inclusion would lead to the generation of routes that have already been processed. Observe in Table~\ref{tab:tut_uvs} that all the routes constituted by 2 hops have Pareto-optimal sub-routes and hence the set of surviving routes becomes $S^{\text{surv}}_{(1)} = \left[ \{1\rightarrow 2 \rightarrow 5\}, \{1\rightarrow 3 \rightarrow 5\},\{1\rightarrow 4 \rightarrow 5\}\right]$.
After the identification of the set of surviving routes $S^{\text{surv}}_{(1)}$, the set $S^{\text{gen}}_{(2)}$ of routes generated in the 2$^{nd}$ trellis stage is created by including an appropriate RN right before the DN, as annotated with the aid of black arrows in Fig.~\ref{fig:optimal-trellis}. Naturally, since all the routes constituted by two hops have been identified as being Pareto-optimal, the entire set of routes having three hops is visited by the trellis-paths in the 2$^{nd}$ trellis stage, as seen in Fig.~\ref{fig:optimal-trellis}. The set $S^{\text{OPF}}_{(1)}$ of Pareto-optimal routes of the $1^{st}$ trellis stage is then concatenated to the set $S^{\text{gen}}_{(2)}$ of the routes generated in the $2^{nd}$ trellis stage and the set $S^{\text{OPF}}_{(2)}$ of Pareto-optimal routes is identified. After this operation, the latter is set to $S^{\text{OPF}}_{(2)}=\left[ \{1\rightarrow 5\},\{1\rightarrow 2 \rightarrow 5\}, \{1\rightarrow 3 \rightarrow 5\},\{1\rightarrow 4 \rightarrow 5\}\right.$, $\left.\{1\rightarrow 3 \rightarrow 2 \rightarrow 5\}\right]$, hence including the route $\{1\rightarrow 3 \rightarrow 2 \rightarrow 5\}$ in the OPF, as denoted with the aid of the bold rectangle in Fig.~\ref{fig:optimal-trellis}. The surviving routes of the $2^{nd}$ trellis stage are then identified using the optimization problem of Eq.~\eqref{eq:ssurv_i_optim}. Explicitly, they constitute the set $S^{\text{surv}}_{(2)}=\left[\{1\rightarrow 3 \rightarrow 2 \rightarrow 5\},\{1\rightarrow 3 \rightarrow 4 \rightarrow 5\},\{1\rightarrow 4 \rightarrow 2 \rightarrow 5\},\{1\rightarrow 4 \rightarrow 3 \rightarrow 5\}\right]$, as may be verified from Table~\ref{tab:tut_uvs} and as denoted with the aid of the gray-filled nodes of Fig.~\ref{fig:optimal-trellis}.
In the presence of surviving routes, the optimization process proceeds with the final trellis stage; however, in this case the routes $\{1\rightarrow 2 \rightarrow 3 \rightarrow 4 \rightarrow 5\}$ and $\{1\rightarrow 2 \rightarrow 4 \rightarrow 3 \rightarrow 5\}$ are not considered, since their generators do not have Pareto-optimal sub-routes. This is portrayed in Fig.~\ref{fig:optimal-trellis} with the aid both of the gray dashed arrows and of the gray dashed nodes. Hence, the set $S^{\text{gen}}_{(3)}=\left[\{1\rightarrow 3 \rightarrow 2 \rightarrow 4 \rightarrow 5\},\{1\rightarrow 3 \rightarrow 4 \rightarrow 2 \rightarrow 5\},\{1\rightarrow 4 \rightarrow 2 \rightarrow 3 \rightarrow 5\},\{1\rightarrow 4 \rightarrow 3 \rightarrow 2 \rightarrow 5\}\right]$ is generated. The set $S^{\text{OPF}}_{(2)}$ is then concatenated to that of the routes generated in the final trellis stage and the final set $S^{\text{OPF}}_{(3)}$ of Pareto-optimal routes is identified. Explicitly, the latter is identical to the respective set of the $2^{nd}$ trellis stage, since none of the routes generated in the final stage is Pareto-optimal, as verified by Table~\ref{tab:tut_uvs}. Additionally, since we have reached the final stage, the set of surviving routes is not identified and the process exits by exporting the hitherto observed OPF.
In a nutshell, this route exploration process succeeds in transforming the multi-objective routing problem into a series of significantly less complex sub-problems, each corresponding to a single trellis stage, hence inheriting the structure of dynamic programming problems~\cite{dasgupta2006algorithms}. Note that the metric accumulation, which is typical of dynamic programming problems, corresponds here to the update of the set of Pareto-optimal routes. Note also that this dynamic programming framework is optimal in terms of its efficacy in identifying the entire OPF, just like the exhaustive search method. Primarily, this is a benefit of Proposition~\ref{prop:route_optimality}, which excludes the routes that are incapable of generating Pareto-optimal routes during the next trellis stages.
\section{Evolutionary Quantum Pareto Optimization}\label{sec:eqpo}
In Section~\ref{sec:mordp}, we introduced a novel dynamic programming framework for the sake of guiding the search process in identifying the Pareto-optimal routes, thus effectively reducing the complexity. In this section, we exploit this framework and further improve it with the aid of our EQPO algorithm. More specifically, we have relaxed the dynamic programming framework of Section~\ref{sec:mordp} for the sake of striking a better accuracy versus complexity trade-off. Additionally, we have improved the quantum-assisted process of \cite{alanis2015ndqio} for identifying the Pareto-optimal routes, so that it becomes capable of ``remembering'' the OPF identified in the previous trellis stages. We will refer to this improved quantum-assisted process as the \emph{Preinitialized-NDQIO}~(P-NDQIO) algorithm. In this context, the P-NDQIO and the EQPO algorithms are presented in Sections~\ref{subsec:preinitNDQIO} and \ref{subsec:EQPOoverview}, respectively. Let us now proceed by presenting the P-NDQIO algorithm.
\subsection{Preinitialized NDQIO algorithm}\label{subsec:preinitNDQIO}
The P-NDQIO algorithm, which is formally stated in Alg.~\ref{alg:pndqio}, relies on the technique of \emph{memoization}~\cite{dasgupta2006algorithms}, thus providing a significant complexity reduction by remembering the OPF identified across the previous trellis stages and propagating it to the next ones. Its memoization is performed in Step~1 of Alg.~\ref{alg:pndqio}, where the OPF of the current trellis stage is initialized to that of the previous stage. Subsequently, the P-NDQIO algorithm performs its iterations, looking for Pareto-optimal routes in Steps~2-14 of Alg.~\ref{alg:pndqio}.
\begin{algorithm}[h]
\caption{Preinitialized Non-Dominated Quantum Iterative Optimization Algorithm (P-NDQIO)} \label{alg:pndqio}
\begin{algorithmic}[1]
\STATE Initialize the OPF to $S^\text{OPF}_{(i)}\leftarrow S^\text{OPF}_{(i-1)}$.
\REPEAT
\STATE $\mathcal{T} \leftarrow 0$.
\STATE Invoke the BBHT-QSA of \cite[Alg.~1]{alanis2015ndqio} searching for routes in $S^\text{gen}_{(i)}$ that are not dominated by any of the routes of $S^\text{OPF}_{(i)}$ and output $x_s$.
\IF {$\mathbf{f}(j)\nsucc \mathbf{f}(x_s),~\forall j\in S^\text{OPF}_{(i)}$}
\REPEAT
\STATE Set $j \leftarrow x_s$.
\STATE Invoke the BBHT-QSA of \cite[Alg.~1]{alanis2015ndqio} searching for routes in $S^\text{gen}_{(i)}$ that dominate the route $j$ and output $x_s$.
\UNTIL {$\mathbf{f}(x_s)\nsucc \mathbf{f}(j)$}.
\STATE Discard the routes of $S^\text{OPF}_{(i)}$ that are dominated by the route $j$ and append the route $j$ to the OPF.
\ELSE
\STATE Set $\mathcal{T} \leftarrow \mathcal{T} + 1$.
\ENDIF
\UNTIL {$\mathcal{T}=2$}
\STATE Export the $S^\text{OPF}_{(i)}$ and exit.
\end{algorithmic}
\end{algorithm}
During each iteration, which results in the identification of a single Pareto-optimal route, the P-NDQIO algorithm first invokes the so-called \emph{Boyer-Brassard-Hoyer-Tapp Quantum Search Algorithm}~(BBHT-QSA)~\cite{PROP:PROP493} for the sake of identifying routes that are not dominated by any of the routes belonging to the hitherto identified OPF. We refer to this process as the \emph{Backward BBHT-QSA}~(BW-BBHT-QSA) process \cite{alanis2015ndqio}. If an invalid route-solution - i.e. a route that is indeed dominated by the OPF identified so far - is output by the BBHT-QSA, the P-NDQIO algorithm concludes that the entire OPF has been identified. However, since the BBHT-QSA exhibits a low, yet non-zero, probability of failing to identify a valid solution\footnote{We define a valid route-solution as a route that satisfies the condition in Step~5 of Alg.~\ref{alg:pndqio}.}, the BW-BBHT-QSA step is repeated for an additional iteration in order to ensure the detection of the entire OPF, as seen in Steps~12 and 14 of Alg.~\ref{alg:pndqio}. Otherwise, should a valid route-solution be identified by the BW-BBHT-QSA step, this specific route is classified as ``potentially'' Pareto-optimal. Consequently, the P-NDQIO algorithm invokes the so-called \emph{BBHT-QSA chain process}~\cite{alanis2014ndqo,alanis2015ndqio} in Steps~6-9 of Alg.~\ref{alg:pndqio}. Explicitly, the output of the BW-BBHT-QSA is set as the initial reference solution in Step~7 of Alg.~\ref{alg:pndqio} and a BBHT-QSA process is activated in Step~8 of Alg.~\ref{alg:pndqio}, which searches for routes that dominate the reference one. If a route that dominates the reference one is found, the reference route is updated to the BBHT-QSA output and a new BBHT-QSA process is activated. Naturally, the activation of the BBHT-QSA process is repeated until the BBHT-QSA outputs a route that does not dominate the reference route, thus indicating that the reference route is Pareto-optimal.
Subsequently, the Pareto-optimal routes of the set $S^\text{OPF}_{(i)}$ are checked as to whether they are dominated by the reference route, so that they are removed and the reference route is then included in $S^\text{OPF}_{(i)}$, as seen in Step~10 of Alg.~\ref{alg:pndqio}. Explicitly, this check, which is referred to as the \emph{OPF Self-Repair}~(OPF-SR) process in \cite{alanis2015ndqio}, provides the EQPO algorithm with resilience against including sub-optimal routes in the early trellis stages due to the limited number of generated routes, hence preventing their propagation to the later stages.
Both the BW-BBHT-QSA process and the BBHT-QSA chains are parts of the original NDQIO algorithm; thus, the P-NDQIO algorithm employs quantum circuits that are identical to those of the NDQIO algorithm. Therefore, motivated readers may refer to \cite{alanis2015ndqio} for extended discussions.
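For intuition, the control flow of Alg.~\ref{alg:pndqio} may be emulated classically by replacing the two BBHT-QSA searches with linear scans; the sketch below is purely illustrative, since it forfeits the quantum speed-up, and the route labels and UVs are hypothetical:

```python
def dominates(fa, fb):
    """Strict Pareto dominance for minimized UFs."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def p_ndqio_classical(gen, prev_opf, f):
    """Classical stand-in for the P-NDQIO algorithm: the backward search and
    the BBHT-QSA chain are realized as deterministic linear scans."""
    opf = list(prev_opf)                              # Step 1: memoization
    while True:
        # Backward step: a generated route not dominated by the current OPF
        x = next((r for r in gen if r not in opf
                  and not any(dominates(f[j], f[r]) for j in opf)), None)
        if x is None:
            break                                     # the OPF is complete
        # Chain step: climb towards a route dominating the reference one
        improved = True
        while improved:
            improved = False
            for r in gen:
                if dominates(f[r], f[x]):
                    x, improved = r, True
                    break
        # OPF self-repair: discard dominated members, append the new route
        opf = [j for j in opf if not dominates(f[x], f[j])]
        opf.append(x)
    return opf

f = {"A": (3.0, 3.0), "B": (2.0, 4.0), "C": (1.0, 5.0), "D": (4.0, 4.0)}
opf = p_ndqio_classical(["A", "B", "C", "D"], [], f)  # "D" is dominated by "A"
```

Note that the deterministic scans above never fail, whereas the probabilistic BBHT-QSA motivates the repeated backward step governed by the counter $\mathcal{T}$ in Alg.~\ref{alg:pndqio}.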
\subsection{EQPO algorithm}\label{subsec:EQPOoverview}
The dynamic programming framework introduced in Section~\ref{sec:mordp}, albeit optimal in terms of its capability of identifying the entire OPF, may impose an excessive complexity, quantified in terms of the number of dominance comparisons required for solving the optimization problem of Eq.~\eqref{eq:ssurv_i_optim}. To elaborate further, as the number of UFs considered increases, the number of surviving routes increases owing to the differences among the UFs. This in turn leads to a proliferation of the routes generated per trellis stage. However, only a relatively small fraction of the surviving route-population eventually leads to the generation of Pareto-optimal routes in the next trellis stages. Therefore, the employment of the optimal dynamic programming framework of Section~\ref{sec:mordp} imposes a significant complexity overhead for the sake of ensuring the detection of the entire set of Pareto-optimal routes. Consequently, a performance versus complexity trade-off has to be struck for mitigating this complexity overhead. In fact, this specific balance is struck in the context of the EQPO algorithm by jointly relying on Relaxations~\ref{rel:relaxed_generators} and \ref{rel:relaxed_generation}.
\begin{rel}\label{rel:relaxed_generators}
A route is only allowed to generate routes, based on Definition~\ref{def:route_generation}, if it is Pareto-optimal. This is formally formulated as follows:
\begin{equation}\label{eq:rel_surv}
S^\text{surv}_{(i)}\triangleq S^\text{OPF}_{(i)}-S^\text{OPF}_{(i-1)}.
\end{equation}
\end{rel}
Relaxation~\ref{rel:relaxed_generators} restricts the set $S^\text{surv}_{(i)}$ of surviving routes at the end of the $i$-th trellis stage to the set of the newly-discovered Pareto-optimal routes of this specific trellis stage. This relaxation provides a beneficial complexity reduction, since it makes the identification both of the set $S^\text{surv}_{(i)}$ of surviving routes and of the set $S^\text{OPF}_{(i)}$ of Pareto-optimal routes possible by simply solving the optimization problem of Eq.~(\ref{eq:sopf_i_optim}). Explicitly, Proposition~\ref{prop:route_optimality} does not conflict with Relaxation~\ref{rel:relaxed_generators}, since the Pareto-optimal routes are guaranteed to have Pareto-optimal sub-routes. This is justified by the fact that the sub-routes dominate their parent routes owing to the absence of the final hop, since appending a hop increases all the UFs considered. Thus, since there exists no route from the SN to the DN dominating the route identified, there exist no routes dominating the respective sub-route either. However, the complexity reduction offered by Relaxation~\ref{rel:relaxed_generators} comes at the price of a reduced accuracy, since there do exist sub-optimal routes having Pareto-optimal sub-routes, which might potentially lead to the generation of Pareto-optimal routes in the next trellis stages. This specific limitation is mitigated with the aid of Relaxation~\ref{rel:relaxed_generation}.
\begin{rel}\label{rel:relaxed_generation}
For the sake of facilitating the identification of all Pareto-optimal routes, Definition~\ref{def:route_generation} is relaxed as follows: a specific route $x$ is said to generate another route $x_g^{(j,k)}$ by inserting the single RN $R_j$ between the $k$-th and the $(k+1)$-st nodes.
\end{rel}
Relaxation~\ref{rel:relaxed_generation} extends the set $S^\text{gen}_{(i)}$ of generated routes, which are created by the set $S^\text{surv}_{(i-1)}$ of surviving routes of the previous trellis stage. This is realized by replacing a single direct link established either by two RNs or by an RN and the DN with an indirect link involving an appropriate RN as an intermediate relay. Naturally, this specific modification enhances the heuristic accuracy of the EQPO algorithm, since it allows the generation of additional routes, thus acting similarly to the \emph{mutation operation} of \emph{genetic algorithms} \cite{Deb:NSGA_2}.
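A brief Python sketch of the relaxed route-generation of Relaxation~\ref{rel:relaxed_generation}, with routes stored as node tuples; the node indices follow the 5-node example of Fig.~\ref{fig:network-topology}:

```python
def generate_relaxed(route, relay_nodes):
    """Relaxation 2: create every route x_g^(j,k) obtained by inserting the
    relay R_j between the k-th and the (k+1)-st nodes of the route x."""
    generated = []
    for rj in relay_nodes:
        if rj in route:
            continue                       # keep the generated routes loop-free
        for k in range(1, len(route)):     # every gap between consecutive nodes
            generated.append(route[:k] + (rj,) + route[k:])
    return generated

# Route {1->2->5} of the 5-node WMHN, with relays 2, 3 and 4:
gen = generate_relaxed((1, 2, 5), [2, 3, 4])
# yields {1->3->2->5}, {1->2->3->5}, {1->4->2->5} and {1->2->4->5}
```

Observe that the four generated routes match the example discussed for the trellis of Fig.~\ref{fig:eqpo-trellis}, whereas Definition~\ref{def:route_generation} would only allow the two routes having the relay inserted right before the DN.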
\begin{algorithm}[htb]
\caption{Evolutionary Quantum Pareto Optimization (EQPO) Algorithm\label{alg:eqpo}.}
\begin{algorithmic}[1]
\STATE Set $S^{\text{gen}}_{(0)} \leftarrow \{SN\rightarrow DN\}$, $S^\text{OPF}_{(0)}\leftarrow S^{\text{gen}}_{(0)}$, $S^\text{surv}_{(0)}\leftarrow S^{\text{gen}}_{(0)}$, $i\leftarrow 0$.
\REPEAT
\STATE Set $i\leftarrow i+1$.
\STATE Generate the set of routes $S^{\text{gen}}_{(i)}$ from the set $S^\text{surv}_{(i-1)}$ based on Relaxation~\ref{rel:relaxed_generation} by appropriately inserting a single RN between two intermediate nodes.
\STATE Set $S^{\text{gen}}_{(i)}\leftarrow S^{\text{gen}}_{(i)}\cup S^\text{OPF}_{(i-1)}$.
\STATE Invoke the P-NDQIO algorithm of Alg.~\ref{alg:pndqio} on the set $S^{\text{gen}}_{(i)}$, initializing the identified OPF to $S^\text{OPF}_{(i)}\leftarrow S^\text{OPF}_{(i-1)}$.
\STATE Set $S^\text{surv}_{(i)}\leftarrow S^\text{OPF}_{(i)} - S^\text{OPF}_{(i-1)}$.
\UNTIL{$\abs{S^\text{surv}_{(i)}}=0$ \OR $i=N_\text{nodes}-1$}
\STATE Export the OPF $S^\text{OPF}_{(i)}$ and terminate.
\end{algorithmic}
\end{algorithm}
Let us now proceed by elaborating on the specifics of the EQPO algorithm, which is formally presented in Alg.~\ref{alg:eqpo}. To elaborate further, in Step~1 of Alg.~\ref{alg:eqpo} the EQPO algorithm initializes the set of generated routes, the set of Pareto-optimal routes as well as the set of surviving routes to the direct route, i.e. to the route $\{SN\rightarrow DN\}$. It then proceeds with the trellis stages using Steps~2-8 of Alg.~\ref{alg:eqpo}. During each trellis stage, the set $S^{\text{gen}}_{(i)}$ of generated routes is constructed in Step~4 of Alg.~\ref{alg:eqpo} relying on Relaxation~\ref{rel:relaxed_generation}. Applying Relaxations~\ref{rel:relaxed_generators} and \ref{rel:relaxed_generation} to the trellis of Fig.~\ref{fig:optimal-trellis} results in the trellis of Fig.~\ref{fig:eqpo-trellis}.
This set is then concatenated with the set $S^{\text{OPF}}_{(i-1)}$ of Pareto-optimal routes identified in the previous stage. Subsequently, the P-NDQIO algorithm is invoked in Step~6 of Alg.~\ref{alg:eqpo} for the sake of identifying the set $S^{\text{OPF}}_{(i)}$ of Pareto-optimal routes from the set $S^{\text{gen}}_{(i)}$. Then, the set $S^{\text{surv}}_{(i)}$ of surviving routes is determined in Step~7 of Alg.~\ref{alg:eqpo}, relying on Relaxation~\ref{rel:relaxed_generators}.
More specifically, the steps carried out as part of the EQPO algorithm's dynamic programming framework during a single trellis stage are listed as follows:
\begin{figure*}[thb]
\begin{center}
\includegraphics[width=\linewidth]{eqpo-trellis}
\end{center}
\caption{Irregular trellis graph designed for guided search-space exploration for the 5-node WMHN of Fig.~\ref{fig:network-topology} using the EQPO algorithm's dynamic programming framework, encapsulated by Relaxations~\ref{rel:relaxed_generators} and \ref{rel:relaxed_generation}. Note that the UVs of each route are presented in Table~\ref{tab:tut_uvs}.\label{fig:eqpo-trellis}}
\end{figure*}
\subsubsection{Route Generation} The EQPO algorithm creates the set $S^\text{gen}_{(i)}$ of routes based on the set $S^\text{surv}_{(i-1)}$ of surviving routes of the previous trellis stage using Relaxation~\ref{rel:relaxed_generation}, as seen in Step~4 of Alg.~\ref{alg:eqpo}. For instance, observe in Fig.~\ref{fig:eqpo-trellis} that the route $\{1\rightarrow 2 \rightarrow 5\}$ is capable of generating 4 routes, namely the routes $\{1\rightarrow 2 \rightarrow 3 \rightarrow 5\}$, $\{1\rightarrow 2 \rightarrow 4 \rightarrow 5\}$, $\{1\rightarrow 3 \rightarrow 2 \rightarrow 5\}$ and $\{1\rightarrow 4 \rightarrow 2 \rightarrow 5\}$. By contrast, Definition~\ref{def:route_generation} allows the generation of only the first two of these routes, as portrayed in Fig.~\ref{fig:optimal-trellis}. Additionally, in contrast to the optimal dynamic programming framework of Section~\ref{sec:mordp}, each route of the current trellis stage in Fig.~\ref{fig:eqpo-trellis} can be generated by multiple surviving routes of the previous stage. This specific feature of Relaxation~\ref{rel:relaxed_generation} enhances the heuristic accuracy, since it enables the generation of potentially Pareto-optimal routes, which have sub-optimal generator routes and would hence be disregarded based on Relaxation~\ref{rel:relaxed_generators}.
\subsubsection{Pareto-Optimal and Surviving Routes} Following the construction of the set $S^\text{gen}_{(i)}$ of the routes generated, the EQPO algorithm invokes the P-NDQIO algorithm of Section~\ref{subsec:preinitNDQIO} in Step~6 of Alg.~\ref{alg:eqpo} in order to search for new Pareto-optimal routes belonging to the set $S^\text{gen}_{(i)}$. However, based on Definition~\ref{dfn:po}, the optimality of the route depends on the set of eligible routes considered. Consequently, the OPF $S^\text{OPF}_{(i-1)}$ hitherto identified across all the previous trellis stages has to be concatenated with $S^\text{gen}_{(i)}$ in Step~5 of Alg.~\ref{alg:eqpo}, thus ensuring that the routes identified as optimal by the P-NDQIO algorithm are indeed Pareto-optimal with respect to the entire set of legitimate routes. Note that the set $S^\text{OPF}_{(i)}$ contains the Pareto-optimal routes across all trellis stages all the way up to the $i$-th one, as in the optimal dynamic programming framework of Section~\ref{sec:mordp}. Consequently, using Relaxation~\ref{rel:relaxed_generators} the Pareto-optimal routes identified at the current trellis stage are considered as surviving routes. Note that the Pareto-optimal routes identified throughout the previous stages are not taken into account, since they would generate routes already processed during the previous trellis stages.
The EQPO algorithm continues processing the trellis stages either until it reaches a trellis stage having no surviving paths or when the maximum affordable number of trellis stages is exhausted, in a similar fashion to the optimal dynamic programming framework of Section~\ref{sec:mordp}.
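Putting the above steps together, the driver loop of Alg.~\ref{alg:eqpo} may be sketched classically as follows; the P-NDQIO invocation is replaced here by an exhaustive Pareto filter purely for illustration, and the UV dictionary \texttt{f} is a hypothetical placeholder:

```python
def dominates(fa, fb):
    """Strict Pareto dominance for minimized UFs."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def eqpo_classical(sn, dn, relays, f, max_stages):
    """Classical sketch of Alg. EQPO: generate routes from the survivors
    (Relaxation 2), keep the Pareto front of the joint set, and treat the
    newly found Pareto-optimal routes as the survivors (Relaxation 1)."""
    opf = [(sn, dn)]                      # Step 1: start from the direct route
    surv = [(sn, dn)]
    for _ in range(max_stages):
        gen = []
        for route in surv:                # Step 4: relaxed route generation
            for rj in relays:
                if rj in route:
                    continue
                for k in range(1, len(route)):
                    cand = route[:k] + (rj,) + route[k:]
                    if cand not in gen:
                        gen.append(cand)
        pool = gen + [r for r in opf if r not in gen]       # Step 5
        new_opf = [x for x in pool                          # Step 6
                   if not any(dominates(f[j], f[x]) for j in pool if j != x)]
        surv = [r for r in new_opf if r not in opf]         # Step 7
        opf = new_opf
        if not surv:                      # Step 8: no surviving routes left
            break
    return opf

# Minimal 3-node example: SN=1, DN=3, a single relay 2.
f = {(1, 3): (2.0, 1.0), (1, 2, 3): (1.0, 3.0)}
opf = eqpo_classical(1, 3, [2], f, max_stages=2)  # both routes survive
```

On a quantum computer, the Pareto filter of Step~6 is carried out by the P-NDQIO algorithm of Alg.~\ref{alg:pndqio} instead of the exhaustive scan used above.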
Let us now highlight the differences between the trellises of Figs.~\ref{fig:optimal-trellis} and \ref{fig:eqpo-trellis}, considering the 5-node example of Fig.~\ref{fig:network-topology}. Note that the same annotation is used in Fig.~\ref{fig:eqpo-trellis} as in Fig.~\ref{fig:optimal-trellis}. Explicitly, based on Eq.~\eqref{eq:rel_surv}, the EQPO algorithm classifies the newly-found Pareto-optimal routes of the current stage as ``Pareto-Optimal'' and, at the same time, as ``Visited \& Surviving''. Hence, in contrast to Fig.~\ref{fig:optimal-trellis}, these two classes coincide in Fig.~\ref{fig:eqpo-trellis}. Similarly to the optimal dynamic programming framework of Section~\ref{sec:mordp}, the EQPO algorithm initializes the set $S^{\text{gen}}_{(1)}$ of generated routes to the set of the legitimate routes having either a single hop or two hops, as portrayed in the $1^{st}$ trellis stage of Fig.~\ref{fig:eqpo-trellis}. Based on Table~\ref{tab:tut_uvs}, all the routes having two hops are Pareto-optimal and thus the EQPO algorithm classifies them as the surviving routes of the $1^{st}$ trellis stage, as seen in Fig.~\ref{fig:eqpo-trellis}. Similarly to Fig.~\ref{fig:optimal-trellis}, the EQPO algorithm's trellis paths visit the entire set of routes having three hops and then the algorithm identifies the route $\{1\rightarrow 3 \rightarrow 2 \rightarrow 5\}$ as Pareto-optimal with the aid of the P-NDQIO algorithm. Consequently, this specific route is deemed to be the sole surviving route in Fig.~\ref{fig:eqpo-trellis}. This is in contrast to Fig.~\ref{fig:optimal-trellis}, where three more routes have been identified as surviving ones. Recall from Fig.~\ref{fig:optimal-trellis} that these routes do not lead to Pareto-optimal routes in the last trellis stage. This in turn results in the EQPO algorithm visiting one route less in the $3^{rd}$ trellis stage, i.e. not considering the sub-optimal route $\{1\rightarrow 4 \rightarrow 2 \rightarrow 3 \rightarrow 5\}$ as potentially Pareto-optimal.
\section{Complexity versus Accuracy Discussions}\label{sec:accVScomp}
In this section, we will characterize the complexity imposed by the EQPO algorithm presented in Alg.~\ref{alg:eqpo} and evaluate its heuristic accuracy versus the complexity invested. Additionally, note that since we had no quantum computer at our disposal, the simulations of the QSAs were carried out using a classical cluster. Explicitly, since the \emph{quantum oracle gate}~$O$ \cite{nielsen2010quantum} calculates the UF vectors of all the legitimate routes in the QD in parallel, these vectors were pre-calculated. We note that this results in an actual complexity higher than that of the full-search method. Therefore, the employment of the quantum algorithms on a quantum computer is essential for observing a complexity reduction as a benefit of the QP. Hence, in our simulations, we have made the assumption of employing a quantum computer and we count the total number of $O$ activations for quantifying the EQPO algorithm's complexity. This number would be the same for both the classical and the quantum implementations. Note that in the following analysis we will use the notation $N^{x}_{(i)}\equiv\abs{S^{x}_{(i)}}$, where $N^{x}_{(i)}$ corresponds to the cardinality of the set $S^{x}_{(i)}$.
Furthermore, our simulation results have been averaged over $10^8$ runs. During each run we have randomly generated the node's locations as well as the interference levels experienced by them with the aid of the respective distributions mentioned in Section~\ref{sec:netSec}. We have ensured that each run is uncorrelated with the rest of the runs.
Let us now proceed by analytically characterizing the complexity imposed by our proposed algorithm.
\subsection{Complexity}\label{subsec:complexity}
We will first characterize the complexity imposed by the EQPO algorithm's dynamic programming framework, when exhaustive search is employed instead of the P-NDQIO algorithm in Step~6 of Alg.~\ref{alg:eqpo}. We will refer to this method as the \emph{Classical Dynamic Programming}~(CDP) method and we will use it as a benchmarker for assessing the complexity reduction offered by the QP.
Prior to characterizing the EQPO algorithm and the CDP method, we will analyze the orders of the number $N_{(i)}^{\text{surv}}$ of surviving routes and of the number $N_{(i)}^{\text{OPF}}$ of Pareto-optimal routes identified across the first $i$ trellis stages. As far as the number $N_{(i)}^{\text{OPF}}$ is concerned, the trellis graph guiding the search identifies more Pareto-optimal routes, as it proceeds through more trellis stages. Explicitly, its order can be formally expressed as follows:
\begin{equation}\label{eq:NOPF_order}
O(N_{(i)}^{\text{OPF}}) = O(a_iN_{\text{OPF}})=O(N_{\text{OPF}}),~\forall i\in \{1,\dots,N_\text{nodes}-1\},
\end{equation}
where $a_i$ corresponds to the fraction of the OPF identified by the first $i$ trellis stages. Naturally, this fraction $a_i$ approaches unity as the number $i$ of trellis stages moves closer to the maximum number of hops.
Moving on to the number $N_{(i)}^{\text{surv}}$ of surviving routes at the $i$-th stage, it is equal to the number of Pareto-optimal routes identified at the $i$-th trellis stage, based on Relaxation~\ref{rel:relaxed_generators}. Explicitly, $N_{(i)}^{\text{surv}}$ is a fraction of the total number $N_{(i)}^{\text{OPF}}$ of the Pareto-optimal routes identified across the first $i$ trellis stages. Hence, we have $N_{(i)}^{\text{surv}} = b_iN^{\text{OPF}}_{(i)}$ with $b_i\leq 1$, $\forall i\in \{1,\dots,N_\text{nodes}-1\}$, since the set $S_{(i)}^{\text{surv}}$ of Pareto-optimal routes identified at the $i$-th trellis stage is included in the set $S_{(i)}^{\text{OPF}}$ of Pareto-optimal routes identified across the first $i$ trellis stages. Therefore, we can evaluate the order $O(N_{(i)}^{\text{surv}})$ as follows:
\begin{equation}\label{eq:Nsurv_order}
O(N_{(i)}^{\text{surv}}) = O(b_iN^{\text{OPF}}_{(i)}) \mathop{=}\limits^{\eqref{eq:NOPF_order}} O(b_ia_iN_{\text{OPF}})= O(N_{\text{OPF}}).
\end{equation}
Consequently, in Eqs.~\eqref{eq:NOPF_order} and \eqref{eq:Nsurv_order} we have upper-bounded both the order $O(N_{(i)}^{\text{surv}})$ of the number of surviving routes at the $i$-th stage and the order $O(N_{(i)}^{\text{OPF}})$ of the number of Pareto-optimal routes identified across the first $i$ stages by the order $O(N_{\text{OPF}})$ of the total number of Pareto-optimal routes, i.e. we have $O(N_{(i)}^{\text{surv}})=O(N_{(i)}^{\text{OPF}})=O(N_{\text{OPF}})$. Naturally, Eqs.~\eqref{eq:NOPF_order} and \eqref{eq:Nsurv_order} will facilitate the complexity analysis, since they render the aforementioned orders independent of the trellis-stage index $i$. Let us now proceed by characterizing the complexity imposed by the CDP method.
\subsubsection{CDP method's complexity}
Let us assume that there is a total of $N^\text{gen}_{(i)}$ generated routes arriving at the $i$-th trellis stage. These particular routes are generated by the specific Pareto-optimal routes identified at the previous trellis stage, which are $N^\text{surv}_{(i-1)}$ in total. Based on the aforementioned assumptions, the number of generated routes arriving at the $i$-th trellis stage is formulated as follows:
\begin{align}
N^\text{gen}_{(i)} &= N^\text{surv}_{(i-1)}(N_\text{nodes}-1-i)i=O\left[N^\text{surv}_{(i-1)}N_\text{nodes}i\right],\nonumber\\
& \mathop{=}\limits^{\eqref{eq:Nsurv_order}}O\left[N_\text{OPF}N_\text{nodes}i\right]\label{eq:Ngen_i}.
\end{align}
Since the set of Pareto-optimal routes of the previous trellis stage are concatenated to the set of generated routes in Step~5 of Alg.~\ref{alg:eqpo}, the total number of routes considered at the $i$-th trellis stage is given by:
\begin{equation}\label{eq:Nroutes_i}
N^\text{routes}_{(i)} = N^\text{gen}_{(i)}+N^\text{OPF}_{(i-1)}\mathop{=}\limits^{\eqref{eq:NOPF_order},\eqref{eq:Ngen_i}}O\left[N_\text{OPF}N_\text{nodes}i\right].
\end{equation}
Additionally, the CDP method performs $O[(N^\text{routes}_{(i)})^2]$ dominance comparisons, which we will refer to as the \emph{Cost Function Evaluation}~(CFE), since each generated route has to be compared to all of the routes considered. Therefore, the total complexity imposed by the CDP method across all trellis stages may be quantified in terms of the number of dominance comparisons, which is formulated as follows:
\begin{equation}\label{eq:Lcdp}
L_\text{CDP}=\sum\limits_{i=1}^{N_\text{nodes}-1}O\left[\left(N^\text{routes}_{(i)}\right)^2\right]= O(N_\text{OPF}^2N_\text{nodes}^5),
\end{equation}
where we have exploited the sum-of-squares property \cite{abramowitz1966handbook}, namely $\sum^{n}_{i=1}{i^2}=O(n^3)$.
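For completeness, the cubic growth of the squared sum exploited in Eq.~\eqref{eq:Lcdp} can be verified numerically via the closed form $\sum_{i=1}^{n} i^2 = n(n+1)(2n+1)/6$; a brief sketch:

```python
def sum_of_squares(n):
    """Closed form of sum_{i=1}^{n} i^2, which grows as O(n^3)."""
    return n * (n + 1) * (2 * n + 1) // 6

# With N_routes(i) = O[N_OPF * N_nodes * i], summing (N_routes(i))^2 over the
# n = N_nodes - 1 trellis stages contributes the O(n^3) factor of Eq. (L_CDP),
# yielding the overall O(N_OPF^2 * N_nodes^5) complexity.
ratio = sum_of_squares(10**4) / (10**4) ** 3   # approaches 1/3 for large n
```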
\subsubsection{EQPO algorithm's complexity}
Moving on to the EQPO algorithm's complexity analysis, the P-NDQIO algorithm is activated once per trellis stage, based on Alg.~\ref{alg:eqpo}. Note that we will classify the complexity imposed by the P-NDQIO algorithm into two different domains, namely those of the parallel and of the sequential complexity. To elaborate further, the P-NDQIO algorithm also exploits the synergy between QP and HP, which was utilized by the NDQIO algorithm of \cite{alanis2015ndqio}. Explicitly, the parallel complexity, which is termed ``normalized execution time'' in \cite{alanis2015ndqio}, is defined as the number of dominance comparisons, when taking into account the degree of HP. Therefore, it may be deemed to be commensurate with the algorithm's actual normalized execution time. By contrast, the sequential complexity, which is termed ``normalized power consumption'' in \cite{alanis2015ndqio}, is defined as the total number of dominance comparisons, without considering the potential degree of HP. Hence, this specific complexity may be deemed to be commensurate with the algorithm's normalized power consumption, as elaborated in \cite{alanis2015ndqio} as well.
Let us now proceed by characterizing the complexity of the individual sub-processes of the P-NDQIO process. During each trellis stage, the P-NDQIO algorithm activates its BW-BBHT-QSA step. This step invokes the BBHT-QSA once; however, since the quantum circuits of the original NDQIO algorithm are utilized, each activation of the quantum oracle, namely the operator $U_G$ in \cite[Fig.~8]{alanis2015ndqio}, compares each of the generated routes to all the routes comprising the OPF identified so far. Since this set of comparisons is carried out in parallel, a single activation imposes a single CFE and $N^\text{OPF}_{(i)}$ CFEs in the parallel and sequential domains, respectively. Note that the BW-BBHT-QSA process will be activated $(N^\text{surv}_{(i)}+2)$ times during a single trellis stage, since we opted for repeating this step for an additional iteration, when the BBHT-QSA fails to identify a valid route. Therefore, the parallel and sequential complexity imposed by the BW-BBHT-QSA process are quantified as follows:
\begin{align}
L^{BW,P}_{(i)} &= (N^\text{surv}_{(i)}+2)L_\text{BBHT}(N^\text{routes}_{(i)}),\label{eq:Lbw_P}\\
&=O(N_\text{OPF}\sqrt{N_\text{OPF}N_\text{nodes}i}),\label{eq:Lbw_P_ord}\\
L^{BW,S}_{(i)} &=\sum\limits_{j=0}^{N^\text{surv}_{(i)}}{(j+N^\text{OPF}_{(i-1)})}~L_\text{BBHT}(N^\text{routes}_{(i)})+\nonumber\\ &\;\;\;\; +N^\text{surv}_{(i)}L_\text{BBHT}(N^\text{routes}_{(i)})\label{eq:Lbw_S},\\
&= O(N_\text{OPF}^2\sqrt{N_\text{OPF}N_\text{nodes}i}).\label{eq:Lbw_S_ord}
\end{align}
Recall that the term $N^\text{surv}_{(i)}$ in Eqs.~\eqref{eq:Lbw_P} and \eqref{eq:Lbw_S} corresponds to the number of Pareto-optimal routes identified. Additionally, for the calculation of the orders of complexity we have relied on the fact that the BBHT-QSA has a complexity on the order of $L_\text{BBHT}(N)=O(\sqrt{N})$, as demonstrated both in \cite{alanis2015ndqio} and in \cite{PROP:PROP493}. Moving on to the complexity imposed by the BBHT-QSA chains, it has been demonstrated in \cite{alanis2015ndqio} that the complexity imposed by a single BBHT-QSA chain -- which leads to the identification of a single Pareto-optimal route -- is identical to that of the so-called \emph{D\"urr--H{\o}yer Algorithm}~(DHA) \cite{durr1996quantum}, namely on the order of $L_{\text{DHA}}(N)=O(\sqrt{N})$ in terms of the number of quantum oracle gate activations. As for the latter, the $U_{g^\prime}$ quantum operator of \cite[Fig.~7]{alanis2015ndqio} has been utilized, which implements a dominance comparison. Explicitly, each activation of this operator imposes a parallel complexity of $1/K$ CFEs and a sequential complexity of a single CFE, owing to the parallel implementation of the UF comparisons. Therefore, the parallel and sequential complexity imposed by the BBHT-QSA chains are quantified as follows:
\begin{align}
L^{chain,P}_{(i)} &= \frac{N^\text{surv}_{(i)}}{K}L_\text{DHA}(N^\text{routes}_{(i)}),\label{eq:Lchain_P}\\
&=O(N_\text{OPF}\sqrt{N_\text{OPF}N_\text{nodes}i}),\label{eq:Lchain_P_ord}\\
L^{chain,S}_{(i)} &= N^\text{surv}_{(i)}L_\text{DHA}(N^\text{routes}_{(i)}),\label{eq:Lchain_S}\\
&=O(N_\text{OPF}\sqrt{N_\text{OPF}N_\text{nodes}i}).\label{eq:Lchain_S_ord}
\end{align}
Finally, as for the OPF-SR dominance comparisons of Step~10 of Alg.~\ref{alg:pndqio}, the parallel and sequential complexity imposed by this process are quantified as follows:
\begin{align}
L^{SR,P}_{(i)} &= \frac{1}{K}\sum\limits_{j=1}^{N^\text{surv}_{(i)}}(j+N^\text{OPF}_{(i-1)})=O(N_\text{OPF}^2),\label{eq:Lsr_P}\\
L^{SR,S}_{(i)} &= \sum\limits_{j=1}^{N^\text{surv}_{(i)}}(j+N^\text{OPF}_{(i-1)})=O(N_\text{OPF}^2).\label{eq:Lsr_S}
\end{align}
Recall from Eqs.~\eqref{eq:Lbw_P_ord}, \eqref{eq:Lbw_S_ord}, \eqref{eq:Lchain_P_ord}, \eqref{eq:Lchain_S_ord}, \eqref{eq:Lsr_P} and \eqref{eq:Lsr_S} that we used Eqs.~\eqref{eq:NOPF_order} and \eqref{eq:Nsurv_order}, where we have $O(N^\text{surv}_{(i)})=O(N^\text{OPF}_{(i)})=O(N_\text{OPF})$ with $N_\text{OPF}$ corresponding to the total number of Pareto-optimal routes. Let us now proceed with the evaluation of the total parallel and sequential complexities of the EQPO algorithm. In the worst-case scenario the EQPO algorithm will process $(N_\text{nodes}-1)$ trellis stages, corresponding to the maximum possible number of hops, whilst visiting each node at most once. Thus, the total parallel and sequential complexities imposed by the EQPO algorithm are quantified as follows:
\begin{align}
L_{EQPO}^{P} &= \sum\limits_{i=1}^{N_\text{nodes}-1}{L^{BW,P}_{(i)}+L^{chain,P}_{(i)}+L^{SR,P}_{(i)}},\label{eq:Leqpo_P}\\
&= O(N_\text{OPF}^{3/2}N_\text{nodes}^2),\label{eq:Leqpo_P_ord}\\
L_{EQPO}^{S} &= \sum\limits_{i=1}^{N_\text{nodes}-1}{L^{BW,S}_{(i)}+L^{chain,S}_{(i)}+L^{SR,S}_{(i)}},\label{eq:Leqpo_S}\\
&= O(N_\text{OPF}^{5/2}N_\text{nodes}^2)\label{eq:Leqpo_S_ord}.
\end{align}
Note that in Eqs.~\eqref{eq:Leqpo_P_ord} and \eqref{eq:Leqpo_S_ord} we have exploited the specific property of the sum of square roots, namely $\sum_{i=1}^{n}{\sqrt{i}}=O(n^{3/2})$ \cite{abramowitz1966handbook}. Observe from Eqs.~\eqref{eq:Lcdp} and \eqref{eq:Leqpo_P_ord} that the EQPO algorithm achieves a parallel complexity reduction against the CDP method by a factor on the order of $O(N^3_\text{nodes}\sqrt{N_\text{OPF}})$. Additionally, the respective sequential complexity reduction is by a factor on the order of $O(N^3_\text{nodes}/\sqrt{N_\text{OPF}})$, based on Eqs.~\eqref{eq:Lcdp} and \eqref{eq:Leqpo_S_ord}. Hence, the EQPO algorithm imposes a lower sequential complexity than the CDP method, as long as we have $O(N_\text{nodes}^3)>O(\sqrt{N_\text{OPF}})$. As far as the EQPO algorithm's predecessors are concerned, it has been proven in \cite{alanis2015ndqio} that the NDQO algorithm imposes identical parallel and sequential complexities, which are on the order of $O(N\sqrt{N})$. By contrast, the NDQIO algorithm imposes a parallel and a sequential complexity on the order of $O(N_\text{OPF}\sqrt{N})$ and $O(N^2_\text{OPF}\sqrt{N})$, respectively, where $N$ corresponds to the total number of legitimate routes. Consequently, the complexity imposed by both the NDQO and the NDQIO algorithms is proportional to $O(\sqrt{N})$ in both domains, yielding an exponential increase in the order of complexity as the number of nodes increases. By contrast, both the EQPO algorithm and the CDP method exhibit near-polynomial complexity scaling, since it has been demonstrated in \cite[Fig.~11]{alanis2015ndqio} that the total number $N_{\text{OPF}}$ of Pareto-optimal routes increases at a significantly lower rate than the total number $N$ of routes.
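The complexity-reduction factors quoted above follow directly by dividing the leading-order terms of Eqs.~\eqref{eq:Lcdp} and \eqref{eq:Leqpo_P_ord}; the following Python sketch (our own illustration, with arbitrary placeholder values for $N_\text{OPF}$ and $N_\text{nodes}$) makes the scaling explicit:

```python
import math

# Illustrative check of the leading-order expressions (placeholder values,
# not the paper's simulation): the ratio L_CDP / L_EQPO^P should grow as
# N_nodes^3 * sqrt(N_OPF), per Eqs. (eq:Lcdp) and (eq:Leqpo_P_ord).
def l_cdp(n_opf, n_nodes):
    return n_opf ** 2 * n_nodes ** 5

def l_eqpo_parallel(n_opf, n_nodes):
    return n_opf ** 1.5 * n_nodes ** 2

def reduction(n_opf, n_nodes):
    return l_cdp(n_opf, n_nodes) / l_eqpo_parallel(n_opf, n_nodes)

# Doubling N_nodes scales the reduction factor by 2^3 = 8,
# and quadrupling N_OPF scales it by sqrt(4) = 2.
assert math.isclose(reduction(8, 20) / reduction(8, 10), 8.0)
assert math.isclose(reduction(16, 10) / reduction(4, 10), 2.0)
print("reduction factor behaves as O(N_nodes^3 * sqrt(N_OPF))")
```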
\begin{figure*}[htb]
\centering
\subfloat[Parallel Complexity\label{subfig:parallel-complexity}]{\includegraphics[width=0.5\linewidth]{eqpo-parallel}} \hfill
\subfloat[Sequential Complexity\label{subfig:sequential-complexity}]{\includegraphics[width=0.5\linewidth]{eqpo-sequential}}\hfill
\caption{EQPO Alg. (a)~parallel and (b)~sequential complexity quantified in terms of the number of CFEs. The results have been averaged over $10^8$ runs.\label{fig:complexity}}
\end{figure*}
Let us now proceed by presenting the average parallel and the average sequential complexity imposed both by the EQPO algorithm and by the CDP method, which are shown in Figs.~\ref{subfig:parallel-complexity} and \ref{subfig:sequential-complexity}, respectively. We will compare the complexities imposed by the aforementioned algorithms to those of the \emph{Brute-Force}~(BF) method as well as to those of the EQPO algorithm's predecessors, namely the NDQO and the NDQIO algorithms. The aforementioned methods consider the entire set of legitimate routes, hence they have no database correlation exploitation capabilities. Additionally, the NDQO algorithm and the BF method do not employ any HP scheme, thus their respective parallel and sequential complexities are identical. As far as the average complexity of the CDP method is concerned, observe in Figs.~\ref{subfig:parallel-complexity} and \ref{subfig:sequential-complexity} that it requires a higher number of CFEs than the BF method for WMHNs having fewer than 8 nodes. This complexity overhead is justified by the fact that the number $N_{\text{OPF}}$ of Pareto-optimal routes relative to the total number $N$ of legitimate routes is relatively high. This in turn yields an increase in the fraction of trellis nodes that are classified as survivors, hence leading to more dominance comparisons. However, this trend is reversed for WMHNs having more than 7 nodes, where the CDP method exhibits a complexity reduction compared to the BF method. More specifically, for WMHNs constituted by 9 nodes, this complexity reduction is close to an order of magnitude. Still referring to 9-node WMHNs, the CDP method imposes a slightly higher parallel complexity than that of the NDQO algorithm, while it matches the sequential complexity of the NDQIO algorithm for the same 9-node network, based on Figs.~\ref{subfig:parallel-complexity} and \ref{subfig:sequential-complexity}, respectively.
Moving on to the average parallel complexity of the EQPO algorithm, observe in Fig.~\ref{subfig:parallel-complexity} that the EQPO algorithm imposes fewer CFEs than the rest of the algorithms considered for WMHNs having more than 5 nodes. Explicitly, this complexity reduction becomes more substantial as the number of nodes increases, reaching a parallel complexity reduction of almost an order of magnitude for 9-node WMHNs, when compared to the NDQIO algorithm, which is capable of exploiting the HP as well. As for its sequential complexity, observe in Fig.~\ref{subfig:sequential-complexity} that the EQPO algorithm imposes more CFEs than the rest of the algorithms for WMHNs having fewer than 7 nodes. This may be justified by the relatively small number of surviving routes, which does not allow the QP to excel by providing a beneficial complexity reduction. However, this trend is reversed for WMHNs having more than 6 nodes, where the number of surviving routes becomes higher. More specifically, for 9-node WMHNs the EQPO algorithm begins to impose a sequential complexity reduction w.r.t. all the remaining algorithms considered. Additionally, observe in Figs.~\ref{subfig:parallel-complexity} and \ref{subfig:sequential-complexity} that the EQPO algorithm's complexity increases with a much lower gradient as the number of nodes increases, when compared to the full-search-based algorithms, namely to the BF method as well as to the NDQO and the NDQIO algorithms. Explicitly, this is justified by the ``almost polynomial'' order of complexity, as demonstrated in Eqs.~\eqref{eq:Leqpo_P_ord} and \eqref{eq:Leqpo_S_ord}.
\subsection{Accuracy}\label{subsec:accuracy}
Having elaborated on the complexity imposed by the EQPO algorithm, let us now proceed by discussing its heuristic accuracy. Since our design target is to identify the entire set of Pareto-optimal routes, we will evaluate the EQPO algorithm's accuracy versus the complexity imposed in terms of two metrics, namely the average \emph{Pareto distance} $E[P_d]$ and the average \emph{Pareto completion} $E[C]$. The same set of metrics has been considered in \cite{alanis2015ndqio} for the evaluation of the NDQIO algorithm's accuracy as well. To elaborate further, the Pareto distance of a particular route is defined as the probability of this specific route being dominated by the rest of the legitimate routes. Explicitly, given a set of Pareto-optimal routes identified by the EQPO algorithm, their average Pareto distance is a characteristic of the OPF, since it provides insights into the proximity of the exported OPF to the true OPF. Naturally, a Pareto distance having a value of $E[P_d]=0$ implies that the OPF identified by the EQPO algorithm is fully constituted by true Pareto-optimal routes. By contrast, the average Pareto completion is defined as the specific fraction of the solutions on the true OPF identified by the EQPO algorithm. Therefore, our goal is to achieve a Pareto completion as close to $E[C]=1$ as possible.
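The two metrics can be made concrete on a toy route set (our own construction, unrelated to the paper's simulation setup), where each route is scored by a pair of utility functions to be minimized:

```python
# Toy illustration of the two accuracy metrics:
# Pareto distance of a route = fraction of legitimate routes dominating it;
# Pareto completion = fraction of the true OPF returned by the algorithm.
def dominates(a, b):
    # a dominates b if a is no worse in every utility and better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_distance(route, all_routes):
    return sum(dominates(r, route) for r in all_routes) / len(all_routes)

def completion(found, true_opf):
    return len(set(found) & set(true_opf)) / len(true_opf)

routes = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]   # (delay, power) pairs
true_opf = [r for r in routes if pareto_distance(r, routes) == 0.0]
found = [(1, 5), (2, 2)]                             # suppose one route was missed

print(true_opf)                         # [(1, 5), (2, 2), (5, 1)]
print(pareto_distance((4, 4), routes))  # dominated by (2,2) and (3,3) -> 0.4
print(completion(found, true_opf))      # 2/3
```

An exact algorithm drives the average Pareto distance of its output to zero and the completion to one; the error floors discussed below quantify how far the heuristics fall short of these ideals.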
\begin{figure*}[htb]
\centering
\subfloat[\label{subfig:7-pd-par}]{\includegraphics[width=0.5\linewidth]{7-node-Pd-par}} \hfill
\subfloat[\label{subfig:7-pd-ser}]{\includegraphics[width=0.5\linewidth]{7-node-Pd-ser}}\hfill
\subfloat[\label{subfig:7-com-par}]{\includegraphics[width=0.5\linewidth]{7-node-Com-par}}\hfill
\subfloat[\label{subfig:7-com-ser}]{\includegraphics[width=0.5\linewidth]{7-node-Com-ser}}\hfill
\caption{EQPO algorithm performance in terms of its Pareto distance~(a,b) and its Pareto completion~(c,d) versus the parallel complexity~(a,c) and the sequential complexity~(b,d) for 7-node WMHNs. The results have been averaged over $10^8$ runs.\label{fig:accuracy-7-node}}
\end{figure*}
Having defined the performance metrics, let us now present the performance versus complexity results of the EQPO algorithm, which are shown in Fig.~\ref{fig:accuracy-7-node} for 7-node WMHNs. We have evaluated the aforementioned metrics for 7-node WMHNs for the sake of comparison with the methods analyzed in \cite{alanis2014ndqo} as well as in \cite{alanis2015ndqio}. Apart from the NDQO and NDQIO algorithms, we have used as benchmarks two additional classical evolutionary algorithms\footnote{The readers should refer to \cite{alanis2014ndqo} and to \cite{Yetgin:NSGA_2} for a detailed description of the MO-ACO and the NSGA-II, respectively.}, namely the NSGA-II and the MO-ACO. Using the same convention as in \cite{alanis2014ndqo} and \cite{alanis2015ndqio}, we have set the number of individuals equal to the number of generations and we have matched the total parallel complexity imposed by these classical algorithms to that of the NDQO algorithm, since the NDQO algorithm appears to impose the highest parallel complexity, based on Fig.~\ref{subfig:parallel-complexity}. As for their total sequential complexity, we have matched it to that of the NDQIO algorithm. Consequently, we have considered employing 19 individuals over 19 generations for the parallel complexity matching and 29 individuals over 29 generations for the sequential complexity matching for both the NSGA-II and the MO-ACO algorithms.
Let us now proceed by elaborating on the average Pareto distance exhibited by 7-node WMHNs versus the parallel complexity invested, as portrayed in Fig.~\ref{subfig:7-pd-par}. Observe in this figure that the EQPO algorithm performs optimally -- in the sense that no suboptimal routes are included in the OPF -- for about 130 CFEs and then exhibits an error floor around $6\cdot 10^{-6}$. Similar trends are observed for the classical NSGA-II and for MO-ACO algorithm as well as for the quantum-assisted NDQO algorithm; the classical benchmark algorithms both exhibit an error floor around $10^{-3}$, while the respective NDQO algorithm's error floor is around $7\cdot 10^{-9}$. By contrast, the NDQIO algorithm initially has an error floor of about $3\cdot 10^{-5}$, which then decays to infinitesimally low levels, when more CFEs are invested owing to its OPF-SR process \cite{alanis2015ndqio}. This specific trend is visible in Fig.~\ref{subfig:7-pd-par}, where the NDQIO algorithm outperforms the NDQO technique in terms of their $E[P_d]$ beyond 8842 CFEs in the sequential complexity domain. Additionally, the NDQIO algorithm begins to exhibit a lower $E[P_d]$ than that of the EQPO algorithm after 498 and 2932 CFEs in the parallel and sequential domains, respectively.
Let us now provide some further insights into the significance of the aforementioned error floors. Explicitly, a particular route is considered suboptimal if there exists even just a single route dominating it, i.e. if it has a Pareto distance higher than or equal to $P^{th}_d=1/N$, where $N$ corresponds to the total number of legitimate routes. This threshold is visually portrayed with the aid of the dashed and dotted horizontal lines in Figs.~\ref{subfig:7-pd-par} and \ref{subfig:7-pd-ser}. Hence, we can normalize the results w.r.t. this threshold to obtain the probability of a specific route being suboptimal. Consequently, the EQPO algorithm's error floor is translated into a probability of a specific route being suboptimal equal to 0.2\%, while the respective probability of the NDQO algorithm is equal to $2\cdot 10^{-6}$. Additionally, the respective probabilities of the classical benchmark algorithms are about 33\% and 3.3\%, when the parallel and sequential complexity are considered, respectively. Consequently, the EQPO algorithm's probability of opting for a suboptimal route may be regarded as negligible.
The evaluation of the average Pareto completion probability versus the parallel and the sequential complexity is shown in Figs.~\ref{subfig:7-com-par} and \ref{subfig:7-com-ser}, respectively. Note that the subplots inside these figures portray the portion of unidentified true Pareto-optimal routes, as encapsulated by the expression $1-E[C]$. Explicitly, we will utilize this metric for assessing the error floor w.r.t. $E[C]$, which may not be visible from the main plots. Additionally, note that we examined both $E[P_d]$ and $E[C]$ versus the parallel and sequential complexity imposed up to the maximum value observed for the EQPO algorithm. As far as the EQPO algorithm's average Pareto completion versus the parallel complexity is concerned, observe in Fig.~\ref{subfig:7-com-par} that the EQPO algorithm is capable of identifying a higher portion of the true OPF than the rest of the algorithms examined, while considering the same number of CFEs in the parallel complexity domain. Explicitly, the EQPO algorithm succeeds in identifying almost the entire set of Pareto-optimal routes, since it misses as little as 0.1\% of the entire true OPF. This error floor is reached after 1301 and 14651 CFEs in the parallel and sequential complexity domains, respectively, as can be verified from Figs.~\ref{subfig:7-com-par} and \ref{subfig:7-com-ser}.
By contrast, this trend is not echoed in the sequential complexity domain. To elaborate further, observe in Fig.~\ref{subfig:7-pd-ser} that the EQPO algorithm remains more efficient than its classical counterparts. On the other hand, while it is indeed more efficient than the NDQO algorithm up to a complexity budget of 2147 sequential CFEs, beyond that point it identifies fewer Pareto-optimal routes than the NDQO algorithm. The same trend is observed for the NDQIO algorithm for a complexity budget of 4794 sequential CFEs. Nevertheless, this discrepancy between the parallel and sequential complexity is expected to decrease as the number of nodes increases. This is justified by the fact that the EQPO algorithm imposes a lower sequential complexity as the nodes proliferate, as seen in Fig.~\ref{subfig:sequential-complexity}.
Last but not least, the results portrayed in Fig.~\ref{fig:accuracy-7-node} rely on the intelligent central node having perfect knowledge both of the nodes' geo-locations and of the interference power levels experienced by them. This fundamental assumption, albeit impractical, provides us with an upper bound on the achievable performance of the routing schemes considered. Explicitly, despite its impractical nature, it facilitates a fair comparison of the EQPO algorithm to its predecessors in terms of their complexity and heuristic accuracy, which is the main focus of this treatise. Intuitively, a practical network information update process would result in both approximate and outdated network information, thus degrading the results of Fig.~\ref{fig:accuracy-7-node}, while maintaining the complexity per routing optimization at a similar order. Note that we plan on characterizing these imperfections and on conceiving a practical network information update scheme in our future work.
\section{Conclusions}\label{sec:conclusions}
In this treatise we have exploited the correlations in the formation of the Pareto-optimal routes for the sake of achieving a routing complexity reduction. In this context, we have first developed an optimal dynamic programming framework, which transforms the multi-objective routing problem into a decoding problem. However, this optimal framework imposes a high complexity. For this reason, we relaxed the aforementioned framework and proposed the EQPO algorithm, which is empowered by the P-NDQIO algorithm and thus jointly exploits the synergies between the QP and the HP along with the potential correlation in the formation of the Pareto-optimal routes. We then analytically characterized the complexity imposed by the EQPO algorithm and showed that it is capable of solving the multi-objective routing problem in near-polynomial time. In fact, the EQPO algorithm achieved a parallel complexity reduction of almost an order of magnitude and a sequential complexity reduction by a factor of 3 for 9-node WMHNs. Finally, we demonstrated with the aid of simulations that this complexity reduction only imposes an almost negligible error, which was found to be 0.2\% and 0.1\% in terms of the average Pareto distance and the average Pareto completion probability, respectively, for 7-node WMHNs.
\appendices
\section{Proof of Proposition~\ref{prop:route_optimality}\label{app:proof}}
\begin{proof}
Let us consider the route $x^{(j)}_g= \{ SN {\rightarrow} \bar{R}_i{\rightarrow }R_j{\rightarrow}DN\}$ generated by the route $x$. Based on Eq.~(\ref{eq:uf_prot}), the UFs associated with this specific route are equal to:
\begin{equation}\label{eq:uf_xg}
f_k(x^{(j)}_g) = f_k(SN{\rightarrow}\bar{R}_i)+f_k(\bar{R}_i{\rightarrow}R_j)+f_k(R_j{\rightarrow}DN).
\end{equation}
Additionally, the sub-route $x^\prime$ is associated with the following UFs
\begin{equation}\label{eq:uf_xp}
f_k(x^\prime) = f_k(SN{\rightarrow}\bar{R}_i).
\end{equation}
Since we have $f_k(x)>0,\;\forall x$ from Eq.~(\ref{eq:uf_prot}), the sub-route $x^\prime$ strongly dominates the route $x^{(j)}_g$ based on Eqs.~(\ref{eq:uf_xg}) and (\ref{eq:uf_xp}), i.e. we have $\mathbf{f}(x^\prime)\succ\mathbf{f}(x^{(j)}_g)$. Since there exists a specific route $x_d$ from the SN to the DN that weakly dominates the sub-route $x^\prime$, i.e. we have $\mathbf{f}(x_d)\succeq\mathbf{f}(x^\prime)$, the route $x_d$ strongly dominates the route $x^{(j)}_g$ as well, yielding:
\begin{align}
\mathbf{f}(x_d)&\succeq\mathbf{f}(x^\prime)\succ\mathbf{f}(x^{(j)}_g)\label{eq:final_proof_1},\\
\mathbf{f}(x_d)&\succ\mathbf{f}(x^{(j)}_g).\label{eq:final_proof_2}
\end{align}
Consequently, based on Eq.~(\ref{eq:final_proof_2}) the route $x^{(j)}_g$ cannot be Pareto-optimal, since it is strongly dominated by the route $x_d$.
\end{proof}
\bibliographystyle{ieeetr}
\section{Introduction}
Let $M=(M_t)_{t=0,\ldots,T}$ be a discrete time martingale defined on a filtered probability space $(\Omega,{\mathcal F},\{{\mathcal F}_t\}_{t=0,\ldots,T},\P)$, where we suppose $\Omega$ is finite. Let $\overline M_t=\max_{s\le t}M_s$ be the running maximum. Just as the martingale $M$ can be recovered from its final value $M_T$ via the formula $M_t={\mathbb E}[M_T\mid{\mathcal F}_t]$, the running maximum process $\overline M$ can be recovered from its final value $\overline M_T$. In fact, for any $t\in\{0,\ldots,T\}$ and non-null $\omega\in\Omega$, we claim that
\begin{equation} \label{intro1}
\overline M_t(\omega)=\min_{\omega'\in A}\overline M_T(\omega'),
\end{equation}
where $A$ is the atom of ${\mathcal F}_t$ containing $\omega$. To see this, note that $\overline M_t(\omega)\le \min_{\omega'\in A}\overline M_T(\omega')$ since $\overline M$ is nondecreasing. If the inequality were strict, we would have $M_t\le \overline M_t<\overline M_T$ on the ${\mathcal F}_t$-measurable event $A$, contradicting the martingale property: on $A$, $M$ would be sure to experience a strict increase between $t$ and $T$. Thus \eqref{intro1} must hold.
The right-hand side of the identity \eqref{intro1} is the {\em conditional infimum of $\overline M_T$ given ${\mathcal F}_t$}, evaluated at~$\omega$, and the identity itself expresses an ``inf-martingale'' property of $\overline M$. The goal of the present paper is to develop these ideas in some generality. For a large class of complete lattices $S$, we show that the conditional infimum of an $S$-valued random element $X$ given a sub-$\sigma$-algebra ${\mathcal E}$ is well-defined; we denote it by $\bigwedge[X\mid{\mathcal E}]$. In the presence of a filtration one is led to consider ``inf-martingales'' $\bigwedge[X\mid{\mathcal F}_t]$, $t\ge0$, and a key message of this paper is that many naturally occurring nondecreasing processes turn out to have this property. They can then be recovered from their final value. Examples include running maxima of supermartingales and, more generally, of processes that become supermartingales after an equivalent change of measure (Proposition~\ref{P:maxM}). Running maxima, local times, and various integral functionals of so-called sticky processes also have this property (Propositions~\ref{P:f(X) run max}, \ref{P:X U K sticky}, \ref{P:int f sticky}, and their corollaries). More exotic examples include the process of convex hulls of certain multidimensional processes, and the process of sites visited by a random walk (Propositions~\ref{P:convex hull sticky} and~\ref{P:RV sticky}). These results are derived from a simple ``no-arbitrage'' principle for nondecreasing lattice-valued processes (Theorem~\ref{T:NCR}). In the martingale context, an interesting corollary is that any positive local martingale can be recovered from its final value and its global maximum (Proposition~\ref{P:maxM X}).
The general theory covers a rather broad class of measurable complete lattices $S$. One only needs measurability of the ``triangle'' $\{(x,y)\colon x\le y\}$ in the product space $S\times S$, measurability of the countable supremum and infimum maps, and existence of a strictly increasing measurable map $S\to{\mathbb R}$. These hypotheses are stated precisely in \ref{A1}--\ref{A3} below. Apart from the extended real line $[-\infty,\infty]$, we prove that this covers the family of closed convex subsets of ${\mathbb R}^d$, as well as the family $2^{{\mathcal X}}$ of subsets of a countable set ${\mathcal X}$ (Theorems~\ref{T:convex sets} and~\ref{T:countable set}, respectively).
Conditional infima (and suprema) for real-valued random variables have appeared previously in the literature, along with real-valued ``inf-martingales'' (or ``sup-martingales'', also called max-martingales or maxingales); see for instance \cite{bar_car_jen_03,elk_mez_08}. We extend these constructions to general complete lattices with the additional structural properties mentioned above. A related but different notion of maxingale has been used by \cite{puh_97,puh_99,puh_01} and \cite{fle_04} in the context of idempotent probability with applications to large deviations theory and control theory. The notion of stickiness, which is closely related to the developments in the present paper, plays an important role in mathematical finance; see e.g.~\citet{GRS:08,BPS:15,RS:16}. Conditional infima in lattices of sets have also been useful in problems from multidimensional martingale optimal transport; see \cite{OS:17}, who make use of our Example~\ref{E:CI 2} below.
The rest of the paper is organized as follows. After ending this introduction with some remarks on notation, we turn to Section~\ref{S:CI} where the general theory of conditional infima in complete lattices is developed, including analogues of the martingale regularization and optional stopping theorems. Section~\ref{S:sticky} discusses sticky processes and their relations to conditional infima. Applications to martingale theory are given in Section~\ref{S:martingales}, including a general version of \eqref{intro1}. Examples involving processes of convex hulls and processes of subsets of a countable set are given in Section~\ref{S:further}. Section~\ref{S:closed subsets} develops several general results, mainly of measure theoretic nature, for lattices of closed sets. These results should be of independent interest.
\subsection{Remarks on notation}
Throughout this paper, $(\Omega,{\mathcal F},\P)$ is a probability space. Relations between random quantities are understood in the almost sure sense, unless stated otherwise. The probability space is endowed with a filtration ${\mathbb F}=\{{\mathcal F}_t\}_{t\ge 0}$ of sub-$\sigma$-algebras of ${\mathcal F}$, and we set ${\mathcal F}_\infty=\bigvee_{t\ge0}{\mathcal F}_t$. The filtration ${\mathbb F}$ need not be augmented with the $\P$-nullsets, but unless stated otherwise it is assumed throughout the paper that ${\mathbb F}$ is right-continuous. It is sometimes convenient to work with the order-theoretic indicator $\chi_A$ of a subset $A\subseteq\Omega$, defined by
\[
\chi_A(\omega) = \begin{cases} -\infty, & \omega\in A\\ +\infty, & \omega\notin A. \end{cases}
\]
The meaning of the symbols $+\infty$ and $-\infty$ is discussed below.
\section{Conditional infimum} \label{S:CI}
Throughout this section, let $(S,\le)$ be a complete lattice. That is, $S$ is a partially ordered set such that any subset $A\subseteq S$ has a least upper bound, denoted by $\sup A$. This implies that the greatest lower bound $\inf A$ also exists, and that $S$ contains a greatest element $+\infty$ and smallest element $-\infty$. We write $x\vee y$ for $\sup\{x,y\}$ and $x\wedge y$ for $\inf\{x,y\}$, and use $x<y$ as shorthand for $x\le y$ and $x\ne y$.
We assume that $S$ is equipped with a $\sigma$-algebra ${\mathcal S}$ that satisfies the following two properties:
\begin{enumerate}[label={\rm(A\arabic*)}]
\item\label{A1} The set $\{(x,y)\in S^2\colon x\le y\}$ lies in the product $\sigma$-algebra ${\mathcal S}^2={\mathcal S}\otimes{\mathcal S}$.
\item\label{A2} The countable supremum and infimum maps
\[
(x_1,x_2,\ldots) \mapsto \sup\{x_1,x_2,\ldots\} \qquad\text{and}\qquad
(x_1,x_2,\ldots) \mapsto \inf\{x_1,x_2,\ldots\}
\]
are measurable, where the set of sequences $S^\infty=\{(x_1,x_2,\ldots)\colon x_n\in S \text{ for all $n$}\}$ is equipped with the product $\sigma$-algebra ${\mathcal S}^\infty=\bigotimes_{n=1}^\infty{\mathcal S}$.
\end{enumerate}
These properties ensure that random elements of~$S$ (i.e., measurable maps $\Omega\to S$) behave well. Indeed, let $X_n$, $n\in{\mathbb N}$, be random elements of $S$. Assumption~\ref{A1} implies that $\{X_1\le X_2\}\in{\mathcal F}$, and hence also $\{X_1<X_2\}\in{\mathcal F}$.\footnote{Indeed, $\{X_1<X_2\}$ equals $\{X_1\le X_2\}\setminus(\{X_1\le X_2\}\cap\{X_2\le X_1\})$ and is therefore measurable.} Assumption~\ref{A2} implies that $\sup_n X_n$ and $\inf_n X_n$ are again random elements of $S$. This will be used repeatedly in what follows.
Finally, we make the following assumption, where, of course, strictly increasing means that $x<y$ implies $\phi(x)<\phi(y)$:
\begin{enumerate}[label={\rm(A\arabic*)},resume]
\item\label{A3} There exists a strictly increasing measurable map $\phi\colon S\to{\mathbb R}$.
\end{enumerate}
\begin{remark}
In some cases, naturally appearing lattices are not complete, but only {\em Dedekind complete}: suprema (infima) are guaranteed to exist only for subsets $A\subseteq S$ that are bounded above (below). In such cases one can extend the given lattice to a complete lattice satisfying \ref{A1}--\ref{A3}, provided these properties hold in the given lattice; see Proposition~\ref{P:ext Dedekind}.
\end{remark}
There are plenty of examples of complete lattices which satisfy \ref{A1}--\ref{A3}, some of which are discussed below. The first example below concerns the familiar (extended) real-valued case, while the subsequent examples involve more complicated complete lattices.
\begin{example} \label{E:CI 1}
$\overline{\mathbb R}=[-\infty,\infty]$ together with the usual order and the Borel $\sigma$-algebra is a complete lattice which clearly satisfies \ref{A1}--\ref{A3}.
\end{example}
\begin{example} \label{E:CI 2_new}
Let ${\mathcal X}$ be a countable set, and let $S=2^{\mathcal X}$ be the collection of all subsets of~${\mathcal X}$ partially ordered by set inclusion. Supremum is set union, $-\infty=\emptyset$, and $+\infty={\mathcal X}$. With these operations $S$ is a complete lattice, and it admits a $\sigma$-algebra ${\mathcal S}$ such that \ref{A1}--\ref{A3} are satisfied; see Theorem~\ref{T:countable set}.
\end{example}
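As a concrete sanity check (outside the formal development; the choice of weights is ours), the lattice operations of this example and a strictly increasing map $\phi$ as required by \ref{A3} can be sketched in a few lines of Python, identifying the countable set ${\mathcal X}$ with the nonnegative integers and taking $\phi(A)=\sum_{n\in A}3^{-n}$:

```python
def phi(A):
    # A is a set of nonnegative integers indexing elements of the countable
    # set; the series converges since 3**-n is summable, and adding any new
    # element strictly increases the sum, so phi is strictly increasing.
    return sum(3.0 ** -n for n in A)

A, B = {0, 2}, {0, 1, 2}
assert A < B              # strict inclusion, i.e., A < B in the lattice order
assert phi(A) < phi(B)    # phi respects the strict order
assert A | B == B         # sup of {A, B} is the union
assert A & B == A         # inf of {A, B} is the intersection
```

The same weights make $\phi$ well defined on infinite subsets, which is the point of choosing a summable series rather than, say, cardinality.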
\begin{example} \label{E:CI 2}
Let $S={\rm CO}({\mathbb R}^d)$ be the collection of all closed convex subsets $C\subseteq{\mathbb R}^d$ partially ordered by set inclusion. For a subset $A\subseteq S$ one has $\sup A=\overline\conv (\bigcup_{C\in A}C)$, the closed convex hull of the union of all $C\in A$, and $\inf A=\bigcap_{C\in A}C$. Moreover, $-\infty=\emptyset$ and $+\infty={\mathbb R}^d$. With these operations $S$ is a complete lattice, and it admits a $\sigma$-algebra ${\mathcal S}$ such that \ref{A1}--\ref{A3} are satisfied; see Theorem~\ref{T:convex sets}.
\end{example}
The following lemma is a consequence of the existence of a strictly increasing measurable real-valued map. We will use it to define the conditional infimum.
\begin{lemma} \label{L:esssup}
Let ${\mathcal L}$ be a set of random elements of $S$ closed under countable suprema. Then ${\mathcal L}$ contains a maximal element. That is, there exists $X^*\in{\mathcal L}$ such that $X\le X^*$ almost surely for every $X\in{\mathcal L}$. The maximal element $X^*$ is unique up to almost sure equivalence.
\end{lemma}
\begin{proof}
The uniqueness statement is obvious since any other maximal element $X^{**}\in{\mathcal L}$ satisfies $X^*\le X^{**}\le X^*$ almost surely. To prove existence, let $\phi\colon S\to{\mathbb R}$ be a strictly increasing measurable map, without loss of generality taken to be bounded, and define
\[
\alpha = \sup\{ {\mathbb E}[\phi(X)] \colon X\in{\mathcal L}\}.
\]
Let $(X_n)_{n\in{\mathbb N}}$ be a maximizing sequence and define $X^*=\sup_n X_n \in{\mathcal L}$. Then
\[
\alpha\ge{\mathbb E}[\phi(X^*)]\ge{\mathbb E}[\phi(X_n)] \to \alpha,
\]
so ${\mathbb E}[\phi(X^*)]=\alpha$. Consider any $X\in{\mathcal L}$ and assume for contradiction that $\P(X\not\le X^*)>0$. Then the random element $Y=X^*\vee X \in {\mathcal L}$ satisfies $X^*\le Y$ and $\P(X^*<Y)>0$. Therefore, since $\phi$ is strictly increasing,
\[
\alpha \ge {\mathbb E}[\phi(Y)] > {\mathbb E}[\phi(X^*)] = \alpha,
\]
a contradiction. Thus $X\le X^*$ almost surely, as desired.
\end{proof}
Although it will not be used in this paper, let us mention that Lemma~\ref{L:esssup} implies the existence of essential suprema. Given a set ${\mathcal L}$ of random elements of $S$, a random element $X^*$ is the {\em essential supremum of ${\mathcal L}$} (necessarily a.s.~unique) if $X^*$ a.s.~dominates ${\mathcal L}$ and satisfies $X^*\le Y$ a.s.~for any random element $Y$ that also a.s.~dominates ${\mathcal L}$.
\begin{corollary}
Let ${\mathcal L}$ be any set of random elements of $S$. Then ${\mathcal L}$ admits an essential supremum $X^*$, which can be expressed as the supremum of countably many elements of ${\mathcal L}$.
\end{corollary}
\begin{proof}
Let $\overline{\mathcal L}$ be the set of all countable suprema $\sup_nX_n$ of elements $X_n\in{\mathcal L}$. This set is closed under countable suprema, and thus admits a maximal element $X^*$ by Lemma~\ref{L:esssup}. Moreover, if $Y$ dominates ${\mathcal L}$, it also dominates $\overline{\mathcal L}$, whence $X^*\le Y$. Finally, being an element of $\overline{\mathcal L}$, $X^*$ is the supremum of countably many elements of ${\mathcal L}$.
\end{proof}
The following definition introduces the key object of interest in this paper, the conditional infimum. Lemma~\ref{L:esssup} implies that the conditional infimum always exists and is unique up to almost sure equivalence.
\begin{definition} \label{D:cond inf}
Let $X$ be a random element of $S$, and let ${\mathcal E}\subseteq{\mathcal F}$ be a sub-$\sigma$-algebra. The {\em conditional infimum of $X$ given ${\mathcal E}$}, denoted by $\bigwedge[X\mid{\mathcal E}]$, is the maximal element of
\[
\{Z:\Omega\to S \text{ such that $Z$ is ${\mathcal E}$-measurable and $Z\le X$ almost surely} \}.
\]
That is, $\bigwedge[X\mid{\mathcal E}]$ is the greatest ${\mathcal E}$-measurable lower bound on $X$.
\end{definition}
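To make the definition concrete in the simplest setting (a finite probability space in which every point is charged, so a sub-$\sigma$-algebra is generated by a partition of $\Omega$), the conditional infimum of a real-valued $X$ is just the blockwise minimum. The following Python sketch is an illustration of this special case, not part of the formal development:

```python
def conditional_infimum(X, partition):
    """Greatest partition-measurable lower bound on X: the minimum of X
    over each block, assigned to every point of that block."""
    Z = {}
    for block in partition:
        m = min(X[omega] for omega in block)
        for omega in block:
            Z[omega] = m
    return Z

# Omega = {0,1,2,3}; the sub-sigma-algebra is generated by {{0,1},{2,3}}.
X = {0: 5.0, 1: 2.0, 2: 7.0, 3: 3.0}
partition = [frozenset({0, 1}), frozenset({2, 3})]
Z = conditional_infimum(X, partition)
# Z is constant on blocks and dominated by X pointwise:
assert Z == {0: 2.0, 1: 2.0, 2: 3.0, 3: 3.0}
```

Any other partition-measurable lower bound on $X$ is constant on blocks and at most the blockwise minimum, which is why this $Z$ is maximal.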
The following lemma collects some basic properties of the conditional infimum, which are immediate consequences of the definition. These properties are well-known in the literature, at least in the case $S=\overline{\mathbb R}$; see \cite{bar_car_jen_03}.
\begin{lemma}[Properties of the conditional infimum] \label{L:PCI}
Let $X$ and $Y$ be random elements of~$S$, and let ${\mathcal E}$ and ${\mathcal G}$ be sub-$\sigma$-algebras of ${\mathcal F}$. Then the following properties hold:
\begin{enumerate}
\item\label{PCI:mon I}If ${\mathcal E}\subseteq{\mathcal G}$ then $\bigwedge[X\mid{\mathcal E}]\le\bigwedge[X\mid{\mathcal G}]$.
\item\label{PCI:mon II}If $X\le Y$ then $\bigwedge[X\mid{\mathcal E}]\le\bigwedge[Y\mid{\mathcal E}]$.
\item\label{PCI:tower}If ${\mathcal E}\subseteq{\mathcal G}$, then $\bigwedge[\,\bigwedge[X\mid{\mathcal G}]\mid{\mathcal E}] = \bigwedge[X\mid{\mathcal E}]$.
\item\label{PCI:cont I}Let $\{{\mathcal E}_n\}_{n\in{\mathbb N}}$ be a non-increasing sequence of sub-$\sigma$-algebras and suppose ${\mathcal E} = \bigcap_{n\in{\mathbb N}}{\mathcal E}_n$. Then $\bigwedge[X\mid{\mathcal E}] = \inf_{n\in{\mathbb N}} \bigwedge[X\mid{\mathcal E}_n]$.
\item\label{PCI:cont II} Let $\{X_n\}_{n\in{\mathbb N}}$ be a sequence of random elements of~$S$. Then $\bigwedge[\inf_{n\in{\mathbb N}}X_n\mid{\mathcal E}] = \inf_{n\in{\mathbb N}} \bigwedge[X_n\mid{\mathcal E}]$.
\item\label{PCI:max linear} If $Y$ is ${\mathcal E}$-measurable and $\le$ is a total order, then $\bigwedge[X\vee Y\mid{\mathcal E}] = \bigwedge[X\mid{\mathcal E}] \vee Y$.
\end{enumerate}
\end{lemma}
\begin{proof}
\ref{PCI:mon I}: $\bigwedge[X\mid{\mathcal E}]$ is ${\mathcal G}$-measurable and dominated by $X$.
\ref{PCI:mon II}: Any lower bound on $X$ is also a lower bound on $Y$.
\ref{PCI:tower}: By monotonicity, $\bigwedge[\,\bigwedge[X\mid{\mathcal G}]\mid{\mathcal E}]\le\bigwedge[X\mid{\mathcal E}]\le X$. Moreover, if $Z\le X$ is ${\mathcal E}$-measurable, then it is also ${\mathcal G}$-measurable, whence $Z\le\bigwedge[X\mid{\mathcal G}]$. Thus $Z=\bigwedge[Z\mid{\mathcal E}]\le \bigwedge[\,\bigwedge[X\mid{\mathcal G}]\mid{\mathcal E}]$.
\ref{PCI:cont I}: Since ${\mathcal E}_n$ is non-increasing in $n$, \ref{PCI:mon I} yields $\inf_{n\in{\mathbb N}} \bigwedge[X\mid{\mathcal E}_n]=\inf_{n\ge m} \bigwedge[X\mid{\mathcal E}_n]$ for each $m$. Thus $\inf_{n\in{\mathbb N}} \bigwedge[X\mid{\mathcal E}_n]$ is ${\mathcal E}$-measurable. Moreover, it dominates any ${\mathcal E}$-measurable~$Z\le X$.
\ref{PCI:cont II}: $\bigwedge[X_n\mid{\mathcal E}]$ is a lower bound on $X_n$, whence $\inf_{n\in{\mathbb N}} \bigwedge[X_n\mid{\mathcal E}]$ is a lower bound on $\inf_{n\in{\mathbb N}}X_n$. Thus $\inf_{n\in{\mathbb N}} \bigwedge[X_n\mid{\mathcal E}]\le \bigwedge[\inf_{n\in{\mathbb N}}X_n\mid{\mathcal E}]$. The reverse inequality follows from~\ref{PCI:mon II}.
\ref{PCI:max linear}: Set $X'=\bigwedge[X\mid{\mathcal E}]$. Then $X'\le X$, hence $X'\vee Y\le X\vee Y$. It remains to pick an arbitrary ${\mathcal E}$-measurable $Z\le X\vee Y$ and show that $Z\le X'\vee Y$. On $\{Y<Z\}$ one has $Y<Z\le X\vee Y$ and hence $X\vee Y=X$. Thus
\[
Z\wedge \chi_{\{Y<Z\}^c} \le (X\vee Y) \wedge \chi_{\{Y<Z\}^c} = X \wedge \chi_{\{Y<Z\}^c} \le X.
\]
The left-hand side is ${\mathcal E}$-measurable, so $Z\wedge \chi_{\{Y<Z\}^c}\le X'$ by definition of $X'$. It follows that $Z\le X'\vee Y$ on $\{Y<Z\}$. Since $\le$ is a total order, $\{Y<Z\}^c=\{Z\le Y\}$, so that $Z\le X'\vee Y$ also on this set. Thus $Z\le X'\vee Y$, as required.
\end{proof}
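In the finite, partition-generated setting the properties of Lemma~\ref{L:PCI} can be checked mechanically. The following Python sketch (an illustration under the stated finiteness assumption, not part of the proof) verifies the tower property and the commutation with countable infima for a pair of nested partitions:

```python
def cond_inf(X, partition):
    # Blockwise minimum: the conditional infimum when the sub-sigma-algebra
    # is generated by the given partition of a finite sample space.
    Z = {}
    for block in partition:
        m = min(X[w] for w in block)
        for w in block:
            Z[w] = m
    return Z

X = {0: 4.0, 1: 1.0, 2: 6.0, 3: 2.0}
G = [frozenset({0}), frozenset({1}), frozenset({2, 3})]   # finer partition
E = [frozenset({0, 1}), frozenset({2, 3})]                # coarser partition

# Tower property: conditioning on G and then on E equals conditioning on E.
assert cond_inf(cond_inf(X, G), E) == cond_inf(X, E)

# Commutation with infima: cond inf of a pointwise infimum equals the
# pointwise infimum of the cond infs.
Y = {0: 3.0, 1: 5.0, 2: 1.0, 3: 9.0}
pointwise_inf = {w: min(X[w], Y[w]) for w in X}
lhs = cond_inf(pointwise_inf, E)
rhs = {w: min(cond_inf(X, E)[w], cond_inf(Y, E)[w]) for w in X}
assert lhs == rhs
```

Both identities reduce to the fact that a minimum over a block can be computed by first minimizing over sub-blocks, respectively by interchanging two minima.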
We now consider $S$-valued stochastic processes $V=(V_t)_{t\ge0}$ adapted to the right-continuous filtration~${\mathbb F}$. A process $V$ with nondecreasing paths is called {\em right-continuous} if it satisfies
\[
V_t(\omega) = \inf_{s>t} V_s(\omega) \quad\text{for all}\quad (\omega,t)\in\Omega\times{\mathbb R}_+.
\]
This amounts to a slight abuse of terminology, since $S$ need not have any topological structure.
Given a random element $X$, one can consider the family $V=(V_t)_{t\ge0}$ of random variables $V_t=\bigwedge[X\mid{\mathcal F}_t]$. In view of Lemma~\ref{L:PCI}\ref{PCI:mon I}, $V_t$ is non-decreasing in $t$; however, at this stage it is only defined up to a nullset that may depend on $t$. The following result confirms that one can choose a regular version.
\begin{lemma}[Right-continuous version] \label{L:RC version}
Let $X$ be a random element of $S$. Then there exists an adapted nondecreasing right-continuous $S$-valued process $V=(V_t)_{t\ge0}$ such that $V_t=\bigwedge[X\mid{\mathcal F}_t]$ for all $t\ge0$. The process $V$ is unique up to evanescence.
\end{lemma}
\begin{proof}
Fix a dense countable subset $D\subset{\mathbb R}_+$, and let $V^0_t$ be a version of $\bigwedge[X\mid{\mathcal F}_t]$ for each $t\in D$. For each $t\in {\mathbb R}_+$, define
\[
H_t = \bigcup_{u>t} H^0_u, \qquad H^0_t = \bigcap_{\substack{r,s\in D\\ r<s\le t}} \left\{ \omega\colon V^0_r(\omega) \le V^0_s(\omega) \right\}.
\]
Thus $H_t$ is the set of $\omega$ such that the map $s\mapsto V^0_s(\omega)$ is nondecreasing on $D\cap[0,t+\varepsilon]$ for some $\varepsilon>0$. One has $\P(H_t)=1$ by Lemma~\ref{L:PCI}\ref{PCI:mon I}, as well as $H_t\in{\mathcal F}_t$ by right-continuity of ${\mathbb F}$. Define $V=(V_t)_{t\ge 0}$ by
\[
V_t(\omega) = \begin{cases} \inf_{s\ge t,\,s\in D} V^0_s(\omega) & \omega\in H_t, \\ +\infty & \omega\notin H_t.\end{cases}
\]
It follows that $V$ is adapted, nondecreasing, and right-continuous. Furthermore, Lemma~\ref{L:PCI}\ref{PCI:cont I} and right-continuity of ${\mathbb F}$ yield $V_t=\inf_{s\ge t,\,s\in D} V^0_s = \bigwedge[X\mid {\mathcal F}_t]$. The uniqueness statement follows from the almost sure uniqueness of each $V_t$ together with right-continuity.
\end{proof}
\begin{lemma}[Optional stopping] \label{L:optional sampling}
Let $X$ be a random element of $S$ and let $V=(V_t)_{t\ge0}$ be the regular version of $V_t=\bigwedge[X\mid{\mathcal F}_t]$. Then
\[
V_\tau = \bigwedge[X\mid {\mathcal F}_\tau ]
\]
for every stopping time $\tau$.
\end{lemma}
\begin{proof}
It suffices to prove the result for $\tau$ taking finitely many values. Indeed, in the general case one has $\lim_{m\to\infty}\tau_m=\tau$ for some non-increasing sequence of stopping times~$\tau_m$ taking finitely many values. Lemma~\ref{L:PCI}\ref{PCI:cont I} and right-continuity of ${\mathbb F}$ and $V$ then yield the result.
We therefore suppose $\tau=\sum_n t_n\bm 1_{A_n}$ for finitely many distinct $t_n\in[0,\infty]$ and pairwise disjoint sets $A_n\in{\mathcal F}_{t_n}$ forming a partition of $\Omega$. Let $Y$ be any ${\mathcal F}_\tau$-measurable random element of $S$ with $Y\le X$. We must show that $Y\le V_\tau$. To this end, define the random elements
\[
Y_n = \begin{cases} Y &\text{on $A_n$} \\ -\infty &\text{on $A_n^c$} \end{cases}
\]
For any $B\in{\mathcal S}$ one has $\{Y_n\in B\} = \left(\{Y\in B\}\cap A_n\right) \cup \left(\{-\infty\in B\}\cap A_n^c\right)$. This event lies in ${\mathcal F}_{t_n}$ since $\{Y\in B\}\cap A_n=\{Y\in B\}\cap\{\tau=t_n\}\in{\mathcal F}_{t_n}$ by ${\mathcal F}_\tau$-measurability of $Y$, and since $A_n^c=\{\tau\ne t_n\}\in{\mathcal F}_{t_n}$ due to the fact that $\tau$ is a stopping time. Consequently $Y_n$ is ${\mathcal F}_{t_n}$-measurable and satisfies $Y_n\le Y\le X$, so by definition of the conditional infimum we have $Y_n\le\bigwedge[X\mid{\mathcal F}_{t_n}]=V_{t_n}$. Therefore, $Y = \inf_n (Y_n\vee\chi_{A_n}) \le \inf_n (V_{t_n}\vee\chi_{A_n}) = V_\tau$, as required.
\end{proof}
\begin{example}
It is not true in general that $V_{t-}=\bigwedge[X\mid{\mathcal F}_{t-}]$. For example, suppose $S=\overline{\mathbb R}$. Let $W$ be standard Brownian motion and ${\mathbb F}$ the right-continuous filtration it generates. Set $X=|W_1|$ and let $V$ be the regular version of $V_t=\bigwedge[X\mid{\mathcal F}_t]$. Then $V_t=0$ for all $t<1$, but since ${\mathcal F}_{1-}={\mathcal F}_1$ one has $\bigwedge[X\mid{\mathcal F}_{1-}]=X>0$.
\end{example}
The following theorem is the main result of this section. It provides equivalent conditions for when a monotone process can be recovered from its final value by taking conditional infima.
\begin{theorem}[Recovery of monotone processes] \label{T:NCR}
Let $U=(U_t)_{t\ge0}$ be an adapted nondecreasing right-continuous $S$-valued process, and define $U_\infty=\sup_{t\ge0}U_t$. The following conditions are equivalent, where the regular version of $\bigwedge[U_\infty\mid{\mathcal F}_t]$ is understood:
\begin{enumerate}
\item\label{T:NCR:1} $U_t = \bigwedge[U_\infty\mid{\mathcal F}_t]$ for all $t\ge0$;
\item\label{T:NCR:2} Any stopping time $\tau$ with $U_\tau<Y$ on $\{\tau<\infty\}$ for some ${\mathcal F}_\tau$-measurable $S$-valued random variable $Y\le U_\infty$ satisfies $\P(\tau<\infty)=0$;
\item\label{T:NCR:3}
$\P(Y\le U_\infty \mid{\mathcal F}_\tau)<1$ holds on $\{U_\tau<+\infty\}$ for every stopping time $\tau$ and every ${\mathcal F}_\tau$-measurable $S$-valued random variable $Y$ with $U_\tau<Y$ on $\{U_\tau<+\infty\}$.
\end{enumerate}
\end{theorem}
Condition~\ref{T:NCR:2} of Theorem~\ref{T:NCR} excludes sure improvements. Indeed, if the condition fails for some stopping time $\tau$, then on the positive probability event $\{\tau<\infty\}$, one has $U_\tau <Y\le U_\infty$, where $Y$ is ${\mathcal F}_\tau$-measurable. At time $\tau$ one is therefore guaranteed that $U$ will increase in the future by an amount that is ``bounded away from zero''. In economic terms, supposing $U$ is real-valued to fix ideas, one can think of a situation where exchanging the current value $U_\tau$ for the final outcome $U_\infty$ is guaranteed to result in an ${\mathcal F}_\tau$-measurable gain of at least $Y-U_\tau>0$. On the other hand, if condition~\ref{T:NCR:2} is satisfied, then there is no nontrivial ${\mathcal F}_\tau$-measurable lower bound on the gain. In this sense, \ref{T:NCR:2} is reminiscent of the no-arbitrage type conditions appearing in mathematical finance. This analogy is carried further by the equivalent characterization~\ref{T:NCR:1}, which can be thought of as a martingale condition.
In contrast to the correspondence between no arbitrage and martingales however, Theorem~\ref{T:NCR} does not involve any equivalent changes of probability measure. This is because the conditional infimum only depends on the probability measure through its nullsets, which carries over to the ``martingale'' condition~\ref{T:NCR:1}. Both~\ref{T:NCR:1} and~\ref{T:NCR:2} are thus invariant with respect to equivalent measure changes.
The equivalent property \ref{T:NCR:3} is similar to~\ref{T:NCR:2}, but looks more convoluted. The reason for stating it is that it is closely related to the notion of {\em stickiness} for real-valued increasing processes. In fact, \ref{T:NCR:3} may be viewed as a natural generalization of the stickiness property to processes on $[0,\infty)$ with values in a lattice $S$ which satisfies the assumptions~\ref{A1}--\ref{A3}. Sticky processes are discussed in Section~\ref{S:sticky}.
\begin{proof}[Proof of Theorem~\ref{T:NCR}]
\ref{T:NCR:1} $\Rightarrow$ \ref{T:NCR:2}: Pick a stopping time $\tau$ and an ${\mathcal F}_\tau$-measurable random variable $Y\le U_\infty$ such that $U_\tau<Y$ on $\{\tau<\infty\}$. In particular, $Y\le\bigwedge[U_\infty\mid{\mathcal F}_\tau]$. Together with~\ref{T:NCR:1} and Lemma~\ref{L:optional sampling}, this yields
\[
\bigwedge[U_\infty\mid{\mathcal F}_\tau] = U_\tau < Y \le \bigwedge[U_\infty\mid{\mathcal F}_\tau]
\]
on $\{\tau<\infty\}$. Thus $\P(\tau<\infty)=0$ as required, showing that~\ref{T:NCR:2} holds.
\ref{T:NCR:2} $\Rightarrow$ \ref{T:NCR:3}: Pick a stopping time $\tau$ and an ${\mathcal F}_\tau$-measurable random variable $Y$ with $Y\le U_\infty$ and $U_\tau<Y$ on $\{U_\tau<\infty\}$. Define $A=\{\P(Y\le U_\infty\mid{\mathcal F}_\tau)=1\}\cap\{U_\tau<\infty\}$. We must show that $\P(A)=0$. To this end, define the stopping time
\[
\sigma = \tau\bm 1_A + \infty \bm 1_{A^c}
\]
and the ${\mathcal F}_\sigma$-measurable random variable
\[
Z = (Y\vee \chi_A)\wedge (U_\tau\vee \chi_{A^c}) = \begin{cases} Y & \text{on $A$}\\ U_\tau &\text{on $A^c$.}\end{cases}
\]
Since $Y\le U_\infty$ on $A$, we have $Z\le U_\infty$. Moreover, since $\{\sigma<\infty\}\subseteq A$, we have $U_\tau<Z$ on $\{\sigma<\infty\}$. Thus~\ref{T:NCR:2} implies $\P(\sigma<\infty)=0$. Therefore, using also that $A\cap\{\tau=\infty\}$ is a nullset since $Y\le U_\infty=U_\tau<Y$ there, we obtain
\[
\P(A) = \P(A\cap\{\tau<\infty\}) = \P(\sigma<\infty) = 0,
\]
as required.
\ref{T:NCR:3} $\Rightarrow$ \ref{T:NCR:1}: Pick $t\ge0$ and let $A=\{U_t<\bigwedge[U_\infty\mid{\mathcal F}_t]\}$. Define $Y=\bigwedge[U_\infty\mid{\mathcal F}_t] \vee \chi_{A}$. Then $Y$ is ${\mathcal F}_t$-measurable, with $U_t<Y$ on $A$ and $Y=+\infty$ on $A^c$, hence $U_t<Y$ on $\{U_t<+\infty\}$. Moreover, $Y\le U_\infty$ on $A$, hence $\P(Y\le U_\infty\mid{\mathcal F}_t)=1$ on $A\subseteq\{U_t<+\infty\}$. By \ref{T:NCR:3}, this forces $\P(A)=0$. Thus~\ref{T:NCR:1} holds.
\end{proof}
\section{Sticky processes} \label{S:sticky}
In this section we apply the theory of Section~\ref{S:CI} with the complete lattice $\overline{\mathbb R}=[-\infty,\infty]$ in Example~\ref{E:CI 1}. We are thus dealing with scalar (i.e., extended real-valued) non-decreasing processes and conditional infima of scalar random variables. In this context there is a close connection between condition~\ref{T:NCR:3} of Theorem~\ref{T:NCR} and the notion of {\em stickiness}.
Throughout this section, $X$ denotes a c\`adl\`ag adapted process with values in a given separable metric space $({\mathcal X},d)$, which is equipped with its Borel $\sigma$-algebra.
\begin{definition} \label{D:sticky}
We call $X$ {\em globally sticky} if $\P(\sup_{t\in[\tau,\infty)}d(X_t,X_\tau) \le \varepsilon \mid {\mathcal F}_\tau) > 0$ for every stopping time $\tau$ and every strictly positive ${\mathcal F}_\tau$-measurable random variable $\varepsilon$. We call $X$ {\em sticky} if the stopped process $X^T=(X_{t\wedge T})_{t\ge0}$ is globally sticky for every $T\in{\mathbb R}_+$.
\end{definition}
Note that on the event $\{\tau=\infty\}$, the supremum in this definition is taken over the empty set, and therefore equals $-\infty$ by convention. Thus the conditional probability is equal to one on this event. Furthermore, we never have to evaluate the possibly undefined quantity~$X_\infty$.
\begin{remark}
The terminology of Definition~\ref{D:sticky} is consistent with the existing literature, where stickiness is generally defined for processes on a bounded time interval $[0,T]$. In our setting it is more natural to work with processes on $[0,\infty)$, which makes the notion of global stickiness convenient.
\end{remark}
A wide variety of processes are sticky. For example, any one-dimensional regular strong Markov process is sticky, see \citet[Proposition~3.1]{G:06}. Moreover, any process with conditional full support is sticky; see \citet{GRS:08,BPS:15}. This includes most L\'evy processes, large classes of solutions of stochastic differential equations, processes like skew Brownian motion, as well as non-Markovian non-semimartingales like fractional Brownian motion. We will return to the conditional full support property in connection with Proposition~\ref{P:int f sticky} below. Continuous functions of sticky processes are sticky, and stickiness is preserved under bounded time changes. \citet{RS:16} provide further examples and references, and we develop some additional results in this direction below.
For a non-decreasing ${\mathbb R}$-valued process $U$, global stickiness reduces to the condition
\begin{equation} \label{eq:U sticky}
\P(U_\infty \le U_\tau + \varepsilon \mid {\mathcal F}_\tau) > 0 \text{ for any stopping time $\tau$ and ${\mathcal F}_\tau$-measurable $\varepsilon>0$,}
\end{equation}
where $U_\infty=\lim_{t\to\infty}U_t$. This immediately yields the following corollary of Theorem~\ref{T:NCR}, which explains the relevance of stickiness in the present context.
\begin{corollary} \label{C:sticky}
An adapted non-decreasing right-continuous ${\mathbb R}$-valued process $U$ satisfies $U_t=\bigwedge[U_\infty\mid{\mathcal F}_t]$ for all $t\ge0$ if and only if it is globally sticky.
\end{corollary}
\begin{proof}
View $U$ as taking values in $S=\overline{\mathbb R}$. By Theorem~\ref{T:NCR}, the equality $U_t=\bigwedge[U_\infty\mid{\mathcal F}_t]$ holds for all $t\in{\mathbb R}_+$ if and only if $\P(U_\tau + \varepsilon \le U_\infty\mid{\mathcal F}_\tau)<1$ holds on $\{U_\tau<\infty\}$ for every stopping time $\tau$ and every ${\mathcal F}_\tau$-measurable $\varepsilon>0$. Applying this with~$\varepsilon/2$ in place of~$\varepsilon$, one sees that the weak inequality can be replaced by a strict inequality. Therefore the inequality $\P(U_\tau + \varepsilon \le U_\infty\mid{\mathcal F}_\tau)<1$ can be replaced by $\P(U_\infty \le U_\tau + \varepsilon \mid{\mathcal F}_\tau)>0$. Consequently, since $U$ is actually finite-valued, the above statement is equivalent to the stickiness property~\eqref{eq:U sticky}.
\end{proof}
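A two-period toy model (our construction, not taken from the paper) makes the dichotomy behind Corollary~\ref{C:sticky} tangible: a nondecreasing adapted process is recovered as the blockwise minimum of its terminal value precisely when no atom of the filtration carries a sure improvement. The following Python sketch contrasts a sticky and a non-sticky example on four scenarios:

```python
def cond_inf(X, partition):
    # Blockwise minimum = conditional infimum on a finite sample space.
    return {w: min(X[v] for v in b) for b in partition for w in b}

# Four scenarios; the time-1 sigma-algebra is generated by {{0,1},{2,3}}.
F1 = [frozenset({0, 1}), frozenset({2, 3})]

# Sticky example: on each atom, some scenario shows no further increase.
U1 = {0: 1.0, 1: 1.0, 2: 2.0, 3: 2.0}
U2 = {0: 1.0, 1: 3.0, 2: 2.0, 3: 5.0}
assert cond_inf(U2, F1) == U1                  # U_1 = inf[U_2 | F_1]
assert min(U2.values()) == min(U1.values())    # recovery at time 0 as well

# Non-sticky example: on the atom {0,1} the process is certain to increase
# by at least 1 (a sure improvement), and recovery fails there.
V1 = {0: 1.0, 1: 1.0, 2: 2.0, 3: 2.0}
V2 = {0: 2.0, 1: 3.0, 2: 2.0, 3: 2.0}
assert cond_inf(V2, F1) != V1                  # blockwise min is 2 > 1 = V_1
```

In the second example the gap $2-1$ is exactly an ${\mathcal F}_1$-measurable guaranteed gain in the sense of condition~\ref{T:NCR:2}.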
\begin{remark} Inspired by Corollary~\ref{C:sticky}, one may use condition \ref{T:NCR:3} of Theorem~\ref{T:NCR} to {\em define} global stickiness for nondecreasing processes valued in a complete lattice satisfying the assumptions~\ref{A1}--\ref{A3}.
\end{remark}
Corollary~\ref{C:sticky} is useful because non-decreasing functionals of sticky processes are often sticky, which means that there is an abundance of non-decreasing sticky processes. We now provide a number of results in this direction.
\begin{proposition} \label{P:f(X) run max}
Let $U_t=\sup_{s\le t}f(X_s)$, where $f:{\mathcal X}\to{\mathbb R}$ is a continuous map. If $X$ is (globally) sticky, then $U$ is also (globally) sticky.
\end{proposition}
\begin{proof}
Assume that $X$ is globally sticky. The result for $X$ sticky but not globally sticky follows by replacing $X$ by $X^T$ for any $T<\infty$ in the argument below. Fix any stopping time $\tau$ and ${\mathcal F}_\tau$-measurable random variable $\varepsilon>0$. On the event $\{\tau<\infty\}$, the random set $C=f^{-1}((-\infty,f(X_\tau)+\varepsilon))$ is open and contains $X_\tau$. One can find a strictly positive ${\mathcal F}_\tau$-measurable random variable $\varepsilon'$ such that, on $\{\tau<\infty\}$, $C$ contains the closed ball of radius $\varepsilon'$ centered at $X_\tau$. Consequently,
\begin{align*}
\P(U_\infty \le U_\tau+\varepsilon \mid {\mathcal F}_\tau) &\ge\P(f(X_t) \le f(X_\tau) + \varepsilon \text{ for all $t\in[\tau,\infty)$} \mid {\mathcal F}_\tau) \\
&\ge \P( d(X_t,X_\tau) \le \varepsilon' \text{ for all $t\in[\tau,\infty)$} \mid{\mathcal F}_\tau) \\
&>0,
\end{align*}
using that $X$ is globally sticky. Thus $U$ is also globally sticky.
\end{proof}
\begin{corollary}
If $X$ is real-valued and (globally) sticky, then $\overline X_t=\max_{0\le s\le t}X_s$ and $X^*_t=\max_{0\le s\le t}|X_s|$ are also (globally) sticky.
\end{corollary}
The next result looks somewhat abstract, but has useful consequences. In particular, it implies that the local time of a sticky semimartingale is again sticky; see Corollary~\ref{C:sticky local time} below. We let $d(x,K)=\inf\{d(x,y)\colon y\in K\}$ denote the distance from a point $x\in {\mathcal X}$ to a subset $K\subseteq{\mathcal X}$.
\begin{proposition} \label{P:X U K sticky}
Let $K\subseteq {\mathcal X}$ be a closed subset, and let $U$ be a nondecreasing right-continuous adapted process that satisfies the following property for almost every~$\omega$:
\begin{equation}\label{P:X U K sticky cond}
\text{\parbox{0.8\textwidth}{If $[t_1,t_2]$ is an interval such that either $X(\omega)\in K$ on $[t_1,t_2)$, or\\ $d(X(\omega),K)\ge a$ on $[t_1,t_2]$ for some $a>0$, then $U$ is constant on $[t_1,t_2]$.}}
\end{equation}
If $X$ is (globally) sticky, then $U$ is also (globally) sticky.
\end{proposition}
\begin{proof}
We prove the result for $X$ globally sticky. Fix any stopping time $\tau$ and any ${\mathcal F}_\tau$-measurable $\varepsilon>0$. For each $a>0$, define the stopping time $\sigma_a=\inf\{t\ge \tau\colon d(X_t,K)\ge a\}$. Since $U$ satisfies \eqref{P:X U K sticky cond}, the equality $U_\infty=U_{\sigma_a}$ holds on the event where $d(X_t,K) \ge a/2$ for all $t\in[\sigma_a,\infty)$. Consequently, for any $a>0$,
\begin{align*}
\P(U_\infty \le U_\tau + \varepsilon \mid{\mathcal F}_\tau)
&\ge \P(U_{\sigma_a} \le U_\tau + \varepsilon \text{ and } d(X_t,K) \ge \frac{a}{2} \text{ for all $t\in[\sigma_a,\infty)$} \mid{\mathcal F}_\tau) \\
&= {\mathbb E}\left[ \bm 1_{\{U_{\sigma_a} \le U_\tau + \varepsilon\}}\, \P( d(X_t,K) \ge \frac{a}{2} \text{ for all $t\in[\sigma_a,\infty)$} \mid {\mathcal F}_{\sigma_a}) {\ \Big|\ } {\mathcal F}_\tau \right].
\end{align*}
Consider now the event
\[
A = \{\P(U_\infty \le U_\tau + \varepsilon \mid{\mathcal F}_\tau) = 0\} \in {\mathcal F}_\tau.
\]
Then, by the inequality above,
\[
\bm 1_{\{U_{\sigma_a} \le U_\tau + \varepsilon\}}\, \P( d(X_t,K) \ge a/2 \text{ for all $t\in[\sigma_a,\infty)$} \mid {\mathcal F}_{\sigma_a})=0
\]
holds on~$A$ for all $a>0$. The global stickiness property states that the conditional probability is strictly positive, whence $\bm 1_{\{U_{\sigma_a} \le U_\tau + \varepsilon\}}=0$ on $A$ for all $a>0$. Define the stopping time $\sigma_0=\inf\{t\ge \tau\colon X_t\notin K\}$. Sending $a$ to zero and using that $K$ is closed, we obtain $\sigma_a\downarrow \sigma_0$. Right-continuity and non-decrease of $U$ then yield $U_{\sigma_a}\downarrow U_{\sigma_0}$, and hence
\begin{equation}\label{P:X U K sticky:eq2}
\bm 1_{\{U_{\sigma_0} < U_\tau + \varepsilon\}} \le \lim_{a\downarrow0}\bm 1_{\{U_{\sigma_a} \le U_\tau + \varepsilon\}}=0 \quad\text{on $A$.}
\end{equation}
But since $X$ lies in $K$ on $[\tau,\sigma_0)$, the assumption~\eqref{P:X U K sticky cond} on $U$ implies that $U_{\sigma_0}=U_\tau$. Thus the left-hand side of~\eqref{P:X U K sticky:eq2} equals one, which forces $\P(A)=0$. Hence $\P(U_\infty \le U_\tau + \varepsilon \mid{\mathcal F}_\tau)>0$, that is, $U$ is globally sticky. The case where $X$ is merely sticky follows by applying the same argument to the stopped processes $X^T$ and $U^T$.
\end{proof}
\begin{corollary} \label{C:sticky local time}
Suppose $X$ is a real semimartingale, and let $L^x$ be its local time at level $x$. If $X$ is (globally) sticky, and $x_1,\ldots,x_n\in{\mathbb R}$, $n\in{\mathbb N}$ are arbitrary, then $L^{x_1}+\cdots+L^{x_n}$ is also (globally) sticky.
\end{corollary}
\begin{proof}
Apply Proposition~\ref{P:X U K sticky} with $K=\{x_1,\ldots,x_n\}$ and $U=L^{x_1}+\cdots+L^{x_n}$.
\end{proof}
\begin{example}
Without the stickiness assumption on $X$, there is no guarantee that its local time is sticky. Indeed, let $W$ be Brownian motion, which is not globally sticky. Its local time $L^0_t(W)$ tends to infinity with $t$, so that $\bigwedge[L^0_\infty\mid{\mathcal F}_t]=\infty\ne L^0_t(W)$. This can be turned into an example on a finite time horizon as follows. Let $\tau=\inf\{t\ge0\colon L^0_t(W)\ge 1\}$, which is a finite stopping time. Define $X_t = W_{(t/(1-t))\wedge\tau}$ for $t\in[0,1]$, which for $t=1$ should be read $X_1=W_\tau$. Then $X$ is a semimartingale with respect to the time-changed filtration, and its local time is given by $L^0_t(X)=L^0_{(t/(1-t))\wedge\tau}(W)$. In particular, one has $\bigwedge[L^0_1(X)\mid{\mathcal F}_0]=1\ne 0=L^0_0(X)$.
\end{example}
For the next result we assume that ${\mathcal X}$ is an open connected subset of ${\mathbb R}^n$ and that $X$ has continuous paths. For any deterministic times $\tau\le T<\infty$, the restriction $X|_{[\tau,T]}=(X_t)_{t\in[\tau,T]}$ is a random element of the space $C([\tau,T];{\mathcal X})$ of all continuous maps $[\tau,T]\to {\mathcal X}$, equipped with the uniform metric. The process $X$ is said to have {\em conditional full support} if for any choice of deterministic times $0\le\tau<T$, the support of the regular conditional distribution of $X|_{[\tau,T]}$ given ${\mathcal F}_\tau$ is almost surely all of $C_{X_\tau}([\tau,T];{\mathcal X})$, the closed subset of $C([\tau,T];{\mathcal X})$ whose paths are equal to $X_\tau$ at time $\tau$. The notion of conditional full support plays an important role in mathematical finance, and implies the stickiness property; see e.g.~\citet{BPS:15}.
\begin{proposition} \label{P:int f sticky}
Let $f:{\mathcal X}\to{\mathbb R}_+$ be a nonnegative continuous function with $0\in f({\mathcal X})$. If $X$ has conditional full support, then the process $U$ given by
\[
U_t = \int_0^t f(X_s) ds
\]
is also sticky.
\end{proposition}
\begin{proof}
We must show that for any $T<\infty$, any stopping time $\tau\le T$, and any strictly positive ${\mathcal F}_\tau$-measurable $\varepsilon$, we have $\P(U_T\le U_\tau+\varepsilon\mid{\mathcal F}_\tau)>0$. By \citet[Lemma~3.1]{BPS:15}, it suffices to take $\tau<T$ deterministic; see also \citet[Lemma~2.1]{RS:16}. Consider the regular conditional distribution of $(\varepsilon,X|_{[\tau,T]})$ given ${\mathcal F}_\tau$, under which $X_\tau$ and $\varepsilon$ are both Dirac distributed almost surely and therefore can be treated as being deterministic. Let $\gamma\colon [0,1]\to {\mathcal X}$ be a continuous map with $\gamma(0)=X_\tau$ and $f(\gamma(1))=0$. Such a map exists since ${\mathcal X}$ is connected and $0\in f({\mathcal X})$. Let $m=\max_{s\in[0,1]}f(\gamma(s))$ be the largest value that $f$ attains along $\gamma$.
Now define the set $G\subset C_{X_\tau}([\tau,T];{\mathcal X})$ to consist of all ${\mathcal X}$-valued continuous paths $x\colon [\tau,T]\to {\mathcal X}$ with $x(\tau)=X_\tau$ satisfying the following two properties:
\begin{enumerate}
\item $f(x(t)) < m+\varepsilon$ for all $t\in[\tau,T]$.
\item $f(x(t)) < \varepsilon/(2T)$ for all $t\in[\sigma,T]$, where we define $\sigma\in(\tau,T)$ by
\[
\sigma=\tau + \left(\frac{\varepsilon/2}{m + \varepsilon} \wedge \frac{T-\tau}{2}\right).
\]
\end{enumerate}
Then $G$ is open, being the intersection of two open sets. Moreover, $G$ is nonempty since it contains the path $(x(t))_{t\in[\tau,T]}$ given by
\[
x(t) = \gamma\left( 1\wedge\frac{t-\tau}{\sigma-\tau}\right).
\]
The conditional full support property therefore implies that the event $\{X|_{[\tau,T]}\in G\}$ has strictly positive regular conditional probability. On the other hand, whenever $X|_{[\tau,T]}$ remains in $G$ one has
\[
U_T - U_\tau = \int_\tau^\sigma f(X_t) dt + \int_\sigma^T f(X_t)dt \le \frac{\varepsilon/2}{m+\varepsilon}\times(m+\varepsilon) + T\times \frac{\varepsilon}{2T} = \varepsilon.
\]
In conclusion, one has $\P(U_T-U_\tau \le \varepsilon\mid{\mathcal F}_\tau)>0$ as required.
\end{proof}
\section{Martingales and supermartingales} \label{S:martingales}
Martingales are not always sticky: one example is the martingale $M_t=\P(W_1>0\mid{\mathcal F}_t)$ where $W$ is Brownian motion. This martingale starts at $1/2$ and converges to zero or one at time $t=1$. Therefore it does not remain in any interval around $1/2$ of width strictly less than $1/2$. Nonetheless, certain functionals of martingales are necessarily sticky, and consequently satisfy the ``inf-martingale'' property~\ref{T:NCR:1} of Theorem~\ref{T:NCR}. This includes the running maximum process $\overline M_t=\sup_{s\le t}M_s$ as well as the local time processes.
The following result is an immediate consequence of Theorem~\ref{T:NCR}. Recall that a c\`adl\`ag supermartingale $M$ is {\em closed on the right} if there exists an integrable random variable $X$ such that $M_t\ge {\mathbb E}[X\mid{\mathcal F}_t]$ for all $t\ge0$. For instance, this holds if $M$ is nonnegative or, more generally, if $\{M_t^-\colon t\ge0\}$ is a uniformly integrable family; see VI.8 in~\citet{DM:82}.
\begin{proposition} \label{P:maxM}
Let $M$ be a c\`adl\`ag supermartingale, possibly after an equivalent change of probability measure. Then $\overline M$ is sticky, that is,
\[
\overline M_t = \bigwedge[ \overline M_T \mid{\mathcal F}_t], \qquad t\le T<\infty.
\]
If additionally $M$ is closed on the right, then $\overline M$ is globally sticky, that is,
\[
\overline M_t = \bigwedge[ \overline M_\infty \mid{\mathcal F}_t], \qquad t\ge0.
\]
\end{proposition}
\begin{proof}
We apply the theory of Section~\ref{S:CI} with the complete lattice $\overline{\mathbb R}=[-\infty,\infty]$ in Example~\ref{E:CI 1}. Since the desired conclusion is invariant under equivalent changes of probability measure, we may suppose $M$ is already a supermartingale. We may also suppose it is closed on the right, since otherwise we may replace $M$ by the stopped process $M^T$. The result now follows from Theorem~\ref{T:NCR} with $U=\overline M$, once condition~\ref{T:NCR:2} of the theorem is verified. Thus, consider any stopping time~$\tau$ such that $\overline M_\tau < Y$ on $\{\tau<\infty\}$ for some ${\mathcal F}_\tau$-measurable random variable $Y \le \overline M_\infty$. Define $\sigma=\inf\{t>\tau\colon M_t\ge Y\}$. Then
\[
0 \ge {\mathbb E}[ M_\sigma - M_\tau ] = {\mathbb E}[ (M_\sigma - M_\tau)\bm 1_{\{\tau<\infty\}} ] \ge {\mathbb E}[ (Y - \overline M_\tau)\bm 1_{\{\tau<\infty\}} ] \ge 0.
\]
Therefore ${\mathbb E}[ (Y - \overline M_\tau)\bm 1_{\{\tau<\infty\}} ]=0$, and we deduce $\P(\tau<\infty)=0$, as required.
\end{proof}
An interesting consequence of Proposition~\ref{P:maxM} is that it allows one to reconstruct any nonnegative local martingale $M$ from the pair $(M_\infty,\overline M_\infty)$. For uniformly integrable martingales this is obvious, since $M_t={\mathbb E}[M_\infty\mid{\mathcal F}_t]$ for all $t\ge0$. For general nonnegative local martingales the result is less obvious and even counterintuitive (at least to the author); in particular, many such local martingales satisfy $M_\infty=0$, in which case the global maximum $\overline M_\infty$ alone contains the same information as the entire process.
To reconstruct $M$ from $(M_\infty,\overline M_\infty)$, simply observe that a reducing sequence for $M$ is given by the crossing times $\tau_n=\inf\{t\ge0:\overline M_t\ge n\}$, so that
\[
M_{t\wedge\tau_n} = {\mathbb E}[M_{\tau_n}\mid{\mathcal F}_t] = {\mathbb E}[\overline M_{\tau_n}\bm 1_{\{\tau_n<\infty\}} + M_\infty\bm 1_{\{\tau_n=\infty\}}\mid{\mathcal F}_t].
\]
Thus $M_t=\lim_{n\to\infty}M_{t\wedge\tau_n}$ is determined by $(M_\infty,\overline M)$, which by Proposition~\ref{P:maxM} is determined by $(M_\infty,\overline M_\infty)$.
In fact, a stronger statement is true: it is enough to know only the very largest values of $\overline M_\infty$, in the following sense.
\begin{proposition} \label{P:maxM X}
Let $M$ be a nonnegative local martingale and let $X$ be any bounded random variable. Then $M$ can be reconstructed from the pair $(M_\infty, X\vee \overline M_\infty)$.
\end{proposition}
\begin{proof}
Define $V_t=\bigwedge[X\vee \overline M_\infty\mid{\mathcal F}_t]$ and $\tau_n=\inf\{t\ge0\colon V_t\ge n\}$. Let $c\ge X$ be a deterministic upper bound on $X$. We claim that $\overline M_t=V_t$ on $A=\{V_t\ge n\}$ for any $n>c$ and any $t\ge0$. To see this, note that $X<V_t\le X\vee\overline M_\infty$ on $A$ and hence $X<\overline M_\infty$ on $A$. Thus by Lemma~\ref{L:PCI}\ref{PCI:max linear} and Proposition~\ref{P:maxM},
\[
V_t \vee\chi_A = \bigwedge[X\vee \overline M_\infty \vee \chi_A \mid{\mathcal F}_t] = \bigwedge[\overline M_\infty \vee \chi_A \mid{\mathcal F}_t] = \bigwedge[\overline M_\infty \mid{\mathcal F}_t] \vee \chi_A = \overline M_t\vee\chi_A.
\]
This proves that $\overline M_t=V_t$ on $\{V_t\ge n\}$, as claimed. In conjunction with the inequality $\overline M_t\le V_t$, this implies that $\tau_n=\inf\{t\ge0:\overline M_t\ge n\}$ and $V_{\tau_n}=\overline M_{\tau_n}$ on $\{\tau_n<\infty\}$ for all $n>c$. The argument preceding the proposition now yields the desired result.
\end{proof}
The fact that a nonnegative local martingale $M$ can be reconstructed from the pair $(M_\infty, \overline M_\infty)$ can be deduced from results that already exist in the literature, under the additional assumption that $\overline M$ is continuous. For example, assuming without loss of generality that $M_\infty=0$, a conditional version of an argument by \citet{ELY:97} shows that
\begin{equation} \label{eq:ELYeq}
M_t=\lim_{n\to\infty}n\,\P(\overline M_\infty\ge n\mid{\mathcal F}_t).
\end{equation}
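The identity \eqref{eq:ELYeq} can be checked by hand in the classical example $M=1/R$, where $R$ is a three-dimensional Bessel process started at $r_0>0$: this is a nonnegative continuous local martingale with $M_\infty=0$, and since $\P(\inf_{t\ge0}R_t\le a)=a/r_0$ for $a\le r_0$, one has
\[
n\,\P(\overline M_\infty\ge n) = n\,\P\Big(\inf_{t\ge0}R_t\le \tfrac1n\Big) = \frac{1}{r_0} = M_0, \qquad n>\frac{1}{r_0},
\]
in agreement with \eqref{eq:ELYeq} at $t=0$.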
An alternative argument is based on the following identity due to \citet{NY:06}, where it is additionally assumed that $M_0=1$ and $M>0$:
\begin{equation} \label{eq:NYeq}
{\mathbb E}[f(\overline M_\infty)\mid{\mathcal F}_t] = f(\overline M_t)\left(1 - \frac{M_t}{\overline M_t}\right) + M_t \int_{\overline M_t}^\infty \frac{f(x)}{x^2}dx
\end{equation}
for any positive or bounded Borel function $f$. Choosing for $f$ functions $f_n$ such that $f_n=0$ on $(-\infty,n]$ and $\int_n^\infty x^{-2}f_n(x)dx=1$, the right-hand side of~\eqref{eq:NYeq} becomes equal to $M_t$ as soon as $n$ exceeds $\overline M_t$. This shows that
\[
M_t = \lim_{n\to\infty} {\mathbb E}[f_n(\overline M_\infty)\mid{\mathcal F}_t],
\]
so that $M_t$ can be recovered from $\overline M_\infty$.
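A concrete choice is $f_n = n\,\bm 1_{(n,\infty)}$, which satisfies
\[
\int_n^\infty \frac{f_n(x)}{x^2}\,dx = n\int_n^\infty \frac{dx}{x^2} = 1,
\]
and for which the limit above becomes $M_t=\lim_{n\to\infty} n\,\P(\overline M_\infty> n\mid{\mathcal F}_t)$; by monotonicity in $n$, this limit coincides with the one in \eqref{eq:ELYeq}.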
Note that \eqref{eq:ELYeq} crucially relies on the assumption that $\overline M$ is continuous. Indeed, \citet[Example~3.2]{HR:15} construct a nonnegative martingale $M$, with very large but unlikely upward jumps, such that $M_0=1$, $M_\infty=0$, and
\[
\lim_{n\to\infty} n\,\P(\overline M_\infty \ge n) = 0.
\]
This is inconsistent with~\eqref{eq:ELYeq}. The continuity of $\overline M$ is similarly crucial for \eqref{eq:NYeq}.
Our next result shows that another interesting functional, namely the local time process of a local martingale, is always sticky.
\begin{proposition}
Let $M$ be a local martingale, and let $L^x$ denote its local time at level $x$. Then $L^x$ is sticky, that is,
\[
L^x_t = \bigwedge[L^x_T\mid{\mathcal F}_t], \qquad t\le T<\infty.
\]
\end{proposition}
\begin{proof}
By localization we may assume that $M$ is a martingale. Pick any $T<\infty$, any stopping time $\tau\le T$, and any strictly positive ${\mathcal F}_\tau$-measurable random variable $\varepsilon$. To verify the stickiness property~\eqref{eq:U sticky}, we must show that $\P(L^x_T-L^x_\tau \le 2\varepsilon\mid{\mathcal F}_\tau)>0$. To this end, define stopping times
\begin{align*}
\rho_n&=\inf\{t\ge\tau\colon |M_t-x|\ge n^{-1}\}\wedge T, \\
\rho&=\inf\{t\ge\tau\colon M_t\ne x\}\wedge T.
\end{align*}
We first show that
\begin{equation} \label{eq:M LT 1}
\P(L^x_T - L^x_{\rho_n} \le \varepsilon \mid{\mathcal F}_{\rho_n}) >0.
\end{equation}
Let $B=\{\P(L^x_T - L^x_{\rho_n} \le \varepsilon \mid{\mathcal F}_{\rho_n}) = 0\}$ and define the stopping time
\[
\upsilon = \inf\{t\ge\rho_n\colon M_t = x\} \wedge T.
\]
On $B$ we know that the local time process increases over the interval $[\rho_n,T]$ (in fact, it increases by more than $\varepsilon$). By \citet[Theorem~IV.7]{P:05}, the local time measure $dL^x_t$ is concentrated on those time points $t$ for which $M_{t-}=M_t=x$. Therefore $M_\upsilon=x$ on $B$. Moreover, $\rho_n$ occurs strictly before $T$ on $B$, so that $|M_{\rho_n}-x|\ge n^{-1}$ on $B$. Combining these observations yields
\[
{\mathbb E}[M_\upsilon - M_{\rho_n} \mid {\mathcal F}_{\rho_n}]
= \begin{cases}
{\mathbb E}[x-M_{\rho_n}\mid{\mathcal F}_{\rho_n}] \le -{\displaystyle\frac{1}{n}} & \text{on $\{M_{\rho_n}\ge x+n^{-1}\}\cap B$}, \\[1ex]
{\mathbb E}[x-M_{\rho_n}\mid{\mathcal F}_{\rho_n}] \ge {\displaystyle\frac{1}{n}} & \text{on $\{M_{\rho_n}\le x-n^{-1}\}\cap B$}.
\end{cases}
\]
Thus $\left|{\mathbb E}[M_\upsilon - M_{\rho_n} \mid {\mathcal F}_{\rho_n}] \right| \ge n^{-1}$ on $B$. The martingale property then forces $\P(B)=0$, which proves~\eqref{eq:M LT 1}.
Next, we prove that
\begin{equation} \label{eq:M LT 2}
\P(L^x_T - L^x_\rho \le 2\varepsilon \mid{\mathcal F}_\rho) >0.
\end{equation}
To this end, define the stopping time
\[
\sigma = \inf\{t\ge \rho\colon L^x_t \ge L^x_\rho + \varepsilon\}.
\]
On the event $\{\sigma>T\}$, clearly $L^x_T-L^x_\rho\le\varepsilon$. On the event $\{\rho_n\le\sigma\le T\}$, one has $L^x_{\rho_n}\le L^x_\sigma$ and $L^x_\sigma-L^x_\rho=\varepsilon$, hence $L^x_T-L^x_\rho = (L^x_T-L^x_{\rho_n}) + (L^x_{\rho_n}-L^x_\rho) \le L^x_T-L^x_{\rho_n} + \varepsilon$. Consequently, for each $n$,
\begin{align*}
\P(L^x_T - L^x_\rho \le 2\varepsilon \mid{\mathcal F}_\rho)
&\ge \P(L^x_T - L^x_{\rho_n} \le \varepsilon, \ \rho_n\le \sigma \mid{\mathcal F}_\rho) \\
&= {\mathbb E}[\bm 1_{\{\rho_n\le \sigma\}} \P(L^x_T - L^x_{\rho_n} \le \varepsilon \mid{\mathcal F}_{\rho_n}) \mid {\mathcal F}_\rho].
\end{align*}
Let $A=\{\P(L^x_T - L^x_\rho \le 2\varepsilon \mid{\mathcal F}_\rho)=0\}$. The above inequality along with~\eqref{eq:M LT 1} yields
\[
\bm 1_{\{\rho_n\le \sigma\}\cap A} = 0
\]
for all $n$. Since $\rho_n\downarrow \rho$, and since $\sigma>\rho$, it follows that $\P(A)=0$. This proves~\eqref{eq:M LT 2}.
Finally, just observe that $M$ is constant and equal to $x$ on $[\tau,\rho)$, so that $L^x_\tau=L^x_\rho$. Therefore
\[
\P(L^x_T-L^x_\tau \le 2\varepsilon\mid{\mathcal F}_\tau) = {\mathbb E}[ \P(L^x_T-L^x_\rho \le 2\varepsilon\mid{\mathcal F}_\rho)\mid{\mathcal F}_\tau] > 0
\]
due to~\eqref{eq:M LT 2}. This completes the proof.
\end{proof}
\section{Further examples of recovery of monotone processes} \label{S:further}
We now consider two examples of set-valued nondecreasing processes that can be recovered from their final values. The first example deals with convex hulls, and we apply the theory of Section~\ref{S:CI} with the complete lattice $S={\rm CO}({\mathbb R}^d)$ in Example~\ref{E:CI 2}. The second example deals with the collection of sites visited by a random walk on a countable set ${\mathcal X}$, and uses the complete lattice $S=2^{\mathcal X}$ in Example~\ref{E:CI 2_new}.
\subsection{Convex hulls}
Let $X=(X_t)_{t\ge0}$ be a c\`adl\`ag adapted process with values in ${\mathbb R}^d$. By Lemma~\ref{L:conv is adapted}, the ${\rm CO}({\mathbb R}^d)$-valued process $U=(U_t)_{t\ge0}$ given by
\[
U_t = \overline\conv(X_s\colon s\le t)
\]
is adapted. We have the following result.
\begin{proposition} \label{P:convex hull sticky}
If $X$ is sticky, then
\[
U_t = \bigwedge [U_T \mid {\mathcal F}_t], \qquad t\le T<\infty.
\]
\end{proposition}
\begin{proof}
Relying on the implication $\ref{T:NCR:3}\Rightarrow\ref{T:NCR:1}$ of Theorem~\ref{T:NCR}, it suffices to consider any stopping time $\tau\le T$ and ${\mathcal F}_\tau$-measurable ${\rm CO}({\mathbb R}^d)$-valued random variable $Y$ such that $U_\tau\subsetneq Y$, and prove that $\P(Y\subseteq U_T\mid{\mathcal F}_\tau)<1$. Define the ${\mathcal F}_\tau$-measurable random variable
\[
\varepsilon = 1\wedge \frac12 \sup_{y\in Y} \inf_{x\in U_\tau} |x-y|
\]
which is strictly positive since $U_\tau\subsetneq Y$. Furthermore, one has
\[
Y \not\subseteq \overline\conv( U_\tau \cup B(X_\tau,\varepsilon)),
\]
where $B(X_\tau,\varepsilon)$ is the ball of radius $\varepsilon$ centered at $X_\tau$; indeed, since $X_\tau\in U_\tau$, the set $\overline\conv( U_\tau \cup B(X_\tau,\varepsilon))$ is contained in the closed $\varepsilon$-neighborhood of $U_\tau$, while $Y$ contains points at distance greater than $\varepsilon$ from $U_\tau$. Since $X$ is sticky, one therefore gets
\begin{align*}
0 &< \P(\sup_{t\in[\tau,T]} |X_t-X_\tau| \le \varepsilon \mid{\mathcal F}_\tau) \\
&\le \P( U_T \subseteq \overline\conv(U_\tau \cup B(X_\tau,\varepsilon)) \mid{\mathcal F}_\tau) \\
&\le \P( Y \not\subseteq U_T \mid{\mathcal F}_\tau).
\end{align*}
This yields $\P(Y\subseteq U_T\mid{\mathcal F}_\tau)<1$ as required.
\end{proof}
\subsection{Sites visited by a random walk}
Let $X=(X_t)_{t\ge0}$ be a c\`adl\`ag process with values in a countable set ${\mathcal X}$. Define the $2^{\mathcal X}$-valued process $U=(U_t)_{t\ge0}$ by
\[
U_t = \bigcup_{s\le t}\{X_s\}.
\]
This is the process whose value at time $t$ is the set of all sites $X$ has visited up to and including time $t$, and is adapted by Lemma~\ref{L:range of SP}. In this context, if we equip ${\mathcal X}$ with the discrete metric $d(x,y)=\bm 1_{\{x\ne y\}}$, stickiness of $X$ simply means that
\[
\P(X_t = X_\tau \text{ for all $t\in[\tau,T]$} \mid{\mathcal F}_\tau) > 0
\]
for every $T\ge0$ and every stopping time $\tau\le T$. That is, $X$ has conditionally unbounded holding times.
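For example, any continuous-time Markov chain on ${\mathcal X}$ whose jump rates are bounded by a constant $\lambda<\infty$ has this property: by the strong Markov property,
\[
\P(X_t = X_\tau \text{ for all $t\in[\tau,T]$} \mid{\mathcal F}_\tau) \ge e^{-\lambda(T-\tau)} > 0.
\]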
\begin{proposition} \label{P:RV sticky}
Assume $X$ has conditionally unbounded holding times in the above sense. Then
\[
U_t = \bigwedge [U_T \mid {\mathcal F}_t], \qquad t\le T<\infty.
\]
\end{proposition}
\begin{proof}
The proof is similar to that of Proposition~\ref{P:convex hull sticky}, but simpler.
\end{proof}
\section{Spaces of closed sets} \label{S:closed subsets}
Let $({\mathcal X},d)$ be a complete separable metric space, and let ${\rm CL}({\mathcal X})$ denote the collection of all nonempty closed subsets of ${\mathcal X}$. In our applications, ${\mathcal X}$ is either a countable set or ${\mathbb R}^d$, but we do not impose this yet. The distance between a point $x\in{\mathcal X}$ and a subset $A\subseteq{\mathcal X}$ is denoted by
\[
d(x,A)=\inf\{d(x,y)\colon y\in A\}.
\]
The {\em Wijsman topology} on ${\rm CL}({\mathcal X})$ is the smallest topology for which the maps $A\mapsto d(x,A)$, $x\in{\mathcal X}$, are all continuous; see~\cite{W:66}. It was proved by \citet[Theorem~4.3]{Beer:91} that with the Wijsman topology, ${\rm CL}({\mathcal X})$ becomes a Polish space.
The space ${\rm CL}({\mathcal X})$ is partially ordered by set inclusion. It is however not a lattice under union and intersection since it does not include the empty set. The space
\[
{\rm CL}_0({\mathcal X}) = {\rm CL}({\mathcal X})\cup\{\emptyset\}
\]
on the other hand is a complete lattice with $\inf_\alpha A_\alpha=\bigcap_\alpha A_\alpha$ and $\sup_\alpha A_\alpha=\overline{\bigcup_\alpha A_\alpha}$ for arbitrary collections $\{A_\alpha\}\subseteq{\rm CL}_0({\mathcal X})$. The Wijsman topology is extended to ${\rm CL}_0({\mathcal X})$ by declaring a sequence of closed sets $A_n$ convergent to $\emptyset$ if $d(x,A_n)\to\infty$ for all $x\in{\mathcal X}$. Equipped with the extended Wijsman topology, ${\rm CL}_0({\mathcal X})$ is again a Polish space; see \citet[Theorem~4.4]{Beer:91}.
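To illustrate the extended topology, take ${\mathcal X}={\mathbb R}$ with the usual metric. The singletons $A_n=\{n\}$ satisfy $d(x,A_n)=|x-n|\to\infty$ for every $x$, so $A_n\to\emptyset$. By contrast,
\[
d(x,\{0,n\}) = |x|\wedge|x-n| \longrightarrow |x| = d(x,\{0\}),
\]
so the sets $\{0,n\}$ converge to $\{0\}$ rather than escaping to $\emptyset$.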
The spaces ${\rm CL}({\mathcal X})$ and ${\rm CL}_0({\mathcal X})$ are convenient from the point of view of stochastic analysis. The reason is a characterization due to~\citet{Hess:83,Hess:86} of the Borel $\sigma$-algebra on ${\rm CL}({\mathcal X})$. Namely, the Borel $\sigma$-algebra coincides with the {\em Effros $\sigma$-algebra}, which is generated by the sets $\{A\in{\rm CL}({\mathcal X})\colon A\cap V\ne\emptyset\}$, where $V$ ranges over the open subsets of ${\mathcal X}$. This identification leads to the following lemma.
\begin{lemma} \label{L:range of SP}
Let $X=(X_t)_{t\ge0}$ be an ${\mathcal X}$-valued c\`adl\`ag adapted process on a filtered measurable space $(\Omega,{\mathcal F},{\mathbb F})$, whose filtration ${\mathbb F}$ is not necessarily right-continuous. Then the ${\rm CL}({\mathcal X})$-valued process $Y=(Y_t)_{t\ge0}$ given by
\[
Y_t = \overline{\{X_s\colon s\le t\}}
\]
is adapted. The process is then also adapted when viewed as taking values in ${\rm CL}_0({\mathcal X})$.
\end{lemma}
\begin{proof}
We need to argue that $\omega\mapsto Y_t(\omega)$ is ${\mathcal F}_t$-measurable for each $t$. Using Hess's characterization, it suffices to inspect inverse images of sets $\{A\in{\rm CL}({\mathcal X})\colon A\cap V\ne\emptyset\}$ with $V$ open. That is, we must check that the event
\[
F = \left\{\omega\in\Omega\colon \overline{\{X_s(\omega)\colon s\le t\}}\cap V \ne \emptyset\right\}
\]
lies in ${\mathcal F}_t$. For a c\`adl\`ag process $X$, the set $\overline{\{X_s(\omega)\colon s\le t\}}\cap V$ is nonempty if and only if $X_s(\omega)\in V$ for some $s\le t$. Consequently,
\[
F = \{\omega\in\Omega\colon \tau(\omega) < t \text{ or } X_t(\omega)\in V\}, \qquad \text{where }\tau=\inf\{s\ge0\colon X_{s-} \in V\}.
\]
Since $X_-$ is left-continuous, $\tau$ is predictable, and hence $F\in{\mathcal F}_t$; see \citet[Theorem~IV.73(b)]{DM:78}. The final assertion follows from the fact that $Y$ can never take the value $\emptyset$.
\end{proof}
The following result will be used later. Its proof illustrates the use of the two alternative descriptions of the Borel $\sigma$-algebra on ${\rm CL}({\mathcal X})$. We use the notation
\begin{equation} \label{eq:A_epsilon}
A_\varepsilon=\{x\in {\mathcal X}\colon d(x,A)\le\varepsilon\}
\end{equation}
for any $A\in{\rm CL}({\mathcal X})$ and any $\varepsilon\ge0$. If $A=\emptyset$ then $A_\varepsilon=\emptyset$ by convention.
\begin{lemma} \label{L:gen prop}
\begin{enumerate}
\item\label{L:gen prop:mu} The map $A\mapsto\mu(A)$ from ${\rm CL}_0({\mathcal X})$ to ${\mathbb R}$ is measurable, where $\mu$ is any finite measure on~${\mathcal X}$.
\item\label{L:gen prop:eps} The map $A\mapsto A_\varepsilon$ from ${\rm CL}_0({\mathcal X})$ to itself is measurable for any $\varepsilon>0$.
\end{enumerate}
\end{lemma}
\begin{proof}
In both cases it suffices to show that the respective maps restricted to ${\rm CL}({\mathcal X})$ are measurable.
\ref{L:gen prop:mu}: Using closedness of $A$ and the dominated convergence theorem, one obtains the equalities $\mu(A)=\int\bm 1_A(x)\mu(dx)=\int\bm 1_{\{0\}}(d(x,A))\mu(dx)=\lim_n \int(1-nd(x,A))^+\mu(dx)$, where $y^+$ denotes the positive part of~$y\in{\mathbb R}$. Each map $A\mapsto \int(1-nd(x,A))^+\mu(dx)$ is continuous, hence measurable: if $A_k\to A$ in the Wijsman topology, then $d(x,A_k)\to d(x,A)$ for every $x\in{\mathcal X}$, and the dominated convergence theorem yields convergence of the integrals. Thus the map $A\mapsto\mu(A)$ is the pointwise limit of real-valued measurable maps, and therefore itself measurable.
\ref{L:gen prop:eps}: One readily verifies that $A_\varepsilon\cap V\ne\emptyset$ if and only if $A\cap V_\varepsilon\ne\emptyset$ for any open set $V$, where we define the open set $V_\varepsilon=\{x\in{\mathcal X}: d(x,V)<\varepsilon\}$. Therefore $\{A\in{\rm CL}({\mathcal X})\colon A_\varepsilon \cap V\ne\emptyset\} = \{A\in{\rm CL}({\mathcal X})\colon A \cap V_\varepsilon\ne\emptyset\}$. The left-hand side is the inverse image of $\{A\colon A\cap V\ne\emptyset\}$ under the map $A\mapsto A_\varepsilon$, and the right-hand side lies in the Effros $\sigma$-algebra on ${\rm CL}({\mathcal X})$. Measurability now follows from Hess's characterization.
\end{proof}
\subsection{Lattice operations}
In the following lemma, measurability is always understood with respect to the Borel $\sigma$-algebra. Since ${\rm CL}_0({\mathcal X})$ is Polish, the Borel $\sigma$-algebra on ${\rm CL}_0({\mathcal X})^k$ for $k\in\{2,3,\ldots,\infty\}$ coincides with the corresponding product $\sigma$-algebra.
\begin{lemma} \label{L:latop}
\begin{enumerate}
\item\label{L:latop:cup} The map $(A,B)\mapsto A\cup B$ from ${\rm CL}_0({\mathcal X})^2$ to ${\rm CL}_0({\mathcal X})$ is continuous.
\item\label{L:latop:closed} The set $\{(A,B)\colon A\subseteq B\}$ is closed in ${\rm CL}_0({\mathcal X})^2$.
\item\label{L:latop:incr} If $A_n$ is a nondecreasing sequence in ${\rm CL}_0({\mathcal X})$, meaning that $A_n\subseteq A_{n+1}$ for all~$n$, then $A_n$ converges to $\overline{\bigcup_n A_n}$ in ${\rm CL}_0({\mathcal X})$.
\item\label{L:latop:cup2} The map $(A_n)\mapsto \overline{\bigcup_n A_n}$ from ${\rm CL}_0({\mathcal X})^\infty$ to ${\rm CL}_0({\mathcal X})$ is measurable.
\item\label{L:latop:cap} If ${\mathcal X}$ is $\sigma$-compact, the map $(A_n)\mapsto \bigcap_n A_n$ from ${\rm CL}_0({\mathcal X})^\infty$ to ${\rm CL}_0({\mathcal X})$ is measurable.
\end{enumerate}
\end{lemma}
\begin{proof}
\ref{L:latop:cup}: Observe that $d(x,A\cup B) \le d(x,A)\wedge d(x,B)$ for all $x\in{\mathcal X}$, where we use the convention $d(x,\emptyset)=\infty$. We claim that strict inequality is impossible. Indeed, suppose $A\cup B\ne\emptyset$ and let $x_n\in A\cup B$ be such that $d(x,x_n)\to d(x,A\cup B)$. Suppose $A$ contains infinitely many of the $x_n$ (otherwise $B$ does, and we work with $B$ instead). Then $x_n\in A$ along a subsequence, so that $d(x,A)\le d(x,x_n)\to d(x,A\cup B)$. Therefore equality holds, and we have $d({\,\cdot\,},A\cup B) = d({\,\cdot\,},A)\wedge d({\,\cdot\,},B)$. The stated continuity property now follows from the definition of the extended Wijsman topology.
\ref{L:latop:closed}: If $A_n\subseteq B_n$ and $(A_n,B_n)\to(A,B)$, then $B=\lim_n B_n=\lim_n A_n\cup B_n = A\cup B$ in view of~\ref{L:latop:cup}. Thus $A\subseteq B$, as required.
\ref{L:latop:incr}: The statement is obvious if $A_n=\emptyset$ for all $n$, so we suppose $A_n\ne\emptyset$ for some $n$, and then without loss of generality for all $n$. Define $A=\overline{\bigcup_n A_n}$ for ease of notation. Fix any $x\in{\mathcal X}$. Since $A_n\subseteq A$, we have $d(x,A_n)\ge d(x,A)$ and hence $\lim_n d(x,A_n)\ge d(x,A)$. For the reverse inequality, pick any $\varepsilon>0$ and $y\in A$ such that $d(x,y)\le d(x,A)+\varepsilon$. Since $A$ is the closure of $\bigcup_nA_n$, there exists some $m$ and some $z\in A_m$ with $d(y,z)\le\varepsilon$. Consequently,
\[
d(x,A_m) \le d(x,z) \le d(x,y) + d(y,z) \le d(x,A) + 2\varepsilon.
\]
Since $d(x,A_n)$ is non-increasing, and since $\varepsilon$ was arbitrary, it follows that $\lim_n d(x,A_n)\le d(x,A)$. We deduce that $d(x,A_n)\to d(x,A)$ for all $x\in{\mathcal X}$, which means that $A_n\to A$.
\ref{L:latop:cup2}: First note that the map $\varphi_k\colon{\rm CL}_0({\mathcal X})^\infty\to{\rm CL}_0({\mathcal X})$, $(A_n)\mapsto\bigcup_{n\le k}A_n$ is continuous, being a composition ${\rm CL}_0({\mathcal X})^\infty \to {\rm CL}_0({\mathcal X})^k \to {\rm CL}_0({\mathcal X})$, $(A_n)\mapsto(A_1,\ldots,A_k)\mapsto \bigcup_{n\le k}A_n$ of two maps that are continuous by definition of the product topology and due to repeated use of~\ref{L:latop:cup}. By~\ref{L:latop:incr}, the map $(A_n)\mapsto \overline{\bigcup_n A_n}$ is the pointwise limit of the maps $\varphi_k$, and therefore measurable by \citet[Lemma~4.29]{AB:06}.
\ref{L:latop:cap}: Let $\varphi\colon(A_n)\mapsto \bigcap_n A_n$ denote the intersection map. We will prove that $\varphi^{-1}({\bf F})$ is a measurable subset of ${\rm CL}({\mathcal X})^\infty$, hence of ${\rm CL}_0({\mathcal X})^\infty$, for any measurable ${\bf F}\subseteq{\rm CL}({\mathcal X})$. The same then holds for any measurable ${\bf F}\subseteq{\rm CL}_0({\mathcal X})$, since $\varphi^{-1}(\{\emptyset\})=(\varphi^{-1}({\rm CL}({\mathcal X})))^c$ is measurable. This readily implies the assertion.
We must thus argue that $\varphi^{-1}({\bf F})$ is measurable for any measurable ${\bf F}\subseteq{\rm CL}({\mathcal X})$. In view of Hess's characterization of the Borel $\sigma$-algebra on ${\rm CL}({\mathcal X})$ it suffices to consider sets of the form ${\bf F}=\{A\in{\rm CL}({\mathcal X})\colon A\cap V\ne\emptyset\}$ with $V$ open. For such sets we have
\begin{equation} \label{eq:L:latop:phi-1}
\varphi^{-1}({\bf F}) = \big\{(A_n)\colon V\cap\bigcap_n A_n \ne\emptyset\big\} = \bigcup_m \big\{(A_n)\colon K_m\cap V\cap\bigcap_n A_n \ne\emptyset\big\},
\end{equation}
where $\{K_m\}_{m\in{\mathbb N}}$ is a compact cover of ${\mathcal X}$, which exists by $\sigma$-compactness. Thus it suffices to prove measurability of any set of the form $\{(A_n)\colon K\cap V\cap\bigcap_n A_n \ne\emptyset\}$ with $V$ open and $K$ compact. Fix a countable dense subset $D\subseteq{\mathcal X}$. We claim that for any $(A_n)\in{\rm CL}_0({\mathcal X})^\infty$ we have
\begin{equation} \label{KcapV}
K\cap V\cap\bigcap_n A_n \ne\emptyset \qquad\Longleftrightarrow \qquad
\begin{minipage}[c][4.5em][c]{.40\textwidth}
\begin{center}
$\exists\varepsilon>0$ rational, $\forall k\in{\mathbb N}$, $\exists x_k\in D$, $d(x_k,K)\le k^{-1}$, $d(x_k,V^c)\ge\varepsilon$, and $\forall n\in{\mathbb N}$, $d(x_k,A_n)\le k^{-1}$.
\end{center}
\end{minipage}
\end{equation}
To prove ``$\Rightarrow$'', let $x\in K\cap V\cap\bigcap_n A_n$. Since $V$ is open, there exists some rational $\varepsilon>0$ such that $d(x,V^c)\ge 2\varepsilon$. Since $D$ is dense, there exist points $x_k\in D$ such that $d(x_k,x)\le k^{-1}\wedge\varepsilon$. The triangle inequality then yields $d(x_k,V^c)\ge d(x,V^c)-d(x_k,x)\ge\varepsilon$, and we have $d(x_k,K)\le d(x_k,x)\le k^{-1}$ as well as $d(x_k,A_n)\le d(x_k,x)\le k^{-1}$ for all~$n$. This proves the forward implication. To prove ``$\Leftarrow$'', let $\varepsilon>0$ and $x_k$, $k\in{\mathbb N}$, have the stated properties. Since $d(x_k,K)\le k^{-1}$, there exist $y_k\in K$ with $d(x_k,y_k)\le 2k^{-1}$. By compactness of $K$, we may pass to a subsequence and assume that $y_k\to x$ for some $x\in K$. Then also $x_k\to x$, and continuity of the distance function implies $d(x,V^c)\ge\varepsilon$ and $d(x,A_n)=0$ for all $n$. We conclude that $x\in K\cap V\cap\bigcap_n A_n$, which is therefore nonempty. This completes the proof of \eqref{KcapV}.
Now, observe that \eqref{KcapV} can be expressed as
\begin{equation} \label{eq:L:latop:cube}
\big\{(A_n)\colon K\cap V\cap\bigcap_n A_n \ne\emptyset\big\}
= \bigcup_{\substack{\varepsilon>0\\[.5ex]\varepsilon\in{\mathbb Q}}} \bigcap_{k\in{\mathbb N}} \bigcup_{\substack{x_k\in D \text{ with}\\[1ex]d(x_k,K)\le k^{-1}\\[1ex]d(x_k,V^c)\ge\varepsilon}} \big\{(A_n)\colon d(x_k,A_n)\le k^{-1} \ \forall n\big\}.
\end{equation}
The right-hand side is formed through countable unions and intersections of sets of the form $\{(A_n)\colon d(x_k,A_n)\le k^{-1} \ \forall n\}$. Such a set is actually a cube ${\bf G}_k\times {\bf G}_k\times\cdots\subseteq{\rm CL}({\mathcal X})^\infty$, where ${\bf G}_k=\{A\colon d(x_k,A)\le k^{-1}\}$ is the inverse image of $[0,k^{-1}]$ under the continuous map $A\mapsto d(x_k,A)$. We deduce that the right-hand side of \eqref{eq:L:latop:cube}, and hence the left-hand side, is measurable. Thus $\varphi^{-1}({\bf F})$ in \eqref{eq:L:latop:phi-1} is also measurable, as required.
\end{proof}
\begin{remark}
It appears unlikely to the author that $\sigma$-compactness is really needed for measurability of the intersection map; dropping this assumption would be desirable and natural. However, it is interesting to note that there are some striking differences between unions and intersections. For instance, $A\cap B$ may be empty even if $A$ and $B$ are not. Also, the map $(A,B)\mapsto A\cap B$ is not continuous, even if one restricts to compact convex sets. Indeed, let ${\mathcal X}={\mathbb R}^2$, and let $A_n=\{(x_1,x_2):0\le x_1\le 1/n,\, x_2=nx_1\}$ be the line segment from the origin to the point $(1/n,1)$. Then $A_n\to B$, where $B=\{0\}\times[0,1]$ is the segment from the origin to $(0,1)$. Thus $A_n\cap B=\{(0,0)\}$ does not converge to $(\lim_n A_n)\cap B=B$.
\end{remark}
\subsection{Vector space operations}
If $A$ and $B$ are subsets of a vector space, their sum is defined by $A+B=\{x+y\colon x\in A,\,y\in B\}$. This operation is associative and commutative, so the expression $A+B+C$ is unambiguous and equal to $A+C+B$, etc. Similarly, we define $\lambda A=\{\lambda x\colon x\in A\}$ for any scalar~$\lambda$. The dimension of an affine subspace $V$ is denoted $\dim(V)$, with the convention $\dim(\emptyset)=-1$.
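The closures appearing below are not redundant: the sum of two closed sets need not be closed. For instance, in ${\mathcal X}={\mathbb R}$ the sets $A={\mathbb Z}$ and $B=\sqrt2\,{\mathbb Z}$ are closed, but
\[
A+B = \{m+n\sqrt2\colon m,n\in{\mathbb Z}\}
\]
is a countable dense subgroup of ${\mathbb R}$, so that $\overline{A+B}={\mathbb R}\ne A+B$.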
\begin{lemma} \label{L:vec op}
Assume ${\mathcal X}$ is a locally convex topological vector space.\footnote{Of course, the topology is assumed to coincide with the one generated by the given metric $d$.}
\begin{enumerate}
\item\label{L:vec op:+} The map $(A_1,\ldots,A_n)\mapsto \overline{A_1+\cdots+A_n}$ from ${\rm CL}_0({\mathcal X})^n$ to ${\rm CL}_0({\mathcal X})$ is measurable for any $n\in{\mathbb N}$.
\item\label{L:vec op:scal} The map $A\mapsto\lambda A$ from ${\rm CL}_0({\mathcal X})$ to itself is measurable, where $\lambda$ is any scalar.
\item\label{L:vec op:conv} The map $A\mapsto\cconv(A)$ from ${\rm CL}_0({\mathcal X})$ to itself is measurable.
\item\label{L:vec op:aff} The map $A\mapsto\caff(A)$ from ${\rm CL}_0({\mathcal X})$ to itself is measurable.
\item\label{L:vec op:dim aff} The map $A\mapsto\dim(\aff(A))$ from ${\rm CL}_0({\mathcal X})$ to ${\mathbb R}\cup\{\infty\}$ is lower semicontinuous.
\end{enumerate}
\end{lemma}
\begin{proof}
In each case, we only need to consider inverse images of measurable subsets of ${\rm CL}({\mathcal X})$, since the inverse image of $\{\emptyset\}$ is obviously measurable for each of the given maps. The proofs all use Hess's characterization in terms of the Effros $\sigma$-algebra. Thus we inspect inverse images of the set $\{A\in{\rm CL}({\mathcal X})\colon A\cap V\ne\emptyset\}$, where $V$ is any nonempty open subset of ${\mathcal X}$.
\ref{L:vec op:+}: It suffices to consider the case $n=2$, as the general case follows by induction together with the fact that $\overline{A_1+A_2+A_3}=\overline{\overline{A_1+A_2}+A_3}$. Define the maps
\[
\varphi_\varepsilon\colon (A_1,A_2) \mapsto \overline{(A_1)_\varepsilon + (A_2)_\varepsilon}
\]
for any $\varepsilon\ge0$, where we use the notation~\eqref{eq:A_epsilon}. We may assume without loss of generality that the metric $d$ is translation invariant, see e.g.~\citet[Lemma~5.75]{AB:06}, in which case one readily verifies the inequalities
\[
d(x,A_1+A_2)-4\varepsilon \le d(x,(A_1)_\varepsilon+(A_2)_\varepsilon)\le d(x,A_1+A_2)
\]
for any $x\in{\mathcal X}$ and $A_1,A_2\in {\rm CL}_0({\mathcal X})$. It follows that $\lim_{\varepsilon\to0}\varphi_\varepsilon = \varphi_0$ pointwise with respect to the Wijsman topology. Thus it suffices to prove measurability of $\varphi_\varepsilon$ for $\varepsilon>0$. To this end, let $D\subseteq{\mathcal X}$ be a countable dense subset. Observe that $\overline{(A_1)_\varepsilon+(A_2)_\varepsilon}$ intersects the open set $V$ if and only if $(A_1)_\varepsilon+(A_2)_\varepsilon$ does. Since each $(A_i)_\varepsilon$ has nonempty interior, this holds if and only if $x_1+x_2\in V$ for some points $x_i\in D\cap(A_i)_\varepsilon$. This can be expressed as follows:
\[
\{(A_1,A_2)\colon \overline{(A_1)_\varepsilon+(A_2)_\varepsilon}\cap V\ne\emptyset\} = \bigcup_{\substack{x_1,\,x_2\in D \\ x_1+x_2\in V}} \{(A_1,A_2)\colon d(x_i,A_i)\le\varepsilon,\, i=1,2\}.
\]
The right-hand side is a countable union of products of the sets $\{A_i\colon d(x_i,A_i)\le\varepsilon\}$, $i=1,2$, which are measurable since $d(x_i,{\,\cdot\,})$ is continuous. Hence $\varphi_\varepsilon$ is measurable, as required.
\ref{L:vec op:scal}: If $\lambda=0$, the inverse image is either empty or all of ${\rm CL}({\mathcal X})$, so we may suppose that $\lambda$ is nonzero. But then $\{A\colon (\lambda A)\cap V\ne\emptyset\}=\{A\colon A\cap (\lambda^{-1}V)\ne\emptyset\}$ is measurable since $\lambda^{-1}V$ is open whenever $V$ is.
\ref{L:vec op:conv}: Since $V$ is open, we have $V\cap\cconv(A)\ne\emptyset$ if and only if $V\cap\conv(A)\ne\emptyset$. This is equivalent to $\sum_i\lambda_i x_i \in V$ for some (finitely many) convex weights $\lambda_i$ and points $x_i\in A$. Again since $V$ is open, the $\lambda_i$ may be chosen rational. Therefore,
\[
\{A\colon \cconv(A)\cap V\ne\emptyset\} = \bigcup_n \bigcup_{\substack{\lambda_i\in{\mathbb Q}_+\text{ with}\\[0.5ex]\sum_{i=1}^n\lambda_i=1}} \{ A\colon (\lambda_1A + \cdots + \lambda_n A)\cap V\ne\emptyset\}.
\]
The right-hand side is measurable in view of \ref{L:vec op:+} and \ref{L:vec op:scal}, so the left-hand side is measurable as well.
\ref{L:vec op:aff}: The proof is identical to the one for the convex hull, except that the $\lambda_i$ are affine weights rather than convex weights, meaning that they sum to one but are not constrained to be nonnegative.
\ref{L:vec op:dim aff}: Choose any convergent sequence $A_n\to A$ and set $k=\dim(\aff(A))$. We need to show that $\liminf_n \dim(\aff(A_n)) \ge k$. For $k=-1$, i.e.~$A=\emptyset$, the statement is obvious. Suppose instead $0\le k<\infty$. Then there exist $k+1$ affinely independent points $x_0,\ldots,x_k\in A$. By definition of the extended Wijsman topology, $d(x_i,A_n)\to0$ for $i=0,\ldots,k$. Since affine independence is preserved under sufficiently small perturbations, for all large $n$ the set $A_n$ also contains $k+1$ affinely independent points, whence $\dim(\aff(A_n))\ge k$. Finally, if $k=\infty$, the above argument with $k$ replaced by an arbitrary $k'\in{\mathbb N}$ shows that $\dim(\aff(A_n))\ge k'$ for all large $n$, and thus $\lim_n \dim(\aff(A_n)) =\infty$.
\end{proof}
\subsection{The space of convex subsets of Euclidean space}
In this subsection we assume that ${\mathcal X}={\mathbb R}^d$ and that the metric comes from the norm, $d(x,y)=\|x-y\|$. We consider the subspace ${\rm CO}({\mathcal X}) \subset {\rm CL}_0({\mathcal X})$ consisting of all closed convex subsets, equipped with the subspace topology and the associated Borel $\sigma$-algebra. The space ${\rm CO}({\mathcal X})$ is again partially ordered by set inclusion, and is a complete lattice with $\inf_\alpha A_\alpha = \bigcap_\alpha A_\alpha$ and $\sup_\alpha A_\alpha = \cconv(\bigcup_\alpha A_\alpha)$ for arbitrary collections $\{A_\alpha\}\subseteq{\rm CO}({\mathcal X})$. Note that ${\rm CO}({\mathcal X})$ is a closed subset of ${\rm CL}_0({\mathcal X})$. The following result shows that this complete lattice satisfies the assumptions imposed in Section~\ref{S:CI}.
\begin{theorem} \label{T:convex sets}
The complete lattice ${\rm CO}({\mathcal X})$ satisfies assumptions \ref{A1}--\ref{A3}. A strictly increasing measurable map $\phi\colon{\rm CO}({\mathcal X})\to{\mathbb R}$ is given by
\[
\phi(A) = \dim(\aff(A)) + \mu(A \mid \aff(A)),
\]
where $\mu({\,\cdot\,}\mid V)$ is the distribution of an ${\mathbb R}^d$-valued standard Gaussian random variable conditioned to lie in the affine subspace $V$. We set $\mu(\emptyset\mid\emptyset)=0$ by convention.
\end{theorem}
\begin{proof}
Due to Lemma~\ref{L:latop}\ref{L:latop:closed} the set $\{(A,B)\in{\rm CO}({\mathcal X})^2\colon A\subseteq B\}$ is closed in ${\rm CO}({\mathcal X})^2$ and hence measurable. Thus Assumption~\ref{A1} holds. Lemma~\ref{L:latop}\ref{L:latop:cap} yields that the countable infimum map is measurable, and Lemma~\ref{L:latop}\ref{L:latop:cup2} together with Lemma~\ref{L:vec op}\ref{L:vec op:conv} yield that the countable supremum map is measurable. Thus Assumption~\ref{A2} holds. Next, we claim that the map $\phi$ is strictly increasing. To see this, first note that $\phi(\emptyset)=-1$ and $\phi(A)\ge1$ if $A\ne\emptyset$. Next, let $A\subsetneq B$ be two nonempty convex sets. If $\dim(\aff(A))<\dim(\aff(B))$ then $\phi(A) \le \dim(\aff(B)) - 1 + \mu(A\mid\aff(A)) \le \dim(\aff(B)) < \phi(B)$. On the other hand, if $\dim(\aff(A))=\dim(\aff(B))$, then the two affine hulls coincide and we denote them both by~$V$. Since $A$ is strictly contained in $B$ and both sets are convex and closed, $B\setminus A$ contains a set which is open in $V$. Therefore $\phi(B)-\phi(A)=\mu(B\setminus A\mid V)>0$. Finally, to see that $\phi$ is measurable, first note that $A\mapsto\dim(\aff(A))$ is measurable since it is lower semicontinuous by Lemma~\ref{L:vec op}\ref{L:vec op:dim aff}. Next, observe that
\[
\mu(A\mid \aff(A)) = \lim_{\varepsilon\downarrow0} \frac{\mu(A_\varepsilon)}{\mu(\aff(A)_\varepsilon)}, \qquad A\ne\emptyset,
\]
where $\mu({\,\cdot\,})$ is the standard Gaussian distribution on ${\mathbb R}^d$. Therefore, by Lemma~\ref{L:gen prop}\ref{L:gen prop:mu}--\ref{L:gen prop:eps} and Lemma~\ref{L:vec op}\ref{L:vec op:aff}, the map $A\mapsto\mu(A\mid\aff(A))$ is a limit of measurable maps, and hence itself measurable.
\end{proof}
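For orientation (this illustration is ours and is not part of the proof), here are sample values of $\phi$ for a strictly increasing chain in ${\rm CO}({\mathbb R}^2)$, writing $\Phi$ for the standard normal distribution function:

```latex
% Illustrative computation (not from the original text), X = R^2:
% the chain {0} c [0,e_1] c R e_1 yields strictly increasing phi.
\begin{align*}
\phi(\{0\})          &= 0 + \mu(\{0\} \mid \{0\}) = 1,\\
\phi([0,e_1])        &= 1 + \mu([0,e_1] \mid {\mathbb R}e_1)
                      = 1 + \Phi(1) - \Phi(0) \approx 1.34,\\
\phi({\mathbb R}e_1) &= 1 + \mu({\mathbb R}e_1 \mid {\mathbb R}e_1) = 2.
\end{align*}
```

Note that a jump in affine dimension always dominates the conditional Gaussian mass term $\mu({\,\cdot\,}\mid\aff({\,\cdot\,}))\in[0,1]$, which is precisely the mechanism behind the strict monotonicity established in the proof.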
\begin{lemma} \label{L:conv is adapted}
Let $X=(X_t)_{t\ge0}$ be an ${\mathbb R}^d$-valued c\`adl\`ag adapted process on a filtered measurable space $(\Omega,{\mathcal F},{\mathbb F})$, whose filtration ${\mathbb F}$ is not necessarily right-continuous. Then the ${\rm CO}({\mathbb R}^d)$-valued process $Y=(Y_t)_{t\ge0}$ given by
\[
Y_t = \cconv(X_s\colon s\le t)
\]
is adapted.
\end{lemma}
\begin{proof}
This follows from Lemma~\ref{L:range of SP} and Lemma~\ref{L:vec op}\ref{L:vec op:conv}.
\end{proof}
\subsection{The space of subsets of a countable set}
In this subsection we assume that ${\mathcal X}$ is a countable set equipped with the discrete metric $d(x,y)=\bm 1_{\{x\ne y\}}$. Then every subset of ${\mathcal X}$ is closed, so $2^{\mathcal X}={\rm CL}_0({\mathcal X})$. This space is partially ordered by set inclusion, and is a complete lattice under union and intersection. Furthermore, it satisfies the assumptions of Section~\ref{S:CI}.
\begin{theorem} \label{T:countable set}
The complete lattice $2^{\mathcal X}$ satisfies assumptions \ref{A1}--\ref{A3}. A strictly increasing measurable map $\phi:2^{\mathcal X}\to{\mathbb R}$ is given by
\[
\phi(A) = \sum_{x\in A}w(x),
\]
where $\{w(x):x\in{\mathcal X}\}$ is a countable family of strictly positive numbers summing to one.
\end{theorem}
\begin{proof}
Assumptions~\ref{A1} and~\ref{A2} follow directly from Lemma~\ref{L:latop}\ref{L:latop:closed}, \ref{L:latop:cup2}, and~\ref{L:latop:cap}. The map $\phi$ is clearly strictly increasing, since every $x\in{\mathcal X}$ carries strictly positive weight. To see that it is measurable, write $\phi(A)=\sum_{x\in{\mathcal X}}\bm 1_A(x)w(x)=\sum_{x\in{\mathcal X}}(1-d(x,A))w(x)$ and observe that $A\mapsto d(x,A)$ is continuous and hence measurable.
\end{proof}
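As a quick sanity check of this construction (illustrative only; the specific weights and helper names below are ours, not the paper's), one can verify strict monotonicity numerically for ${\mathcal X}=\{0,1,2,\dots\}$ with the geometric weights $w(k)=2^{-(k+1)}$:

```python
# Sanity check (illustrative): phi(A) = sum of w(x) over x in A, with the
# strictly positive weights w(k) = 2^{-(k+1)}, which sum to one over
# X = {0, 1, 2, ...}.

def w(k: int) -> float:
    """Geometric weight assigned to the point k."""
    return 2.0 ** -(k + 1)

def phi(A: set) -> float:
    """phi(A) = sum of w(x) over x in A, for a finite subset A of X."""
    return sum(w(k) for k in A)

# Strict monotonicity: a proper inclusion of sets strictly increases phi,
# since every point carries strictly positive weight.
A, B = {0, 2}, {0, 1, 2, 5}
assert A < B and phi(A) < phi(B)

# With these dyadic weights phi encodes the indicator function of A as a
# binary expansion: phi({0, 2}) = 1/2 + 1/8 = 0.625.
print(phi(A))
```

With dyadic weights $\phi$ is in fact injective on finite sets, though only strict monotonicity along inclusions is needed for the theorem.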
\section{Introduction}
In the recent work \cite{BW:stableCMC}, a regularity and compactness theory has been developed (in a varifold setting) for weakly stable constant-mean-curvature (CMC) hypersurfaces. The question of whether there is an effective version of the compactness theorem of \cite{BW:stableCMC}, i.e.\ whether \emph{weakly} stable CMC hypersurfaces must satisfy a uniform local curvature estimate under appropriate hypotheses, arises naturally from that work. Here we settle this question by proving, for such hypersurfaces satisfying uniform mass and mean curvature bounds, a pointwise curvature estimate in the non-singular dimensions (i.e.\ in dimensions $\leq 6$) and a sheeting theorem (i.e.\ a pointwise curvature estimate subject to the additional hypothesis that the hypersurface is weakly close to a hyperplane) in all dimensions. Our results generalize the foundational works of Schoen--Simon--Yau \cite{SSY} that established a pointwise curvature estimate for \emph{strongly} stable minimal hypersurfaces in low dimensions and of Schoen--Simon \cite{SchoenSimon} that established a sheeting theorem in all dimensions for a class of strongly stable hypersurfaces (including CMC hypersurfaces) subject to an a priori smallness hypothesis on the singular set.
Recall that a smooth immersion $x \, : \, \Sigma \to {\mathbb R}^{n+1}$ has constant mean curvature if and only if every compact portion
$\Sigma_{1} \subset \Sigma$ is stationary with respect to the hypersurface area functional $\mathscr{a}(\Sigma_{1})$ for volume-preserving deformations. This condition is equivalent to the fact that for some constant $H$, every compact portion $\Sigma_{1} \subset \Sigma$ is stationary with respect to the functional
$$J (\Sigma_{1}) = \mathscr{a}(\Sigma_{1}) + H \mathscr{vol}(\Sigma_{1})$$
for arbitrary deformations, where
$\mathscr{vol}(\Sigma_{1})$ is the enclosed volume functional (which can be expressed as $\mathscr{vol}(\Sigma_{1}) = \frac{1}{n+1}\int_{\Sigma_{1}} x \cdot \nu \, d\Sigma$ where $\nu$ is a continuous unit normal to $\Sigma$ and $d\Sigma$ is the volume element with respect to the metric induced by the immersion $x$); in this case, $H$ is the value of the scalar mean curvature of $\Sigma$ with respect to $\nu$. If $\Sigma$ has constant mean curvature, then for any given $\phi \in C^{\infty}_{c}(\Sigma)$ and relative to any smooth 1-parameter family of deformations of $\Sigma$ with initial velocity $\phi \nu$, the second variation of $\Sigma$ with respect to $J$ is given by the quadratic form $$\delta^{2}J(\phi,\phi) = \int_{\Sigma} |\nabla \phi|^{2} - |A_{\Sigma}|^{2} \phi^{2},$$ where $A_{\Sigma}$ is the second fundamental form of $\Sigma$ and $\nabla$ is the gradient on $\Sigma$ (cf.\ \cite[Proposition 2.5]{BdoCE}).
We say that the CMC hypersurface $\Sigma$ is weakly stable if every compact portion $\Sigma_{1} \subset \Sigma$ is stable, i.e.\ has non-negative second variation, with respect to the area functional, or equivalently, with respect to $J$, for volume-preserving deformations. Weakly stable CMC hypersurfaces arise as stable critical points for the isoperimetric problem. The weak stability of $\Sigma$ is equivalent to the validity of the stability inequality
\[
\int_\Sigma |A_{\Sigma}|^2 \phi^{2} \leq \int_{\Sigma} |\nabla \phi|^2
\]
for any $\phi \in C^\infty_c(\Sigma)$ with $\int_\Sigma \phi =0$ (cf.\ \cite[Proposition 2.7]{BdoCE}), while strong stability of $\Sigma$ requires that this inequality holds for arbitrary $\phi \in C^{\infty}_{c}(\Sigma)$.
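To fix ideas (this example is ours; cf.\ \cite{Barbosa-doCarmo}), the round sphere $S^{n}(r)\subset\mathbb{R}^{n+1}$, for which $|A|^{2}=n/r^{2}$, is weakly but not strongly stable:

```latex
% The constant test function phi = 1 (admissible since S^n(r) is compact)
% violates the unconstrained stability inequality:
\[
\int_{S^{n}(r)} |A|^{2}\phi^{2}
  \;=\; \frac{n}{r^{2}}\,\mathcal H^{n}(S^{n}(r))
  \;>\; 0
  \;=\; \int_{S^{n}(r)} |\nabla \phi|^{2},
\]
% but constants are excluded from the weak stability inequality by the
% mean-zero constraint \int_{S^n(r)} phi = 0.
```

Thus no fixed-sign test function is admissible in the constrained inequality, which is exactly the source of the difficulty in adapting the strong-stability methods.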
The methods used in \cite{SSY,SchoenSimon} for strongly stable hypersurfaces involve the use of \emph{positive} test functions $\phi$ in the stability inequality, and since these never integrate to zero, it is not clear how to carry over these methods to the setting of weak stability.
The strategy employed here is different: we take a geometric approach, combining the results of \cite{SSY,SchoenSimon} for strongly stable hypersurfaces with the fact that complete weakly stable minimal hypersurfaces have only one end, a result established by Cheng--Cheung--Zhou (\cite{ChengCheungZhou}) and generalized here (in a fairly straightforward manner) to allow the hypersurfaces to have a small singular set. This generalization is necessary for the sheeting theorem. A key difficulty in the proof of the sheeting theorem is to correctly ``localize'' the one-end result in order to transfer the ``flatness'' from large to small scales (see Remark \ref{rema:transfer}). This is handled by a careful blow-up procedure relying on the aforementioned regularity and compactness theorems in \cite{BW:stableCMC} for weakly stable CMC hypersurfaces and a rigidity theorem (Lemma~\ref{lemm:cone-small-curvature} below), due to Simons (\cite{Simons}), for minimal hypersurfaces of spheres.
Our main results are Theorem~\ref{theo:weakly-st-low-dim-est}, Theorem~\ref{theo:main-weakly-st-sheeting}, Theorem~\hyperref[theo:varifolds-low-dim-est]{$1^{\prime}$} and Theorem~\hyperref[theo:varifolds-main-weakly-st-sheeting]{$2^{\prime}$} below. Theorem~\ref{theo:weakly-st-low-dim-est} gives a pointwise curvature bound valid for mass-bounded weakly stable CMC hypersurfaces of dimension $n$ with $3 \leq n \leq 6$ (that are assumed, in case $3 \leq n \leq 5$, to be immersed, or in case $n=6$, immersed without transverse intersections or immersed with a specific mass bound); Theorem \ref{theo:main-weakly-st-sheeting} establishes a sheeting result that holds in arbitrary dimensions for weakly stable CMC hypersurfaces satisfying an arbitrary uniform mass bound and allowed, a priori, to contain a small set of ``genuine'' singularities away from which the hypersurfaces are assumed smoothly immersed without transverse intersections. By virtue of the regularity theory of \cite{BW:stableCMC}, the hypotheses of absence or smallness of the set of genuine singularities in Theorem 1 and Theorem 2 respectively can immediately be replaced by considerably weaker structural conditions. These stronger results, which hold in a varifold setting, are given as
Theorem~\hyperref[theo:varifolds-low-dim-est]{$1^{\prime}$} and Theorem~\hyperref[theo:varifolds-main-weakly-st-sheeting]{$2^{\prime}$}.
It is interesting to note the following: Consider a CMC hypersurface $\Sigma$ immersed in $\mathbb{R}^{n+1}$ with mean curvature $H$ (possibly equal to zero). Recall that the \emph{Morse index} of $\Sigma$ is defined by setting $\Index(\Sigma)$ to be the maximum dimension of a linear subspace $W$ of $C^{\infty}_{c}(\Sigma)$ so that for any $\phi \in W\setminus\{0\}$, the second variation $\delta^{2}J(\phi, \phi) <0$, or equivalently,
\[
\int_{\Sigma} |A_{\Sigma}|^{2}\phi^{2} > \int_{\Sigma} |\nabla \phi|^{2}.
\]
It is easy to see that if $\Sigma$ is weakly stable, then $\Index(\Sigma) \leq 1$. On the other hand, Theorems \ref{theo:weakly-st-low-dim-est} and \ref{theo:main-weakly-st-sheeting} below are \emph{false} if we replace ``$\Sigma$ is weakly stable'' with ``$\Sigma$ satisfies $\Index(\Sigma) \leq 1$.'' This can be seen by considering rescalings of the higher-dimensional catenoid (the unique non-flat rotationally symmetric minimal hypersurface in $\mathbb{R}^{n+1}$) which converge weakly to a hyperplane with multiplicity two, but do not have bounded curvature (or satisfy the conclusion of the sheeting theorem). In the context of the results below, the crucial difference between ``weakly stable'' and ``$\Index(\Sigma)\leq 1$'' is that weakly stable surfaces cannot have two ends (cf.\ Appendix \ref{appendixCCZ}) while index one surfaces can (e.g., the catenoid).
\subsection{Results for hypersurfaces with small singular set}\label{small-sing}
In the non-singular dimensions (i.e.~in dimensions $\leq 6$), we have the following curvature estimates.
\begin{theo}\label{theo:weakly-st-low-dim-est}
For each $H_0 >0$ and $\Lambda \geq 1$, there exists $C = C(H_{0}, \Lambda)$ such that the following holds: Let $3 \leq n \leq 6$ and let $\Sigma \subset B_{R}(0)\subset \mathbb{R}^{n+1}$ be a smooth immersed hypersurface with $(\overline{\Sigma} \setminus \Sigma) \cap B_{R}(0) = \emptyset$, $\mathcal H^{n}(\Sigma) \leq \Lambda R^{n}$ and with constant scalar mean curvature $H$ such that $|H| \leq H_{0}R^{-1}.$ Assume that $\Sigma$ is weakly stable as a CMC immersion. For $n=6$ suppose additionally either that $\Sigma$ contains no point where $\Sigma$ intersects itself transversely (or equivalently, by the maximum principle, for each point $p \in \Sigma$ where $\Sigma$ is not embedded, there is $\rho >0$ such that $\Sigma \cap B^{n+1}_\rho(p)$ is the union of exactly two embedded smooth CMC hypersurfaces intersecting only tangentially), or that $\Lambda = 3-\delta$ for some $\delta \in (0, 1)$.
Then
\[
\sup_{x \in \Sigma \cap B_{R/2}(0)} |A_{\Sigma}|(x) \leq C R^{-1},
\]
where $A_{\Sigma}$ denotes the second fundamental form of $\Sigma$.
\end{theo}
We note that when $n=2$ (cf.\ \cite{Ye,EichmairMetzger}) stronger estimates---i.e., without the bounded area assumption---are available as consequences of the strong Bernstein-type theorems of \cite{Barbosa-doCarmo,BdoCE,Palmer,DaSilveira,LopezRos}. As such, we will not consider this case here.
\begin{rema}
In case $n=6$, the reason for the additional restrictions in Theorem~\ref{theo:weakly-st-low-dim-est} (that either $\Sigma$ has no transverse points or $\Lambda = 3-\delta$) is that it is not known if a pointwise curvature estimate holds for 6-dimensional immersed strongly stable minimal hypersurfaces satisfying an arbitrary mass bound; such an estimate is only known to hold if the minimal hypersurface is either embedded (\cite{SchoenSimon}) or is immersed and satisfies a
mass bound corresponding to $\Lambda = 3 - \delta$ for some $\delta \in (0, 1)$ (\cite{Wic08}). See Proposition~\ref{prop:bernstein-low-dim} below.
\end{rema}
In all dimensions, we have the following sheeting theorem.
\begin{theo}\label{theo:main-weakly-st-sheeting}
Let $\Lambda, H_0 >0$ and $n\geq 3.$ Suppose that $\Sigma^{n}\subset B_{R}(0)\subset \mathbb{R}^{n+1}$ is an immersed hypersurface with $\mathcal H^{n}(\Sigma) \leq \Lambda R^{n}$, with constant scalar mean curvature $H$ such that $|H| \leq H_{0}R^{-1}$ and with $\mathcal{H}^{n-7+\alpha}((\overline{\Sigma}\setminus \Sigma) \cap B_{R}(0))=0$ for all $\alpha >0$ (in other words, $\Sigma$ may have a co-dimension $7$ singular set). Suppose that
$\Sigma$ contains no point where $\Sigma$ intersects
itself transversely (or equivalently, by the maximum principle, for each $p \in \Sigma$ where $\Sigma$ is not embedded there is $\rho >0$ such that $\Sigma \cap B^{n+1}_\rho(p)$ is the union of exactly two embedded smooth CMC hypersurfaces intersecting only tangentially), and that $\Sigma$ is weakly stable as a CMC immersion. There exists $\delta_{0}=\delta_{0}(n,H_{0},\Lambda)$ so that if additionally
\[
\Sigma \subset \{|x^{n+1}| \leq \delta_{0}R\}
\]
then $\overline{\Sigma}\cap B_{R/2}(0)$ separates into the union of the graphs of functions $u_{1} \leq \dots \leq u_{k}$ defined on $B^{n}_{R/2}(0) := B_{R/2}(0)\cap \{x^{n+1}=0\}$ satisfying
\[
\sup_{B^{n}_{R/2}(0)} \left( |Du_{i}| + R |D^{2}u_{i}|\right) \leq \delta_{0}
\]
for $i=1,\dots,k;$ moreover, each $u_i$ is separately a smooth CMC graph.
\end{theo}
\subsection{Results for varifolds} In view of \cite[Theorem 2.1]{BW:stableCMC}, Theorems \ref{theo:weakly-st-low-dim-est} and \ref{theo:main-weakly-st-sheeting} above imply the following stronger results for integral varifolds. We refer to \cite[Section 2.1]{BW:stableCMC} for precise definitions. Here we recall, slightly imprecisely, that:
\begin{itemize}
\item a \textit{classical singularity} of an integral varifold $V$ is a point $p$ such that, in a neighbourhood of $p$, ${\rm spt} \, \|V\|$ (where $\|V\|$ denotes the weight measure associated with $V$) is given by the union of three or more embedded $C^{1,\alpha}$ hypersurfaces-with-boundary that intersect pairwise only along their common boundary $L$ containing $p$ and such that at least two of the hypersurfaces-with-boundary meet transversely along $L$;
\item a (two-fold) \textit{touching singularity} of an integral varifold $V$ is a point $p \in {\rm spt} \, \|V\|$ such that ${\rm spt} \, \|V\|$ is not embedded at $p$ and, in a neighbourhood of $p$, ${\rm spt} \, \|V\|$ is given by the union of exactly two $C^{1,\alpha}$ embedded hypersurfaces with only tangential intersection;
\item (see \cite{Simon:GMT} for details) the first variation of an integral varifold $V$ is a continuous linear functional on $C^1_c$ ambient vector fields and it represents the rate of change of the varifold's weight measure (area functional) computed along ambient deformations induced by the chosen vector field; when the first variation is a Radon measure (i.e.~it extends to a continuous linear functional on $C^0_c$ vector fields) the varifold is said to have locally bounded first variation; when, in addition, this Radon measure is absolutely continuous with respect to the weight measure $\|V\|$, and its Radon--Nikodym derivative (called generalized mean curvature of $V$) is in $L^p(\|V\|)$, the first variation of $V$ is said to be locally summable to the exponent $p$ (with respect to the weight measure $\|V\|$). By the fundamental regularity theory of Allard, the class of integral $n$-varifolds $V$ with \textit{first variation locally summable to an exponent $p>n$} is compact in the varifold topology under uniform mass and $L^{p}$ mean curvature bounds, and such a $V$ enjoys an embryonic regularity property: there exists a dense open subset of ${\rm spt} \, \|V\|$ in which $\text{spt}\,\|V\|$ is $C^{1,\alpha}$ embedded, with $\alpha=1-\frac{n}{p}$ if $n < p < \infty$ and $\alpha \in (0, 1)$ arbitrary if $p = \infty$ (see \cite{Allard}).
\end{itemize}
\begin{theo-lowdim}\label{theo:varifolds-low-dim-est}
Let $\Lambda, H_0 >0$. For $3 \leq n \leq 6$, suppose that $V \in IV_n(B_{R}(0))$ is an integral varifold with $\|V\|(B_R(0)) \leq \Lambda R^n$. Assume that the following hypotheses hold:
\begin{enumerate}
\item the first variation of $V$ is locally summable to an exponent $p>n$ (with respect to the weight measure $\|V\|$);
\item $V$ has no classical singularities;
\item whenever $p$ is a (two-fold) touching singularity there exists $\rho>0$ such that $${\mathcal{H}}^{n}\left(\{y \in \supp{\|V\|} \cap B_\rho(p) : \Theta(\|V\|,y) = \Theta(\|V\|,p)\}\right)=0,$$
where $\Theta$ is the density;
\item the $C^1$ embedded part of $\supp{\|V\|}$ (non-empty by Allard's regularity theorem) has generalized mean curvature $h$ with $|h|=H$ for a constant $H\leq H_0$ (see \cite{BW:stableCMC} for the variational formulation of this assumption, which makes sense for a $C^1$ hypersurface and leads to its $C^2$ regularity by standard elliptic regularity theory);
\item the $C^2$ immersed part of $\supp{\|V\|}$ (which is a CMC immersion in view of (4)) is weakly stable, i.e.\ stable for the area measure under volume-preserving variations.
\end{enumerate}
Then $\Sigma=\supp{\|V\|} \cap B_{R}(0)$ is a smoothly immersed hypersurface and there is $C=C(H_{0},\Lambda)$ so that
\[
\sup_{x \in \Sigma \cap B_{R/2}(0)} |A_{\Sigma}|(x) \leq C R^{-1},
\]
where $A_{\Sigma}$ denotes the second fundamental form of $\Sigma$.
\end{theo-lowdim}
\begin{theo-highdim}\label{theo:varifolds-main-weakly-st-sheeting}
Let $\Lambda, H_0 >0$. For any $n\geq 3$ suppose that $V \in IV_n(B^{n+1}_{R}(0))$ is an integral varifold with $\|V\|(B^{n+1}_R(0)) \leq \Lambda R^n$. Assume that the following hypotheses hold:
\begin{enumerate}
\item the first variation of $V$ is locally summable to an exponent $p>n$ (with respect to the weight measure $\|V\|$);
\item $V$ has no classical singularities;
\item whenever $p$ is a (two-fold) touching singularity there exists $\rho>0$ such that $${\mathcal{H}}^{n}\left(\{y \in \supp{\|V\|} \cap B_\rho^{n+1}(p) : \Theta(\|V\|,y) = \Theta(\|V\|,p)\}\right)=0,$$
where $\Theta$ stands for the density;
\item the $C^1$ embedded part of $\supp{\|V\|}$ (non-empty by Allard's regularity theorem) has generalized mean curvature $h$ with $|h|=H$ for a constant $H\leq H_0$ (see \cite{BW:stableCMC} for the variational formulation of this assumption, which makes sense for a $C^1$ hypersurface and leads to its $C^2$ regularity by standard elliptic methods);
\item the $C^2$ immersed part of $\supp{\|V\|}$ (which is a CMC immersion in view of (4)) is weakly stable, i.e.\ stable for the area measure under volume-preserving variations.
\end{enumerate}
There exists $\delta_{0}=\delta_{0}(n,H_{0},\Lambda)$ so that if additionally
\[
\supp{\|V\|} \subset \{|x^{n+1}| \leq \delta_{0}R\}
\]
then $\supp{\|V\|} \cap B_{R/2}(0)$ separates into the union of the graphs of functions $u_{1} \leq \dots \leq u_{k}$ defined on $B^{n}_{R/2}(0) := B_{R/2}(0)\cap \{x^{n+1}=0\}$ satisfying
\[
\sup_{B^{n}_{R/2}(0)} \left( |Du_{i}| + R |D^{2}u_{i}|\right) \leq \delta_{0}
\]
for $i=1,\dots,k;$ moreover, each $u_i$ is separately a smooth CMC graph.
\end{theo-highdim}
\begin{rema}
The extension of the theorems above to the case of an ambient Riemannian manifold follows the same arguments, employing the result in \cite{BW:stableCMC-future}.
\end{rema}
\begin{rema}
Note that Theorems \ref{theo:weakly-st-low-dim-est}, \ref{theo:main-weakly-st-sheeting}, \hyperref[theo:varifolds-low-dim-est]{$1^{\prime}$} and \hyperref[theo:varifolds-main-weakly-st-sheeting]{$2^{\prime}$} hold in particular for $H=0$; in this case, the vanishing of the mean curvature prevents touching singularities, therefore assumption (3) in Theorems \hyperref[theo:varifolds-low-dim-est]{$1^{\prime}$} and \hyperref[theo:varifolds-main-weakly-st-sheeting]{$2^{\prime}$} is redundant. For $H=0$ our results generalize the works of Schoen--Simon--Yau \cite{SSY}, Schoen--Simon \cite{SchoenSimon} and the third author \cite[Theorem 3.3]{W:annals2014} from strong to weak stability.
\end{rema}
\begin{rema}
The conclusions of Theorems \ref{theo:main-weakly-st-sheeting} and \hyperref[theo:varifolds-main-weakly-st-sheeting]{$2^{\prime}$} clearly fail (even for strongly stable minimal hypersurfaces) for $n\geq 7$ without any flatness assumption, by the construction of Hardt--Simon \cite{HardtSimon:foliation}.
We also note that singularities do occur in stable CMC hypersurfaces (with $H \neq 0$) of dimension $\geq 7$, as shown by a recent construction of Irving (\cite{Irving}) modifying the earlier work of
Caffarelli--Hardt--Simon (cf.\ \cite{CHS}).
\end{rema}
\subsection{A remark on bounded index minimal surfaces}
\label{boundedindexminimal}
The discussion in the paragraph preceding Section~\ref{small-sing} notwithstanding, the techniques developed in this paper are relevant for the study of bounded index minimal surfaces in Riemannian $(n+1)$-manifolds for $n\geq 7$ (i.e., in the singular dimensions). For example, if $\Sigma^{n}\subset B_{R}(0)\subset \mathbb{R}^{n+1}$ is a minimal surface with $\Index(\Sigma) \leq 1$, $\mathcal H^{n}(\Sigma) \leq \Lambda R^{n}$, and $\Sigma\subset \{|x^{n+1}| \leq \delta_{0}R\}$, then by a straightforward application of the Schoen--Simon sheeting theorem \cite{SchoenSimon}, $\Sigma$ splits into smooth sheets away from a given point. The argument used to prove Proposition \ref{prop:curv-est-sheeting-all-dim} extends to this situation to conclude that the sheets are connected by a small region that is close (depending on $\delta_{0}$) to an index one minimal hypersurface in $\mathbb{R}^{n+1}$ (with small singular set), \emph{having regular ends}. This last condition is the non-trivial conclusion; it follows from the argument in Proposition \ref{prop:curv-est-sheeting-all-dim}, transferring flatness from large scales to small scales (see Remark \ref{rema:transfer}). Using the arguments in \cite{CKM} (cf.\ \cite{BS:index}), similar statements hold for $\Index(\Sigma)\leq I_{0}$. See also \cite{Tysk}.
\subsection{Outline of the paper} Theorem \ref{theo:weakly-st-low-dim-est} will be proved in Section \ref{proofof1}, building on the Bernstein-type result given in Proposition \ref{prop:bernstein-low-dim} below (Section \ref{Bernsteintypetheorems}). Theorem \ref{theo:main-weakly-st-sheeting} will be proved in Section \ref{proofof2}, building on a different Bernstein-type result (Proposition \ref{prop:bernstein-all-dim} in Section \ref{Bernsteintypetheorems}). The proofs of both Bernstein-type results rely on a global result for weakly stable minimal hypersurfaces, namely the fact that they must be one-ended. This is proved in \cite{ChengCheungZhou} in the case of \emph{smooth} embedded hypersurfaces; this result, recalled in Theorem \ref{theo:CCZ-smooth} of Appendix \ref{appendixCCZ}, is all that is actually needed for Theorem \ref{theo:weakly-st-low-dim-est}, together with a classical blow-up argument. For the proof of Theorem \ref{theo:main-weakly-st-sheeting}, we extend the one-ended conclusion to the situation where the hypersurface may have a codimension-$7$ singular set; this is done in Theorem \ref{theo:CCZ-non-smooth} of Appendix \ref{appendixCCZ}. The proof of Theorem \ref{theo:main-weakly-st-sheeting} also relies on a careful blow-up argument for which we need to use certain results from \cite{BW:stableCMC}, which we recall in Appendix \ref{appendixBW}.
\subsection{Acknowledgements} O.C. was supported by an EPSRC grant EP/K00865X/1 as well as the Oswald Veblen Fund and NSF grant no.\ 1638352.
\section{Two Bernstein-type theorems}
\label{Bernsteintypetheorems}
We begin with the following Bernstein type result, which will yield Theorem \ref{theo:weakly-st-low-dim-est} when combined with a standard blow-up argument. We note that such a result holds for $n=2$ without the embeddedness or area growth assumptions, as discussed above. As a notational remark, we stress that we will always write $\nabla$ to denote the intrinsic gradient on a hypersurface, and will instead denote with $\nabla^{\mathbb{R}^{n+1}}$ the ambient gradient.
\begin{prop}\label{prop:bernstein-low-dim}
For $3 \leq n \leq 6$, suppose that $\Sigma^{n}\subset \mathbb{R}^{n+1}$ is a connected, weakly stable, immersed minimal hypersurface with no singularities and with $\mathcal H^{n}(\Sigma\cap B_{R}) \leq \Lambda R^{n}$ for some constant $\Lambda \geq 1$ and all $R>0$. When $n=6$ assume either that $\Lambda = 3 - \delta$ for some $\delta >0$ or that $\Sigma$ is embedded. Then $\Sigma$ is a hyperplane.
\end{prop}
\begin{proof}
We begin by showing that $\Sigma$ is (strongly) stable outside of a compact set. If all of $\Sigma$ is strongly stable, then by \cite{SSY,SchoenSimon} the proposition follows. If not, we may choose $R>0$ so that $\Sigma\cap B_{R}$ is unstable. If $\Sigma\setminus B_{2R}$ is unstable, then we may find functions $\varphi_{1} \in C^{\infty}_{c}(\Sigma\cap B_{R})$ and $\varphi_{2} \in C^{\infty}_{c}(\Sigma\setminus B_{2R})$ so that
\[
\int_{\Sigma} |A_{\Sigma}|^{2} \varphi_{i}^{2} > \int_{\Sigma} |\nabla \varphi_{i}|^{2} .
\]
By weak stability, $\int_{\Sigma} \varphi_{i} \neq 0$ for $i=1, 2$. Choose $t \in {\mathbb R}$ so that
\[
\int_{\Sigma} \varphi_{1} + t \varphi_{2} = 0.
\]
Because $\varphi_{1},\varphi_{2}$ have disjoint support, we find that
\[
\int_{\Sigma} |A_{\Sigma}|^{2} (\varphi_{1}+t\varphi_{2})^{2} > \int_{\Sigma} |\nabla (\varphi_{1} + t \varphi_{2})|^{2}.
\]
This contradicts the weak stability of $\Sigma$. Thus, $\Sigma$ is stable outside of a compact set.
We first assume that $\Sigma$ is embedded. We will explain below the modifications for the cases $\Sigma$ immersed and $3\leq n\leq 5,$ or $\Sigma$ immersed, $n=6$ and $\Lambda = 3 -\delta$. In the embedded case, we first show that there exists an integer $m$ such that any tangent cone at infinity is a hyperplane with multiplicity $m$.
\begin{claim}\label{claim:blowdown-low-dim}
There is $m \in \mathbb{N}$ so that for any sequence $\lambda_{j}\to0$, a subsequence of $\Sigma_{j}:=\lambda_{j}\Sigma$ converges smoothly and graphically on any compact subset of $\mathbb{R}^{n+1}\setminus\{0\}$ to a hyperplane of multiplicity $m$.
\end{claim}
\begin{proof}[Proof of the Claim]
By the local mass bounds and the compactness theorem for stationary integral varifolds, after passing to a subsequence (not relabelled), $\Sigma_{j}$ converges as varifolds to a stationary cone $\mathcal C$, a tangent cone to $\Sigma$ at infinity. By \cite[Theorem 3]{SchoenSimon} (for $n\leq 5$ the estimates in \cite{SSY} suffice) the magnitude of the second fundamental form decays as $\frac{1}{|y|}$ for $|y|\to \infty$, namely there exist $R_0>0$ and a constant $C>0$ such that $|A|(y) \leq \frac{C}{|y|}$ for $y \in \Sigma$ and $|y| \geq R_0$, where $|y|$ denotes the Euclidean norm of $y$ in $\mathbb{R}^{n+1}$. Therefore $\mathcal C$ is smooth away from $0$ and the convergence is smooth (possibly with multiplicity) on compact subsets of $\mathbb{R}^{n+1}\setminus\{0\}$. The smooth convergence implies that $\mathcal C \setminus \{0\}$ is (strongly) stable: by \cite{Simons} and the dimensional restriction, $\mathcal C$ is a flat hyperplane with some multiplicity $m \in \mathbb{N}$. Finally, the fact that the multiplicity $m$ is independent of the sequence $(\lambda_{j})$ is an immediate consequence of the monotonicity formula.
\end{proof}
The preceding claim implies that there exists $r_0$ such that, whenever $r>r_0$, the sphere $\partial B^{n+1}_r(0)$ intersects $\Sigma$ transversely: indeed, if that failed, we could produce a sequence of radii $r_i \to \infty$ such that $\frac{1}{r_i}\Sigma$ is tangent to $\partial B^{n+1}_1(0)$ at some point; since a hyperplane through the origin meets $\partial B^{n+1}_1(0)$ transversely, the sequence $\frac{1}{r_i} \Sigma$ would then fail to converge smoothly and graphically to such a hyperplane near the tangency points, contradicting Claim~\ref{claim:blowdown-low-dim}.
Let $r>r_0$. The transversality condition just established amounts to the fact that the gradient of $h:\Sigma \setminus \overline{B}_{r_0}(0) \to \mathbb{R}$, $h(x)=|x|$ (the ambient distance to the origin) is everywhere non-vanishing. By \cite[Theorem 3.1]{MilnorMorse} this implies that, for any $R>r$, $\Sigma \cap \left(B_R(0) \setminus B_r(0)\right)$ deformation retracts onto $\Sigma \cap \partial B_r(0)$. In particular, the number of connected components of $\Sigma \cap \left(B_R(0) \setminus B_r(0)\right)$ equals the number of connected components of $\Sigma \cap \partial B_r(0)$. Denoting by $D_1, \dots, D_N$ the connected components of $\Sigma \cap \partial B_r(0)$, we consider, for every $R>r$, $N$ disjoint open sets $A^R_1, \ldots, A^R_N$, each containing a single connected component of $\Sigma \cap \left(B_R(0) \setminus B_r(0)\right)$ and labelled so that $A^R_j$ contains $D_j$. Let $\tilde{A}_j=\cup_{R>r} A^R_j$: the open sets $\tilde{A}_j$ for $j=1, \dots, N$ are disjoint by construction and cover $\Sigma \setminus B_r(0)$, so the number of ends of $\Sigma$ is at least $N$.
The result of \cite{ChengCheungZhou} (see Theorem \ref{theo:CCZ-smooth} below) gives that $\Sigma$ is one-ended, i.e.~$N=1$, and so, for all $r>r_0$, $\Sigma \cap \partial B_r(0)$ is connected. On the other hand, $\SS^{n-1}$ is simply connected and, as such, does not admit a nontrivial connected cover. Therefore, recalling Claim \ref{claim:blowdown-low-dim}, we conclude that $m=1$, or equivalently, that the density of $\Sigma$ at infinity is $1$. Hence by the monotonicity formula $\Sigma$ is a cone with density at the vertex (which is equal to the density at infinity) equal to 1. Since the density of $\Sigma$ at any other point is also 1, it follows again by the monotonicity formula that $\Sigma$ is a cone with vertex at each of its points, and hence a hyperplane.
We now consider the case where $\varphi: \Sigma\to\mathbb{R}^{n+1}$ is only assumed to be immersed and either $3\leq n\leq 5$ or $n=6$ and $\Lambda = 3-\delta$. In this case, we still have, by the local uniform mass bounds, that for any sequence $\lambda_{j} \to 0$, a subsequence of $\left(\lambda_{j}\right)_{\#} |\varphi(\Sigma)|$ converges as varifolds to a stationary cone ${\mathcal C}.$ By the locally uniform pointwise curvature bounds (given by
\cite{SSY} for $3 \leq n \leq 5$ or by \cite{Wic08} for $n=6$ and $\Lambda = 3 - \delta$), it follows that ${\rm spt} \, \|{\mathcal C}\|$ is smoothly immersed away from the origin, and the convergence is smooth and graphical in compact subsets of ${\mathbb R}^{n+1} \setminus \{0\}$; moreover, since $\Sigma \setminus B_{2R}$ is stable, it also follows that the stability inequality
$\int |A_{\mathcal C}|^{2}\zeta^{2} \leq \int |\nabla \, \zeta|^{2}$ holds true for every $\zeta \in C^{1}_{c}({\rm spt} \, \|{\mathcal C}\| \setminus \{0\})$, i.e.\ that ${\rm spt} \, \|{\mathcal C}\| \setminus \{0\}$ is stable as an immersion. (Indeed if $M_{j}$ is any sequence of immersed minimal hypersurfaces of an open set $U \subset {\mathbb R}^{n+1}$ with no singularities and with $\partial \, M_{j} \cap U = \emptyset$, and if $\limsup_{j \to \infty} \, {\mathcal H}^{n}(M_{j} \cap K) < \infty$ and $\limsup_{j \to \infty} \, \sup_{x \in M_{j} \cap K} \, |A_{M_{j}}(x)| < \infty$ for each compact $K \subset U$, then for any given compact set $K \subset U$, there is a fixed radius $\sigma = \sigma(K)>0$ independent of $j$ such that (after passing to a subsequence without changing notation) for every $j$ and every $p \in M_{j} \cap K$,
$M_{j} \cap B_{\sigma}(p)$ is the union of smooth embedded graphs with small gradient over some hyperplanes $P_{j,1}, \ldots, P_{j, N_{j}}$ passing through $p$ (with $\sum_{k=1}^{N_{j}}|P_{j, k}|$ equal to the tangent cone to $M_{j}$ at $p$), where $N_{j} \leq N$ for some $N$ independent of $j$ and $p$; if $V$ is the varifold limit of $(M_{j})$, then for any $z \in {\rm spt} \, \|V\| \cap U$, choosing a sequence of points $z_{j} \in M_{j}$ with
$z_{j} \to z$ and applying this fact to $B_{\sigma}(z_{j}) \cap M_{j},$ we get, passing to a subsequence, that the hyperplanes $P_{j, k} \to P_{k}$ for $k =1, \ldots, Q$ for some $Q \leq N$, and so we can write
$M_{j} \cap B_{\sigma/2}(z_{j})$ as a union of embedded minimal graphs over the fixed planes $P_{1}, \ldots, P_{Q}$ with small gradient. By the higher derivative estimates for solutions to uniformly elliptic equations, we then see that
${\rm spt} \, \|V\| \cap B_{\sigma/4}(z)$ is the union of smoothly embedded minimal graphs over $P_{1}, \ldots, P_{Q},$ i.e.\ that ${\rm spt} \, \|V\| \cap U$ is immersed, and that the convergence of
$(M_{j})$ is smooth and graphical (via normal sections over ${\rm spt} \, \|V\| \cap U$) in any compact subset of $U$. From this, it is easy to verify that if $M_{j}$ are stable, i.e.\ if $\int_{M_{j}} |A_{M_{j}}|^{2} \zeta^{2} \leq \int_{M_{j}} |\nabla \, \zeta|^{2}$ for each $\zeta \in C^{1}_{c}(M_{j})$ then $\int |A_{{\rm spt} \, \|V\|}|^{2}\zeta^{2} \leq \int |\nabla \, \zeta|^{2}$ for each $\zeta \in C^{1}_{c}({\rm spt} \, \|V\|)$.)
By Simons' theorem (\cite[Theorem 6.1.1]{Simons}; see the argument in \cite[Appendix B]{Simon:GMT} which is valid when the cone, as in our case, is immersed and stable as an immersion away from the origin), we conclude that ${\mathcal C} = \sum_{\ell = 1}^{M} m_{\ell}|L_{\ell}|$ for some hyperplanes $L_{1}, \ldots, L_{M}$ and positive integers $m_{1}, \ldots, m_{M}.$
Arguing by contradiction (as in the embedded case), this shows that $\varphi$ is transverse to $\partial B_{r}^{n+1}(0)$ for all $r > r_{0}$ sufficiently large. Again, as in the embedded case, we thus find that the number of connected components of $\varphi^{-1}(B_{R}(0)\setminus B_{r}(0))$ is equal to the number of connected components of $\varphi^{-1}(\partial B_{r}(0))$ for any $R\geq r > r_{0}$. Because $\Sigma$ has only one end by Theorem \ref{theo:CCZ-smooth}, there is only one such component. This proves both that $\mathcal C$ is supported on a single hyperplane, and that it has multiplicity one. Thus, $\Sigma$ is a flat hyperplane by the monotonicity formula. This completes the proof.
\end{proof}
The proof of Theorem \ref{theo:weakly-st-low-dim-est} will be achieved by employing Proposition \ref{prop:bernstein-low-dim} and a standard blow-up argument (see Section \ref{proofof1}).
We now present a version of Proposition \ref{prop:bernstein-low-dim} that holds in all dimensions. This, in conjunction with the sheeting-away-from-a-point result for weakly stable CMC hypersurfaces from \cite{BW:stableCMC} (recalled in Appendix \ref{appendixBW}, Theorem \ref{theo:sheeting-away-from-pt} below), will imply Theorem \ref{theo:main-weakly-st-sheeting} using a less standard rescaling argument. We point out that, in the proof of the next proposition, we make use of the one-end result of \cite{ChengCheungZhou} for weakly stable CMC hypersurfaces, generalised here to allow a co-dimension $7$ singular set. This generalisation is given in Appendix \ref{appendixCCZ}, Theorem \ref{theo:CCZ-non-smooth}.
\begin{prop}\label{prop:bernstein-all-dim}
For $n \geq 3$, suppose that $V$ is a stationary integral $n$-varifold in $\mathbb{R}^{n+1}$ with ${\rm spt} \, \|V\|$ a connected set, ${\rm sing} \, V \subset B_{1}(0)$ (so ${\rm spt} \, \|V\|$ is smooth in ${\mathbb R}^{n+1} \setminus B_{1}(0)$) and with ${\rm dim}_{\mathcal H} \, ({\rm sing} \, V) \leq n-7$. Assume that the regular part $\Sigma = {\rm reg} \, V$ ($= {\rm spt} \, \|V\| \setminus {\rm sing} \, V$) is weakly stable and that $V$ satisfies area growth $\Vert V\Vert (B_{R}(0)) \leq \Lambda R^{n}$ for some constant $\Lambda \geq 1$ and all $R >0$. Finally, assume that for some $\varepsilon >0$, $\Sigma$ satisfies
\begin{equation}\label{eq:curv-est-bernstein}
|A_{\Sigma}|(x) |x| \leq \sqrt{n-1}-\varepsilon
\end{equation}
for $x \in \Sigma \setminus B_{1}$, where $|\cdot|$ denotes the length in $\mathbb{R}^{n+1}$. Then $\supp \|V\|$ is a hyperplane.
\end{prop}
\begin{proof}
We begin by proving that Claim \ref{claim:blowdown-low-dim} from Proposition \ref{prop:bernstein-low-dim} holds in this setting as well. For a sequence $\lambda_{j}\to 0$, we consider $V_{j}:=(\lambda_{j})_{\#} \, V$. Passing to a subsequence, $V_{j}$ converges to a cone $\mathcal C$ in the sense of varifolds. Moreover, the assumed curvature estimates contained in \eqref{eq:curv-est-bernstein} imply that ${\rm spt} \, \|\mathcal C\|\setminus\{0\}$ is smooth and $\Sigma_{j}$ converges smoothly to ${\rm spt} \, \|\mathcal C\|$ (possibly with multiplicity) on compact subsets of $\mathbb{R}^{n+1}\setminus\{0\}$ (here, we use the fact that the estimate \eqref{eq:curv-est-bernstein} is scale invariant). The curvature estimates pass to the limit, implying that $|A_{{\rm spt} \, \|\mathcal C\|}|(x) |x| < \sqrt{n-1}$ for all $x \in {\rm spt} \,\|\mathcal C\|\setminus\{0\}$. Appealing to Lemma \ref{lemm:cone-small-curvature} below, we find that $\mathcal C$ is a flat hyperplane with some multiplicity $m \in {\mathbb N}$. This establishes Claim \ref{claim:blowdown-low-dim} in this setting (that the multiplicity $m$ is independent of the sequence follows again by monotonicity, as before).
Thus, any tangent cone at infinity of $V$ is a multiplicity $m$ plane. By Theorem \ref{theo:CCZ-non-smooth}, applied to $V$, $\Sigma$ has exactly one end. Arguing as we did in the proof of Proposition \ref{prop:bernstein-low-dim}, we can use the graphical convergence on compact sets in $\mathbb{R}^{n+1}\setminus\{0\}$ (which follows from the curvature estimate (\ref{eq:curv-est-bernstein})) and the fact that $\SS^{n-1}$ does not admit any multiple cover, to obtain that, outside of $B_{1}$, $V$ must agree with the varifold given by $\Sigma$ with multiplicity $m$. Because the density at infinity of $V$ must be $m$, there must be equality in the monotonicity formula starting at any point in $\Sigma$ (which also has density $m$), which easily implies that the support of $V$ is a hyperplane.
\end{proof}
\begin{lemm}\label{lemm:cone-small-curvature}
Suppose that $\mathcal C$ is an $n$-dimensional minimal cone in $\mathbb{R}^{n+1}$ that is smooth away from $0$ and satisfies $|A_{\mathcal C}|(x)|x| < \sqrt{n-1}$ for all $x \in \mathcal C \setminus \{0\}$. Then $\mathcal C$ is a flat hyperplane.
\end{lemm}
\begin{proof}
Note that $M : = \mathcal C \cap {\mathbb S}^{n}$ is smooth. By the given curvature estimate, we have that $|A_{M}| < \sqrt{n-1}$. By \cite[Corollary 5.3.2]{Simons}, $M$ must be totally geodesic. This proves the assertion.
\end{proof}
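The scaling identity implicit in the proof above is worth recording: if $\mathcal C$ is the cone over its link $M = \mathcal C \cap {\mathbb S}^{n}$, then at $x = r\omega$ with $\omega \in M$ one has
\[
|A_{\mathcal C}|(r\omega) = \frac{1}{r}\,|A_{M}|(\omega),
\]
where $A_{M}$ denotes the second fundamental form of $M$ as a hypersurface of ${\mathbb S}^{n}$ (the radial direction lies in the kernel of $A_{\mathcal C}$). Hence the hypothesis $|A_{\mathcal C}|(x)|x| < \sqrt{n-1}$ is precisely the condition $|A_{M}| < \sqrt{n-1}$ entering Simons' rigidity theorem.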
\begin{rema}\label{rema:transfer}
Observe that the Simons cone $\Sigma$ in $\mathbb{R}^{8}$ is (strongly) stable and satisfies $|A_{\Sigma}|(x)|x| = \sqrt{n-1}$ for all $x \in \Sigma\setminus B_{1}$. As such, we see that the constant $\sqrt{n-1}-\varepsilon$ in \eqref{eq:curv-est-bernstein} is sharp in the sense that Proposition \ref{prop:bernstein-all-dim} fails with any larger constant.
The importance of the size of the constant in a (scale invariant) curvature estimate of the form \eqref{eq:curv-est-bernstein} seems to have been first shown by White in \cite{Whi87}. This has been refined in \cite{MPR,CCE,CKM}. A key novelty contained in the present work is the combination of \eqref{eq:curv-est-bernstein} with Lemma \ref{lemm:cone-small-curvature} and with Theorem \ref{theo:CCZ-non-smooth}, allowing flatness to propagate from large to small scales. Furthermore, our work here seems to be the first use of such an estimate in a setting where a priori there could be singularities.
\end{rema}
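The sharpness claim in Remark \ref{rema:transfer} can be checked numerically. The following sketch (purely illustrative, not part of any proof) represents the Simons cone as the zero level set of $F(u,v) = |u|^{2} - |v|^{2}$ in $\mathbb{R}^{4}\times\mathbb{R}^{4}$ and evaluates $|A|(x)|x|$ at a point of the cone via the standard level-set formula $A = P\,(\mathrm{Hess}\,F)\,P/|\nabla F|$, where $P$ is the tangential projector:

```python
import numpy as np

# Simons cone C = {(u, v) in R^4 x R^4 : |u| = |v|}, a 7-dimensional
# minimal cone in R^8, written as the zero set of F(u, v) = |u|^2 - |v|^2.
def second_fundamental_form_norm(p):
    grad = 2.0 * np.concatenate([p[:4], -p[4:]])  # gradient of F at p
    hess = np.diag([2.0] * 4 + [-2.0] * 4)        # Hessian of F (constant)
    nu = grad / np.linalg.norm(grad)              # unit normal to the level set
    P = np.eye(8) - np.outer(nu, nu)              # tangential projector
    A = P @ hess @ P / np.linalg.norm(grad)       # second fundamental form
    return np.linalg.norm(A)                      # Frobenius norm |A|

p = np.array([1.0, 0, 0, 0, 1.0, 0, 0, 0])        # a point of the cone
val = second_fundamental_form_norm(p) * np.linalg.norm(p)
print(val, np.sqrt(6.0))  # both are sqrt(n-1) with n = 7, i.e. about 2.4495
```

The product $|A|(x)|x|$ is scale invariant, so the same value is obtained at any point of the cone away from the vertex.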
\section{Proof of Theorem \ref{theo:weakly-st-low-dim-est}}
\label{proofof1}
Because the hypothesis and conclusion are scale invariant, it suffices to take $R=1$. Assume the theorem is false. Then there is a sequence $\Sigma_{j}$ in $B_{2}\subset \mathbb{R}^{n+1}$ of embedded (when $\Sigma$ is immersed and $3\leq n\leq 5$, an identical argument applies by considering instead rescalings and limits of the immersions) smooth weakly stable hypersurfaces with $|H| \leq H_{0}$ and $\mathcal H^{n}(\Sigma_{j}) \leq \Lambda$, but such that
\[
\sup_{x \in \Sigma_{j}\cap B_{1/2}} |A_{\Sigma_{j}}|(x) \to \infty
\]
as $j\to\infty$. A standard blow-up argument (which we now recall) produces a surface which contradicts Proposition \ref{prop:bernstein-low-dim}.
Choose $x_{j}\in\Sigma_{j}\cap B_{1/2}$ with $|A_{\Sigma_{j}}|(x_{j})\to\infty$. Without loss of generality, we may assume $x_j \to x_0$. Choose $\rho_{j}\to 0$ sufficiently slowly so that $\rho_{j}|A_{\Sigma_{j}}|(x_{j})\to\infty$. Find $y_{j}\in \Sigma_{j}\cap B_{\rho_{j}}(x_{j})$ maximizing
\[
y\mapsto |A_{\Sigma_{j}}|(y) d(y,\partial B_{\rho_{j}}(x_{j})).
\]
Set $\sigma_{j} = d(y_{j},\partial B_{\rho_{j}}(x_{j}))$ and $\lambda_{j} = |A_{\Sigma_{j}}|(y_{j})$. Clearly $\sigma_j \leq \rho_j$ and $y_j \to x_0$, so that
\begin{equation}\label{eq:conseq-point-picking}
|A_{\Sigma_{j}}|(y) d(y,\partial B_{\rho_{j}}(x_{j})) \leq \sigma_{j}\lambda_{j} \text{ for } y\in \Sigma_j \cap B_{\sigma_j}(y_j).
\end{equation}
By the choice of $y_j$ we have $\sigma_j \lambda_j \geq \rho_j |A_{\Sigma_{j}}|(x_{j})$, which implies $\lambda_j:=|A_{\Sigma_{j}}|(y_{j}) \to \infty$ and $\lambda_j \sigma_j \to \infty$ as $j\to \infty$. We now define
\[
\tilde \Sigma_{j} = \lambda_{j}(\Sigma_{j}-y_{j}).
\]
We claim that $\tilde\Sigma_{j}$ has bounded curvature on compact subsets of $\mathbb{R}^{n+1}$. Indeed, for $x \in \tilde\Sigma_{j} \cap B_{\sigma_j \lambda_j}(0)$, scaling and \eqref{eq:conseq-point-picking} yield
\[
|A_{\tilde \Sigma_{j}}|(x) = \frac{1}{\lambda_{j}} |A_{\Sigma_{j}}|(y_{j} + \lambda_{j}^{-1}x) \leq \frac{\sigma_{j}}{\sigma_{j} - \lambda_{j}^{-1}|x|} \to 1
\]
for $|x|$ uniformly bounded. Note that $\tilde\Sigma_{j}$ has mean curvature $|H_{j}|\leq H_{0}/\lambda_{j} \to 0$.
The monotonicity formula (see e.g.~\cite{Simon:GMT}) shows that $\mathcal H^{n}(\tilde \Sigma_{j}\cap B_{R}) \leq \tilde \Lambda R^{n}$ for some constant $\tilde \Lambda=\tilde\Lambda(\Lambda,n,H_{0})$ independent of $j$. Then, by higher order elliptic estimates, $\tilde\Sigma_{j}$ converges (up to passing to a subsequence) smoothly (possibly with multiplicity) to a smooth, embedded, complete, weakly stable minimal hypersurface $\tilde\Sigma_{\infty}$ in $\mathbb{R}^{n+1}$.
Because $|A_{\tilde\Sigma_{j}}|(0) = 1$ for every $j$, we find that $|A_{\tilde\Sigma_{\infty}}|(0) = 1$, so $\tilde\Sigma_{\infty}$ is non-flat. This contradicts Proposition \ref{prop:bernstein-low-dim} (applied to $\tilde\Sigma_{\infty}$ with multiplicity one).
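The point-picking step above is elementary but easy to get wrong. The following toy computation (purely illustrative, in one dimension with a synthetic "curvature" function) checks the two consequences used above, namely $\sigma_{j}\lambda_{j} \geq \rho_{j}|A_{\Sigma_{j}}|(x_{j})$ and the bound \eqref{eq:conseq-point-picking}:

```python
import numpy as np

# Toy 1-D illustration of the point-picking argument: on the interval
# B_rho(x0), pick y maximizing f(y) * dist(y, boundary) for a synthetic
# positive function f, and verify the consequences used in the blow-up.
rng = np.random.default_rng(0)
x0, rho = 0.0, 1.0
ys = np.linspace(x0 - rho, x0 + rho, 20001)
f = 5.0 + 40.0 * np.exp(-50.0 * (ys - 0.3) ** 2) + rng.random(ys.size)

dist = rho - np.abs(ys - x0)      # distance to the boundary of B_rho(x0)
i = np.argmax(f * dist)           # the point-picking step
y, lam = ys[i], f[i]              # y_j and lambda_j = f(y_j)
sigma = rho - abs(y - x0)         # sigma_j = dist(y_j, boundary)

# Consequence 1: sigma * lam >= rho * f(x0), which drives lam and
# sigma * lam to infinity when rho_j * |A|(x_j) does.
i0 = np.argmin(np.abs(ys - x0))
assert sigma * lam >= rho * f[i0]

# Consequence 2: f(z) * dist(z, boundary) <= sigma * lam everywhere,
# so after rescaling by lam the curvature stays bounded near y.
assert np.all(f * dist <= sigma * lam + 1e-12)
print("point-picking consequences verified")
```

Both assertions are immediate from the maximality of $y$; the code only makes the bookkeeping explicit.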
\section{Proof of Theorem \ref{theo:main-weakly-st-sheeting}}
\label{proofof2}
We begin by describing the setup of the proof of Theorem \ref{theo:main-weakly-st-sheeting}. By scaling we may take $R=10$. We consider a sequence of weakly stable hypersurfaces $\Sigma_{j}$ with mean curvature $|H| \leq H_{0}/10$ and $\mathcal H^{n}(\Sigma_{j}) \leq \Lambda 10^{n}$. We assume that each $\Sigma_{j}$ has a singular set of co-dimension at least $7$ and that $\Sigma_{j}$ satisfies
\[
\Sigma_{j} \subset \{|x^{n+1}| \leq 10/j\} \cap B_{10}(0).
\]
It follows that $\Sigma_{j}$ converges to the flat disk $\{x^{n+1}=0\} \cap B_{10}(0)$ with some positive integer multiplicity $k$, in the sense of varifolds. The final aim is to show that the conclusion of Theorem \ref{theo:main-weakly-st-sheeting} is valid for all sufficiently large $j$.
We will first establish the regularity assertion and the curvature estimate in Proposition~\ref{prop:curv-est-sheeting-all-dim} below; the proof of Theorem \ref{theo:main-weakly-st-sheeting} will then be completed at the end of the section. The curvature estimate of Proposition~\ref{prop:curv-est-sheeting-all-dim} will be a consequence of Proposition \ref{prop:bernstein-all-dim} and a blow-up argument. Its scale-breaking nature is reminiscent of the arguments in \cite{CKM}.
\begin{prop}\label{prop:curv-est-sheeting-all-dim}
Fix $\eta>0$. Then, for $j$ sufficiently large, $\Sigma_{j} \cap B_{9}$ is smooth and there is $z_{j}\in B_{6}$ so that
\[
|A_{\Sigma_{j}}|(x)|x-z_{j}| \leq \eta
\]
for all $x \in \Sigma_{j}\cap B_{9}$, where $|\cdot|$ stands for the length in $\mathbb{R}^{n+1}$.
\end{prop}
We briefly explain the idea of the proof. The conclusion is non-trivial only when we are in the second alternative of the partial sheeting result from \cite{BW:stableCMC} that is recalled in Theorem \ref{theo:sheeting-away-from-pt}, Appendix \ref{appendixBW}. This second alternative gives that, away from a point, $\Sigma_{j}$ is converging smoothly (with sheeting) to a hyperplane with multiplicity. Thus, there is some $y$ and $\delta>0$ small so that the conclusion holds outside of $B_{\delta}(y)$. The strategy of the proof is to pick the \emph{smallest} ball $B_{\delta_{j}}(z_{j})$ so that the conclusion for $\Sigma_{j}$ holds outside of the ball. The claim will follow if we can prove that actually $\delta_{j}=0$, so we will assume that $\delta_{j}>0$. Rescale $\Sigma_{j}$ to $\hat\Sigma_{j}$ so that the ball $B_{\delta_{j}}(z_{j})$ becomes $B_{1}(0)$ (outside of which the smoothness and \emph{scale invariant} curvature estimates hold). We can pass $\hat\Sigma_{j}$ to the limit, which inherits the curvature estimates (and smoothness) outside of $B_{1}(0)$. By Proposition \ref{prop:bernstein-all-dim}, the limit is a union of hyperplanes (note that here we have transferred the flatness estimates contained in the partial sheeting result to the smaller scale, as pointed out in Remark \ref{rema:transfer}).
Now, the partial sheeting result (applied to $\hat\Sigma_{j}$) implies, as above, that the convergence of $\hat\Sigma_{j}$ to the limit occurs smoothly away from a single point. This contradicts our choice of $B_{\delta_{j}}(z_{j})$, since for $j$ large, we could take a smaller ball around the point where sheeting fails in the rescaled picture. This will contradict the assumption that $\delta_{j}>0$, and will complete the proof.
\begin{proof}[Proof of Proposition \ref{prop:curv-est-sheeting-all-dim}]Clearly, it suffices to assume that $\eta < \sqrt{n-1}$. If the first case of the conclusion of Theorem~\ref{theo:sheeting-away-from-pt} holds for all $j$ large enough, then the curvature estimate is true with $z_j=0$ (and the conclusion of Theorem \ref{theo:main-weakly-st-sheeting} is valid, so there is nothing further to prove). So we may assume (by the second case of the conclusion of Theorem~\ref{theo:sheeting-away-from-pt}) that there is a point $y \in B_{5}(0) \cap \{x^{n+1} = 0\}$ such that the $\Sigma_{j}$ are sheeting away from $y$, i.e.\ for any $r>0$, $\Sigma_{j}\cap (B_{9}(0) \setminus B_{r}(y) )$ is smooth for $j$ sufficiently large and
\begin{equation}\label{eq:curv-away-from-pt}
\sup_{x\in\Sigma_{j}\cap (B_{9}(0) \setminus B_{r}(y) )} |A_{\Sigma_{j}}|(x) \to 0
\end{equation}
as $j\to\infty$. We will subsequently replace $\Sigma_{j}$ by $\Sigma_{j}\cap B_{9}(0)$ (to avoid any irrelevant issues with the behavior of $\Sigma_{j}$ near its boundary).
For $z \in B_{6}(0)$, we define
\[
\delta(\Sigma_{j},z) : = \inf\left\{ r>0 : \begin{aligned} \Sigma_{j}^{r}:= \Sigma_{j}\setminus \overline{B_{r}(z)} \text{ is smooth \qquad} \\\text{and } |A_{\Sigma_{j}}|(x)|x-z| \leq \eta \text{ for all }x\in \Sigma_{j}^{r} \end{aligned} \ \right\}.
\]
Note that $\delta(\Sigma_{j},y) \to 0$ as $j\to \infty$, by the partial sheeting result discussed above.
For every $j$ set $\delta_{j} : = \inf_{z \in B_{6}(0)} \delta(\Sigma_{j},z)$ and choose $z_{j,k}$ with $\delta(\Sigma_{j},z_{j,k}) \to \delta_{j}$ as $k\to\infty$. Passing to a subsequence, we may assume that $z_{j,k}\to z_{j} \in B_{6}(0).$ We claim that $\delta(\Sigma_{j},z_{j}) = \delta_{j}$. If not, there is $\epsilon > 0$ and $w \in \Sigma_{j} \setminus B_{\delta_{j}+2\epsilon}(z_{j})$ with either (i) $w \in \sing \Sigma_{j}$ or (ii) $|A_{\Sigma_{j}}|(w) |w-z_{j}| > \eta + 2\epsilon$. Note that $w \in \Sigma_{j}\setminus B_{\delta_{j} + \epsilon}(z_{j,k})$ for $k$ sufficiently large. Thus, in case (i), we find that, by the definition of $\delta(\cdot,\cdot)$, $\delta(\Sigma_{j},z_{j,k}) \geq \delta_{j} + \epsilon$ for all $k$ sufficiently large. This contradicts the choice of $z_{j,k}$. Similarly, in case (ii) we have that
\[
|A_{\Sigma_{j}}|(w)|w-z_{j,k}| > \eta + \epsilon,
\]
for $k$ sufficiently large, since $|w-z_{j,k}|\to |w-z_{j}|$ as $k\to\infty$. Again, this yields a contradiction, as before.
Thus, we have arranged that $z_{j}$ minimizes $\delta(\Sigma_{j},\cdot)$. Since $\delta(\Sigma_{j}, y) \to 0$, we also have that $\delta_{j} \to 0$ and consequently, it follows from the definition of $\delta_{j}$ and
(\ref{eq:curv-away-from-pt}) that $z_{j} \to y.$ We claim that $\delta_{j} =0$ for all sufficiently large $j$. Arguing by contradiction, we assume (upon extracting a subsequence that we do not relabel) that $\delta_{j}>0$ for all $j$. Using this, we now perform the relevant blow-up argument. Define
\[
\tilde\Sigma_{j} = \delta_{j}^{-1}(\Sigma_{j}-z_{j}).
\]
Note that as in the proof of Theorem \ref{theo:weakly-st-low-dim-est}, the monotonicity formula implies that $\mathcal H^{n}(\tilde \Sigma_{j} \cap B_{R}(0)) \leq \tilde \Lambda R^{n}$ for some $\tilde \Lambda=\tilde \Lambda(\Lambda, n, H_0)$. Moreover, the choice of $\delta_{j}$ implies that $\tilde\Sigma_{j}\setminus B_{1}$ is smooth and satisfies
\begin{equation}\label{eq:sheeting-thm-proof-curv-est}
|A_{\tilde\Sigma_{j}}|(x)|x| \leq \eta
\end{equation}
for $x \in\tilde\Sigma_{j}\setminus B_{1}$. Note also that $|H_{\tilde{\Sigma}_j}|\leq \delta_j \frac{H_0}{10} \to 0$. The area bounds and weak stability imply, by the regularity and compactness theorems in \cite{BW:stableCMC} (recalled in Theorem~\ref{theo:BWregcomp}, Appendix \ref{appendixBW} below), that $\tilde \Sigma_{j}$ converge in the varifold sense to $\tilde V$, which is stationary, weakly stable, smoothly embedded outside of a co-dimension $7$ singular set and satisfies $\Vert \tilde V\Vert(B_{R}(0)) \leq \tilde\Lambda R^{n}$. Furthermore, by the curvature estimates (\ref{eq:sheeting-thm-proof-curv-est}), the support of $\tilde V$ is a smooth hypersurface $\tilde \Sigma_{\infty}$ outside of $B_{1}(0)$ satisfying
\[
|A_{\tilde\Sigma_{\infty}}|(x)|x| \leq \eta
\]
and the convergence is smooth on compact sets outside $B_{1}(0)$.
Thus, by Proposition \ref{prop:bernstein-all-dim}, each connected component of the support of $\tilde V$ is a hyperplane and so ${\rm spt} \, \|\tilde V\|$ is made up of finitely many parallel hyperplanes.
Now, we again appeal to Theorem \ref{theo:sheeting-away-from-pt} to conclude that the convergence of $\tilde \Sigma_{j}$ to $\tilde V$ occurs smoothly (possibly with multiplicity) away from some fixed point $\tilde z \in
\supp \|\tilde V\|$ (if the sheeting actually occurs everywhere, we simply set $\tilde z =0$). Note that the curvature estimates \eqref{eq:sheeting-thm-proof-curv-est} imply that $|\tilde z| \leq 1$.
Define $\hat z_{j} = z_{j} + \delta_{j}\tilde z$ and let $\hat\delta_{j} : = \delta(\Sigma_{j},\hat z_{j})$. Since $\hat\delta_{j} \geq \delta_{j}/2$ (by the definition of $\delta_{j}$), there is $w_{j}\in\Sigma_{j} \setminus B_{\delta_{j}/2}(\hat z_{j})$ so that either (i) $w_{j}\in\sing \Sigma_{j}$, or (ii) $|A_{\Sigma_{j}}|(w_{j})|w_{j} - \hat z_{j}| > \eta$. We note that in either case we have that
\begin{equation}\label{eq:sheeting-pf-scale-bettter-pt}
\liminf_{j\to\infty} \delta_{j}^{-1} |w_{j} - \hat z_{j}| = \infty.
\end{equation}
(For if not, then defining $\tilde w_{j} = \delta_{j}^{-1}(w_{j}- z_{j})$, we find that, in the scale of $\tilde\Sigma_{j}$ discussed above,
\[
|\tilde z- \tilde w_{j}| = \delta_{j}^{-1} |(\hat z_{j} - z_{j})-(w_{j}-z_{j})| = \delta_{j}^{-1}|w_{j} - \hat z_{j}|
\]
is bounded above (after passing to a subsequence) and bounded below by $\frac 12$
(since $w_{j}\not \in B_{\delta_{j}/2}(\hat z_{j})$), but because $\tilde\Sigma_{j}$ sheets away from $\tilde z$, we find that either (i) or (ii) would be a contradiction.)
Finally, we define
\[
\check \Sigma_{j} := | w_{j}- \hat z_{j}|^{-1}(\Sigma_{j}-\hat z_{j})
\]
and set
\[
\check w_{j} : = |w_{j}-\hat z_{j}|^{-1}(w_{j}-\hat z_{j}), \qquad \check z_{j} : = | w_{j} -\hat z_{j}|^{-1}(z_{j}-\hat z_{j}) = - \delta_{j} |w_{j} - \hat z_{j}|^{-1} \tilde z.
\]
Note that it follows from (\ref{eq:curv-away-from-pt}), (i) and (ii) that $w_{j} \to y$ and hence (since $z_{j} \to y$) that $|w_{j} - \hat z_{j}| \to 0.$ We have already shown that outside of $B_{\delta_{j}}(z_{j})$, $\Sigma_{j}$ is smooth and satisfies \eqref{eq:sheeting-thm-proof-curv-est}. This implies that $\check\Sigma_{j}$ is smooth outside of $B_{\delta_{j}|w_{j}-\hat z_{j}|^{-1}}(\check z_{j})$ and additionally satisfies
\[
|A_{\check\Sigma_{j}}|(x) |x-\check z_{j}| \leq \eta.
\]
By \eqref{eq:sheeting-pf-scale-bettter-pt}, and recalling that $|\tilde z| \leq 1$, we have that $\delta_{j}|\hat z_{j}-w_{j}|^{-1} \to 0$ and $\check z_{j}\to 0$. As before, we may take the varifold limit $\check V$ of $\check \Sigma_{j}$, and the curvature estimates we have just established show that this limit occurs smoothly (possibly with multiplicity) outside of $B_{1/2}(0)$. The curvature estimates pass to the limit so by Proposition~\ref{prop:bernstein-all-dim}
each connected component of ${\rm spt} \, \|\check V\|$ is a hyperplane. Thus, since $|\check w_{j}|=1$, we find that $\check w_{j}\not \in \sing\check \Sigma_{j}$ and $|A_{\check\Sigma_{j}}|(\check w_{j})\to 0$, so
\[
|A_{\check\Sigma_{j}}|(\check w_{j})|\check w_{j}|\to 0.
\]
Rescaling to the original scale, this contradicts both (i) and (ii) above (concerning $w_{j}$). This contradiction establishes that $\delta_{j} = 0$ for $j$ sufficiently large.
We have now shown that $\Sigma_{j}$ is smooth away from $z_{j}$ and satisfies
\[
|A_{\Sigma_{j}}|(x) |x-z_{j}|\leq \eta
\]
for all $x \in \Sigma_{j}\setminus \{z_{j}\}$. If $z_{j} \not \in \Sigma_{j}$ there is nothing further to show. Else, arguing as in the beginning of the proof of Proposition~\ref{prop:bernstein-low-dim}, we see that for fixed $j$ and any small ball $B_{\rho}(z_{j})$, the hypersurface $\Sigma_{j}$ is strongly stable either in $B_{\rho}(z_{j})$ or in $B_{9} \setminus \overline{B_{\rho}(z_{j})};$ it follows from this fact that for each fixed $j$, there is $\rho>0$ such that $\Sigma_{j}$ is strongly stable in the annulus $B_{\rho}(z_{j}) \setminus \{z_{j}\}$, and hence in the ball $B_{\rho}(z_{j})$. Moreover, by the curvature estimates and Lemma \ref{lemm:cone-small-curvature}, any tangent cone to $\Sigma_{j}$ at $z_{j}$ must be supported on a hyperplane. Thus, $\Sigma_{j}$ is smooth at $z_{j}$ by \cite[Theorems 3.1 and 3.3]{BW:stableCMC}. This completes the proof of Proposition \ref{prop:curv-est-sheeting-all-dim}.
\end{proof}
\begin{proof}[Completion of the proof of Theorem \ref{theo:main-weakly-st-sheeting}]
As discussed in the beginning of this section, $\Sigma_{j}$ converge in the sense of varifolds to the hyperplane $\{x^{n+1}=0\}$ with multiplicity $k$. We claim that the curvature of $\Sigma_{j}$ is uniformly bounded on $\Sigma_{j}\cap B_{9}(0)$. Let $\lambda_{j} = \max_{\Sigma_j \cap B_9(0)} |A_{\Sigma_j}|$ and assume for a contradiction (upon extracting a subsequence that we do not relabel) that $\lambda_{j} \to \infty$ as $j \to \infty$. By applying Proposition \ref{prop:curv-est-sheeting-all-dim} iteratively we may find a further subsequence (not relabeled) so that $\Sigma_j$ is a smooth immersion in the whole ball $B_9(0)$ and there is $z_{j}\in B_{6}(0)$ and $\eta_{j}\to 0$ so that $\Sigma_{j}$ satisfies the curvature estimates
\begin{equation}\label{eq:final-curv-est-sheeting-all-dim}
|A_{\Sigma_{j}}|(x)|x-z_{j}| \leq \eta_{j}
\end{equation}
for all $x\in B_{9}(0)$. Note that since $z_j \in B_6(0)$, it follows from (\ref{eq:final-curv-est-sheeting-all-dim}) that $|A_{\Sigma_{j}}|$ is uniformly bounded in the annulus $B_9(0)\setminus \overline{B}_8(0)$ and therefore
the maximum of $|A_{\Sigma_j}|$ in $\Sigma_{j} \cap B_{9}(0)$ is achieved at a point $y_j\in B_8(0)$. We set
\[
\tilde\Sigma_{j} = \lambda_{j}(\Sigma_{j}-y_{j}).
\]
By construction we have $|A_{\tilde\Sigma_{j}}|(0) = 1$ and that $|A_{\tilde\Sigma_{j}}|$ is uniformly bounded on compact subsets of ${\mathbb R}^{n+1}$, so $\tilde\Sigma_{j}$ converges smoothly on compact subsets of $\mathbb{R}^{n+1}$ to a non-flat smooth hypersurface $\tilde\Sigma_{\infty}$. On the other hand, the estimate \eqref{eq:final-curv-est-sheeting-all-dim} is scale invariant, so for $\tilde z_{j} = \lambda_{j}(z_{j}-y_{j})$, we see that
\[
|A_{\tilde\Sigma_{j}}|(x) |x-\tilde z_{j}| \leq \eta_{j}.
\]
Considering $x=0$ here, we find that $\tilde z_{j}\to 0$, since $\eta_{j}\to 0$. Hence, passing this inequality to the limit, we find that
\[
|A_{\tilde\Sigma_{\infty}}|(x) |x| = 0,
\]
contrary to the fact that $\tilde\Sigma_{\infty}$ is non-flat.
This implies that the curvature of $\Sigma_{j}$ (in the original scale, for the original sequence) was uniformly bounded in $B_8(0)$. Since $\Sigma_j$ converges to a hyperplane, the uniform curvature bounds and standard elliptic estimates conclude the proof.
\end{proof}
\begin{rema}
It is possible to conclude in a slightly different manner, by using the curvature estimates from Proposition \ref{prop:curv-est-sheeting-all-dim} with $\eta < 1$ to prove that the function $f_{j}(x) : = |x-z_{j}|^{2}$ is strictly convex (for any $j$ large enough). From this, if we assume sheeting of $\Sigma_j$ away from a point $y$ (second alternative of Theorem \ref{theo:sheeting-away-from-pt}), it is not hard to argue (using a max-min argument) that distinct sheets of $\Sigma_{j} \cap \left(B_9(0) \setminus B_{1/2}(y)\right)$ cannot be connected in $B_{1/2}(y)$ (endowing $\Sigma_j$ with the topology of the immersion, not of the embedding); therefore each connected component of $\Sigma_j$ in $B_9(0)$ contains exactly one sheet of $\Sigma_{j} \cap \left(B_9(0) \setminus B_{1/2}(y)\right)$. Theorem \ref{theo:main-weakly-st-sheeting} then follows from Allard's regularity theorem applied to each connected component individually, since standard arguments show that each component converges to the hyperplane with multiplicity $1$ in the sense of varifolds. This alternative argument seems to be necessary for the applications to bounded index surfaces mentioned in Section \ref{boundedindexminimal} (cf.\ \cite{CKM}).
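To justify the convexity claim (with the usual sign conventions for the second fundamental form), note that for $f_{j}(x) = |x-z_{j}|^{2}$ restricted to $\Sigma_{j}$ with unit normal $\nu$, and any unit tangent vector $X$ at a point $x$ with $|A_{\Sigma_{j}}|(x)|x-z_{j}| \leq \eta < 1$,
\[
\nabla^{2}_{\Sigma_{j}} f_{j}(X,X) = 2 - 2\langle x - z_{j}, \nu\rangle\, A_{\Sigma_{j}}(X,X) \geq 2\bigl(1 - |x-z_{j}|\,|A_{\Sigma_{j}}|(x)\bigr) \geq 2(1-\eta) > 0.
\]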
\end{rema}
\section{Introduction}
\label{sec:introduction}
By 2019, online video may account for more than 80\% of global Internet traffic~\cite{CISCO2016}. Not only are Internet users watching more online video, but they are also recording themselves and producing a growing number of videos to share their day-to-day routines. The ubiquity of inexpensive video recording devices and the lower costs of producing and storing videos give people unprecedented freedom to create increasingly long first-person videos. On the other hand, such freedom often leads to long and tedious final videos, since most everyday activities do not merit recording.
A central challenge is to selectively highlight the meaningful parts of a video without losing the overall message that the video should convey. Although video summarization techniques~\cite{Molino2017, Mahasseni2017} provide quick access to a video's information, they only return segmented clips or single images of the relevant moments. By omitting the frames immediately preceding and following a clip, summarization may lose the clip's context~\cite{Plummer2017}. Hyperlapse techniques yield quick access to the meaningful parts while preserving the whole video context, fast-forwarding the video through adaptive frame selection~\cite{Kopf2014,Joshi2015,Poleg2015}. Although Hyperlapse techniques address the shakiness caused by fast-forwarding first-person videos, treating every frame as equally important is a major weakness of these techniques. In a lengthy stream recorded in always-on mode, some portions of the video are undoubtedly more relevant than others.
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\linewidth]{Introduction.pdf}
\caption{The fast-forward methodology. A weighted sampling combined with a smoothing transition step is applied to tackle the abrupt camera movements. The activation vector indicates which frames compose the fast-forward video. A smoothing step is applied to the transitions between the selected frames.}
\label{fig:introduction}
\end{figure}
Most recently, methods on fast-forward videos emphasizing relevant content have emerged as promising and effective approaches to deal with the tasks of visual smoothness and semantic highlighting of first-person videos. The relevant information is emphasized by playing faster the non-semantic segments and applying a smaller speed-up rate in the semantic ones~\cite{Ramos2016,Silva2016,Lai2017,Silva2018} or even playing them in slow-motion~\cite{Yao2016}.
To achieve both objectives, visual smoothness and semantic highlighting, these techniques describe the video frames and their transitions by features, and then formulate an optimization problem over a combination of these features. Consequently, the computation time and memory usage are impacted by the number of features used, since the search space grows exponentially. Therefore, current Hyperlapse methods are not scalable in the number of features.
In this work, we present a new semantic fast-forward method that solves the adaptive frame sampling by modeling the frame selection as a Minimum Sparse Reconstruction (MSR) problem (Figure~\ref{fig:introduction}). The video is represented as a dictionary, where each column describes a video frame. The frame selection is encoded by the activation vector, and the fast-forwarding effect is achieved through the sparsity of the problem. In other words, we look for the smallest set of frames that reconstructs the original video with small error. Additionally, to attenuate abrupt camera movements in the final video, we apply a weighted version of the MSR problem, in which frames related to camera movement are more likely to be sampled.
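To make the modeling concrete, the unweighted sparse frame selection can be sketched with a greedy pursuit. The following is an illustrative simplification, not our exact formulation: the dictionary columns are synthetic frame descriptors, the reconstruction target is taken to be the aggregate descriptor of the whole video, and the solver is a plain orthogonal matching pursuit:

```python
import numpy as np

# Illustrative sparse frame selection: greedily activate the fewest columns
# of the dictionary D (one column per frame) that reconstruct the aggregate
# video signal v within a relative tolerance.
def select_frames(D, v, tol=1e-2, max_frames=None):
    T = D.shape[1]
    max_frames = max_frames or T
    active, residual = [], v.copy()
    for _ in range(max_frames):
        # Matching-pursuit step: pick the frame most correlated with the
        # residual, then re-fit the coefficients on the active set.
        scores = np.abs(D.T @ residual)
        scores[active] = -np.inf
        active.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(D[:, active], v, rcond=None)
        residual = v - D[:, active] @ coef
        if np.linalg.norm(residual) <= tol * np.linalg.norm(v):
            break
    x = np.zeros(T)      # sparse activation vector over the frames
    x[active] = coef
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 200))   # 200 frames, 64-dim descriptors
D /= np.linalg.norm(D, axis=0)
v = D.sum(axis=1)                    # aggregate "video" signal
x = select_frames(D, v, tol=0.5)
print(np.count_nonzero(x), "of", D.shape[1], "frames selected")
```

The weighted variant discussed above would bias the selection step (e.g., by scaling the correlation scores per frame), favoring frames whose sampling yields a smoother output.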
In the proposed modeling, the scalability of features is no longer a problem, because using a high-dimensional descriptor balances the dictionary dimensions, which is recommended for solving the MSR problem, and does not substantially affect the computational cost or memory usage. We experimentally demonstrate that our approach creates videos containing more relevant information than the state-of-the-art semantic fast-forwarding method, while being as smooth as those produced by state-of-the-art Hyperlapse techniques.
The contributions of our work are: $i$) a set of methods capable of handling larger feature vectors to better describe the frames and the video transitions, addressing abrupt camera motions without increasing the computational processing time; $ii$) a new labeled 80-hour multimodal (3D Inertial Movement Unit, GPS, and RGB-D camera) dataset of first-person videos covering a wide range of activities such as attractions, party, beach, tourism, and academic life. Each frame is labeled with respect to the activity, scene, recorder ID, interaction, and attention.
\section{Related Work}
\label{sec:related_work}
The selective highlighting of meaningful parts of first-person videos has been extensively studied in the past few years. Approaches can be broadly classified into Video Summarization and Hyperlapse methods.
\paragraph{Video Summarization.}
The goal of video summarization is to produce a compact visual summary containing the most discriminative and informative parts of the original video. Techniques typically use features ranging from low-level, such as motion and color~\cite{Zhao2014, Gygli2015}, to high-level (\eg, important objects, user preferences)~\cite{Kim2014, Yao2016, Sharghi2017}.
Lee~\etal~\cite{Lee2012}~exploit interaction level, gaze, and object detection frequency as egocentric properties to create a storyboard of keyframes with important people and objects. Lu~and~Grauman~\cite{Lu2013}~present video skims as summaries instead of static keyframes. After splitting the video into subshots, they compute the mutual influence of objects and estimate the subshot importance to select the optimal chain of subshots.
Recent approaches are based on highlight detection~\cite{Lin2015, Bettadapura2016, Yao2016} and vision-language models~\cite{Sharghi2017, Plummer2017, Panda2017}. Bettadapura~\etal~\cite{Bettadapura2016}~propose an approach for identifying picturesque highlights. They use composition, symmetry and color vibrancy as scoring metrics and leverage GPS data to filter frames by the popularity of the location. Plummer~\etal~\cite{Plummer2017}~present a semantically-aware video summarization. They optimize a linear combination of visual, \ie, representativeness, uniformity, interestingness, and vision-language objectives to select the best subset of video segments.
Sparse Coding has been successfully applied to a wide variety of vision tasks~\cite{Wright2009, Zhao2011, Cong2012, Zhao2014, oliveira_tip2014, Mei2014, Mei2015pr}. In video summarization, Cong~\etal~\cite{Cong2012}~formulate video summarization as a dictionary selection problem. They propose a novel model to either extract keyframes or generate video skims using sparsity consistency. Zhao~\etal~\cite{Zhao2014}~propose a method based on online dictionary learning that generates summaries on-the-fly. They use sparse coding to eliminate repetitive events and create a representative short version of the original video. The main benefit of using sparse coding for frame selection is that selecting a different number of frames does not incur additional computational cost. Our work differs from sparse-coding video summarization in that it handles shakiness in the transitions via a weighted sparse frame sampling solution and is capable of dealing with the temporal gaps caused by discontinuous skims.
\paragraph{Hyperlapse.}
A pioneering work in creating hyperlapse from casual first-person videos was conducted by Kopf~\etal~\cite{Kopf2014}. The output video comes from the use of image-based rendering techniques such as projecting, stitching and blending after computing the optimal trajectory of the camera poses. Despite their remarkable results, the method has a high computational cost and requires camera motion and parallax to compute the 3D model of the scene.
Recent strategies focus on selecting frames using different adaptive approaches to adjust the density of frame selection according to the cognitive load. Poleg~\etal~\cite{Poleg2015}~model the frame selection as a shortest path in a graph. The nodes of this graph represent the frames of the original video, and the edge weights between pairs of frames are proportional to the cost of including the pair sequentially in the output video. An extension for creating a panoramic hyperlapse of a single or multiple input videos was proposed by Halperin~\etal~\cite{Halperin2017}. They enlarge each of the input frames using neighboring frames from the videos to reduce the perception of instability. Joshi~\etal~\cite{Joshi2015} present a method based on dynamic programming to jointly select an optimal set of frames regarding the desired target speed-up and the smoothness in frame-to-frame transitions.
Although these solutions have succeeded in creating short and watchable versions of long first-person videos, they often remove segments of high relevance to the user, since the methods handle all frames as having the same semantic relevance.
\paragraph{Semantic Hyperlapse.}
Unlike traditional hyperlapse techniques, where the goal is to optimize the output number of frames and the visual smoothness, semantic hyperlapse techniques also account for the semantic relevance of each frame. Ramos~\etal~\cite{Ramos2016}~introduced an adaptive frame sampling process that embeds semantic information. The methodology assigns scores to frames based on the detection of predefined objects that may be relevant to the recorder. The rate of dropped frames is a function of the relative semantic load and the visual smoothness. Later, Silva~\etal~\cite{Silva2016} extended Ramos~\etal's method with a better semantic temporal segmentation and an egocentric video stabilization process applied to the fast-forward output. The drawbacks of these works include abrupt changes in the acceleration and shaky footage at every large lateral swing of the camera.
Most recently, two new hyperlapse methods for first-person videos were proposed: Lai~\etal's system~\cite{Lai2017} and the Multi-Importance Fast-Forward (MIFF)~\cite{Silva2018} method. Lai~\etal's system converts $ 360^\circ $ videos into normal field-of-view hyperlapse videos. They extract semantics through regions of interest using spatial-temporal saliency and semantic segmentation to guide camera path planning. Low acceleration rates are assigned to interesting regions to emphasize them in the hyperlapse output. In the MIFF method, the authors apply a learning approach to infer the users' preferences and determine the relevance of a given frame. MIFF calculates different speed-up rates for segments of the video, which are extracted via an iterative temporal segmentation process according to the semantic content.
Although not focused on the creation of hyperlapses, Yao~\etal~\cite{Yao2016}~present a highlight-driven summarization approach that generates skimming and timelapse videos as summaries of first-person videos. They assign scores to the video segments using late fusion of spatial and temporal deep convolutional neural networks (DCNNs). The segments with higher scores are selected as video highlights. For the video timelapse, they calculate proper speed-up rates such that the summary is compressed in the non-highlight segments and expanded in the highlight segments. It is noteworthy that timelapse videos do not handle the smoothness constraint, which is a mandatory requirement for hyperlapse videos. Unlike the aforementioned work, our approach jointly optimizes semantics, length, and smoothness to create semantic hyperlapses. Most importantly, it preserves the path taken by the recorder, avoiding losing the flow of the story and thus conveying the full message of the original video in a shorter and smoother version.
\section{Methodology}
\label{sec:methodology}
In this section, we describe a new method for creating smooth fast-forward videos that retains most of the semantic content of the original video while requiring reduced processing time. Our method consists of four primary steps: i) creation and temporal segmentation of a semantic profile of the input video; ii) weighted sparse frame sampling; iii) Smoothing Frame Transitions (SFT); and iv) video compositing.
\begin{figure*}
\centering
\includegraphics[width=0.975\linewidth]{Methodology.pdf}
\caption{Main steps of our fast-forward methodology. For each segment created in the temporal semantic profile segmentation (a), the frames are described (b) and weighted based on the computed camera movement (c). The frames are sampled by minimizing a locality-constrained reconstruction problem (d). The smoothing step is applied to tackle the abrupt transitions of the selected frames (e).}
\label{fig:methodology}
\end{figure*}
\subsection{Temporal Semantic Profile Segmentation}
The first step of a semantic fast-forward method is the creation of a semantic profile of the input video. Once a semantic is set (\eg, faces, types of objects of interest, scenes, \etc), a video score profile is created by extracting the relevant information and assigning a semantic score to each frame of the video (Figure~\ref{fig:methodology}-a). The confidence of the classifier, combined with the locality and size of the regions of interest, is used as the semantic score~\cite{Ramos2016,Silva2016}.
The set of scores defines a profile curve, which is used for segmenting the input video into semantic and non-semantic sequences. Next, a refinement process is executed on the semantic segments, creating levels of importance with regard to the defined semantic. Finally, speed-up rates are calculated based on the length and level of relevance of each segment. The rates are calculated such that the semantic segments are played slower than the non-semantic ones and the whole video achieves the desired speed-up. We refer the reader to~\cite{Silva2018} for a more detailed description of the multi-importance semantic segmentation.
The output of this step is a set of segments that feed the following steps, which process each segment separately.
\subsection{Weighted Sparse Frame Sampling}
In general, hyperlapse techniques solve the adaptive frame selection problem by searching for the optimal configuration (\eg, shortest path in a graph or dynamic programming) in a representation space where different types of features are combined to represent frames or transitions between frames. A large number of features can be used to improve the representation of frames or transitions, but such a solution leads to a high-dimensional representation space, increasing computation time and memory usage. We address this representation problem using a sparse frame sampling approach (Figure~\ref{fig:methodology}-d).
Let ${D=[\mathbf{d}_1, \mathbf{d}_2, \mathbf{d}_3, \cdots, \mathbf{d}_n] \in \mathbb{R}^{f \times n}}$ be a segment of the original video with $n$ frames represented in our feature space. Each column ${\mathbf{d}_i \in \mathbb{R}^{f}}$ stands for the feature vector of the ${i}$-th frame. Let the video story ${\mathbf{v} \in \mathbb{R}^{f}}$ be defined as the sum of the frame features over the whole segment, \ie, ${\mathbf{v} = \sum_{i=1}^n \mathbf{d}_i}$. The goal is to find an optimal subset ${S=[\mathbf{d}_{s_1},\mathbf{d}_{s_2}, \mathbf{d}_{s_3},\cdots,\mathbf{d}_{s_m}] \in \mathbb{R}^{f \times m}}$, where ${m \ll n}$ and ${\{s_1,s_2,s_3,\cdots,s_m\}}$ are indices of frames in the segment.
Let the vector~${\boldsymbol{\alpha} \in \mathbb{R}^{n}}$ be an activation vector indicating whether or not ${\mathbf{d}_i}$ is in the set $S$. The problem of finding the values of $\boldsymbol{\alpha}$ that lead to a small reconstruction error of $\mathbf{v}$ can be formulated as a Locality-constrained Linear Coding (LLC)~\cite{Wang2010} problem as follows:
\begin{equation}
\label{eq:LLC}
\argmin{\boldsymbol{\alpha}~\in~\mathbb{R}^{n}} { \norm{\mathbf{v} - D~\boldsymbol{\alpha}}^{2} + \lambda_\alpha \norm{\mathbf{g} \odot \boldsymbol{\alpha}}^2 },
\end{equation}
where ${\mathbf{g} \in \mathbb{R}^{n}}$ is the vector of Euclidean distances from each dictionary entry $\mathbf{d}_i$ to the segment representation $\mathbf{v}$, $\odot$ is the element-wise multiplication operator, and ${\lambda_{\alpha}}$ is the regularization term controlling the locality of the vector $\boldsymbol{\alpha}$.
The benefit of using the LLC formulation instead of the traditional Sparse Coding (SC) model is twofold: the LLC provides local smooth sparsity, and it can be solved analytically, which results in a smoother final fast-forward video at a lower computational cost.
\paragraph{Weighted Sampling.}
Abrupt camera motions are a challenging issue for fast-forwarding techniques, since they might lead to shaky and nauseating videos. To tackle this issue, we use a weighted Locality-constrained Linear Coding formulation, in which each dictionary entry has a weight assigned to it:
\begin{equation}
\label{eq:LLCW}
\boldsymbol{\alpha^\star} = \argmin{\boldsymbol{\alpha}~\in~\mathbb{R}^{n}} { \norm{\mathbf{v} - D~\boldsymbol{\alpha}}^{2} + \lambda_\alpha \norm{W~\mathbf{g} \odot \boldsymbol{\alpha}}^2 },
\end{equation}
where $W$ is a diagonal matrix built from the weight vector ${\mathbf{w} \in \mathbb{R}^n}$, \ie, ${W\triangleq\text{diag}(\mathbf{w})}$.
This weighted formulation provides a flexible solution: we assign different weights to frames based on the camera movement and can thus change their contribution to the reconstruction without significantly increasing the sparsity/locality term.
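Since Equation~\ref{eq:LLCW} is a least-squares problem with a diagonal regularizer, it admits the closed-form solution ${\boldsymbol{\alpha}^\star = (D^\top D + \lambda_\alpha\,\mathrm{diag}(W\mathbf{g})^2)^{-1} D^\top \mathbf{v}}$. The sketch below illustrates this (function and variable names are ours, not code from the paper; with ${\lambda_\alpha=0}$ and more frames than features, the system becomes singular):

```python
import numpy as np

def weighted_llc(D, v, g, w, lam):
    """Closed-form minimizer of ||v - D a||^2 + lam * ||(w * g) . a||^2.

    D   : (f, n) dictionary, one column per frame descriptor
    v   : (f,)   segment representation (sum of the columns of D)
    g   : (n,)   Euclidean distance of each column to v
    w   : (n,)   per-frame weights (lower inside abrupt-motion intervals)
    lam : scalar regularizer controlling sparsity/locality
    """
    q = w * g
    A = D.T @ D + lam * np.diag(q * q)   # normal equations with a
    return np.linalg.solve(A, D.T @ v)   # diagonal Tikhonov term
```

Larger values of `lam` shrink the activations toward zero, which is what drives the frame selection toward sparsity.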
Let ${C \in \mathbb{R}^{c \times n}}$ be the Cumulative Displacement Curves~\cite{Poleg2014}, \ie, the cumulative sums of the Optical Flow magnitudes, computed over the horizontal displacements in the ${5\times5}$ grid windows of the video frames (Figure~\ref{fig:methodology}-c). Let ${C' \in \mathbb{R}^{c \times n}}$ be the derivative of each curve in $C$ \wrt time. We assume frame $i$ to be within an interval of abrupt camera motion if all curves in $C'$ present the same sign (positive/negative) at point $i$, which represents a head-turning movement~\cite{Poleg2014}. We assign a lower weight to these motion intervals so that they are sampled more densely. We empirically set the weights to ${\mathbf{w}_i=0.1}$ and ${\mathbf{w}_i=1.0}$ for the frame features inside and outside the intervals, respectively.
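This weighting rule can be sketched as follows (variable names and the layout of the input `dC`, which plays the role of $C'$, are our assumptions; the $0.1$/$1.0$ values follow the text):

```python
import numpy as np

def motion_weights(dC, low=0.1, high=1.0):
    """dC: (c, n) array holding the time derivative of each of the c
    cumulative displacement curves at the n frames. A frame lies inside
    an abrupt camera-motion interval when every curve has the same sign
    there (a head-turning movement); such frames get the lower weight."""
    same_sign = np.all(dC > 0, axis=0) | np.all(dC < 0, axis=0)
    return np.where(same_sign, low, high)
```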
\paragraph{Speed-up Selection.}
All frames related to the activated positions of the vector~$\boldsymbol{\alpha^\star}$ are selected to compose the final video. Since ${\lambda_\alpha}$ controls the sparsity, it also manages the speed-up rate of the created video. Setting ${\lambda_\alpha}$ to zero activates all frames, leading to a complete reconstruction.
To achieve the desired speed-up, we perform an iterative search starting from zero, as depicted in Algorithm~\ref{alg:lambda_adjustment}. The function ${NumberOfFrames(\lambda)}$ (Line~\ref{alg_line:number_selected_frames}) solves Equation~\ref{eq:LLCW} using $\lambda$ as the value of ${\lambda_\alpha}$ and returns the number of activations in $\boldsymbol{\alpha^\star}$.
\begin{algorithm}[!t]
\caption{Lambda value adjustment}
\label{alg:lambda_adjustment}
\begin{algorithmic}[1]
\Require \small{Desired length of the final video ${VideoLength}$.}
\Ensure \small{The ${\lambda_\alpha}$ value to reach the desired number of frames.}
\Function{Lambda\_Adjustment}{${VideoLength}$}
\State ${\lambda_\alpha \gets 0}$ , ${step \gets 0.1}$ , ${nFrames \gets 0}$
\While{${nFrames \neq VideoLength} $}
\State $ nFrames \gets NumberOfFrames(\lambda_\alpha + step)$
\label{alg_line:number_selected_frames}
\If {${nFrames \geq VideoLength}$}
\State ${\lambda_\alpha \gets \lambda_\alpha + step}$
\Else
\State ${step \gets step / 10}$
\EndIf
\EndWhile
\State \Return{${\lambda_\alpha}$}
\EndFunction
\end{algorithmic}
\end{algorithm}
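Algorithm~\ref{alg:lambda_adjustment} can be prototyped as below; `num_frames` stands for ${NumberOfFrames(\lambda)}$ and is assumed to be non-increasing in $\lambda$ (a larger regularizer activates fewer frames). The iteration cap is a safeguard of ours, not part of the algorithm:

```python
def adjust_lambda(num_frames, video_length, step=0.1, max_iter=10_000):
    """Refine lambda until exactly `video_length` frames are activated.

    `num_frames(lam)` solves the weighted LLC problem for `lam` and
    returns the number of nonzero activations in the solution."""
    lam, n = 0.0, None
    for _ in range(max_iter):
        if n == video_length:
            break
        n = num_frames(lam + step)
        if n >= video_length:   # still too many frames: accept the step
            lam += step
        else:                   # overshot: refine with a smaller step
            step /= 10
    return lam
```

Each time the step overshoots the target length, it is divided by ten, so the search converges to a value of ${\lambda_\alpha}$ that yields the desired number of frames.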
\paragraph{Frame Description.}
The feature vector of the $i$-th frame, ${\mathbf{d}_i \in \mathbb{R}^{446}}$ (Figure~\ref{fig:methodology}-b), is composed of the concatenation of the following terms.
The ${\mathbf{hof_m} \in \mathbb{R}^{50}}$ and ${\mathbf{hof_o} \in \mathbb{R}^{72}}$ terms are histograms of the optical flow magnitudes and orientations of the $i$-th frame, respectively. The appearance descriptor, ${\mathbf{a} \in \mathbb{R}^{144}}$, is composed of the mean, standard deviation, and skewness values of the HSV color channels of the windows in a ${4\times4}$ grid over frame $i$.
To define the content descriptor, ${\mathbf{c} \in \mathbb{R}^{80}}$, we first use YOLO~\cite{Redmon2016} to detect the objects in frame $i$; then, we create a histogram of these objects over the $80$ classes of the YOLO architecture.
Finally, the sequence descriptor, ${\mathbf{s} \in \mathbb{R}^{100}}$, is a one-hot vector with the ${\text{mod}(i,100)}$-th feature activated.
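The assembly of the descriptor can be sketched as follows (the extraction of the optical-flow, HSV, and YOLO terms is assumed to happen elsewhere; only the concatenation and the one-hot sequence term are shown):

```python
import numpy as np

def sequence_descriptor(i, dim=100):
    """One-hot vector with the (i mod dim)-th entry activated."""
    s = np.zeros(dim)
    s[i % dim] = 1.0
    return s

def frame_descriptor(hof_m, hof_o, appearance, content, i):
    """Concatenate the per-frame terms: 50 + 72 + 144 + 80 + 100 = 446."""
    return np.concatenate([hof_m, hof_o, appearance, content,
                           sequence_descriptor(i)])
```

The sequence term encodes the temporal position of the frame within a 100-frame cycle, which injects ordering information into an otherwise order-free reconstruction objective.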
\subsection{Smoothing Frame Transitions}
A solution ${\boldsymbol{\alpha^\star}}$ does not ensure a smooth final fast-forward video. Occasionally, the solution might favor a low-error reconstruction based on small and highly detailed segments of the video. Thus, by pursuing a better reconstruction with a limited number of frames, ${\boldsymbol{\alpha^\star}}$ might ignore stationary moments or visually similar views and create videos resembling the results of summarization methods.
We address this issue by dividing the frame sampling into two steps. First, we run the weighted sparse sampling to reconstruct the video using the speed-up multiplied by a factor ${SpF}$. The resulting video contains ${1/SpF}$ of the desired number of frames. Then, we iteratively insert frames into the shakiest transitions (Figure~\ref{fig:methodology}-e) until the video reaches the exact number of frames.
Let ${I(F_x,F_y)}$ be the instability function defined by ${I(F_x,F_y)= AC(F_x,F_y)*(d_{y}-d_{x}-speedup)}$, where $d_x$ and $d_y$ denote the positions of the frames $F_x$ and $F_y$ in the original video. The function ${AC(F_x, F_y)}$ calculates the Earth Mover's Distance~\cite{Pele2009} between the color histograms of the frames $F_x$ and $F_y$. The second term of the instability function is the speed-up deviation term, which calculates how far the distance between frames $F_x$ and $F_y$, \ie, ${d_y - d_x}$, is from the desired speed-up. We identify the shakiest transition using Equation~\ref{eq:identify_peak}:
\begin{equation}
\label{eq:identify_peak}
{i^\star = \argmax{i~\in~\{1,\ldots,m-1\}}{I(F_{s_i},F_{s_{i+1}})}}.
\end{equation}
The set of frames between ${F_{s_{i^\star}}}$~and~${F_{s_{i^\star+1}}}$, \ie, the solution of Equation~\ref{eq:identify_peak}, contains visually dissimilar frames whose distance is higher than the required speed-up.
After identifying the shakiest transition, we choose, from the subset of frames ranging from ${F_{s_{i^\star}}}$ to ${F_{s_{i^\star+1}}}$, the frame ${F_{j^\star}}$ that minimizes the instability of the frame transitions as follows:
\begin{equation}
\label{eq:frame_picker}
{j^\star = \argmin{j~\in~\{s_{i^\star}+1,\ldots,s_{i^\star+1}-1\}}{I(F_{s_{i^\star}},F_j)^2 + I(F_j,F_{s_{i^\star+1}})^2} }.
\end{equation}
Equations~\ref{eq:identify_peak}~and~\ref{eq:frame_picker} can be solved by exhaustive search, since the interval is small. In this work, we use ${SpF=2}$ in the experiments. Higher values enlarge the search interval, increasing the time for solving Equation~\ref{eq:frame_picker}.
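Assuming a generic appearance cost `ac(x, y)` in place of the Earth Mover's Distance between color histograms, the two exhaustive searches can be sketched as:

```python
def instability(ac, x, y, speedup):
    """I(Fx, Fy) = AC(Fx, Fy) * (y - x - speedup), with x and y the
    positions of the two frames in the original video."""
    return ac(x, y) * (y - x - speedup)

def shakiest_transition(sel, ac, speedup):
    """Index i* of the worst transition among the selected frames
    (Equation 3)."""
    scores = [instability(ac, sel[i], sel[i + 1], speedup)
              for i in range(len(sel) - 1)]
    return max(range(len(scores)), key=scores.__getitem__)

def best_insertion(sel, i_star, ac, speedup):
    """Frame j* between the endpoints of the worst transition that
    minimizes the squared instabilities of the two new transitions
    (Equation 4)."""
    lo, hi = sel[i_star], sel[i_star + 1]
    return min(range(lo + 1, hi),
               key=lambda j: instability(ac, lo, j, speedup) ** 2 +
                             instability(ac, j, hi, speedup) ** 2)
```

With a constant appearance cost, the chosen frame simply balances the two new gaps around the required speed-up, which matches the intuition behind the speed-up deviation term.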
\subsection{Video compositing}
All selected frames of each segment are concatenated to compose the final video (Figure~\ref{fig:methodology}-f). In this last step, we also run the egocentric video stabilization proposed by Silva~\etal~\cite{Silva2016}, which is designed specifically for fast-forwarded egocentric videos. The stabilizer creates smooth transitions by applying weighted homographies. Images corrupted during the smoothing step are reconstructed using non-selected frames of the original video.
\section{Experiments}
\label{sec:exp_results}
In this section, we describe the experimental results on the Semantic Dataset~\cite{Silva2016} and on a new multimodal semantic egocentric dataset. After detailing the datasets, we present the results, followed by an ablation study on the components and an efficiency analysis.
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\linewidth]{Dataset.pdf}
\caption{Left: setup used to record videos with RGB-D camera and IMU. Center: frame samples from DoMSEV. Right: an example of the available labels for the image highlighted in green.}
\label{fig:dataset}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.883\linewidth]{Results_baselines.pdf}
\caption{Evaluation of the proposed Sparse Sampling methodology against the literature fast-forward methods. Dashed and dotted lines in (b) correspond to the mean instability indexes of the original video and the uniform sampling, respectively. Desirable values are: (a) higher, (b) lower, and (c) closer to zero.}
\label{fig:results_baselines}
\end{figure*}
\subsection{Datasets and Evaluation criterion}
\paragraph{Semantic Dataset.}
We first test our method using the Semantic Dataset proposed by Silva~\etal~\cite{Silva2016}. This dataset is composed of $11$ annotated videos. Each video is classified as having, on average, $0\%$, $25\%$, $50\%$, or $75\%$ of semantic content in its semantic portions (sets of frames with high semantic scores). For instance, in the Walking25p video, the recorder is walking and, on average, $25\%$ of the frames contain faces and/or pedestrians. It is worth noting that even a video belonging to the 0p class still contains semantics in its frames; it is classified as $0$p mainly because it does not have a minimum number of frames with high semantic scores.
Because this dataset has the annotation of the semantic load, we can use it for finding the best semantic fast-forward method, \ie, the fast-forward approach that retains the highest semantic load of the original video.
\paragraph{Multimodal Semantic Egocentric Videos.}
Aside from the Semantic Dataset, we also evaluated our approach on a new 80-hour dataset. Given the absence of unrestricted and publicly available multimodal data for egocentric tasks, we propose the 80-hour Dataset of Multimodal Semantic Egocentric Videos (DoMSEV). The videos cover a wide range of activities such as shopping, recreation, daily life, attractions, party, beach, tourism, sports, entertainment, and academic life.
The multimodal data was recorded using either a GoPro Hero\textsuperscript\texttrademark~camera or a custom-built setup composed of a 3D Inertial Movement Unit (IMU) attached to an Intel Realsense\textsuperscript\texttrademark~R200 RGB-D camera. Figure~\ref{fig:dataset} shows the setup used and a few examples of frames from the videos. Different people recorded the videos under varied illumination and weather conditions.
The recorders labeled the videos indicating the scene where each segment was taken (\eg, indoor, urban, crowded environment, \etc), the activity performed (walking, standing, browsing, driving, biking, eating, cooking, observing, in conversation, \etc), whether something caught their attention, and when they interacted with some object. Examples of labels are depicted in Figure~\ref{fig:dataset}. Also, we created a profile for each recorder representing their preferences over the $80$ classes of the YOLO classifier~\cite{Redmon2016} and the $48$ visual sentiment concepts defined by Sharghi~\etal~\cite{Sharghi2017}. To create the recorders' profiles, we asked them to indicate their interest in each class and concept on a scale from $0$ to $10$.
Table~\ref{tab:multimodal_dataset} summarizes, in the ``Info'' and ``Videos'' columns, the diversity of sensors and activities that can be found in the dataset. Due to the lack of space, we chose the videos that best represent the diversity of activities, camera models, mountings, and the presence/absence of sensor info. The dataset, source code, and the 3D model for printing the built setup are publicly available at \href{https://www.verlab.dcc.ufmg.br/semantic-hyperlapse/cvpr2018-dataset/}{https://www.verlab.dcc.ufmg.br/semantic-hyperlapse/cvpr2018}.
\paragraph{Evaluation criterion.}
The quantitative analysis presented in this work is based on three aspects: instability, speed-up, and amount of semantic information retained in the fast-forward video.
The \textit{Instability} index is measured by using the cumulative sum over the standard deviation of pixels in a sliding window over the video~\cite{Silva2018}.
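A minimal sketch of one plausible reading of this index (the window size and the spatial averaging are our assumptions, not specified here):

```python
import numpy as np

def instability_index(frames, win=3):
    """frames: (T, H, W) grayscale video as a float array. Accumulate,
    over a temporal sliding window, the mean per-pixel standard
    deviation; shakier videos yield larger values."""
    total = 0.0
    for t in range(frames.shape[0] - win + 1):
        total += frames[t:t + win].std(axis=0).mean()
    return total
```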
The \textit{Speed-up} metric is given by the difference between the achieved speed-up rate and the required value. The speed-up rate is the ratio between the number of frames in the original video and in its fast-forward version. In this work, we used $10$ as the required speed-up.
For the \textit{Semantic} evaluation, we consider the labels defined in the Semantic Dataset, which characterize the relevant information as pedestrians in the ``Biking'' and ``Driving'' sequences and faces in the ``Walking'' sequences. The semantic index is given by the ratio between the sum of the semantic content of the frames in the final video and the maximum possible semantic value (MPSV). The MPSV is the sum of the semantic scores of the $n$ top-ranked frames of the original video, where $n$ is the expected number of frames in the output video given the required speed-up.
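The semantic index can be sketched as below (our naming), with the MPSV normalizer taking the $n$ highest-scoring frames:

```python
import numpy as np

def semantic_index(scores, selected, speedup):
    """scores: per-frame semantic scores of the original video;
    selected: indices of the frames kept in the fast-forward version."""
    n = len(scores) // speedup                # expected output length
    retained = scores[selected].sum()
    mpsv = np.sort(scores)[::-1][:n].sum()    # best achievable value
    return retained / mpsv
```

An index of $1.0$ means the method kept exactly the highest-scoring frames it could have kept at that speed-up.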
\subsection{Comparison with state-of-the-art methods}
\begin{table*}[!t]
\centering
\caption{Results and videos details of a sample of the proposed multimodal dataset. Duration is the length of the video before the acceleration. RS in the Camera column stands for RealSense\texttrademark~by Intel\textsuperscript{\textregistered} and Hero is a GoPro\textsuperscript{\textregistered} line product.}
\label{tab:multimodal_dataset}
\setlength{\tabcolsep}{2.9pt}
\small{
\begin{tabular}{lrrrrrrrrrrrrrllcccc} \toprule
& \multicolumn{2}{c}{\textbf{Semantic}$^1$(\%)} & & \multicolumn{2}{c}{\textbf{Speed-up}$^2$} & & \multicolumn{2}{c}{\textbf{Instability}$^3$} & & \multicolumn{2}{c}{\textbf{Time}$^3$(s)} & & \multicolumn{7}{c}{\textbf{Info}} \\
\thead{\textbf{Videos}} & \thead{Ours} & \thead{MIFF} & & \thead{Ours} & \thead{MIFF} & & \thead{Ours} & \thead{MIFF} & & \thead{Ours} & \thead{MIFF} & & \thead{Duration \\ (hh:mm:ss)} & \thead{Mount} & \thead{Camera} & & \thead{\rot[90][0.7em]{IMU}} & \thead{\rot[90][0.7em]{Depth}} & \thead{\rot[90][0.7em]{GPS}} \\ \cmidrule(l){2-3} \cmidrule(l){5-6} \cmidrule(l){8-9} \cmidrule(l){11-12} \cmidrule(l){14-20}
Academic\_Life\_09 & 21.80 & 24.74 & & 0.01 & 0.00 & & 47.56 & 59.38 & & 145.6 & 3,298.5 & & 01:02:53 & helmet & RS R200 & & \checkmark & \checkmark & \\
Academic\_Life\_10 & 24.99 & 25.12 & & -0.02 & 1.53 & & 47.47 & 51.62 & & 282.2 & 7,654.7 & & 02:04:33 & head & Hero5 & & \checkmark & & \checkmark \\
Academic\_Life\_11 & 21.03 & 20.14 & & -0.00 & 0.20 & & 30.19 & 42.64 & & 96.6 & 3,176.9 & & 01:02:04 & hand & Hero4 & & & & \\
Attraction\_02 & 65.04 & 59.22 & & 0.10 & 0.00 & & 24.68 & 25.65 & & 95.0 & 5,284.6 & & 01:31:10 & chest & Hero5 & & \checkmark & & \checkmark \\
Attraction\_08 & 80.29 & 77.52 & & 0.35 & 1.72 & & 34.78 & 37.78 & & 8.7 & 1,762.0 & & 00:32:41 & chest & Hero5 & & \checkmark & & \checkmark \\
Attraction\_09 & 43.83 & 44.35 & & -0.18 & 0.29 & & 51.30 & 52.42 & & 27.7 & 3,265.1 & & 00:52:43 & helmet & RS R200 & & \checkmark & \checkmark & \\
Attraction\_11 & 27.28 & 31.55 & & -0.05 & -0.02 & & 31.93 & 35.79 & & 185.6 & 4,779.3 & & 01:17:20 & helmet & RS R200 & & \checkmark & \checkmark & \checkmark \\
Daily\_Life\_01 & 18.76 & 20.01 & & 0.04 & 2.56 & & 47.06 & 49.05 & & 126.3 & 5,222.0 & & 01:16:45 & head & Hero5 & & \checkmark & & \checkmark \\
Daily\_Life\_02 & 25.68 & 25.51 & & -0.10 & 3.48 & & 38.16 & 46.80 & & 46.4 & 5,741.3 & & 01:33:39 & head & Hero5 & & \checkmark & & \checkmark \\
Entertainment\_05 & 24.63 & 23.93 & & 0.04 & 0.01 & & 33.79 & 39.12 & & 20.8 & 3,786.1 & & 00:55:25 & helmet & RS R200 & & \checkmark & \checkmark & \\
Recreation\_03 & 76.52 & 72.70 & & -0.04 & 0.45 & & 41.69 & 43.64 & & 37.8 & 3,518.7 & & 00:57:39 & helmet & Hero4 & & & & \\
Recreation\_08 & 24.20 & 26.33 & & -0.05 & 3.74 & & 34.98 & 38.44 & & 59.2 & 5,957.0 & & 01:44:15 & shoulder & Hero5 & & \checkmark & & \checkmark \\
Recreation\_11 & 67.94 & 65.25 & & 0.20 & 0.02 & & 12.49 & 12.15 & & 17.9 & 2,802.9 & & 00:46:04 & chest & Hero5 & & \checkmark & & \checkmark \\
Sport\_02 & 13.62 & 14.85 & & -0.13 & 6.25 & & 44.96 & 52.59 & & 20.0 & 2,387.6 & & 00:43:20 & head & Hero5 & & \checkmark & & \checkmark \\
Tourism\_01 & 64.00 & 62.90 & & -0.01 & 2.15 & & 28.93 & 31.57 & & 33.6 & 3,283.4 & & 00:55:35 & chest & Hero4 & & & & \\
Tourism\_02 & 48.24 & 47.22 & & -0.23 & 3.22 & & 52.38 & 54.27 & & 118.2 & 9,331.0 & & 02:22:52 & head & Hero5 & & \checkmark & & \checkmark \\
Tourism\_04 & 27.20 & 29.24 & & 0.00 & 0.10 & & 53.14 & 56.41 & & 229.4 & 8,302.5 & & 01:46:38 & helmet & RS R200 & & \checkmark & \checkmark & \\
Tourism\_07 & 42.93 & 42.72 & & 0.09 & 4.47 & & 39.44 & 37.08 & & 27.1 & 3,906.1 & & 01:05:03 & head & Hero5 & & \checkmark & & \checkmark \\
\cmidrule(l){2-3} \cmidrule(l){5-6} \cmidrule(l){8-9} \cmidrule(l){11-12}
\textit{Mean} & \textit{39.89} & \textit{39.63} & & \textit{0.00} & \textit{1.72} & & \textit{38.38} & \textit{42.08} & & \textit{87.7} & \textit{4,636.6} & & & & & & & & \\
& \multicolumn{2}{c}{\scriptsize{$^1$\textit{Higher is better.}}} & & \multicolumn{2}{c}{\scriptsize{$^2$\textit{Better closer to 0}}} & & \multicolumn{5}{c}{\scriptsize{$^3$\textit{Lower is better.}}} & & & & & & & & \\ \bottomrule
\end{tabular}
}
\end{table*}
In this section, we present the quantitative results of the experimental evaluation of the proposed method. We compare it with the following methods: EgoSampling (ES)~\cite{Poleg2015}; Stabilized Semantic Fast-Forward (SSFF)~\cite{Silva2016}; Microsoft Hyperlapse (MSH)~\cite{Joshi2015}, the state-of-the-art method in terms of visual smoothness; and Multi-Importance Fast-Forward (MIFF)~\cite{Silva2018}, the state-of-the-art method in terms of the amount of semantics retained in the final video.
Figure~\ref{fig:results_baselines}-a shows the results of the Semantic evaluation performed using the sequences in the Semantic Dataset. We use the area under the curves as a measure of the retained semantic content. Our approach outperformed the other methodologies: the area under the curve of the proposed method was $100.74\%$ of the area under the MIFF curve, which is the state-of-the-art in semantic hyperlapse. The second semantic hyperlapse technique evaluated, SSFF, had $52.01\%$ of the area under the curve of MIFF. Non-semantic hyperlapse techniques such as MSH and ES achieved at best $19.63\%$ of the MIFF area.
The results for Instability are presented as the mean of the instability indexes calculated over all sequences in the Semantic Dataset (Figure~\ref{fig:results_baselines}-b; lower values are better). The black dotted and the green dashed lines stand for the mean instability index when using a uniform sampling and for the original video, respectively. Ideally, the instability index should be as close as possible to that of the original video. The reader is referred to the Supplementary Material for the individual values. The chart shows that our method created videos as smooth as those of the state-of-the-art method MSH and smoother than those of MIFF.
Figure~\ref{fig:results_baselines}-c shows the speed-up achieved by each method. The bars represent the average difference between the required speed-up and the rate achieved by the respective method for each class of video in the Semantic Dataset. Values closer to zero are desirable. The chart shows that our method provided the best acceleration in the ``Driving'' and ``Walking'' experiments. In the ``Biking'' experiments, MSH achieved the best speed-up.
As far as the semantic metric is concerned \mbox{(Figure~\ref{fig:results_baselines}-a)}, our approach leads, followed by MIFF. We ran a more detailed performance assessment comparing our method to MIFF on the multimodal dataset. The results are shown in Table~\ref{tab:multimodal_dataset}. As can be seen, our method outperforms MIFF, on average, in all metrics. The column ``Time'' shows the time for the frame sampling step of each method (MIFF runs a parameter setup and the shortest path, while ours runs the minimum reconstruction followed by the smoothing step). Our method was ${53\times}$ faster than MIFF. It is noteworthy that, unlike MIFF, which requires $14$ parameters to be adjusted, our method is parameter-free. The average processing time spent per frame was $0.5$~ms, while the automatic parameter setup and the sampling of MIFF spent $30$~ms per frame. The descriptor extraction for each frame ran in $320$~ms against $1{,}170$~ms for MIFF. The experiments were conducted on a machine with an i7-6700K CPU @ 4.00GHz and 16 GB of memory.
\subsection{Ablation analysis}
\label{subsec:ablation}
In this section, we discuss the gain of applying the Weighted Sparse Frame Sampling and Smoothing Frame Transitions steps to the final fast-forward video. All analyses were conducted on the Semantic Dataset.
\paragraph{Weighted Sparse Sampling.}
As stated, we introduce a new model based on weighted sparse sampling to address the problem of abrupt camera motions. In this model, small weights are applied to frames containing abrupt camera motions to increase the probability of these frames being selected and, consequently, to create a smooth sequence.
Considering all sequences of abrupt camera motions present in all videos of the Semantic Dataset, the weighted version manages to sample, on average, three times more frames than the non-weighted version. Figure~\ref{fig:results_weighted} illustrates the effect of solving the sparse sampling by weighting the activation vector. It can be seen that the weighting strategy helps by using a denser sampling in curves (on the right) than when applying the non-weighted sparse sampling version (on the left). In this particular segment, our approach sampled twice the number of frames, leading to less shaky lateral motions.
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\linewidth]{Results_weighted.pdf}
\caption{The effect of applying the Weighted Sparse Sampling in a segment with abrupt camera movement. Black arrows are the frames of the original video, red arrows are frames selected by the non-weighted sparse sampling, and green arrows represent the frames sampled by the weighted sparse sampling. Each image corresponds to the arrow with the same number.}
\label{fig:results_weighted}
\end{figure}
\paragraph{Smoothing Frame Transitions.}
By computing the coefficient of variation (CV), we measured the relative variability of the points representing the appearance cost of the frames (blue and red points in Figure~\ref{fig:results_SFT}). The appearance cost is computed as the Earth Mover's Distance~\cite{Pele2009} between the color histogram of frames in a transition.
After applying the proposed smoothing approach we achieved ${CV=0.97}$, while the simple sampling yielded ${CV=2.39}$. The smaller value for our method indicates a smaller dispersion and, consequently, fewer visual discontinuities. Figure~\ref{fig:results_SFT} shows the results when using SFT and the non-smoothed sparse sampling. The horizontal axis contains the indices of the selected frames and the vertical axis represents the appearance cost between the $i$-th frame and its successor in the final video. The points on the red line exhibit the oversampling pattern of the non-smoothed sparse sampling, in which many frames are sampled in segments that are hard to reconstruct, followed by a big jump.
Abrupt scene changes are depicted by high values of appearance cost. The red-bordered frames in the figure show an example of two images that compose the transition with the highest appearance cost for a fast-forwarded version of the video ``Walking 25p'' using non-smoothed sparse sampling. After applying the SFT approach, we have a more spread sampling covering all segments, with fewer video discontinuities. The blue-bordered images present the frames composing the transition with the highest appearance cost using the sparse sampling with the SFT step. By comparing the red and blue curves, one can clearly see that after using SFT, we achieve smoother transitions, \ie, lower values for the appearance cost.
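The coefficient of variation used above is simply the ratio of the standard deviation to the mean of the appearance costs between consecutive sampled frames. A minimal sketch (the cost values below are illustrative, not taken from our experiments):

```python
from statistics import mean, pstdev

def coefficient_of_variation(costs):
    """CV = population standard deviation / mean of the appearance costs."""
    return pstdev(costs) / mean(costs)

# Illustrative appearance costs for consecutive transitions: a spiky
# (non-smoothed) sequence yields a higher CV than an even one.
spiky = [0.1, 0.1, 0.1, 5.0, 0.1, 0.1]
even = [0.8, 0.9, 1.0, 1.1, 0.9, 1.0]
assert coefficient_of_variation(spiky) > coefficient_of_variation(even)
```

A lower CV thus indicates that the appearance costs, and hence the visual transitions, are more uniformly distributed across the output video.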
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\linewidth]{Results_SFT.pdf}
\caption{Frame sampling and appearance cost of the transitions in the final video before and after applying the Smoothing Frame Transition (SFT) to the video ``Walking 25p''. Images with blue border show the frames composing the transition with the highest appearance cost using SFT. Images with red borders are related to the non-smoothed sparse sampling.}
\label{fig:results_SFT}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this work, we presented a new parameter-free semantic fast-forward method for first-person videos. It is based on weighted sparse coding to address the adaptive frame sampling problem and on smoothing frame transitions to tackle abrupt camera movements, using a denser sampling along segments with high movement. In contrast with previous fast-forward techniques, which are not scalable in the number of features used to describe the frames/transitions, our method is not limited by the size of the feature vectors.
The experiments showed that our method is superior to state-of-the-art semantic fast-forward methods in terms of semantics, speed-up, stability, and processing time. We also performed an ablation analysis that showed the improvements provided by the weighted modeling and the smoothing step. An additional contribution of this work is a new labeled $80$-hour multimodal dataset with several annotations related to the recorders' preferences, activity, interaction, attention, and the scene where the video was taken.
\paragraph{Acknowledgments.}
The authors would like to thank the agencies CAPES, CNPq, FAPEMIG, and Petrobras for funding different parts of this work.
{\small
\bibliographystyle{ieee}
\section{Artifact Extraction}
In practice, many learning algorithms produce labeling maps that
provide little to no insight into why a particular trajectory is given
a particular label. In what follows, we seek a way to
systematically summarize a labeling in terms of the parametric
specification used to induce the logical distance.
\subsection{Post-facto Projections}
To begin, observe that due to the nature of the Hausdorff distance,
when explaining why two boundaries differ, one can remove large
segments of the boundaries without changing their Hausdorff distance. This
motivates us to find a small summarizing set of parameters for
each label. Further, since the Hausdorff distance often reduces to
the distance between two points, we aim to summarize each boundary
using a particular projection map. Concretely,
\begin{definition}
Letting $\partial\mathcal{V}_\varphi(\mathcal{D}^T)$ denote the
set of all possible validity domain boundaries, a projection is a
map:
\begin{equation}
\label{eq:proj}
\pi : \partial\mathcal{V}_\varphi(\mathcal{D}^T) \to \mathbb{R}^n
\end{equation}
where $n$ is the number of parameters in $\varphi$.
\end{definition}
\begin{remark}
In principle, one could extend this to projecting to a finite tuple
of points. For simplicity, we do not consider such cases.
\end{remark}
Systematic techniques for picking the projection include
\textit{lexicographic projections} and solutions to
\textit{multi-objective optimizations}; however, as seen in the
introduction, a-priori choosing the projection scheme is subtle.
Instead, we propose performing a post-facto optimization of a
collection of projections in order to be maximally representative of
the labels. That is, we seek a projection, $\pi^*$, that maximally
disambiguates between the labels, i.e., maximizes the minimum distance
between the clusters. Formally, given a set of traces associated with
each label $L_1, \ldots L_k$ we seek:
\begin{equation}
\label{eqn:proj}
\pi^* \in \argmax_{\pi} \min_{i,j \in {k\choose 2}} d_{\infty}(\pi(L_i), \pi(L_j))
\end{equation}
For simplicity, we restrict our focus to projections induced by the
intersection of each boundary with a line intersecting the base of the
unit box ${[0, 1]}^n$. Just as in the recursive boundary
approximations, due to monotonicity, this intersection point is
guaranteed to be unique. Further, this class of projections is in
one-to-one correspondence with the boundary. In particular, for any point
$p$ on the boundary, there exists exactly one projection that produces
$p$. As such, each projection can be indexed by a point in ${[0, 1]}^{n-1}$.
This is perhaps unsurprising given that in 2-d one can index this
class by the angle with the $x$-axis, and in 3-d one can include the
angle from the $z$-axis.
\begin{remark}
Since we expect clusters of boundaries to be near each other, we
also expect their intersection points to be near each other.
\end{remark}
\begin{remark}
For our experiment, we searched for $\pi^*$ via a sweep through a
discretized indexing of possible angles.
\end{remark}
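Because the specification is monotone along any ray through the origin of the parameter box, the intersection point used by each projection can be found by bisection. A minimal sketch, where the membership test \texttt{sat} is a hypothetical stand-in for evaluating the parametric specification on a fixed trace:

```python
def boundary_intersection(sat, direction, tol=1e-6):
    """Bisection search for the parameter t in [0, 1] at which the ray
    gamma(t) = t * direction crosses the satisfaction boundary of the
    monotone predicate `sat` (False below the boundary, True above it).
    If the ray never satisfies `sat`, the returned value approaches 1.
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if sat(tuple(mid * d for d in direction)):
            hi = mid   # crossing is at or below mid
        else:
            lo = mid   # crossing is above mid
    return (lo + hi) / 2.0

# Toy validity domain {theta : theta_1 + theta_2 >= 1}: along the
# diagonal gamma(t) = (t, t) the boundary is crossed at t = 0.5.
t_star = boundary_intersection(lambda th: th[0] + th[1] >= 1.0, (1.0, 1.0))
assert abs(t_star - 0.5) < 1e-4
```

The returned scalar also serves as the one-dimensional coordinate assigned to a boundary when the line is used for dimensionality reduction.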
\subsection{Label Specifications}
Next, observe that given a projection, when studying the infinity
norm distance between labels, it suffices to consider only the
bounding box of each label in parameter space. Namely,
letting $B : \PowerSet{\mathbb{R}^n} \to I[\mathbb{R}^n]$ denote
the map that computes the bounding box of a set of points in $\mathbb{R}^n$,
for any two labels $i$ and $j$:
\begin{equation}
\label{eq:interpBounding}
d_{\infty}(\pi(L_i), \pi(L_j)) =d_{\infty}(B\circ\pi(L_i), B\circ \pi(L_j)).
\end{equation}
This motivates using the projection's bounding box as a surrogate for
the cluster. Next, we observe that one can encode the set of
trajectories whose boundaries intersect (and thus can project to) a
given bounding box as a simple Boolean combination of the
specifications corresponding to instantiating $\varphi$ with the parameters
of at most $n+1$ corners of the box~\cite[Lemma
2]{logicalClustering}. While a detailed exposition is outside the
scope of this article, we illustrate with an example.
\begin{example}
Consider examples 0 and 1 from the introductory example viewed
as validity domain boundaries under~\eqref{eq:param_spec}. Suppose
that the post-facto projection mapped example 0 to $(1/4, 1/2)$ and
mapped example 1 to $(0.3, 0.51)$. Such a projection is plausibly near
the optimal for many classes of projections since none of the other
example boundaries (which are in different clusters) are near the
boundaries for 0 and 1 at these points. The resulting specification
is:
\begin{equation}
\begin{split}
\phi&(\mathbf{x}) = \varphi_{ex}(\mathbf{x})(1/4, 1/2)\wedge \neg\varphi_{ex}(\mathbf{x})(1/4, 0.51)\wedge \neg\varphi_{ex}(\mathbf{x})(0.3, 1/2)\\
&= \mathbbm{1}\bigg[t \in [1/4, 0.3] \implies \mathbf{x}(t) \in [1/2, 0.51] \wedge t > 0.3 \implies \mathbf{x}(t) \geq 0.51 \bigg]
\end{split}
\end{equation}
\end{example}
\subsection{Dimensionality Reduction}
\begin{wrapfigure}{r}{0.5\textwidth}
\centering
\includegraphics[width=0.5\textwidth]{imgs/diag_projection.png}
\caption{Histogram resulting from projecting
noisy variations of the traffic slow-down example time
series onto the diagonal of the unit box.}\label{fig:dim_reduce}
\end{wrapfigure}
Finally, observe that the line that induces the projection can serve
as a mechanism for dimensionality reduction. Namely, if one
parameterizes the line $\gamma(t)$ from $[0, 1]$, where $\gamma(0)$ is
the origin and $\gamma(1)$ intersects the unit box, then the points
where the various boundaries intersect can be assigned a number
between $0$ and $1$. For high-dimensional parameter spaces, this
enables visualizing the projection histogram and could even be used
for future classification/learning. We again illustrate using our
running example.
\begin{example}
For all six time series in the traffic slow down example, we
generate 100 new time series by modulating the time series with
noise drawn from $\mathcal{N}(1, 0.3)$. Using our previously labeled
time series, the projection using the line with angle $45^\circ$ (i.e.,
the diagonal of the unit box) from the x-axis yields the
distribution seen in Fig~\ref{fig:dim_reduce}. Observe that all three
clusters are clearly visible.
\end{example}
\begin{remark}\label{rem:pca}
If one dimension is insufficient, this procedure can be extended to
an arbitrary number of dimensions using more lines. An interesting
extension may be to consider how generic dimensionality techniques
such as principal component analysis would act in the limit where
one approximates the entire boundary.
\end{remark}
\section{Preliminaries}\label{sec:prelim}
The main objects of analysis in this paper are time series.\footnote{Nevertheless, the material presented in the sequel easily generalizes to other objects.}
\begin{definition}[Time Series, Signals, Traces] Let $T$ be a
subset of $\PosZReals$ and $\mathcal{D}$ be a nonempty
set. A time series (signal or trace), $\mathbf{x}$ is
a map:
\begin{equation}
\mathbf{x}: T \to \mathcal{D}
\end{equation}
where $T$ and $\mathcal{D}$ are called the time domain and
value domain respectively. The set of all time series is denoted
by $\mathcal{D}^T$.
\end{definition}
Between any two time series one can define a metric which measures
their similarity.
\begin{definition}[Metric]
Given a set $X$, a metric is a map,
\begin{equation}
\label{eq:metric}
d : X \times X \to \PosZReals
\end{equation}
such that for all $x, y, z \in X$: $d(x, y) = d(y,x)$, $d(x, y) = 0 \iff x = y$, and
$d(x, z) \leq d(x, y) + d(y, z)$.
\end{definition}
\begin{example}[Infinity Norm Metric]
The infinity norm induced distance $d_\infty(\vec{x}, \vec{y})
\mathrel{\stackrel{\makebox[0pt]{\mbox{\normalfont\tiny def}}}{=}} \max_i\left({|x_i - y_i|}\right)$ is a metric.
\end{example}
\begin{example}[Hausdorff Distance]
Given a set $X$ with a distance metric $d$, the Hausdorff distance
is a distance metric between closed subsets of $X$. Namely, given
closed subsets $A, B \subseteq X$:
\begin{equation}\label{eq:hausdorff}
d_H(A, B) \mathrel{\stackrel{\makebox[0pt]{\mbox{\normalfont\tiny def}}}{=}} \max\left(\sup_{x \in A}~\inf_{y \in B}(d(x, y)), \sup_{y \in B}~\inf_{x \in A}(d(y, x))\right )
\end{equation}
We use the following property of the Hausdorff distance
throughout the paper: given two closed sets $A$ and $B$,
there necessarily exist points $a\in A$ and $b\in B$ such
that:
\begin{equation}
d_H(A, B) = d(a, b)
\end{equation}
\end{example}
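For finite point sets, \eqref{eq:hausdorff} can be evaluated directly by brute force. A minimal sketch using the infinity-norm base metric:

```python
def d_inf(x, y):
    """Infinity-norm distance between two points."""
    return max(abs(a - b) for a, b in zip(x, y))

def hausdorff(A, B, d=d_inf):
    """Hausdorff distance between two finite point sets A and B."""
    sup_a = max(min(d(a, b) for b in B) for a in A)
    sup_b = max(min(d(b, a) for a in A) for b in B)
    return max(sup_a, sup_b)

# Identical sets are at distance 0; shifting one point by 0.5 in a
# single coordinate moves the sets 0.5 apart.
A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (1.0, 0.5)]
assert hausdorff(A, A) == 0.0
assert hausdorff(A, B) == 0.5
```

In practice the boundaries are infinite sets, so one works with finite discretizations of them; the brute-force cost is quadratic in the number of sample points.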
Within a context, the platonic ideal of a metric between traces
respects any domain-specific properties that make two elements
``similar''.\footnote{Colloquially, if it looks like a duck and
quacks like a duck, it should have a small distance to a duck.} A
logical trace property, also called a specification, assigns to each
timed trace a truth value.
\begin{definition}[Specification]
A specification is a map, $\phi$, from time series to true or false.
\begin{equation}
\phi : \mathcal{D}^T \to \{1, 0\}
\end{equation}
A time series, $\mathbf{x}$, is said to satisfy a specification iff
$\phi(\mathbf{x}) = 1$.
\end{definition}
\begin{example}\label{ex:spec}
Consider the following specification related to the specification
from the running example:
\begin{equation}
\phi_{ex}(\mathbf{x}) \mathrel{\stackrel{\makebox[0pt]{\mbox{\normalfont\tiny def}}}{=}}
\mathbbm{1}\bigg [\forall t \in T ~.~ \big (t > 0.2
\implies \mathbf{x}(t) < 1 \big)\bigg](\mathbf{x})
\end{equation}
where $\mathbbm{1}[\cdot]$ denotes an indicator function. Informally, this specification says that after
$t=0.2$, the value of the time series, $x(t)$, is always
less than 1.
\end{example}
Given a finite number of properties, one can then ``fingerprint'' a
time series as a Boolean feature vector. That is, given $n$
properties with corresponding indicator functions
$\phi_1, \ldots, \phi_n$, we map each time series to
an $n$-tuple as follows.
\begin{equation}
\mathbf{x} \mapsto (\phi_1(\mathbf{x}), \ldots, \phi_n(\mathbf{x}))
\end{equation}
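This fingerprint map is just pointwise evaluation of the chosen properties. A minimal sketch with two toy properties (both hypothetical, for illustration only):

```python
def fingerprint(x, properties):
    """Map a time series x (a dict t -> value) to its Boolean feature tuple."""
    return tuple(int(phi(x)) for phi in properties)

# Two toy properties: "always below 1 after t = 0.2" and "ever reaches 0".
phi_1 = lambda x: all(v < 1 for t, v in x.items() if t > 0.2)
phi_2 = lambda x: any(v == 0 for v in x.values())

x = {0.0: 1.2, 0.3: 0.8, 0.6: 0.0}
assert fingerprint(x, [phi_1, phi_2]) == (1, 1)
```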
Notice however that many properties are not naturally captured by a
{\em finite\/} sequence of binary features. For example, imagine a
single quantitative feature $f : \mathcal{D}^T \to [0, 1]$
encoding the percentage of fuel left in a tank. This feature
implicitly encodes an uncountably infinite family of Boolean features
$\phi_k(\mathbf{x}) = \mathbbm{1}[f(\mathbf{x}) = k](\mathbf{x})$ indexed by the percentages $k \in
[0, 1]$. We refer to such families as parametric specifications. For
simplicity, we assume that the parameters are a subset of the unit
hyper-box.
\begin{definition}[Parametric Specifications] A parametric
specification is a map:
\begin{equation}\label{eq:pspec}
\varphi: \mathcal{D}^T \to \bigg ({[0, 1]}^n \to \{0, 1\} \bigg )
\end{equation}
where $n\in \mathbb{N}$ is the number of parameters and
$\bigg ({[0, 1]}^n \to \{0, 1\} \bigg )$ denotes the set of
functions from the hyper-box ${[0, 1]}^n$ to $\{0, 1\}$.
\end{definition}
\begin{remark}
The signature, $\varphi : {[0, 1]}^n \to (\mathcal{D}^T \to \setof{0, 1})$ would have been an alternative
and arguably simpler definition of parametric specifications;
however, as we shall see,~\eqref{eq:pspec} highlights that a
trace induces a structure, called the validity domain, embedded in
the parameter space.
\end{remark}
Parametric specifications arise naturally from syntactically
substituting constants with parameters in the description of a
specification.
\begin{example}\label{ex:p_spec}
The parametric specification given in Ex~\ref{ex:spec} can be
generalized by substituting $\tau$ for $0.2$ and $h$ for $1$ in
Ex~\ref{ex:spec}.
\begin{equation}
\label{eq:param_spec}
\varphi_{ex}(\mathbf{x})(\tau, h) \mathrel{\stackrel{\makebox[0pt]{\mbox{\normalfont\tiny def}}}{=}} \mathbbm{1}\bigg [\forall t \in T~.~ \big (t > \tau \implies \mathbf{x}(t) < h\big )\bigg](\mathbf{x})
\end{equation}
\end{example} At this point, one could naively extend the notion of
the ``fingerprint'' to parametric specifications in a similar manner
as in the finite case. However, if ${[0,1]}^n$ is equipped with a
distance metric, it is fruitful to instead study the geometry induced
by the time series in the parameter space. To begin, observe that the
value of a Boolean feature vector is exactly determined by which
entries map to $1$. Analogously, the set of parameter values for
which a parameterized specification maps to true on a given time
series acts as the ``fingerprint''. We refer to this characterizing set
as the validity domain.
\begin{definition}[Validity domain] Given an $n$ parameter specification,
$\varphi$, and a trace, $\mathbf{x}$, the validity domain
is the pre-image of 1 under $\varphi(\mathbf{x})$,
\begin{equation}
\mathcal{V}_\varphi(\mathbf{x}) \mathrel{\stackrel{\makebox[0pt]{\mbox{\normalfont\tiny def}}}{=}} \text{PreImg}_{\varphi(\mathbf{x})}[1]
=\bigg \{\theta \in {[0, 1]}^n~|~\varphi(\mathbf{x})(\theta) = 1\bigg \}
\end{equation}
\end{definition}
Thus, $\mathcal{V}_\varphi$, can be viewed as the map that returns the
structure in the parameter space indexed by a particular trace.
Note that in general, the validity domain can be arbitrarily complex making
reasoning about its geometry intractable. We circumvent
such hurdles by specializing to monotonic specifications.
\begin{definition}[Monotonic Specifications]\label{def:monotonic}
A parametric specification is said to be monotonic if for all traces,
$\mathbf{x}$:
\begin{equation}
\theta \trianglelefteq \theta' \implies \varphi(\mathbf{x})(\theta) \leq \varphi(\mathbf{x})(\theta')
\end{equation}
where $\trianglelefteq$ is the standard product ordering on ${[0, 1]}^n$, {e.g.}\xspace{} $(x, y) \trianglelefteq (x', y')$ iff $x \leq x' \wedge y \leq y'$.
\end{definition}
\begin{remark}
The parametric specification in Ex~\ref{ex:p_spec} is monotonic.
\end{remark}
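As a concrete sketch, \eqref{eq:param_spec} can be evaluated on a trace sampled at finitely many times; note that enlarging $(\tau, h)$ can only change the verdict from $0$ to $1$, as Def~\ref{def:monotonic} requires:

```python
def phi_ex(x, tau, h):
    """Evaluate varphi_ex(x)(tau, h): 1 iff x(t) < h for every sample t > tau.

    `x` is a sampled trace given as a dict mapping times to values.
    """
    return int(all(v < h for t, v in x.items() if t > tau))

# A trace that spikes early and then settles well below 0.5.
x = {0.0: 1.5, 0.2: 1.1, 0.4: 0.4, 0.6: 0.3, 0.8: 0.2}

assert phi_ex(x, 0.1, 0.5) == 0   # x(0.2) = 1.1 violates the bound h = 0.5
assert phi_ex(x, 0.3, 0.5) == 1   # after tau = 0.3 the trace stays below 0.5
# Monotonicity: (0.3, 0.5) is below (0.5, 1.2) in the product order,
# and the satisfaction value does not decrease.
assert phi_ex(x, 0.3, 0.5) <= phi_ex(x, 0.5, 1.2)
```

Evaluating this function over a grid of $(\tau, h)$ pairs traces out the validity domain of the trace.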
\begin{proposition}
Given a monotonic specification, $\varphi$, and a time series, $\mathbf{x}$,
the boundary of the validity domain,
$\partial\mathcal{V}_\varphi(\mathbf{x})$,
is a hyper-surface that segments ${[0, 1]}^n$ into two
components.
\end{proposition}
In the sequel, we develop a distance metric between validity domains
which characterizes the similarity between two time series under the
lens of a monotonic specification.
\section{Case Study}\label{sec:casestudies}
To improve driver models and traffic on highways, the Federal Highway
Administration collected detailed traffic data on southbound US-101
freeway, in Los Angeles~\cite{NGSIM}. Traffic through the segment was
monitored and recorded via eight synchronized cameras mounted next to the
freeway. A total of $45$ minutes of traffic data was recorded
including vehicle trajectory data providing lane positions of each
vehicle within the study area. The data-set is split into $5979$ time
series. For simplicity, we constrain our focus to the car's speed. In
the sequel, we outline a technique for first using the parametric
specification (in conjunction with off-the-shelf machine learning
techniques) to filter the data, and then using the logical distance
from an idealized slow down to find the slow downs in the data. This
final step offers a key benefit over the closest prior
work~\cite{logicalClustering}. Namely, given an over-approximation of
the desired cluster, one can use the logical distance to further
refine the cluster.
\mypara{Rescale Data} As in our running example, we seek to
use~\eqref{eq:param_spec} to search for traffic slow downs; however,
in order to do so, we must re-scale the time series. To begin, observe
that the mean velocity is 62mph with 80\% of the vehicles remaining
under 70mph. Thus, we linearly scale the velocity so that
$70\text{mph} \mapsto 1\text{ arbitrary unit (a.u.)}$. Similarly, we
re-scale the time axis so that each tick is $2$
seconds. Fig~\ref{fig:normalized} shows a subset of the time series.
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=\linewidth]{imgs/real_ts_1000.pdf}
\caption{1000 / 5000 of the rescaled highway 101 time
series.}\label{fig:normalized}
\end{subfigure}
\begin{subfigure}[t]{0.49\textwidth}
\centering \includegraphics[width=\textwidth]{imgs/gmm.pdf}
\caption{Projection of Time-Series to two lines in the parameter
space of~\eqref{eq:param_spec} and resulting GMM
labels. }\label{fig:gmm}
\end{subfigure}
\end{figure}
\mypara{Filtering} Recall that if two boundaries have small Hausdorff
distances, then the points where the boundaries intersect a line (that
intersects the origin of the parameter space) must be close. Since
computing the Hausdorff distance is a fairly expensive operation, we
use this one way implication to group time series which may be near
each other w.r.t.\ the Hausdorff distance.
In particular, we (arbitrarily) selected two lines intersecting the
parameter space origin at $0.46$ and $1.36$ radians from the $\tau$
axis to project to. We filtered out any time-series that did not
intersect the line within ${[0, 1]}^2$. We then fit a five-cluster
Gaussian Mixture Model (GMM) to label the data. Fig~\ref{fig:gmm}
shows the result.
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=\linewidth]{imgs/c4_dist_dist.pdf}
\caption{Cluster~4 Logical distance
histogram.}\label{fig:c4_dist_dist}
\end{subfigure}
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{imgs/annotated_cluster4.pdf}
\caption{Time-series in Cluster 4 colored by distance to ideal
slow down.}\label{fig:annotated_c4}
\end{subfigure}
\end{figure}
\mypara{Matching Idealized Slow Down} Next, we labeled the idealized
slow down, (trace 0 from Fig~\ref{fig:toy_boundaries}) using the
fitted GMM.\@ This identified cluster~4 (with 765 data points) as
containing potential slow downs. To filter for the true slow downs, we
used the logical distance\footnote{again associated
with~\eqref{eq:param_spec}} from the idealized slow down to further
subdivide the cluster. Fig~\ref{fig:c4_dist_dist} shows the resulting
distribution. Fig~\ref{fig:annotated_c4} shows the time series in
cluster 4 annotated by their distance to the idealized slow down.
Using this visualization, one can clearly identify 390 slow downs
(distance less than $0.3$).
\mypara{Artifact Extraction} Finally, we searched for a single
projection that gave a satisfactory separation of clusters, but were
unable to find one. We then searched over pairs of projections to create
a specification as the conjunction of two box specifications. Namely,
in terms of~\eqref{eq:param_spec}, our first projection yields the
specification: $\phi_1 = \varphi_{ex}(0.27, 0.55) \wedge \neg \varphi_{ex}(0.38,
0.55) \wedge \neg \varphi_{ex}(0.27, 0.76)$. Similarly, our second
projection yields the specification: $\phi_2 = \varphi_{ex}(0.35, 0.17)
\wedge \neg \varphi_{ex}(0.35, 0.31) \wedge \neg \varphi_{ex}(0.62, 0.17)$. The
learned slow down specification is the conjunction of these two
specifications.
\section{Introduction}\label{sec:intro}
Recently, there has been a proliferation of sensors that monitor
diverse kinds of real-time data representing {\em time-series
behaviors\/} or {\em signals\/} generated by systems and devices
that are monitored through such sensors. However, this deluge can
place a heavy burden on engineers and designers who are not interested
in the details of these signals, but instead seek to discover
higher-level insights in the data.
More concisely, one can frame the key challenge as: ``How does one
automatically identify logical structure or relations within the
data?'' To this end, modern machine learning (ML) techniques for
signal analysis have been invaluable in domains ranging from
healthcare analytics~\cite{kale2014examination} to smart
transportation~\cite{deng2016latent}; and from autonomous
driving~\cite{mccall2007driver} to social media~\cite{liu_sparse}.
However, despite the success of ML based techniques, we believe that
easily leveraging the domain-specific knowledge of non-ML experts
remains an open problem.
At present, a common way to encode domain-specific knowledge into an
ML task is to first transform the data into an {\em a priori\/} known
{\em feature space}, e.g., the statistical properties of a time
series. While powerful, translating the knowledge of domain-specific
experts into features remains a non-trivial endeavor. More recently,
it has been shown that Parametric Signal Temporal Logic formulas along
with a total ordering on the parameter space can be used to extract
feature vectors for learning temporal logical predicates
characterizing driving patterns, overshoot of diesel engine re-flow
rates, and grading for simulated robot controllers in a Massively Open
Online Course~\cite{logicalClustering}. Crucially, the technique of
learning through the lens of a logical formula means that learned
artifacts can be readily leveraged by existing formal methods
infrastructure for verification, synthesis, falsification, and
monitoring. Unfortunately, the usefulness of the results depends
intimately on the total ordering used. The following example
illustrates this point.
\begin{figure}[h]
\centering \includegraphics[width=3in]{imgs/example_carspeeds.png}
\caption{Example signals of car speeds on a
freeway.}\label{fig:example_traces}
\end{figure}
\mypara{Example:} Most freeways have bottlenecks that lead to traffic
congestion, and if there is a stalled or a crashed vehicle(s) at this
site, then upstream traffic congestion can severely
worsen.\footnote{We note that such data can be obtained from fixed
mounted cameras on a freeway, which is then converted into
time-series data for individual vehicles, such as in~\cite{NGSIM}.}
For example, Fig~\ref{fig:example_traces} shows a series of potential
time-series signals to which we would like to assign pairwise
distances indicating the similarity (small values) or differences
(large values) between any two time series. To ease exposition, we
have limited our focus to the car's speed. In signals 0 and 1, both
cars transition from high speed freeway driving to stop and go
traffic. Conversely, in signal 2, the car transitions from stop and
go traffic to high speed freeway driving. Signal 3 corresponds to a
car slowing to a stop and then accelerating, perhaps due to difficulty
merging lanes. Finally, signal 4 signifies a car encountering no
traffic and signal 5 corresponds to a car in heavy traffic, or a
possibly stalled vehicle.
Suppose a user wished to find a feature space equipped with a measure
to distinguish cars being stuck in traffic. Some properties might be:
\begin{enumerate}[leftmargin=1.5em,itemsep=0pt,parsep=0pt,partopsep=0pt]
\item Signals 0 and 1 should be {\em very\/} close together since both
show a car entering stop and go traffic in nearly the same manner.
\item Signals 2, 3, and 4 should be close together since the car
ultimately escapes stop and go traffic.
\item Signal 5 should be far from all other examples since it does not
represent entering or leaving stop and go traffic.
\end{enumerate}
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{imgs/normalized_stats.pdf}
\caption{Statistical feature space}\label{fig:normalizedStats}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\linewidth]{imgs/toy_boundaries.png}
\caption{Trade-off boundaries in
specification.}\label{fig:toy_boundaries}
\end{subfigure}
\end{figure}
\begin{wrapfigure}{r}{0.4\textwidth}
\centering\fbox{\includegraphics[width=0.4\textwidth]{imgs/toy_cluster_map.png}}
\caption{Adjacency matrix and clustering of
Fig~\ref{fig:example_traces}. Smaller numbers mean that the time
series are more similar with respect to the logical distance
metric.\label{fig:toy_cluster_map}}
\end{wrapfigure}
For a strawman comparison, we consider two ways the user might
assign a distance measure to the above signal space. Further, we
omit generic time series distance measures such as Dynamic Time
Warping~\cite{keogh2000scaling} which do not offer the ability to
embed domain specific knowledge into the metric. At first, the user
might treat the signals as a series of independent measurements and
attempt to characterize the signals via standard statistical
measures on the speed and acceleration (mean, standard deviation,
{etc.}\xspace). Fig~\ref{fig:normalizedStats} illustrates how the example
signals look in this feature space with each component normalized
between 0 and 1. The user might then use the Euclidean distance of
each feature to assign a distance between signals. Unfortunately,
in this measure, signal 4 is not close to signal 2 or 3, violating
the second desired property. Further, signals 0 and 1 are not
``very'' close together, violating the first property. Next, the
user attempts to capture traffic slow downs by the following
(informal) parametric temporal specification: ``Between time $\tau$
and 20, the car speed is always less than $h$.'' As will be made
precise in the preliminaries, Fig~\ref{fig:toy_boundaries}
illustrates, for each individual time series, the boundaries between
values of $\tau$ and $h$ that make the specification true and values
that make the specification
false. The techniques in~\cite{logicalClustering} then require the
user to specify a particular total ordering on the parameter space.
One then uses the maximal point on the boundary as the
representative for the entire boundary. However, in practice,
selecting a good ordering a-priori is non-obvious. For
example,~\cite{logicalClustering} suggests a lexicographic ordering
of the parameters. However, since most of the boundaries start and
end at essentially the same point, applying any of the lexicographic
orderings to the boundaries seen in Fig~\ref{fig:toy_boundaries}
would result in almost all of the boundaries collapsing to the same
points. Thus, such an ordering would make characterizing a slow down
impossible.
In the sequel, we propose using the Hausdorff distance between
boundaries as a general ordering-free way to endow time series with
a ``logic respecting distance
metric''. Fig~\ref{fig:toy_cluster_map} illustrates the distances
between each boundary. As is easily confirmed, all 3 properties
desired of the clustering algorithm hold.
\mypara{Contributions} The key insight in our work is that in many
interesting examples, the distance between satisfaction boundaries
in the parameter space of parametric logical formulas can
characterize the domain-specific knowledge implicit in the
parametric formula. Leveraging this insight, we provide the
following contributions:
\begin{enumerate}[leftmargin=1.5em,itemsep=0pt,parsep=0pt,partopsep=0pt]
\item We propose a new distance measure between time-series
through the lens of a chosen monotonic specification. Distance
measure in hand, standard ML algorithms such as nearest neighbors
(supervised) or agglomerative clustering (unsupervised) can be
used to glean insights into the data.
\item Given a labeling, we propose a method for computing
representative points on each boundary. Viewed another way, we
propose a form of dimensionality reduction based on the temporal
logic formula.
\item Finally, given the representative points and their labels, we
can use the machinery developed in~\cite{logicalClustering} to
extract a simple logical formula as a classifier for each label.
\end{enumerate}
\section{Related Work}
Time-series clustering and classification is a well-studied area in
the domain of machine learning and data
mining~\cite{liao2005clustering}. Time-series clustering methods that work
with raw time-series data combine clustering schemes such as
agglomerative clustering, hierarchical clustering, $k$-means
clustering among others, with similarity measures between time-series
data such as the dynamic time-warping (DTW) distance, statistical
measures and information-theoretic measures. Feature-extraction based
methods typically use generic sets of features, but algorithmic
selection of the right set of meaningful features is a
challenge. Finally, there are model-based approaches that seek an
underlying generative model for the time-series data, and typically
require extra assumptions on the data such as linearity or the
Markovian property. Please see~\cite{liao2005clustering} for detailed
references to each approach. It should be noted that historically
time-series learning focused on univariate time-series, and extensions
to multivariate time-series data have been relatively recent
developments.
More recent work has focused on automatically identifying features
from the data itself, such as the work on {\em
shapelets\/}~\cite{ye2009time,mueen2011logical,lines2012shapelet},
where instead of comparing entire time-series data using similarity
measures, algorithms to automatically identify distinguishing motifs
in the data have been developed. These motifs or shapelets serve not
only as features for ML tasks, but also provide visual feedback to the
user explaining why a classification or clustering task labels given
data in a certain way. While we draw inspiration from this general
idea, we seek to expand it to consider logical shapes in the data,
which would allow leveraging the user's domain expertise.
Automatic identification of motifs or basis functions from the data,
while useful in several documented case studies, comes with some
limitations. For example, in~\cite{bahadori2015functional}, the
authors define a subspace clustering algorithm, where given a set of
time-series curves, the algorithm identifies a subspace among the
curves such that every curve in the given set can be expressed as a
linear combination of deformations of the curves in the subspace. We
note that the authors observe that it may be difficult to associate
the natural clustering structure with specific predicates over the
data (such as patient outcome in a hospital setting).
The use of logical formulas for learning properties of time-series has
slowly been gaining momentum in communities outside of traditional
machine learning and data
mining~\cite{bartocci2014data,dTree,rPSTL,iPSTL}. Here, fragments of
Signal Temporal Logic have been used to perform tasks such as
supervised and unsupervised learning. A key distinction from these
approaches is our use of libraries of signal predicates that encode
domain expertise that allow human-interpretable clusters and
classifiers.
Finally, preliminary exploration of this idea appeared in prior work
by some of the co-authors in~\cite{logicalClustering}. The key
difference is the previous work required users to provide a ranking of
parameters appearing in a signal predicate, in order to project
time-series data to unique points in the parameter space. We remove
this additional burden on the user in this paper by proposing a
generalization that projects time-series signals to trade-off curves
in the parameter space, and then using these curves as features.
\section{Logic-Respecting Distance Metric}\label{sec:metric}
In this section, we define a class of metrics on the signal space that
is derived from corresponding parametric specifications. First,
observe that the validity domains of monotonic specifications are
uniquely defined by the hyper-surface that separates them from the
rest of the parameter space. Similar to Pareto fronts in a
multi-objective optimization, these boundaries encode the trade-offs
required in each parameter to make the specification satisfied for a
given time series. This suggests a simple procedure to define a
distance metric between time series that respects their logical
properties: Given a monotonic specification, a set of time series, and
a distance metric between validity domain boundaries:
\begin{enumerate}
\item Compute the validity domain boundaries for each time series.
\item Compute the distance between the validity domain boundaries.
\end{enumerate}
Of course, the benefits of using this metric rely entirely on whether
(i) the monotonic specification captures the relevant domain-specific
details, and (ii) the distance between validity domain boundaries is
sensitive to outliers. While the choice of specification is highly
domain-specific, we argue that for many monotonic specifications, the
distance metric should be sensitive to outliers as this represents a
large deviation from the specification. This sensitivity requirement
seems particularly apt if the number of satisfying traces of the
specification grows linearly or super-linearly as the parameters
increase. Observing that the Hausdorff distance~\eqref{eq:hausdorff}
between two validity boundaries satisfies these properties, we define
our new distance metric between time series as:
\begin{definition}
Given a monotonic specification, $\varphi$, and a distance metric on
the parameter space $({[0, 1]}^n, d)$, the logical distance between
two time series, $\mathbf{x}(t), \mathbf{y}(t) \in \mathcal{D}^T$ is:
\begin{equation}\label{eq:logical_distance}
d_\varphi(\mathbf{x}(t), \mathbf{y}(t)) \mathrel{\stackrel{\makebox[0pt]{\mbox{\normalfont\tiny def}}}{=}} d_H\left (\partial\mathcal{V}_\varphi(\mathbf{x}), \partial\mathcal{V}_\varphi(\mathbf{y})\right)
\end{equation}
\end{definition}
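For concreteness, the infinity-norm Hausdorff distance between two finite samplings of validity-domain boundaries can be computed directly from its definition. The following is a minimal pure-Python sketch; the function names and the sample boundary points are ours, not from the paper:

```python
def d_inf(p, q):
    # infinity-norm distance between two points of [0, 1]^n
    return max(abs(a - b) for a, b in zip(p, q))

def hausdorff_inf(A, B):
    # d_H(A, B): the larger of the two directed Hausdorff distances
    d_ab = max(min(d_inf(a, b) for b in B) for a in A)
    d_ba = max(min(d_inf(b, a) for a in A) for b in B)
    return max(d_ab, d_ba)

# two sampled validity-domain boundaries in [0, 1]^2
A = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
B = [(0.0, 1.0), (0.5, 0.7), (1.0, 0.0)]
print(hausdorff_inf(A, B))  # ~0.2 (up to floating point)
```

In practice the boundary is not available in closed form; the next subsection shows how to approximate it and how the approximation error propagates to the distance.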
\subsection{Approximating the Logical Distance}
Next, we discuss how to approximate the logical distance metric
within arbitrary precision. First, observe that the validity domain
boundary of a monotonic specification can be recursively approximated
to arbitrary precision via binary search on the diagonal of the
parameter space~\cite{maler:hal-01556243}. This approximation yields a
series of overlapping axis aligned rectangles that are guaranteed to
contain the boundary (see Fig~\ref{fig:computeBoundaries}).
\begin{figure}[H]
\centering
\includegraphics[height=1.3in]{imgs/approx_boundary.pdf}
\caption{Illustration of procedure introduced
in~\cite{maler:hal-01556243} to recursively approximate a validity
domain boundary to arbitrary
precision.}\label{fig:computeBoundaries}
\end{figure}
To formalize this approximation, let $I(\mathbb{R})$ denote the set of
closed intervals on the real line. We then define an axis aligned
rectangle as the product of closed intervals.
\begin{definition}
The set of axis aligned rectangles is defined as:
\begin{equation}\label{eq:recs}
I(\mathbb{R}^n) \mathrel{\stackrel{\makebox[0pt]{\mbox{\normalfont\tiny def}}}{=}} \prod_{i=1}^n I(\mathbb{R})
\end{equation}
\end{definition}
The approximation given in~\cite{maler:hal-01556243} is then a family
of maps,
\begin{equation}\label{eq:approx_i}
\text{approx}^i : \mathcal{D}^T \to \PowerSet{I(\mathbb{R}^n)}
\end{equation}
where $i$ denotes the recursive depth and $\PowerSet{\cdot}$ denotes the powerset.\footnote{
The co-domain of~\eqref{eq:approx_i} could be tightened to $\big( 2^{n} - 2\big)^{i}$, but to avoid also parameterizing the discretization function, we do
not strengthen the type signature.}
For example, $\text{approx}^0$
yields the bounding box given in the leftmost subfigure in
Fig~\ref{fig:computeBoundaries} and $\text{approx}^1$ yields the
subdivision of the bounding box seen on the right.\footnote{ If the
rectangle being subdivided is degenerate, i.e., lies entirely within
the boundary of the validity domain and thus all points intersect the
boundary, then the halfway point of the diagonal is taken to be the
subdivision point.}
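To illustrate the procedure, the following sketch specializes it to two dimensions: a binary search along the diagonal of each rectangle locates the satisfaction threshold of a monotone oracle, and the recursion continues in the two off-diagonal sub-rectangles that still contain the boundary. This is a simplification of~\cite{maler:hal-01556243} (it omits the degenerate case discussed above); the toy specification $x + y \geq 1$ and all function names are ours:

```python
def boundary_rects(sat, lo, hi, depth, tol=1e-6):
    """Approximate the validity-domain boundary of a monotone 2-D
    specification inside the rectangle [lo, hi].  `sat(p)` is a monotone
    oracle: if sat(p) holds and q dominates p pointwise, sat(q) holds.
    Returns a list of axis-aligned rectangles covering the boundary."""
    if depth == 0:
        return [(lo, hi)]
    # binary search along the diagonal for the point where sat flips
    a, b = 0.0, 1.0
    point = lambda t: (lo[0] + t * (hi[0] - lo[0]),
                       lo[1] + t * (hi[1] - lo[1]))
    while b - a > tol:
        m = (a + b) / 2
        if sat(point(m)):
            b = m
        else:
            a = m
    tx, ty = point((a + b) / 2)
    # the boundary continues only in the two off-diagonal sub-rectangles
    return (boundary_rects(sat, (lo[0], ty), (tx, hi[1]), depth - 1, tol)
            + boundary_rects(sat, (tx, lo[1]), (hi[0], ty), depth - 1, tol))

# toy monotone specification whose boundary is the line x + y = 1
rects = boundary_rects(lambda p: p[0] + p[1] >= 1.0,
                       (0.0, 0.0), (1.0, 1.0), depth=3)
print(len(rects))  # 8
```

Each increment of the depth splits every rectangle in two, so depth $i$ yields $2^i$ rectangles, every one of which straddles the boundary.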
Next, we ask the question: Given a discretization of the rectangle set
approximating a boundary, how does the Hausdorff distance between the
discretizations relate to the true Hausdorff distance between two
boundaries? In particular, consider the map that takes a set of
rectangles to the set of the corner points of the rectangles.
Formally, we denote this map as:
\begin{equation}\label{eq:corners}
\text{discretize} : \PowerSet{I(\mathbb{R}^n)} \to \PowerSet{\mathbb{R}^n}
\end{equation}
As the rectangles are axis aligned, at this point, it is fruitful to
specialize to parameter spaces equipped with the infinity norm. The
resulting Hausdorff distance is denoted $d_H^\infty$. This
specialization leads to the following lemma:
\begin{lemma}\label{lem:error}
Let $\mathbf{x}$, $\mathbf{x}'$ be two time series and $\mathcal{R}, \mathcal{R}'$
the approximations of their respective boundaries. Further, let
$p, p'$ be points in $\mathcal{R}, \mathcal{R}'$ such that:
\begin{equation}
\label{eq:5}
\hat{d} \mathrel{\stackrel{\makebox[0pt]{\mbox{\normalfont\tiny def}}}{=}} d_H^\infty(\text{discretize}(\mathcal{R}),
\text{discretize}(\mathcal{R'})) = d_\infty(p, p')
\end{equation}
and let $r, r'$ be the rectangles in $\mathcal{R}$ and
$\mathcal{R}'$ containing the points $p$ and $p'$ respectively. Finally, let
$\frac{\epsilon}{2}$ be the maximum edge length in $\mathcal{R}$ and $\mathcal{R'}$, then:
\begin{equation}\label{eq:4}
\max(0, \hat{d} - \epsilon) \leq d_\varphi(\mathbf{x}, \mathbf{x}') \leq \hat{d} + \epsilon
\end{equation}
\end{lemma}
\begin{proof}
First, observe that (i) each rectangle intersects its boundary (ii)
each rectangle set over-approximates its boundary. Thus, by
assumption, each point within a rectangle is at most $\epsilon/2$
distance from the boundary w.r.t.\ the infinity norm. Thus, since
there exist two points $p, p'$ such that
$\hat{d} = d_\infty(p, p')$, the maximum deviation from the logical
distance is at most $2\frac{\epsilon}{2} = \epsilon$ and
$\hat{d} - \epsilon \leq d_\varphi(\mathbf{x}, \mathbf{x}') \leq \hat{d} +
\epsilon$. Further, since $d_\varphi$ must be in $\mathbb{R}^{\geq 0}$, the
lower bound can be tightened to
$\max(0, \hat{d} - \epsilon)$.~$\blacksquare$
\end{proof}
We denote the map given by~\eqref{eq:4} from the points to the error interval as:
\begin{equation}
\label{eq:2}
d_H^\infty \pm \epsilon : \PowerSet{\mathbb{R}^n} \times \PowerSet{\mathbb{R}^n} \to I(\mathbb{R}^+)
\end{equation}
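A sketch of this interval map: discretize each rectangle set to its corner points, take the infinity-norm Hausdorff distance $\hat{d}$ between the point sets, and pad by $\epsilon$ (twice the maximum edge length) as in Lemma~\ref{lem:error}. The Python encoding of rectangles as (lo, hi) corner pairs is ours:

```python
from itertools import product

def corners(rect):
    # all 2^n corner points of an axis-aligned rectangle (lo, hi)
    lo, hi = rect
    return list(product(*zip(lo, hi)))

def d_inf(p, q):
    return max(abs(a - b) for a, b in zip(p, q))

def hausdorff_inf(A, B):
    return max(max(min(d_inf(a, b) for b in B) for a in A),
               max(min(d_inf(b, a) for a in A) for b in B))

def distance_interval(rects1, rects2):
    """Interval guaranteed to contain the true logical distance."""
    pts1 = [p for r in rects1 for p in corners(r)]
    pts2 = [p for r in rects2 for p in corners(r)]
    d_hat = hausdorff_inf(pts1, pts2)
    max_edge = max(max(h - l for l, h in zip(r_lo, r_hi))
                   for r_lo, r_hi in rects1 + rects2)
    eps = 2 * max_edge  # epsilon/2 is the maximum edge length
    return max(0.0, d_hat - eps), d_hat + eps

# two boundary approximations, each a set of rectangles in [0, 1]^2
R1 = [((0.0, 0.0), (0.5, 0.5)), ((0.5, 0.5), (1.0, 1.0))]
R2 = [((0.0, 0.1), (0.5, 0.6)), ((0.5, 0.6), (1.0, 1.1))]
lo, hi = distance_interval(R1, R2)  # here ~(0.0, 1.1)
```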
Next, observe that this approximation can be made arbitrarily close to
the logical distance.
\begin{theorem}
Let $d^\star= d_\varphi(\mathbf{x}, \mathbf{y})$ denote the logical distance between two
traces $\mathbf{x}, \mathbf{y}$.
For any $\epsilon \in \mathbb{R}^{\geq 0}$, there exists $i\in \mathbb{N}$ such that:
\begin{equation}
\label{eq:arb_precision}
d_H^\infty(\text{discretize}(\text{approx}^i(\mathcal{R})),
\text{discretize}(\text{approx}^i(\mathcal{R}'))) \in [d^\star - \epsilon, d^\star + \epsilon]
\end{equation}
\end{theorem}
\begin{proof}
By Lemma~\ref{lem:error}, given a fixed approximate depth, the above
approximation differs from the true logical distance by at most two
times the maximum edge length of the approximating rectangles.
Note that by construction, incrementing the approximation depth
results in each rectangle having at least one edge being
halved. Thus the maximum edge length across the set of rectangles
must at least halve. Thus, for any $\epsilon$ there exists an approximation
depth $i\in \mathbb{N}$ such that:
\[
d_H^\infty(\text{discretize}(\text{approx}^i(\mathcal{R})),
\text{discretize}(\text{approx}^i(\mathcal{R}'))) \in [d^\star - \epsilon, d^\star + \epsilon]~.~
\blacksquare
\]
\end{proof}
Finally, Algorithm~\ref{alg:approx} summarizes the above procedure.
\begin{algorithm}[H]
\scriptsize
\caption{Approximate Logical Distance\label{alg:approx}}
\begin{algorithmic}[1]
\Procedure{approx\_dist}{$\mathbf{x}, \mathbf{x}', \delta$}
\State{$i, lo, hi \gets 0, 0, \infty$}
\While{$hi - lo > \delta$}
\State{$\mathcal{R}, \mathcal{R}' \gets \text{approx}^i(\mathbf{x}), \text{approx}^i(\mathbf{x}')$}
\State{$\text{points}, \text{points}' \gets \text{discretize}(\mathcal{R}), \text{discretize}(\mathcal{R}')$}
\State{$lo, hi \gets \big (d_H^\infty \pm \epsilon\big )(\text{points}, \text{points}')$}
\State{$i \gets i + 1$}
\EndWhile{}
\State{\Return{$lo, hi$}}
\EndProcedure{}
\end{algorithmic}
\end{algorithm}
\begin{remark}
An efficient implementation should of course memoize previous calls
to $\text{approx}^i$ and use $\text{approx}^i$ to compute
$\text{approx}^{i+1}$. Further, since certain rectangles can be
quickly determined to not contribute to the Hausdorff distance, they
need not be subdivided further.
\end{remark}
\subsection{Learning Labels}
The distance interval $(lo, hi)$ returned by Alg~\ref{alg:approx} can
be used by learning techniques, such as
\textit{hierarchical or agglomerative clustering}, to estimate
clusters (and hence the labels). While the technical details of these
learning algorithms are beyond the scope of this work, we formalize
the result of the learning algorithms as a labeling map:
\begin{definition}[Labeling]
A $k$-labeling is a map:
\begin{equation}
\label{eq:labeling}
L : \mathcal{D}^T \to \{0, \ldots, k\}
\end{equation}
for some $k \in \mathbb{N}$. If $k$ is obvious from context or not
important, then the map is simply referred to as a
labeling.
\end{definition}
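To sketch how these distances feed a learner, the following pure-Python single-linkage agglomerative clustering consumes a precomputed matrix of logical distances (e.g., midpoints of the $(lo, hi)$ intervals returned by Alg~\ref{alg:approx}); the toy matrix and the function names are ours:

```python
def single_linkage(dist, k):
    """Greedily merge the two closest clusters until k clusters remain.
    `dist[i][j]` is a symmetric matrix of pairwise logical distances."""
    clusters = [{i} for i in range(len(dist))]
    def cluster_dist(c1, c2):
        # single linkage: distance between the closest members
        return min(dist[i][j] for i in c1 for j in c2)
    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: cluster_dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] |= clusters.pop(j)
    return clusters

# toy logical-distance matrix over 4 time series: two tight groups
D = [[0.0, 0.1, 0.9, 1.0],
     [0.1, 0.0, 0.8, 0.9],
     [0.9, 0.8, 0.0, 0.2],
     [1.0, 0.9, 0.2, 0.0]]
print(single_linkage(D, 2))  # [{0, 1}, {2, 3}]
```

The induced clusters define a labeling map $L$ in the sense above, with $L$ assigning each time series the index of its cluster.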
\subsection{Case Study 2 - CPS Grader}
\subsection{CARDs}
A \emph{conflict-aware replicated datatype} is a tuple $D = (S,E,C)$
where $S$ is the store type, $E$ is the type of \emph{effects}, and $C$
is the type of \emph{consistency guards}.
Effects and consistency guards are detailed below.
Informally, effects are store transformers and consistency guards
specify the exact semantic restrictions on consistency under which each
operation may execute.
The key point behind CARDs is to automate the reasoning about the
interaction between effects and consistency guards.
This allows a developer to program CARD operations modularly, letting
the system handle conflicts in an automated manner.
\paragraph{CARD Effects}
The type $E$ is the type of \emph{effects} on the store.
A value $e:E$ has a denotation $\denote{e}$ which is an $S \to S$
function modifying a store value.
\begin{example}
\label{ex:effects}
In the bank account example, we have the effect type $E :=
\mathtt{Add}\;\mathtt{Nat} \;|\; \mathtt{Sub}\;\mathtt{Nat}$.
Each effect is of the form $\mathtt{Add}\;n$ or $\mathtt{Sub}\;n$ for
some positive integer $n$.
The denotations of $\mathtt{Add}\;n$ and $\mathtt{Sub}\;n$ are given
by $\lambda s .\; s + n$ and $\lambda s .\; s - n$, respectively.
\end{example}
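A direct transcription of this effect type and its denotation (a Python sketch; the tagged-pair encoding of effects is ours):

```python
# bank-account effects: Add n | Sub n, for n a natural number
def Add(n):
    return ("Add", n)

def Sub(n):
    return ("Sub", n)

def denote(effect):
    """Denotation of an effect as a store transformer S -> S."""
    tag, n = effect
    return (lambda s: s + n) if tag == "Add" else (lambda s: s - n)

print(denote(Add(100))(0))   # 100
print(denote(Sub(10))(100))  # 90
```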
\paragraph{Consistency Guards}
The type $C$ is the type of \emph{consistency guards} on the store
which describe measures of ``accuracy'' for partial knowledge of the
store value.
Consistency guards are \emph{semantic in nature}, i.e., they do not restrict the
ordering of operations like traditional consistency models (e.g., sequential
consistency, etc), but instead semantically restrict the updates to the store.
Formally, a value $c:C$ has a denotation $\denote{c}$ which is a two-state
predicate (of type $S \times S \to \mathbb{B}$) relating the ``global store
value'' ($s_g$) and a ``local store view'' ($s_r$) that some replica has.
We will write $c(s_1,s_2)$ to mean $[s_1/s_g][s_2/s_r]\denote{c}$.
We restrict all guards to be reflexive, as in $\forall s. c(s,s)=\top$ -- a
replica store view equal to the global store value represents complete
knowledge of the store.
Replicas and local store views are described fully in
Section~\ref{sec:replica}.
\begin{example}
\label{ex:guards}
In the running bank account example, the denotations of consistency guards have
type $\mathtt{Int} \times \mathtt{Int} \to \mathbb{B}$.
The guard $\mathtt{LE} := s_g \geq s_r$ restricts the global store
value to be at least as great as the local store value.
Intuitively, we will use the guard $\mathtt{LE}$ to ``guard'' withdraw
operations -- any replica executing a withdraw operation will have a
local store value that is at most the global value, ensuring that the
withdraw does not decrease the balance below $0$.
Informally, this implies that we need to restrict the global value
from being decreased by other withdraw operations once the local
replica has decided on a value of balance for the current withdraw
operation.
Another guard we will use in the bank account example is $\mathtt{EQ} := s_g = s_r$.
\end{example}
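The two guards of the running example are just two-state predicates, and the required reflexivity is easy to spot-check (Python sketch; the names follow the example):

```python
# consistency guards as predicates over (global store, local view)
LE = lambda s_g, s_r: s_g >= s_r  # local view under-approximates global
EQ = lambda s_g, s_r: s_g == s_r  # local view is exactly the global value

# every guard must be reflexive: complete knowledge satisfies it
assert all(g(s, s) for g in (LE, EQ) for s in (0, 42, -7))

# a replica holding view 90 may guard a withdraw only if the global is >= 90
print(LE(100, 90), LE(80, 90))  # True False
```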
\paragraph{Effect Classes}
A CARD's effect type $E$ will often generate an infinite set of effect
values.
For example, the Counter CARD includes an $\mathtt{Add}\;n :=
\lambda s .\; s + n$ effect for all $n:\mathbb{N}$.
In order to facilitate automated reasoning about effects and guards
that is necessary for runtime locking decisions, we assume that this
infinite set of effects is divided into a finite set $\overline{E}$ of
\emph{parametric effect classes}.
The choice of classes must be made by the developer of the CARD, and
is most effective when each class is characterized by its relationship
to the set of relevant guards.
In our examples, we assume that the type $E$ is a non-recursive algebraic data
type, with values of each type variant being one class.
We will elide this classification detail for the rest of the paper;
when an algorithm quantifies $\forall e:E$ we assume that we are using
a finite $E$ or a quantification over the finitely many parametric classes of
$\overline{E}$.
\begin{example}
\label{ex:effect-classes}
For the bank account example, the obvious choice is to classify
effects by constructor: $\overline{E} := \{ \mathtt{Add^\forall},
\mathtt{Sub^\forall} \}$ where $\mathtt{Add^\forall}$ and
$\mathtt{Sub^\forall}$ include effects of the form $\mathtt{Add}\;n$
and $\mathtt{Sub}\;n$, respectively.
Each effect in the effect class behaves similarly with respect to the
guards $\mathtt{LE}$ and $\mathtt{EQ}$.
For example, all $\mathtt{Sub}\;n$ effects may cause the condition
$\mathtt{LE} := s_g \geq s_r$ to be violated if the global store is
updated with them, while $\mathtt{Add}\;n$ cannot cause the same.
\end{example}
\subsection{CARD Executions}
\label{sec:cards-executions}
Following standard practice
(see~\cite{Burckhardt:Book,Burckhardt:2012,Burckhardt:2014,Gotsman:2016}),
we describe the execution history of an eventually consistent
replicated store using a set of \emph{events} that each represent the
execution of a single operation on the data store.
Events contain an effect that changes the store and a return value
that gives some information about the store back to the caller.
In addition, CARD events contain a set of \emph{active guards} that represent
the semantic consistency restrictions on the event.
Events are ordered by an \emph{arbitration total order} in order to
support CARDs with non-commutable effects.
Such an order must be decided consistently by all members of the
replicated store without coordination -- time stamps and Lamport
clocks can be used for this purpose, or it can be omitted in
implementation for systems which only make commutable store updates.
\paragraph{Active Guards}
Each event has a set of (zero or more) \emph{active guards} (or AGs for short).
An event's AGs represent consistency guards that a replica had when producing
the effect.
Since we will allow a replica to impose a series of consistency guards to
produce one effect, each event might have more than one AG.
Associated with each AG is the subset of previous events that the
replica witnessed when it imposed the consistency guard.
This encodes the standard visibility relation between an AG and an event.
The AG is also associated with the consistency guard it represents.
\paragraph{$D$-executions}
Formally, a \emph{$D$-execution} for a CARD $D$ is a tuple
$L=(s_0,W,G,\mathsf{grd},\mathsf{ar},\vis)$ where:
\begin{itemize}
\item $s_0:S(D)$ is the initial store value
\item $W$ is a finite set of events.
\item $G$ is a finite set of active guards.
\item $\mathsf{grd} : W \to \mathbb{P}(G)$ gives the set of AGs for an event.
Every AG is associated with a single event, which we denote by
$\mathsf{grd}^{-1} : G \to W$.
\item $\mathsf{ar} \subseteq (W \times W)$ is the arbitration total order on
  events.
\item $\vis \subseteq (W \times G)$ is our guard-based visibility
relation, which indicates whether an AG witnesses an event. We
denote by $\vis^{-1} : G \to \mathbb{P}(W)$ the set of all events
witnessed by an AG.
\end{itemize}
A $D$-execution also defines the following functions for examining
events and active guards:
\begin{itemize}
\item $\eff: W \to E(D)$ gives the $D$-effect an event holds
\item $\mathsf{rval}: W \to A$ gives the return value (of some type $A$) an
event holds
\item $\mathsf{gc} : G \to C(D)$ gives the consistency guard an active guard
was formed from.
\end{itemize}
\begin{example}
\label{ex:events}
In our running bank account example, two instances of events can be:
\begin{compactitem}
\item A withdraw event $\eta_w$ with effect
$\eff(\eta_w)=\lambda s.\; s - 10$ reducing the store by $10$
while returning the value $\mathsf{rval}(\eta_w)=10$ and guarded by a
singleton active guard set $\mathsf{grd}(\eta_w)=\{g\}$ which maintains
consistency guard $\mathsf{gc}(g)=\mathtt{LE}$ for the store with respect
to $100$, the store value it witnessed when $\eta_w$ was being
created.
\item A deposit event $\eta_d$ with effect
$\eff(\eta_d)=\lambda s.\; s + 100$ indicating that the effect of
the event increases the store value by $100$, while returning
$\mathsf{rval}(\eta_d)=100$, and being (not) guarded by an empty set
$\mathsf{grd}(\eta_d)=\emptyset$ indicating that the replica made no store
queries when creating $\eta_d$.
\end{compactitem}
\end{example}
\paragraph{Evaluations}
The \emph{store evaluation} of a $D$-execution $L$, written as
$\eval(L)$ is the store value arrived at by starting with $s_0$ and
applying $\eff(\eta_i)$ for each $\eta_i \in W$ in $\mathsf{ar}$ order.
Formally, if $W = \{ \eta_0, \eta_1, \ldots \eta_n \}$ with each
$i < j \implies \mathsf{ar}(\eta_i, \eta_j)$, then
$\eval(L) = (\denote{\eff(\eta_n)} \circ \denote{\eff(\eta_{n-1})}
\cdots \denote{\eff(\eta_0)})(s_0)$.
\begin{example}
\label{ex:execs}
Continuing Example~\ref{ex:events}, given a $D$-execution
$L = (0, \{ \eta_w, \eta_d \},G,\mathsf{grd},\mathsf{ar},\vis)$ where
$\mathsf{ar}(\eta_d, \eta_w)$, the store evaluation $\eval(L)$ is given by
$((\lambda s.\; s - 10 \;\circ\; \lambda s.\; s + 100)) (0)$, i.e.,
$90$.
\end{example}
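The store evaluation is a left fold of the effect denotations in $\mathsf{ar}$ order; Example~\ref{ex:execs} replays as follows (Python sketch, with the effect denotations written inline):

```python
from functools import reduce

def evaluate(s0, effects_in_ar_order):
    """Store evaluation: fold each effect denotation over the initial
    store value, in arbitration order."""
    return reduce(lambda s, eff: eff(s), effects_in_ar_order, s0)

# Example: a deposit of 100 arbitrated before a withdraw of 10, from store 0
deposit_100 = lambda s: s + 100
withdraw_10 = lambda s: s - 10
print(evaluate(0, [deposit_100, withdraw_10]))  # 90
```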
\begin{definition}[sub-executions]
We define a \emph{sub-execution} of a $D$-execution
$L=(s_0,W,G,\mathsf{grd},\mathsf{ar},\vis)$ as any other $D$-execution
$L'=(s_0,W',G',\mathsf{grd}',\mathsf{ar}',\vis')$ for which $W' \subseteq W$,
$G' \subseteq G$, $\mathsf{grd}' \subseteq \mathsf{grd}$, $\mathsf{ar}' \subseteq \mathsf{ar}$,
$\vis' \subseteq \vis$, and
$\forall \eta \in W'.\; \mathsf{grd}(\eta)=\mathsf{grd}'(\eta)$ (so that any
remaining event retains all its active guards).
\end{definition}
The above definition says that for $L'$ to be a sub-execution, $W'$ must retain
any event that is visible to any guards remaining in $G'$ (and thus which has
``caused'' any observable effect).
\paragraph{Pre-Executions}
We define the \emph{pre-execution} of an event $\eta$ in a
$D$-execution $L$ as the sub-execution of $L$'s components to the events
ordered by $\mathsf{ar}$ before $\eta$, and we
write this as $L_{\eta}$ for short.
The \emph{pre-store} of $\eta$ is then the evaluation of $L_{\eta}$,
and the \emph{post-store} is $\denote{\eff(\eta)}(\eval(L_{\eta}))$.
In further discussion, the \emph{global store value} when an operation
is being executed at a replica, refers to the pre-store value in the
abstract execution (as per the arbitration order).
Note that this global store value is not stored explicitly, and the
replica executing an operation cannot learn the global store value
without additional coordination with other replicas.
\begin{example}
\label{ex:pre-execs}
Continuing Example~\ref{ex:execs}, the pre-execution of $\eta_w$ is
given by
$L_{\eta_w} = (0, \{ \eta_d
\},\emptyset,\emptyset,\emptyset,\emptyset)$. The pre-store and
post-store values are $100$ and $90$, respectively.
\end{example}
Similarly, we define the \emph{vis-execution} of a guard $g$ in a
$D$-execution $L$ as the pre-execution of $L$'s components to the events
in $\vis^{-1}(g)$, and we write this as
$L_g$ for short.
The \emph{vis-store} of $g$ is then the evaluation of $L_g$.
\paragraph{Well-Formed Executions}
We consider a $D$-execution \emph{well-formed} if all of the following
hold:
\begin{enumerate}
\item An event's AGs can only be influenced by other events which are
preceding ($\mathsf{ar}$ respects $\vis$, causal consistency), i.e.,
$
\forall \eta_1 \in W.\; \forall g \in \mathsf{grd}(\eta_1).\; \forall
\eta_2 \in \vis^{-1}(g).\; \mathsf{ar}(\eta_2,\eta_1)
$
\item All AGs are satisfied, meaning that their pre-store and
vis-store satisfy their consistency guard (guard-compliance), i.e.,
$
\forall \eta \in W.\; \forall g \in \mathsf{grd}(\eta).\;
\mathsf{gc}(g)(\eval(L_{\eta}),\eval(L_g))
$
\item An AG that sees an event also sees the preceding events seen by
that event's AGs (transitivity of $\vis$), i.e.,
$
\forall \eta_1,\eta_2 \in W.\; \forall g_2,g_3 \in G.\;
\vis(\eta_1,g_2) \land g_2 \in \mathsf{grd}(\eta_2) \land \vis(\eta_2,g_3)
\Rightarrow \vis(\eta_1,g_3)
$
\end{enumerate}
\paragraph{Event Specifications}
We specify correctness of events using constraints on the
relation between the pre-store value $s$ before the execution of the
event, the post-store value $s'$ after the execution of the event, and
the return value $a$ associated with the event.
Formally, an \emph{event specification} is a predicate $\varphi$ of
type $S \times S \times A \to \mathbb{B}$.
\begin{definition}[Satisfaction of an Event Specification]
\label{def:event-sat}
An event $\eta$ in an execution $L$ \emph{satisfies} a specification
$\varphi$, written $\eta \models_L \varphi$, iff $\varphi$ holds
for $\eta$'s pre-store as $s$, $\eta$'s post-store as $s'$ and
$\eta$'s return value as $a$.
\[
\eta \models_L \varphi \Leftrightarrow s=\eval(L_{\eta}) \land
s'=\denote{\eff(\eta)}(s) \land a=\mathsf{rval}(\eta) \Rightarrow \varphi(s,s',a)
\]
\end{definition}
\begin{example}
\label{ex:event-specs}
For the running bank account example, we may want the properties that
\begin{inparaenum}[(a)]
\item the post-store value is non-negative, and
\item the change in the store value is equal to the return value of
each event.
\end{inparaenum}
The event specification $\varphi(s, s', a) := s' \geq 0 \land s -
s' = a$ exactly states this specification.
Both the events $\eta_w$ and $\eta_d$ satisfy this specification: for
example, in the case of $\eta_w$, we have $s \geq 100 \land s' = s -
10 \implies s' \geq 0 \land s - s' = 10$, i.e., $\varphi(s, s', 10)$ holds.
\end{example}
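Checking satisfaction is a direct evaluation of $\varphi$ on an event's pre-store, post-store, and return value; here only $\eta_w$ is spot-checked (Python sketch):

```python
# event specification: non-negative post-store, change equals return value
phi = lambda s, s_post, a: s_post >= 0 and s - s_post == a

# eta_w: pre-store 100, effect "subtract 10", return value 10
s_pre = 100
s_post = s_pre - 10
print(phi(s_pre, s_post, 10))  # True
```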
In Section~\ref{sec:lang}, we describe $\lambda^Q$, a programming
language for writing CARD \emph{operations}, programs that dynamically
produce an event based on a replica's (limited) knowledge of the current
store value.
The operational semantics of $\lambda^Q$ operations only produce well-formed
executions (Theorem~\ref{thm:op-rules-wf}). The type system of $\lambda^Q$ can
be used to check that an operation only produces events which satisfy a
particular specification (Theorem~\ref{thm:prod-event-sat}). This property makes
proving invariants straightforward (Theorem~\ref{lem:invariants}).
\subsection{CARD Operations}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{msc/grammar-wide.png}
\caption[$\lambda^Q$ syntax]{Terms, values, and dependent types of
$\lambda^Q$. The rules for deriving $\tau$ types for terms are
found in Figure~\ref{fig:LQ:typing-rules}. The $k$ metavariable
represents \texttt{Bool} and \texttt{Int} constants, and $c$
represents consistency guards.}
\label{fig:LQ:syntax}
\end{figure}
The $\lambda^Q$ syntax includes two special value terms that interact with a
replicated store.
\begin{description}
\item[Query] The $Q c \triangleright x. t$ term defines an operation
that queries the global store value up to the consistency predicate
$c$, binding the value to $x$ before executing the sub-operation
$t$. As stated before, the global store value is not explicitly
stored. Intuitively, to execute the query, a replica coordinates
with other replicas ensuring that any effects that violate $c$ are
either arbitrated before the current operation, or after the current
operation has finished executing.
\item[Return+Emit] The $R. (t_e,t_a)$ term defines a trivial operation which
performs no query and applies $(t_e,t_a)$ as the operational result,
in which $t_e$ is an effect emitted onto the store and $t_a$ is
a return value that is evaluated and returned to the caller.
%
If the $R$ term is nested inside a $Q$ term, the effect and return
values may include information read from the store.
\end{description}
\begin{example}
\label{ex:lq:qrterm}
The basic withdraw bank account operation is expressed in
$\lambda^Q$ as
follows:\[ \mathtt{withdraw} := \lambda n .\; Q (s_g \geq s_r)
\triangleright x .\; \mathtt{if}~(x > n)~\mathtt{then}~R
(\mathtt{Sub}\; n, n)~\mathtt{else}~R (\mathtt{Add}\; 0, 0) \]
Here, the global store value is queried up to the predicate
$s_g \geq s_r$, i.e., the value bound to $x$ is at most the global
value, and the operation is executed assuming that the store value
is $x$.
The more involved ``strong'' withdraw operation would be expressed
as:
\begin{align*}
\mathtt{swithdraw} := \lambda n .\; & Q (s_g \geq s_r)
\triangleright x .\; \mathtt{if}~(x > n)~\mathtt{then}~R (\mathtt{Sub}\; n, n) \\
& \mathtt{else}~Q (s_g = s_r) \triangleright x .\; \mathtt{if}~(x >
n)~\mathtt{then}~R (\mathtt{Sub}\; n, n)~\mathtt{else}~R (\mathtt{Add}\; 0, 0)
\end{align*}
The first query and the \texttt{then} branch act as the standard
withdraw operation, while the second query (with the stronger
consistency predicate $s_g = s_r$) learns the exact value of the
global store (forcing pending deposit operations to commit), and then
executes the withdraw.
%
This operation avoids the stronger coordination needed for the
second, ``full'' query if it can work safely from just the first
partial one, while still always making the withdrawal if it's
absolutely possible.
For completeness, the deposit operation (which does not need a
query) would be expressed as
$\mathtt{deposit} := \lambda n .\; R .\; (\mathtt{Add}\; n, n)$.
\end{example}
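One lightweight reading of these operations is as closures from the queried store view to an (effect, return value) pair, with the query's guard carried alongside. This encoding is ours and deliberately elides the coordination semantics of the $Q$ term:

```python
# withdraw := \n. Q (s_g >= s_r) |> x. if x > n then R (Sub n, n)
#                                      else R (Add 0, 0)
def withdraw(n):
    def body(x):  # x: a store view satisfying the LE guard
        return (("Sub", n), n) if x > n else (("Add", 0), 0)
    return ("LE", body)

# deposit := \n. R. (Add n, n) -- no query needed
def deposit(n):
    return (None, lambda _x: (("Add", n), n))

guard, body = withdraw(10)
print(guard, body(100))  # LE (('Sub', 10), 10)
print(body(5))           # (('Add', 0), 0)
```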
\begin{figure}[h]
\begin{ottdefnblock}[]{$\Gamma \vdash \ottnt{t} \ottsym{:} \tau$}{}
\ottusedrule{\ottdruletypeXXvar{}}
\ottusedrule{\ottdruletypeXXlambda{}} \\[2mm]
\ottusedrule{\ottdruletypeXXite{}} \\[2mm]
\ottusedrule{\ottdruletypeXXq{}} \\[2mm]
\ottusedrule{\ottdruletypeXXr{}} \\[2mm]
\end{ottdefnblock}
\begin{mathpar}
\inferrule*[Right=LT-Sub]{
\Gamma \vdash t : S_1 \\
\Gamma \vdash S_1 <: S_2 \\
\Gamma \vdash S_2
}{
\Gamma \vdash t : S_2
}
\and
\inferrule*[Right=Dec-$<:$-Base]{
\text{Valid}(\llbracket \Gamma \rrbracket \land \llbracket t_1
\rrbracket \Rightarrow \llbracket t_2 \rrbracket)
}{
\Gamma \vdash \{\nu : B \;|\; t_1\} <: \{\nu : B \;|\; t_2\}
}
\end{mathpar}
\caption{Typing and sub-typing rules for $\lambda^Q$.}
\label{fig:LQ:typing-rules}
\label{fig:lt-sub}
\vspace{-3ex}
\end{figure}
\subsection{Operation Types}
The type system for $\lambda^Q$ (detailed in
Figure~\ref{fig:LQ:typing-rules}) extends Liquid
Types~\cite{liquidtypes} on the CBV $\lambda$-calculus.
For those unfamiliar, liquid types refine standard types with
predicates on the values.
For example, the typing judgement $t : \{ \nu : \mathtt{Int} \mid \nu > 5
\}$ asserts that the term $t$ is an integer whose value is greater
than $5$.
In Figure~\ref{fig:LQ:typing-rules}, standard terms in the language
are typed as per standard liquid types, while CARD operations are
typed under a special $Op$ type.
The typing judgement $t : Op(D, A, \varphi)$ indicates that $t$ is an
operation for the CARD $D$ that returns a value of type $A$
and that any $D$-execution event that results from the operation
satisfies the event specification $\varphi$.
Intuitively, the \textsc{type_q} rule is similar to a conditional
guard rule: if a term $t$ is of type $Op(D,A,\varphi)$ given the
additional premise $\denote{c}$, the term $Q c \triangleright x. t$ is
of type $Op(D, A, \varphi)$.
The \textsc{type_r} rule derives our \texttt{Op} type for a base $R$
term from a standard Liquid Type judgment, stating that the return
value and the denotation of the effect in the $R$ term must together
(in the logical constraint context of $\Gamma$) ensure the \texttt{Op}
type's $\varphi$ specification holds.
The refinement part of this Liquid Type judgment becomes a simple
logical constraint problem according to the rules in
Figure~\ref{fig:lt-sub}.
In these rules, $<:$ is the ``subtype'' relation, which states that
the left hand side has the same basic type as the right hand side, and
that the left's refinement implies the right's refinement.
The denotational brackets on $\llbracket \Gamma \rrbracket$ reduce the
context to the set of logical statements contained in its refinements.
\begin{figure}
\centering
\[
\inferrule*[Right=type_lambda]{
\inferrule*[Right=type_q]{
\inferrule*[Right=type_ite]{
\inferrule{
}{
n:\mathtt{Nat}, x:\{\nu:S(\mathtt{Counter})\;|\;x \le s\}
\vdash (x \ge n) : \mathtt{Bool}
} \\
n:\mathtt{Nat}, x:\{\nu:S(\mathtt{Counter})\;|\;x \le s\},x
\ge n \vdash \{\ldots(\emph{then})\} \\
n:\mathtt{Nat}, x:\{\nu:S(\mathtt{Counter})\;|\;x \le s\},\neg(x
\ge n) \vdash \{\ldots(\emph{else})\}
}{
n:\mathtt{Nat}, x:\{\nu:S(\mathtt{Counter})\;|\;x \le s\} \vdash \{\mathtt{if}\ldots\} : \mathtt{Op}(\mathtt{Counter},\mathtt{Int},\varphi)
} \\
\inferrule{
}{
n:\mathtt{Nat} \vdash \mathtt{LE} : C(\mathtt{Counter})
}
}{
n : \mathtt{Nat} \vdash Q\;\mathtt{LE} \triangleright x.\; \{\mathtt{if}\ldots\} \; : \; \mathtt{Op}(\mathtt{Counter},\mathtt{Int},\varphi)
}
}{
\bullet \vdash \lambda n .\; Q\;\mathtt{LE} \triangleright x.\; \{\mathtt{if}\ldots\} \; : \; (n:\mathtt{Nat}) \to
\mathtt{Op}(\mathtt{Counter},\mathtt{Int},\varphi)
}
\]
\caption{Derivation of \texttt{withdraw} type down to branches with
base $R$ terms.}
\label{fig:ex-type-1}
\end{figure}
\begin{figure}
\centering
\[
\inferrule*[Right=type_r]{
\inferrule{
\inferrule*[Right=type_var]{
}{
\Gamma^+ \vdash n : \mathtt{Nat}
}
}{
\Gamma^+ \vdash \mathtt{Sub}\;n : E(\mathtt{Counter})
} \\
\Gamma^+ \vdash n : \{\nu :
\mathtt{Int}\;|\;s'=\denote{\mathtt{Sub}\;n}(s) \Rightarrow \varphi\}
}{
\Gamma^+ \vdash R.\; (\mathtt{Sub}\;n,n) :
\mathtt{Op}(\mathtt{Counter},\mathtt{Int},\varphi)
}
\]
\caption{Derivation of $R$ term for \texttt{withdraw}'s success
branch down to standard Liquid Type.}
\label{fig:ex-type-2}
\end{figure}
\begin{figure}
\centering
\[
\inferrule*[Right=LT-Sub]{
\inferrule*[Right=Dec-<:-Base]{
\text{Valid}(\llbracket \Gamma \rrbracket \land \llbracket \mathtt{Nat}
\rrbracket \Rightarrow \llbracket \{\nu :
\mathtt{Int}\;|\;s'=\denote{\mathtt{Sub}\;n}(s) \Rightarrow
\varphi\} \rrbracket)
}{
\Gamma^+ \vdash \mathtt{Nat} <: \{\nu :
\mathtt{Int}\;|\;s'=\denote{\mathtt{Sub}\;n}(s) \Rightarrow
\varphi\}
}
}{
\Gamma^+ \vdash n : \{\nu :
\mathtt{Int}\;|\;s'=\denote{\mathtt{Sub}\;n}(s) \Rightarrow
\varphi\}
}
\]
\caption{Derivation for one of \texttt{withdraw}'s Liquid Type
obligations into logical constraint problem.}
\label{fig:ex-type-3}
\end{figure}
\begin{example}
\label{ex:lq:types}
As an end-to-end demonstration, we now type-check the
\texttt{withdraw} operation according to the specification we have
been using, for which
%
\[\varphi := (s \ge 0 \Rightarrow s' \ge 0) \land (a = s - s')\]
%
We first follow the derivation in Figure~\ref{fig:ex-type-1},
storing in the context the constraint on $s$ (the pre-store value)
that the query on $\mathtt{LE}$ gives us.
%
This produces two unsolved branches, one for the \texttt{then}
branch of the \texttt{if} term on which we can assume $x \ge n$, and
one on the \texttt{else} branch where we assume the opposite.
%
Like the query constraints, these assumptions are added to the
context.
We now elide the trivial \texttt{else} branch and follow the
\texttt{then} branch, referring to the context so far (including
$x \ge n$) as $\Gamma^{+}$, in Figure~\ref{fig:ex-type-2}.
%
This takes us to the standard Liquid Type obligation
%
\[
\Gamma^+ \vdash n : \{\nu :
\mathtt{Int}\;|\;s'=\denote{\mathtt{Sub}\;n}(s) \Rightarrow
\varphi\}
\]
%
which may look strange since $n$ already has the type \texttt{Nat}
in $\Gamma^+$.
%
This is where, in Figure~\ref{fig:ex-type-3}, we use the Liquid Type
subtyping rules to reduce the obligation to a logical constraint
problem which we can verify by hand or with an SMT solver, and in
which we are aided by the $s$ constraint from our guarded query:
%
\begin{mathpar}
\llbracket \Gamma^+ \rrbracket \land \llbracket \mathtt{Nat}
\rrbracket \Rightarrow \llbracket \{\nu :
\mathtt{Int}\;|\;s'=\denote{\mathtt{Sub}\;n}(s) \Rightarrow
\varphi\} \rrbracket = \\
(n \ge 0) \land (x \le s) \land (x \ge n) \land (s' = s - n)
\Rightarrow (s \ge 0
\Rightarrow s' \ge 0) \land (a = s -
s')
\end{mathpar}
%
Deciding this as valid, we have thus verified that \texttt{withdraw}
has our desired behavior in a concurrent setting.
\end{example}
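The final implication is simple enough to check mechanically. As a sanity check (our sketch; an SMT solver would be used in practice), it can be verified by exhaustive enumeration over a small range, with the variable $a$ bound to $n$, the return value of the $R$ term.

```python
# Exhaustively check, over a small range, the constraint
#   (n >= 0) /\ (x <= s) /\ (x >= n) /\ (s' = s - n)
#     ==> (s >= 0 ==> s' >= 0) /\ (a = s - s')
# where a is bound to n, the return value of R (Sub n, n).

def withdraw_constraint_valid(bound=12):
    rng = range(-bound, bound + 1)
    for n in rng:
        for x in rng:
            for s in rng:
                if n >= 0 and x <= s and x >= n:
                    s_post = s - n      # s' = s - n
                    a = n               # return value
                    if s >= 0 and not s_post >= 0:
                        return False    # invariant clause fails
                    if a != s - s_post:
                        return False    # return-value clause fails
    return True
```

The premises give $s \ge x \ge n \ge 0$, so $s' = s - n \ge 0$ and $a = n = s - s'$ hold on every assignment.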
\subsection{Operation Executions}
$\lambda^Q$ follows the standard semantics of the CBV
$\lambda$-calculus for evaluating standard terms (terms with standard
refinement types, excluding the Op type).
We use the judgement $t \Downarrow_{\lambda} t'$ to represent the
standard big-step semantics for CBV $\lambda$-calculus.
Operations, i.e., terms of type $\text{Op}(D,A,\varphi)$, cannot be
evaluated in a pure setting.
Rather, they are executed by replicas, which may query values from the
global replicated store.
The state of the operational evaluation is represented by
$(s, \psi, t)$ where $s$ is the global store value, $\psi$ is the
accumulated active consistency guard, and $t$ is the term to be
evaluated.
Each execution step is described abstractly by the \emph{operation
execution rules} (Figure~\ref{fig:LQ:exec-rules}):
\begin{compactitem}
\item Query-evaluation step: A query evaluation step represents a
replica executing $Q c \triangleright x .\; t$, i.e., querying the
evaluation global store under the query predicate $c$, and
evaluating the term $t$ with $x$ bound to the value of the query.
The replica obtains (non-deterministically, at this level) a value
$s_x$ such that $c(s, s_x)$ holds for the current global store value
$s$; $\psi$ is strengthened with the clause $[s_x / s_r]c$, and the
resulting term is obtained by substituting $s_x$ for $x$ in $t$.
\item Drift step: A drift step represents the global store value
  changing due to the execution of a different replica.
However, the $\psi$ value in the execution context restricts the
change so that snapshots which have been substituted into $t$ (by
steps of the \textsc{query} rule) remain consistent according to the
guards they were queried with.
Note that this rule makes the execution non-deterministic.
\end{compactitem}
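These two rules can be rendered as a small executable sketch (our illustration of the rules, not the Ott definitions): the state carries the global store value $s$ and the accumulated guard $\psi$, a query adds the closed clause $[s_x/s_r]c$ to $\psi$, and a drift may change $s$ only if every accumulated clause stays satisfied.

```python
# Sketch of the operation-execution state (s, psi, t).  A guard c is a
# binary predicate c(s_g, s_r); psi is kept as a list of closed clauses
# [s_x/s_r]c, i.e. unary predicates over the global store value.

class Exec:
    def __init__(self, s):
        self.s = s              # global store value
        self.psi = []           # accumulated active consistency guard

    def query(self, c, s_x):
        # QUERY step: snapshot s_x must satisfy c against the current
        # global value; its clause is then added to psi.
        assert c(self.s, s_x), "snapshot inconsistent with query guard"
        self.psi.append(lambda s_g, c=c, s_x=s_x: c(s_g, s_x))
        return s_x              # the value bound to x in the term

    def drift(self, eff):
        # DRIFT step: another replica's effect changes the global value,
        # but the new value must still satisfy every clause of psi.
        s_new = eff(self.s)
        assert all(clause(s_new) for clause in self.psi), "drift violates psi"
        self.s = s_new

LE = lambda s_g, s_r: s_g >= s_r    # the running example's query guard
```

A drift that would invalidate an earlier snapshot (e.g., dropping the store below a value queried under \texttt{LE}) is rejected, which is exactly the restriction the drift rule imposes.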
Fully executing an operation $t$ with type $\text{Op}(D,A,\varphi)$
from $(s_0, \top, t)$ produces some $(s', \psi, R. (e, a))$ where $a:A$ is the
return value and $\denote{e}(s')$ is the final value of the global store. By
the soundness of liquid types, we get that
$(s', \denote{e}(s'), a) \models \varphi$.
\begin{figure}[h]
\centering
\ottdefnprocXXstep{}
\caption{Operation execution rules}
\label{fig:LQ:exec-rules}
\end{figure}
\begin{example}
\label{ex:lq:opexec}
We describe one execution each of the deposit, withdraw, and strong
withdraw operations in the bank account example.
The steps resulting from query and drift steps are
superscripted with $Q$ and $D$, respectively.
\begin{compactitem}
\item The evaluation of $\mathtt{deposit}\; 100$ can produce the
following sequence:
$(0, \top, \mathtt{deposit}\; 100) \mapsto^{Q} (0, \top, R
(\mathtt{Add}\; 100, -100))
$
\item The evaluation of $\mathtt{withdraw}\; 10$ can produce the
following sequence:
$(0, \top, \mathtt{withdraw}\; 10) \mapsto^{D} (100, \top,
\mathtt{withdraw}\; 10) \mapsto^{Q} (100, s_g \geq 100, R
(\mathtt{Sub}\; 10, 10))$
\item The nested queries in $\mathtt{swithdraw}$ lead to multiple
query steps in the evaluation.
The following is a valid evaluation sequence:
$(0, \top, \mathtt{swithdraw}\; 10) \mapsto^{Q} (0, s_g \geq 0,
t_{iq}) \mapsto^{D} (100, s_g \geq 0, t_{iq}) \mapsto^{D} (90, s_g
\geq 0, t_{iq}) \mapsto^{Q} (90, s_g \geq 0 \land s_g = 90, R
(\mathtt{Sub}\; 10, 10))$
where $t_{iq} := Q (s_g = s_r) \triangleright x .\;
\mathtt{if}~(x > 10)~\mathtt{then}~R (\mathtt{Sub}\; 10,
10)~\mathtt{else}~R (\mathtt{Add}\; 0, 0)$.
\end{compactitem}
\end{example}
\paragraph{Combining Multiple Operational Executions.}
The operation execution rules produce a sequence of evaluation steps
corresponding to the invocation of a single operation. We now
describe how a number of different (possibly concurrent) operation
invocations correspond to a CARD execution. Intuitively, the CARD
execution must be produced by combining the update steps of an
operation execution for each invocation. The drift steps in the
operation execution of $t$ correspond exactly to the updates of all
the operations arbitrated before the effect produced by $t$, and the
query steps must take as their $s_x$ value a post-store of some subset
of the effects arbitrated before.
Given a set of $D$-operation invocations $T$ with
$\mathsf{op} : T \to \mathtt{Op}(D,A,\varphi)$ giving the operation term for
each invocation, we say a CARD execution $L = (s_0,W,G,\mathsf{grd},\mathsf{ar},\vis)$
is \emph{produced by} $T$ iff there exists a one-to-one correspondence
between events $\eta_i \in W$ and operation invocations $t_i \in T$
such that:
\begin{compactitem}
\item there exists an operation execution for $t_i$ of the form
$(s_0,\top,\mathsf{op}(t_i)) \mapsto^{*} (s_i, \psi_i, R (e_i, a_i))$ in
which
$\psi_i=[s_{x1}/s_r]c_1 \land [s_{x2}/s_r]c_2 \land \ldots \land
[s_{xn}/s_r]c_n$,
\item $\mathsf{grd}(\eta_i)$ contains $n$ active guards corresponding to the
$n$ clauses in $\psi_i$; $g_j$ corresponds to clause
$[s_{xj}/s_r]c_j$ in $\psi_i$ such that $\mathsf{gc}(g_j)=c_j$,
$\eval(L_{g_j})=s_{xj}$ and $\vis^{-1}(g_j)$ includes the $\vis^{-1}$
set of each guard of each event in $\vis^{-1}(g_j)$.
\item the \textsc{drift} steps in $t_i$'s operation execution
correspond, in order, to the preceding events in $L_{\eta_i}$ such
that for $\eta_j\in L_{\eta_i}$, $\eff(\eta_j)$ is the effect
quantified in the corresponding \textsc{drift} step's premise,
\item $\eval(L_{\eta_i})=s_i$,
\item $\eff(\eta_i)=e_i$, and
\item $\mathsf{rval}(\eta_i)=a_i$.
\end{compactitem}
\begin{example}
\label{ex:lq:op-execs-to-card-exec}
The operational executions of the deposit, withdraw and strong
withdraw operations from Example~\ref{ex:lq:opexec} can produce the
abstract execution
$L = (0, \{\eta_d, \eta_w, \eta_{sw}\},G,\mathsf{grd},\mathsf{ar},\vis)$ where:
\begin{inparaenum}[(a)]
\item $\eta_d := (\top, \lambda s .\; s + 100, -100)$,
\item $\eta_w := (s_g \geq 100, \lambda s .\; s - 10, 10)$, and
\item $\eta_{sw} := (s_g \geq 0 \land s_g = 90, \lambda s .\;
s - 10, 10)$.
\end{inparaenum}
The exact correspondence between the abstract execution and the
operational executions is depicted in
Figure~\ref{fig:op-execs-to-card-exec}.
\end{example}
\begin{figure}
\input{fig-dep-witd-switd}
\caption{Correspondence between operational executions of
$\mathtt{deposit}\; 100$, $\mathtt{withdraw}\; 10$, and
$\mathtt{swithdraw}\; 10$ and an abstract execution.}
\label{fig:op-execs-to-card-exec}
\end{figure}
\begin{theorem}[Well-Formedness of Operation Executions]
\label{thm:op-rules-wf}
Any $D$-execution $L$ that is produced by a set of $D$-operations
$T$ is well-formed (by the definition in
Section~\ref{sec:cards-executions}).
%
\proof{
%
The non-trivial part is guard-compliance.
%
We prove guard-compliance by induction on the operation execution
step sequence corresponding to each event $\eta$, with
I.H. $s \models \psi$.
%
\begin{itemize}
\item Base: $s=s_0$, $\psi$ is empty, trivially satisfied.
\item Step with \textsc{query}: I.H. gives $s \models \psi$,
\textsc{query} premise gives $s \models [s_x/s_r]c$, thus $s
\models \psi \land [s_x/s_r]c$.
\item Step with \textsc{drift}: Premise gives $s' \models \psi$.
\end{itemize}
%
The pre-store of $\eta$ must be equal to the $s$ value of its
operation's final context because each event in $L_{\eta}$ applies
the same effect as its corresponding \textsc{drift} step.
%
The vis-store of any $g\in \mathsf{grd}(\eta)$ is equal to the $s_x$ in
its $\psi$ clause by the definition of producing a $D$-execution.
%
Thus all guards in $L$ are satisfied by their pre-store and
vis-store values. $\Box$
%
}
\end{theorem}
\begin{theorem}[Preservation for Operation Executions]
\label{thm:soundness2}
For any derived term $\Gamma \vdash t : \text{Op}(C,A,\varphi)$ and
starting state $s_0$, if an operation execution
$(s_0,\top,t) \longmapsto^{*} (s',\psi',R. (e,a))$ exists, then
$\psi' \Rightarrow \varphi(s',\denote{e}(s'),a)$.
%
\proof{
%
We must show that $\psi$ is made strong enough to guarantee
$\varphi$ for a term $\Gamma \vdash t : \text{Op}(D,A,\varphi)$.
%
We begin by inductively evaluating and analyzing the type
derivation of $t$ side by side, showing that at each step,
$\denote{\Gamma} \Rightarrow \psi$.
%
\begin{description}
\item[Base:] $\Gamma=\psi=\top$.
\item[Case $Q c \triangleright x.\; t'$:] We evaluate this term by
a \textsc{query} step, adding $[s_x/s_r]c$ to $\psi$ and
replacing $x$ with $s_x$ in $t'$.
%
We type this term by the \textsc{type_q} rule, adding $[x/s_r]c$
to $\denote{\Gamma}$.
%
So our knowledge of $s_x$ in the evaluated $t'$ is matched by
our knowledge of $x$ in the typed $t'$, and $(\denote{\Gamma}
\Rightarrow \psi) \Rightarrow (\denote{\Gamma}\land [x/s_r]c
\Rightarrow \psi \land [s_x/s_r]c)$.
\item[Case (any other):] This term is evaluated by the standard
$\lambda$-calculus rules and does not add any obligations to
$\psi$.
\end{description}
%
We have thus evaluated $t$ to a configuration
$(s,\psi,R.\;(t_e,t_a))$ and followed its type derivation to a
term $\Gamma \vdash R.\;(t_e,t_a) : \text{Op}(D,A,\varphi)$ such
that $\denote{\Gamma} \Rightarrow \psi$ (when $x$'s in $\Gamma$
are replaced with their corresponding $s_x$'s).
%
The remaining obligation of the type derivation shows that the
contents of $\denote{\Gamma}$ ensure that the final term satisfies
$\varphi$ under any compatible store value, and so $\psi$ must be
strong enough to ensure the same
(Def.~\ref{def:event-sat}). $\Box$
}
\end{theorem}
\begin{theorem}[Produced $D$-Events Satisfy Operation Specifications]
\label{thm:prod-event-sat}
Given an operation invocation $t_i$ in a set of invocations $T$ for
which $\mathsf{op}(t_i):\texttt{Op}(D,A,\varphi)$, the event $\eta_i$
corresponding to $t_i$ in any $D$-execution $L$ produced by $T$ via
the operation execution rules satisfies $\varphi$ (in the sense of
Def.~\ref{def:event-sat}).
%
\proof{
%
By Theorem~\ref{thm:soundness2}, we know that for any operation
execution step sequence for $t_i$ ending with
$(s',\psi',R.\;(t_e,t_a))$, we have
$\psi' \Rightarrow \varphi(s',\denote{t_e}(s'),t_a)$.
%
And so we have this statement for the operation execution sequence
that produces $\eta_i$, for which $\eff(\eta_i)=t_e$ and
$\mathsf{rval}(\eta_i)=t_a$.
%
The guards in $\mathsf{grd}(\eta_i)$ are together satisfied by the same
store values that $\psi'$ is satisfied by, and so guard compliance
(a component of well-formedness of $L$, which we have by
Theorem~\ref{thm:op-rules-wf}) ensures that
$\psi'(\eval(L_{\eta_i}))$.
%
Thus for $s=\eval(L_{\eta_i})$ we have
$\varphi(s,\denote{\eff(\eta_i)}(s),\mathsf{rval}(\eta_i))$, meaning that
$\eta_i \models_L \varphi$. $\Box$
%
}
\end{theorem}
Because operation-produced events respect their specifications, it
is easy to show that invariants can be maintained.
\begin{theorem}[Execution Invariants]
\label{lem:invariants}
Given a $D$-store predicate $I$ and a set of $D$-operation
invocations $T$, each of which has a type which includes
$I(s) \Rightarrow I(s')$ in its specification, any $D$-execution, which is
produced by $T$ and for which $I(s_0)$ holds, preserves $I$.
%
\proof{
%
This follows immediately from Theorem~\ref{thm:prod-event-sat}.
%
Every event in the produced execution will respect
$I(s) \Rightarrow I(s')$, and so $I$ is preserved over each effect
application.
%
}
\end{theorem}
\begin{example}
\label{ex:invariants}
%
Suppose we want to ensure that the invariant $I := s \geq 0$ holds for the
bank account example, i.e., that the account value is always non-negative. The
key insight from Theorem~\ref{lem:invariants} is that the task of ensuring
this invariant can be split into guaranteeing two separate properties:
%
\begin{compactitem}
\item the system only produces events that are
sound for the specification $I(s) \implies I(s')$, and
\item the executions are well-formed.
\end{compactitem}
%
For example, if every event produced by the system is in one of the forms of
$\eta_w$ or $\eta_d$ from Section~\ref{sec:cards} (with the constants $10$ and
$100$ replaced by any non-negative integer), all these events are guaranteed
to be sound for the specification. Further, the system would need to ensure
that these events are executed only in the contexts where the guards hold.
\end{example}
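The first of these two properties is a purely sequential check. A simplified sketch (our illustration; guards are modeled as unary predicates over the pre-store, and the event shapes paraphrase $\eta_w$ and $\eta_d$) verifies soundness for $I(s) \Rightarrow I(s')$ by enumeration.

```python
# Simplified check: an event (guard, effect) is sound for the invariant
# specification I(s) ==> I(s') if, for every pre-store s admitted by its
# guard, I(s) implies I(eff(s)).  Checked by enumeration.

I = lambda s: s >= 0                               # account never negative

def sound_for_invariant(guard, eff, domain=range(-50, 201)):
    return all(I(eff(s)) for s in domain if guard(s) and I(s))

eta_w = (lambda s: s >= 100, lambda s: s - 10)     # guarded withdraw event
eta_d = (lambda s: True,     lambda s: s + 100)    # unguarded deposit event
```

An unguarded withdrawal, by contrast, fails the check: from a small balance it would drive the store negative, which is why the withdraw event needs its guard.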
\subsection{Measures of Non-Conflict}
First, we define the following notion of an \emph{immediate accord} between an
effect and a guard.
An immediate accord existing between an effect $e:E$ and a guard $c:C$ implies
that the effect updating the global store cannot violate the consistency guard
in an execution of an action bound by a $c$ query, i.e., actions of the form $Q c
\triangleright x . t$.
\begin{definition}[Immediate Accord]
Given a CARD $D=(S,E,C)$, guard $c:C$ and effect $e:E$, an
\emph{immediate accord} exists between them, written as
$\text{IA}(c,e)$, iff
%
\[
\forall s_g, s_r : S. \; c(s_g,s_r) \implies
c(\denote{e}(s_g),s_r).
\]
\end{definition}
We denote by $\text{IAS}_D(c)$ the set of all $D$-effects in immediate
accord with $c$.
\begin{example}
\label{ex:locks:ia}
In the running example, there is an immediate accord between the
effect $\mathtt{Add}\; n$ and the guard $\mathtt{LE}$. However,
there is no immediate accord between $\mathtt{Add}\; n$ and
$\mathtt{EQ}$, or between $\mathtt{Sub}\; n$ and either of
$\mathtt{EQ}$ and $\mathtt{LE}$.
\end{example}
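Since $\text{IA}$ is a first-order condition on stores, it can be checked directly. The following sketch decides the accords of Example~\ref{ex:locks:ia} by brute force over a finite fragment of the \texttt{Counter} store (an SMT solver would handle the unbounded case).

```python
# IA(c, e) iff c(s_g, s_r) implies c(<<e>>(s_g), s_r); checked by brute
# force over a finite fragment of the Counter store.

def ia(c, e, domain=range(-20, 21)):
    return all(c(e(s_g), s_r)
               for s_g in domain for s_r in domain if c(s_g, s_r))

LE = lambda s_g, s_r: s_g >= s_r
EQ = lambda s_g, s_r: s_g == s_r
add = lambda n: (lambda s: s + n)    # <<Add n>>
sub = lambda n: (lambda s: s - n)    # <<Sub n>>
```

As in the example, only $\mathtt{Add}$ is in immediate accord with $\mathtt{LE}$, and neither effect is in immediate accord with $\mathtt{EQ}$.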
\begin{definition}[Careful Executions]
We call a $D$-execution $D=(L,G,\mathsf{grd},\mathsf{ar},\vis)$ \emph{careful} iff
for each $g \in G$ guarding an event $\eta$, $L_g$ contains all
events $\eta_i$ in $L_{\eta}$ for which $\eff(\eta_i)$ is not in
immediate accord with $\mathsf{gc}(g)$.
\end{definition}
A careful execution is always produced when a replica resolving a
query must see every event in the network which is not in immediate
accord with its guard.
This safety measure over-approximates the guard satisfaction condition
followed by the operation rules by excluding invisible subsets that
satisfy the guard ``by blind luck'', such as an invisible
account-emptying withdrawal followed by an invisible deposit that
undoes it (see Figure~\ref{fig:blind-luck-execs}).
\begin{figure}
\input{fig-blind-luck-execs}
\caption{Blind luck executions. There can be executions which have events of
not-in-accord operations that are invisible to one another, and still
produce a well-defined result.
}
\label{fig:blind-luck-execs}
\end{figure}
Intuitively, allowing an undetected ``lucky pair'' also allows an
undetected ``unlucky single'' which would make the query resolution
unsound.
We thus use the careful, well-formed $D$-execution as our basis for
the following definitions.
\paragraph{Transitive Accords.}
As illustrated in Section~\ref{sec:overview} (the joint account
CARD\xspace), it is not sufficient for a replica maintaining $c$ to
coordinate with replicas concurrently emitting effects
$e \not\in \text{IAS}_D(c)$. A second effect $e' \in \text{IAS}_D(c)$
that is concurrent to $e$ might change the behavior of $e$ if it is
arbitrated earlier. Hence, we now describe a stronger notion of
accords.
\begin{definition}[Transitive Accord]
A \emph{transitive accord} exists between an effect $e:E$ and a
guard $c:C$ (written as $\text{TA}_D(e,c)$) iff for any careful
$D$-execution $L=(s_0,W,G,\mathsf{grd},\mathsf{ar},\vis)$ containing an event $\eta$
guarded by $g$ with $\mathsf{gc}(g)=c$, and for any event $\eta' \notin W$
for which $\eff(\eta')=e$, the guard $g$ remains satisfied in
$L'=(W \cup \{\eta'\},G,\mathsf{grd},\mathsf{ar} \cup \{(\eta',\eta)\},\vis)$.
\end{definition}
A \emph{transitive accord set} for $c$ is a set of effects for
which transitive accords exist.
Intuitively, any replica maintaining a guard $c$ needs to coordinate
with replicas emitting effects which are not in its transitive accord
set because a new event arriving at the replica may be inserted
somewhere in the middle of history by the arbitrary ordering.
The following theorem states that finding the largest transitive
accord set is undecidable.
\begin{theorem}
\label{thm:undecidability}
Given a CARD $D$ and $D$-guard $c$, finding the largest cardinality
transitive accord set for $c$ is undecidable.
%
\proof{Sketch: the proof relies on constructing an effect $e$ which can
induce a violation of the guard $c$ only from a single store state.
Now, $e \in \text{TAS}(c)$ if and only if that single store state is
reachable through the effects of the system.
Such store value reachability problems are undecidable.
}
\end{theorem}
\begin{example}
\label{ex:locks:ta}
In the joint bank account example, let us work out the transitive accord
set for the guard of \texttt{withdrawJ}, $c = \mathtt{LE} \land \mathtt{App?}$.
Recall that the state is expressed as a tuple $(s: \mathtt{Int}, b_1:
\mathtt{Bool}, b_2: \mathtt{Bool})$, and that $\mathtt{withdrawJ}
:= Q\;\mathtt{LE}\land\mathtt{App?}\triangleright(\cdots)$, where
$\denote{\mathtt{LE}} := s(s_r) \leq s(s_g)$ and $\denote{\mathtt{App?}}
:= b_2(s_r) = b_2(s_g)$. We begin by deciding the immediate accord set of
$c$, $\text{IAS}_D(c)$:
\begin{itemize}
\item $\denote{\mathtt{Request}}$ only changes $b_1$, which is not used in
either of \texttt{withdrawJ}'s guards. Therefore the effect is in
$\text{IAS}_D(c).$
\item $\denote{\mathtt{Approve}}$ and $\denote{\mathtt{Reset}}$ can both
change $b_2$, violating \texttt{App?}, so neither is in $\text{IAS}_D(c)$.
\item $\denote{\mathtt{Add}\ n}$ only increases $s$, satisfying
$\mathtt{LE}$ and $\mathtt{App?}$ (trivially), so it is in
$\text{IAS}_D(c).$
\item $\denote{\mathtt{Sub}\ n}$ and $\denote{\mathtt{Set}\ n}$ can both
decrease $s$, violating \texttt{LE}, so neither is in $\text{IAS}_D(c).$
\end{itemize}
Therefore, the immediate accord set of $\mathtt{LE} \land \mathtt{App?}$
contains $\mathtt{Request}$ and $\mathtt{Add}\ n$. Now let's see which of
these two is also in the transitive accord set. Notice that the presence of
an additional $\mathtt{Add}\ n$ can never decrease $s$, even when combined
with other effects, nor can it change $b_2.$ This shows that $\mathtt{TA}_D(
\mathtt{Add}\ n, c).$ $\mathtt{Request}$ is more complicated, since it
toggles $b_1$, which sets $b_2$ when combined with $\mathtt{Approve}.$
Consider an abstract execution consisting of an $\mathtt{Approve}$ followed
by a $\mathtt{Withdraw}\ 10.$ Because there is no request, $b_1 = \bot$, so the
$\mathtt{Approve}$ will keep $b_2 = \bot.$ This results in the
$\mathtt{Withdraw}\ 10$ acting as a $\mathtt{NoOp}.$ Now suppose we produce a
new execution using the same events preceded by a $\mathtt{Request}.$ This
time $b_1 = \top$, which could lead to the withdrawal being executed.
Therefore, the only effect with a transitive accord with $\mathtt{LE} \land
\mathtt{App?}$ is $\mathtt{Add}.$
\end{example}
\subsection{Inferring Minimal Locking Conditions}
\label{sec:locks-minimal}
\paragraph{Consistency Invariants}
A \emph{consistency invariant} in a CARD $D$ is a $D$-guard $c$ for
which, given any pair of $D$-states $(s_g,s_r)$ and $D$-effect $e$,
$c(s_g,s_r) \Rightarrow c(\denote{e}(s_g),\denote{e}(s_r))$.
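This condition, too, is directly checkable for concrete guards and effects. For instance (our sketch, by enumeration over a small store fragment), both $\mathtt{LE}$ and the identity relation are consistency invariants for the \texttt{Counter} effects, since $\mathtt{Add}$ and $\mathtt{Sub}$ shift both stores by the same amount.

```python
# c is a consistency invariant iff c(s_g, s_r) implies
# c(<<e>>(s_g), <<e>>(s_r)) for every effect e; checked by enumeration.

def consistency_invariant(c, effects, domain=range(-15, 16)):
    return all(c(e(s_g), e(s_r))
               for e in effects
               for s_g in domain for s_r in domain if c(s_g, s_r))

LE = lambda s_g, s_r: s_g >= s_r
EQ = lambda s_g, s_r: s_g == s_r
effects = [lambda s: s + 5, lambda s: s - 3]   # Add 5, Sub 3
```

A one-sided predicate such as $s_g \ge 0$ is not a consistency invariant here, since $\mathtt{Sub}$ can falsify it.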
\begin{theorem}[CINV + IA = TA]
For a CARD $D=(S,E,C)$, if a $c:C$ is a consistency invariant in $D$
and an effect $e:E$ is in immediate accord with $c$, then $e$ is
also in transitive accord with $c$.
%
\proof{
%
Suppose we have a careful, well-formed $D$-execution $L$
containing event $\eta$ and active guard $g \in \mathsf{grd}(\eta)$ for
which $\mathsf{gc}(g)$ is a consistency invariant.
%
As $L$ is well-formed, $g$ is satisfied, meaning that
$\mathsf{gc}(g)(\eval(L_{\eta}),\eval(L_g))$ holds.
We now take a new event $\eta_m$ for which
$\text{IA}(\mathsf{gc}(g),\eff(\eta_m))$ holds and create a new execution
$M=(W \cup \{\eta_m\},G,\mathsf{grd},\mathsf{ar} \cup \{(\eta_m,
\eta)\},\vis)$.
%
Showing that $g$ is also satisfied in $M$ is proof that
$\text{TA}(\mathsf{gc}(g),\eff(\eta_m))$ holds.
%
We show this by inductively evaluating $M_{\eta}$ and $L_g$
alongside each other and noting that at each step, the post-states
of the two sub-executions satisfy $g$'s consistency guard.
%
This will give us that $\mathsf{gc}(g)(\eval(M_{\eta}),\eval(L_{g}))$,
showing that $g$ is satisfied in $M$.
At the base case, $\mathsf{gc}(g)(s_0,s_0)$ holds by definition of
consistency guards (they are always implied by equality).
%
For our inductive step, we examine an event $\eta'$ which is in
some combination of the executions $L_{\eta}$, $M_{\eta}$, and
$L_{g}$, with $\mathsf{gc}(g)(s_M,s_{L_g})$ as our inductive hypothesis:
%
\begin{description}
\item[Case $\eta' \in L_{\eta} \cap M_{\eta} \cap L_{g}$:] The
fact that $\mathsf{gc}(g)$ is a consistency invariant gives us\\
$\mathsf{gc}(g)(\denote{\eff(\eta')}(s_M),\denote{\eff(\eta')}(s_{L_g}))$.
\item[Case
$\eta' \in L_{\eta} \cap M_{\eta} \land \eta' \notin L_{g}$:]
Because $L$ is careful and $\eta'$ is not in $L_{g}$, we must
have $\text{IA}(\mathsf{gc}(g),\eff(\eta'))$. This gives us
$\mathsf{gc}(g)(\denote{\eff(\eta')}(s_M),s_{L_g})$.
\item[Case
  $\eta' \in M_{\eta} \land \eta' \notin L_{\eta} \cup L_{g}$:]
This can only be our new event $\eta_m$ for which we have
$\text{IA}(\mathsf{gc}(g),\eff(\eta_m))$ by assumption. This gives us
that $\mathsf{gc}(g)(\denote{\eff(\eta')}(s_M),s_{L_g})$.
\end{description}
%
Thus we have $\text{TA}(\mathsf{gc}(g),\eff(\eta_m))$ because $g$ remains
satisfied when $\eta_m$ is added to $L$. $\Box$
}
\end{theorem}
Consistency invariants for CARDs play the role equivalent to standard
inductive loop invariants in sequential program verification --- they
are a strengthening of the required property that is preserved by
operations. We show that every consistency invariant that implies a
given $c$ defines a transitive accord set for $c$.
\begin{theorem}
\label{thm:ci-tas}
Let $D$ be a CARD and $c$ and $c'$ be $D$-guards. If $c'$ is a
consistency invariant and $c' \Rightarrow c$, then
$\text{IAS}_D(c')$ is a transitive accord set for $c$.
\end{theorem}
Note that the identity relation itself ($=$) is always a
consistency invariant, similar to how $\bot$ is always a loop
invariant in the sequential setting. However, this consistency
invariant leads to a transitive accord set that rejects all state
mutating effects in the CARD\xspace. The challenge is to identify the
consistency invariant that leads to the most complete transitive
accord set.
In spite of Theorem~\ref{thm:undecidability}, we present a simple
semi-procedure that computes a reasonable transitive accord set
in practice through consistency invariants. First, let the
\emph{weakest consistency precondition} of a guard $c$ and effect
$e$, $\texttt{WCP}\xspace(e,c)$, be the weakest guard such that
$(s_g,s_r) \models \texttt{WCP}\xspace(e,c)$ implies that
$(\denote{e}(s_g),\denote{e}(s_r)) \models c$. Now, we decide
transitive accords with:
\[
\text{TAS}_D(c) := \mathbf{let} \; c' = \bigwedge_{e:E} \texttt{WCP}\xspace(e,c) \;
\mathbf{in} \;
\mathbf{if} \; c \Rightarrow c' \;
\mathbf{then} \; \text{IAS}_D(c) \;
\mathbf{else} \; \text{TAS}_D(c \land c')
\]
The following theorem states the soundness of the above procedure.
\begin{theorem}
\label{thm:tas-soundness}
Given a CARD $D$ and a $D$-guard $c$, the
procedure $\text{TAS}_D(c)$ returns a transitive accord set for
$c$.
%
\proof{
%
The proof follows from the following:
\begin{itemize}
\item The guard argument at recursive call $i$ (which we will
call $c_i$) is a strengthening of $c$.
\item If, at recursive call $i$, the condition
$c_i \Rightarrow c'$ holds, then $c_i$ is a consistency
invariant in $D$ because $\forall e:E.\; c_i \Rightarrow
\texttt{WCP}\xspace(e,c_i)$.
\item Therefore, because $c_i \Rightarrow c$ and $c_i$ is a
consistency invariant, then the returned $\text{IAS}_D(c_i)$ is
a transitive accord set for $c$ by Theorem~\ref{thm:ci-tas}.
\end{itemize}
%
}
\end{theorem}
The procedure $\text{TAS}$ computes the greatest fixed-point $c'$ of
the condition
$c' \implies c \;\land\; \forall s_g, s_r.\; ((s_g, s_r) \models c'
\implies \bigwedge_{e:E} (\denote{e}(s_g), \denote{e}(s_r)) \models
c')$ as a consistency invariant and uses it to decide transitive
accords.
However, any fixed-point of the equation is sufficient, and any
technique used in standard sequential program reasoning can be applied
to compute this fixed-point (e.g., widening from abstract
interpretation, logical interpolant computation, etc).
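On a finite store fragment the procedure can be run exactly, representing guards extensionally as sets of $(s_g,s_r)$ pairs. The sketch below is our model, not the implementation: the saturating \texttt{Counter} effects (clamped with \texttt{min}/\texttt{max}) are an assumption that keeps stores inside the finite domain, so $\texttt{WCP}$ and the implication check become exact set operations.

```python
# Finite-domain sketch of the TAS semi-procedure.  Guards are sets of
# (s_g, s_r) pairs over a small, saturating Counter domain; saturation
# (min/max clamping) is a modeling assumption that keeps effects within
# the domain.

N = 10
DOM = range(N + 1)
EFFECTS = {"Add": lambda s: min(s + 1, N),    # saturating Add 1
           "Sub": lambda s: max(s - 1, 0)}    # saturating Sub 1

def guard_set(pred):
    return frozenset((sg, sr) for sg in DOM for sr in DOM if pred(sg, sr))

TOP = guard_set(lambda *_: True)

def wcp(e, c):
    # weakest consistency precondition of effect e for guard c
    return frozenset(p for p in TOP if (e(p[0]), e(p[1])) in c)

def ias(c):
    # effects in immediate accord with c
    return {name for name, e in EFFECTS.items()
            if all((e(sg), sr) in c for sg, sr in c)}

def tas(c):
    c2 = TOP
    for e in EFFECTS.values():
        c2 &= wcp(e, c)
    if c <= c2:              # c => c': c is a consistency invariant
        return ias(c)
    return tas(c & c2)       # strengthen the guard and retry
```

On this model $\mathtt{LE}$ is already a consistency invariant, so the procedure terminates on its first iteration with the accord set $\{\mathtt{Add}\}$, while $\mathtt{EQ}$ yields the empty set.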
\subsection*{Checking a Dependent Operation Type}
Because guards reduce the concurrent problem of operation correctness
to a sequential one, we can use standard sequential reasoning tools to
verify operation behavior.
In particular, we extend the type inference rules of Liquid
Types~\cite{liquidtypes} to cover $\lambda^Q$'s unique terms.
Operations are then type-checked with respect to a specification on
the behavior of the event they produce.
For example, the specification we check for the \texttt{withdraw}
operation states formally the behavior we described earlier:
\[
\mathtt{withdraw} \; : \; (n:\mathtt{Nat}) \to
\mathtt{Op}(\mathtt{Counter},\mathtt{Int},(s \ge 0 \Rightarrow s'
\ge 0) \land (a = s - s'))
\]
This operation type states that \texttt{withdraw}, given a natural
number amount, is an operation over \texttt{Counter} returning an
integer and meeting two refinement conditions:
\begin{inparaenum}
\item the bank account's non-negative invariant is preserved and
\item the return value ($a$) reflects exactly the amount that is
removed from the account.
\end{inparaenum}
The $\mathbf{a}$, $\mathbf{s}$, and $\mathbf{s'}$ in the specification
are special free variables used to refer to the return value, the
store value before applying the operation's effect, and the store
value after the operation completes.
Our typing rules will reduce this to a Liquid Type which must be
checked.
The argument $n$'s type \texttt{Nat} is itself an example of a Liquid
Type which we will use in the derivation.
\[
\denote{n: \mathtt{Nat}} = \denote{n:\{\nu : \mathtt{Int} \;|\; \nu
\ge 0\}} = n \ge 0
\]
We now check the operation type against our \texttt{withdraw}
definition.
The correctness of \texttt{withdraw} depends on the store value
guarantee it demands via the \texttt{LE} query guard, and the
\textsc{type\_q} typing rule adds that guarantee to the context.
\ottusedrule{\ottdruletypeXXq{}}
Thus typing the outer term
$Q \; \mathtt{LE} \triangleright x. \; \mathtt{if}\ldots$ adds
$x:\{\nu : \mathtt{Int} \;|\; \nu \le s\}$ which states that the value
bound to $x$ is less than or equal to the pre-effect store value.
\[
\denote{x:\{\nu : \mathtt{Int} \;|\; \nu \le s\}} = x \le s
\]
Following the positive branch of the
$\mathtt{if}\;(x \ge n)\;\mathtt{then}\{\ldots\}\mathtt{else}\{\ldots\}$
further adds $x \ge n$ to the context.
We arrive at the final constraint-solving problem by applying the rule
\ottusedrule{\ottdruletypeXXr{}}
\noindent to the $R.\; (\mathtt{Sub}\;n,n)$ base term that gives the effect and
return value that a successful \texttt{withdraw} produces.
Following the \textsc{type\_r} rule, we need to show
\[
\Gamma \vdash n : \{\nu : \text{Int} \;|\; s'=((\lambda s.\; s -
n)(s)) \Rightarrow (s \ge 0 \Rightarrow s' \ge 0) \land (a = s -
s')\}
\]
to finish checking the positive \texttt{then} branch, which becomes
the simple constraint problem
\[
(n \ge 0) \land (x \le s) \land (x \ge n) \land (s' = s - n)
\Rightarrow (s \ge 0
\Rightarrow s' \ge 0) \land (a = s -
s')
\]
when $\denote{\Gamma}$ is unpacked according to the Liquid Type rules.
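As an informal sanity check (not part of the paper's toolchain, which discharges such constraints with a Liquid Types solver), the \texttt{then}-branch implication can be tested by brute force over a small grid of integers:

```python
from itertools import product

def vc_then_branch(n, x, s):
    """Verification condition of withdraw's 'then' branch: the context
    (n >= 0, x <= s, x >= n) together with the effect s' = s - n and
    return value a = n must imply (s >= 0 => s' >= 0) and (a = s - s')."""
    if not (n >= 0 and x <= s and x >= n):
        return True  # context unsatisfied: implication holds vacuously
    s_post, a = s - n, n
    return (s < 0 or s_post >= 0) and (a == s - s_post)

# Exhaustive check over a small integer grid.
rng = range(-5, 6)
assert all(vc_then_branch(n, x, s) for n, x, s in product(rng, rng, rng))
print("then-branch constraint holds on all sampled values")
```

Of course, such sampling can only falsify a constraint; the type checker proves the implication for all integers.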
The trivial \texttt{else} branch check is clearly satisfied by the
fact that its effect does nothing.
\[
\denote{\Gamma} \land (s' = s) \Rightarrow (s \ge 0 \Rightarrow s'
\ge 0) \land (a = s - s')
\]
\subsection*{CARDs with Non-Commutable Effects}
Many replicated data reasoning models and implementations require all
effects on the replicated store to be commutable in order to simplify
the way histories are merged.
In the interest of generality, CARDs do allow non-commuting store effects,
and our reasoning and implementation techniques are equipped to handle
them efficiently.
To demonstrate this flexibility and build more intuition, let us
examine some example applications; more can be found in
Section~\ref{sec:evaluation}.
Figure~\ref{fig:noncom-effects}d illustrates how non-commutative effects
(here, $+5$ and $\times1.2$) can lead to replicas diverging,
violating strong eventual consistency.
\subsubsection*{Bank Account with Interest and Non-commuting Effects}
\label{sec:bank-with-interest}
An obvious challenge of non-commutable effects is maintaining SEC.
Our approach, following~\cite{Burckhardt:2012}, is to use an {\em
arbitration order}: a total order on events that a replica uses to
evaluate the current value.
The key is that the arbitration order must be chosen and maintained
consistently across replicas.
Such an order can be maintained using a standard combination of Lamport
clocks and replica identifiers and by inserting newly received updates
appropriately in history instead of appending them.
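Such an order can be sketched in a few lines of Python (the event representation is illustrative, not the paper's implementation):

```python
import bisect

def arbitration_key(event):
    """Total arbitration order: Lamport clock first, replica id as a
    tie-breaker, so every replica computes the same order."""
    return (event["clock"], event["replica"])

def insert_event(history, event):
    """Insert a newly received update at its arbitration position
    (rather than appending), keeping the history sorted."""
    keys = [arbitration_key(e) for e in history]
    history.insert(bisect.bisect(keys, arbitration_key(event)), event)

history = []
insert_event(history, {"clock": 2, "replica": "r1", "op": "Approve"})
insert_event(history, {"clock": 1, "replica": "r2", "op": "Request"})
print([e["op"] for e in history])  # ['Request', 'Approve']
```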
We now extend our example to show that even with non-commuting
effects, strong eventual consistency can be achieved without blocking.
Consider our bank account over an extended CARD \texttt{Counter'} with
new effect $\denote{\mathtt{Interest}}:= \lambda s. s * 1.2$, and
suppose we write a new operation \texttt{safeBalance} which returns a
value that is definitely not less than the account's actual value.
\[
\mathtt{safeBalance} : \mathtt{Op}(\mathtt{Counter'},\mathbb{Z},s' =
s \land a \le s)
\]
The order of the {\tt Sub} and {\tt Interest} events matters, i.e., the
effects do not commute.
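A quick Python check (using exact rationals to sidestep floating-point rounding; the encoding is ours) makes the non-commutation concrete:

```python
from fractions import Fraction

sub = lambda n: (lambda s: s - n)        # [[Sub n]]    = \s. s - n
interest = lambda s: s * Fraction(6, 5)  # [[Interest]] = \s. s * 1.2

s = Fraction(100)
print(interest(sub(10)(s)))  # Sub then Interest: (100 - 10) * 1.2 = 108
print(sub(10)(interest(s)))  # Interest then Sub: 100 * 1.2 - 10 = 110
```

Because the two orders yield different balances, replicas must agree on a single arbitration order for these events.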
Most approaches~\cite{CRDTs,RedBlue} would declare these two
operations in conflict; the operations would thus be either disallowed
(CRDTs) or declared strongly consistent (RedBlue).
Furthermore, if effects are reordered at replicas, maintaining
guarantees about the relationship between the return value and the
global state becomes hard --- so using an operation that reads this
shifting state might require coordination.
However, the guard of {\tt safeBalance} allows us to infer that its
requirement does not conflict with either {\tt deposit} or {\tt
interest}, so all three operations can be executed in parallel.
Because the desired behavior of \texttt{safeBalance} was verified
entirely based on its query guard, we can be sure that its behavior
survives effect reorderings.
Thus we achieve efficiency, even while ensuring application
properties, by depending on the arbitration order rather than
coordination to maintain SEC even with non-commutable effects.
\subsubsection*{Joint Bank Account and Chained Conflicts}
We have explained how using the arbitration order allows achieving SEC.
The downside is that due to non-commuting effects, detecting conflicts is in
general more difficult than it was for our first bank account example. There
may exist effects which cannot violate a guard, but instead can change the
behavior of a non-commuting effect that does have the ability to violate a
guard.
To demonstrate, we extend the example to a bank account that is
jointly owned by two users, in which a user must first request a
withdrawal (via {\tt request}) and wait for someone else to approve it
(via \texttt{approve}) before actually performing it.
We use a $(\mathtt{Counter},\mathtt{Bool},\mathtt{Bool})$ tuple as the
store, which supports the effects and guards of the $\mathtt{Counter}$
as well as effects and guard
\begin{align*}
\vspace{-4ex}
\denote{\mathtt{Request}}&:= \lambda (s,b_1,b_2).(s,\top,b_2) & \denote{\mathtt{App?}}&:= b_2(s_g) = b_2(s_r) \\
\denote{\mathtt{Approve}}&:= \lambda (s,b_1,b_2).(s,b_1,b_1) \\
\denote{\mathtt{Reset}}&:= \lambda (s,b_1,b_2).(s,\bot,\bot)
\vspace{-4ex}
\end{align*}
in which \texttt{App?} guarantees that the second boolean seen has the
same value as the second boolean on the global store.
In this case, a user must first request a withdrawal (via {\tt Request})
and wait for someone else to approve it (via \texttt{Approve}) in order
for the withdrawal to have an effect.
\begin{align*}
\mathtt{withdrawJ}:= Q\;\mathtt{LE}\land\mathtt{App?}
\triangleright (s,b_1,b_2).(&\mathtt{if}\;(s \ge n
\land b_2)\;\\ &\mathtt{then}\;R.(\mathtt{Sub}\;n
\circ \mathtt{Reset},n)\;\\ &\mathtt{else}\; R.(\mathtt{NoOp},0))
\end{align*}
The operation {\tt withdrawJ} is guarded by $\mathtt{App?}$ to be sure
that the actual withdrawal of funds happens only if it was approved.
The operation {\tt withdrawJ} must not be concurrent with itself (as
before), but it is now also in conflict with anything that emits {\tt
Approve}, as {\tt Approve} can invalidate $\mathtt{App?}$.
Now note that {\tt Approve} and {\tt Request} are non-commuting: the
behavior of {\tt Approve} is changed by a {\tt Request} existing
before it.
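Transcribing the effect definitions above into Python (an illustrative encoding, not the paper's implementation) shows this directly:

```python
# Store is a (counter, requested, approved) triple, i.e., (s, b1, b2).
request = lambda st: (st[0], True, st[2])   # [[Request]]
approve = lambda st: (st[0], st[1], st[1])  # [[Approve]] copies b1 into b2
reset   = lambda st: (st[0], False, False)  # [[Reset]]

init = (100, False, False)
print(approve(request(init)))  # Request before Approve: (100, True, True)
print(request(approve(init)))  # Approve before Request: (100, True, False)
```

With a prior \texttt{Request}, \texttt{Approve} sets the approved flag; without one, it does nothing.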
Consider a situation (illustrated in Fig.~\ref{fig:chained-conflicts}) where
replica $r_1$ emits {\tt Approve} and then runs {\tt withdrawJ}, while
concurrently, replica $r_2$ emits {\tt Request}.
Let us assume that the arbitration order will eventually put the {\tt
Request} before the effect of {\tt Approve}.
Then an execution can look as follows: replica $r_1$ sees an {\tt Approve}
(which does not set the approved flag $b_2$ to {\tt true}, as there is no
request pending) and then $r_1$ executes a {\tt withdrawJ} while
guaranteeing that there are no concurrent $\mathtt{Sub}\;n$ or {\tt
Approve} effects.
However, when the {\tt Request} from replica $r_2$ is received by
$r_1$, and the arbitration causes this effect to be ordered before the
\texttt{Approve}, then suddenly the behavior of the \texttt{Approve}
changes: it sets the second boolean to true.
Note that at the time of execution of {\tt withdrawJ}, the guard
\texttt{App?} would hold; however, the arrival of the {\tt Request}
and the consequent re-evaluation of \texttt{Approve} would retroactively
invalidate the guard.
Thus \texttt{App?} must be in conflict not just with {\tt Approve},
but also with {\tt Request}, since the latter changes the behavior of
{\tt Approve}, potentially causing a violation. We provide an algorithm
that finds such chained conflicts in Section~\ref{sec:locks-minimal}.
\begin{figure}
\input{fig-chained-conflicts}
\caption{Chained conflicts causing a problem. {\tt withdrawJ} saw an {\tt Approve}, but that {\tt Approve} had no effect, since it did not see a {\tt Request}. So {\tt withdrawJ} failed and reported $0$ to the client. However, a {\tt Request} was later arbitrated before the {\tt Approve}, changing its effect and making the execution of {\tt withdrawJ} invalid.}
\label{fig:chained-conflicts}
\end{figure}
\section{Introduction}
\label{sec:intro}
\input{popl2019-intro}
\section{Writing and Verifying CARD Operations}
\label{sec:overview}
\input{popl2019-overview}
\newpage
\section{Conflict-Aware Replicated Datatypes}
\label{sec:cards}
\input{popl2019-cards}
\newpage
\section{Language and Type System for CARD Operations}
\label{sec:lang}
\input{popl2019-lang}
\newpage
\section{Inferring Conflict Avoidance Requirements}
\label{sec:locks}
\input{popl2019-locks}
\section{Implementing a Replica Network}
\label{sec:replica}
\input{popl2019-replica}
\section{Conflict Detection Evaluation}
\label{sec:evaluation}
\input{popl2019-eval}
\section{Related Work}
\label{sec:relwork}
We described how our work builds on CRDTs (Shapiro et al.~\cite{CRDTs} provide a comprehensive overview).
Several frameworks allow both conflict-free, and conflicting
operations~\cite{Gotsman:2016,Balegas:2015,Li:2014,ECDS,RedBlue,bayou}, offering different trade-offs between consistency
and availability. Such mixed-consistency systems are typically built upon key-value databases
that offer tunable transaction isolation~\cite{Bailis:2013,Terry:2013,Lakshman:2010}.
Our work is closest to that of~\cite{Gotsman:2016}, which also focuses
on reasoning about data types with such conflicting operations. The
approach of~\cite{Gotsman:2016} allows the programmer to specify, for
every pair of operations, whether there is a conflict, using a
token-based system. In contrast, our consistency guards are specified
for each operation separately, which allows the developer to reason
only about the operation they are currently writing.
Note that while our consistency guards (replica state - global state relations)
are related to the guarantee relations (replica state - replica state relations)
of~\cite{Gotsman:2016}, the most important difference is how these are used.
\cite{Gotsman:2016} use the guarantee relations only in the proof of correctness
of a program (as a manual step). The programmer cannot write these guarantees;
they can only declare conflicts explicitly between each pair of operations. In
contrast, our language lets the programmer specify the guards directly,
leading to modular specifications, from which conflicts can be algorithmically
inferred.
The second closest work is that of \cite{Balegas:2015}, which introduces
explicit consistency, in which concurrent executions are restricted using an
application invariant. The two most important technical differences are: first, our
consistency guards are significantly more expressive than invariants. The
consistency guards relate the global state to the local state, whereas
invariants talk only about one state. That means that in the framework of
Balegas et al., one cannot specify a property such as ``if {\tt getBalance}
returns a value $v$, then the account balance is at least $v$'' (see the bank
account with interest in Section~\ref{sec:bank-with-interest}). Second, our
consistency predicates allow finding conflicts by checking conditions on
sequential programs. In contrast, the application invariants of Balegas et
al.\ require checking conditions on concurrent programs, a significantly harder
task.
A related approach~\cite{ECDS,RedBlue} allows manual selection
of consistency levels for operations.
Quelea~\cite{ECDS} allows specifying contracts (ordering constraints) on
effects. In contrast, our system hides the concept of effect ordering in
history, and allows modular conflict specification.
CARDs can use such systems as a backend,
automatically generating the contracts via the conflict inference technique.
The homeostasis protocol~\cite{homeostasis} addresses conflicts between
operations by allowing bounded inconsistencies as long as other forms of
correctness are preserved. It may be possible to fruitfully combine consistency
guards with relaxed consistency notions. We leave this for future work.
Bayou~\cite{bayou} is an early system for detecting and managing conflicts. The
conflicts are detected (translated to our terminology) by re-running a check on
every replica where an effect is propagated to see if the data has been updated
in parallel. This approach to conflict detection is very different from our
consistency guards (which are predicates that link a global and a local state).
The axiomatic specification which we used to define CARDs\xspace is based on the
model presented in~\cite{Burckhardt:2014,Attiya:2016}. We built on the model to
define consistency guard compliance, as well as type checking soundness.
The tension between consistency and availability in distributed systems is captured
by the CAP theorem~\cite{B00,GL12} --- we aim to preserve eventual
consistency, while maximizing availability.
\section{Conclusion}
\label{sec:coclusion}
We present CARDs, a new extension of CRDTs that allows conflicting operations.
The key idea is to develop a language that gives programmers the ability to
specify consistency guards establishing what a CARD operation expects from its
distributed environment. This enables modular and sequential reasoning about
CARD operations.
This paper opens several possible directions for future work. Among these, we
plan to pursue extending our language to allow composition of CARDs, as well as
transactions with multiple emits. We also plan to work on quantitative relaxations of our invariant requirements.
Furthermore, we will investigate systems aspects of our approach,
empirically comparing different implementations of our conflict
avoidance algorithm.
\newpage
\ifacm
\bibliographystyle{ACM-Reference-Format}
\else
\bibliographystyle{plain}
\fi
\section{Background}
DataSite extends the literature on exploratory visual analysis and visualization recommendation to better aid analysts with data exploration in a proactive manner.
Here we discuss the state-of-the-art research and inspirations for DataSite.
\subsection{Exploratory Visual Analysis}
Exploratory data analysis (EDA)~\cite{keim2006challenges, tukey1977exploratory} is the canonical user scenario for visualization.
The key characteristic for EDA is that the analyst is not initially familiar with the dataset, and may also be unclear about the goals of the exploration.
The exploratory process involves browsing the data to get an overall understanding, deriving questions from the data, and finally looking for answers.
Efficient data exploration often relies on visual interfaces~\cite{tukey1977exploratory}.
\textit{Dynamic queries}~\cite{shneiderman1994dynamic} is an interaction technique for such interfaces, where users formulate visual queries as a combination of filters.
Writ large, \textit{faceted browsing} allows for creating queries on specific dimensions of the data~\cite{yee2003faceted}.
\subsection{Visual Specification}
Specifying visual representations is one of the key challenges in visualization.
Research efforts here span the spectrum from programming languages to point-and-click interfaces.
Visualization toolkits such as D3~\cite{bostock2011d3} represent one side of this spectrum, giving unprecedented control over the visualization, but at the cost of significant programming expertise and development time.
High-level visual grammars, such as Grammar of Graphics~\cite{wilkinson2006grammar}, ggplot2~\cite{wickham2016ggplot2}, and Vega-Lite~\cite{Satyanarayan2016_vega}, abstract away implementation details, but may still have a high barrier of entry and steep learning curve due to the need for visual design knowledge.
A recent development in visual specification has been the introduction of interactive visual design environments such as Lyra~\cite{Satyanarayan2014}, iVoLVER~\cite{Mendez2016}, and iVisDesigner~\cite{Ren2014}.
Even more recent tools include Data Illustrator~\cite{liu2018dataillustrator} and DataInk~\cite{Xia2018}, both of which use direct manipulation to allow designers to bind visual features to data.
Common among them is that they
require no programming, and are thus positioned at the very other end of the spectrum from visualization toolkits and grammars.
However, interactive view specification can be clumsy and inefficient at times, and may still be plagued by challenges due to the intricacies of visualization design.
Shelf-based visualization environments such as Polaris~\cite{stolte2002polaris}, Tableau~\cite{tableau}, and PoleStar~\cite{polestar} fall in the middle of the spectrum.
Common among these is their ability to allow the user to drag and drop data dimensions, metadata, and measures to specific ``shelves,'' each one representing a visual channel such as axis, shape, scale, color, etc.
This point-and-click approach to visual specification is flexible enough to construct a wide range of visualizations, yet not so complex as to become technical in the way tools such as Lyra, iVoLVER, and iVisDesigner can.
In DataSite, we employ a variant of a shelf-based visualization environment for this very reason.
\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{overall_structure}
\caption{The structure and workflow of the DataSite visual analysis system.
The manual and proactive (DataSite) visualization workflows share common procedures in the middle.
Components within the red rectangle are the key parts of DataSite: proactive computational modules that can run through various data fields, visualization, and natural language descriptions.
These together offer suggested features in the feed.}
\label{fig:overall_structure}
\end{figure*}
\subsection{Visualization Recommendation}
The idea behind visualization recommendation is to use recommendation engines~\cite{herlocker2004collaborafilter} to suggest relevant views to the user, thus reducing the cognitive load.
While this idea has seen a resurgence in the visualization community in recent years, it is by no means a new idea.
Mackinlay~\cite{mackinlay1986automating} first proposed automatic visualization design based on input data in 1986.
His work combines expressiveness and effectiveness criteria inspired by Bertin~\cite{bertin1983semiology} and Cleveland et al.~\cite{cleveland1984graphical} to recommend suitable visualizations.
Tableau's Show Me system~\cite{mackinlay2007show} provided a practical and commercial implementation of these ideas.
Many similar approaches to automatic visual specification exist.
Sage~\cite{roth1994interactive} extends Mackinlay's work to enhance user-directed design by completing and retrieving partial specifications based on their appearance and data contents.
The rank-by-feature framework~\cite{seo2005rank} sorts scatterplot, boxplots, and histograms in a hierarchical clustering explorer to understand and find important features in multidimensional datasets.
SeeDB~\cite{seeDB2014} generates a wide range of visualizations and determines which ones would be interesting based on deviation and scale.
The work of Perry et al.~\cite{perry2013vizdeck} and Van den Elzen et al.~\cite{van2013small} attacks the problem by generating multiple visualizations shown as small thumbnails.
Recommendation engines have been used to great effect for visualization in the last few years.
Voyager~\cite{Wongsuphasawat2016} generates a large number of visualizations and organizes them by relevance on a large, scrolling canvas.
Visualization by demonstration~\cite{Saket2017} lets the user demonstrate incremental changes to a visualization, and then gives recommendations on transformations.
Zenvisage~\cite{siddiqui2016effortless} automatically identifies and recommends interesting visualizations to the user depending on what they are looking for.
Recently, Voyager 2~\cite{Wongsuphasawat2017} has built on Voyager to support wildcards in the specification and provide additional partial view suggestions. ``Top-K insights''~\cite{tang2017extracting} provides theory for generating insights, which is the main motivation of our paper.
All of these ideas were formative in our work on DataSite, but our approach takes this a step further by focusing on continuous computation from a library of automatic algorithms, with findings propagated to the user in a dynamically updating feed.
\subsection{Proactive Computation alongside Visualization}
The idea of proactive visual analytics discussed in our paper builds on the idea of opportunistically running computations in anticipation of user needs, as observed in Novias~\cite{novais2012proactive}, TreeVersity~\cite{treeversity}, and Analyza~\cite{googleexplore} (\textit{Explore} in Google Sheets).
Novias identifies visual elements of evolving features and provides multiple views in an interactive environment.
TreeVersity provides a list of outliers in textual form, which identifies changes in the data automatically.
The most similar research to DataSite is Analyza, which provides auto-computed features in natural language.
In contrast, DataSite aims to push proactive computation toward depth and complexity, rather than just simple overall statistics about the dataset.
Furthermore, DataSite pushes features to a feed view that is akin to social media feeds users are already accustomed to.
\section{Conclusion and Future Work}
\label{sec:conclusion}
We have presented DataSite, a visual analytics system that integrates automatic computation with manual visualization exploration.
DataSite introduces the feed, a list of dynamically updated notifications arising from a server-side computation engine that continually runs suitable analyses on the dataset.
The feed stimulates the analyst's sensemaking through brief descriptions of computational modules along with corresponding charts.
Filters and a text search bar enable quick scanning and fast data exploration.
Two controlled user studies evaluated the approach against PoleStar and Voyager 2, respectively, and showed significant performance improvements over the manual view specification tool (PoleStar) in both breadth and depth of data coverage, as well as useful guidance during exploration.
DataSite also provided more meaningful charts and features to analysts than Voyager 2, while maintaining similar ease of use.
The results are promising and indicate that the system promotes data analysis in all stages of exploration.
DataSite can be seen as a canonical visual analytics system in that it blends automatic computations with manual visual exploration, thus establishing a true partnership between the analyst and the computer.
We regard it as the first step towards a fully proactive visualization system involving a human in the loop.
Of course, many improvements can be made towards a more efficient system; after all, while CPU resources are cheap, they are not free.
One potential future research topic is guiding recommendations based on the analyst's interest, past interactions, and even their personality.
For example, consider a DataSite-like system that would respond to an analyst drilling deep into a part of a sales dataset over time to proactively compute future sales projections for that part of the data in an effort to anticipate future questions the analyst may have.
Other ideas may include mining the analyst's click stream, browsing and analysis history, and even social media profiles to determine how to best guide the proactive computation.
Finally, we could also use interaction to dynamically update the ranking of features in the feed, e.g., prioritize features for data fields selected by the user.
\section*{Acknowledgments}
This work was partially supported by the U.S.\ National Science Foundation award IIS-1539534.
Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of the funding agency.
\bibliographystyle{SageH}
\section{Design Rationale: Proactive Analytics}
\label{sec:design}
The core philosophy for proactive analytics is that human thinking is expensive, whereas computational resources are (generally) cheap.
Following this philosophy, a proactive approach to visual analytics should automatically run computations in the background and present its features to the analyst in an endeavor to reduce the analyst's cognitive effort during the sensemaking process.
In essence, the solution is to use brute force computational power of the computer to help balance out the equation between the human analyst and the computer tool.
This leverages the respective strengths of each partner while complementing their weaknesses:
\begin{itemize}
\item{Human analyst:} The human operator analyzing data.
\begin{itemize}[nosep]
\item\textit{Strength:} creativity, experience, deduction, domain knowledge.
\item\textit{Weakness:} limited short-term memory, computational power, lack of analysis expertise, and limited perception.
\end{itemize}
\item{Computer analytics tool:} The tool facilitating analysis.
\begin{itemize}[nosep]
\item\textit{Strength:} significant memory and computational power; large library of analytical algorithms.
\item\textit{Weakness:} no creativity, intuition, or deduction.
Lack of domain knowledge.
\end{itemize}
\end{itemize}
Based on these ideas and the related work (see the previous section), we derive the following design guidelines for proactive visual analytics tools (we give examples of illustrative systems for each guideline):
\begin{itemize}
\item[D1]\textbf{Offload computation from analyst to machine.}
The analytical tool should be designed so as to offload as much as possible of the analysis from the user.
Given our core philosophy, this means that the tool should never be idle waiting for the user to act.
Instead, it should always be running tasks in the background, and start another task as soon as one finishes.
\begin{itemize}
\item\textit{Example:} The Voyager~\cite{Wongsuphasawat2016} and Voyager 2~\cite{Wongsuphasawat2017} systems pre-emptively perform computation on features of the current dataset to provide new views to the user.
\end{itemize}
\item[D2]\textbf{Present automated features incrementally with minimal interruption to the analyst.}
Automatic features derived by the background computational processes must be propagated to the user, but the presentation of these features should be designed so as not to interrupt the user's cognitive processes needlessly.
These features should be accumulated in a feed where they can be easily surveyed and viewed at the user's own initiative rather than in a blocking manner that requires action.
\begin{itemize}
\item\textit{Example:} The InsightsFeed tool~\cite{BadamProgressive} progressively runs calculations in the background and updates the displays as new results come in.
\end{itemize}
\item[D3]\textbf{Reduce the knowledge barrier of human thinking.}
Data analytics is a nascent discipline with rapidly evolving methods, many requiring the data to support specific assumptions or exhibit certain properties, so it is often difficult even for expert-level analysts to stay abreast of current practice~\cite{Batch2018}.
This is another situation where timely proactive support can save analyst effort by investing CPU time: the tool can simply run every conceivable analytical method from a large library of methods (ordered by perceived utility) and only present interesting trends.
\begin{itemize}
\item\textit{Example:} Tang et al.~\cite{tang2017extracting} propose a tool that automatically calculates the top-$k$ insights from a multidimensional dataset based on an importance function used to score different findings.
\end{itemize}
\item[D4]\textbf{Eliminate ``cold-start'' through exposing potentially relevant features of the data early during exploration.}
A challenge related to the knowledge barrier is the so-called ``cold-start problem''; the fact that, when beginning analysis on a new dataset, it can be challenging to know how to get started because the data can be overwhelming and difficult to get a handle on.
Again, this can be mitigated by not choosing but simply performing \textbf{all} applicable analyses from a library of such methods.
\begin{itemize}
\item\textit{Example:} Schein et al.~\cite{coldstartrecommend} define the cold start problem for recommender systems and propose a method for deriving recommendation scores for new items based on similarities to existing items.
\end{itemize}
\end{itemize}
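To make these guidelines concrete, the following Python sketch (module names, scores, and data are invented for illustration; DataSite's actual engine is more elaborate) shows a proactive engine that exhaustively runs a small library of analyses over every column and accumulates the results in a ranked feed:

```python
import statistics

# Hypothetical library of analysis modules, each mapping a numeric
# column to a (score, description) feature; scores are hard-coded
# here purely for illustration.
MODULES = {
    "mean":     lambda xs: (0.2, f"average value is {statistics.mean(xs):.1f}"),
    "spread":   lambda xs: (0.5, f"std dev is {statistics.stdev(xs):.1f}"),
    "outliers": lambda xs: (0.9, f"maximum {max(xs)} is far above the median"),
}

def proactive_feed(columns):
    """Run every module on every column (D1, D3) and accumulate the
    results as a feed sorted by score (D2, D4)."""
    feed = []
    for col_name, values in columns.items():
        for mod_name, module in MODULES.items():
            score, text = module(values)
            feed.append((score, f"[{mod_name}] {col_name}: {text}"))
    return [text for score, text in sorted(feed, reverse=True)]

feed = proactive_feed({"mpg": [18, 21, 24, 70]})
print(feed[0])  # the highest-scoring feature surfaces first
```

A real engine would run these modules in background processes and push new entries to the feed incrementally, rather than in one batch.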
\section{Discussion}
\label{discussion}
Our results show that the feed interface in DataSite expedites the process of data exploration both in breadth and depth: participant preferences in both open-ended and focused exploration favored our tool.
Below we explain these results in depth, and then discuss some of the limitations of our work.
\subsection{Explaining the Results}
Compared with the study results in Voyager 2, DataSite has a comparable unique field set coverage.
The reason why DataSite does not improve the coverage significantly is that Voyager 2 shows all the charts by default, while DataSite only shows charts on demand when participants click on the descriptions.
In other words, DataSite requires participants to actively examine the charts in the feed, rather than merely browse them as in Voyager 2's \textit{Related Views}.
Most participants preferred DataSite for data exploration, and rated the feed as very useful for aiding data analysis and for providing trends and guidance in creating meaningful visualizations.
It is worth noting that DataSite also yielded higher ratings in focused exploration.
While DataSite is not designed primarily for targeted exploration, the study reveals a potential benefit there as well.
This also motivates us to consider how a targeted data analysis system should be adapted, and what evaluations could be conducted for that purpose.
One observation from our evaluation studies is that simple statistics (average, range, variance, etc.) did not interest participants much.
A comprehensive evaluation of which features are most interesting to analysts is needed.
Salient features lower the barrier for bootstrapping exploration; however, presenting too many features may distract the user, so the number shown must be balanced carefully.
While Voyager 2 also provides efficient visualization recommendations, results from our evaluation indicate that participants felt that the feed was more targeted and worth analyzing.
Three participants noted that while they were going through Voyager's related views, they sometimes forgot what they had seen using manual view specifications.
We speculate that DataSite's explicit labeling of features with textual descriptions facilitates more targeted analysis.
It is worth noting that DataSite exhaustively applies computations to all the possible data fields (and combinations).
While this enhances data coverage, not all modules and corresponding charts represent a clear insight.
For example, categorical attributes such as ``name'' may have thousands of entries, and it is very difficult to find salient trends via such a chart.
While DataSite ranks features by their significance, a more precise saliency measure is needed.
The challenge is how to measure the usefulness of analytical features from a human perspective, and how to unify such metrics across various types of computations.
This requires comprehensively measuring the usefulness of each visualization.
This is further complicated by the fact that different analysts may have different perspectives, or the same analyst may have different perspectives depending on the question in the study.
For the automobile dataset, buyers may wish to see which car is more economical and safer (higher fuel mileage and fewer accident records), while sellers may be interested in popularity (higher profits and a larger number of sales).
These contexts should also be considered for personalizing features.
Automatic guided tooltips, suggested by one participant, would be one way to achieve this.
\subsection{Limitations}
Our goal with DataSite is to take computational guidance to its logical extreme, building on the current trend of recommendation engines for visualization.
However, this kind of automatic analysis approach is fraught with challenges, including eroding the analyst's independent thought process (as discussed by Wongsuphasawat et al.~\cite{Wongsuphasawat2017}), automating key decisions that would benefit from analyst insight, and even HARKing~\cite{Kerr1998} (hypothesizing after results are known) and p-hacking~\cite{Selvin1966} (extensively mining a dataset in search of significant findings).
We do not claim that DataSite's mixed-initiative method is optimal for balancing the analytic burden between analyst and computer, only that it is one instance in the design space that shows promise.
However, while DataSite automates some of the analytical process, it does not aim to replace the analyst.
Data analysis is best performed with an analyst in the loop, and DataSite ensures the analyst is always in control.
From our evaluation, participants derived on average 8 insights from manually created views, 11 from the feed, and 7 from feed views with modifications.
This shows that participants generated comparable numbers of insights from the feed (automatic) and from manually created views.
Another valid point of criticism is that computational power is not always cheap; some algorithms are simply not tractable to run exhaustively on an entire dataset.
This means that DataSite's scheduling algorithm requires fine tuning; pure brute force, as somewhat provocatively stated earlier in this paper, is not a universal solution.
Our current implementation can scale up to tens of thousands of entries in the dataset, which is comparable to many existing visualization tools~\cite{Wongsuphasawat2017, Yalcin2017}.
In particular, our evaluations involve datasets with 10,000 (bird strikes) and 3,000 (movies) items.
Still, while there is potential for a more scalable system, it is beyond the scope of this paper.
\subsection{Top-Down vs.\ Bottom-Up Data Analysis}
One of the strengths of visualization is its data-driven, bottom-up, and self-informing nature: as Tukey notes~\cite{tukey1977exploratory}, the type of \textit{exploratory data analysis} so powerfully supported by visualization allows for deriving hypotheses and insights from datasets that are not previously known or well-explored.
This same focus on hypothesis generation permeates much research on visual exploration, including in particular Keim's seminal work~\cite{Keim2001}, which quotes visualization as ``especially useful when little is known about the data and the exploration goals are vague.''
Put simply, visualization allows you to ask (and often answer) questions you didn't know you had.
This is also the strength of a visualization recommendation engine such as Voyager 2, where the philosophy can be expressed as generating as many pertinent charts as possible in the hope of informing the user.
It is also diametrically opposite to the top-down approach afforded by the server-side computation engine used in DataSite, where a suite of pre-defined computational modules are used to extract potentially significant features from a dataset and bring them to the user's attention.
The significant difference between this and traditional confirmatory data analysis methods, including statistical packages such as R, SAS, and JMP, is that DataSite eliminates the need for both (a) forming hypotheses, and (b) testing them using the correct methods.
It does this in the most straightforward way possible: by relying on sheer brute force to test \textbf{all} the hypotheses through state-of-the-art computational modules designed by the DataSite developers.
However, by definition, such a suite of modules is limited by the actual modules provided, which makes this approach less flexible to unknown datasets.
A bottom-up visualization-centric approach, on the other hand, will rely on the human user to detect incidental features in the dataset.
This means that DataSite trades some of the flexibility of more open-ended visual exploration tools for the benefit of reducing the knowledge and hypothesis generation barriers of such tools.
It is important to keep this trade-off in mind when contrasting top-down vs.\ bottom-up data analysis tools, such as those compared in this paper.
The ultimate purpose of this paper is to explore this trade-off in more detail, not to attempt to demonstrate the superiority of one approach or the other.
No approach is likely to be superior, and, in fact, their combination will likely reap the most rewards.
For example, our DataSite implementation also includes manual view specification to enable the user to independently visualize the data, and also uses charts even when reporting on features found.
This is done in an effort to stimulate the type of serendipitous, bottom-up sensemaking that visualization scaffolds.
We believe such a combined effort is the most promising way to proceed in this domain.
\section{Evaluation Overview}
\label{sec:evaluation_overview}
DataSite creates a new method for visual exploration through a mixture of manual and automated visualization specifications driven by proactive computations.
For this reason, we are interested in understanding whether the exploratory analysis with DataSite supports bootstrap understanding and broad coverage of the data.
We are also curious about knowing how/why the feed helps, and how it changes the analyst's approach in finding features.
To answer these questions, we conducted two user studies: (1) comparing with a manual visualization specification tool, PoleStar, focusing on data field coverage; and (2) comparing with a visualization recommendation system, Voyager 2~\cite{Wongsuphasawat2017}, focusing on data exploration to compare the effects of adding a \textit{Feed} (in DataSite) versus \textit{Related Views} (in Voyager 2).
In other words, Study 1 aims to understand the fundamental utility of the feed view itself, while Study 2 expands this to understanding DataSite's proactive analytics workflow compared to a recent visual recommendation system.
\subsection{Dataset}
To enable comparisons of our results with PoleStar and Voyager 2, we reused the same datasets for our studies.
One is a collection of films (``movies'') containing 3,178 records and 15 data fields, including 7 categorical, 1 temporal, and 8 quantitative attributes.
The other dataset contains records of FAA wildlife airplane strikes (``birdstrikes''), with 10,000 records and 14 data fields: 9 categorical, 1 temporal, and 4 quantitative attributes.
These two datasets have similar complexity (w.r.t.\ number of attributes), and are easy to understand.
\subsection{Study Design and Procedure}
In both user studies, we used $2$ tools with $2$ datasets (one dataset on each tool interface).
Participants in both studies started with an assigned tool and dataset, and then moved to the second interface.
To deal with learning effects, we counterbalanced the order of tools and datasets---half of our subjects used PoleStar/Voyager 2 first and the other half used DataSite first (similarly with the dataset).
Each participant began a session by completing a short demographic survey.
However, we did not screen participants based on the demographic information provided.
The participant was then introduced to the first assigned interface.
The participant was first shown the interface and given a tutorial on how to use the tool, using the classic automobile dataset for training purposes.
For DataSite, they were also shown the feed view and its associated operations.
The participant was then allowed to practice with the interface using the automobile dataset, and was encouraged to ask questions about the dataset and tools until they indicated that they were ready to proceed.
The experimenter then briefly introduced the participant to the experimental dataset and asked him/her to explore the dataset ``as much as possible'' (open-ended) within a given time of 20 minutes for each system.
They were asked to think aloud, verbalizing their reasoning process and insights.
We did not give the participants specific questions to answer during the session, as this could bias their exploration and limit their focus to specific subsets rather than the whole dataset.
After completing a session with the first tool, the participants repeated the same procedure for the second tool and dataset.
After completing the tasks for both tools, they were asked to complete a questionnaire with Likert-scale ratings on the efficiency and usefulness of each tool as well as the participant's rationale for their ratings.
Participants were also encouraged to verbalize their motivations and comments on each tool.
Each session, comprising both tools and the exit survey, lasted around 60 minutes in total.
All sessions were held in a laboratory setting on a university campus.
Both tools ran in the Google Chrome web browser on a Windows 10 laptop with a $14$-inch display.
The experimenter observed each session and took notes.
Participants' interactions with the tools were logged to files, including application events.
The audio of the session was also recorded for further analysis.
\subsection{Implementation}
\label{sec:implementation}
DataSite is based on a client/server architecture.
The client side is developed using AngularJS,\footnote{\url{https://angularjs.org/}} a JavaScript-based web application framework.
The visualization functionality in the DataSite client is based on the PoleStar interface (available as open source)~\cite{polestar}, which is built on top of Vega-Lite~\cite{Satyanarayan2016_vega}.
We implemented the computational engine using Node.js,\footnote{\url{https://nodejs.org/}} a non-blocking server-side JavaScript framework.
Datasets of interest can be uploaded by the user on the client interface, and sent to the server.
The server processes them using the engine and proactively sends the finished features to the feed view.
This structure enables managing a wide array of input data formats, and scales to large datasets.
In essence, the server does all the heavy lifting: loading data, maintaining the connections to clients, executing computational modules, and updating features.
\section{The DataSite System}
\label{system}
DataSite consists of (1) a user interface for proactive visual analytics containing components for visualization authoring along with a dynamically updated feed view, and (2) a proactive computation engine continuously running background modules on a target dataset.
The user interface runs in a client on a modern web browser and consists of a manual visualization view coupled with a \textit{feed view}.
The client interface is designed for an analyst to use when manually analyzing data in their web browser.
The computation engine, on the other hand, runs in a server process, thereby offloading computation (D1).
The feed view accumulates features as status updates (D2), each consisting of a title, an icon, a detailed textual description, and a representative interactive visualization.
Working in concert, the feed view reduces the knowledge barrier (D3) by continuously displaying trends from the proactive computation engine.
The feed also provides a starting point, eliminating the cold start problem (D4).
\subsection{Client-Side: Visualization Interface}
The DataSite interface comprises a data schema panel, an encoding panel, a manual chart specification view, and a feed view (Figure~\ref{fig:teaser}).
The data schema, encoding, and chart specification views together compose a basic shelf-based visualization system for exploring the data.
The main visualization view is shown in the center of the screen, with the data schema and encoding on the left.
This interface design is consistent with typical exploratory visualization tools, such as Tableau, QlikView, and Spotfire.
Augmenting this design, the dynamically updated feed view is the key interface-level contribution of DataSite.
The feed accumulates features generated by the server-side computation engine.
To give ample space for the analyst's navigation through the interface components, the feed is placed on the right side of the screen to complement the manual specification view.
The data and encoding panel can be hidden to free up additional space.
\begin{figure}
\centering
\includegraphics[width=0.80\linewidth]{component_example}
\caption{Example of features in the feed: a brief textual description (``Correlation metric between Miles per Gallon and Displacement attributes in a Cars dataset.'') with a corresponding auto-generated chart (scatterplot for these two specific attributes).
A red line that shows the computed correlation trend between two attributes is also shown.}
\label{fig:component_example}
\end{figure}
The feed view is inspired by social media feeds, where events pinned by participants appear in a dynamically updating list in chronological order.
A \textit{data feature} in the feed is a notification from a computation engine.
Once a feature has been computed by a server-side analysis component, it will be dynamically added to the feed.
The feed view can be searched and filtered; sorted by the computational measure, the time it was produced, or in alphabetical order; and grouped by type.
Each feature is initially represented as a short title and an icon explaining the underlying computation task.
Users can expand a feature to see a detailed text description as well as an associated chart for the data attributes processed by the underlying computation (Figure~\ref{fig:component_example}), and then collapse it when needed.
When the user manually selects or drags-and-drops data attributes in the encoding panel, the feed is reordered, with computational categories that contain the selected data attributes moved to the top.
Each update item in the feed consists of the following components (expanded on demand):
\begin{itemize}
\item\textbf{Title:} Each update has a compact title that gives a brief idea of the contents of the feature or insight.
This title and the thumbnail are the only things shown when the update is collapsed, thus taking a minimum of display space.
For example, the Pearson correlation generates titles such as ``$\rho = 0.5$ for Weight and MPG.''
\item\textbf{Icon/thumbnail:} A small iconic representation that gives a visual indication of the contents.
For computations that generate charts, this could be a miniature thumbnail of the chart.
\item\textbf{Textual description:} A description of the feature presented on the feed view in a proactive manner.
For example, for the Pearson correlation coefficients~\cite{correlationPearson} between \textit{Weights in lbs} and \textit{Miles per Gallon} in the cars dataset~\cite{carsDataset1981Henderson}, the textual description is: ``Correlation of $0.5$ was found between attributes \textit{Weights in lbs} and \textit{Miles per Gallon}.''
This active description gives the analyst the sense that the computer is their collaborator in helping them explore the data.
To avoid overloading the feed with an excessive number of features, we combine related trends and illustrate them with a single chart (e.g., min/max are combined, described as a range, and shown on a bar chart, see Fig.~\ref{fig:chart_formats}).
\item\textbf{Charts:} Manual view specification gives full control to the analyst, but may cause high cognitive load.
To avoid this, DataSite shows the most effective encodings for each chart corresponding to tasks from a computational module, following existing metrics~\cite{bertin1983semiology, cleveland1984graphical, mackinlay1986automating}.
Charts are lazily rendered when clicked, thus reducing the page load significantly.
For instance, with two categorical attributes, DataSite renders a heatmap (Figure~\ref{fig:heatmap}) with the intersecting frequency counts marked in color.
Similar to the approach in previous research~\cite{badam2016timefork, Wongsuphasawat2016}, charts can be moved to the main view panel by clicking a \textit{specify the chart} icon on the top right.
Furthermore, charts highlight aspects of the underlying computation as visual cues: for example, charts generated from the clustering computation will highlight the clusters within the chart.
\end{itemize}
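To make the structure of an update concrete, the following JavaScript sketch assembles the components listed above for a correlation feature; the function name, fields, and chart-spec shape are illustrative assumptions (loosely following Vega-Lite), not DataSite's actual schema:

```javascript
// Hypothetical sketch of the data behind a single feed update, matching
// the components listed above: title, icon, description, and chart.
function correlationUpdate(attrA, attrB, rho) {
  const r = rho.toFixed(1);
  return {
    title: `\u03C1 = ${r} for ${attrA} and ${attrB}`, // compact title
    icon: 'correlation-thumbnail.png',                // iconic preview
    description:                                      // proactive text
      `Correlation of ${r} was found between attributes ` +
      `${attrA} and ${attrB}.`,
    chart: {                                          // Vega-Lite-style spec,
      mark: 'point',                                  // rendered lazily on click
      encoding: {
        x: { field: attrA, type: 'quantitative' },
        y: { field: attrB, type: 'quantitative' },
      },
    },
  };
}

const u = correlationUpdate('Weight', 'MPG', 0.5);
console.log(u.title); // "ρ = 0.5 for Weight and MPG"
```

Keeping the chart as a declarative spec (rather than a rendered image) is what allows lazy rendering and moving the chart into the manual view for further editing.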
In addition to automatic updates, analysts can pin views from the manual chart visualization window, saving that view as an update in the feed.
The feed view keeps track of these user-generated updates as a separate category.
This is akin to bookmarking charts; in the future, we plan to make the feed a collaborative space where either humans or the computer can pin features to allow sharing of findings.
\begin{figure}[htb]
\centering
\includegraphics[width=0.3\linewidth]{histogram_bar}
\includegraphics[width=0.3\linewidth]{aggregation_bar}
\includegraphics[width=0.3\linewidth]{scatterplot_cluster}
\caption{Chart types for different computational modules used in DataSite.
From left: histogram bar (mean/variance), histogram line
(min/max), and scatterplot (clusters in 2D).}
\label{fig:chart_formats}
\end{figure}
\subsection{Server-Side: Computation Engine}
\label{computation_modules}
The server-side DataSite computation engine begins analyzing a dataset as soon as it is uploaded.
The engine consists of multiple computational modules (easily extended as plugins); Table~\ref{tab:algorithms} shows a sample.
A single module can yield several tasks; for example, a simple Pearson correlation module would create a task for each combination of numerical attributes, but not for categorical attributes.
A scheduler analyzes the data and runs computations in a specific order; see the next section for details on scheduling analysis.
The computation engine is multi-threaded using a computational thread pool, executing each computation in the scheduled order.
For each finished task, the computational module will generate a status update that will be pushed to the visualization interface.
As soon as a computational thread is freed up, the scheduler will recycle the thread for a new task.
In this way, the engine is never blocked by complex, long-running tasks.
Furthermore, each computation module executes independently, so a single module failure does not affect the overall system.
For example, if one module fails executing due to errors or invalid data, it will not return results, but other modules can still execute without interruption.
By virtue of this modular architecture, DataSite can be easily extended with new computation modules.
The current implementation provides statistical analysis, $K$-means clustering (3, 5, 7 clusters), density-based clustering (DBSCAN with various parameters), linear regression, and polynomial regression modules.
Figure~\ref{fig:chart_formats} shows sample charts created in the feed view for some computation modules.
Again, computational modules can be added easily; the goal of the framework is to provide as many modules as possible, so that the computation engine is always running and recommending insights to the user.
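As an illustration of this modular design, the following sketch shows what a simple correlation module might look like, assuming a plugin interface with \texttt{enumerateTasks} and \texttt{run} methods; these names and the schema format are our illustration, not the actual DataSite plugin API:

```javascript
// Standard Pearson correlation over two equal-length numeric arrays.
function pearson(xs, ys) {
  const mean = (a) => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(xs), my = mean(ys);
  let num = 0, dx = 0, dy = 0;
  for (let i = 0; i < xs.length; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += (xs[i] - mx) ** 2;
    dy += (ys[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}

// Hypothetical plugin: one task per pair of numerical attributes,
// skipping categorical attributes, as described in the text.
const pearsonModule = {
  name: 'correlation',
  enumerateTasks(schema) {
    const nums = schema.filter((f) => f.type === 'numerical').map((f) => f.name);
    const tasks = [];
    for (let i = 0; i < nums.length; i++)
      for (let j = i + 1; j < nums.length; j++)
        tasks.push({ fields: [nums[i], nums[j]] });
    return tasks;
  },
  run(task, data) {
    const [a, b] = task.fields;
    const rho = pearson(data.map((d) => d[a]), data.map((d) => d[b]));
    // Each finished task yields one status update for the feed.
    return { title: `\u03C1 = ${rho.toFixed(2)} for ${a} and ${b}` };
  },
};
```

A module written against such an interface can be dropped into the engine without touching the scheduler or the feed, which is what makes single-module failures isolable.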
\begin{figure}[h]
\centering
\includegraphics[width=0.80\linewidth]{heatmap}
\caption{Representative chart (heatmap) automatically generated for co-occurrence frequency counts of two categorical data fields (origin country and number of cylinders) in a Cars dataset.
Darker color indicates more counts in that category combination; in this example, V8 cars from the USA.}
\label{fig:heatmap}
\end{figure}
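As a sketch of the computation behind such a heatmap, the following JavaScript counts co-occurrence frequencies for two categorical fields; the field names and tiny dataset are illustrative only:

```javascript
// Co-occurrence frequency counts for two categorical fields: each cell
// of the heatmap corresponds to one (valueA, valueB) key in the map.
function coOccurrence(data, fieldA, fieldB) {
  const counts = new Map();
  for (const row of data) {
    const key = `${row[fieldA]}|${row[fieldB]}`;
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  return counts;
}

const cars = [
  { origin: 'USA', cylinders: 8 },
  { origin: 'USA', cylinders: 8 },
  { origin: 'Japan', cylinders: 4 },
];
const counts = coOccurrence(cars, 'origin', 'cylinders');
console.log(counts.get('USA|8')); // 2
```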
\subsection{Scheduling Automatic Analysis}
The scheduler is a core component in the computation engine.
It passes the dataset through its entire library of loaded computational modules, receiving an estimate of the computational complexity and relevance from each module based on the meta-data---number of attributes, types, and dataset size.
Furthermore, the scheduler also encodes typical analytical practice by focusing on main effects and trends in the dataset first, and then turning to specific combinations of dimensions of the data.
All of the metrics are then used by the scheduler to determine which modules to run, and in which order to run them.
It may also reschedule jobs in response to results returned from another module; for example, to run post-hoc analysis in response to a significant result from an analysis of variance test.
In addition, the scheduler may choose to launch long-lasting analyses---such as multidimensional scaling or cluster analysis---early, knowing that these results will take a while to return.
DataSite currently utilizes asynchronous multi-threaded operations for all the existing computation modules mentioned above.
The system starts executing all the algorithms asynchronously when initially receiving the dataset.
It then waits for results to come back, updating the feed in response.
In the future, we anticipate letting the user guide the computation order, either more implicitly, or explicitly (by providing interactions to steer the computation).
This would enable customizing the DataSite scheduler to the analytical practice of a specific user while retaining the overall hybrid model.
However, such implicit or explicit computational steering of proactive analysis is outside the scope of our current work.
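The ordering heuristics above can be sketched as follows; the cost threshold, task fields, and example numbers are assumptions for illustration, not DataSite's actual scheduling policy:

```javascript
// Illustrative scheduling policy: rank tasks by estimated relevance,
// but launch long-running analyses early, knowing their results
// will take a while to return.
function scheduleOrder(tasks, longRunningCost = 100) {
  const long = tasks.filter((t) => t.cost >= longRunningCost);
  const short = tasks.filter((t) => t.cost < longRunningCost);
  const byRelevance = (a, b) => b.relevance - a.relevance;
  return [...long.sort(byRelevance), ...short.sort(byRelevance)];
}

const order = scheduleOrder([
  { name: 'mean/variance', cost: 1, relevance: 0.9 },
  { name: 'clustering', cost: 500, relevance: 0.6 },
  { name: 'correlation', cost: 5, relevance: 0.8 },
]);
console.log(order.map((t) => t.name));
// ['clustering', 'mean/variance', 'correlation']
```

Rescheduling in response to returned results (e.g., post-hoc tests after a significant omnibus result) would amount to inserting new tasks into this queue at run time.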
\begin{table*}
\centering
\begin{tabular}{rrclp{7cm}}
\toprule[0.5pt]
\textbf{Modules} & \textbf{Data Formats} & \textbf{\#Attr.} & \textbf{Chart} &
\textbf{Description}\\
\toprule[0.5pt]
Mean/variance & numerical & 1 & hist. (Fig.~\ref{fig:chart_formats}) & Attribute $A$ has mean of $X$ with variance of $Y$. \\
Min/max (range) & numerical & 1 & hist. line & Range (min, max) was found in attribute $A$. \\
Freq. counts & categorical & 1 & aggr. (Fig.~\ref{fig:chart_formats}) & $X$ was the most/least frequent sub-category in $A$. \\
Freq. comb. & categorical & 2 & heatmap (Fig.~\ref{fig:heatmap}) & Most frequent combination was found between $X$ in attribute $A$, and $Y$ in attribute $B$. \\
Correlation & numerical & 2 & scatterplot & Correlation of $A$ was found between $X$ and $Y$. \\
$K$-means & numerical & 2 & scatterplot (Fig.~\ref{fig:chart_formats}) & $K$-means with $N$ clusters between $X$ and $Y$ has average error $E$.\\
DBSCAN & numerical & 2 & scatterplot & DBSCAN between $X$ and $Y$ with minPts = $p$ estimated $K$ clusters. \\
Linear Regression & numerical & 2 & regression line & Linear regression between $X$ and $Y$ has estimate error of $E$. \\
Poly. Regression & numerical & 2 & regression line & Polynomial regression between $X$ and $Y$ has estimate error of $E$. \\
\toprule[0.5pt]
\end{tabular}
\caption{Example computational modules with corresponding data and chart types.
We have currently used algorithms working with one or two data attributes
in our computation engine.
Brief textual descriptions for each module are also listed.}
\label{tab:algorithms}
\end{table*}
\section{Introduction}
\label{sec:introduction}
Data exploration using visual analytics~\cite{Thomas2005} is often characterized as a dialogue between analyst and computer, with each conversational partner providing unique and complementary capabilities~\cite{badam2016timefork}: the analyst provides creativity, experience, and insight, whereas the computer provides algorithms, computation, and storage.
In practice, however, most current visual analytics systems put the analyst in the driver's seat to guide the analysis.
This one-sided arrangement falls short when the analyst does not know how to best transform or visualize the data, or is simply overwhelmed due to the sheer scale of the dataset or the limited time available for analysis.
A balanced dialogue would share control between the two conversational partners---analyst and computer---in a way that leverages their respective strengths.
Such a \textit{proactive approach} to data analysis would automatically select and execute appropriate computations to inform the analyst's sensemaking process.
In this paper, we present \textsc{DataSite}, a proactive visual analytics system where the user analyzes and visualizes the data while a computation engine simultaneously selects and executes appropriate automatic analyses on the data in the background.
By continuously running all conceivable computations on all combinations of data dimensions, ranked in order of perceived utility for the specific data, DataSite uses brute force to relieve the analyst of the burden of having to know all these analyses.
Any potentially interesting trends and insights unearthed by the computation engine are propagated as status notifications on a \textit{feed view}, similar to posts on a social media feed such as Twitter or Facebook.
We designed this feed view to support different stages of exploration.
Status updates are continuously and dynamically added to the feed as they become available during the exploration.
To provide a quick overview, they are presented with a brief description that can be sorted, filtered, and queried.
To get more details on an individual response without committing to the active path of exploration, we allow the analyst to expand an update to see details in natural language as well as an interactive thumbnail of a representative visualization.
Finally, the user can select an update to bring it to the manual specification panel, allowing for manual exploration.
Our web-based implementation of DataSite consists of a web client interface for multidimensional data exploration as well as a server-side computational engine with a plugin system, allowing new components to be integrated.
The client interface is a shelf-based visualization design environment similar to Tableau (based on Polestar~\cite{polestar}).
The server-side computational engine currently includes common analysis components such as clustering, regression, correlation, dimension reduction, and inferential statistics, but can be further expanded depending on the type of data being loaded into DataSite.
Each computational plugin implements a standardized interface for enumerating and ranking supported algorithms, running an analysis, and returning one or several status updates to the feed view.
Computational tasks are run in a multithreaded, non-blocking fashion on the server, and use rudimentary scheduling based on their perceived utility for the dataset.
While our proactive analytics approach and DataSite prototype are novel, they are part of a greater trend on the use of recommendation engines for visualization (e.g.,~\cite{Saket2017, Wongsuphasawat2017, Wongsuphasawat2016}).
However, additional empirical evaluation is still needed to understand how (a) mixed-initiative and proactive analytics compares to traditional exploratory analysis, as well as (b) specific approaches to this idea compare to each other.
Towards this end,
we present results from two user studies involving exploratory analysis of unknown data, one that compared DataSite to a Tableau-like visualization system (PoleStar~\cite{polestar}), and one that compared it to a partial-specification visualization recommendation system (Voyager 2~\cite{Wongsuphasawat2017}).
Using DataSite's feed, our participants derived richer, more complex, and subjectively insightful findings compared to when using PoleStar, or even Voyager 2's recommendation feed.
This supports our hypothesis that a true proactive analytics platform such as DataSite can improve coverage and increase complexity of insights compared to reactive or partial-specification approaches.
Beyond the DataSite system, our approach can be applied to other exploratory analysis tools to promote richer exploratory analysis, even for non-experts, analysts pressed for time, or analysts unfamiliar with a dataset before exploration.
\section{User Study 1: Comparison with PoleStar}
\label{sec:study1}
In this study, we compare DataSite with a Tableau-style visual analysis tool (PoleStar).
As described earlier, this study was motivated by a fundamental question: what happens when you incorporate a feed view into a conventional visualization tool?
We therefore studied the data field coverage during open-ended visual exploration influenced by the Feed in DataSite against Polestar (a baseline interface without the Feed view).
Note that apart from the Feed view, the DataSite interface resembles the PoleStar interface.
Our hypotheses were: (1) DataSite would have higher data field coverage and more charts viewed, (2) DataSite would allow exploration of complex charts with multiple encodings (capturing multiple attributes), and support faster understanding of the data.
\subsection{Participants}
We recruited 16 paid participants (7 female, 9 male) from the general student population at our university.
Participants were 18 to 35 years of age, with some prior data analysis and visualization experience.
All of them had experience with data analysis and visualization tools: All (16) had used Excel, 10 had used Tableau, 7 Python/matplotlib, 7 R/ggplot, and 3 had used other analytics tools.
No participant had previously seen or analyzed the datasets used in our study.
They had not heard of or used DataSite or PoleStar, though some found PoleStar to be similar to Tableau.
\subsection{Results and Observations}
As mentioned in the evaluation overview, participants' interaction logs and notes taken by the experimenter were collected during the study.
We used the linear mixed-effects model~\cite{barr2013random, green2013efficacy} for our analysis of the collected data.
We modeled the participants and datasets as random effects with intercept terms (per-dataset and per-participant bias), and regarded different tools and the order of tool usage as fixed effects.
This setting accounts for the variance of tools and datasets with individual subject's performance during the study.
We used likelihood-ratio tests to compare the full model with other models to evaluate the significance of differences.
Overall, correctness in collected insights was high across both conditions (PoleStar and DataSite), with no reliable difference.
For this reason, we chose to disregard further analysis of this effect.
To assess the broad coverage of data fields, we consider the number of unique data field sets.
While users may have been exposed to a large number of visualization charts, the unique field sets shown and interacted with are a conservative and reasonable measure of overall dataset coverage.
Based on this, there is a significant improvement of data attribute coverage with DataSite (30\% increase compared to PoleStar: $\chi^2(1) = 19.26$, $p < 0.005$).
Participants interacted with more charts, both from the feed as well as by modifying encodings from the charts present within the feed.
This confirms the first hypothesis.
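As a sketch of how this coverage measure could be computed from interaction logs, the following snippet counts unique data field sets, ignoring encoding order; the log format shown is hypothetical:

```javascript
// Count unique data field sets across a log of viewed charts.
// Sorting the fields makes {Weight, MPG} and {MPG, Weight} the same set.
function uniqueFieldSets(chartLog) {
  const sets = new Set();
  for (const chart of chartLog) {
    sets.add([...chart.fields].sort().join('|'));
  }
  return sets.size;
}

const log = [
  { fields: ['MPG', 'Weight'] },
  { fields: ['Weight', 'MPG'] }, // same set, different order
  { fields: ['MPG'] },
];
console.log(uniqueFieldSets(log)); // 2
```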
There are more multi-attribute charts (encoding two or more data attributes) that participants viewed and interacted with using DataSite than PoleStar ($\chi^2(1) = 10.31$, $p < 0.005$).
This is expected since DataSite provides pre-computed features, while participants had to manually create all visualization charts themselves in PoleStar.
75\% of the participants saw at least 50\% more data fields in DataSite.
Participants also found twice the number of charts using DataSite that are informative and worth ``speaking out'' ($\chi^2(1) = 7.82$, $p < 0.005$).
Ten participants created more than three advanced charts with the help of the feed (and ``spoke out'' about them): they started with charts from the feed and added more data fields as encodings.
This suggests that DataSite, through its Feed view, leads users to view more charts that are beneficial from their perspective.
It also indicates that DataSite encourages the user to reach complex (multi-attribute) charts during visual exploration, confirming our second hypothesis.
Participants showed great interest in the features within the feed view.
Most of them spent at least 25\% of their time exploring the feed itself.
All participants felt that the feed is useful for analysis and provides guidance of ``where to look'' in the data.
They rated DataSite higher than PoleStar in terms of efficiency (Likert scale, 1 to 5, mean: 4.67 vs 3.40) and comprehensiveness (mean: 4.20 vs.\ 3.21).
All participants rated the usefulness of the feed 3 or higher.
\section{User Study 2: Comparison with Voyager 2}
\label{sec:study2}
The results from the first study were promising and answered our fundamental questions about the utility of the DataSite feed view.
In Study 2, we compared DataSite with Voyager 2, a modern visualization recommendation system.
The goal was to observe differences and further understand the utility of the feed in DataSite compared to the Related Views and \textit{wildcards} in Voyager 2. Our hypotheses are: (1) DataSite will provide comparable if not more data field coverage owing to its rigorous computation engine; and (2) DataSite will better guide the user's exploration towards faster and comprehensive understanding in the given time.
\subsection{Participants}
We recruited 12 participants (8 female) from our university.
All had similar demographics (between 18 and 35 years of age) and data analysis experience as before: all 12 had used Excel, 8 Tableau, 6 Python/matplotlib, and 1 R/ggplot.
None had heard of DataSite or Voyager 2, or seen the datasets involved.
\subsection{Results: Quantitative}
We used the same linear mixed-effects model for statistical analysis in Study 2 as in Study 1.
As for Study 1, participant insights were collectively accurate independent of condition (DataSite and Voyager 2); thus, we chose not to analyze this aspect further.
\subsubsection{Data Field Coverage}
We first looked into the participants' performance separately for both datasets (movies and birdstrikes), and compared the effects of visualization tools.
We consider the number of unique field sets that were shown to users and that users examined, respectively (similar to the previous study).
In Figure~\ref{fig:boxplot}, we see that for movies and birdstrikes datasets, the number of unique field sets that users interacted with (hovered mouse for more than three seconds) is similar: DataSite has 5 and 4 more unique field sets respectively in the birdstrike dataset (median: 30 in DataSite vs.\ 25 in Voyager 2) and movies dataset (median: 31 in DataSite vs.\ 27 in Voyager 2).
Overall, DataSite promotes slightly more data field coverage in total (mean: 30 and 26), mainly because the feed contains an exhaustive list of features across computational modules.
\begin{figure}
\centering
\includegraphics[width=0.70\linewidth]{uniquefieldset}
\caption{Box plot showing the distributions of unique fields that users interacted with per tool and dataset.
DataSite has slightly larger number of unique field sets in both cases.}
\label{fig:boxplot}
\end{figure}
In regard to the number of unique field sets that have been shown (the user may look through the charts without interaction) to the users, DataSite users (mean 43, s.d.~19.7) were shown fewer charts than Voyager 2 (mean 54, s.d.~13.5).
The reason may be that Voyager 2 shows charts by default, while DataSite needs user interaction to expand the features in the feed to see the charts.
As for the number of charts that participants spoke out aloud during the study, the tools have a significant difference ($\chi^2(1) = 7.34$, $p < 0.05$): DataSite (mean 14.53, s.d.~2.04) gave participants 30\% more charts to ``speak out'' about, compared to Voyager 2 (mean 11.63, s.d.~2.32).
In other words, participants found more charts to be informative and worth talking about using DataSite.
Among all the ``speak out'' charts, an average of 35\% are directly from the feed.
Other ``speak out'' charts in DataSite are either moved from the feed to the main view and then edited, or manually created.
This indicates that the feed view contributes to more data field coverage and more charts that analysts find useful and worth pointing out.
When using DataSite, all participants viewed and interacted with charts in the feed.
Most of them (11 of 12) spent more than 30\% of their time exploring the feed.
Two participants even used the feed as the main interface for exploring the datasets.
Beyond this, two participants interacted with more than 70\% of total charts, and 75\% of their ``speak out'' charts were directly from the feed.
\subsubsection{Text Search and Filter Usage}
We analyzed the usage of the filters and the text search bar.
We were interested in whether the filters and text search could aid participants in finding desired features within the feed view, and whether they were efficient and easy to use compared to \textit{Related Views} and \textit{wildcards} in Voyager 2.
All participants used the drop-down filters at least 5 times, and 9 of 12 tried the text search.
8 of 12 said that the filters and text search were useful for quickly searching the feed during the study session.
7 of 12 used combinations of text search and filters.
Three participants found \textit{wildcards} in Voyager 2 not very intuitive.
They used wildcards only infrequently during exploration, which matches the results from Wongsuphasawat et al.~\cite{Wongsuphasawat2017}.
In comparison, filters and search options not only contribute to fast data exploration, but also improve the efficiency of drilling down into features during proactive visual analytics.
This is one of the advantages of providing descriptions for features in the feed view.
\subsubsection{User Ratings}
We collected participants' feedback and ratings for the tools in the post-study survey.
Participants were asked to evaluate each tool's efficiency, enjoyability, and ease of use on Likert scales from 1 (least) to 5 (most).
The participants rated DataSite ($\mu = 4.32$, $\sigma = 0.67$, $p = 0.14$) higher than Voyager 2 ($\mu = 3.92$, $\sigma = 0.67$) regarding the efficiency.
For enjoyability and ease of use, the ratings are comparable: enjoyability (DataSite: $\mu = 4.33$, $\sigma = 0.65$; Voyager 2: $\mu = 4.08$, $\sigma = 0.67$), ease of use (DataSite: $\mu = 3.92$, $\sigma = 0.85$; Voyager 2: $\mu = 4$, $\sigma = 0.60$).
When asked about the comprehensiveness of their explorations of the dataset (DataSite: $\mu = 4.42$, $\sigma = 0.87$; Voyager 2: $\mu = 3.75$, $\sigma = 0.51$, $p = 0.013$), $7/12$ users rated DataSite higher and $4/12$ rated both tools with the highest ($5$) score.
Two participants gave lower ratings to DataSite than Voyager 2, explaining that Voyager 2 made it easier to browse multiple charts whereas in DataSite they had to explicitly click.
Overall, DataSite was seen as more efficient and as presenting more comprehensive coverage of the data fields for visual exploration than Voyager 2, while maintaining a similar level of enjoyability and ease of use.
Users also responded very positively when asked whether features in the feed provide guidance in their data analysis: 50\% chose 5 and the rest chose a 4 rating.
When comparing the two tools (Fig.~\ref{fig:boxplot_rating}) on a 5-level symmetric scale (range $[-2, 2]$), most participants (11 of 12) rated DataSite ($\mu = 1.25$, $\sigma = 0.87$) as useful or most useful for data exploration.
Beyond this, participants were asked about their preferences between the two tools for focused question answering (as questioned by Wongsuphasawat et al.~\cite{Wongsuphasawat2017}).
7 of 12 users preferred DataSite, and 4 were neutral with no preference, with 1 preferring Voyager 2 (rated -1).
This is a little surprising since DataSite was primarily designed for visual exploration (and not question answering).
\begin{figure}[htb]
\centering
\includegraphics[width=0.80\linewidth]{rating_comparison}
\caption{User preference in terms of visualization tools for open-ended exploration and focused question answering.
DataSite received higher preference in both; 11 of 12 participants prefer DataSite for data exploration, and 9 of 12 prefer DataSite for focused question answering.}
\label{fig:boxplot_rating}
\end{figure}
\subsection{Results: Qualitative}
To better understand the results from the statistical analysis, the participant ratings, and how DataSite helped participants explore the datasets, we present our observations below.
\subsubsection{Comparing Charts and Features}
DataSite and Voyager 2 are qualitatively different in that the Voyager 2 recommendation engine stems from query generation and partial view specification using wildcard specifiers~\cite{Wongsuphasawat2017}, whereas DataSite is based on features computed by specific computational processes (Table~\ref{tab:feed}).
Thus, the primary output of Voyager 2 is a sequence of charts generated from this query engine, derived from the specified view and arranged in a scrolling ``Related Views'' panel.
DataSite, on the other hand, generates notifications in the feed view based on results returned from the computational modules loaded.
For this reason, it is difficult to directly compare features in the DataSite feed with chart recommendations in the Voyager 2 interface.
Nevertheless, Voyager 2 is the closest baseline we have, so even if the comparison is not entirely apples to apples, we think that the general metrics of data coverage, insights, subjective rankings, and qualitative feedback that the Voyager 2 evaluation~\cite{Wongsuphasawat2017} uses are appropriate.
Even so, there are many charts common to both DataSite and Voyager 2.
One simple example: DataSite's descriptive analyses show the frequency counts of categorical attributes.
This is also shown in Univariate Summaries in Voyager 2.
When specifying two ``Quantitative Fields'' as \textit{wildcards} in Voyager 2, the specified views contain the scatterplot combining two numerical fields.
If there is a correlation between the fields, this is easily seen using visual inspection of the scatterplot.
For DataSite, this is shown with a description of the correlation and a trending line of regression estimate in a scatterplot, which helps users understand the data.
This is an example of how the query generation engine in Voyager 2 can yield similar results as a directed analytical component in DataSite.
We explore more about the pros and cons of this difference in the discussion.
\subsubsection{Comparing User Findings}
As mentioned above, DataSite features are based on computational algorithms, and it has more ``advanced and detailed'' analysis (quoted from a participant) than standard visualization tools.
Several participants showed great interest in the regression estimate and clustering visualization.
One participant said that the clustering chart gave him a clear indication that most birdstrike accidents caused very small damage and only a few had severe outcomes.
Based on this insight highlighted by DataSite, the participant was easily able to dig deeper into which accidents yield the most serious damage.
As an example of the regression line, participants learned its use by seeing how a car's horsepower increases with the number of cylinders.
DataSite creates visualizations from algorithmic and analytical perspectives, which may be closer to human thinking, compared to Voyager 2, which generates charts from data attributes using a query engine.
In more general terms, DataSite provides a suite of computational components that uncover the underlying relationship within the dataset, which may not be easily seen if using manual view specification.
Participants mentioned that the feed provides ``much more detail'', while Voyager is ``basic'' and ``does not give a lot of insights'', and sometimes that charts in Voyager 2 ``do not make much sense''.
On the other hand, DataSite features are intrinsically limited by the computational components currently available and loaded in the system.
This contrasts with the chart-generating nature of Voyager 2, where the power of visualization can yield answers to questions not covered by specific modules.
We go deeper into this discussion at the end of this paper.
\subsubsection{When Participants Used the Feed}
The 12 participants were divided evenly to have different orders of the tools (DataSite first or Voyager 2 first).
Four of the six who used Voyager 2 first examined the feed (i.e., interacted with it first) at the beginning of their analysis with DataSite.
For those exposed to DataSite first, 5 of 6 did the same.
The rest began with manual specifications.
It is worth noting that when the participants did not have any idea of how to construct interesting charts to get insights, they (8 of 12) switched to the feed for charts and inspirations (during the middle 10 minutes).
10 of 12 scanned through the feed at least once in the last 5 minutes of the session.
9 of 12 participants returned to the feed at least 3 times during the study.
All of them specified at least 3 charts from the feed into the main view.
This suggests that the feed can help analysts in multiple phases of exploration.
\subsubsection{In-depth Data Exploration}
In manual specification tools, users usually create charts with fewer than three encoded attributes to keep the encoded information at a perceivable level.
7 of 12 participants found and ``spoke out'' about more advanced charts (3 or more data fields/attributes; the same definition applies below) in DataSite than in Voyager 2 (at least 20\% more).
They mentioned that the summary in feed provides descriptive analysis, while charts alone in Voyager 2 may need more time to understand.
It is worth noting that one participant used feed as the only interface for data exploration without additional manual specifications, and none did the same in Voyager 2.
She explained that the feed provides a systematic approach towards analyzing the dataset, while she had difficulty understanding \textit{Related Views} in Voyager 2.
\begin{table}[htb]
\centering
\begin{tabular}{cccccc}
\toprule
\textbf{\#Charts} & \textbf{Simple stats}
& \textbf{Corr} & \textbf{Freq} & \textbf{Clust} & \textbf{Regr}\\
\midrule
mean & 2.25 & 4.38 & 4.31 & 3.54 & 3.26 \\
std. dev. &1.25 & 2.5 & 3.46 & 1.02 & 1.57 \\
\bottomrule
\end{tabular}
\caption{Mean and standard deviation of participant interactions with computational results.
Participants interacted more with advanced features (e.g., correlations, frequency counts, clustering), while features for simple statistics (min/max and mean/variance) were rarely examined.}
\label{tab:feed}
\end{table}
\subsubsection{``Speaking Out'' Charts in the Feed}
The number of ``speak out'' charts that users verbally referred to during the study revealed interesting aspects for data analysis by general users.
Table~\ref{tab:feed} gives the mean and standard deviation of features in different categories that participants ``spoke out'' about.
Participants were more interested in plots of multiple numerical and categorical fields than in a single numerical field.
Specifically, they merely viewed the charts in the range/mean/variance modules (the average number of charts is around 1); from our observations, they skimmed the natural language descriptions but did not click to see the charts.
This implies that simple statistics are not interesting enough for analysts to examine, or that the text descriptions alone suffice for understanding.
For complex computations, charts were viewed more often by expanding their textual descriptions in the feed.
This is because there are usually no intuitive attribute combinations for creating informative charts with these data fields (participants had to rely on random combinations or their general understanding).
After seeing the charts in the feed, they all agreed that those charts were more informative than the ones they created by manual view specification.
This is a good confirmation of the utility of proactive visual exploration.
\subsubsection{Inspirations from the Feed}
The feed view provides recommendations for visual data exploration from an analytical perspective.
The features suggest certain combinations that yield effective visualizations.
All the participants manually specified similar charts (w.r.t.\ encodings) after they had seen the charts within the feed, especially heatmaps representing the frequency of combinations of two categorical fields.
More than 80\% (10 of 12) of the participants mentioned that the feed gave them some ideas of which features and encodings can be used to make the chart more informative.
On the other hand, \textit{Related Views} in Voyager 2 show visualization recommendations to users that can be easily browsed, but participants thought of them just as related charts rather than specific analytical insights.
They browsed through \textit{Related Views} a lot but never considered how and why a specific chart was suggested.
Also, 2 participants felt that the descriptions were sometimes not very easy to understand.
\subsection{Participant Feedback}
In this section, we list comments, suggestions, and feedback from the free text comments in the post-study survey and audio recording transcripts.
For example, participants described that DataSite helped visual data exploration process:
\textit{``The feed helps gear you in the right direction, especially if you are new to a dataset.
It tells you something notable that is worth looking into.''}
As for comparisons to Voyager 2, \textit{``DataSite is more specific because it gives you the options with various kinds of results.
The feed is very helpful in data analysis.''}
One participant even remarked that \textit{``[DataSite] will be very useful for day-to-day usage, especially for advanced data analysis, and can be used in industrial applications.''}
Overall, the feed view was lauded, with one participant noting that \textit{``the feed in DataSite provides a good starting point to visualize data if you don't have any idea about the dataset.''}
However, participants also provided suggestions on how to improve the feed.
Said one participant, \textit{``it would be better to make feed more user friendly, such as drag-and-drop to move charts into the main view.''}
The feed was also perceived to be daunting, or as one participant put it: \textit{``the feed is very useful, but sometimes it has a lot of results and can be a little overwhelming.''}
Another participant said that \textit{``in DataSite it is a bit difficult for me to understand the results in the feed, while Voyager 2 provides intuitive charts.''}
One participant suggested that \textit{``it would be interesting if there were guided tips that can help when I'm stuck in a chart, such as `try changing x and y axis' when the axis label is difficult to read.''}
\section{Usage Scenario}
\label{sec:usagescenario}
Here we illustrate how an analyst can use the DataSite system to examine and find interesting features and insights in the classic cars dataset~\cite{carsDataset1981Henderson}.
As soon as the analyst uploads the dataset into DataSite, the system will queue up computations scheduled by their suitability for the specific dataset.
Meanwhile, the client-side shows the interactive interface to enable the user to begin data analysis (Fig.~\ref{fig:teaser}).
From the data panel (Fig.~\ref{fig:teaser}, left), the analyst will see the dataset has three categorical attributes, one temporal, and six numerical attributes.
To encode the field, the analyst can manually drag-and-drop or auto-add it to the encoding panel (Fig.~\ref{fig:teaser} middle) to create a visualization.
To start exploration, he/she may want to get an overview of the dataset.
Without creating any visualization manually, the analyst sees DataSite update with notifications that are automatically computed by the server and shown in the feed view on the right.
The analyst finds that the average weight of cars is 2,979 lbs, and the range is from 1,613 to 5,140 lbs (Fig.~\ref{fig:feeditems_rangemean}).
The feed is dynamically updated with notifications once each computation is finished.
There are simple statistics for numerical attributes, such as mean and variance, ranges (which are min and max), and correlations, as well as frequency counts for categorical attributes.
Notifications in the feed include a brief natural language description.
The analyst may be interested in why \textit{Displacement} and \textit{Miles per Gallon} have a correlation of -0.76.
The analyst clicks on the notification, causing it to be expanded to show a corresponding chart (Fig.~\ref{fig:component_example}) explaining the finding: As the \textit{Displacement} increases, \textit{Miles per Gallon} decreases.
In order to view details and conduct further modifications to the chart, the analyst can move the chart into the main view (Fig.~\ref{fig:teaser} right) by clicking the icon on the top right.
The analyst may also want to understand the highest frequency counts for the number of cylinders.
To achieve this, the analyst uses the filter drop-down option, clicks on the Frequency Count filter, and types ``cylinders'' into the text bar.
The feed then filters to the precise results: cars with 4-cylinder engines have the highest frequency.
The analyst can then pin interesting manual specification charts in the feed, which can be used for tracking the progress of analysis and re-visiting the analysis in the future.
\section{Introduction}
Advanced smart infrastructures enable creative ways to enhance life quality and provide information flow towards facility owners, supporting occupant localization services, detection of human patterns, and enabling facility-occupant interaction applications \cite{inaya2017real, sikeridis2017occupant}. We describe a dataset of Bluetooth Low Energy (BLE) advertisement readings \cite{heydon2013bluetooth} collected during an IRB-approved one-month trial with real participants. The central component in our approach is a moving BLE beacon architecture that deploys static scanners to intercept advertisement packets. Gathered packets supported two functionalities (see Section \ref{sec:3}) and were forwarded to a central server to be used for real-time visualizations, or occupant activity recognition.
Real-subject-based datasets are valuable for a wide spectrum of research fields. The majority of existing studies on human activity sensing either focus on constrained, small-scale settings or run for only a short duration (a couple of days) \cite{cabrero2017cwi}. This work provides a dataset extracted from a large-scale multi-floor setting over an extensive one-month experiment period. The following sections describe the experimental setup, the real-subject trial, and the dataset contents.
\section{Bluetooth Low Energy Beacons}
The BLE standard was proposed by the Bluetooth Special Interest Group (SIG) and is designed for low transmission power and short data-burst communications. The protocol operates in the 2.4 GHz frequency band with 40 channels at 2 MHz spacing (3 for advertising, 37 for data transmissions). While BLE supports the classic master/slave connection paradigm, its extremely energy-efficient nature drew attention to a new class of mini-scale devices: BLE beacons. These devices operate by periodically broadcasting packets/identifiers at a specific time interval and transmission power.
On the receiving end of those transmissions, BLE-enabled devices can utilize information such as the Received Signal Strength Indicator (RSSI) to support various applications, with micro-location, geofencing \cite{zafari2016microlocation}, and occupant tracking \cite{sikeridis2017occupant} being some cases in point. The sustainability of such use cases is further supported by the low power consumption of BLE beacons, which allows the devices to be powered by a coin cell battery for years before any need for reconfiguration.
In light of the above, initially Apple with iBeacon \cite{iBeaconProtocol} and later Google with Eddystone \cite{eddy} standardized BLE beacon protocols (essentially the advertising packet formats). A BLE beacon that utilizes the iBeacon profile transmits a message that contains three pieces of data:
(a) a Universally Unique Identifier (UUID), which is the identity of the beacon, (b) a Major value that denotes general spatial information, and (c) a Minor value that denotes more specific spatial information.
Eddystone operates similarly to iBeacon, with the exception that it supports four different payload types in its frame format: (a) the Unique ID (UID) frame, which denotes identity and consists of two parts, a Namespace (10 bytes) and an Instance (6 bytes); (b) the URL frame, which carries a URL that the BLE end device can directly open; (c) the Sensor Telemetry (TLM) frame, which can be used to send sensor data; and (d) the Ephemeral Identifier frame, in which the beacon broadcasts a continuously changing identifier that can be resolved to data by an end BLE device sharing a specific key with the beacon (a security feature). More information regarding BLE beacons and their applications in Internet of Things settings can be found in \cite{zafari2016microlocation, zafari2017ibeacon, jeon2018ble}.
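As a concrete illustration of the iBeacon payload layout described above, the manufacturer-specific data of an advertisement can be decoded as follows. This is a minimal sketch, not the trial's actual parsing code, and the UUID and values in the example are hypothetical:

```python
import struct
import uuid

def parse_ibeacon(ad: bytes):
    """Parse the manufacturer-specific AD structure of an iBeacon frame.
    Layout: 0x4C00 (Apple) | 0x02 0x15 | 16-byte UUID | major | minor | tx power.
    Returns None if the payload is not an iBeacon advertisement."""
    if len(ad) < 25 or ad[0:2] != b"\x4c\x00" or ad[2:4] != b"\x02\x15":
        return None
    beacon_uuid = uuid.UUID(bytes=ad[4:20])
    major, minor = struct.unpack(">HH", ad[20:24])   # big-endian 16-bit values
    tx_power = struct.unpack("b", ad[24:25])[0]      # signed dBm at 1 m
    return beacon_uuid, major, minor, tx_power

# Hypothetical example payload
payload = (b"\x4c\x00\x02\x15"
           + uuid.UUID("e2c56db5-dffb-48d2-b060-d0f5a71096e0").bytes
           + struct.pack(">HH", 1, 7) + struct.pack("b", -59))
u, major, minor, tx = parse_ibeacon(payload)
```

A scanner would extract `ad` from the manufacturer-specific data field of a received BLE advertisement before calling such a parser.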
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{loc.pdf}
\caption{Location of RPis - Facility topology.}
\label{fig:loc}
\end{figure}
\section{Sensing Infrastructure and Trial}
In this section we document the utilized devices and elaborate on the deployed infrastructure that collects the trial data.
\subsection{BLE Advertising Devices}
Facility occupants carry off-the-shelf BLE-based beacons that continuously transmit BLE advertisement packets. We used the Gimbal Series 10 iBeacon \cite{gimbal, iBeaconProtocol} configured to broadcast under the same 16-byte universally unique identifier (UUID). Each beacon is distinguished by a 4-byte identifier inside the manufacturer BLE advertisement packets. The periodic transmission rate for each beacon is set to 1 Hz, with omni-directional antenna propagation setting, and transmission power of 0 dBm.
\subsection{Sensing Infrastructure}
The deployment of the sensing infrastructure aimed for easy installation and cost efficiency. Thus, the backbone of the system is the Raspberry Pi 3 (RPi), which is able to listen to the BLE advertisement channels, and collect all generated packets. All utilized RPi edge devices are connected through WiFi and act as MQTT (Message Queuing Telemetry Transport) \cite{mqttProtocol} clients transmitting information to an MQTT broker hosted by the server. The server performs data management, validating and storing information in a MariaDB SQL database.
Thirty-two RPis were installed on three floors within NCSU Centennial Campus -- Engineering Building II to support the experiment. The exact location of the RPis in the three-floor setting is shown in Fig. \ref{fig:loc}.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{ARCH.pdf}
\caption{RSSI report operation.}
\label{fig:ARCH}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{check.pdf}
\caption{ Check-In/Check-Out report operation.}
\label{fig:check}
\end{figure}
\begin{table}[!h]
\scriptsize
\renewcommand{\arraystretch}{1.3}
\caption{Trial \& Sensing Infrastructure Information}
\label{Table:components}
\centering
\begin{tabular}{||l||l||}
\hline
\bf{BLE Beacon Type} & Gimbal Beacon Series 10 \\ \hline
\bf{Number of Beacons/Participants} & 46 \\ \hline
\bf{BLE Beacon Scanner Type} & Raspberry Pi 3 \\ \hline
\bf{Number of Scanners} & 32 \\ \hline
\bf{Number of Floors} & 3 \\ \hline
\bf{Trial Dates} & 09/15/2016 - 10/17/2016 \\ \hline
\end{tabular}
\end{table}
\subsection{Operation}
Regarding system operation, two approaches were utilized in parallel:
\begin{enumerate}
\item {\it RSSI Report}: all advertisement packet receptions from occupant beacon devices are directly reported to the server with a message that contains the beacon/user ID, the packet's Received Signal Strength Indicator (RSSI), a reception timestamp, and finally the ID of the RPi that received the advertisement, as seen in Fig. \ref{fig:ARCH}.
\item {\it Check-In/Check-Out Report}: each RPi scanner continuously manages a list of current occupants/users in its proximity. A {\it check in} timestamp is created upon an occupant's initial entry, and while the beacon is still being detected by the RPi, a {\it last seen} timestamp is updated. When the beacon is no longer detected (advertisement packets are no longer being received), a Check-In/Check-Out report packet is created and sent to the server containing the beacon/user ID, the {\it check in} timestamp, the {\it last seen} timestamp, and finally the ID of the RPi, as seen in Fig. \ref{fig:check}. A thirty-second timeout is used to ensure that the occupant has exited the RPi's proximity.
\end{enumerate}
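The per-scanner check-in/check-out bookkeeping can be sketched in a few lines. This is a hypothetical reconstruction of the logic described above, not the deployed code, and names such as `ProximityTracker` are ours:

```python
CHECKOUT_TIMEOUT = 30.0  # seconds of silence before a check-out is emitted

class ProximityTracker:
    """Sketch of the per-RPi Check-In/Check-Out logic: track the first
    and most recent sighting of each beacon, and emit a report once a
    beacon has been silent for the timeout period."""
    def __init__(self, rpi_id):
        self.rpi_id = rpi_id
        self.active = {}  # beacon_id -> (check_in, last_seen)

    def on_packet(self, beacon_id, timestamp):
        check_in, _ = self.active.get(beacon_id, (timestamp, timestamp))
        self.active[beacon_id] = (check_in, timestamp)

    def flush(self, now):
        """Emit Check-In/Check-Out reports for beacons that timed out."""
        reports = []
        for beacon_id, (check_in, last_seen) in list(self.active.items()):
            if now - last_seen >= CHECKOUT_TIMEOUT:
                reports.append({"beacon_id": beacon_id, "rpi_id": self.rpi_id,
                                "in_time": check_in, "out_time": last_seen})
                del self.active[beacon_id]
        return reports

tracker = ProximityTracker("RPi-07")   # hypothetical scanner ID
tracker.on_packet("B01", 0.0)
tracker.on_packet("B01", 5.0)
within_timeout = tracker.flush(10.0)   # empty: only 5 s of silence
reports = tracker.flush(40.0)          # 35 s of silence -> check-out report
```

In the real system the emitted report would be published to the MQTT broker rather than returned to the caller.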
\subsection{Real-Subject Trial}
Following the system architecture described above, an IRB-approved trial with 46 participants took place from September 15 to October 17, 2016. Participants were frequent occupants of the building who carried a BLE beacon with them at all times during their usual routines. The experiment covered all three floors (see Fig. \ref{fig:loc}), and the core idea was to gain insights into repeated occupant behavior and patterns in relation to the facility environment.
Table \ref{Table:components} summarizes basic trial and sensing infrastructure information.
\section{BLEBeacon Dataset}\label{sec:3}
The discussed dataset consists of two files, one containing the trial readings from the RSSI report operation ({\it RSSI Report} file) and the other from the Check-In/Check-Out report operation ({\it Check-In/Check-Out Report} file).
No participant personal information was kept or made available, to preserve personal privacy. The {\it RSSI Report} file contains the following entries:
\begin{itemize}
\item {\it Entry\_id}: unique identifier of a packet in the dataset.
\item {\it Beacon\_id}: unique identifier of the occupant/beacon.
\item {\it RSSI}: the Received Signal Strength Indicator (RSSI) in dB.
\item {\it Timestamp}: Date (Month/Day/Year) and Unix time (Hour:Second) of the advertisement packet reception moment from the Rpi.
\item {\it RPi\_id}: RPi that received the packet (see Fig. \ref{fig:ARCH}).
\end{itemize}
The {\it Check-In/Check-Out Report} file contains {\it Entry\_id}, {\it Beacon\_id}, and {\it RPi\_id} as described above with the addition of two entries namely:
\begin{itemize}
\item {\it In\_time}: Date (Month/Day/Year) and Unix time (Hour:Second) of the moment a user enters the RPi's vicinity and the first advertisement packet is received.
\item {\it Out\_time}: Date (Month/Day/Year) and Unix time (Hour:Second) of the last advertisement packet received from the same user by the specific RPi.
\end{itemize}
Due to the architecture of the sensing infrastructure, where several RPi scanners were deployed in close proximity, a single BLE advertisement packet from an occupant/beacon can be received by multiple RPis. This creates multiple entries of the same packet in the database. Such packets are timestamped at the reception moment at the RPi scanner. The RSSI measurements from different RPis, whose locations are known, can be used to identify a user's exact location inside the facility. An initial analysis of the dataset is described in \cite{inaya2017real}. Work related to the BLEBeacon dataset can be found in \cite{inaya2017real, sikeridis2017cloud, sikeridis2017occupant}.
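Consumers of the {\it RSSI Report} file may want to collapse such duplicate receptions. One simple approach (a sketch under our assumptions about the row format, not part of any released tooling) keeps, for each beacon and timestamp, the reception with the strongest RSSI as a rough proxy for the nearest scanner:

```python
def strongest_receiver(rows):
    """Collapse duplicate receptions of one advertisement across scanners:
    for each (beacon_id, timestamp) pair, keep the row with the strongest
    (least negative) RSSI as a proxy for the nearest RPi."""
    best = {}
    for row in rows:
        key = (row["beacon_id"], row["timestamp"])
        if key not in best or row["rssi"] > best[key]["rssi"]:
            best[key] = row
    return list(best.values())

# Hypothetical rows mimicking the RSSI Report schema
rows = [
    {"beacon_id": "B01", "timestamp": 100, "rssi": -70, "rpi_id": "RPi-03"},
    {"beacon_id": "B01", "timestamp": 100, "rssi": -55, "rpi_id": "RPi-04"},
    {"beacon_id": "B02", "timestamp": 101, "rssi": -80, "rpi_id": "RPi-03"},
]
deduped = strongest_receiver(rows)
```

More sophisticated users would instead keep all duplicates and trilaterate from the known RPi locations, as the text suggests.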
\section{Conclusion}
This report accompanies and documents the BLEBeacon Dataset, a collection of BLE advertisement packets gathered from a three-floor sensing infrastructure accommodating real-participants carrying iBeacons, following their routines during a one-month period. Possible uses of the dataset include network behavior and reliability detection in similar sensing environments, user mobility pattern extraction due to the experiment's length, occupancy clustering with group identification/monitoring, and provision of facility management or crowd monitoring application insights considering real-life conditions.
\section*{Acknowledgment}
\vspace{-1mm}
This research was supported by an IBM Faculty Award.
The authors are grateful to Mahdi Inaya and Michael Meli for carrying out the system deployment and installation, the NC State ECE Department for allowing us to host the trial in the EBII building, and to the gracious ECE trial participants.
\bibliographystyle{IEEEtran}
\section{Introduction}
The first observation of gravitational waves from a binary neutron star merger~\citep{Abbott2017} opens up the possibility of studying neutron star interiors through tidally-induced phase shifts to gravitational waveforms~\citep{Lackey2015,Agathos2015}. This would allow gravitational wave astronomy to serve as a probe of the equation of state above nuclear density, which is otherwise difficult to study. Low-frequency modes with frequencies swept by the orbital frequency may be resonantly excited by tidal interactions in neutron star-black hole and neutron star-neutron star mergers~\citep{Bildsten1992,Cutler1993,Lai1994,Reisenegger1994,Xu2017,Andersson2018}, causing a phase shift that depends on the exact nature of the excited modes. Low frequency $g$-modes are especially interesting, although the resulting gravitational waveform phase shifts from their resonant excitation will likely be impossible to measure with current-generation detectors unless the merging neutron stars are rapidly rotating or have large radii~\citep{Ho1999,Flanagan2007}. The acoustic $p$-modes are too high in frequency to be resonantly excited themselves, but could participate in nonlinear tidal interactions involving the coupling of the $g$-modes and the $p$-modes which may be observable through a gravitational waveform phase shift~\citep{Weinberg2013,Essick2016}.
Potential sources of $g$-modes in neutron stars have been studied for decades. In a normal fluid neutron star, buoyancy arising from temperature gradients~\citep{McDermott1983,Bildsten1995} and from the proton fraction gradient~\citep{Reisenegger1992} has been investigated, supporting modes with frequencies of $\sim 1$--$100$ Hz. \citet{Lee1995} studied proton fraction gradient $g$-modes in Newtonian stars with superfluid cores, and confirmed a previous calculation by \citet{Lindblom1994} which found two sets of $p$-modes, corresponding to the normal fluid and superfluid degrees of freedom, respectively. Sound speeds for both sets of superfluid neutron star $p$-modes have been calculated by \citet{Epstein1988a}, and this second set of $p$-modes has also been found in a fully relativistic, finite-temperature calculation~\citep{Gualtieri2014}. However, in neutron star cores composed of superfluid neutrons and superfluid-superconducting protons~\citep{Lombardo2001, Page2011}, proton fraction gradients do not lead to $g$-modes unless temperatures are above the neutron critical temperature, estimated to be $\lesssim 10^9$ K~\citep{Yakovlev1999}, above which electron-neutron coupling~\citep{Bertoni2015} and electron-proton electrostatic coupling will cause both baryon species to move together. $G$-modes due to entropy gradients in superfluid neutron stars were first found by~\citet{Gusakov2013a}, and shortly thereafter they found a new class of $g$-modes resulting from leptonic buoyancy~\citep{Kantor2014} (hereafter KG14). This effect is caused by a gradient in the electron fraction at number densities $\gtrsim 0.13$ fm$^{-3}$, where electrons and muons coexist in most equations of state. While these leptonic buoyancy $g$-modes were considered in a nonzero temperature star, they were found to exist even in the zero-temperature limit, and their existence was independently confirmed~\citep{Passamonti2016}.
A recent paper by~\citet{Yu2017} (hereafter YW17) has computed $g$-mode frequencies and displacement fields arising from leptonic buoyancy in zero temperature neutron star cores using Newtonian gravity, and used their results to study resonant tidal excitation of the modes during neutron star binary inspiral.
We calculate both sets of compressional modes of two-superfluid neutron stars in the zero-temperature approximation, including both the $g$-modes arising due to leptonic buoyancy and the $p$-modes, but with a few crucial differences from previous calculations. First, like KG14, we include general relativity and work in the Cowling approximation, neglecting the effects of perturbations to the metric; YW17 used Newtonian gravity but included self-gravity perturbations. Second, we use a flexible parametrized equation of state (EOS) that allows us to easily adjust the compressibility of the neutron star core, and employ this EOS to calculate the $g$-modes for a range of compressibilities and correspondingly a range of stellar masses and radii. Third, we allow the neutron superfluid to flow into the crust of normal fluid nuclei instead of assuming that the crust is a single normal fluid. We find that this has important implications for the neutron component of the $g$-modes and for both components of the $p$-modes. We compute the displacement fields for the $g$-modes, which KG14 did not report in their initial letter but YW17 did, although the differences in our method mentioned above mean that our modes differ qualitatively and quantitatively. We also use our formalism to compute the $p$-modes of the star, as did \citet{Lee1995} and \citet{Gualtieri2014}, though only in the zero-temperature limit.
In Section~\ref{sec:EoS}, we introduce the parametrized equation of state we use in the core (Section~\ref{subsec:CoreEoS}) and the crust (Section~\ref{subsec:CrustEoS}). In Section~\ref{sec:FluidDynamics} we obtain the equations of motion for the modes and compute the Brunt--V\"{a}is\"{a}l\"{a} frequency due to the muon gradient in the core. The crust-core interface and boundary conditions for the modes, which we find are significant to determining the normal mode displacement fields, are then discussed. Finally, in Section~\ref{sec:NormalModes} we compute the $g$- and $p$-modes, with and without entrainment of the superfluid neutrons and protons in the core, and we make comparisons to previous calculations.
\section{Equation of state}
\label{sec:EoS}
Here we describe the model of the background neutron star that we used for the calculation of the Brunt--V\"{a}is\"{a}l\"{a} frequency in Section~\ref{sec:BVFreq} and the compressional modes in Section~\ref{sec:NormalModes}. Our EOS is based on a relatively simple, parametrized model. We adopt parameters to satisfy constraints near nuclear density $n_{\text{nuc}}=0.16$ fm$^{-3}$ and allow neutron star masses above $2M_{\odot}$. It is a phenomenological model, but is sufficiently detailed that we can compute all thermodynamic quantities we need to find the normal modes. $\hbar=c=1$ is used throughout.
\subsection{Core equation of state}
\label{subsec:CoreEoS}
We consider an electrically neutral fluid of neutrons, protons, electrons and muons at zero temperature. Its energy density $\rho$ is specified as a function of three variables: baryon number density $n_{\rm b}$, proton fraction $Y$ and electron fraction $f$, where the neutron, proton, electron and muon number densities are respectively $n_{\rm n}=n_{\rm b}(1-Y)$, $n_{\rm p}=n_{\rm b}Y$, $n_{\rm e}=n_{\rm b}Yf$ and $n_{\upmu}=n_{\rm b}Y(1-f)$. We separate the energy density into kinetic and interaction parts $\rho=\rho_{\text{kin}}+\rho_{\text{int}}$; the kinetic part (including the rest mass) is given by
\begin{equation}
\rho_{\text{kin}}(n_{\rm b},Y,f)=\frac{p_{\rm Fe}^4}{4\pi^2}+\sum_{j={\rm n,p,}\upmu}\frac{m_j^4}{\pi^2}\phi\left(\frac{p_{{\rm F}j}}{m_j}\right)
\label{eq:KineticEnergyDensity}
\end{equation}
for Fermi momenta $p_{{\rm F}j}=(3\pi^2n_j)^{1/3}$ and bare mass $m_j$ of particle species $j$, and where
\begin{equation}
\phi(x)=\frac{x^3}{4}\sqrt{x^2+1}+\frac{x}{8}\sqrt{x^2+1}-\frac{1}{8}\text{arsinh}(x).
\end{equation}
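As a sanity check, $\phi(x)$ above is the closed form of the ideal Fermi-gas energy integral $\int_0^x t^2\sqrt{t^2+1}\,{\rm d}t$ (with $t=p/m$, so that $\rho=(m^4/\pi^2)\phi(p_{\rm F}/m)$ for one species). The following illustrative Python sketch verifies the closed form against Simpson's-rule quadrature and the small-$x$ limit $\phi(x)\to x^3/3$:

```python
import math

def phi(x):
    # Closed form from the equation above
    return (x**3/4.0 + x/8.0)*math.sqrt(x*x + 1.0) - math.asinh(x)/8.0

def phi_quad(x, n=2000):
    # Simpson's rule for the defining integral int_0^x t^2 sqrt(t^2+1) dt
    f = lambda t: t*t*math.sqrt(t*t + 1.0)
    h = x/n
    s = f(0.0) + f(x)
    s += 4.0*sum(f((2*i - 1)*h) for i in range(1, n//2 + 1))
    s += 2.0*sum(f(2*i*h) for i in range(1, n//2))
    return s*h/3.0
```

The $x^3/3$ leading term reproduces the rest-mass energy $m n$, and the next term, $x^5/10$, gives the familiar nonrelativistic kinetic energy $(3/5)\,n\,p_{\rm F}^2/2m$.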
We have assumed the electrons are ultrarelativistic and ignore the difference between the proton and neutron mass, assuming $m_{\rm n}=m_{\rm p}=m_{\rm N}$. Although we use the bare nucleon mass, we assume that $\rho_{\text{int}}$ includes effective mass corrections adequately. The interaction energy density $\rho_{\text{int}}(n_{\rm b},Y)$ employed is based on that of~\citet{Hebeler2013}, but with a different form for the symmetry penalty term:
\begin{align}
\rho_{\text{int}}(n_{\rm b},Y)={}&n_{\text{nuc}}E_{\rm S}\frac{\overline{n}^2+f_{\rm S}\overline{n}^{\gamma_{\rm S}+1}}{1+f_{\rm S}} \nonumber
\\ & +n_{\text{nuc}}E_{\rm A}\overline{n}^2\left(\frac{\overline{n}+\overline{n}_0}{1+\overline{n}_0}\right)^{\gamma_{\rm A}-1}(1-2Y)^2,
\label{eq:NucleonInteractionEnergyDensity}
\end{align}
where $\overline{n}=n_{\rm b}/n_{\text{nuc}}$ and $\overline{n}_0$ is a characteristic number density. For $\overline{n}\ll\overline{n}_0$, the symmetry penalty term is quadratic in $\overline{n}$, as the energy per baryon should be linear in the density at low densities.
The requirements of $-16$ MeV per baryon binding energy and zero pressure for symmetric nuclear matter at nuclear density constrain the parameters $E_{\rm S}$, $\gamma_{\rm S}$ and $f_{\rm S}$, while experimental measurements like those used in constructing Figure~6 of~\citet{Lattimer2016} constrain $E_{\rm A}$, $\gamma_{\rm A}$ and $\overline{n}_0$. Another constraint is that the EOS must allow a maximum mass $\geq 2M_{\odot}$~\citep{Antoniadis2013}, though this can be adjusted upward to allow for higher masses if required by future observations. We consider four possible parameter choices PC1--PC4, differing in the values of $\gamma_{\rm S}$ and $f_{\rm S}$, with each choice corresponding to a value of the nuclear compressibility parameter $K=9(\partial^2(\rho/n_{\rm b})/\partial\overline{n}^2)|_{\overline{n}=1,Y=1/2}$. These are listed in Table~\ref{tab:EOSParameters}. For these choices of $E_{\rm A}$, $\gamma_{\rm A}$ and $\overline{n}_0$, the symmetry energy $S_{\rm v}=31.73$ MeV and density derivative $L=60.32$ MeV, within the $1\sigma$ confidence region of Figure~6 of~\citet{Lattimer2016}. Three of the chosen values of $K$ are within the $240\pm20$ MeV confidence range cited in~\citet{Lattimer2016}; $K=220$ MeV is not used because it does not allow a $2M_{\odot}$ star to exist in our EOS. The $K=280$ MeV parametrization represents a causal limit, i.e., the sound speed equals the speed of light for central densities just beyond that which has the maximum mass for this parametrization. While the EOS is flexible, we found that it was difficult to obtain a maximum mass greater than $2.2M_{\odot}$, and also found that adjusting the parameters $E_{\rm A}$, $\gamma_{\rm A}$ and $\overline{n}_0$ had only a small effect on the nuclear compressibility, so these parameters were fixed for all four parameter sets.
\begin{table}
\caption{Different parametrizations of core equation of state and corresponding nuclear compressibility $K$, maximum mass $M_{\text{max}}$, radius at maximum mass $R_{\text{max}}$, central baryon number density for the maximum mass star $n_{b,\text{cntr},\text{max}}$ and the baryon number density at which the sound speed equals the speed of light $n_{b,\text{cl}}$.}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{} & \textbf{PC1} & \textbf{PC2} & \textbf{PC3} & \textbf{PC4} \\
\hline
$E_{\rm S}$ (MeV) & -37.8 & -37.8 & -37.8 & -37.8 \\
\hline
$\gamma_{\rm S}$ & 1.31 & 1.356 & 1.452 & 1.547 \\
\hline
$f_{\rm S}$ & -0.667 & -0.634 & -0.577 & -0.530 \\
\hline
$E_{\rm A}$ (MeV) & 19.9 & 19.9 & 19.9 & 19.9 \\
\hline
$\gamma_{\rm A}$ & 0.61 & 0.61 & 0.61 & 0.61 \\
\hline
$\overline{n}_0$ & 0.05 & 0.05 & 0.05 & 0.05 \\
\hline
$K$ (MeV) & 230 & 240 & 260 & 280 \\
\hline
$M_{\text{max}}/M_{\odot}$ & 2.01 & 2.05 & 2.15 & 2.24 \\
\hline
$R_{\text{max}}$ (km) & 10.23 & 10.34 & 10.62 & 10.88 \\
\hline
$n_{{\rm b},\text{cntr},\text{max}}/n_{\text{nuc}}$ & 7.43 & 7.22 & 6.73 & 6.32 \\
\hline
$n_{{\rm b},\text{cl}}/n_{\text{nuc}}$ & 9.8 & 8.9 & 7.5 & 6.4 \\
\hline
\end{tabular}
\label{tab:EOSParameters}
\end{table}
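As an illustrative cross-check (not part of the original calculation), the following Python sketch evaluates the symmetric-matter saturation properties implied by the PC1 parameters, assuming $\hbar c = 197.327$ MeV fm and a common nucleon mass $m_{\rm N}=939.6$ MeV. It recovers a binding energy near $-16$ MeV per baryon, near-zero pressure at $n_{\rm b}=n_{\text{nuc}}$, and $K\approx 230$ MeV, consistent with the constraints described above:

```python
import math

HBARC = 197.327     # MeV fm (assumed conversion constant)
M_N   = 939.6       # MeV, common nucleon mass as in the text
N_NUC = 0.16        # fm^-3
E_S, GAMMA_S, F_S = -37.8, 1.31, -0.667   # PC1 parameters from the table

def phi(x):
    # Fermi-gas energy integral, as defined in the text
    return (x**3/4.0 + x/8.0)*math.sqrt(x*x + 1.0) - math.asinh(x)/8.0

def energy_per_baryon(nbar):
    """E/A - m_N (MeV) for symmetric matter (Y = 1/2, no leptons)."""
    nb = nbar*N_NUC
    pf = HBARC*(3.0*math.pi**2*nb/2.0)**(1.0/3.0)   # per-species p_F
    rho_kin = 2.0*M_N**4/math.pi**2*phi(pf/M_N)/HBARC**3
    rho_int = N_NUC*E_S*(nbar**2 + F_S*nbar**(GAMMA_S + 1.0))/(1.0 + F_S)
    return (rho_kin + rho_int)/nb - M_N

h = 1e-3   # central finite differences around saturation, nbar = 1
dE  = (energy_per_baryon(1+h) - energy_per_baryon(1-h))/(2*h)
d2E = (energy_per_baryon(1+h) - 2*energy_per_baryon(1.0)
       + energy_per_baryon(1-h))/h**2
P_sat = N_NUC*dE     # MeV fm^-3: P = n_b^2 d(E/A)/dn_b at nbar = 1
K_sat = 9.0*d2E      # MeV: compressibility as defined in the text
```

The small residuals in the binding energy and pressure reflect the rounding of the tabulated parameters.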
The pressure is specified by
\begin{equation}
P=n_{\rm b}\frac{\partial \rho}{\partial n_{\rm b}}-\rho
\end{equation}
and the chemical potential by
\begin{equation}
\mu=\frac{\partial \rho}{\partial n_{\rm b}}.
\end{equation}
The individual chemical potentials are calculated using
\begin{equation}
\mu_x=\frac{\partial \rho}{\partial n_x},\quad x={\rm n,p,e},\upmu.
\end{equation}
The background star is assumed to be in beta equilibrium, implying
\begin{equation}
\mu_{\rm n}=\mu_{\rm p}+\mu_{\rm e}=\mu_{\rm p}+\mu_{\upmu} \Rightarrow \mu_{\rm e}=\mu_{\upmu}.
\end{equation}
We find that muons first appear at $n_{\rm b}=0.8n_{\text{nuc}}$ for all four EOS parametrizations that we considered. In Figure~\ref{fig:CoreEoS} (top), we compare our $\rho(n_{\rm b})$ in the core to the BSk19--BSk21 EOSs from~\citet{Potekhin2013}, finding that ours is in good agreement with all three of their EOSs in the lower half of the density range and with the BSk19 and BSk20 EOS in the higher density region. We also plot the proton fraction $Y$ and $Y_{\rm e}=fY$ as functions of $n_{\rm b}$ in the core (bottom).
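The muon threshold quoted above follows from the zero-temperature beta-equilibrium condition $\mu_{\rm e}=\mu_{\upmu}$: muons appear once the electron chemical potential exceeds $m_{\upmu}$. The following minimal Python sketch (ultrarelativistic electrons, noninteracting muons; the bisection scheme is our own choice) solves for the electron fraction $f$ of the charged leptons at a given total lepton density $n_{\rm q}$:

```python
import math

HBARC = 197.327      # MeV fm
M_MU  = 105.66       # MeV, muon mass

def mu_e(ne):        # ultrarelativistic electrons
    return HBARC*(3.0*math.pi**2*ne)**(1.0/3.0)

def mu_mu(nmu):      # free relativistic muons
    return math.sqrt(M_MU**2 + (HBARC*(3.0*math.pi**2*nmu)**(1.0/3.0))**2)

def electron_fraction(nq):
    """Zero-T lepton split: solve mu_e(f*nq) = mu_mu((1-f)*nq) for f."""
    if mu_e(nq) <= M_MU:          # below muon threshold: no muons
        return 1.0
    lo, hi = 1e-9, 1.0
    for _ in range(200):          # bisection on the monotone residual
        mid = 0.5*(lo + hi)
        if mu_e(mid*nq) < mu_mu((1 - mid)*nq):
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)
```

Muons first appear at lepton density $n_{\rm q}=(m_{\upmu}/\hbar c)^3/3\pi^2\approx 5.2\times 10^{-3}$ fm$^{-3}$; the quoted baryon threshold $n_{\rm b}=0.8n_{\text{nuc}}$ additionally requires the proton fraction $Y(n_{\rm b})$ from the full beta-equilibrium solution.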
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{CoreEOSCombined.pdf}
\caption{Top: Energy density $\rho$ as a function of $n_{\rm b}/n_{\text{nuc}}$ in the core from this paper (RW with PC1 parameters) and the BSk19--BSk21 EOS from~\citet{Potekhin2013}. Bottom: Proton fraction $Y=n_{\rm q}/n_{\rm b}$ and electron fraction $Y_{\rm e}=fY=n_{\rm e}/n_{\rm b}$ in the core for the RW EOS with PC1 parameters. $Y=Y_{\rm e}$ below the muon threshold density $n_{\rm b}/n_{\text{nuc}}=0.8$. }
\label{fig:CoreEoS}
\end{figure}
\subsection{Crust equation of state}
\label{subsec:CrustEoS}
In the inner crust, between neutron drip at mass density $\sim 10^{11}$ g/cm$^3$ and the transition to the core at $\sim 10^{14}$ g/cm$^3$ ($n_{\rm b}\sim 0.5n_{\text{nuc}}$), neutron stars are expected to consist of neutron-rich nuclei surrounded by a dripped neutron gas and a pervasive ultrarelativistic electron gas. Below neutron drip, the outer crust, consisting only of nuclei and an electron gas, is included using the BPS EOS~\citep{Baym1971a}. It is, however, neglected when computing the oscillation modes as it constitutes less than a hundredth of a percent of the star by mass, thus having a negligible effect on the bulk oscillation modes. The effects of this omission are briefly discussed at the conclusion of Section~\ref{sec:BCs}.
Following~\citet{Baym1971} and~\citet{Haensel2001b}, we consider a liquid drop-type model with spherical nuclei of radius $r_{\rm n}$ inside spherical unit cells of radius $r_{\rm c}$. We do not model exotic shapes or nuclear pasta~\citep{Ravenhall1983,Hashimoto1984,Watanabe2003} at this stage, and we allow the proton and nucleon numbers $Z$ and $A$ to vary continuously. The density of neutrons outside the nuclei is $n_{\rm n,o}$, while the nuclei themselves have baryon density $n_{\rm i}$. The neutron and proton densities inside the nuclei are $n_{\rm n,i}=(1-Y)n_{\rm i}$ and $n_{\rm p,i}=Yn_{\rm i}$ respectively, where in the crust $Y=Z/A$ is the proton fraction of the nuclei and $Z$ and $A$ are defined to include only baryons inside the nuclei. The electron number density is fixed by electric charge neutrality to be equal to the average proton number density over the cell, so
\begin{equation}
n_{\rm e} = wn_{\rm p,i}=wn_{\rm i}Y,
\end{equation}
where $w=(r_{\rm n}/r_{\rm c})^3$ is the fraction of the volume of each cell occupied by the nucleus and $r_{\rm n}=(3A/4\pi n_{\rm i})^{1/3}$. We also define the number density of nuclei $n_{\rm n}$, which is given in terms of the cell radius by
\begin{equation}
n_{\rm n} = \frac{3}{4\pi r_{\rm c}^3},
\end{equation}
so the total baryon number density $n_{\rm b}$ is given by
\begin{equation}
n_{\rm b} = An_{\rm n}+(1-w)n_{\rm n,o}.
\end{equation}
Later, we discuss the fluid oscillations in the crust in terms of the macroscopic motion of two fluids: a free neutron superfluid and a normal fluid of nuclei. In the fluid equations, we use the mean density of free neutrons outside the nuclei $n_{\rm f}\equiv(1-w)n_{\rm n,o}$ and the mean density of nuclear baryons $n_{\rm c}\equiv An_{\rm n}$. In terms of these densities, the total baryon number density is
\begin{equation}
n_{\rm b}=n_{\rm f}+n_{\rm c};
\end{equation}
note too that $w=n_{\rm c}/n_{\rm i}$.
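The cell-averaged quantities above satisfy simple consistency relations (e.g. $n_{\rm c}=wn_{\rm i}$ and $n_{\rm b}=n_{\rm f}+n_{\rm c}$), which the following illustrative Python sketch, with arbitrary example inputs, makes explicit:

```python
import math

def cell_densities(r_n, r_c, n_i, Y, n_no):
    """Average densities in a spherical Wigner-Seitz cell (illustrative).

    r_n, r_c : nuclear and cell radii (fm); n_i : baryon density inside
    the nucleus (fm^-3); Y = Z/A; n_no : free-neutron density outside.
    """
    w   = (r_n/r_c)**3                    # volume filling fraction
    A   = 4.0*math.pi/3.0*r_n**3*n_i      # baryons per nucleus
    nN  = 3.0/(4.0*math.pi*r_c**3)        # number density of nuclei
    n_c = A*nN                            # mean nuclear-baryon density
    n_f = (1.0 - w)*n_no                  # mean free-neutron density
    n_e = w*n_i*Y                         # electrons neutralize protons
    return w, A, n_c, n_f, n_e
```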
We write the energy density for the inner crust in terms of the five variables $n_{\rm f}$, $n_{\rm c}$, $n_{\rm i}$, $A$ and $Y$. The energy density for the inner crust has five components: bulk energy densities for the nuclei $\rho_{i,\text{bulk}}$, surrounding neutron gas $\rho_{{\rm o},\text{bulk}}$, and electron gas $\rho_{\rm e}$, Coulomb energy density $\rho_{\text{Coul}}$ including the self-energy of the nuclei and the lattice energy, and surface energy density $\rho_{\text{surf}}$. The bulk energy density for nuclear matter, the neutron gas and the electron gas have the same form as in the core, discussed in the previous section. We then have
\begin{align}
\rho(n_{\rm f},n_{\rm c},A,Y,n_{\rm i}) ={}& w\rho_{{\rm i},\text{bulk}}(n_{\rm i},Y)+(1-w)\rho_{{\rm o},\text{bulk}}(n_{\rm f}/(1-w))\nonumber \\
{}&+\rho_{\rm e}(Yn_{\rm c})+n_{\rm n}E_{\text{Coul}}(n_{\rm c},n_{\rm i},A,Y)\nonumber \\
{}&+n_{\rm n}E_{\text{surf}}(n_{\rm i},A,Y), \label{eq:CrustEnergyDensity}
\end{align}
where
\begin{align}
{}&\rho_{{\rm i},\text{bulk}}(n_{\rm i},Y)=\frac{m_{\rm N}^4}{\pi^2}\left[\phi\left(\frac{p_{\rm Fi}}{m_{\rm N}}Y^{1/3}\right)+\phi\left(\frac{p_{\rm Fi}}{m_{\rm N}}(1-Y)^{1/3}\right)\right] \nonumber \\
&\quad +n_{\text{nuc}}E_{\rm S}\frac{\overline{n}_{\rm i}^2+f_{\rm S}\overline{n}_{\rm i}^{\gamma_{\rm S}+1}}{1+f_{\rm S}}+n_{\text{nuc}}E_{\rm A}\overline{n}_{\rm i}^2\left(\frac{\overline{n}_{\rm i}+\overline{n}_{\rm o}}{1+\overline{n}_{\rm o}}\right)^{\gamma_{\rm A}-1}(1-2Y)^2, \\
{}&\rho_{{\rm o},\text{bulk}}(n_{\rm n,o})=\frac{m_{\rm N}^4}{\pi^2}\phi\left(\frac{p_{\rm Fo}}{m_{\rm N}}\right)+n_{\text{nuc}}E_{\rm S}\frac{\overline{n}_{\rm n,o}^2+f_{\rm S}\overline{n}_{\rm n,o}^{\gamma_{\rm S}+1}}{1+f_{\rm S}} \nonumber \\
&\qquad +n_{\text{nuc}}E_{\rm A}\overline{n}^2_{\rm n,o}\left(\frac{\overline{n}_{\rm n,o}+\overline{n}_{\rm o}}{1+\overline{n}_{\rm o}}\right)^{\gamma_{\rm A}-1}, \\
{}&\rho_{\rm e}(n_{\rm e}=Yn_{\rm c})=\frac{(3\pi^2n_{\rm e})^{4/3}}{4\pi^2}, \\
{}&E_{\text{Coul}}(n_{\rm c},n_{\rm i},A,Y)=\frac{16}{15}(\pi Yn_{\rm i}e)^2r_{\rm n}^5\left[1-\frac{3}{2}w^{1/3}+\frac{1}{2}w\right], \\
{}&E_{\text{surf}}(n_{\rm i},A,Y)=4\pi r_{\rm n}^2\sigma_{\rm s}(Y),
\end{align}
where $p_{\rm Fi}=(3\pi^2n_{\rm i})^{1/3}$, $n_{\rm n,o}=n_{\rm f}/(1-w)$, $p_{\rm Fo}=(3\pi^2n_{\rm n,o})^{1/3}$, $\overline{n}_{\rm i}=n_{\rm i}/n_{\text{nuc}}$, $\overline{n}_{\rm n,o}=n_{\rm n,o}/n_{\text{nuc}}$ and $\sigma_{\rm s}$ denotes the nuclear surface energy. We assume that the electrons are relativistic down to neutron drip and ignore the neutron-proton mass difference here.
Following~\citet{Ravenhall1983a} and~\citet{Lattimer1985}, we take the surface energy $\sigma_s$ to be a function only of the proton fraction $Y$ at zero temperature, and use the parametrization
\begin{equation}
\sigma_{\rm s}(Y)=\frac{\sigma_0(2^{\alpha+1}+\beta)}{Y^{-\alpha}+\beta+(1-Y)^{-\alpha}} .
\label{eq:SurfaceEnergy}
\end{equation}
We selected parameters $\sigma_0$, $\alpha$, $\beta$ which give an approximately constant proton number $Z\approx 40$ throughout the density range of the inner crust, as is found in more detailed calculations of the inner crust equation of state \citep{Douchin2000,Onsi2008,Pearson2012,Potekhin2013}:
\begin{align*}
&\sigma_0 = 1.4\text{ MeV/fm}^2, \\
&\alpha = 3, \\
&\beta = 24.
\end{align*}
$\alpha$ and $\beta$ are close to the corresponding parameters in~\citet{Ravenhall1983a}, but $\sigma_0$ is $\approx 50$\% larger than its corresponding parameter.
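A minimal sketch of this surface-energy parametrization, assuming the Ravenhall--Lattimer form in which $\beta$ enters the denominator so that $\sigma_{\rm s}(1/2)=\sigma_0$ exactly:

```python
def sigma_s(Y, sigma0=1.4, alpha=3.0, beta=24.0):
    """Nuclear surface energy (MeV fm^-2), Ravenhall--Lattimer form.

    With beta in the denominator, sigma_s(1/2) = sigma0 exactly, and
    sigma_s -> 0 as Y -> 0 or 1: nearly pure neutron matter has
    vanishing surface tension against the dripped neutron gas.
    """
    return sigma0*(2.0**(alpha + 1) + beta) / (Y**-alpha + beta + (1.0 - Y)**-alpha)
```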
For a general change of state, the change in the energy density is
\begin{align}
d\rho={}&\left.\frac{\partial\rho}{\partial A}\right|_{n_{\rm f},n_{\rm c},n_{\rm i},Y}{\rm d}A+\left.\frac{\partial\rho}{\partial Y}\right|_{n_{\rm f},n_{\rm c},n_{\rm i},A}{\rm d}Y \nonumber \\
{}&+\left.\frac{\partial\rho}{\partial n_{\rm i}}\right|_{n_{\rm f},n_{\rm c},A,Y}{\rm d}n_{\rm i}+\left.\frac{\partial\rho}{\partial n_{\rm c}}\right|_{n_{\rm f},n_{\rm i},A,Y}{\rm d}n_{\rm c} \nonumber \\
{}&+\left.\frac{\partial\rho}{\partial n_{\rm f}}\right|_{n_{\rm c},n_{\rm i},A,Y}{\rm d}n_{\rm f}.
\end{align}
At fixed $n_{\rm b}$, ${\rm d}n_{\rm c}+{\rm d}n_{\rm f}=0$, so this becomes
\begin{align}
&d\rho=\left.\frac{\partial\rho}{\partial A}\right|_{n_{\rm f},n_{\rm c},n_{\rm i},Y}{\rm d}A+\left.\frac{\partial\rho}{\partial Y}\right|_{n_{\rm f},n_{\rm c},n_{\rm i},A}{\rm d}Y \nonumber \\
&+\left.\frac{\partial\rho}{\partial n_{\rm i}}\right|_{n_{\rm f},n_{\rm c},A,Y}{\rm d}n_{\rm i}+\left(\left.\frac{\partial\rho}{\partial n_{\rm c}}\right|_{n_{\rm f},n_{\rm i},A,Y}-\left.\frac{\partial\rho}{\partial n_{\rm f}}\right|_{n_{\rm c},n_{\rm i},A,Y}\right){\rm d}n_{\rm c}.
\end{align}
The ``nuclear virial theorem'' \citep{Haensel2001b} and pressure balance correspond to the conditions
\begin{equation}
\left.\frac{\partial\rho}{\partial A}\right|_{n_{\rm f},n_{\rm c},n_{\rm i},Y}=
\left.\frac{\partial\rho}{\partial n_{\rm i}}\right|_{n_{\rm f},n_{\rm c},A,Y} = 0, \label{eq:NVTandPB}
\end{equation}
respectively, while the condition that there is no energy associated with exchanging neutrons between the nuclei and the external free neutron gas (henceforth the ``exchange condition'') is
\begin{equation}
\left.\frac{\partial\rho}{\partial n_{\rm c}}\right|_{n_{\rm f},n_{\rm i},A,Y}-\left.\frac{\partial\rho}{\partial n_{\rm f}}\right|_{n_{\rm c},n_{\rm i},A,Y}-\frac{Y}{n_{\rm c}}\left.\frac{\partial\rho}{\partial Y}\right|_{n_{\rm f},n_{\rm c},n_{\rm i},A}=0 \label{eq:Exchange1}
\end{equation}
since the proton density $Yn_{\rm c}$ is unchanged by the exchange of neutrons. Beta equilibrium is simply
\begin{equation}
\left.\frac{\partial\rho}{\partial Y}\right|_{n_{\rm f},n_{\rm c},n_{\rm i},A}=0, \label{eq:BetaEq}
\end{equation}
so Eq.~(\ref{eq:Exchange1}) becomes
\begin{equation}
\left.\frac{\partial\rho}{\partial n_{\rm c}}\right|_{n_{\rm f},n_{\rm i},A,Y}-\left.\frac{\partial\rho}{\partial n_{\rm f}}\right|_{n_{\rm c},n_{\rm i},A,Y}=0. \label{eq:Exchange2}
\end{equation}
Imposing the four conditions Eqs.~(\ref{eq:NVTandPB}), (\ref{eq:BetaEq}) and (\ref{eq:Exchange2}), we can determine the values of $n_{\rm f}$, $n_{\rm c}$, $n_{\rm i}$, $A$ and $Y$ at each $n_{\rm b}$ and thus compute at each $n_{\rm b}$ the energy density $\rho$ and pressure $P$, which is given by
\begin{equation}
P = P_{\rm e}+P_{\text{Coul}}+P_{{\rm o},\text{bulk}}=\frac{1}{3}\rho_{\rm e}+\frac{n_{\rm c}^2}{A}\frac{\partial E_{\text{Coul}}}{\partial n_{\rm c}}+P_{{\rm o},\text{bulk}}.
\label{eq:CrustPressure}
\end{equation}
We will also require the chemical potentials for each fluid $\mu_{\rm f}$ and $\mu_{\rm c}$, given by
\begin{align}
\mu_{\rm f}=\left.\frac{\partial\rho}{\partial n_{\rm f}}\right|_{n_{\rm c},n_{\rm i},A,Y}={}&(1-w)\frac{\partial\rho_{{\rm o},\text{bulk}}}{\partial n_{\rm n,o}}\frac{\partial n_{\rm n,o}}{\partial n_{\rm f}}=\mu_{\rm n,o}, \label{eq:MuFUnsimplified} \\
\mu_{\rm c}=\left.\frac{\partial\rho}{\partial n_{\rm c}}\right|_{n_{\rm f},n_{\rm i},A,Y}={}&Y\mu_{\rm e}+\frac{P_{{\rm o},\text{bulk}}+\rho_{{\rm i},\text{bulk}}}{n_{\rm i}} \nonumber \\
{}&+\frac{16}{15}(\pi Yn_{\rm i}e)^2\frac{Y}{A}r_{\rm n}^5\left[3-5w^{1/3}+2w\right]. \label{eq:MuCUnsimplified}
\end{align}
Using the four equilibrium conditions, Eqs.~(\ref{eq:MuFUnsimplified}--\ref{eq:MuCUnsimplified}) can be used to show that $\mu_{\rm c}=\mu_{\rm f}$ in equilibrium.
For this inner crust equation of state, neutron drip occurs between $2.7$ and $2.8\times 10^{11}$ g/cm$^3$, or $n_{\rm b}=(0.00103$--$0.00106)\,n_{\text{nuc}}$, depending on the parametrization of the nuclear physics. The core and crust equations of state must be joined when the pressure and chemical potentials of each are equal. This occurs at different densities for each EOS parametrization described in Table~\ref{tab:EOSParameters}, with the pressure, chemical potential and densities in the core and crust at the transition for each parametrization listed in Table~\ref{tab:CCTransition}.
\begin{table}
\caption{Pressure $P$, chemical potential $\mu$ and baryon number density $n_{\rm b}$ at the core ($+$) and crust ($-$) sides of the crust-core transition for each EOS parametrization employed in this paper. The size of the density jump as a percentage of the baryon number density at the transition is also listed.}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{} & \textbf{PC1} & \textbf{PC2} & \textbf{PC3} & \textbf{PC4} \\
\hline
$P$ (MeV/fm$^3$) & 0.215 & 0.225 & 0.239 & 0.249 \\
\hline
$\mu/m_{\rm N}$ & 1.0108 & 1.0112 & 1.0117 & 1.0121 \\
\hline
$n_{\rm b}^+/n_{\text{nuc}}$ & 0.399 & 0.413 & 0.437 & 0.458 \\
\hline
$n_{\rm b}^-/n_{\text{nuc}}$ & 0.394 & 0.407 & 0.431 & 0.451 \\
\hline
$\Delta n_{\rm b}/n_{\rm b}$ (\%) & 1.2 & 1.4 & 1.3 & 1.5 \\
\hline
\end{tabular}
\label{tab:CCTransition}
\end{table}
The energy density and pressure for our EOS in the crust (Figure~\ref{fig:CrustEOS}, top) are within 10--20\% of those found in more detailed calculations such as~\citet{Pearson2012} and~\citet{Potekhin2013}. The bottom panel shows $A$, $Z$ and $n_{\rm c}/n_{\rm b}$ as functions of $n_{\rm b}/n_{\text{nuc}}$ in the crust. The values of $Z$ closely match those of~\citet{Ravenhall1972}, including the dip downward just before the transition to the core. Our values of $n_{\rm c}/n_{\rm b}$ are in agreement with~\citet{Kobyakov2016}; our values of $A$ are typically greater than theirs by a factor of $1.5$, though the difference increases as the crust-core transition is approached, while our values are typically a factor of $4$ lower than those of \citet{Pearson2012}.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{CrustEOSCombined.pdf}
\caption{Top: Mass-energy density $\rho$ and pressure $P$ as functions of $n_{\rm b}/n_{\text{nuc}}$ across the density range of the inner crust for the RW EOS with PC1 parametrization. Bottom: Nucleon number $A$, proton number $Z$ and the ratio $0<n_{\rm c}/n_{\rm b}<1$ as functions of $n_{\rm b}/n_{\text{nuc}}$ across the density range of the inner crust for the same EOS. The maximum density in the crust before the transition to the core is $n_{\rm b}/n_{\text{nuc}}=0.394$ for the PC1 parametrization.}
\label{fig:CrustEOS}
\end{figure}
Figure~\ref{fig:MvsR} compares the mass-radius relation for two different parametrizations of the two-part EOS described here (RW) to a few other representative neutron star equations of state. The RW EOS uses the BPS EOS~\citep{Baym1971a} for densities below neutron drip. The radius, radius at neutron drip and central density for each EOS parametrization and stellar mass used in the rest of this paper are described in Table~\ref{tab:StellarModelProperties}.
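For readers who want to reproduce a mass-radius point, the following Python sketch integrates the TOV equations for a simple $\Gamma=2$ polytrope in geometric units ($G=c=M_{\odot}=1$). This is a standard numerical test case, not the RW EOS of this paper:

```python
import math

def tov_polytrope(rho_c, K=100.0, gamma=2.0, dr=5e-4):
    """Integrate the TOV equations for P = K*rho**gamma in geometric
    units G = c = M_sun = 1 (multiply lengths by 1.4767 to get km).
    rho is rest-mass density; energy density e = rho + P/(gamma - 1).
    Returns the stellar radius R and gravitational mass M."""
    r = dr
    P = K*rho_c**gamma
    e = rho_c + P/(gamma - 1.0)
    m = 4.0/3.0*math.pi*r**3*e          # mass enclosed by the first shell
    while P > 1e-12:
        rho = (P/K)**(1.0/gamma)
        e = rho + P/(gamma - 1.0)
        dPdr = -(e + P)*(m + 4.0*math.pi*r**3*P)/(r*(r - 2.0*m))
        dmdr = 4.0*math.pi*r**2*e
        P += dPdr*dr                    # forward Euler; use RK4 in practice
        m += dmdr*dr
        r += dr
    return r, m

# rho_c = 1.28e-3 is a standard test case: M ~ 1.4 M_sun, R ~ 9.6 (~14 km)
```

A production mass-radius curve like Figure~\ref{fig:MvsR} would repeat this integration over a range of central densities with a tabulated EOS replacing the polytropic closure.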
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{MvsR}
\caption{Neutron star mass-radius plot for the EOS described in this paper with two parametrizations as given by Table~\ref{tab:EOSParameters} (RW PC1 and RW PC4) and three equations of state of increasing stiffness from both~\citet{Hebeler2013} (denoted H1, H2, H3) and~\citet{Potekhin2013} (denoted BSk19, BSk20, BSk21).}
\label{fig:MvsR}
\end{figure}
\begin{table}
\caption{List of radius $R$, radius at neutron drip $R_{\text{ND}}$, and central density $n_{b,\text{cntr}}$ for each stellar mass and parametrization choice employed in the calculation of compressional mode frequencies in this paper.}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{PC1} & $\mathbf{1.2M_{\odot}}$ & $\mathbf{1.4M_{\odot}}$ & $\mathbf{1.7M_{\odot}}$ & $\mathbf{2M_{\odot}}$ \\
\hline
$R$ (km) & 12.24 & 12.11 & 11.75 & 10.54 \\
\hline
$R_{\text{ND}}$ (km) & 11.72 & 11.70 & 11.47 & 10.39 \\
\hline
$n_{\rm b,\text{cntr}}/n_{\text{nuc}}$ & 2.73 & 3.17 & 4.07 & 6.67 \\
\hline
\textbf{PC2} & $\mathbf{1.2M_{\odot}}$ & $\mathbf{1.4M_{\odot}}$ & $\mathbf{1.7M_{\odot}}$ & $\mathbf{2M_{\odot}}$ \\
\hline
$R$ (km) & 12.31 & 12.21 & 11.90 & 11.06 \\
\hline
$R_{\text{ND}}$ (km) & 11.78 & 11.79 & 11.62 & 10.89 \\
\hline
$n_{\rm b,\text{cntr}}/n_{\text{nuc}}$ & 2.66 & 3.05 & 3.86 & 5.61 \\
\hline
\textbf{PC3} & $\mathbf{1.2M_{\odot}}$ & $\mathbf{1.4M_{\odot}}$ & $\mathbf{1.7M_{\odot}}$ & $\mathbf{2M_{\odot}}$ \\
\hline
$R$ (km) & 12.48 & 12.43 & 12.23 & 11.71 \\
\hline
$R_{\text{ND}}$ (km) & 11.94 & 11.99 & 11.92 & 11.50 \\
\hline
$n_{\rm b,\text{cntr}}/n_{\text{nuc}}$ & 2.49 & 2.82 & 3.46 & 4.55 \\
\hline
\textbf{PC4} & $\mathbf{1.2M_{\odot}}$ & $\mathbf{1.4M_{\odot}}$ & $\mathbf{1.7M_{\odot}}$ & $\mathbf{2M_{\odot}}$ \\
\hline
$R$ (km) & 12.63 & 12.62 & 12.50 & 12.14 \\
\hline
$R_{\text{ND}}$ (km) & 12.07 & 12.17 & 12.17 & 11.92 \\
\hline
$n_{\rm b,\text{cntr}}/n_{\text{nuc}}$ & 2.35 & 2.64 & 3.16 & 3.96 \\
\hline
\end{tabular}
\label{tab:StellarModelProperties}
\end{table}
\section{Fluid dynamics}
\label{sec:FluidDynamics}
\subsection{Two-fluid formalism in the core}
We now derive the equations of motion for the perturbations of a two-superfluid neutron star that we use to compute its normal modes. We first consider the core; the same model is generalized to the crust in Sections~\ref{subsec:CrustEoM} and~\ref{subsec:SFCrustEoM}. We work at zero temperature, so there are no normal fluid neutron or proton components, and include general relativity, but work in the Cowling approximation and so ignore perturbations of the metric.
In a core composed of superfluid neutrons and protons and normal fluid electrons and muons, the leptons will move along with the protons since the plasma frequency $\sim10^{22}$ s$^{-1}$ is much greater than the frequencies of the compressional modes. We thus have two independently moving fluids: a neutron superfluid and a charged fluid. This differs from the commonly chosen separation of the fluid into normal and superfluid components, though the formulation used here has a number of advantages which illuminate the underlying physics. First, it makes clear the role of the leptonic buoyancy, which exists only in the charged fluid. It also reveals the significance of thermodynamic coupling and entrainment between the two fluids, with the equations describing the motion of the fluids becoming completely uncoupled if these quantities are zero, as is demonstrated at the end of this section. At densities above $2$--$3n_{\text{nuc}}$, the $s$-wave pairing energy gap for protons may go to zero~\citep{Zhou2004,Baldo2007}, meaning that the proton fluid can be a normal fluid in the inner core, but the equations of motion for the charged fluid will remain unchanged in this case, and the charged fluid and neutrons would still behave as two separately-moving fluids as long as the neutrons remain superfluid. Note that both methods should be equivalent, and the normal fluid and superfluid displacement modes can be reconstructed from taking the appropriate superposition of the neutron and charged fluid modes.
We assume that neutrons remain superfluid throughout the core; calculations summarized in Fig. 2 of~\citet{Gezerlis2014} support the idea that the neutron gap does not vanish at any density below at least $4.2n_{\text{nuc}}$, which includes all neutron stars less massive than about $1.7M_{\odot}$ for our adopted equation of state. Moreover, these calculations suggest that neutron superfluidity would be maintained out to the crust-core boundary for core temperatures below $\simeq 3\times 10^8$K; the model used in KG14 is similar. If neutrons become normal somewhere inside the core, their coupling to electrons~\citep{Bertoni2015} suffices to merge the two fluids into a single fluid, irrespective of whether the protons are superfluid. Thus, if neutrons were entirely normal throughout the core, then $g$-modes would arise from a combination of the leptonic buoyant force associated with the gradient of $f$ and the buoyant force associated with the gradient of $Y$~\citep{Reisenegger1992}, but overall their frequencies would be lower than when neutrons are superfluid; see Section~\ref{sec:BVFreq}, especially Eqs.~(\ref{eq:NFBruntVaisalaFrequency}--\ref{eq:YGradientBruntVaisalaFrequency}). However, if the neutrons are only normal deep inside the core of the neutron star, then the $g$-modes arising from leptonic buoyancy, which is only substantial near the outer boundary of the core, would be largely unaffected. Thus, the $g$-mode frequency spectrum is, in principle, a probe of neutron superfluidity in the cores of neutron stars.
We specify the neutron superfluid four-velocity $u^{\mu}_{\rm n}$ and number density $n_{\rm n}$, and charged fluid four-velocity $u^{\mu}_{\rm q}$ and number density $n_{\rm q}=n_{\rm p}=n_{\rm e}+n_{\upmu}$. We can rewrite the energy density $\rho$ as a function of $n_{\rm n}$, $n_{\rm q}$ and the electron fraction $f=n_{\rm e}/n_{\rm q}$;
\begin{equation}
\rho(n_{\rm n},n_{\rm q},f)=\rho_{\text{nuc}}(n_{\rm n},n_{\rm q})+\rho_{\rm e}(n_{\rm q}f)+\rho_{\upmu}(n_{\rm q}(1-f)),
\end{equation}
where $\rho_{\text{nuc}}$ includes both the kinetic and interaction contributions relating to the nucleons. This gives two chemical potentials
\begin{align}
\mu_{\rm n}= & \frac{\partial\rho}{\partial n_{\rm n}}=\frac{\partial\rho_{\text{nuc}}}{\partial n_{\rm n}}, \\
\mu_{\rm q}= & \frac{\partial\rho}{\partial n_{\rm q}}=\frac{\partial\rho_{\text{nuc}}}{\partial n_{\rm q}}+\frac{\partial\rho_{\rm e}}{\partial n_{\rm e}}\frac{\partial n_{\rm e}}{\partial n_{\rm q}}+\frac{\partial\rho_{\upmu}}{\partial n_{\upmu}}\frac{\partial n_{\upmu}}{\partial n_{\rm q}},
\label{eq:ChargedChemicalPotential}
\end{align}
which are equal in beta equilibrium.
The motion of the two fluids is described by the relativistic Euler equations
~\citep{Carter1998,Andersson2007}
\begin{align}
0={}&u_{\rm n}^{\rho}\nabla_{\rho}(\mu_{\rm n}u^{\rm n}_{\sigma})+\nabla_{\sigma}\mu_{\rm n}-2u_{\rm n}^{\rho}\nabla_{[\rho}(\mu_{\rm n}\epsilon_{\rm n}W_{\sigma]}), \label{eq:EulerN} \\
0={}&u_{\rm q}^{\rho}\nabla_{\rho}(\mu_{\rm q}u_{\sigma}^{\rm q})+\nabla_{\sigma}\mu_{\rm q}+(\mu_{\upmu}-\mu_{\rm e})\nabla_{\sigma}f+2u_{\rm q}^{\rho}\nabla_{[\rho}(\mu_{\rm n}\epsilon_{\rm p}W_{\sigma]}), \label{eq:EulerQ}
\end{align}
and the continuity equations
\begin{align}
\nabla_{\rho}(n_{\rm n}u^{\rho}_{\rm n})=0, \\
\nabla_{\rho}(n_{\rm q}u^{\rho}_{\rm q})=0,
\end{align}
where $W_{\sigma}=u^{\rm n}_{\sigma}-u^{\rm q}_{\sigma}$, and $\epsilon_{\rm n}$ and $\epsilon_{\rm p}$ are defined to parameterize the entrainment; they are related by
\begin{equation}
n_{\rm q}\epsilon_{\rm p} = n_{\rm n}\epsilon_{\rm n}.
\end{equation}
The entrainment parameters $\epsilon_{\rm p}$ and $\epsilon_{\rm n}$ here are dimensionless and are the same as those of~\citet{Prix2002}. We vary the parameter $\epsilon_{\rm p}$ to adjust the strength of the entrainment, noting that the effective mass of the proton $m_{\rm p}^*$ is often related to $\epsilon_{\rm p}$ via
\begin{equation}
\epsilon_{\rm p}=1-\frac{m_{\rm p}^*}{m_{\rm N}}.
\end{equation}
The term in Eq.~(\ref{eq:EulerQ}) $\propto \nabla_{\sigma} f$ is responsible for the leptonic buoyancy; it vanishes in the outer regions of the core where there are no muons.
We now calculate the equations of motion for perturbations to a spherically-symmetric, static background in chemical equilibrium. The metric in Schwarzschild coordinates is
\begin{equation}
{\rm d}s^2=-{\rm e}^{\nu(r)}{\rm d}t^2+{\rm e}^{\lambda(r)}{\rm d}r^2+r^2({\rm d}\theta^2+\sin^2\theta {\rm d}\phi^2),
\end{equation}
where ${\rm e}^{\lambda(r)}=(1-2M(r)/r)^{-1}$, $M(r)$ is the mass enclosed within radius $r$, and ${\rm e}^{\nu(r)}$ is determined using the gravitational redshift formula $\mu(r)\sqrt{-g_{00}}=\text{constant}$, so
\begin{equation}
{\rm e}^{\nu(r)}=(-g_{00})=\left(\frac{m_{N,\text{Fe}-56}}{\mu_0(r)}\right)^2\left(1-\frac{2M}{R}\right),
\end{equation}
where $R$ is the coordinate radius of the star, $M=M(R)$ its total mass computed using the equation of state and the TOV equation and $m_{N,\text{Fe}-56}$ is the mass per baryon of an iron-56 nucleus.
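For numerical work, the redshift factor follows directly from a chemical potential profile; a minimal Python sketch (the per-baryon mass of iron-56, $\simeq930.4$ MeV, and geometric units with $G=c=1$ for $2M/R$ are assumed inputs):

```python
def e_nu(mu0, M, R, m_fe56=930.412):
    """Redshift factor e^{nu} = (m_{N,Fe-56}/mu0)^2 (1 - 2M/R), from the
    condition mu(r) sqrt(-g00) = const.  mu0 and m_fe56 in MeV per baryon;
    M and R in geometric units so that 2M/R is dimensionless."""
    return (m_fe56 / mu0) ** 2 * (1.0 - 2.0 * M / R)
```

At the surface, where $\mu_0$ equals the per-baryon mass of iron-56, this reduces to the Schwarzschild factor $1-2M/R$; deeper in the star, where $\mu_0$ is larger, ${\rm e}^{\nu}$ is smaller.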
Since the velocities under consideration are much less than the speed of light, we can ignore relativistic gamma factors and write the fluid four-velocities as
\begin{equation}
u^{\mu}_a=\frac{{\rm d}x^{\mu}_a}{{\rm d}\tau}\approx {\rm e}^{-\nu/2}\frac{{\rm d}x^{\mu}_a}{{\rm d}t}.
\label{eq:FourVelocityDefinition}
\end{equation}
For a stationary background, $u_a^{\mu}={\rm e}^{-\nu/2}(1,0,0,0)$. The velocity of the perturbation $\delta u_a^{\mu}$ to first order in perturbation theory and in the Cowling approximation is thus given by \citep{Andersson2007}
\begin{align}
\delta u_a^{\mu} = {}&(\delta^{\mu}_{\nu}+u_a^{\mu}u^a_{\nu})(u_a^{\sigma}\nabla_{\sigma}\overline{\xi}_a^{\nu}-\overline{\xi}_a^{\sigma}\nabla_{\sigma}u_a^{\nu}) \nonumber \\
={}&(\delta^{\mu}_{\nu}-\delta^{\mu}_0\delta^0_{\nu})\left({\rm e}^{-\nu/2}\nabla_{0}\overline{\xi}_a^{\nu}-\overline{\xi}_a^{\sigma}\nabla_{\sigma}{\rm e}^{-\nu/2}\delta^{\nu}_0\right)
\end{align}
for Lagrangian displacement fields $\overline{\xi}_a^{\mu}$ defined in a coordinate basis. We set $\overline{\xi}_a^0=0$ using the gauge freedom within the definition of $\overline{\xi}^{\mu}_a$. Taking the Eulerian perturbation of Eq.~(\ref{eq:EulerQ}) and considering its spatial components $\sigma=i=1,2,3$, we obtain to first order in perturbation theory
\begin{align}
0={}&{\rm e}^{-\nu}\partial^2_t\overline{\xi}_{\rm q}^i+{\rm e}^{-\nu}\epsilon_{\rm p}\partial^2_t(\overline{\xi}^i_{\rm n}-\overline{\xi}_{\rm q}^i)+g^{ii}\partial_i\left(\frac{\delta\mu_{\rm q}}{\mu_0}\right) \nonumber \\
{}&+\frac{(\delta\mu_{\upmu}-\delta\mu_{\rm e})}{\mu_0}g^{ii}\partial_i f,
\label{eq:PerturbedEulerQ} \\
0={}&{\rm e}^{-\nu}\partial^2_t\overline{\xi}_{\rm n}^i+{\rm e}^{-\nu}\epsilon_{\rm n}\partial^2_t(\overline{\xi}^i_{\rm q}-\overline{\xi}_{\rm n}^i)+g^{ii}\partial_i\left(\frac{\delta\mu_{\rm n}}{\mu_0}\right),
\label{eq:PerturbedEulerN}
\end{align}
where $\mu_0$ is the common background equilibrium chemical potential. The perturbed continuity equations take the same form for both fluids:
\begin{align}
\delta n_a = -n_a\Theta_a-\overline{\xi}^r_a\frac{{\rm d}n_a}{{\rm d}r},
\end{align}
where we have defined
\begin{align}
\Theta_a = \frac{1}{\sqrt{-g}}\frac{\partial(\sqrt{-g}\overline{\xi}_a^i)}{\partial x^i}.
\label{eq:CovariantDivergence}
\end{align}
Since we consider nonrotating stars, spherical symmetry is preserved and the normal modes are spheroidal/poloidal. The displacement field for such a mode in the orthonormal tetrad is
\begin{equation}
\boldsymbol{\xi}_a={\rm e}^{i\omega t}\left[\xi^r_a(r)Y_{lm}(\theta,\phi)\hat{\mathbf{e}}_r+\xi^{\perp}_a(r)r\nabla Y_{lm}(\theta,\phi)\right] \quad a={\rm n,q},
\label{eq:DisplacementField}
\end{equation}
where $Y_{lm}(\theta,\phi)$ are the usual spherical harmonics and the basis vectors are orthonormal; $\omega$ is the angular frequency of the oscillation as observed far from the star. In the coordinate basis, the components of $\overline{\xi}^i_a$ are
\begin{align}
\overline{\xi}^r_a ={}& {\rm e}^{-\lambda/2}\xi^r_a(r)Y_{lm}(\theta,\phi){\rm e}^{i\omega t}, \label{eq:CoordinateBasisXir}\\
\overline{\xi}^{\theta}_a ={}& \xi^{\perp}_a(r)\frac{1}{r}\partial_{\theta}Y_{lm}(\theta,\phi){\rm e}^{i\omega t}, \label{eq:CoordinateBasisXitheta}\\
\overline{\xi}^{\phi}_a ={}& \xi^{\perp}_a(r)\frac{1}{r\sin\theta}\partial_{\phi}Y_{lm}(\theta,\phi){\rm e}^{i\omega t}. \label{eq:CoordinateBasisXiphi}
\end{align}
To compute the buoyant term in Eq.~(\ref{eq:PerturbedEulerQ}), we use
\begin{equation}
\frac{\partial\rho}{\partial f}=\frac{\partial \rho_{\rm e}}{\partial n_{\rm e}}\frac{\partial n_{\rm e}}{\partial f}+\frac{\partial \rho_{\upmu}}{\partial n_{\upmu}}\frac{\partial n_{\upmu}}{\partial f}=n_{\rm q}(\mu_{\rm e}-\mu_{\upmu});
\end{equation}
then
\begin{align}
\delta\mu_{\upmu}-\delta\mu_{\rm e}= & -\delta\left(\frac{1}{n_{\rm q}}\frac{\partial\rho}{\partial f}\right) \nonumber \\
= & -\frac{1}{n_{\rm q}}\left(\frac{\partial^2\rho}{\partial f^2}\delta f+\frac{\partial\mu_{\rm q}}{\partial f}\delta n_{\rm q} \right) \nonumber \\
= & \mu_{\rm qf}\Theta_{\rm q}, \label{eq:EulerianPertLeptonChemPotDiff}
\end{align}
where we have defined the thermodynamic derivatives
\begin{equation}
\mu_{ab}\equiv\frac{\partial\mu_a}{\partial n_{\rm b}} \quad a,b\in\{{\rm n,q}\}; \quad\mu_{\rm qf}\equiv\frac{\partial\mu_{\rm q}}{\partial f},\label{eq:MuAB}
\end{equation}
where $\mu_{\rm nq}=\mu_{\rm qn}$; explicitly, in terms of the total baryon number density $n_{\rm b}=n_{\rm n}+n_{\rm q}$ and the proton fraction $Y=n_{\rm q}/n_{\rm b}$,
\begin{align}
\mu_{\rm n} &=\frac{\partial\rho}{\partial n_{\rm b}}-\frac{Y}{n_{\rm b}}\frac{\partial\rho}{\partial Y}, \\
\mu_{\rm q} &=\frac{\partial\rho}{\partial n_{\rm b}}+\frac{(1-Y)}{n_{\rm b}}\frac{\partial\rho}{\partial Y}, \\
\mu_{\rm nn} &=\frac{\partial^2\rho}{\partial n_{\rm b}^2}-\frac{2Y}{n_{\rm b}}\frac{\partial^2\rho}{\partial n_{\rm b}\partial Y}+\frac{Y^2}{n_{\rm b}^2}\frac{\partial^2\rho}{\partial Y^2}, \label{eq:Munn} \\
\mu_{\rm qq} &=\frac{\partial^2\rho}{\partial n_{\rm b}^2}+\frac{2(1-Y)}{n_{\rm b}}\frac{\partial^2\rho}{\partial n_{\rm b}\partial Y}+\frac{(1-Y)^2}{n_{\rm b}^2}\frac{\partial^2\rho}{\partial Y^2}, \label{eq:Muqq}\\
\mu_{\rm nq} &=\frac{\partial^2\rho}{\partial n_{\rm b}^2}+\frac{(1-2Y)}{n_{\rm b}}\frac{\partial^2\rho}{\partial n_{\rm b}\partial Y}-\frac{Y(1-Y)}{n_{\rm b}^2}\frac{\partial^2\rho}{\partial Y^2}. \label{eq:Munq}
\end{align}
Henceforth, we define
\begin{equation}
\Pi_a=\Pi_a(r)Y_{lm}(\theta,\phi)\equiv\frac{\delta\mu_a}{\mu_0},
\label{eq:PiDefinition}
\end{equation}
in terms of which the Euler equations are
\begin{align}
&\omega^2{\rm e}^{-\nu}(1-\epsilon_{\rm n})\xi^r_{\rm n}+\omega^2{\rm e}^{-\nu}\epsilon_{\rm n}\xi^r_{\rm q}={\rm e}^{-\lambda/2}\frac{{\rm d}\Pi_{\rm n}}{{\rm d}r}, \label{eq:XiRN}\\
&\omega^2{\rm e}^{-\nu}r(1-\epsilon_{\rm n})\xi^{\perp}_{\rm n}+\omega^2{\rm e}^{-\nu}r\epsilon_{\rm n}\xi^{\perp}_{\rm q}=\Pi_{\rm n}, \label{eq:XiPerpN} \\
&\omega^2{\rm e}^{-\nu}(1-\epsilon_{\rm p})\xi^r_{\rm q}+\omega^2{\rm e}^{-\nu}\epsilon_{\rm p}\xi^r_{\rm n}={\rm e}^{-\lambda/2}\frac{{\rm d}\Pi_{\rm q}}{{\rm d}r}+{\rm e}^{-\lambda/2}\frac{\mu_{\rm qf}}{\mu_0}\frac{{\rm d}f}{{\rm d}r}\Theta_{\rm q}, \label{eq:XiRQ} \\
&\omega^2{\rm e}^{-\nu}r(1-\epsilon_{\rm p})\xi^{\perp}_{\rm q}+\omega^2{\rm e}^{-\nu}r\epsilon_{\rm p}\xi^{\perp}_{\rm n}=\Pi_{\rm q}. \label{eq:XiPerpQ}
\end{align}
We also recast the continuity equations in terms of $\xi_a^r$ and $\Pi_a$; using
\begin{equation}
\Theta_a = \left(\frac{{\rm e}^{-\lambda/2}}{r^2}\frac{d(r^2\xi^r_a)}{dr}-\frac{l(l+1)}{r}\xi^{\perp}_a\right)Y_{lm},
\end{equation}
we obtain
\begin{align}
\delta n_{\rm n}(r) ={}& -n_{\rm n}\left[\frac{{\rm e}^{-\lambda/2}}{r^2}\frac{{\rm d}(r^2\xi^r_{\rm n})}{{\rm d}r}-{\rm e}^{\nu}\frac{k^2_{\perp}}{\omega^2}\left\{(1+x)\Pi_{\rm n}-x\Pi_{\rm q}\right\}\right] \nonumber \\
&-{\rm e}^{-\lambda/2}\xi_{\rm n}^r\frac{{\rm d}n_{\rm n}}{{\rm d}r}, \label{eq:EulerDensPertRBVn} \\
\delta n_{\rm q}(r) ={}& -n_{\rm q}\left[\frac{{\rm e}^{-\lambda/2}}{r^2}\frac{{\rm d}(r^2\xi^r_{\rm q})}{{\rm d}r}-{\rm e}^{\nu}\frac{k^2_{\perp}}{\omega^2}\left\{(1+y)\Pi_{\rm q}-y\Pi_{\rm n}\right\}\right] \nonumber \\ & -{\rm e}^{-\lambda/2}\xi_{\rm q}^r\frac{{\rm d}n_{\rm q}}{{\rm d}r}, \label{eq:EulerDensPertRBVq}
\end{align}
where $k^2_{\perp}\equiv l(l+1)r^{-2}$ and
\begin{align}
x\equiv \frac{\epsilon_{\rm n}}{1-\epsilon_{\rm p}-\epsilon_{\rm n}}, \\
y\equiv \frac{\epsilon_{\rm p}}{1-\epsilon_{\rm p}-\epsilon_{\rm n}}.
\end{align}
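For a given proton entrainment parameter and background densities, $x$ and $y$ follow directly from these definitions; a minimal Python sketch (the example density values in the test are illustrative only):

```python
def entrainment_params(eps_p, n_n, n_q):
    """Return (x, y) with x = eps_n/(1 - eps_p - eps_n) and
    y = eps_p/(1 - eps_p - eps_n), where eps_n is fixed by the
    constraint n_q eps_p = n_n eps_n."""
    eps_n = n_q * eps_p / n_n
    denom = 1.0 - eps_p - eps_n
    return eps_n / denom, eps_p / denom
```

With zero entrainment ($\epsilon_{\rm p}=0$) both $x$ and $y$ vanish, and their ratio always satisfies $x/y=\epsilon_{\rm n}/\epsilon_{\rm p}=n_{\rm q}/n_{\rm n}$.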
Using
\begin{align}
\delta\mu_{\rm n} = {}& \mu_{\rm nn}\delta n_{\rm n}+\mu_{\rm nq}\delta n_{\rm q}, \label{eq:EulerianPertMun} \\
\delta\mu_{\rm q}= {}& \mu_{\rm qq}\delta n_{\rm q}+\mu_{\rm nq}\delta n_{\rm n}+\mu_{\rm qf}\delta f ,\label{eq:EulerianPertMuq}
\end{align}
we find
\begin{align}
\delta n_{\rm n}(r)= & \frac{\mu_{\rm qq}\mu_0\Pi_{\rm n}-\mu_{\rm nq}\mu_0\Pi_{\rm q}-{\rm e}^{-\lambda/2}\mu_{\rm nq}\mu_{\rm qf}\xi^r_{\rm q}({\rm d}f/{\rm d}r)}{D}, \label{eq:DensityNEulerianPert} \\
\delta n_{\rm q}(r)= & \frac{\mu_{\rm nn}\mu_0\Pi_{\rm q}-\mu_{\rm nq}\mu_0\Pi_{\rm n}+{\rm e}^{-\lambda/2}\mu_{\rm nn}\mu_{\rm qf}\xi^r_{\rm q}({\rm d}f/{\rm d}r)}{D}, \label{eq:DensityQEulerianPert}
\end{align}
where $D\equiv(\mu_{\rm nn}\mu_{\rm qq}-\mu_{\rm nq}^2)$; then Eqs.~(\ref{eq:EulerDensPertRBVn}--\ref{eq:EulerDensPertRBVq}) become
\begin{align}
&\frac{{\rm d}\xi^r_{\rm n}}{{\rm d}r}+\left[\frac{2}{r}+\frac{{\rm d}\ln n_{\rm n}}{{\rm d}r}\right]\xi^r_{\rm n}+\left[-\frac{k^2_{\perp}}{\omega^2}{\rm e}^{\nu+\lambda/2}(1+x)+\frac{\mu_0\mu_{\rm qq}}{n_{\rm n}D}{\rm e}^{\lambda/2}\right]\Pi_{\rm n} \nonumber
\\&=\left[-\frac{k^2_{\perp}}{\omega^2}{\rm e}^{\nu+\lambda/2}x+\frac{\mu_0\mu_{\rm nq}}{n_{\rm n}D}{\rm e}^{\lambda/2}\right]\Pi_{\rm q}+\frac{\mu_{\rm nq}\mu_{\rm qf}}{n_{\rm n}D}\frac{{\rm d}f}{{\rm d}r}\xi^r_{\rm q}, \label{eq:PerturbedContinuityN} \\
&\frac{{\rm d}\xi^r_{\rm q}}{{\rm d}r}+\left[\frac{2}{r}+\frac{{\rm d}\ln n_{\rm q}}{{\rm d}r}+\frac{\mu_{\rm nn}\mu_{\rm qf}}{n_{\rm q}D}\frac{{\rm d}f}{{\rm d}r}\right]\xi^r_{\rm q} \nonumber \\
&+\left[-\frac{k^2_{\perp}}{\omega^2}{\rm e}^{\nu+\lambda/2}(1+y)+\frac{\mu_0\mu_{\rm nn}}{n_{\rm q}D}{\rm e}^{\lambda/2}\right]\Pi_{\rm q} \nonumber
\\& =\left[-\frac{k^2_{\perp}}{\omega^2}{\rm e}^{\nu+\lambda/2}y+\frac{\mu_0\mu_{\rm nq}}{n_{\rm q}D}{\rm e}^{\lambda/2}\right]\Pi_{\rm n}. \label{eq:PerturbedContinuityQ}
\end{align}
Notice that, in the case of zero entrainment and zero thermodynamic coupling $\mu_{\rm nq}=0$, the two equations describing the neutron fluid motion, (\ref{eq:XiRN}) and~(\ref{eq:PerturbedContinuityN}), are completely uncoupled from those describing the charged fluid motion, (\ref{eq:XiRQ}) and~(\ref{eq:PerturbedContinuityQ}), leading to two sets of equations coupling ($\xi^r_{\rm n}$,$\Pi_{\rm n}$) and ($\xi^r_{\rm q}$,$\Pi_{\rm q}$), respectively.
\subsection{Brunt--V\"{a}is\"{a}l\"{a} frequency}
\label{sec:BVFreq}
To determine the Brunt--V\"{a}is\"{a}l\"{a} frequency, we use the radial components of the Euler equations to get
\begin{align}
\omega^2{\rm e}^{-\nu}\frac{1}{1+y}\xi^r_{\rm q}={}& {\rm e}^{-\lambda/2}\frac{{\rm d}\Pi_{\rm q}}{{\rm d}r}-{\rm e}^{-\lambda/2}\frac{y}{1+y}\frac{{\rm d}\Pi_{\rm n}}{{\rm d}r} \nonumber \\
& -{\rm e}^{-\lambda/2}\frac{\mu_{\rm qf}(\mu_{\rm nn}\Pi_{\rm q}-\mu_{\rm nq}\Pi_{\rm n})}{n_{\rm q}D}\frac{{\rm d}f}{{\rm d}r} \nonumber \\
&-{\rm e}^{-\lambda}\xi^r_{\rm q}\frac{\mu_{\rm qf}}{\mu_0n_{\rm q}}\frac{{\rm d}f}{{\rm d}r}\left[\frac{{\rm d}n_{\rm q}}{{\rm d}r}+\frac{\mu_{\rm nn}\mu_{\rm qf}}{D}\frac{{\rm d}f}{{\rm d}r}\right];
\label{eq:ChargedFluidRadialComponent2}
\end{align}
Eqs.~(\ref{eq:EulerianPertMun}--\ref{eq:DensityQEulerianPert}) with $\delta\mu_a=({\rm d}\mu_a/{\rm d}r)\delta r$ and $\delta n_a=({\rm d}n_a/{\rm d}r)\delta r$ imply
\begin{equation}
\frac{{\rm d}n_{\rm q}}{{\rm d}r}+\frac{\mu_{\rm nn}\mu_{\rm qf}}{D}\frac{{\rm d}f}{{\rm d}r}=\frac{\mu_{\rm nn}-\mu_{\rm nq}}{D}\frac{{\rm d}\mu_0}{{\rm d}r},
\label{eq:dnqdRReplacement}
\end{equation}
hence
\begin{align}
\xi^r_{\rm q}&\left[\omega^2+{\rm e}^{\nu-\lambda}(1+y)\left(\frac{\mu_{\rm qf}}{\mu_0}\frac{{\rm d}\mu_0}{{\rm d}r}\right)\left(\frac{\mu_{\rm nn}-\mu_{\rm nq}}{n_{\rm q}D}\right)\frac{{\rm d}f}{{\rm d}r}\right] \nonumber \\
={}&{\rm e}^{\nu-\lambda/2}\left((1+y)\frac{{\rm d}\Pi_{\rm q}}{{\rm d}r}-y\frac{{\rm d}\Pi_{\rm n}}{{\rm d}r}\right. \nonumber \\ {}&\left.-(1+y)\frac{\mu_{\rm qf}(\mu_{\rm nn}\Pi_{\rm q}-\mu_{\rm nq}\Pi_{\rm n})}{n_{\rm q}D}\frac{{\rm d}f}{{\rm d}r}\right).
\end{align}
So the square of the Brunt--V\"{a}is\"{a}l\"{a} frequency is
\begin{equation}
N_{\rm q}^2(r)=-{\rm e}^{\nu-\lambda}(1+y)\left(\frac{\mu_{\rm qf}}{\mu_0}\frac{{\rm d}\mu_0}{{\rm d}r}\right)\left(\frac{\mu_{\rm nn}-\mu_{\rm nq}}{n_{\rm q}D}\right)\frac{{\rm d}f}{{\rm d}r}.
\label{eq:BruntVaisalaFrequency1}
\end{equation}
This can be rewritten in a manner which eliminates the dependence on derivatives of $f$. Using $\mu_{\rm e}=\mu_{\upmu}$ in the background, we can write
\begin{align}
(3\pi^2n_{\rm e})^{1/3}={}&\sqrt{(3\pi^2n_{\upmu})^{2/3}+m_{\upmu}^2} \nonumber \\
&\Rightarrow f^{2/3}-(1-f)^{2/3}=\left(\frac{m_{\upmu}^3}{3\pi^2n_{\rm q}}\right)^{2/3}
\label{eq:ElectronFractionRelation}
\end{align}
and
\begin{equation}
\frac{{\rm d}f}{{\rm d}r}=-\frac{f^{1/3}(1-f)^{1/3}}{n_{\rm q}^{5/3}[f^{1/3}+(1-f)^{1/3}]}\left(\frac{m_{\upmu}^2}{(3\pi^2)^{2/3}}\right)\frac{{\rm d}n_{\rm q}}{{\rm d}r}.
\label{eq:dfdR}
\end{equation}
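Equation~(\ref{eq:ElectronFractionRelation}) determines $f$ only implicitly; a minimal Python bisection sketch (muon mass in natural units $\hbar=c=1$, densities in fm$^{-3}$) illustrates the solution and the muon-threshold density:

```python
import math

M_MU = 105.658 / 197.327                 # muon mass in fm^-1 (hbar = c = 1)
N_THRESH = M_MU**3 / (3 * math.pi**2)    # charged density at muon onset (f = 1)

def electron_fraction(n_q, tol=1e-12):
    """Solve f^{2/3} - (1-f)^{2/3} = (m_mu^3/(3 pi^2 n_q))^{2/3} for
    f = n_e/n_q by bisection.  Below the muon threshold, all leptons are
    electrons and f = 1."""
    rhs = (M_MU**3 / (3 * math.pi**2 * n_q)) ** (2.0 / 3.0)
    if rhs >= 1.0:
        return 1.0
    lo, hi = 0.5, 1.0    # the left-hand side increases monotonically on (1/2, 1]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid ** (2.0 / 3.0) - (1.0 - mid) ** (2.0 / 3.0) < rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

As $n_{\rm q}$ grows far above the threshold, the right-hand side tends to zero and $f\to1/2$, i.e. electrons and muons become equally abundant.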
Differentiating Eq.~(\ref{eq:ChargedChemicalPotential}) with respect to $f$ (again using $\mu_{\rm e}=\mu_{\upmu}$ in the background) gives
\begin{align}
\mu_{\rm qf}={}&\frac{\partial}{\partial f}\left(\mu_{\rm p}+f\mu_{\rm e}(fn_{\rm q})+(1-f)\mu_{\upmu}((1-f)n_{\rm q})\right) \nonumber \\
={}&n_{\rm e}\frac{{\rm d}\mu_{\rm e}}{{\rm d}n_{\rm e}}-n_{\upmu}\frac{{\rm d}\mu_{\upmu}}{{\rm d}n_{\upmu}}=\frac{m_{\upmu}^2}{3(3\pi^2n_{\rm q}f)^{1/3}},
\label{eq:MuQF2}
\end{align}
we therefore have
\begin{align}
&\mu_{\rm qf}\frac{{\rm d}f}{{\rm d}r}=-\frac{m_{\upmu}}{n_{\rm q}}G(f)\frac{{\rm d}n_{\rm q}}{{\rm d}r}, \nonumber \\
&G(f)\equiv{}\frac{(1-f)^{1/3}}{3}\left(f^{1/3}-(1-f)^{1/3}\right)\left(f^{2/3}-(1-f)^{2/3}\right)^{1/2}.
\label{eq:G(f)}
\end{align}
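As a consistency check on this algebra, the identity $\mu_{\rm qf}\,{\rm d}f/{\rm d}n_{\rm q}=-m_{\upmu}G(f)/n_{\rm q}$ can be verified numerically by inverting Eq.~(\ref{eq:ElectronFractionRelation}); a minimal Python sketch in natural units ($\hbar=c=1$):

```python
import math

M_MU = 105.658 / 197.327   # muon mass in fm^-1

def G(f):
    """The dimensionless buoyancy function G(f)."""
    return ((1 - f) ** (1 / 3) / 3.0
            * (f ** (1 / 3) - (1 - f) ** (1 / 3))
            * math.sqrt(f ** (2 / 3) - (1 - f) ** (2 / 3)))

def n_q_of_f(f):
    """Charged-fluid density at which beta equilibrium gives electron
    fraction f (inverse of the threshold relation; valid for 1/2 < f < 1)."""
    return M_MU ** 3 / (3 * math.pi ** 2
                        * (f ** (2 / 3) - (1 - f) ** (2 / 3)) ** 1.5)

def mu_qf(f, n_q):
    """mu_qf = m_mu^2 / (3 (3 pi^2 n_q f)^{1/3})."""
    return M_MU ** 2 / (3 * (3 * math.pi ** 2 * n_q * f) ** (1 / 3))
```

A central finite-difference estimate of ${\rm d}f/{\rm d}n_{\rm q}$ reproduces $-m_{\upmu}G(f)/(n_{\rm q}\mu_{\rm qf})$ to high accuracy, and $G(f)$ vanishes at both $f=1/2$ and $f=1$, so the leptonic buoyancy switches off at the muon threshold.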
Inserting this into Eq.~(\ref{eq:dnqdRReplacement}) to eliminate $dn_{\rm q}/dr$, then using the resulting equation to eliminate $\mu_{\rm qf}df/dr$ from Eq.~(\ref{eq:BruntVaisalaFrequency1}) gives us
\begin{equation}
N_{\rm q}^2(r)={\rm e}^{\nu-\lambda}(1+y)\frac{m_{\upmu}G(f)(\mu_{\rm nn}-\mu_{\rm nq})^2({\rm d}\mu_0/{\rm d}r)^2}{\mu_0n_{\rm q}D[n_{\rm q}D-\mu_{\rm nn}m_{\upmu}G(f)]}.
\label{eq:BruntVaisalaFrequency2}
\end{equation}
$N_{\rm q}(r)$ is plotted in Figure~\ref{fig:BVFrequency} with zero entrainment ($y=0$), along with the Brunt--V\"{a}is\"{a}l\"{a} frequency for a normal fluid star $N_{\text{nf}}$, given by
\begin{equation}
N_{\text{nf}}=\sqrt{N^2_{Y}+YN_{\rm q}^2},
\label{eq:NFBruntVaisalaFrequency}
\end{equation}
where the lepton gradient contribution is reduced by a factor of $Y$ due to the increased inertia of moving both the protons and neutrons, and where~\citep{Reisenegger1992}
\begin{align}
N_{Y}^2(r)={}&-{\rm e}^{\nu-\lambda}\left(\frac{1}{\mu_0}\frac{{\rm d}\mu_0}{{\rm d}r}\right)\left(\frac{\mu_{{\rm b}Y}}{n_{\rm b}\mu_{\rm bb}}\frac{{\rm d}Y}{{\rm d}r}+\frac{Y\mu_{\rm qf}}{n_{\rm b}\mu_{\rm bb}}\frac{{\rm d}f}{{\rm d}r}\right), \label{eq:YGradientBruntVaisalaFrequency} \\
\mu_{\rm bb}={}&Y^2\mu_{\rm qq}+2Y(1-Y)\mu_{\rm nq}+(1-Y)^2\mu_{\rm nn} \nonumber , \\
\mu_{{\rm b}Y}={}&n_{\rm q}(\mu_{\rm qq}-\mu_{\rm nq})-n_{\rm n}(\mu_{\rm nn}-\mu_{\rm nq}). \nonumber
\end{align}
The superfluid leptonic Brunt--V\"{a}is\"{a}l\"{a} frequency is similar to that of Figure 2 of KG14, Figure 5 of~\citet{Passamonti2016} and the zero entrainment results of YW17. The differences are due to the different equations of state used and, in the case of the YW17 results, to our inclusion of general relativity in both the background star and the perturbations and our neglect of self-gravity perturbations. Figure~\ref{fig:BVFrequencyVarK} shows $N_{\rm q}$ for fixed mass $M=1.4M_{\odot}$ using each of the four parametrizations specified by Table~\ref{tab:EOSParameters}, with the peak value of $N_{\rm q}$ decreasing slightly with increasing nuclear compressibility $K$.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{BVFrequencyRel.pdf}
\caption{Brunt--V\"{a}is\"{a}l\"{a} (cyclical) frequency as a function of coordinate radius $r$ with no entrainment, calculated using the PC1 parametrization for our equation of state from Section~\ref{sec:EoS} for two different model stars: $M=1.4M_{\odot}$, $n_{{\rm b},\text{cntr}}=3.17n_{\text{nuc}}$, $R=12.11$ km, $R_{\text{cc}}=11.27$ km and $M=2.0M_{\odot}$, $n_{{\rm b},\text{cntr}}=6.67n_{\text{nuc}}$, $R=10.54$ km and $R_{\text{cc}}=10.23$ km, where $n_{{\rm b},\text{cntr}}$ is the central baryon density and $R_{\text{cc}}$ the crust-core interface radius. $R_{\text{cc}}$ for each star is also indicated on the graph by a vertical line at the frequency cutoff for $N_{\text{nf}}$. The Brunt--V\"{a}is\"{a}l\"{a} frequency arising due to the leptonic composition gradient in superfluid stars $N_{\rm q}$ and the total Brunt--V\"{a}is\"{a}l\"{a} frequency for a normal fluid star $N_{\text{nf}}$ are displayed. The frequency cutoffs for $N_{\rm q}$ correspond to the muon threshold, which is at the same density but a different radius for each star.}
\label{fig:BVFrequency}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{BVFrequencyRelVarK.pdf}
\caption{Brunt--V\"{a}is\"{a}l\"{a} (cyclical) frequency as a function of coordinate radius $r$ with no entrainment, for fixed mass $1.4M_{\odot}$ using the four EOS parametrizations specified by Table~\ref{tab:EOSParameters}. The frequency cutoffs correspond to the muon threshold, which is at the same density but a different radius for each star.}
\label{fig:BVFrequencyVarK}
\end{figure}
Using the definition of the Brunt--V\"{a}is\"{a}l\"{a} frequency, we can rewrite Eq.~(\ref{eq:XiRQ}) as
\begin{align}
\xi^r_{\rm q} {\rm e}^{-\nu}(\omega^2-N_{\rm q}^2)={}&{\rm e}^{-\lambda/2}(1+y)\frac{{\rm d}\Pi_{\rm q}}{{\rm d}r}-{\rm e}^{-\lambda/2}y\frac{{\rm d}\Pi_{\rm n}}{{\rm d}r} \nonumber \\
&+\frac{{\rm e}^{\lambda/2-\nu}\mu_0N_{\rm q}^2(\mu_{\rm nn}\Pi_{\rm q}-\mu_{\rm nq}\Pi_{\rm n})}{({\rm d}\mu_0/{\rm d}r)(\mu_{\rm nn}-\mu_{\rm nq})}.
\label{eq:XiRQ2}
\end{align}
Eqs.~(\ref{eq:XiRN}),~(\ref{eq:PerturbedContinuityN}--\ref{eq:PerturbedContinuityQ}) and~(\ref{eq:XiRQ2}) are the four coupled first-order ODEs describing the fluid perturbations in the core.
\subsection{Two-fluid formalism in the crust}
\label{subsec:CrustEoM}
To correctly calculate the compressional modes, we need to allow the oscillations in the core to propagate into the crust. As in the core, we consider two fluids -- superfluid free neutrons, with displacement field $\overline{\xi}^i_{\rm f}$, and normal fluid nuclei, with displacement field $\overline{\xi}^i_{\rm c}$. We ignore elastic stresses here for simplicity, as we are not interested in the $s$ (shear) and $i$ (interface) modes caused by elasticity in the crust \citep{McDermott1983}. We also neglect entrainment, which is expected to be significant in the crust~\citep{Kobyakov2013,Chamel2017a} but which we do not expect to play a large role in the $g$-modes. Its effect on the $p$-modes, which we do not expect to be large either, will be examined in a later paper.
We derive the two-fluid crust equations of motion by first assuming that the perturbations of the chemical potential $\mu_{\rm f}$ and $\mu_{\rm c}$ in the crust can be written as a function of the two crustal number densities $n_{\rm f}$ (free neutrons) and $n_{\rm c}$ (baryons in nuclei), with changes in the other parameters of the crust EOS $A$, $Y$ and $n_{\rm i}$ having been absorbed into the changes in either $n_{\rm f}$ or $n_{\rm c}$. Thus we can write a series of four equations-- the two radial and two tangential components of the perturbed Euler equations-- analogously to Eqs.~(\ref{eq:XiRN}--\ref{eq:XiPerpQ}). We then have
\begin{align}
&\omega^2{\rm e}^{-\nu}\xi^r_{\rm f}={\rm e}^{-\lambda/2}\frac{{\rm d}\Pi_{\rm f}}{{\rm d}r}, \label{eq:XiRF}\\
&\omega^2{\rm e}^{-\nu}r\xi^{\perp}_{\rm f}=\Pi_{\rm f}, \label{eq:XiPerpF} \\
&\omega^2{\rm e}^{-\nu}\xi^r_{\rm c}={\rm e}^{-\lambda/2}\frac{{\rm d}\Pi_{\rm c}}{{\rm d}r}, \label{eq:XiRC} \\
&\omega^2{\rm e}^{-\nu}r\xi^{\perp}_{\rm c}=\Pi_{\rm c}, \label{eq:XiPerpC}
\end{align}
where $\Pi_{\rm f}\equiv\delta\mu_{\rm f}/\mu_0$, $\Pi_{\rm c}\equiv\delta\mu_{\rm c}/\mu_0$ in analogy with Eq.~(\ref{eq:PiDefinition}). From the perturbed continuity equation, we also have
\begin{align}
\delta n_{\rm f} = -n_{\rm f}\Theta_{\rm f}-{\rm e}^{-\lambda/2}\xi^r_{\rm f}\frac{{\rm d}n_{\rm f}}{{\rm d}r}, \label{eq:EulerDensPertnf}\\
\delta n_{\rm c} = -n_{\rm c}\Theta_{\rm c}-{\rm e}^{-\lambda/2}\xi^r_{\rm c}\frac{{\rm d}n_{\rm c}}{{\rm d}r}. \label{eq:EulerDensPertnc}
\end{align}
Using our assumption about the number density dependence of the perturbations, we can then rearrange
\begin{align}
\delta\mu_{\rm f}={}& \mu_{\rm ff}\delta n_{\rm f}+\mu_{\rm fc}\delta n_{\rm c}, \\
\delta\mu_{\rm c}={}& \mu_{\rm cc}\delta n_{\rm c}+\mu_{\rm fc}\delta n_{\rm f}
\end{align}
to obtain
\begin{align}
\delta n_{\rm f}=\frac{\mu_0(\mu_{\rm cc}\Pi_{\rm f}-\mu_{\rm fc}\Pi_{\rm c})}{D_{\text{crust}}}, \label{eq:deltanf}\\
\delta n_{\rm c}=\frac{\mu_0(\mu_{\rm ff}\Pi_{\rm c}-\mu_{\rm fc}\Pi_{\rm f})}{D_{\text{crust}}}, \label{eq:deltanc}
\end{align}
where $\mu_{ab}\equiv\partial \mu_a/\partial n_{\rm b}$ for $a,b\in\{\rm f,c\}$, $D_{\text{crust}}\equiv \mu_{\rm ff}\mu_{\rm cc}-\mu_{\rm fc}^2$ and $\mu_0=\mu_{\rm f}=\mu_{\rm c}$ in the background. There are subtleties involved in the calculation of $\mu_{\rm ff}$, $\mu_{\rm cc}$ and $\mu_{\rm fc}$, which are discussed in Appendix~\ref{app:1}. Combining Eqs.~(\ref{eq:deltanf}--\ref{eq:deltanc}) with Eq.~(\ref{eq:EulerDensPertnf}--\ref{eq:EulerDensPertnc}), we obtain
\begin{align}
\frac{{\rm d}\xi^r_{\rm f}}{{\rm d}r}+\left[\frac{2}{r}+\frac{{\rm d}\ln n_{\rm f}}{{\rm d}r}\right]\xi^r_{\rm f}+\left[-{\rm e}^{\nu+\lambda/2}\frac{k^2_{\perp}}{\omega^2}+\frac{\mu_0\mu_{\rm cc}}{n_{\rm f}D_{\text{crust}}}{\rm e}^{\lambda/2}\right]\Pi_{\rm f} \nonumber \\
= \frac{\mu_0\mu_{\rm fc}}{n_{\rm f}D_{\text{crust}}}{\rm e}^{\lambda/2}\Pi_{\rm c}, \label{eq:PerturbedContinuityF}\\
\frac{{\rm d}\xi^r_{\rm c}}{{\rm d}r}+\left[\frac{2}{r}+\frac{{\rm d}\ln n_{\rm c}}{{\rm d}r}\right]\xi^r_{\rm c}+\left[-{\rm e}^{\nu+\lambda/2}\frac{k^2_{\perp}}{\omega^2}+\frac{\mu_0\mu_{\rm ff}}{n_{\rm c}D_{\text{crust}}}{\rm e}^{\lambda/2}\right]\Pi_{\rm c} \nonumber \\
= \frac{\mu_0\mu_{\rm fc}}{n_{\rm c}D_{\text{crust}}}{\rm e}^{\lambda/2}\Pi_{\rm f}. \label{eq:PerturbedContinuityC}
\end{align}
Eqs.~(\ref{eq:XiRF}),~(\ref{eq:XiRC}) and (\ref{eq:PerturbedContinuityF}--\ref{eq:PerturbedContinuityC}) are the four coupled first-order ODEs describing the fluid perturbations in the crust. Similarly to the equations in the core, in the case of zero thermodynamic coupling $\mu_{\rm fc}=0$, these equations become two sets of coupled equations for ($\xi^r_{\rm f}$,$\Pi_{\rm f}$) and ($\xi^r_{\rm c}$,$\Pi_{\rm c}$).
\subsection{Single normal fluid in crust}
\label{subsec:SFCrustEoM}
In the crust, the neutron superfluid is $^1S_0$, with gaps $\sim 0.1$--$1$ MeV except at low free neutron density~\citep{Gezerlis2010,Gezerlis2014}. Thus, we expect the free neutron gas to remain superfluid throughout most of the crust. The superfluid gap for neutrons falls precipitously at a Fermi wave number of approximately $0.05$ fm$^{-1}$ \citep{Gezerlis2014}, corresponding to a free neutron number density of $n_{\rm f}=n_{\rm NF}=2.64\times10^{-5}n_{\text{nuc}}$. For our calculations, we assume that the critical temperature $T_{\rm c}$ for free crustal neutrons is purely a function of $n_{\rm f}$. Thus, we assume that the free crustal neutrons are superfluid at $n_{\rm f}>n_{\rm NF}$ and normal at $n_{\rm f}\leq n_{\rm NF}$, so there is a sharp transition from superfluid to normal at $n_{\rm f}=n_{\rm NF}$. More realistically, $T_{\rm c}$ will also depend on $n_{\rm c}$; we assume that $T_{\rm c}$ depends on $n_{\rm c}$ much more weakly than it does on $n_{\rm f}$. In a finite temperature star, the superfluid density is proportional to $T_{\rm c}-T$ near the transition to normal fluid, but this temperature dependence is only important in a very thin region in the crust for $T$ small compared with typical values of $T_{\rm c}$ in the crust, which are $\sim 10^{10}$ K.
Between the transition density and neutron drip, free neutrons and nuclei move together as a single fluid. We represent this single normal fluid, which exists only in a very thin layer just above the neutron drip density, using the displacement field $\overline{\xi}^i_{\rm b}$ and the total baryon number density $n_{\rm b}=n_{\rm c}+n_{\rm f}$. The equation of state in this region is the same as in the two-fluid region of the crust. Since the two fluids move together here, there is a buoyancy and Brunt--V\"{a}is\"{a}l\"{a} frequency associated with the gradient of $Y_{\rm c}\equiv n_{\rm c}/n_{\rm b}$.
By analogy with Eqs.~(\ref{eq:XiRN}--\ref{eq:XiPerpQ}), ~(\ref{eq:PerturbedContinuityN}--\ref{eq:PerturbedContinuityQ}) and~(\ref{eq:XiRQ2}), we obtain two Euler equations and the perturbed continuity equation for the single normal fluid displacement field in the crust:
\begin{align}
&\omega^2{\rm e}^{-\nu}r\xi^{\perp}_{\rm b} = \Pi_{\rm b}, \label{eq:XiPerpB} \\
&\xi^r_{\rm b}{\rm e}^{-\nu}(\omega^2-N_{\rm b}^2) = {\rm e}^{-\lambda/2}\frac{{\rm d}\Pi_{\rm b}}{{\rm d}r}+\frac{{\rm e}^{\lambda/2-\nu}\mu_0N_{\rm b}^2}{{\rm d}\mu_0/{\rm d}r}\Pi_{\rm b}, \label{eq:XiRB} \\
&\frac{{\rm d}\xi^r_{\rm b}}{{\rm d}r}+\left[\frac{2}{r}+\frac{{\rm d}\ln n_{\rm b}}{{\rm d}r}+\frac{\mu_{bY_{\rm c}}}{n_{\rm b}\mu_{\rm bb}}\frac{{\rm d}Y_{\rm c}}{{\rm d}r}\right]\xi^r_{\rm b} \nonumber \\
&+\left[-{\rm e}^{\nu+\lambda/2}\frac{k_{\perp}^2}{\omega^2}+\frac{\mu_0}{n_{\rm b}\mu_{\rm bb}}{\rm e}^{\lambda/2}\right]\Pi_{\rm b}=0, \label{eq:PerturbedContinuityB}
\end{align}
where $\Pi_{\rm b}\equiv\delta\mu_{\rm b}/\mu_0$ and $N_{\rm b}$ is the Brunt--V\"{a}is\"{a}l\"{a} frequency associated with the gradient of $Y_{\rm c}$, given by
\begin{equation}
N_{\rm b}^2 = -{\rm e}^{\nu-\lambda}\frac{1}{\mu_0}\frac{{\rm d}\mu_0}{{\rm d}r}\frac{\mu_{{\rm b}Y_{\rm c}}}{n_{\rm b}\mu_{\rm bb}}\frac{{\rm d}Y_{\rm c}}{{\rm d}r}.
\label{eq:CrustBVFrequency}
\end{equation}
The two thermodynamic derivatives $\mu_{\rm bb}$ and $\mu_{{\rm b}Y_{\rm c}}$ are
\begin{align}
\mu_{\rm bb} ={}& Y_{\rm c}^2\mu_{\rm cc}+(1-Y_{\rm c})^2\mu_{\rm ff}+2Y_{\rm c}(1-Y_{\rm c})\mu_{\rm fc},\\
\mu_{{\rm b}Y_{\rm c}} ={}& n_{\rm c}(\mu_{\rm cc}-\mu_{\rm fc})-n_{\rm f}(\mu_{\rm ff}-\mu_{\rm fc}).
\end{align}
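These combinations can be checked with a short Python sketch; in particular, when the two species become thermodynamically indistinguishable ($\mu_{\rm ff}=\mu_{\rm cc}=\mu_{\rm fc}$), $\mu_{{\rm b}Y_{\rm c}}$ and hence $N_{\rm b}^2$ vanish, as expected on physical grounds (the input values in the test are illustrative only):

```python
def crust_single_fluid_derivs(mu_ff, mu_cc, mu_fc, n_f, n_c):
    """mu_bb and mu_bY_c for the single-fluid crust layer, built from the
    two-fluid thermodynamic derivatives and the densities n_f, n_c."""
    n_b = n_f + n_c
    Yc = n_c / n_b
    mu_bb = Yc**2 * mu_cc + (1 - Yc)**2 * mu_ff + 2 * Yc * (1 - Yc) * mu_fc
    mu_bY = n_c * (mu_cc - mu_fc) - n_f * (mu_ff - mu_fc)
    return mu_bb, mu_bY
```

In the degenerate case the weights $Y_{\rm c}^2+(1-Y_{\rm c})^2+2Y_{\rm c}(1-Y_{\rm c})=1$ also give $\mu_{\rm bb}$ equal to the common derivative, so the single-fluid sound speed is recovered.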
\subsection{Interface and boundary conditions}
\label{sec:BCs}
At the centre of the star, we impose the regularity condition $\Theta_a=0$, which implies that the displacement fields and $\Pi_a$ satisfy the following conditions at $r=0$:
\begin{align}
\xi^r_a ={}& l(\xi^r_a)_0r^{l-1}, \\
\Pi_a ={}& \omega^2{\rm e}^{-\nu}(\xi^r_a)_0r^l,
\end{align}
where $(\xi^r_a)_0$ is a constant. Since we can scale the overall amplitude of each mode, we only need to specify $(\xi^r_{\rm n})_0$ and can set $(\xi^r_{\rm q})_0=1$.
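In a numerical implementation, these regularity conditions supply the starting values for an outward integration from a small radius $r_0$; a minimal Python sketch (the starting radius, frequency and central redshift factor in the test are illustrative inputs):

```python
def centre_start(l, r0, xi0, omega, e_nu0):
    """Starting values (xi^r_a, Pi_a) at a small radius r0 from the centre
    regularity conditions xi^r_a = l (xi^r_a)_0 r^{l-1} and
    Pi_a = omega^2 e^{-nu} (xi^r_a)_0 r^l, with e_nu0 = e^{nu(0)}."""
    xi_r = l * xi0 * r0 ** (l - 1)
    Pi = omega ** 2 / e_nu0 * xi0 * r0 ** l
    return xi_r, Pi
```

The ratio $\Pi_a/\xi^r_a=\omega^2{\rm e}^{-\nu}r_0/l$ near the centre is independent of the free amplitude $(\xi^r_a)_0$, which is why only the ratio $(\xi^r_{\rm n})_0/(\xi^r_{\rm q})_0$ needs to be determined by the shooting procedure.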
We require four conditions at the crust-core transition which allow the computation of the four quantities $(\xi^r_{\rm c},\xi^r_{\rm f},\Pi_{\rm c},\Pi_{\rm f})$ on the crust side of the transition using the quantities $(\xi^r_{\rm q},\xi^r_{\rm n},\Pi_{\rm q},\Pi_{\rm n})$ on the core side of the transition. Since the crust-core interface is marked by the formation of nuclei, we know that the radial component of the displacement fields for the protons must be continuous at this interface. Since the motion of the protons is described by $\xi_{\rm q}^i$ and $\xi_{\rm c}^i$, this implies
\begin{equation}
(\xi^r_{\rm q})^+ = (\xi^r_{\rm c})^-,
\label{eq:InterfaceCondition1}
\end{equation}
where $+$ indicates the high-density (core) side and $-$ the low-density (crust) side of the transition. As baryons are not allowed to build up at the interface, baryon conservation is the second transition condition. Denoting the Lagrangian perturbation moving along with the nuclei (and hence the crust-core boundary) as $\Delta_{\rm c}$, the perturbed continuity equation for the total baryon number density is
\begin{equation}
\Delta_{\rm c}n_{\rm b}+n_{\rm b}\Theta_{\rm c}=0.
\end{equation}
Integrating this across the crust-core interface, we obtain
\begin{equation}
(n_{\rm n}\xi^r_{\rm n}-n_{\rm n}\xi^r_{\rm q})^+=(n_{\rm f}\xi^r_{\rm f}-n_{\rm f}\xi^r_{\rm c})^-.
\label{eq:InterfaceCondition2}
\end{equation}
As we have neglected elastic stresses in the crust, continuity of the tractions across the crust-core interface is given by the continuity of the pressure perturbation moving with the interface, or
\begin{equation}
(\Delta_{\rm c}P)^+=(\Delta_{\rm c}P)^-.
\end{equation}
Using the Gibbs--Duhem relation $\Delta P = \sum_{a}n_a\Delta \mu_a$, we can rewrite this condition as
\begin{equation}
(n_{\rm n}\Pi_{\rm n}+n_{\rm q}\Pi_{\rm q})^+=(n_{\rm c}\Pi_{\rm c}+n_{\rm f}\Pi_{\rm f})^-+(n_{\rm b}^--n_{\rm b}^+){\rm e}^{-\lambda/2}\xi^r_{\rm c}\frac{{\rm d}\ln\mu_0}{{\rm d}r}.
\label{eq:InterfaceCondition3}
\end{equation}
Following~\citet{Andersson2011} and~\citet{Passamonti2012}, the final boundary condition we impose is continuity of the neutron chemical potential perturbation, $(\Delta_{\rm c}\mu_{\rm n})^+=(\Delta_{\rm c} \mu_{\rm f})^-$, which results from the ``chemical gauge''-independence of the neutron chemical potential. This final interface condition simplifies to
\begin{equation}
(\Pi_{\rm n})^+ = (\Pi_{\rm f})^-,
\label{eq:InterfaceCondition4}
\end{equation}
where we have used $\mu_0=\mu_{\rm c}=\mu_{\rm f}$ in the background equilibrium. The chemical potential is the same for crustal neutrons that are bound in nuclei or in the surrounding free superfluid; this condition is satisfied in the crustal equation of state, which allows neutrons to be exchanged freely between nuclei and the surrounding free neutron vapor (see Section~\ref{subsec:CrustEoS} and references therein). Thus, Eq.~(\ref{eq:InterfaceCondition4}) states the condition that there is no energy change when a crustal neutron is exchanged with a core neutron at the crust-core boundary irrespective of whether the crustal neutron is bound or free.
At the two fluid-single fluid transition in the crust just above neutron drip, baryon conservation and continuity of the tractions must be imposed. These two conditions can be expressed as
\begin{align}
(n_{\rm f}\xi^r_{\rm f}+n_{\rm c}\xi^r_{\rm c})^+={}&(n_{\rm b}\xi^r_{\rm b})^-,
\label{eq:TFSFCondition1}\\
(n_{\rm f}\Pi_{\rm f}+n_{\rm c}\Pi_{\rm c})^+={}&(n_{\rm b}\Pi_{\rm b})^-,\label{eq:TFSFCondition2}
\end{align}
where $+/-$ indicate the high density (two fluid) and low density (single fluid) regions respectively.
We also require another boundary condition at the two fluid-single fluid transition. In a very thin region where the superfluid neutron fraction $f$ falls from one to zero, $\xi_{\rm f}^r=f\xi_{\rm sf}^r+(1-f)\xi_{\rm nf}^r$, where $\xi_{\rm sf}^r$ and $\xi_{\rm nf}^r$ are the radial components of the superfluid neutron and normal fluid neutron displacement fields. If the normal neutrons couple perfectly to the charged component, then $f\xi_{\rm sf}^r=\xi_{\rm f}^r-(1-f)\xi_{\rm c}^r\to\xi_{\rm f}^r-\xi_{\rm c}^r$ for $f\to 0$. The current carried by the superfluid component should vanish where the superfluid neutrons disappear, which is true if
\begin{equation}
(\xi^r_{\rm f})^+=(\xi^r_{\rm c})^+
\label{eq:XifXicequal}
\end{equation}
at the surface where $f=0$. Combined with Eq.~(\ref{eq:TFSFCondition1}), this implies that
\begin{equation}
(\xi^r_{\rm f})^+=(\xi^r_{\rm c})^+=(\xi^r_{\rm b})^-.
\label{eq:ThreeComovingXis}
\end{equation}
A boundary condition is also required at the outer surface of the star, which we approximate as the neutron drip line: the outer crust contains so little of the star's mass (less than $0.01$\%) that we assume its effect on the modes is negligible. We impose a form of the condition expressed in Eq.~(\ref{eq:InterfaceCondition3}), but applied to the displacement field of the single normal fluid which exists just above neutron drip (ND)
\begin{align}
\left(\Pi_{\rm b} + {\rm e}^{-\lambda/2}\xi^r_{\rm b}\frac{{\rm d}\ln\mu_0}{{\rm d}r}\right)_{\text{at ND}}=0.
\label{eq:DeltaP}
\end{align}
As a check, we also compute a few $g$-modes and $p$-modes while integrating out to lower densities in the crust, imposing Eq.~(\ref{eq:DeltaP}) at $n_{\rm b}/n_{\text{nuc}}=1\times10^{-8}$. The $g$-mode frequencies obtained when doing so agree to within 0.1\% of those found when we stopped the integration at neutron drip. There is no discernible change in the core displacement fields for $g$-modes for these two boundary conditions, and the changes in the displacement fields in the crust are larger than in the core but still very small. The $p$-mode frequencies obtained in this way are within 2\% of those calculated with neutron drip as the stopping point for the integration. The $p$-mode displacement fields in the core are weakly affected by this shift in the minimum density, but the modes in the crust can differ significantly, particularly for the higher frequency modes which can have additional oscillations in the crust.
\section{Normal mode calculations}
\label{sec:NormalModes}
\subsection{WKB solutions}
Since the leptonic Brunt--V\"{a}is\"{a}l\"{a} frequency does not exist in the crust, we expect that the $g$-mode displacement fields in the crust will be evanescent and nearly zero. We thus employ the WKB approximation to calculate approximate $g$-mode displacement fields and mode frequencies, assuming no propagation into the crust, and also use the resulting approximate $p$-mode dispersion relations when discussing the $p$-mode displacement fields in the core. First, we convert Eqs.~(\ref{eq:XiRN},\ref{eq:PerturbedContinuityN}--\ref{eq:PerturbedContinuityQ},\ref{eq:XiRQ2}) into two second-order equations for $\Pi_{\rm n}$ and $\Pi_{\rm q}$, neglecting curvature terms, derivatives of the metric, of $f$, of the $\mu_{ab}$ and of $N_{\rm q}$, and ignoring entrainment. We obtain
\begin{align}
&\frac{{\rm d}^2\Pi_{\rm n}}{{\rm d}r^2}+\frac{{\rm d}\ln n_{\rm n}}{{\rm d}r}\frac{{\rm d}\Pi_{\rm n}}{{\rm d}r}+\left[-k_{\perp}^2{\rm e}^{\lambda}+\frac{\mu_0\mu_{\rm qq}{\rm e}^{\lambda-\nu}\omega^2}{n_{\rm n}D}\right]\Pi_{\rm n}
\nonumber \\
{}&= \frac{\mu_{\rm nq}\mu_0{\rm e}^{\lambda-\nu}\omega^2}{n_{\rm n}D}\Pi_{\rm q},
\label{eq:2ndOrderN1} \\
&\frac{{\rm d}^2\Pi_{\rm q}}{{\rm d}r^2}+\frac{{\rm d}\ln n_{\rm q}}{{\rm d}r}\frac{{\rm d}\Pi_{\rm q}}{{\rm d}r}+\left(1-\frac{N_{\rm q}^2}{\omega^2}\right)\left[-k_{\perp}^2{\rm e}^{\lambda} +\frac{\mu_0\mu_{\rm nn}{\rm e}^{\lambda-\nu}\omega^2}{n_{\rm q}D}\right]\Pi_{\rm q} \nonumber \\
{}&= \frac{\mu_{\rm nq}\mu_0{\rm e}^{\lambda-\nu}(\omega^2-N_{\rm q}^2)}{n_{\rm q}D}\Pi_{\rm n}, \label{eq:2ndOrderQ1}
\end{align}
where we have used Eq.~(\ref{eq:BruntVaisalaFrequency1}) to replace ${\rm d}\mu_0/{\rm d}r$.
We define $\Psi_a=\sqrt{n_a}\Pi_a$ and assume that each $\Psi_a$ has a slowly varying amplitude $C_a(r)$ and a rapidly oscillating phase $S(r)=\int k_r\,{\rm d}r$, so that $\Psi_a=C_a{\rm e}^{{\rm i}S}$. Inserting this ansatz into Eqs.~(\ref{eq:2ndOrderN1}--\ref{eq:2ndOrderQ1}) gives, at leading order,
\begin{align}
(S')^2C_{\rm n} = M_{\rm nn}C_{\rm n} + M_{\rm nq}C_{\rm q}, \label{eq:WKB0O_1}\\
(S')^2C_{\rm q} = M_{\rm qq}C_{\rm q} + M_{\rm qn}C_{\rm n}, \label{eq:WKB0O_3}
\end{align}
where a prime denotes ${\rm d}/{\rm d}r$ and
\begin{align}
M_{\rm nn} &= -k^2_{\perp}{\rm e}^{\lambda}+\frac{{\rm e}^{\lambda-\nu}\omega^2\mu_0\mu_{\rm qq}}{n_{\rm n}D}-\frac{1}{\sqrt{n_{\rm n}}}\frac{{\rm d}^2\sqrt{n_{\rm n}}}{{\rm d}r^2}, \\
M_{\rm qq} &= -\left(1-\frac{N_{\rm q}^2}{\omega^2}\right)k^2_{\perp}{\rm e}^{\lambda}+\frac{{\rm e}^{\lambda-\nu}\omega^2\mu_0\mu_{\rm nn}}{n_{\rm q}D}-\frac{1}{\sqrt{n_{\rm q}}}\frac{{\rm d}^2\sqrt{n_{\rm q}}}{{\rm d}r^2}, \\
M_{\rm nq} &= M_{\rm qn} = -\frac{{\rm e}^{\lambda-\nu}\omega^2\mu_0\mu_{\rm nq}}{\sqrt{n_{\rm n}n_{\rm q}}D}.
\end{align}
Eqs.~(\ref{eq:WKB0O_1}--\ref{eq:WKB0O_3}) admit nontrivial solutions for
\begin{equation}
(S')^2 =\frac{1}{2}\left[(M_{\rm nn}+M_{\rm qq}) \pm \sqrt{(M_{\rm nn}-M_{\rm qq})^2+4M_{\rm nq}^2} \right].
\label{eq:ZerothOrderPhase}
\end{equation}
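Eq.~(\ref{eq:ZerothOrderPhase}) is simply the eigenvalue formula for the symmetric $2\times2$ coefficient matrix of Eqs.~(\ref{eq:WKB0O_1}--\ref{eq:WKB0O_3}). A minimal numerical sketch (with illustrative values of the $M_{ab}$, not values from our equation of state) confirms this, and the weak-coupling limit in which the two roots track the diagonal entries:

```python
import numpy as np

def kr_squared(M_nn, M_qq, M_nq):
    """Both WKB roots (S')^2 from Eq. (ZerothOrderPhase): the eigenvalues
    of the symmetric 2x2 coefficient matrix [[M_nn, M_nq], [M_nq, M_qq]]."""
    mean = 0.5 * (M_nn + M_qq)
    disc = 0.5 * np.sqrt((M_nn - M_qq) ** 2 + 4.0 * M_nq ** 2)
    return mean + disc, mean - disc   # (k_r^2)_+, (k_r^2)_-

# Illustrative numbers (not from the paper), with weak coupling
# |M_nq| << |M_nn|, |M_qq|:
M_nn, M_qq, M_nq = 9.0, -4.0, 0.3
plus, minus = kr_squared(M_nn, M_qq, M_nq)

# Cross-check against a direct eigenvalue solve
eig = np.linalg.eigvalsh(np.array([[M_nn, M_nq], [M_nq, M_qq]]))
assert np.isclose(plus, eig.max()) and np.isclose(minus, eig.min())

# For weak coupling the roots stay close to the diagonal entries,
# (k_r^2)_+ ~ M_nn and (k_r^2)_- ~ M_qq
assert abs(plus - M_nn) < 0.1 and abs(minus - M_qq) < 0.1
```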
As $|M_{\rm nq}| \ll |M_{\rm nn}|, |M_{\rm qq}|$, we can identify a neutron-dominated mode with $(S')^2=(k_r^2)_+\approx M_{\rm nn}$ and a charged fluid-dominated mode with $(S')^2=(k_r^2)_-\approx M_{\rm qq}$. In the low frequency $\omega^2\lesssim N^2_{\rm q}$ limit, $M_{\rm nn}\sim-k_{\perp}^2{\rm e}^{\lambda}$, so the low-frequency neutron-dominated mode is nonpropagating. The charged fluid-dominated mode does propagate, however, since $M_{\rm qq}\sim (N_{\rm q}^2/ \omega^2-1)k_{\perp}^2 {\rm e}^{\lambda}$ in the low-frequency limit, thus giving a dispersion relation for the $g$-modes
\begin{equation}
\omega^2_g\approx\frac{N_{\rm q}^2k_{\perp}^2{\rm e}^{\lambda}}{k^2},
\label{eq:ApproximateGModeDispersion}
\end{equation}
where $k^2=k_r^2+k^2_{\perp}{\rm e}^{\lambda}$, in agreement with the standard result of~\citet{McDermott1983}. The high-frequency limit $\omega^2\gg N_{\rm q}^2$, keeping the $M_{\rm nq}$ contribution, gives the $p$-mode dispersion relation in the core
\begin{align}
{}& \omega^2_{\rm p}\approx c_{\rm s\pm}^2k^2, \nonumber \\
{}&c_{\rm s\pm}^2={\rm e}^{\nu-\lambda}\frac{n_{\rm n}n_{\rm q}}{2\mu_0}\left[\left(\frac{\mu_{\rm qq}}{n_{\rm n}}+\frac{\mu_{\rm nn}}{n_{\rm q}}\right)\pm\sqrt{\left(\frac{\mu_{\rm qq}}{n_{\rm n}}-\frac{\mu_{\rm nn}}{n_{\rm q}}\right)^2+\frac{4\mu_{\rm nq}^2}{n_{\rm n}n_{\rm q}}}\right],
\label{eq:ApproximatePModeDispersion}
\end{align}
suggesting two sets of $p$-modes, one associated with each superfluid. This is similar to the simplified $p$-mode dispersion relation given by~\citet{Passamonti2016}. Here we have implicitly assumed that the phases of the two fluids are the same. If the thermodynamic coupling $\mu_{\rm nq}$ is ignored, Eq.~(\ref{eq:ApproximatePModeDispersion}) gives two completely separate dispersion relations, one for the charged fluid and one for the neutron fluid.
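As a sanity check of Eq.~(\ref{eq:ApproximatePModeDispersion}), the sketch below (with illustrative, dimensionless inputs rather than values from our EOS) evaluates both sound speeds and verifies the $\mu_{\rm nq}\to0$ decoupling limit, as well as the level repulsion produced by a nonzero coupling:

```python
import numpy as np

def sound_speeds_sq(n_n, n_q, mu_nn, mu_qq, mu_nq, mu0, metric=1.0):
    """Two-fluid p-mode sound speeds c_{s+/-}^2 from
    Eq. (ApproximatePModeDispersion). `metric` stands for the factor
    e^{nu - lambda}; all inputs here are illustrative numbers."""
    A = mu_qq / n_n + mu_nn / n_q
    B = np.sqrt((mu_qq / n_n - mu_nn / n_q) ** 2
                + 4.0 * mu_nq ** 2 / (n_n * n_q))
    pref = metric * n_n * n_q / (2.0 * mu0)
    return pref * (A + B), pref * (A - B)

# Illustrative dimensionless inputs, not taken from the paper's EOS
n_n, n_q, mu_nn, mu_qq, mu0 = 0.9, 0.1, 2.0, 5.0, 1.0

# With mu_nq = 0 the two dispersions decouple: one speed per fluid
cp, cm = sound_speeds_sq(n_n, n_q, mu_nn, mu_qq, 0.0, mu0)
assert np.isclose(max(cp, cm), n_n * mu_nn / mu0)  # neutron-fluid branch
assert np.isclose(min(cp, cm), n_q * mu_qq / mu0)  # charged-fluid branch

# A nonzero mu_nq pushes the two speeds apart (level repulsion)
cp2, cm2 = sound_speeds_sq(n_n, n_q, mu_nn, mu_qq, 0.5, mu0)
assert cp2 > cp and cm2 < cm
```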
In the inner region of the star $r<r_t$, $k_r^2<0$ and the normal modes are exponentially damped. In the outer region $r_t<r<r_{\text{out}}$, $k_r^2>0$ and the modes are oscillatory. Matching the oscillatory solution at $r_t$ to the exponential solution in the inner region and imposing $\Psi_a(r_{\text{out}})=0$ (no propagation into the crust), the allowed $g$-modes have $k_r$ satisfying
\begin{equation}
\int_{r_t}^{r_{\text{out}}}k_r(r')\,{\rm d}r'=\left(n_r-\frac{1}{4}\right)\pi, \quad n_r=1,2,3,\ldots,
\label{eq:krCondition}
\end{equation}
where $n_r$ is the radial index of the solution, which sets the radial node number for $\Psi_a$ and hence for $\Pi_a$ and $\xi^r_a$. This condition determines the allowed frequencies since $k_r(r)$ is a function of $\omega$.
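The quantization condition Eq.~(\ref{eq:krCondition}) can be solved numerically by bisection once $k_r(r;\omega)$ is specified. The sketch below does this for a toy $N_{\rm q}(r)$ profile and the low-frequency dispersion $k_r^2=(N_{\rm q}^2/\omega^2-1)k_\perp^2$; the profile, units and grid are illustrative assumptions, not our stellar model:

```python
import math

# Toy Brunt-Vaisala profile (illustrative only; not the paper's N_q)
R = 10.0       # stellar radius, arbitrary units
N_max = 1.0    # peak Brunt-Vaisala frequency, arbitrary units
l = 2

def N_q(r):
    return N_max * math.sin(math.pi * r / R)

def k_perp2(r):
    return l * (l + 1) / r**2

def phase_integral(omega, n_grid=4000):
    """Midpoint-rule integral of k_r over the propagation region
    (N_q > omega), using k_r^2 = (N_q^2/omega^2 - 1) k_perp^2 in the
    flat-space (e^lambda -> 1) limit."""
    dr = R / n_grid
    total = 0.0
    for i in range(n_grid):
        r = (i + 0.5) * dr
        kr2 = (N_q(r) ** 2 / omega ** 2 - 1.0) * k_perp2(r)
        if kr2 > 0.0:
            total += math.sqrt(kr2) * dr
    return total

def g_mode_frequency(n_r):
    """Bisect for omega satisfying Eq. (krCondition):
    phase integral = (n_r - 1/4) pi. The integral decreases
    monotonically with omega, so a simple bisection suffices."""
    target = (n_r - 0.25) * math.pi
    lo, hi = 1e-3, 0.999 * N_max
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if phase_integral(mid) > target:
            lo = mid   # integral too large -> trial frequency too low
        else:
            hi = mid
    return 0.5 * (lo + hi)

freqs = [g_mode_frequency(n) for n in (1, 2, 3, 4)]
# g-mode frequencies decrease with radial order, approaching ~1/n_r scaling
assert all(f1 > f2 for f1, f2 in zip(freqs, freqs[1:]))
```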
\subsection{Numerical results}
\subsubsection{$g$-modes}
To obtain solutions for the displacement fields $\xi^i_a$ and the $\Pi_a$, we numerically integrated the system of four first-order equations given in the core by Eqs.~(\ref{eq:XiRN}),~(\ref{eq:PerturbedContinuityN}--\ref{eq:PerturbedContinuityQ}) and~(\ref{eq:XiRQ2}), in the crust by Eqs.~(\ref{eq:XiRF}),~(\ref{eq:XiRC}) and~(\ref{eq:PerturbedContinuityF}--\ref{eq:PerturbedContinuityC}), and in the crust just above neutron drip by Eqs.~(\ref{eq:XiRB}--\ref{eq:PerturbedContinuityB}). We use a standard energy normalization to set the amplitude of the displacement fields. Reinserting factors of $c$, this condition is
\begin{align}
\frac{\omega^2}{c^2}\int_0^{R_{\text{cc}}}\int_{\Omega} {\rm d}V\mu_0\left[ (1-\epsilon_{\rm p})n_{\rm q}\xi^{*i}_{\rm q}\xi^{\rm q}_i +(1-\epsilon_{\rm n})n_{\rm n}\xi^{*i}_{\rm n}\xi^{\rm n}_{\rm i} \right. \nonumber \\
\left. + n_{\rm q}\epsilon_{\rm p}(\xi^{*i}_{\rm q}\xi^{\rm n}_{\rm i}+\xi^{*i}_{\rm n}\xi^{\rm q}_i)\right] \nonumber \\
+\frac{\omega^2}{c^2}\int_{R_{\text{cc}}}^R\int_{\Omega} {\rm d}V\mu_0(n_{\rm f}\xi^{*i}_{\rm f}\xi^{\rm f}_i+n_{\rm c}\xi^{*i}_{\rm c}\xi^{\rm c}_i) =\frac{GM^2}{R},
\end{align}
where $M$ and $R$ are the mass and radius of the star, $R_{\text{cc}}$ is the coordinate radius of the crust-core transition, $\Omega$ indicates integration over the solid angle of a sphere and ${\rm d}V={\rm e}^{\lambda/2}r^2\sin\theta {\rm d}r{\rm d}\theta {\rm d}\phi$. In the very thin single fluid region at densities just above neutron drip, $\xi^i_{\rm f}=\xi^i_{\rm c}=\xi^i_{\rm b}$. Each function ($\xi^r_{\rm n}$, $\xi^r_{\rm q}$, $\xi^r_{\rm c}$, $\xi^r_{\rm f}$, $\Pi_{\rm n}$, $\Pi_{\rm q}$, $\Pi_{\rm c}$, $\Pi_{\rm f}$, $\xi^r_{\rm b}$, $\Pi_{\rm b}$) is scaled by the same amount, since they are all linearly related.
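In practice the normalization is imposed by a uniform rescaling: the energy integral is quadratic in the perturbation functions, so scaling every function by $s$ scales the integral by $s^2$. A minimal sketch of this step (the function name, field container and fiducial $M$ and $R$ values are illustrative, not our numerical implementation):

```python
import math

G = 6.674e-8       # Newton's constant, cgs
M_sun = 1.989e33   # solar mass in g

def rescale_to_energy_norm(energy_trial, fields, M=1.4 * M_sun, R=1.2e6):
    """Given the mode-energy integral evaluated for a trial-amplitude
    solution (the left-hand side of the normalization condition), rescale
    every linearly related perturbation function so the integral equals
    G M^2 / R. `fields` maps function names to lists of trial values."""
    scale = math.sqrt(G * M ** 2 / R / energy_trial)
    return {name: [scale * v for v in vals]
            for name, vals in fields.items()}, scale

# Quadratic scaling: a trial energy 4x too large needs amplitude scale 1/2
target = G * (1.4 * M_sun) ** 2 / 1.2e6
rescaled, s = rescale_to_energy_norm(4.0 * target, {"xi_r_n": [1.0, 2.0]})
assert abs(s - 0.5) < 1e-12
assert rescaled["xi_r_n"] == [0.5, 1.0]
```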
Figures~\ref{fig:GModesMass} and~\ref{fig:GModesEntrainment} show the $l=2$ $g$-mode frequency spectrum as a function of the stellar mass and the entrainment parameter $\epsilon_{\rm p}$, respectively, for the four different EOS parametrizations described in Table~\ref{tab:EOSParameters}. The WKB frequencies for the $1.4M_{\odot}$ star with no entrainment are shown in Figure~\ref{fig:GModesEntrainment} to illustrate that they are extremely close to the exact frequencies for $n_{r,{\rm q}}\gtrsim 2$, even though we did not permit propagation into the crust in the WKB approximation. This indicates that the crust is largely unimportant to the mode frequency for the $g$-modes.
We find that our frequencies, which include general relativity, are redshifted compared to those of YW17. For example, they quote the approximate $g$-mode frequency $\omega_g/2\pi\approx 590/n_{r,{\rm q}}$ Hz for their $1.40M_{\odot}$, $m_{\rm p}^*/m_{\rm N}=0.8$ and $K=230.9$ MeV~\citep{RikovskaStone2003} star, while we obtain the approximate frequency spectrum
\begin{equation}
\frac{\omega_g}{2\pi}\approx\frac{608-0.83(K-240\text{ MeV})-90\frac{M}{M_{\odot}}+297\epsilon_{\rm p}}{n_{r,{\rm q}}}\text{ Hz}, \label{eq:OmegaGApprox}
\end{equation}
which is accurate to $\lesssim 5$\% for $n_{r,{\rm q}}>2$. This formula gives $\omega_g/2\pi\approx 549/n_{r,{\rm q}}$ Hz for a $1.40M_{\odot}$, $\epsilon_{\rm p}=1-m_{\rm p}^*/m_{\rm N}=0.2$ and $K=230.9$ MeV star in our model. Our frequencies are also lower than those of KG14, who did include general relativity. In this case, the differences in the frequencies are due to the different equations of state used, which also contributed to the differences between our results and those of YW17. Eq.~(\ref{eq:OmegaGApprox}) also indicates that the $g$-mode frequency is relatively insensitive to the nuclear compressibility $K$, with the numerator changing by only $41$ Hz over the range of $K$ values used here.
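The fit in Eq.~(\ref{eq:OmegaGApprox}) is straightforward to evaluate; the snippet below reproduces the $549/n_{r,{\rm q}}$ Hz value quoted above:

```python
def g_mode_freq_hz(n_rq, K_MeV, M_solar, eps_p):
    """Approximate l = 2 g-mode frequency from the fit in
    Eq. (OmegaGApprox); quoted as accurate to <~5% for n_rq > 2."""
    return (608.0 - 0.83 * (K_MeV - 240.0)
            - 90.0 * M_solar + 297.0 * eps_p) / n_rq

# K = 230.9 MeV, M = 1.40 M_sun, eps_p = 0.2 reproduces the quoted 549 Hz
assert round(g_mode_freq_hz(1, 230.9, 1.40, 0.2)) == 549
```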
As expected from the $(1+y)=(1-Y(1+\epsilon_{\rm p}))/(1-\epsilon_{\rm p}-Y)$ proportionality of the Brunt--V\"{a}is\"{a}l\"{a} frequency, the $g$-mode frequencies increase as the entrainment parameter $\epsilon_{\rm p}$ is increased. We find that the frequencies increase by a factor of $\sim 1.4$ between the $\epsilon_{\rm p}=0$ and $\epsilon_{\rm p}=0.5$ values, in agreement with the expected scaling factor of $1/\sqrt{1-\epsilon_{\rm p}}=\sqrt{2}$ for low $Y$. However, we do not find an increase as large as that found in YW17: our frequencies with $\epsilon_{\rm p}=0.5$ are a factor of $\sim1.5$ lower than theirs with $m^*_{\rm p}/m_{\rm N}=0.4$. Possible reasons for the disagreement are the differences in the equation of state and in the structure of the star, which we compute by solving the TOV equation. The decrease in $\omega_g$ with $K$ is also expected from the inverse relationship between $N_{\rm q}$ and $K$ as seen in Figure~\ref{fig:BVFrequencyVarK}, though this decrease is small (hence the relative insensitivity to $K$) because the $g$-modes can propagate over a longer distance in higher-$K$ stars due to their greater radii. The decrease in $\omega_g$ with $M$, even though the maximum value of $N_{\rm q}$ in the star increases with $M$, is explained as follows. Figure~\ref{fig:BVFrequency} shows that $N_{\rm q}$ becomes more sharply peaked as $M$ increases, meaning that the region of the star where $k_r$ is real (between $r_t$ and $r_{\text{out}}$ in Eq.~(\ref{eq:krCondition})) is smaller for large $M$. For large $n_{r,{\rm q}}$, Eq.~(\ref{eq:krCondition}) becomes
\begin{equation}
\omega_g \approx \frac{l}{n_{r,{\rm q}}\pi} \int_{r_t}^{r_{\text{out}}} \frac{N_{\rm q}}{r}\,{\rm d}r,
\label{eq:ApproximatekrCondition}
\end{equation}
so $\omega_g$ is smaller for a particular $n_{r,{\rm q}}$ when the range of integration is smaller, or when $M$ is larger.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{CombinedGModePlot.pdf}
\caption{$l=2$ $g$-mode (cyclical) frequencies for different values of the stellar mass and grouped by the EOS parametrization, denoted in the bottom left corner of each subplot. The entrainment in the core was set to zero when computing these frequencies.}
\label{fig:GModesMass}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{EntrainmentGModePlot.pdf}
\caption{$l=2$ $g$-mode (cyclical) frequencies for different values of the entrainment parameter $\epsilon_{\rm p}$, grouped by the EOS parametrization, denoted in the bottom left corner of each subplot. The WKB frequencies for a zero entrainment, $1.4M_{\odot}$ star with EOS parametrization PC1 are included in the upper left subplot. All stellar models used in this plot are of mass $1.4M_{\odot}$.}
\label{fig:GModesEntrainment}
\end{figure*}
Figure~\ref{fig:GModesDisplacementFields} shows the displacement fields $\xi^r(r)$ and $\xi^{\perp}(r)$ for a few representative $l=2$ $g$-modes in the $1.40M_{\odot}$ star. Since the leptonic Brunt--V\"{a}is\"{a}l\"{a} frequency only acts on the charged fluid, the amplitude of the charged component is two orders of magnitude larger than that of the neutron component, and the neutron component is pulled along by the charged component through the thermodynamic coupling term $\mu_{\rm nq}$ (and also by the entrainment if $\epsilon_{\rm p}\neq 0$). In the crust, the $g$-mode frequencies are too low to excite oscillatory motion, and thus both the nuclear and neutron fluid displacements are damped.
In the core, the charged component displacement fields are in reasonable agreement with YW17, but the neutron components have important differences. Our crust-core transition conditions change the oscillatory structure of the neutron component displacement fields, shifting them away from $\xi^r=0$ in the outer part of the core. This justifies our specification of the $g$-modes using $n_{r,{\rm q}}$, the radial node number of the charged fluid in the core. This is in contrast to the results of YW17, who assumed a single normal fluid in the crust and imposed a crust-core transition condition (Eq.~(B40) in YW17) that is equivalent to making both superfluid displacement fields equal. As the entrainment is increased in strength and the neutron fluid is forced to move along with the charged fluid to an even greater extent, we find that the radial nodes of the neutron fluid reappear at the locations of the charged fluid nodes. We cannot compare our results to YW17 in the crust because, unlike them, we treat the crust as two fluids. We also do not compare our results to KG14, who do not show any displacement fields and who also use a single-fluid crust.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{GModesPlot.pdf}
\caption{Displacement fields $\xi^r$ and $\xi^{\perp}$ for four $l=2$ $g$-modes in a $1.40M_{\odot}$, zero entrainment star with EOS parametrization PC1: ($n_{r,{\rm q}}$, $\omega/2\pi$)=(1, 435.2 Hz), (3, 155.8 Hz), (5, 95.34 Hz) and (10, 48.84 Hz). The crust-core interface is indicated by the thin line at $11.27$ km. To the left of this line, the displacement fields are $(\xi^r_{\rm n},\xi^r_{\rm q},\xi^{\perp}_{\rm n},\xi^{\perp}_{\rm q})$, while to the right they are $(\xi^r_{\rm f},\xi^r_{\rm c},\xi^{\perp}_{\rm f},\xi^{\perp}_{\rm c})$. $(\xi^r_{\rm b},\xi^{\perp}_{\rm b})$, which do not vary much over the very thin region ($\sim 10$ m) of single fluid above neutron drip, are not shown.}
\label{fig:GModesDisplacementFields}
\end{figure*}
\subsubsection{$p$-modes}
Figure~{\ref{fig:PModesDisplacementFields}} shows four distinct $l=2$ $p$-modes for a zero entrainment ($\epsilon_{\rm p}=0$), $1.4M_{\odot}$ star, the first of which is actually the $n_{r,{\rm n}}=n_{r,{\rm q}}=0$ $f$-mode. These illustrate that 1) there are twice as many $p$-modes since there are two fluids, a result which is well known \citep{Lindblom1994,Lee1995,Gualtieri2014}, including multiple modes with the same radial node number for one or both fluids, and 2) the fluids need not oscillate in phase, meaning the $n$ and $q$ fluids can have different numbers of radial nodes. In fact, we find that, for $\epsilon_{\rm p}=0$, most of the $p$-modes for a two-superfluid star behave as if the two fluids were (almost) uncoupled, in agreement with previous work \citep{Gusakov2011,Gualtieri2014}. This means that the core WKB result Eq.~(\ref{eq:ZerothOrderPhase}) does not apply to all $p$-modes, since it assumes that the two fluids have identical phase, which is not necessarily true. In contrast to the $q$-led $g$-modes, the amplitudes of the $n$ and $q$ components of the $p$-modes are comparable. Additionally, the crust displacement fields can have multiple radial nodes, even though the crust constitutes only a few percent of the star's radial extent, since the wave number for a $p$-mode is often significantly smaller in the crust than in the core. Following~\citet{Lindblom1994}, we can classify $p$-modes by calculating the baryon current $Y\xi^r_{\rm q}+(1-Y)\xi^r_{\rm n}$: those with small baryon current compared with $\xi_{\rm n}^r-\xi_{\rm q}^r$ are classified as superfluid modes, denoted ``$s_i$'', while all others are classified as normal fluid modes, denoted ``$p_i$''. This is similar to the classification scheme of~\citet{Lindblom1994} and~\citet{Lee1995}, who use a scheme based on quantities related to our $\Pi_{\rm q}$ and $\Pi_{\rm n}$.
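The baryon-current classification amounts to a simple rule; the sketch below is illustrative, with a hypothetical numerical threshold rather than the exact cut used in our calculations:

```python
def classify_p_mode(xi_n, xi_q, Y, threshold=0.1):
    """Classify a p-mode as superfluid ('s') or normal ('p') following the
    baryon-current criterion in the text. xi_n and xi_q are representative
    radial displacement amplitudes and Y is the charged fraction; the
    threshold is an illustrative choice, not a value from the paper."""
    baryon_current = Y * xi_q + (1.0 - Y) * xi_n
    counterphase = xi_n - xi_q
    if counterphase == 0.0:
        return "p"  # perfectly comoving fluids: a normal-fluid mode
    return "s" if abs(baryon_current) < threshold * abs(counterphase) else "p"

# Counter-moving fluids with nearly zero net baryon flux -> superfluid mode
assert classify_p_mode(xi_n=1.0, xi_q=-9.0, Y=0.1) == "s"
# Comoving fluids -> normal-fluid mode
assert classify_p_mode(xi_n=1.0, xi_q=1.0, Y=0.1) == "p"
```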
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{PModesPlot.pdf}
\caption{Displacement fields $\xi^r$ and $\xi^{\perp}$ for the $l=2$ $f$-mode and three $p$-modes in a $1.40M_{\odot}$, zero entrainment star: ($x_i$, $n_{r,{\rm q}}$, $n_{r,{\rm n}}$, $\omega_g/2\pi$)=($f$, 0, 0, 2302 Hz), ($p_1$, 1, 1, 6576 Hz), ($p_2$, 4, 2, 9703 Hz) and ($s_3$, 3, 2, 10378 Hz), where $x_i$ refers to the standard classification of the mode and its order as a subscript. The crust-core interface is indicated by the thin line at $11.27$ km. To the left of this line, the displacement fields are $(\xi^r_{\rm n},\xi^r_{\rm q},\xi^{\perp}_{\rm n},\xi^{\perp}_{\rm q})$, while to the right they are $(\xi^r_{\rm f},\xi^r_{\rm c},\xi^{\perp}_{\rm f},\xi^{\perp}_{\rm c})$. $(\xi^r_{\rm b},\xi^{\perp}_{\rm b})$, which do not vary much over the very thin region ($\sim 10$ m) of single fluid above neutron drip, are not shown. The radial node numbers $n_{r,{\rm q}}$ and $n_{r,{\rm n}}$ for each fluid for each mode are indicated in the upper left of each plot. The displacement fields have been scaled by factors of $\overline{n}^{1/2}_a=(n_a/n_{\text{nuc}})^{1/2}$, which accounts for the abrupt jumps occurring at the crust-core transition.}
\label{fig:PModesDisplacementFields}
\end{figure*}
Figure~\ref{fig:PModesScatter} plots the radial node numbers in the core for each $p$-mode as a function of the mode frequency with zero entrainment. We plot $n_{r,{\rm n}}$ and $n_{r,{\rm q}}$ separately for modes in which they are not identical and only one of them for modes in which they are the same. For the $p$-modes with $n_{r,{\rm n}}\neq n_{r,{\rm q}}$, the two components of the mode each roughly obey the uncoupled fluid dispersion relations $(k^{\rm n}_r)^2\approx M_{\rm nn}$ and $(k^{\rm q}_r)^2\approx M_{\rm qq}$, while those with $n_{r,{\rm n}}= n_{r,{\rm q}}$ roughly obey one of the two (coupled) WKB results given by Eq.~(\ref{eq:ZerothOrderPhase}). The modes of the latter type are labeled distinctly based on which solution $(k_r)_{\pm}$ they follow most closely. The separate, uncoupled dispersion relations obeyed by most $p$-modes suggest that they are formed from separate $n$ and $q$ oscillations which are paired together through the weak thermodynamic coupling (in the case of zero entrainment) due to having similar frequencies, with the pairing shifting the mode away from either of the exact frequencies that the uncoupled fluid modes would have. This means that the $n$ and $q$ components of each mode are not required to have the same node number, which is what we observe. The frequency residuals $\Delta\omega_{\rm p}$ compared to the uncoupled fluid or WKB result are shown in the right panel. For the nearly uncoupled modes, these were obtained by comparing the numerically calculated frequency to the expected frequency for the separate fluid components for a given $n_{r,{\rm n}}$ or $n_{r,{\rm q}}$, calculated using $(k^{\rm n}_r)^2\approx M_{\rm nn}$ or $(k^{\rm q}_r)^2\approx M_{\rm qq}$ and Eq.~(\ref{eq:krCondition}).
For the $n_{r,{\rm n}}=n_{r,{\rm q}}$ modes, the expected frequency was calculated for a given $n_r$ by using Eq.~(\ref{eq:krCondition}) and the WKB solution from Eq.~(\ref{eq:ZerothOrderPhase}) which gave the smallest frequency difference for each mode. $\Delta\omega_{\rm p}$ is small for most modes, indicating that they are well-described by either the nearly uncoupled or standard WKB dispersions. Many of the residuals for the high-frequency uncoupled-type neutron fluid modes are large, suggesting that for these modes the charged part of the mode could be ``pulling'' the neutron part towards being an $n_{r,{\rm n}}=n_{r,{\rm q}}$, charged fluid-like mode obeying the dispersion relation $(k_r)_{-}$.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{CombinedPModePlot.pdf}
\caption{Left: $l=2$ $p$-mode radial node numbers $n_{r,{\rm n}}$ and $n_{r,{\rm q}}$ plotted as a function of the frequency of the corresponding mode for a $1.40M_{\odot}$, zero entrainment star. The $f$ and $s_0$ ($\omega/2\pi=28687$ Hz) modes are not shown. Modes where $n_{r,{\rm n}}\neq n_{r,{\rm q}}$ have the radial node numbers of the $n$ and $q$ displacement fields denoted separately, but are paired, i.e., there are two ticks at the same frequency $\omega_{\rm p}$, one ($+$) denoting the value of $n_{r,{\rm n}}$ and the other ($\times$) denoting $n_{r,{\rm q}}$. Modes where $n_{r,{\rm n}}=n_{r,{\rm q}}$ are denoted by distinct symbols depending on whether they more closely follow the $(k_r)_{+}$ ($n$-dominated, denoted by a triangle) or $(k_r)_{-}$ ($q$-dominated, denoted by a square) WKB dispersion relation.
Right: Residuals $\Delta\omega_{\rm p}$ between the full numerically calculated $p$-mode frequencies and those that an uncoupled $n$ or $q$ mode of identical $n_{r,{\rm n}}$/$n_{r,{\rm q}}$ would have (for $n_{r,{\rm n}}\neq n_{r,{\rm q}}$) \textit{or} between the fully numerically calculated $p$-mode frequencies and the nearest coupled WKB frequency corresponding to the same radial node number $n_{r,{\rm n}}=n_{r,{\rm q}}$.}
\label{fig:PModesScatter}
\end{figure*}
We also calculated the $p$-modes for a $1.40M_{\odot}$ star with strong entrainment $\epsilon_{\rm p}=0.5$. As expected, this drastic increase in the entrainment reduces the difference in the radial node number between the two fluids to at most $\pm 1$. It additionally forces the modes towards the neutron-dominated WKB dispersion $(k_r^2)_{+}\approx M_{\rm nn}$, as shown in Figure~\ref{fig:PModesEntrainment}. That this solution is selected, as opposed to the charged fluid-dominated one, suggests that the $p$-modes can be thought of as neutron-dominated in the same way that the $g$-modes can be considered charged fluid-dominated, with this shift arising because the entrainment coefficient in the neutron equations, $\epsilon_{\rm n}=(n_{\rm q}/n_{\rm n})\epsilon_{\rm p}$, is about an order of magnitude smaller than $\epsilon_{\rm p}$, which appears in the charged fluid equations.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{EntrainmentPModesScatter.pdf}
\caption{$p$-modes for a $1.40M_{\odot}$, $\epsilon_{\rm p}=0.5$ star with parametrization PC1 for the EOS, and $n_r$ (including fractional values) as a function of $\omega$ determined from Eq.~(\ref{eq:krCondition}) using $(k_r^2)_{+}\approx M_{\rm nn}$.}
\label{fig:PModesEntrainment}
\end{figure}
A final point of interest concerning the $p$-modes is the possible existence of pairs of distinct $p$-modes which are closely spaced in frequency, as opposed to the nearly uniform frequency spacing expected in the single-fluid or WKB two-fluid cases. An example we found for the $1.40M_{\odot}$, $K=230$ MeV, $\epsilon_{\rm p}=0$ star is the pair ($n_{r,{\rm q}}=6$, $n_{r,{\rm n}}=6$, $\omega_{\rm p}/2\pi=24116$ Hz) and ($n_{r,{\rm q}}=10$, $n_{r,{\rm n}}=9$, $\omega_{\rm p}/2\pi=24264$ Hz), which have a frequency spacing of the order of the $g$-mode frequencies. A similar phenomenon is observed in the finite temperature calculation of \citet{Gualtieri2014}, where the $p$-mode frequencies become very similar at certain ``resonance'' temperatures, though our results indicate that nearly-resonant $p$-modes can occur at any temperature. These mode pairs could provide a source of large nonlinear mode couplings for the two-superfluid version of the $p$-$g$ instability discussed in recent papers~\citep{Weinberg2013,Venumadhav2014,Weinberg2016}. These instabilities may be observable through phase shifts in the gravitational waveforms of binary neutron star mergers~\citep{Essick2016,Andersson2018}.
\section{Conclusions}
We have calculated the $g$- and $p$-modes of a superfluid star with leptonic buoyancy using a specific model for nuclear matter in the core and the crust. We have included general relativity and a two-fluid crust when computing the normal modes, finding that the crust-core interface conditions for the displacement fields change the neutron components of the $g$-mode displacement fields in the core by removing many of their radial nodes. In order to compute the modes, we have developed a simple but flexible equation of state for both crust and core which contains all of the thermodynamics required by our formalism. This allowed us to compute oscillation modes for a range of stellar and nuclear physics parameters, and our EOS can be easily adjusted to agree with new neutron star or nuclear physics measurements. In general our leptonic buoyancy $g$-mode frequencies are similar to those found previously, considering differences in the equations of state used to model the star and redshift factors, and are dominated by the charged fluid in which the buoyancy exists. We find that the $g$-mode frequencies increase with entrainment and decrease with stellar mass and nuclear compressibility, with only weak dependence on the latter. Our decomposition of the fluid into neutron and charged components clearly illustrates that the neutrons are pulled along by the charged fluid in the $g$-modes through thermodynamic coupling and entrainment, and otherwise would not participate in the $g$-mode.
In contrast, for zero entrainment, we reproduce earlier results~\citep{Gusakov2011,Gualtieri2014} showing that most of the $p$-modes behave as nearly uncoupled fluids, with the weak coupling between the two superfluids leading to pairing between uncoupled $n$- and $q$-fluid modes with similar frequencies. This results in $p$-modes whose charged and neutron components can have widely differing radial node numbers, and in $p$-modes with frequency differences of the order of the $g$-mode frequencies. These could thus contribute to the recently proposed tidal $p$-$g$ or related instabilities, which depend on nonlinear couplings between $p$- and $g$-modes. For large entrainment, we find ``neutron-dominated'' $p$-modes, in which the phases of the two superfluids in the core are nearly the same, so that they almost behave as a single neutron fluid.
As mentioned briefly by YW17 and incorporated in a recent paper~\citep{Yu2017a}, we should include hyperons in the neutron star core, as the chemical potential above $\sim 3n_{\text{nuc}}$ reaches the bare rest mass of the $\Lambda$ hyperon. This will soften the equation of state and may provide additional hyperon superfluids which couple thermodynamically to the neutron and charged fluids, or, if the hyperons are not superfluid, a hyperonic Brunt--V\"{a}is\"{a}l\"{a} frequency which can modify the $g$-modes in the inner core~\citep{Dommes2016}. If the star is able to contain $\Xi^{-}$ hyperons, it could have a hyperonic Brunt--V\"{a}is\"{a}l\"{a} frequency even if the hyperons are superfluid, since the $\Xi^{-}$ would be expected to comove with the protons to which they are electrostatically coupled. Such hyperonic buoyancy would shift the $g$-mode frequencies obtained from leptonic buoyancy alone, which could serve as an indicator of the presence of hyperons in neutron stars if the resulting gravitational waveform phase shifts from the resonant excitation of these $g$-modes in binary neutron star inspirals could be measured. However, if the hyperons soften the EOS too much, it could fail to support stars of mass $>2M_{\odot}$, as reaching this mass already required large nuclear compressibilities or high central densities.
\section*{Acknowledgements}
This work was supported in part by NASA ATP grant NNX13AH42G. PBR was also supported in part by the Boochever Fellowship at Cornell for fall 2017. We also thank the referee for many helpful comments that improved our paper.
\section{Introduction}
The first observation of gravitational waves from a binary neutron star merger~\citep{Abbott2017} opens up the possibility of studying neutron star interiors through tidally-induced phase shifts to gravitational waveforms~\citep{Lackey2015,Agathos2015}. This would allow gravitational wave astronomy to serve as a probe of the equation of state above nuclear density, which is otherwise difficult to study. Low-frequency modes with frequencies swept by the orbital frequency may be resonantly excited by tidal interactions in neutron star-black hole and neutron star-neutron star mergers~\citep{Bildsten1992,Cutler1993,Lai1994,Reisenegger1994,Xu2017,Andersson2018}, causing a phase shift that depends on the exact nature of the excited modes. Low frequency $g$-modes are especially interesting, although the resulting gravitational waveform phase shifts from their resonant excitation will likely be impossible to measure with current-generation detectors unless the merging neutron stars are rapidly rotating or have large radii~\citep{Ho1999,Flanagan2007}. The acoustic $p$-modes are too high in frequency to be resonantly excited themselves, but could participate in nonlinear tidal interactions involving the coupling of the $g$-modes and the $p$-modes which may be observable through a gravitational waveform phase shift~\citep{Weinberg2013,Essick2016}.
Potential sources of $g$-modes in neutron stars have been studied for decades. In a normal fluid neutron star, buoyancy arising from temperature gradients~\citep{McDermott1983,Bildsten1995} and from the proton fraction gradient~\citep{Reisenegger1992} has been investigated, supporting modes with frequencies $\sim 1$--$100$ Hz. \citet{Lee1995} studied proton fraction gradient $g$-modes in Newtonian stars with superfluid cores, and confirmed a previous calculation by \citet{Lindblom1994} which found two sets of $p$-modes, corresponding to the normal fluid and superfluid degrees of freedom, respectively. Sound speeds for both sets of superfluid neutron star $p$-modes have been calculated by \citet{Epstein1988a}, and this second set of $p$-modes has also been found in a fully relativistic, finite-temperature calculation~\citep{Gualtieri2014}. However, in neutron star cores composed of superfluid neutrons and superfluid-superconducting protons~\citep{Lombardo2001, Page2011}, proton fraction gradients do not lead to $g$-modes unless temperatures are above the neutron critical temperature, estimated to be $\lesssim 10^9$ K~\citep{Yakovlev1999}, above which electron-neutron coupling~\citep{Bertoni2015} and electron-proton electrostatic coupling will cause both baryon species to move together. $G$-modes due to entropy gradients in superfluid neutron stars were first found by~\citet{Gusakov2013a}, who shortly thereafter found a new class of $g$-modes resulting from leptonic buoyancy~\citep{Kantor2014} (hereafter KG14). This effect is caused by a gradient in the electron fraction at number densities $\gtrsim 0.13$ fm$^{-3}$, where both electrons and muons coexist in most equations of state. While these leptonic buoyancy $g$-modes were considered in a nonzero temperature star, they were found to exist even in the zero-temperature limit, and their existence was independently confirmed~\citep{Passamonti2016}.
A recent paper by~\citet{Yu2017} (hereafter YW17) has computed $g$-mode frequencies and displacement fields arising from leptonic buoyancy in zero temperature neutron star cores using Newtonian gravity, and used their results to study resonant tidal excitation of the modes during neutron star binary inspiral.
We calculate both sets of compressional modes of two-superfluid neutron stars in the zero-temperature approximation, including both the $g$-modes arising from leptonic buoyancy and the $p$-modes, but with a few crucial differences from previous calculations. First, like KG14, we include general relativity and work in the Cowling approximation, neglecting the effects of perturbations to the metric; YW17 used Newtonian gravity but included self-gravity perturbations. Secondly, we use a flexible parametrized equation of state (EOS) that allows us to easily adjust the compressibility of the neutron star core, and employ this EOS to calculate the $g$-modes for a range of compressibilities and correspondingly a range of stellar masses and radii. Thirdly, we allow the neutron superfluid to flow into the crust of normal fluid nuclei instead of assuming that the crust is a single normal fluid. We find that this has important implications for the neutron component of the $g$-modes and for both components of the $p$-modes. We compute the displacement fields for the $g$-modes, which KG14 did not report in their initial letter but YW17 did, although the differences in our method mentioned above mean that our modes differ qualitatively and quantitatively. We also use our formalism to compute the $p$-modes in the star like \citet{Lee1995} and \citet{Gualtieri2014}, though only in the zero-temperature limit.
In Section~\ref{sec:EoS}, we introduce the parametrized equation of state we use in the core (Section~\ref{subsec:CoreEoS}) and the crust (Section~\ref{subsec:CrustEoS}). In Section~\ref{sec:FluidDynamics} we obtain the equations of motion for the modes and compute the Brunt--V\"{a}is\"{a}l\"{a} frequency due to the muon gradient in the core. The crust-core interface and boundary conditions for the modes, which we find are significant to determining the normal mode displacement fields, are then discussed. Finally, in Section~\ref{sec:NormalModes} we compute the $g$- and $p$-modes, with and without entrainment of the superfluid neutrons and protons in the core, and we make comparisons to previous calculations.
\section{Equation of state}
\label{sec:EoS}
Here we describe the model of the background neutron star that we used for the calculation of the Brunt--V\"{a}is\"{a}l\"{a} frequency in Section~\ref{sec:BVFreq} and the compressional modes in Section~\ref{sec:NormalModes}. Our EOS is based on a relatively simple, parametrized model. We adopt parameters to satisfy constraints near nuclear density $n_{\text{nuc}}=0.16$ fm$^{-3}$ and allow neutron star masses above 2$M_{\odot}$. It is a phenomenological model, but it is sufficiently detailed that we can compute all the thermodynamic quantities needed to find the normal modes. We use units with $\hbar=c=1$ throughout.
\subsection{Core equation of state}
\label{subsec:CoreEoS}
We consider an electrically neutral fluid of neutrons, protons, electrons and muons at zero temperature. Its energy density $\rho$ is specified as a function of three variables: baryon number density $n_{\rm b}$, proton fraction $Y$ and electron fraction $f$, where the neutron, proton, electron and muon number densities are respectively $n_{\rm n}=n_{\rm b}(1-Y)$, $n_{\rm p}=n_{\rm b}Y$, $n_{\rm e}=n_{\rm b}Yf$ and $n_{\upmu}=n_{\rm b}Y(1-f)$. We separate the energy density into kinetic and interaction parts $\rho=\rho_{\text{kin}}+\rho_{\text{int}}$; the kinetic part (including the rest mass) is given by
\begin{equation}
\rho_{\text{kin}}(n_{\rm b},Y,f)=\frac{p_{\rm Fe}^4}{4\pi^2}+\sum_{j={\rm n,p,}\upmu}\frac{m_j^4}{\pi^2}\phi\left(\frac{p_{{\rm F}j}}{m_j}\right)
\label{eq:KineticEnergyDensity}
\end{equation}
for Fermi momenta $p_{{\rm F}j}=(3\pi^2n_j)^{1/3}$ and bare mass $m_j$ of particle species $j$, and where
\begin{equation}
\phi(x)=\frac{x^3}{4}\sqrt{x^2+1}+\frac{x}{8}\sqrt{x^2+1}-\frac{1}{8}\text{arsinh}(x).
\end{equation}
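As a quick check of Eq.~(\ref{eq:KineticEnergyDensity}), $\phi$ interpolates between the rest-mass-dominated limit $\phi(x)\to x^3/3$ (so that $\rho_{\text{kin}}\to m_j n_j$) and the ultrarelativistic limit $\phi(x)\to x^4/4$ (matching the electron term $p_{\rm Fe}^4/4\pi^2$). A minimal numerical sketch of these limits, which we add for illustration:

```python
import math

def phi(x):
    """Free Fermi-gas energy-density function from the text:
    rho_kin = (m^4 / pi^2) * phi(p_F / m) for each massive species."""
    s = math.sqrt(x * x + 1.0)
    return 0.25 * x**3 * s + 0.125 * x * s - 0.125 * math.asinh(x)

# Nonrelativistic limit (x << 1): phi -> x^3/3, the rest-mass energy
# density m * n with n = m^3 x^3 / (3 pi^2).
assert abs(phi(1e-3) / ((1e-3)**3 / 3.0) - 1.0) < 1e-5

# Ultrarelativistic limit (x >> 1): phi -> x^4/4.
assert abs(phi(1e3) / ((1e3)**4 / 4.0) - 1.0) < 1e-5
```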
We have assumed the electrons are ultrarelativistic and ignore the difference between the proton and neutron masses, setting $m_{\rm n}=m_{\rm p}=m_{\rm N}$. Although we use the bare nucleon mass, we assume that $\rho_{\text{int}}$ adequately includes effective mass corrections. The interaction energy density $\rho_{\text{int}}(n_{\rm b},Y)$ employed is based on that of~\citet{Hebeler2013}, but with a different form for the symmetry penalty term:
\begin{align}
\rho_{\text{int}}(n_{\rm b},Y)={}&n_{\text{nuc}}E_{\rm S}\frac{\overline{n}^2+f_{\rm S}\overline{n}^{\gamma_{\rm S}+1}}{1+f_{\rm S}} \nonumber
\\ & +n_{\text{nuc}}E_{\rm A}\overline{n}^2\left(\frac{\overline{n}+\overline{n}_0}{1+\overline{n}_0}\right)^{\gamma_{\rm A}-1}(1-2Y)^2,
\label{eq:NucleonInteractionEnergyDensity}
\end{align}
where $\overline{n}=n_{\rm b}/n_{\text{nuc}}$ and $\overline{n}_0$ is a characteristic number density. For $\overline{n}\ll\overline{n}_0$, the symmetry penalty term is quadratic in $\overline{n}$, as the energy per baryon should be linear in the density at low densities.
The requirements of $-16$ MeV per baryon binding energy and zero pressure for symmetric nuclear matter at nuclear density constrain the parameters $E_{\rm S}$, $\gamma_{\rm S}$ and $f_{\rm S}$, while experimental measurements like those used in constructing Figure~6 of~\citet{Lattimer2016} constrain $E_{\rm A}$, $\gamma_{\rm A}$ and $\overline{n}_0$. Another constraint is that the EOS must allow a maximum mass $\geq 2M_{\odot}$~\citep{Antoniadis2013}, though this can be adjusted upward if required by future observations. We consider four possible parameter choices PC1--PC4, differing in the values of $\gamma_{\rm S}$ and $f_{\rm S}$, with each choice corresponding to a value of the nuclear compressibility parameter $K=9(\partial^2(\rho/n_{\rm b})/\partial\overline{n}^2)|_{\overline{n}=1,Y=1/2}$. These are listed in Table~\ref{tab:EOSParameters}. For these choices of $E_{\rm A}$, $\gamma_{\rm A}$ and $\overline{n}_0$, the symmetry energy is $S_{\rm v}=31.73$ MeV and its density derivative is $L=60.32$ MeV, within the $1\sigma$ confidence region of Figure~6 of~\citet{Lattimer2016}. Three of the chosen values of $K$ lie within the $240\pm20$ MeV confidence range cited in~\citet{Lattimer2016}; we do not use $K=220$ MeV because it does not allow a $2M_{\odot}$ star for our EOS. The $K=280$ MeV parametrization represents a causal limit, i.e.\ the sound speed equals the speed of light for central densities just beyond that of the maximum-mass star for this parametrization. While the EOS is flexible, we found it difficult to obtain a maximum mass greater than $2.2M_{\odot}$, and adjusting the parameters $E_{\rm A}$, $\gamma_{\rm A}$ and $\overline{n}_0$ had only a small effect on the nuclear compressibility, so these parameters were fixed for all four parameter sets.
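The saturation constraints can be verified directly from Eqs.~(\ref{eq:KineticEnergyDensity}) and~(\ref{eq:NucleonInteractionEnergyDensity}). The following sketch (ours, not the authors' code) assumes the PC1 parameters and symmetric nucleon-only matter, and checks that $E/A\simeq -16$ MeV and $P\simeq 0$ at $n_{\rm b}=n_{\text{nuc}}$:

```python
import math

HBARC = 197.327          # MeV fm
M_N = 939.565            # MeV; the text sets m_n = m_p = m_N
N_NUC = 0.16             # fm^-3
E_S, GAMMA_S, F_S = -37.8, 1.31, -0.667   # PC1 values from the table

def phi(x):
    s = math.sqrt(x * x + 1.0)
    return 0.25 * x**3 * s + 0.125 * x * s - 0.125 * math.asinh(x)

def rho_sym(nb):
    """Energy density (MeV fm^-3) of symmetric matter, Y = 1/2: equal
    n and p Fermi seas plus the E_S interaction term (the symmetry
    penalty term vanishes at Y = 1/2; leptons omitted)."""
    n_j = nb / 2.0                                   # per-species density
    pf = HBARC * (3.0 * math.pi**2 * n_j) ** (1.0 / 3.0)
    kin = 2.0 * (M_N**4 / (math.pi**2 * HBARC**3)) * phi(pf / M_N)
    nbar = nb / N_NUC
    inter = N_NUC * E_S * (nbar**2 + F_S * nbar**(GAMMA_S + 1)) / (1 + F_S)
    return kin + inter

E_per_A = rho_sym(N_NUC) / N_NUC - M_N               # binding energy per baryon
h = 1e-4
drho = (rho_sym(N_NUC + h) - rho_sym(N_NUC - h)) / (2 * h)
P = N_NUC * drho - rho_sym(N_NUC)                    # pressure, MeV fm^-3
```

Small residuals are expected from the rounding of the tabulated parameter values.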
\begin{table}
\caption{Different parametrizations of core equation of state and corresponding nuclear compressibility $K$, maximum mass $M_{\text{max}}$, radius at maximum mass $R_{\text{max}}$, central baryon number density for the maximum mass star $n_{b,\text{cntr},\text{max}}$ and the baryon number density at which the sound speed equals the speed of light $n_{b,\text{cl}}$.}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{} & \textbf{PC1} & \textbf{PC2} & \textbf{PC3} & \textbf{PC4} \\
\hline
$E_{\rm S}$ (MeV) & -37.8 & -37.8 & -37.8 & -37.8 \\
\hline
$\gamma_{\rm S}$ & 1.31 & 1.356 & 1.452 & 1.547 \\
\hline
$f_{\rm S}$ & -0.667 & -0.634 & -0.577 & -0.530 \\
\hline
$E_{\rm A}$ (MeV) & 19.9 & 19.9 & 19.9 & 19.9 \\
\hline
$\gamma_{\rm A}$ & 0.61 & 0.61 & 0.61 & 0.61 \\
\hline
$\overline{n}_0$ & 0.05 & 0.05 & 0.05 & 0.05 \\
\hline
$K$ (MeV) & 230 & 240 & 260 & 280 \\
\hline
$M_{\text{max}}/M_{\odot}$ & 2.01 & 2.05 & 2.15 & 2.24 \\
\hline
$R_{\text{max}}$ (km) & 10.23 & 10.34 & 10.62 & 10.88 \\
\hline
$n_{{\rm b},\text{cntr},\text{max}}/n_{\text{nuc}}$ & 7.43 & 7.22 & 6.73 & 6.32 \\
\hline
$n_{{\rm b},\text{cl}}/n_{\text{nuc}}$ & 9.8 & 8.9 & 7.5 & 6.4 \\
\hline
\end{tabular}
\label{tab:EOSParameters}
\end{table}
The pressure is specified by
\begin{equation}
P=n_{\rm b}\frac{\partial \rho}{\partial n_{\rm b}}-\rho
\end{equation}
and the chemical potential by
\begin{equation}
\mu=\frac{\partial \rho}{\partial n_{\rm b}}.
\end{equation}
The individual chemical potentials are calculated using
\begin{equation}
\mu_x=\frac{\partial \rho}{\partial n_x},\quad x={\rm n,p,e},\upmu.
\end{equation}
The background star is assumed to be in beta equilibrium, implying
\begin{equation}
\mu_{\rm n}=\mu_{\rm p}+\mu_{\rm e}=\mu_{\rm p}+\mu_{\upmu} \Rightarrow \mu_{\rm e}=\mu_{\upmu}.
\end{equation}
We find that muons first appear at $n_{\rm b}=0.8n_{\text{nuc}}$ for all four EOS parametrizations that we considered. In Figure~\ref{fig:CoreEoS} (top), we compare our $\rho(n_{\rm b})$ in the core to the BSk19--BSk21 EOSs from~\citet{Potekhin2013}, finding that ours is in good agreement with all three of their EOSs in the lower half of the density range and with the BSk19 and BSk20 EOSs in the higher density region. We also plot the proton fraction $Y$ and $Y_{\rm e}=fY$ as functions of $n_{\rm b}$ in the core (bottom).
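As a simple consistency check (an estimate we add here), the muon threshold follows from the condition $\mu_{\rm e}=m_{\upmu}$: since the electrons are ultrarelativistic, $\mu_{\rm e}=p_{\rm Fe}$, so muons appear once
\begin{equation}
n_{\rm e}=\frac{m_{\upmu}^3}{3\pi^2}\simeq 5.2\times 10^{-3}\text{ fm}^{-3},
\end{equation}
which at $n_{\rm b}=0.8n_{\text{nuc}}=0.128$ fm$^{-3}$ corresponds to $Y_{\rm e}\simeq 0.04$, in line with the electron fractions shown in Figure~\ref{fig:CoreEoS} (bottom).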
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{CoreEOSCombined.pdf}
\caption{Top: Energy density $\rho$ as a function of $n_{\rm b}/n_{\text{nuc}}$ in the core from this paper (RW with PC1 parameters) and the BSk19--BSk21 EOS from~\citet{Potekhin2013}. Bottom: Proton fraction $Y=n_{\rm q}/n_{\rm b}$ and electron fraction $Y_{\rm e}=fY=n_{\rm e}/n_{\rm b}$ in the core for the RW EOS with PC1 parameters. $Y=Y_{\rm e}$ below the muon threshold density $n_{\rm b}/n_{\text{nuc}}=0.8$. }
\label{fig:CoreEoS}
\end{figure}
\subsection{Crust equation of state}
\label{subsec:CrustEoS}
In the inner crust, between neutron drip at mass density $\sim 10^{11}$ g/cm$^3$ and the transition to the core at $\sim 10^{14}$ g/cm$^3$ ($n_{\rm b}\sim 0.5n_{\text{nuc}}$), neutron stars are expected to consist of neutron-rich nuclei surrounded by a dripped neutron gas and a pervasive ultrarelativistic electron gas. Below neutron drip, the outer crust, consisting only of nuclei and an electron gas, is included using the BPS EOS~\citep{Baym1971a}. It is, however, neglected when computing the oscillation modes, as it constitutes less than a hundredth of a percent of the star by mass and thus has a negligible effect on the bulk oscillation modes. The effects of this omission are briefly discussed at the conclusion of Section~\ref{sec:BCs}.
Following~\citet{Baym1971} and~\citet{Haensel2001b}, we consider a liquid drop-type model with spherical nuclei of radius $r_{\rm n}$ inside spherical unit cells of radius $r_{\rm c}$. We do not model exotic shapes or nuclear pasta~\citep{Ravenhall1983,Hashimoto1984,Watanabe2003} at this stage, and we allow the proton and nucleon numbers $Z$ and $A$ to vary continuously. The density of neutrons outside the nuclei is $n_{\rm n,o}$, while the nuclei themselves have baryon density $n_{\rm i}$. The neutron and proton densities inside the nuclei are $n_{\rm n,i}=(1-Y)n_{\rm i}$ and $n_{\rm p,i}=Yn_{\rm i}$ respectively, where in the crust $Y=Z/A$ is the proton fraction of the nuclei and $Z$ and $A$ are defined to include only baryons inside the nuclei. The electron number density is fixed by electric charge neutrality to be equal to the average proton number density over the cell, so
\begin{equation}
n_{\rm e} = wn_{\rm p,i}=wn_{\rm i}Y,
\end{equation}
where $w=(r_{\rm n}/r_{\rm c})^3$ is the fraction of the volume of each cell occupied by the nucleus and $r_{\rm n}=(3A/4\pi n_{\rm i})^{1/3}$. We also define the number density of nuclei $n_{\rm n}$, which is given in terms of the cell radius by
\begin{equation}
n_{\rm n} = \frac{3}{4\pi r_{\rm c}^3},
\end{equation}
so the total baryon number density $n_{\rm b}$ is given by
\begin{equation}
n_{\rm b} = An_{\rm n}+(1-w)n_{\rm n,o}.
\end{equation}
Later, we discuss the fluid oscillations in the crust in terms of the macroscopic motion of two fluids: a free neutron superfluid and a normal fluid of nuclei. In the fluid equations, we use the mean density of free neutrons outside the nuclei $n_{\rm f}\equiv(1-w)n_{\rm n,o}$ and the mean density of nuclear baryons $n_{\rm c}\equiv An_{\rm n}$. In terms of these densities, the total baryon number density is
\begin{equation}
n_{\rm b}=n_{\rm f}+n_{\rm c};
\end{equation}
note too that $w=n_{\rm c}/n_{\rm i}$.
We write the energy density for the inner crust in terms of the five variables $n_{\rm f}$, $n_{\rm c}$, $n_{\rm i}$, $A$ and $Y$. The energy density for the inner crust has five components: bulk energy densities for the nuclei $\rho_{i,\text{bulk}}$, surrounding neutron gas $\rho_{{\rm o},\text{bulk}}$, and electron gas $\rho_{\rm e}$, Coulomb energy density $\rho_{\text{Coul}}$ including the self-energy of the nuclei and the lattice energy, and surface energy density $\rho_{\text{surf}}$. The bulk energy density for nuclear matter, the neutron gas and the electron gas have the same form as in the core, discussed in the previous section. We then have
\begin{align}
\rho(n_{\rm f},n_{\rm c},A,Y,n_{\rm i}) ={}& w\rho_{{\rm i},\text{bulk}}(n_{\rm i},Y)+(1-w)\rho_{{\rm o},\text{bulk}}(n_{\rm f}/(1-w))\nonumber \\
{}&+\rho_{\rm e}(Yn_{\rm c})+n_{\rm n}E_{\text{Coul}}(n_{\rm c},n_{\rm i},A,Y)\nonumber \\
{}&+n_{\rm n}E_{\text{surf}}(n_{\rm i},A,Y), \label{eq:CrustEnergyDensity}
\end{align}
where
\begin{align}
{}&\rho_{{\rm i},\text{bulk}}(n_{\rm i},Y)=\frac{m_{\rm N}^4}{\pi^2}\left[\phi\left(\frac{p_{\rm Fi}}{m_{\rm N}}Y^{1/3}\right)+\phi\left(\frac{p_{\rm Fi}}{m_{\rm N}}(1-Y)^{1/3}\right)\right] \nonumber \\
&\quad +n_{\text{nuc}}E_{\rm S}\frac{\overline{n}_{\rm i}^2+f_{\rm S}\overline{n}_{\rm i}^{\gamma_{\rm S}+1}}{1+f_{\rm S}}+n_{\text{nuc}}E_{\rm A}\overline{n}_{\rm i}^2\left(\frac{\overline{n}_{\rm i}+\overline{n}_{\rm o}}{1+\overline{n}_{\rm o}}\right)^{\gamma_{\rm A}-1}(1-2Y)^2, \\
{}&\rho_{{\rm o},\text{bulk}}(n_{\rm n,o})=\frac{m_{\rm N}^4}{\pi^2}\phi\left(\frac{p_{\rm Fo}}{m_{\rm N}}\right)+n_{\text{nuc}}E_{\rm S}\frac{\overline{n}_{\rm n,o}^2+f_{\rm S}\overline{n}_{\rm n,o}^{\gamma_{\rm S}+1}}{1+f_{\rm S}} \nonumber \\
&\qquad +n_{\text{nuc}}E_{\rm A}\overline{n}^2_{\rm n,o}\left(\frac{\overline{n}_{\rm n,o}+\overline{n}_{\rm o}}{1+\overline{n}_{\rm o}}\right)^{\gamma_{\rm A}-1}, \\
{}&\rho_{\rm e}(n_{\rm e}=Yn_{\rm c})=\frac{(3\pi^2n_{\rm e})^{4/3}}{4\pi^2}, \\
{}&E_{\text{Coul}}(n_{\rm c},n_{\rm i},A,Y)=\frac{16}{15}(\pi Yn_{\rm i}e)^2r_{\rm n}^5\left[1-\frac{3}{2}w^{1/3}+\frac{1}{2}w\right], \\
{}&E_{\text{surf}}(n_{\rm i},A,Y)=4\pi r_{\rm n}^2\sigma_{\rm s}(Y),
\end{align}
where $p_{\rm Fi}=(3\pi^2n_{\rm i})^{1/3}$, $n_{\rm n,o}=n_{\rm f}/(1-w)$, $p_{\rm Fo}=(3\pi^2n_{\rm n,o})^{1/3}$, $\overline{n}_{\rm i}=n_{\rm i}/n_{\text{nuc}}$, $\overline{n}_{\rm n,o}=n_{\rm n,o}/n_{\text{nuc}}$ and $\sigma_{\rm s}$ denotes the nuclear surface energy. We assume that the electrons are ultrarelativistic down to neutron drip and ignore the neutron-proton mass difference here.
Following~\citet{Ravenhall1983a} and~\citet{Lattimer1985}, we take the surface energy $\sigma_{\rm s}$ to be a function only of the proton fraction $Y$ at zero temperature, and use the parametrization
\begin{equation}
\sigma_{\rm s}(Y)=\frac{\sigma_0(2^{\alpha+1}+\beta)}{Y^{-\alpha}+\beta+(1-Y)^{-\alpha}} .
\label{eq:SurfaceEnergy}
\end{equation}
We selected parameters $\sigma_0$, $\alpha$, $\beta$ which give an approximately constant proton number $Z\approx 40$ throughout the density range of the inner crust, as is found in more detailed calculations of the inner crust equation of state \citep{Douchin2000,Onsi2008,Pearson2012,Potekhin2013}:
\begin{align*}
&\sigma_0 = 1.4\text{ MeV/fm}^2, \\
&\alpha = 3, \\
&\beta = 24.
\end{align*}
The values of $\alpha$ and $\beta$ are close to the corresponding parameters in~\citet{Ravenhall1983a}, while our $\sigma_0$ is $\approx 50$\% larger than theirs.
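The parametrization can be evaluated directly; the following sketch (ours, not the authors' code) uses the standard Ravenhall--Lattimer form, in which $\beta$ also enters the denominator so that $\sigma_{\rm s}(1/2)=\sigma_0$ exactly:

```python
SIGMA_0, ALPHA, BETA = 1.4, 3.0, 24.0   # sigma_0 in MeV/fm^2; others dimensionless

def sigma_s(Y):
    """Nuclear surface energy per unit area (MeV/fm^2), Ravenhall--Lattimer
    form with beta in the denominator, normalized so sigma_s(1/2) = sigma_0."""
    return SIGMA_0 * (2.0**(ALPHA + 1) + BETA) / (
        Y**(-ALPHA) + BETA + (1.0 - Y)**(-ALPHA))

# Maximal for symmetric nuclei; vanishes as Y -> 0, suppressing the
# surface term for very neutron-rich nuclei.
assert abs(sigma_s(0.5) - SIGMA_0) < 1e-12
assert sigma_s(0.3) < sigma_s(0.5)
```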
For a general change of state, the change in the energy density is
\begin{align}
{\rm d}\rho={}&\left.\frac{\partial\rho}{\partial A}\right|_{n_{\rm f},n_{\rm c},n_{\rm i},Y}{\rm d}A+\left.\frac{\partial\rho}{\partial Y}\right|_{n_{\rm f},n_{\rm c},n_{\rm i},A}{\rm d}Y \nonumber \\
{}&+\left.\frac{\partial\rho}{\partial n_{\rm i}}\right|_{n_{\rm f},n_{\rm c},A,Y}{\rm d}n_{\rm i}+\left.\frac{\partial\rho}{\partial n_{\rm c}}\right|_{n_{\rm f},n_{\rm i},A,Y}{\rm d}n_{\rm c} \nonumber \\
{}&+\left.\frac{\partial\rho}{\partial n_{\rm f}}\right|_{n_{\rm c},n_{\rm i},A,Y}{\rm d}n_{\rm f}.
\end{align}
At fixed $n_{\rm b}$, ${\rm d}n_{\rm c}+{\rm d}n_{\rm f}=0$, so this becomes
\begin{align}
&{\rm d}\rho=\left.\frac{\partial\rho}{\partial A}\right|_{n_{\rm f},n_{\rm c},n_{\rm i},Y}{\rm d}A+\left.\frac{\partial\rho}{\partial Y}\right|_{n_{\rm f},n_{\rm c},n_{\rm i},A}{\rm d}Y \nonumber \\
&+\left.\frac{\partial\rho}{\partial n_{\rm i}}\right|_{n_{\rm f},n_{\rm c},A,Y}{\rm d}n_{\rm i}+\left(\left.\frac{\partial\rho}{\partial n_{\rm c}}\right|_{n_{\rm f},n_{\rm i},A,Y}-\left.\frac{\partial\rho}{\partial n_{\rm f}}\right|_{n_{\rm c},n_{\rm i},A,Y}\right){\rm d}n_{\rm c}.
\end{align}
The ``nuclear virial theorem'' \citep{Haensel2001b} and pressure balance correspond to the conditions
\begin{equation}
\left.\frac{\partial\rho}{\partial A}\right|_{n_{\rm f},n_{\rm c},n_{\rm i},Y}=
\left.\frac{\partial\rho}{\partial n_{\rm i}}\right|_{n_{\rm f},n_{\rm c},A,Y} = 0, \label{eq:NVTandPB}
\end{equation}
respectively, while the condition that there is no energy associated with exchanging neutrons between the nuclei and the external free neutron gas (henceforth the ``exchange condition'') is
\begin{equation}
\left.\frac{\partial\rho}{\partial n_{\rm c}}\right|_{n_{\rm f},n_{\rm i},A,Y}-\left.\frac{\partial\rho}{\partial n_{\rm f}}\right|_{n_{\rm c},n_{\rm i},A,Y}-\frac{Y}{n_{\rm c}}\left.\frac{\partial\rho}{\partial Y}\right|_{n_{\rm f},n_{\rm c},n_{\rm i},A}=0 \label{eq:Exchange1}
\end{equation}
since the proton number density $Yn_{\rm c}$ is unchanged by the exchange of neutrons. Beta equilibrium is simply
\begin{equation}
\left.\frac{\partial\rho}{\partial Y}\right|_{n_{\rm f},n_{\rm c},n_{\rm i},A}=0, \label{eq:BetaEq}
\end{equation}
so Eq.~(\ref{eq:Exchange1}) becomes
\begin{equation}
\left.\frac{\partial\rho}{\partial n_{\rm c}}\right|_{n_{\rm f},n_{\rm i},A,Y}-\left.\frac{\partial\rho}{\partial n_{\rm f}}\right|_{n_{\rm c},n_{\rm i},A,Y}=0. \label{eq:Exchange2}
\end{equation}
Imposing the four conditions in Eqs.~(\ref{eq:NVTandPB}), (\ref{eq:BetaEq}) and (\ref{eq:Exchange2}), we can determine the values of $n_{\rm f}$, $n_{\rm c}$, $n_{\rm i}$, $A$ and $Y$ at each $n_{\rm b}$, and thus compute the energy density $\rho$ and pressure $P$, which is given by
\begin{equation}
P = P_{\rm e}+P_{\text{Coul}}+P_{{\rm o},\text{bulk}}=\frac{1}{3}\rho_{\rm e}+\frac{n_{\rm c}^2}{A}\frac{\partial E_{\text{Coul}}}{\partial n_{\rm c}}+P_{{\rm o},\text{bulk}}.
\label{eq:CrustPressure}
\end{equation}
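The $P_{\rm e}$ term in Eq.~(\ref{eq:CrustPressure}) follows from the ultrarelativistic scaling $\rho_{\rm e}\propto n_{\rm e}^{4/3}$:
\begin{equation}
P_{\rm e}=n_{\rm e}\frac{\partial\rho_{\rm e}}{\partial n_{\rm e}}-\rho_{\rm e}=\frac{4}{3}\rho_{\rm e}-\rho_{\rm e}=\frac{1}{3}\rho_{\rm e}.
\end{equation}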
We will also require the chemical potentials for each fluid $\mu_{\rm f}$ and $\mu_{\rm c}$, given by
\begin{align}
\mu_{\rm f}=\left.\frac{\partial\rho}{\partial n_{\rm f}}\right|_{n_{\rm c},n_{\rm i},A,Y}={}&(1-w)\frac{\partial\rho_{{\rm o},\text{bulk}}}{\partial n_{\rm n,o}}\frac{\partial n_{\rm n,o}}{\partial n_{\rm f}}=\mu_{\rm n,o}, \label{eq:MuFUnsimplified} \\
\mu_{\rm c}=\left.\frac{\partial\rho}{\partial n_{\rm c}}\right|_{n_{\rm f},n_{\rm i},A,Y}={}&Y\mu_{\rm e}+\frac{P_{{\rm o},\text{bulk}}+\rho_{{\rm i},\text{bulk}}}{n_{\rm i}} \nonumber \\
{}&+\frac{16}{15}(\pi Yn_{\rm i}e)^2\frac{Y}{A}r_{\rm n}^5\left[3-5w^{1/3}+2w\right]. \label{eq:MuCUnsimplified}
\end{align}
Using the four equilibrium conditions, one can show from Eqs.~(\ref{eq:MuFUnsimplified})--(\ref{eq:MuCUnsimplified}) that $\mu_{\rm c}=\mu_{\rm f}$ in equilibrium.
For this inner crust equation of state, neutron drip occurs at $2.7$--$2.8\times 10^{11}$ g/cm$^3$, or $n_{\rm b}=0.00103$--$0.00106\,n_{\text{nuc}}$, depending on the parametrization of the nuclear physics. The core and crust equations of state are joined where their pressures and chemical potentials are equal. This occurs at different densities for each EOS parametrization described in Table~\ref{tab:EOSParameters}, with the pressure, chemical potential and densities on the core and crust sides of the transition for each parametrization listed in Table~\ref{tab:CCTransition}.
\begin{table}
\caption{Pressure $P$, chemical potential $\mu$ and baryon number density $n_{\rm b}$ at the core ($+$) and crust ($-$) sides of the crust-core transition for each EOS parametrization employed in this paper. The size of the density jump as a percentage of the baryon number density at the transition is also listed.}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{} & \textbf{PC1} & \textbf{PC2} & \textbf{PC3} & \textbf{PC4} \\
\hline
$P$ (MeV/fm$^3$) & 0.215 & 0.225 & 0.239 & 0.249 \\
\hline
$\mu/m_{\rm N}$ & 1.0108 & 1.0112 & 1.0117 & 1.0121 \\
\hline
$n_{\rm b}^+/n_{\text{nuc}}$ & 0.399 & 0.413 & 0.437 & 0.458 \\
\hline
$n_{\rm b}^-/n_{\text{nuc}}$ & 0.394 & 0.407 & 0.431 & 0.451 \\
\hline
$\Delta n_{\rm b}/n_{\rm b}$ (\%) & 1.2 & 1.4 & 1.3 & 1.5 \\
\hline
\end{tabular}
\label{tab:CCTransition}
\end{table}
The energy density and pressure for our EOS in the crust (Figure~\ref{fig:CrustEOS}, top) are within 10--20\% of those found in more detailed calculations such as~\citet{Pearson2012} and~\citet{Potekhin2013}. The bottom panel shows $A$, $Z$ and $n_{\rm c}/n_{\rm b}$ as functions of $n_{\rm b}/n_{\text{nuc}}$ in the crust. The values of $Z$ closely match those of~\citet{Ravenhall1972}, including the dip downward just before the transition to the core. Our values of $n_{\rm c}/n_{\rm b}$ are in agreement with~\citet{Kobyakov2016}; our values of $A$ are typically greater than theirs by a factor of $1.5$, though the difference increases as the crust-core transition is approached, while our values are typically a factor of $4$ lower than those of~\citet{Pearson2012}.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{CrustEOSCombined.pdf}
\caption{Top: Mass-energy density $\rho$ and pressure $P$ as functions of $n_{\rm b}/n_{\text{nuc}}$ across the density range of the inner crust for the RW EOS with PC1 parametrization. Bottom: Nucleon number $A$, proton number $Z$ and the ratio $n_{\rm c}/n_{\rm b}$ as functions of $n_{\rm b}/n_{\text{nuc}}$ across the density range of the inner crust for the same EOS. The maximum density in the crust before the transition to the core is $n_{\rm b}/n_{\text{nuc}}=0.394$ for the PC1 parametrization.}
\label{fig:CrustEOS}
\end{figure}
Figure~\ref{fig:MvsR} compares the mass-radius relation for two different parametrizations of the two-part EOS described here (RW) to a few other representative neutron star equations of state. The RW EOS uses the BPS EOS~\citep{Baym1971a} for densities below neutron drip. The radius, radius at neutron drip and central density for each EOS parametrization and stellar mass used in the rest of this paper are described in Table~\ref{tab:StellarModelProperties}.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{MvsR}
\caption{Neutron star mass-radius plot for the EOS described in this paper with two parametrizations as given by Table~\ref{tab:EOSParameters} (RW PC1 and RW PC4) and three equations of state of increasing stiffness from both~\citet{Hebeler2013} (denoted H1, H2, H3) and~\citet{Potekhin2013} (denoted BSk19, BSk20, BSk21).}
\label{fig:MvsR}
\end{figure}
\begin{table}
\caption{List of radius $R$, radius at neutron drip $R_{\text{ND}}$, and central density $n_{b,\text{cntr}}$ for each stellar mass and parametrization choice employed in the calculation of compressional mode frequencies in this paper.}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{PC1} & $\mathbf{1.2M_{\odot}}$ & $\mathbf{1.4M_{\odot}}$ & $\mathbf{1.7M_{\odot}}$ & $\mathbf{2M_{\odot}}$ \\
\hline
$R$ (km) & 12.24 & 12.11 & 11.75 & 10.54 \\
\hline
$R_{\text{ND}}$ (km) & 11.72 & 11.70 & 11.47 & 10.39 \\
\hline
$n_{\rm b,\text{cntr}}/n_{\text{nuc}}$ & 2.73 & 3.17 & 4.07 & 6.67 \\
\hline
\textbf{PC2} & $\mathbf{1.2M_{\odot}}$ & $\mathbf{1.4M_{\odot}}$ & $\mathbf{1.7M_{\odot}}$ & $\mathbf{2M_{\odot}}$ \\
\hline
$R$ (km) & 12.31 & 12.21 & 11.90 & 11.06 \\
\hline
$R_{\text{ND}}$ (km) & 11.78 & 11.79 & 11.62 & 10.89 \\
\hline
$n_{\rm b,\text{cntr}}/n_{\text{nuc}}$ & 2.66 & 3.05 & 3.86 & 5.61 \\
\hline
\textbf{PC3} & $\mathbf{1.2M_{\odot}}$ & $\mathbf{1.4M_{\odot}}$ & $\mathbf{1.7M_{\odot}}$ & $\mathbf{2M_{\odot}}$ \\
\hline
$R$ (km) & 12.48 & 12.43 & 12.23 & 11.71 \\
\hline
$R_{\text{ND}}$ (km) & 11.94 & 11.99 & 11.92 & 11.50 \\
\hline
$n_{\rm b,\text{cntr}}/n_{\text{nuc}}$ & 2.49 & 2.82 & 3.46 & 4.55 \\
\hline
\textbf{PC4} & $\mathbf{1.2M_{\odot}}$ & $\mathbf{1.4M_{\odot}}$ & $\mathbf{1.7M_{\odot}}$ & $\mathbf{2M_{\odot}}$ \\
\hline
$R$ (km) & 12.63 & 12.62 & 12.50 & 12.14 \\
\hline
$R_{\text{ND}}$ (km) & 12.07 & 12.17 & 12.17 & 11.92 \\
\hline
$n_{\rm b,\text{cntr}}/n_{\text{nuc}}$ & 2.35 & 2.64 & 3.16 & 3.96 \\
\hline
\end{tabular}
\label{tab:StellarModelProperties}
\end{table}
\section{Fluid dynamics}
\label{sec:FluidDynamics}
\subsection{Two-fluid formalism in the core}
We now derive the equations of motion for the perturbations of a two-superfluid neutron star that we use to compute its normal modes. We first consider the core; the same model is generalized to the crust in Sections~\ref{subsec:CrustEoM} and~\ref{subsec:SFCrustEoM}. We work at zero temperature, so there are no normal fluid neutron or proton components, and include general relativity, but work in the Cowling approximation and so ignore perturbations of the metric.
In a core composed of superfluid neutrons and protons and normal fluid electrons and muons, the leptons will move along with the protons since the plasma frequency $\sim10^{22}$ s$^{-1}$ is much greater than the frequencies of the compressional modes. We thus have two independently moving fluids: a neutron superfluid and a charged fluid. This differs from the commonly-chosen separation of the fluid into normal and superfluid components, though the formulation used here has a number of advantages which illuminate the underlying physics. First, it makes clear the role of the leptonic buoyancy, which exists only in the charged fluid. It also reveals the significance of thermodynamic coupling and entrainment between the two fluids, with the equations describing the motion of the fluids becoming completely uncoupled if these quantities are zero, as is demonstrated at the end of this section. At densities above $2$--$3n_{\text{nuc}}$, the $s$-wave pairing energy gap for protons may go to zero~\citep{Zhou2004,Baldo2007}, meaning that the proton fluid can be a normal fluid in the inner core, but the equations of motion for the charged fluid will remain unchanged in this case, and the charged fluid and neutrons would still behave as two separately-moving fluids as long as the neutrons remain superfluid. Note that both methods should be equivalent, and the normal fluid and superfluid displacement modes can be reconstructed by taking the appropriate superposition of the neutron and charged fluid modes.
We assume that neutrons remain superfluid throughout the core; calculations summarized in Fig. 2 of~\citet{Gezerlis2014} support the idea that the neutron gap does not vanish at any density below at least $4.2n_{\text{nuc}}$, which includes all neutron stars less massive than about $1.7M_{\odot}$ for our adopted equation of state. Moreover, these calculations suggest that neutron superfluidity would be maintained out to the crust-core boundary for core temperatures below $\simeq 3\times 10^8$K; the model used in KG14 is similar. If neutrons become normal somewhere inside the core, their coupling to electrons~\citep{Bertoni2015} suffices to merge the two fluids into a single fluid, irrespective of whether the protons are superfluid. Thus, if neutrons were entirely normal throughout the core, then $g$-modes would arise from a combination of the leptonic buoyant force associated with the gradient of $f$ and the buoyant force associated with the gradient of $Y$~\citep{Reisenegger1992}, but overall their frequencies would be lower than when neutrons are superfluid; see Section~\ref{sec:BVFreq}, especially Eqs.~(\ref{eq:NFBruntVaisalaFrequency}--\ref{eq:YGradientBruntVaisalaFrequency}). However, if the neutrons are only normal deep inside the core of the neutron star, then the $g$-modes arising from leptonic buoyancy, which is only substantial near the outer boundary of the core, would be largely unaffected. Thus, the $g$-mode frequency spectrum is, in principle, a probe of neutron superfluidity in the cores of neutron stars.
We specify the neutron superfluid four-velocity $u^{\mu}_{\rm n}$ and number density $n_{\rm n}$, and charged fluid four-velocity $u^{\mu}_{\rm q}$ and number density $n_{\rm q}=n_{\rm p}=n_{\rm e}+n_{\upmu}$. We can rewrite the energy density $\rho$ as a function of $n_{\rm n}$, $n_{\rm q}$ and the electron fraction $f=n_{\rm e}/n_{\rm q}$;
\begin{equation}
\rho(n_{\rm n},n_{\rm q},f)=\rho_{\text{nuc}}(n_{\rm n},n_{\rm q})+\rho_{\rm e}(n_{\rm q}f)+\rho_{\upmu}(n_{\rm q}(1-f)),
\end{equation}
where $\rho_{\text{nuc}}$ includes both the kinetic and interaction contributions relating to the nucleons. This gives two chemical potentials
\begin{align}
\mu_{\rm n}= & \frac{\partial\rho}{\partial n_{\rm n}}=\frac{\partial\rho_{\text{nuc}}}{\partial n_{\rm n}}, \\
\mu_{\rm q}= & \frac{\partial\rho}{\partial n_{\rm q}}=\frac{\partial\rho_{\text{nuc}}}{\partial n_{\rm q}}+\frac{\partial\rho_{\rm e}}{\partial n_{\rm e}}\frac{\partial n_{\rm e}}{\partial n_{\rm q}}+\frac{\partial\rho_{\upmu}}{\partial n_{\upmu}}\frac{\partial n_{\upmu}}{\partial n_{\rm q}},
\label{eq:ChargedChemicalPotential}
\end{align}
which are equal in beta equilibrium.
The motion of the two fluids is described by the relativistic Euler equations
~\citep{Carter1998,Andersson2007}
\begin{align}
0={}&u_{\rm n}^{\rho}\nabla_{\rho}(\mu_{\rm n}u^{\rm n}_{\sigma})+\nabla_{\sigma}\mu_{\rm n}-2u_{\rm n}^{\rho}\nabla_{[\rho}(\mu_{\rm n}\epsilon_{\rm n}W_{\sigma]}), \label{eq:EulerN} \\
0={}&u_{\rm q}^{\rho}\nabla_{\rho}(\mu_{\rm q}u_{\sigma}^{\rm q})+\nabla_{\sigma}\mu_{\rm q}+(\mu_{\upmu}-\mu_{\rm e})\nabla_{\sigma}f+2u_{\rm q}^{\rho}\nabla_{[\rho}(\mu_{\rm n}\epsilon_{\rm p}W_{\sigma]}), \label{eq:EulerQ}
\end{align}
and the continuity equations
\begin{align}
\nabla_{\rho}(n_{\rm n}u^{\rho}_{\rm n})=0, \\
\nabla_{\rho}(n_{\rm q}u^{\rho}_{\rm q})=0,
\end{align}
where $W_{\sigma}=u^{\rm n}_{\sigma}-u^{\rm q}_{\sigma}$, and $\epsilon_{\rm n}$ and $\epsilon_{\rm p}$ are defined to parameterize the entrainment; they are related by
\begin{equation}
n_{\rm q}\epsilon_{\rm p} = n_{\rm n}\epsilon_{\rm n}.
\end{equation}
The entrainment parameters $\epsilon_{\rm p}$ and $\epsilon_{\rm n}$ here are dimensionless and are the same as those of~\citet{Prix2002}. We vary the parameter $\epsilon_{\rm p}$ to adjust the strength of the entrainment, noting that the effective mass of the proton $m_{\rm p}^*$ is often related to $\epsilon_{\rm p}$ via
\begin{equation}
\epsilon_{\rm p}=1-\frac{m_{\rm p}^*}{m_{\rm N}}.
\end{equation}
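As a quick numerical illustration of these relations (the effective mass and densities below are arbitrary sample values, not taken from our equation of state), $\epsilon_{\rm p}$ follows from $m_{\rm p}^*$ and $\epsilon_{\rm n}$ then follows from the constraint $n_{\rm q}\epsilon_{\rm p}=n_{\rm n}\epsilon_{\rm n}$:

```python
# Entrainment parameter relations; sample values only (not from our EOS).
def eps_p_from_effective_mass(m_p_star, m_N=939.57):
    """eps_p = 1 - m_p*/m_N, masses in MeV."""
    return 1.0 - m_p_star / m_N

def eps_n_from_eps_p(eps_p, n_n, n_q):
    """n_q eps_p = n_n eps_n  =>  eps_n = (n_q/n_n) eps_p."""
    return n_q * eps_p / n_n

eps_p = eps_p_from_effective_mass(m_p_star=0.8 * 939.57)  # m_p* = 0.8 m_N
eps_n = eps_n_from_eps_p(eps_p, n_n=0.45, n_q=0.05)       # densities in fm^-3
```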
The term in Eq.~(\ref{eq:EulerQ}) $\propto \nabla_{\sigma} f$ is responsible for the leptonic buoyancy; in the outer regions of the core, where muons are absent and hence $f=1$, this term vanishes.
We now calculate the equations of motion for perturbations to a spherically-symmetric, static background in chemical equilibrium. The metric in Schwarzschild coordinates is
\begin{equation}
{\rm d}s^2=-{\rm e}^{\nu(r)}{\rm d}t^2+{\rm e}^{\lambda(r)}{\rm d}r^2+r^2({\rm d}\theta^2+\sin^2\theta {\rm d}\phi^2),
\end{equation}
where ${\rm e}^{\lambda(r)}=(1-2M(r)/r)^{-1}$, $M(r)$ is the mass enclosed within radius $r$, and ${\rm e}^{\nu(r)}$ is determined using the gravitational redshift formula $\mu(r)\sqrt{-g_{00}}=\text{constant}$, so
\begin{equation}
{\rm e}^{\nu(r)}=(-g_{00})=\left(\frac{m_{N,\text{Fe}-56}}{\mu_0(r)}\right)^2\left(1-\frac{2M}{R}\right),
\end{equation}
where $R$ is the coordinate radius of the star, $M=M(R)$ its total mass computed using the equation of state and the TOV equation and $m_{N,\text{Fe}-56}$ is the mass per baryon of an iron-56 nucleus.
Since the velocities under consideration are much less than the speed of light, we can ignore relativistic gamma factors and write the fluid four-velocities as
\begin{equation}
u^{\mu}_a=\frac{{\rm d}x^{\mu}_a}{{\rm d}\tau}\approx {\rm e}^{-\nu/2}\frac{{\rm d}x^{\mu}_a}{{\rm d}t}.
\label{eq:FourVelocityDefinition}
\end{equation}
For a stationary background, $u_a^{\mu}={\rm e}^{-\nu/2}(1,0,0,0)$. The velocity of the perturbation $\delta u_a^{\mu}$ to first order in perturbation theory and in the Cowling approximation is thus given by \citep{Andersson2007}
\begin{align}
\delta u_a^{\mu} = {}&(\delta^{\mu}_{\nu}+u_a^{\mu}u^a_{\nu})(u_a^{\sigma}\nabla_{\sigma}\overline{\xi}_a^{\nu}-\overline{\xi}_a^{\sigma}\nabla_{\sigma}u_a^{\nu}) \nonumber \\
={}&(\delta^{\mu}_{\nu}-\delta^{\mu}_0\delta^0_{\nu})\left({\rm e}^{-\nu/2}\nabla_{0}\overline{\xi}_a^{\nu}-\overline{\xi}_a^{\sigma}\nabla_{\sigma}{\rm e}^{-\nu/2}\delta^{\nu}_0\right)
\end{align}
for Lagrangian displacement fields $\overline{\xi}_a^{\mu}$ defined in a coordinate basis. We set $\overline{\xi}_a^0=0$ using the gauge freedom within the definition of $\overline{\xi}^{\mu}_a$. Taking the Eulerian perturbation of Eq.~(\ref{eq:EulerQ}) and considering its spatial components $\sigma=i=1,2,3$, we obtain to first order in perturbation theory
\begin{align}
0={}&{\rm e}^{-\nu}\partial^2_t\overline{\xi}_{\rm q}^i+{\rm e}^{-\nu}\epsilon_{\rm p}\partial^2_t(\overline{\xi}^i_{\rm n}-\overline{\xi}_{\rm q}^i)+g^{ii}\partial_i\left(\frac{\delta\mu_{\rm q}}{\mu_0}\right) \nonumber \\
{}&+\frac{(\delta\mu_{\upmu}-\delta\mu_{\rm e})}{\mu_0}g^{ii}\partial_i f,
\label{eq:PerturbedEulerQ} \\
0={}&{\rm e}^{-\nu}\partial^2_t\overline{\xi}_{\rm n}^i+{\rm e}^{-\nu}\epsilon_{\rm n}\partial^2_t(\overline{\xi}^i_{\rm q}-\overline{\xi}_{\rm n}^i)+g^{ii}\partial_i\left(\frac{\delta\mu_{\rm n}}{\mu_0}\right),
\label{eq:PerturbedEulerN}
\end{align}
where $\mu_0$ is the common background equilibrium chemical potential. The perturbed continuity equations take the same form for both fluids:
\begin{align}
\delta n_a = -n_a\Theta_a-\overline{\xi}^r_a\frac{{\rm d}n_a}{{\rm d}r},
\end{align}
where we have defined
\begin{align}
\Theta_a = \frac{1}{\sqrt{-g}}\frac{\partial(\sqrt{-g}\overline{\xi}_a^i)}{\partial x^i}.
\label{eq:CovariantDivergence}
\end{align}
Since we consider nonrotating stars, spherical symmetry is preserved and the normal modes are spheroidal/poloidal. The displacement field for such a mode in the orthonormal tetrad is
\begin{equation}
\boldsymbol{\xi}_a={\rm e}^{i\omega t}\left[\xi^r_a(r)Y_{lm}(\theta,\phi)\hat{\mathbf{e}}_r+\xi^{\perp}_a(r)r\nabla Y_{lm}(\theta,\phi)\right] \quad a={\rm n,q},
\label{eq:DisplacementField}
\end{equation}
where $Y_{lm}(\theta,\phi)$ are the usual spherical harmonics and we use the usual orthonormal basis vectors. $\omega$ is the angular frequency of the oscillation as observed far from the star. In the coordinate basis, the components of $\overline{\xi}^i_a$ are
\begin{align}
\overline{\xi}^r_a ={}& {\rm e}^{-\lambda/2}\xi^r_a(r)Y_{lm}(\theta,\phi){\rm e}^{i\omega t}, \label{eq:CoordinateBasisXir}\\
\overline{\xi}^{\theta}_a ={}& \xi^{\perp}_a(r)\frac{1}{r}\partial_{\theta}Y_{lm}(\theta,\phi){\rm e}^{i\omega t}, \label{eq:CoordinateBasisXitheta}\\
\overline{\xi}^{\phi}_a ={}& \xi^{\perp}_a(r)\frac{1}{r\sin\theta}\partial_{\phi}Y_{lm}(\theta,\phi){\rm e}^{i\omega t}. \label{eq:CoordinateBasisXiphi}
\end{align}
To compute the buoyant term in Eq.~(\ref{eq:PerturbedEulerQ}), we use
\begin{equation}
\frac{\partial\rho}{\partial f}=\frac{\partial \rho_{\rm e}}{\partial n_{\rm e}}\frac{\partial n_{\rm e}}{\partial f}+\frac{\partial \rho_{\upmu}}{\partial n_{\upmu}}\frac{\partial n_{\upmu}}{\partial f}=n_{\rm q}(\mu_{\rm e}-\mu_{\upmu});
\end{equation}
then
\begin{align}
\delta\mu_{\upmu}-\delta\mu_{\rm e}= & -\delta\left(\frac{1}{n_{\rm q}}\frac{\partial\rho}{\partial f}\right) \nonumber \\
= & -\frac{1}{n_{\rm q}}\left(\frac{\partial^2\rho}{\partial f^2}\delta f+\frac{\partial\mu_{\rm q}}{\partial f}\delta n_{\rm q} \right) \nonumber \\
= & \mu_{\rm qf}\Theta_{\rm q}, \label{eq:EulerianPertLeptonChemPotDiff}
\end{align}
where we have defined the thermodynamic derivatives
\begin{equation}
\mu_{ab}\equiv\frac{\partial\mu_a}{\partial n_{\rm b}} \quad a,b\in\{{\rm n,q}\}; \quad\mu_{\rm qf}\equiv\frac{\partial\mu_{\rm q}}{\partial f},\label{eq:MuAB}
\end{equation}
where $\mu_{\rm nq}=\mu_{\rm qn}$. In terms of the total baryon density $n_{\rm b}=n_{\rm n}+n_{\rm q}$ and the proton fraction $Y=n_{\rm q}/n_{\rm b}$, these are explicitly
\begin{align}
\mu_{\rm n} &=\frac{\partial\rho}{\partial n_{\rm b}}-\frac{Y}{n_{\rm b}}\frac{\partial\rho}{\partial Y}, \\
\mu_{\rm q} &=\frac{\partial\rho}{\partial n_{\rm b}}+\frac{(1-Y)}{n_{\rm b}}\frac{\partial\rho}{\partial Y}, \\
\mu_{\rm nn} &=\frac{\partial^2\rho}{\partial n_{\rm b}^2}-\frac{2Y}{n_{\rm b}}\frac{\partial^2\rho}{\partial n_{\rm b}\partial Y}+\frac{Y^2}{n_{\rm b}^2}\frac{\partial^2\rho}{\partial Y^2}, \label{eq:Munn} \\
\mu_{\rm qq} &=\frac{\partial^2\rho}{\partial n_{\rm b}^2}+\frac{2(1-Y)}{n_{\rm b}}\frac{\partial^2\rho}{\partial n_{\rm b}\partial Y}+\frac{(1-Y)^2}{n_{\rm b}^2}\frac{\partial^2\rho}{\partial Y^2}, \label{eq:Muqq}\\
\mu_{\rm nq} &=\frac{\partial^2\rho}{\partial n_{\rm b}^2}+\frac{(1-2Y)}{n_{\rm b}}\frac{\partial^2\rho}{\partial n_{\rm b}\partial Y}-\frac{Y(1-Y)}{n_{\rm b}^2}\frac{\partial^2\rho}{\partial Y^2}. \label{eq:Munq}
\end{align}
Henceforth, we define
\begin{equation}
\Pi_a=\Pi_a(r)Y_{lm}(\theta,\phi)\equiv\frac{\delta\mu_a}{\mu_0},
\label{eq:PiDefinition}
\end{equation}
in terms of which the Euler equations are
\begin{align}
&\omega^2{\rm e}^{-\nu}(1-\epsilon_{\rm n})\xi^r_{\rm n}+\omega^2{\rm e}^{-\nu}\epsilon_{\rm n}\xi^r_{\rm q}={\rm e}^{-\lambda/2}\frac{{\rm d}\Pi_{\rm n}}{{\rm d}r}, \label{eq:XiRN}\\
&\omega^2{\rm e}^{-\nu}r(1-\epsilon_{\rm n})\xi^{\perp}_{\rm n}+\omega^2{\rm e}^{-\nu}r\epsilon_{\rm n}\xi^{\perp}_{\rm q}=\Pi_{\rm n}, \label{eq:XiPerpN} \\
&\omega^2{\rm e}^{-\nu}(1-\epsilon_{\rm p})\xi^r_{\rm q}+\omega^2{\rm e}^{-\nu}\epsilon_{\rm p}\xi^r_{\rm n}={\rm e}^{-\lambda/2}\frac{{\rm d}\Pi_{\rm q}}{{\rm d}r}+{\rm e}^{-\lambda/2}\frac{\mu_{\rm qf}}{\mu_0}\frac{{\rm d}f}{{\rm d}r}\Theta_{\rm q}, \label{eq:XiRQ} \\
&\omega^2{\rm e}^{-\nu}r(1-\epsilon_{\rm p})\xi^{\perp}_{\rm q}+\omega^2{\rm e}^{-\nu}r\epsilon_{\rm p}\xi^{\perp}_{\rm n}=\Pi_{\rm q}. \label{eq:XiPerpQ}
\end{align}
We also recast the continuity equations in terms of $\xi_a^r$ and $\Pi_a$; using
\begin{equation}
\Theta_a = \left(\frac{{\rm e}^{-\lambda/2}}{r^2}\frac{{\rm d}(r^2\xi^r_a)}{{\rm d}r}-\frac{l(l+1)}{r}\xi^{\perp}_a\right)Y_{lm},
\end{equation}
we obtain
\begin{align}
\delta n_{\rm n}(r) ={}& -n_{\rm n}\left[\frac{{\rm e}^{-\lambda/2}}{r^2}\frac{{\rm d}(r^2\xi^r_{\rm n})}{{\rm d}r}-{\rm e}^{\nu}\frac{k^2_{\perp}}{\omega^2}\left\{(1+x)\Pi_{\rm n}-x\Pi_{\rm q}\right\}\right] \nonumber \\
&-{\rm e}^{-\lambda/2}\xi_{\rm n}^r\frac{{\rm d}n_{\rm n}}{{\rm d}r}, \label{eq:EulerDensPertRBVn} \\
\delta n_{\rm q}(r) ={}& -n_{\rm q}\left[\frac{{\rm e}^{-\lambda/2}}{r^2}\frac{{\rm d}(r^2\xi^r_{\rm q})}{{\rm d}r}-{\rm e}^{\nu}\frac{k^2_{\perp}}{\omega^2}\left\{(1+y)\Pi_{\rm q}-y\Pi_{\rm n}\right\}\right] \nonumber \\ & -{\rm e}^{-\lambda/2}\xi_{\rm q}^r\frac{{\rm d}n_{\rm q}}{{\rm d}r}, \label{eq:EulerDensPertRBVq}
\end{align}
where $k^2_{\perp}\equiv l(l+1)r^{-2}$ and
\begin{align}
x\equiv \frac{\epsilon_{\rm n}}{1-\epsilon_{\rm p}-\epsilon_{\rm n}}, \\
y\equiv \frac{\epsilon_{\rm p}}{1-\epsilon_{\rm p}-\epsilon_{\rm n}}.
\end{align}
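The bracketed combinations involving $x$ and $y$ in the equations that follow arise from inverting the $2\times2$ entrainment matrix in the tangential Euler equations (\ref{eq:XiPerpN}) and (\ref{eq:XiPerpQ}). A minimal numerical check of this inversion, with arbitrary sample values:

```python
# Tangential Euler equations (XiPerpN, XiPerpQ) in matrix form:
#   [[1-eps_n, eps_n], [eps_p, 1-eps_p]] [xi_perp_n, xi_perp_q]^T
#     = (e^nu / (omega^2 r)) [Pi_n, Pi_q]^T.
# Sample values below are arbitrary, for illustration only.
eps_n, eps_p = 0.03, 0.25
Pi_n, Pi_q = 0.7, -1.3        # in units of the common factor e^nu/(omega^2 r)

det = (1 - eps_n) * (1 - eps_p) - eps_n * eps_p   # = 1 - eps_n - eps_p
xi_perp_n = ((1 - eps_p) * Pi_n - eps_n * Pi_q) / det
xi_perp_q = ((1 - eps_n) * Pi_q - eps_p * Pi_n) / det

x = eps_n / (1 - eps_p - eps_n)
y = eps_p / (1 - eps_p - eps_n)
# The same result in the (1+x, x) and (1+y, y) form used in the text:
xi_perp_n_alt = (1 + x) * Pi_n - x * Pi_q
xi_perp_q_alt = (1 + y) * Pi_q - y * Pi_n
```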
Using
\begin{align}
\delta\mu_{\rm n} = {}& \mu_{\rm nn}\delta n_{\rm n}+\mu_{\rm nq}\delta n_{\rm q}, \label{eq:EulerianPertMun} \\
\delta\mu_{\rm q}= {}& \mu_{\rm qq}\delta n_{\rm q}+\mu_{\rm nq}\delta n_{\rm n}+\mu_{\rm qf}\delta f ,\label{eq:EulerianPertMuq}
\end{align}
we find
\begin{align}
\delta n_{\rm n}(r)= & \frac{\mu_{\rm qq}\mu_0\Pi_{\rm n}-\mu_{\rm nq}\mu_0\Pi_{\rm q}-\mu_{\rm nq}\mu_{\rm qf}\xi^r_{\rm q}({\rm d}f/{\rm d}r)}{D}, \label{eq:DensityNEulerianPert} \\
\delta n_{\rm q}(r)= & \frac{\mu_{\rm nn}\mu_0\Pi_{\rm q}-\mu_{\rm nq}\mu_0\Pi_{\rm n}+\mu_{\rm nn}\mu_{\rm qf}\xi^r_{\rm q}({\rm d}f/{\rm d}r)}{D}, \label{eq:DensityQEulerianPert}
\end{align}
where $D\equiv(\mu_{\rm nn}\mu_{\rm qq}-\mu_{\rm nq}^2)$; then Eq.~(\ref{eq:EulerDensPertRBVn}--\ref{eq:EulerDensPertRBVq}) are
\begin{align}
&\frac{{\rm d}\xi^r_{\rm n}}{{\rm d}r}+\left[\frac{2}{r}+\frac{{\rm d}\ln n_{\rm n}}{{\rm d}r}\right]\xi^r_{\rm n}+\left[-\frac{k^2_{\perp}}{\omega^2}{\rm e}^{\nu+\lambda/2}(1+x)+\frac{\mu_0\mu_{\rm qq}}{n_{\rm n}D}{\rm e}^{\lambda/2}\right]\Pi_{\rm n} \nonumber
\\&=\left[-\frac{k^2_{\perp}}{\omega^2}{\rm e}^{\nu+\lambda/2}x+\frac{\mu_0\mu_{\rm nq}}{n_{\rm n}D}{\rm e}^{\lambda/2}\right]\Pi_{\rm q}+\frac{\mu_{\rm nq}\mu_{\rm qf}}{n_{\rm n}D}\frac{{\rm d}f}{{\rm d}r}\xi^r_{\rm q}, \label{eq:PerturbedContinuityN} \\
&\frac{{\rm d}\xi^r_{\rm q}}{{\rm d}r}+\left[\frac{2}{r}+\frac{{\rm d}\ln n_{\rm q}}{{\rm d}r}+\frac{\mu_{\rm nn}\mu_{\rm qf}}{n_{\rm q}D}\frac{{\rm d}f}{{\rm d}r}\right]\xi^r_{\rm q} \nonumber \\
&+\left[-\frac{k^2_{\perp}}{\omega^2}{\rm e}^{\nu+\lambda/2}(1+y)+\frac{\mu_0\mu_{\rm nn}}{n_{\rm q}D}{\rm e}^{\lambda/2}\right]\Pi_{\rm q} \nonumber
\\& =\left[-\frac{k^2_{\perp}}{\omega^2}{\rm e}^{\nu+\lambda/2}y+\frac{\mu_0\mu_{\rm nq}}{n_{\rm q}D}{\rm e}^{\lambda/2}\right]\Pi_{\rm n}. \label{eq:PerturbedContinuityQ}
\end{align}
Notice that, in the case of zero entrainment ($\epsilon_{\rm n}=\epsilon_{\rm p}=0$) and zero thermodynamic coupling ($\mu_{\rm nq}=0$), the two equations describing the neutron fluid motion, (\ref{eq:XiRN}) and~(\ref{eq:PerturbedContinuityN}), are completely uncoupled from those describing the charged fluid motion, (\ref{eq:XiRQ}) and~(\ref{eq:PerturbedContinuityQ}), leading to two sets of equations coupling ($\xi^r_{\rm n}$,$\Pi_{\rm n}$) and ($\xi^r_{\rm q}$,$\Pi_{\rm q}$), respectively.
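Eqs.~(\ref{eq:DensityNEulerianPert}--\ref{eq:DensityQEulerianPert}) are Cramer's rule applied to Eqs.~(\ref{eq:EulerianPertMun}--\ref{eq:EulerianPertMuq}), with the Eulerian $\delta f$ taken as the advection term $-\xi^r_{\rm q}({\rm d}f/{\rm d}r)$, as their written form implies. A sketch verifying this with arbitrary sample values:

```python
# Solve  mu_nn dn_n + mu_nq dn_q = mu0 Pi_n,
#        mu_nq dn_n + mu_qq dn_q = mu0 Pi_q + mu_qf xi_q df_dr
# by Cramer's rule and compare with the closed forms in the text.
# All numbers are arbitrary samples, not equation-of-state values.
mu_nn, mu_nq, mu_qq, mu_qf = 5.0, 1.2, 4.0, 0.6
mu0, Pi_n, Pi_q = 2.0, 0.3, -0.1
xi_q, df_dr = 0.8, -0.02

D = mu_nn * mu_qq - mu_nq**2
rhs_n = mu0 * Pi_n
rhs_q = mu0 * Pi_q + mu_qf * xi_q * df_dr   # mu_qf delta f moved to the right

dn_n = (mu_qq * rhs_n - mu_nq * rhs_q) / D  # Cramer's rule
dn_q = (mu_nn * rhs_q - mu_nq * rhs_n) / D

# Closed forms of Eqs. (DensityNEulerianPert)-(DensityQEulerianPert):
dn_n_eq = (mu_qq * mu0 * Pi_n - mu_nq * mu0 * Pi_q
           - mu_nq * mu_qf * xi_q * df_dr) / D
dn_q_eq = (mu_nn * mu0 * Pi_q - mu_nq * mu0 * Pi_n
           + mu_nn * mu_qf * xi_q * df_dr) / D
```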
\subsection{Brunt--V\"{a}is\"{a}l\"{a} frequency}
\label{sec:BVFreq}
To determine the Brunt--V\"{a}is\"{a}l\"{a} frequency, we use the radial components of the Euler equations to get
\begin{align}
\omega^2{\rm e}^{-\nu}\frac{1}{1+y}\xi^r_{\rm q}={}& {\rm e}^{-\lambda/2}\frac{{\rm d}\Pi_{\rm q}}{{\rm d}r}-{\rm e}^{-\lambda/2}\frac{y}{1+y}\frac{{\rm d}\Pi_{\rm n}}{{\rm d}r} \nonumber \\
& -{\rm e}^{-\lambda/2}\frac{\mu_{\rm qf}(\mu_{\rm nn}\Pi_{\rm q}-\mu_{\rm nq}\Pi_{\rm n})}{n_{\rm q}D}\frac{{\rm d}f}{{\rm d}r} \nonumber \\
&-{\rm e}^{-\lambda}\xi^r_{\rm q}\frac{\mu_{\rm qf}}{\mu_0n_{\rm q}}\frac{{\rm d}f}{{\rm d}r}\left[\frac{{\rm d}n_{\rm q}}{{\rm d}r}+\frac{\mu_{\rm nn}\mu_{\rm qf}}{D}\frac{{\rm d}f}{{\rm d}r}\right];
\label{eq:ChargedFluidRadialComponent2}
\end{align}
Eqs.~(\ref{eq:EulerianPertMun}--\ref{eq:DensityQEulerianPert}) with $\delta\mu_a=({\rm d}\mu_a/{\rm d}r)\delta r$ and $\delta n_a=({\rm d}n_a/{\rm d}r)\delta r$ imply
\begin{equation}
\frac{{\rm d}n_{\rm q}}{{\rm d}r}+\frac{\mu_{\rm nn}\mu_{\rm qf}}{D}\frac{{\rm d}f}{{\rm d}r}=\frac{\mu_{\rm nn}-\mu_{\rm nq}}{D}\frac{{\rm d}\mu_0}{{\rm d}r},
\label{eq:dnqdRReplacement}
\end{equation}
hence
\begin{align}
\xi^r_{\rm q}&\left[\omega^2+{\rm e}^{\nu-\lambda}(1+y)\left(\frac{\mu_{\rm qf}}{\mu_0}\frac{{\rm d}\mu_0}{{\rm d}r}\right)\left(\frac{\mu_{\rm nn}-\mu_{\rm nq}}{n_{\rm q}D}\right)\frac{{\rm d}f}{{\rm d}r}\right] \nonumber \\
={}&{\rm e}^{\nu-\lambda/2}\left((1+y)\frac{{\rm d}\Pi_{\rm q}}{{\rm d}r}-y\frac{{\rm d}\Pi_{\rm n}}{{\rm d}r}\right. \nonumber \\ {}&\left.-(1+y)\frac{\mu_{\rm qf}(\mu_{\rm nn}\Pi_{\rm q}-\mu_{\rm nq}\Pi_{\rm n})}{n_{\rm q}D}\frac{{\rm d}f}{{\rm d}r}\right).
\end{align}
So the square of the Brunt--V\"{a}is\"{a}l\"{a} frequency is
\begin{equation}
N_{\rm q}^2(r)=-{\rm e}^{\nu-\lambda}(1+y)\left(\frac{\mu_{\rm qf}}{\mu_0}\frac{{\rm d}\mu_0}{{\rm d}r}\right)\left(\frac{\mu_{\rm nn}-\mu_{\rm nq}}{n_{\rm q}D}\right)\frac{{\rm d}f}{{\rm d}r}.
\label{eq:BruntVaisalaFrequency1}
\end{equation}
This can be rewritten in a manner which eliminates the dependence on derivatives of $f$. Using $\mu_{\rm e}=\mu_{\upmu}$ in the background, we can write
\begin{align}
(3\pi^2n_{\rm e})^{1/3}={}&\sqrt{(3\pi^2n_{\upmu})^{2/3}+m_{\upmu}^2} \nonumber \\
&\Rightarrow f^{2/3}-(1-f)^{2/3}=\left(\frac{m_{\upmu}^3}{3\pi^2n_{\rm q}}\right)^{2/3}
\label{eq:ElectronFractionRelation}
\end{align}
and
\begin{equation}
\frac{{\rm d}f}{{\rm d}r}=-\frac{f^{1/3}(1-f)^{1/3}}{n_{\rm q}^{5/3}[f^{1/3}+(1-f)^{1/3}]}\left(\frac{m_{\upmu}^2}{(3\pi^2)^{2/3}}\right)\frac{{\rm d}n_{\rm q}}{{\rm d}r}.
\label{eq:dfdR}
\end{equation}
Differentiating Eq.~(\ref{eq:ChargedChemicalPotential}), we find
\begin{align}
\mu_{\rm qf}={}&\frac{\partial}{\partial f}\left(\mu_{\rm p}+f\mu_{\rm e}(fn_{\rm q})+(1-f)\mu_{\upmu}((1-f)n_{\rm q})\right) \nonumber \\
={}&n_{\rm e}\frac{{\rm d}\mu_{\rm e}}{{\rm d}n_{\rm e}}-n_{\upmu}\frac{{\rm d}\mu_{\upmu}}{{\rm d}n_{\upmu}}=\frac{m_{\upmu}^2}{3(3\pi^2n_{\rm q}f)^{1/3}},
\label{eq:MuQF2}
\end{align}
so that
\begin{align}
&\mu_{\rm qf}\frac{{\rm d}f}{{\rm d}r}=-\frac{m_{\upmu}}{n_{\rm q}}G(f)\frac{{\rm d}n_{\rm q}}{{\rm d}r}, \nonumber \\
&G(f)\equiv{}\frac{(1-f)^{1/3}}{3}\left(f^{1/3}-(1-f)^{1/3}\right)\left(f^{2/3}-(1-f)^{2/3}\right)^{1/2}.
\label{eq:G(f)}
\end{align}
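The threshold relation Eq.~(\ref{eq:ElectronFractionRelation}) can be solved numerically for $f$ at a given $n_{\rm q}$, after which $G(f)$ follows directly. A sketch in natural units ($\hbar=c=1$; the sample density is illustrative only):

```python
import math

# Solve f^{2/3} - (1-f)^{2/3} = (m_mu^3 / (3 pi^2 n_q))^{2/3} for the
# electron fraction f by bisection, then evaluate G(f).
m_mu = 105.66 / 197.327          # muon mass in fm^-1
n_q = 0.01                       # charged-fluid density in fm^-3 (sample)

rhs = (m_mu**3 / (3 * math.pi**2 * n_q)) ** (2.0 / 3.0)

def residual(f):
    return f ** (2.0 / 3.0) - (1 - f) ** (2.0 / 3.0) - rhs

lo, hi = 0.5, 1.0                # f > 1/2 whenever muons are present
for _ in range(200):             # residual is increasing in f
    mid = 0.5 * (lo + hi)
    if residual(mid) > 0:
        hi = mid
    else:
        lo = mid
f = 0.5 * (lo + hi)

G = ((1 - f) ** (1.0 / 3.0) / 3.0
     * (f ** (1.0 / 3.0) - (1 - f) ** (1.0 / 3.0))
     * (f ** (2.0 / 3.0) - (1 - f) ** (2.0 / 3.0)) ** 0.5)
```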
Inserting this into Eq.~(\ref{eq:dnqdRReplacement}) to eliminate $dn_{\rm q}/dr$, then using the resulting equation to eliminate $\mu_{\rm qf}df/dr$ from Eq.~(\ref{eq:BruntVaisalaFrequency1}) gives us
\begin{equation}
N_{\rm q}^2(r)={\rm e}^{\nu-\lambda}(1+y)\frac{m_{\upmu}G(f)(\mu_{\rm nn}-\mu_{\rm nq})^2({\rm d}\mu_0/{\rm d}r)^2}{\mu_0n_{\rm q}D[n_{\rm q}D-\mu_{\rm nn}m_{\upmu}G(f)]}.
\label{eq:BruntVaisalaFrequency2}
\end{equation}
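Eq.~(\ref{eq:BruntVaisalaFrequency2}) can be checked against Eq.~(\ref{eq:BruntVaisalaFrequency1}) numerically: eliminating ${\rm d}n_{\rm q}/{\rm d}r$ between Eq.~(\ref{eq:dnqdRReplacement}) and Eq.~(\ref{eq:G(f)}) fixes $\mu_{\rm qf}({\rm d}f/{\rm d}r)$, and the two expressions then agree identically. A sketch with arbitrary (positive but unphysical) sample values:

```python
# Check that Eq. (BruntVaisalaFrequency2) follows from
# Eq. (BruntVaisalaFrequency1) after eliminating mu_qf df/dr.
# All numbers are arbitrary samples, not equation-of-state values.
mu_nn, mu_nq, mu_qq = 5.0, 1.2, 4.0
D = mu_nn * mu_qq - mu_nq**2
mu0, dmu0_dr = 2.0, -0.3
n_q, m_mu, G = 0.02, 0.5355, 0.004
y = 0.1
metric = 0.8                     # stands in for e^{nu - lambda}

# Eliminating dn_q/dr between Eq. (dnqdRReplacement) and
# mu_qf df/dr = -(m_mu/n_q) G dn_q/dr gives:
mu_qf_df_dr = ((mu_nn - mu_nq) * dmu0_dr * m_mu * G
               / (mu_nn * m_mu * G - n_q * D))

N2_form1 = (-metric * (1 + y) * (mu_qf_df_dr / mu0) * dmu0_dr
            * (mu_nn - mu_nq) / (n_q * D))
N2_form2 = (metric * (1 + y) * m_mu * G * (mu_nn - mu_nq)**2 * dmu0_dr**2
            / (mu0 * n_q * D * (n_q * D - mu_nn * m_mu * G)))
```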
$N_{\rm q}(r)$ is plotted in Figure~\ref{fig:BVFrequency} with zero entrainment ($y=0$), along with the Brunt--V\"{a}is\"{a}l\"{a} frequency for a normal fluid star $N_{\text{nf}}$, given by
\begin{equation}
N_{\text{nf}}=\sqrt{N^2_{Y}+YN_{\rm q}^2},
\label{eq:NFBruntVaisalaFrequency}
\end{equation}
where the lepton gradient contribution is reduced by a factor of $Y$ due to the increased inertia of moving both the protons and neutrons, and where~\citep{Reisenegger1992}
\begin{align}
N_{Y}^2(r)={}&-{\rm e}^{\nu-\lambda}\left(\frac{1}{\mu_0}\frac{{\rm d}\mu_0}{{\rm d}r}\right)\left(\frac{\mu_{{\rm b}Y}}{n_{\rm b}\mu_{\rm bb}}\frac{{\rm d}Y}{{\rm d}r}+\frac{Y\mu_{\rm qf}}{n_{\rm b}\mu_{\rm bb}}\frac{{\rm d}f}{{\rm d}r}\right), \label{eq:YGradientBruntVaisalaFrequency} \\
\mu_{\rm bb}={}&Y^2\mu_{\rm qq}+2Y(1-Y)\mu_{\rm nq}+(1-Y)^2\mu_{\rm nn} \nonumber , \\
\mu_{{\rm b}Y}={}&n_{\rm q}(\mu_{\rm qq}-\mu_{\rm nq})-n_{\rm n}(\mu_{\rm nn}-\mu_{\rm nq}). \nonumber
\end{align}
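The combination $\mu_{\rm bb}$ is simply $\partial^2\rho/\partial n_{\rm b}^2$ at fixed $Y$: substituting Eqs.~(\ref{eq:Munn}--\ref{eq:Munq}), the cross terms cancel. A quick numerical confirmation with arbitrary sample derivatives:

```python
# Check that mu_bb = Y^2 mu_qq + 2Y(1-Y) mu_nq + (1-Y)^2 mu_nn reduces to
# d^2 rho / d n_b^2 (at fixed Y), using Eqs. (Munn)-(Munq).  The sample
# second derivatives of rho are arbitrary, not from the equation of state.
Y, n_b = 0.07, 0.5
r_bb, r_bY, r_YY = 3.1, -0.7, 2.3   # rho_{n_b n_b}, rho_{n_b Y}, rho_{YY}

mu_nn = r_bb - 2 * Y / n_b * r_bY + Y**2 / n_b**2 * r_YY
mu_qq = r_bb + 2 * (1 - Y) / n_b * r_bY + (1 - Y)**2 / n_b**2 * r_YY
mu_nq = r_bb + (1 - 2 * Y) / n_b * r_bY - Y * (1 - Y) / n_b**2 * r_YY

mu_bb = Y**2 * mu_qq + 2 * Y * (1 - Y) * mu_nq + (1 - Y)**2 * mu_nn
```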
The superfluid leptonic Brunt--V\"{a}is\"{a}l\"{a} frequency is similar to that of Figure 2 of KG14, Figure 5 of~\citet{Passamonti2016} and the zero entrainment results of YW17. The differences stem from the different equations of state used and, in the case of the YW17 results, from our inclusion of general relativity in both the background star and the perturbations and our neglect of self-gravity perturbations. Figure~\ref{fig:BVFrequencyVarK} shows $N_{\rm q}$ for fixed mass $M=1.4M_{\odot}$ using each of the four parametrizations specified by Table~\ref{tab:EOSParameters}; the peak value of $N_{\rm q}$ decreases slightly with increasing nuclear compressibility $K$.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{BVFrequencyRel.pdf}
\caption{Brunt--V\"{a}is\"{a}l\"{a} (cyclical) frequency as a function of coordinate radius $r$ with no entrainment, calculated using the PC1 parametrization for our equation of state from Section~\ref{sec:EoS} for two different model stars: $M=1.4M_{\odot}$, $n_{{\rm b},\text{cntr}}=3.17n_{\text{nuc}}$, $R=12.11$ km, $R_{\text{cc}}=11.27$ km and $M=2.0M_{\odot}$, $n_{{\rm b},\text{cntr}}=6.67n_{\text{nuc}}$, $R=10.54$ km and $R_{\text{cc}}=10.23$ km, where $n_{{\rm b},\text{cntr}}$ is the central baryon density and $R_{\text{cc}}$ the crust-core interface radius. $R_{\text{cc}}$ for each star is also indicated on the graph by a vertical line at the frequency cutoff for $N_{\text{nf}}$. The Brunt--V\"{a}is\"{a}l\"{a} frequency arising due to the leptonic composition gradient in superfluid stars $N_{\rm q}$ and the total Brunt--V\"{a}is\"{a}l\"{a} frequency for a normal fluid star $N_{\text{nf}}$ are displayed. The frequency cutoffs for $N_{\rm q}$ correspond to the muon threshold, which is at the same density but a different radius for each star.}
\label{fig:BVFrequency}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{BVFrequencyRelVarK.pdf}
\caption{Brunt--V\"{a}is\"{a}l\"{a} (cyclical) frequency as a function of coordinate radius $r$ with no entrainment, for fixed mass $1.4M_{\odot}$ using the four EOS parametrizations specified by Table~\ref{tab:EOSParameters}. The frequency cutoffs correspond to the muon threshold, which is at the same density but a different radius for each star.}
\label{fig:BVFrequencyVarK}
\end{figure}
Using the definition of the Brunt--V\"{a}is\"{a}l\"{a} frequency, we can rewrite Eq.~(\ref{eq:XiRQ}) as
\begin{align}
\xi^r_{\rm q} {\rm e}^{-\nu}(\omega^2-N_{\rm q}^2)={}&{\rm e}^{-\lambda/2}(1+y)\frac{{\rm d}\Pi_{\rm q}}{{\rm d}r}-{\rm e}^{-\lambda/2}y\frac{{\rm d}\Pi_{\rm n}}{{\rm d}r} \nonumber \\
&+\frac{{\rm e}^{\lambda/2-\nu}\mu_0N_{\rm q}^2(\mu_{\rm nn}\Pi_{\rm q}-\mu_{\rm nq}\Pi_{\rm n})}{({\rm d}\mu_0/{\rm d}r)(\mu_{\rm nn}-\mu_{\rm nq})}.
\label{eq:XiRQ2}
\end{align}
Eqs.~(\ref{eq:XiRN}),~(\ref{eq:PerturbedContinuityN}--\ref{eq:PerturbedContinuityQ}) and~(\ref{eq:XiRQ2}) are the four coupled first-order ODEs describing the fluid perturbations in the core.
\subsection{Two-fluid formalism in the crust}
\label{subsec:CrustEoM}
To correctly calculate the compressional modes, we need to allow the oscillations in the core to propagate into the crust. As in the core, we consider two fluids-- superfluid free neutrons, with displacement field $\overline{\xi}^i_{\rm f}$, and normal fluid nuclei, with displacement field $\overline{\xi}^i_{\rm c}$. We ignore elastic stresses here for simplicity, as we are not interested in the $s$ (shear) and $i$ (interface) modes caused by elasticity in the crust \citep{McDermott1983}. We also neglect entrainment, which is expected to be significant in the crust~\citep{Kobyakov2013,Chamel2017a} but which we do not expect to play a large role in the $g$-modes. Its effect on the $p$-modes, which we also do not expect to be large, will be examined in a later paper.
We derive the two-fluid crust equations of motion by first assuming that the perturbations of the chemical potentials $\mu_{\rm f}$ and $\mu_{\rm c}$ in the crust can be written as functions of the two crustal number densities $n_{\rm f}$ (free neutrons) and $n_{\rm c}$ (baryons in nuclei), with changes in the other parameters of the crust EOS ($A$, $Y$ and $n_{\rm i}$) having been absorbed into the changes in either $n_{\rm f}$ or $n_{\rm c}$. Thus we can write a series of four equations-- the two radial and two tangential components of the perturbed Euler equations-- analogously to Eqs.~(\ref{eq:XiRN}--\ref{eq:XiPerpQ}). We then have
\begin{align}
&\omega^2{\rm e}^{-\nu}\xi^r_{\rm f}={\rm e}^{-\lambda/2}\frac{{\rm d}\Pi_{\rm f}}{{\rm d}r}, \label{eq:XiRF}\\
&\omega^2{\rm e}^{-\nu}r\xi^{\perp}_{\rm f}=\Pi_{\rm f}, \label{eq:XiPerpF} \\
&\omega^2{\rm e}^{-\nu}\xi^r_{\rm c}={\rm e}^{-\lambda/2}\frac{{\rm d}\Pi_{\rm c}}{{\rm d}r}, \label{eq:XiRC} \\
&\omega^2{\rm e}^{-\nu}r\xi^{\perp}_{\rm c}=\Pi_{\rm c}, \label{eq:XiPerpC}
\end{align}
where $\Pi_{\rm f}\equiv\delta\mu_{\rm f}/\mu_0$, $\Pi_{\rm c}\equiv\delta\mu_{\rm c}/\mu_0$ in analogy with Eq.~(\ref{eq:PiDefinition}). From the perturbed continuity equation, we also have
\begin{align}
\delta n_{\rm f} = -n_{\rm f}\Theta_{\rm f}-{\rm e}^{-\lambda/2}\xi^r_{\rm f}\frac{{\rm d}n_{\rm f}}{{\rm d}r}, \label{eq:EulerDensPertnf}\\
\delta n_{\rm c} = -n_{\rm c}\Theta_{\rm c}-{\rm e}^{-\lambda/2}\xi^r_{\rm c}\frac{{\rm d}n_{\rm c}}{{\rm d}r}. \label{eq:EulerDensPertnc}
\end{align}
Using our assumption about the number density dependence of the perturbations, we can then rearrange
\begin{align}
\delta\mu_{\rm f}={}& \mu_{\rm ff}\delta n_{\rm f}+\mu_{\rm fc}\delta n_{\rm c}, \\
\delta\mu_{\rm c}={}& \mu_{\rm cc}\delta n_{\rm c}+\mu_{\rm fc}\delta n_{\rm f}
\end{align}
to obtain
\begin{align}
\delta n_{\rm f}=\frac{\mu_0(\mu_{\rm cc}\Pi_{\rm f}-\mu_{\rm fc}\Pi_{\rm c})}{D_{\text{crust}}}, \label{eq:deltanf}\\
\delta n_{\rm c}=\frac{\mu_0(\mu_{\rm ff}\Pi_{\rm c}-\mu_{\rm fc}\Pi_{\rm f})}{D_{\text{crust}}}, \label{eq:deltanc}
\end{align}
where $\mu_{ab}\equiv\partial \mu_a/\partial n_{\rm b}$ for $a,b\in\{\rm f,c\}$, $D_{\text{crust}}\equiv \mu_{\rm ff}\mu_{\rm cc}-\mu_{\rm fc}^2$ and $\mu_0=\mu_{\rm f}=\mu_{\rm c}$ in the background. There are subtleties involved in the calculation of $\mu_{\rm ff}$, $\mu_{\rm cc}$ and $\mu_{\rm fc}$, which are discussed in Appendix~\ref{app:1}. Combining Eqs.~(\ref{eq:deltanf}--\ref{eq:deltanc}) with Eq.~(\ref{eq:EulerDensPertnf}--\ref{eq:EulerDensPertnc}), we obtain
\begin{align}
\frac{{\rm d}\xi^r_{\rm f}}{{\rm d}r}+\left[\frac{2}{r}+\frac{{\rm d}\ln n_{\rm f}}{{\rm d}r}\right]\xi^r_{\rm f}+\left[-{\rm e}^{\nu+\lambda/2}\frac{k^2_{\perp}}{\omega^2}+\frac{\mu_0\mu_{\rm cc}}{n_{\rm f}D_{\text{crust}}}{\rm e}^{\lambda/2}\right]\Pi_{\rm f} \nonumber \\
= \frac{\mu_0\mu_{\rm fc}}{n_{\rm f}D_{\text{crust}}}{\rm e}^{\lambda/2}\Pi_{\rm c}, \label{eq:PerturbedContinuityF}\\
\frac{{\rm d}\xi^r_{\rm c}}{{\rm d}r}+\left[\frac{2}{r}+\frac{{\rm d}\ln n_{\rm c}}{{\rm d}r}\right]\xi^r_{\rm c}+\left[-{\rm e}^{\nu+\lambda/2}\frac{k^2_{\perp}}{\omega^2}+\frac{\mu_0\mu_{\rm ff}}{n_{\rm c}D_{\text{crust}}}{\rm e}^{\lambda/2}\right]\Pi_{\rm c} \nonumber \\
= \frac{\mu_0\mu_{\rm fc}}{n_{\rm c}D_{\text{crust}}}{\rm e}^{\lambda/2}\Pi_{\rm f}. \label{eq:PerturbedContinuityC}
\end{align}
Eqs.~(\ref{eq:XiRF}),~(\ref{eq:XiRC}) and (\ref{eq:PerturbedContinuityF}--\ref{eq:PerturbedContinuityC}) are the four coupled first-order ODEs describing the fluid perturbations in the crust. Similarly to the equations in the core, in the case of zero thermodynamic coupling $\mu_{\rm fc}=0$, these equations become two sets of coupled equations for ($\xi^r_{\rm f}$,$\Pi_{\rm f}$) and ($\xi^r_{\rm c}$,$\Pi_{\rm c}$).
\subsection{Single normal fluid in crust}
\label{subsec:SFCrustEoM}
In the crust, the neutron superfluid pairing is $^1S_0$, with gaps of $\sim 0.1$--$1$ MeV except at low free neutron densities~\citep{Gezerlis2010,Gezerlis2014}. Thus, we expect the free neutron gas to remain superfluid throughout most of the crust. The superfluid gap for neutrons falls precipitously at a Fermi wave number of approximately $0.05$ fm$^{-1}$ \citep{Gezerlis2014}, corresponding to a free neutron number density of $n_{\rm f}=n_{\rm NF}=2.64\times10^{-5}n_{\text{nuc}}$. For our calculations, we assume that the critical temperature $T_{\rm c}$ for free crustal neutrons is purely a function of $n_{\rm f}$: the free crustal neutrons are superfluid at $n_{\rm f}>n_{\rm NF}$ and normal at $n_{\rm f}\leq n_{\rm NF}$, with a sharp transition from superfluid to normal at $n_{\rm f}=n_{\rm NF}$. More realistically, $T_{\rm c}$ will also depend on $n_{\rm c}$; we assume that this dependence is much weaker than the dependence on $n_{\rm f}$. In a finite temperature star, the superfluid density is proportional to $T_{\rm c}-T$ near the transition to normal fluid, but this temperature dependence is only important in a very thin region of the crust for $T$ small compared with typical values of $T_{\rm c}$ in the crust, which are $\sim 10^{10}$ K.
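The quoted transition density follows from the free Fermi gas relation $n=k_{\rm F}^3/3\pi^2$; a sketch (assuming the saturation density $n_{\text{nuc}}=0.16$ fm$^{-3}$):

```python
import math

# Free neutron number density at the superfluid transition, from the
# Fermi gas relation n = k_F^3 / (3 pi^2) with k_F ~ 0.05 fm^-1.
# n_nuc = 0.16 fm^-3 is the assumed saturation density.
k_F = 0.05                        # fm^-1
n_nuc = 0.16                      # fm^-3
n_NF = k_F**3 / (3 * math.pi**2)  # fm^-3
ratio = n_NF / n_nuc              # ~ 2.64e-5, as quoted in the text
```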
Between the transition density and neutron drip, free neutrons and nuclei move together as a single fluid. We represent this single normal fluid, which exists only in a very thin layer just above the neutron drip density, using the displacement field $\overline{\xi}^i_{\rm b}$ and the total baryon number density $n_{\rm b}=n_{\rm c}+n_{\rm f}$. The equation of state in this region is the same as in the two-fluid region of the crust. Since the two fluids move together here, there is a buoyancy and Brunt--V\"{a}is\"{a}l\"{a} frequency associated with the gradient of $Y_{\rm c}\equiv n_{\rm c}/n_{\rm b}$.
By analogy with Eqs.~(\ref{eq:XiRN}--\ref{eq:XiPerpQ}), ~(\ref{eq:PerturbedContinuityN}--\ref{eq:PerturbedContinuityQ}) and~(\ref{eq:XiRQ2}), we obtain two Euler equations and the perturbed continuity equation for the single normal fluid displacement field in the crust:
\begin{align}
&\omega^2{\rm e}^{-\nu}r\xi^{\perp}_{\rm b} = \Pi_{\rm b}, \label{eq:XiPerpB} \\
&\xi^r_{\rm b}{\rm e}^{-\nu}(\omega^2-N_{\rm b}^2) = {\rm e}^{-\lambda/2}\frac{{\rm d}\Pi_{\rm b}}{{\rm d}r}+\frac{{\rm e}^{\lambda/2-\nu}\mu_0N_{\rm b}^2}{{\rm d}\mu_0/{\rm d}r}\Pi_{\rm b}, \label{eq:XiRB} \\
&\frac{{\rm d}\xi^r_{\rm b}}{{\rm d}r}+\left[\frac{2}{r}+\frac{{\rm d}\ln n_{\rm b}}{{\rm d}r}+\frac{\mu_{bY_{\rm c}}}{n_{\rm b}\mu_{\rm bb}}\frac{{\rm d}Y_{\rm c}}{{\rm d}r}\right]\xi^r_{\rm b} \nonumber \\
&+\left[-{\rm e}^{\nu+\lambda/2}\frac{k_{\perp}^2}{\omega^2}+\frac{\mu_0}{n_{\rm b}\mu_{\rm bb}}{\rm e}^{\lambda/2}\right]\Pi_{\rm b}=0, \label{eq:PerturbedContinuityB}
\end{align}
where $\Pi_{\rm b}\equiv\delta\mu_{\rm b}/\mu_0$ and $N_{\rm b}$ is the Brunt--V\"{a}is\"{a}l\"{a} frequency associated with the gradient of $Y_{\rm c}$, given by
\begin{equation}
N_{\rm b}^2 = -{\rm e}^{\nu-\lambda}\frac{1}{\mu_0}\frac{{\rm d}\mu_0}{{\rm d}r}\frac{\mu_{{\rm b}Y_{\rm c}}}{n_{\rm b}\mu_{\rm bb}}\frac{{\rm d}Y_{\rm c}}{{\rm d}r}.
\label{eq:CrustBVFrequency}
\end{equation}
The two thermodynamic derivatives $\mu_{\rm bb}$ and $\mu_{{\rm b}Y_{\rm c}}$ are
\begin{align}
\mu_{\rm bb} ={}& Y_{\rm c}^2\mu_{\rm cc}+(1-Y_{\rm c})^2\mu_{\rm ff}+2Y_{\rm c}(1-Y_{\rm c})\mu_{\rm fc},\\
\mu_{{\rm b}Y_{\rm c}} ={}& n_{\rm c}(\mu_{\rm cc}-\mu_{\rm fc})-n_{\rm f}(\mu_{\rm ff}-\mu_{\rm fc}).
\end{align}
\subsection{Interface and boundary conditions}
\label{sec:BCs}
At the centre of the star, we impose the regularity condition $\Theta_a=0$, which implies that the displacement fields and $\Pi_a$ satisfy the following conditions at $r=0$:
\begin{align}
\xi^r_a ={}& l(\xi^r_a)_0r^{l-1}, \\
\Pi_a ={}& \omega^2{\rm e}^{-\nu}(\xi^r_a)_0r^l,
\end{align}
where $(\xi^r_a)_0$ is a constant. Since we can scale the overall amplitude of each mode, we only need to specify $(\xi^r_{\rm n})_0$ and can set $(\xi^r_{\rm q})_0=1$.
We require four conditions at the crust-core transition which allow the computation of the four quantities $(\xi^r_{\rm c},\xi^r_{\rm f},\Pi_{\rm c},\Pi_{\rm f})$ on the crust side of the transition from the quantities $(\xi^r_{\rm q},\xi^r_{\rm n},\Pi_{\rm q},\Pi_{\rm n})$ on the core side. Since the crust-core interface is defined by the formation of nuclei, the radial component of the displacement field for the protons must be continuous at this interface. Since the motion of the protons is described by $\xi_{\rm q}^i$ and $\xi_{\rm c}^i$, this implies
\begin{equation}
(\xi^r_{\rm q})^+ = (\xi^r_{\rm c})^-,
\label{eq:InterfaceCondition1}
\end{equation}
where $+$ indicates the high-density (core) side and $-$ the low-density (crust) side of the transition. As baryons are not allowed to build up at the interface, baryon conservation is the second transition condition. Denoting the Lagrangian perturbation moving along with the nuclei (and hence the crust-core boundary) as $\Delta_{\rm c}$, the perturbed continuity equation for the total baryon number density is
\begin{equation}
\Delta_{\rm c}n_{\rm b}+n_{\rm b}\Theta_{\rm c}=0.
\end{equation}
Integrating this across the crust-core interface, we obtain
\begin{equation}
(n_{\rm n}\xi^r_{\rm n}-n_{\rm n}\xi^r_{\rm q})^+=(n_{\rm f}\xi^r_{\rm f}-n_{\rm f}\xi^r_{\rm c})^-.
\label{eq:InterfaceCondition2}
\end{equation}
As we have neglected elastic stresses in the crust, continuity of the tractions across the crust-core interface is given by the continuity of the pressure perturbation moving with the interface, or
\begin{equation}
(\Delta_{\rm c}P)^+=(\Delta_{\rm c}P)^-.
\end{equation}
Using the Gibbs--Duhem relation in the form $\Delta P = \sum_{a}n_a\Delta \mu_a$, we can rewrite this condition, giving
\begin{equation}
(n_{\rm n}\Pi_{\rm n}+n_{\rm q}\Pi_{\rm q})^+=(n_{\rm c}\Pi_{\rm c}+n_{\rm f}\Pi_{\rm f})^-+(n_{\rm b}^--n_{\rm b}^+){\rm e}^{-\lambda/2}\xi^r_{\rm c}\frac{{\rm d}\ln\mu_0}{{\rm d}r}.
\label{eq:InterfaceCondition3}
\end{equation}
Following~\citet{Andersson2011} and~\citet{Passamonti2012}, the final boundary condition we impose is continuity of the neutron chemical potential perturbation, $(\Delta_{\rm c}\mu_{\rm n})^+=(\Delta_{\rm c} \mu_{\rm f})^-$, which results from the ``chemical gauge''-independence of the neutron chemical potential. This final interface condition simplifies to
\begin{equation}
(\Pi_{\rm n})^+ = (\Pi_{\rm f})^-,
\label{eq:InterfaceCondition4}
\end{equation}
where we have used $\mu_0=\mu_{\rm c}=\mu_{\rm f}$ in the background equilibrium. The chemical potential is the same for crustal neutrons that are bound in nuclei or in the surrounding free superfluid; this condition is satisfied in the crustal equation of state, which allows neutrons to be exchanged freely between nuclei and the surrounding free neutron vapor (see Section~\ref{subsec:CrustEoS} and references therein). Thus, Eq.~(\ref{eq:InterfaceCondition4}) states the condition that there is no energy change when a crustal neutron is exchanged with a core neutron at the crust-core boundary irrespective of whether the crustal neutron is bound or free.
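Because Eqs.~(\ref{eq:InterfaceCondition1}) and~(\ref{eq:InterfaceCondition4}) decouple, the four transition conditions reduce to sequential substitutions rather than a full $4\times4$ linear solve. A minimal numerical sketch of this solve (the function name and all input numbers are illustrative, not values from our equation of state):

```python
import math

def crust_side_from_core(xi_q, xi_n, Pi_q, Pi_n,
                         n_n, n_q, n_f, n_c,
                         nb_minus, nb_plus, lam, dlnmu0_dr):
    """Solve the four crust-core transition conditions for the
    crust-side quantities given the core-side ones.  Every input
    here is an illustrative number, not a model value."""
    xi_c = xi_q                                # proton displacement continuous
    xi_f = xi_c + (n_n / n_f) * (xi_n - xi_q)  # baryon conservation
    Pi_f = Pi_n                                # neutron chemical potential continuous
    # traction (pressure perturbation) continuity, rearranged for Pi_c:
    Pi_c = ((n_n * Pi_n + n_q * Pi_q)
            - (nb_minus - nb_plus) * math.exp(-lam / 2.0) * xi_c * dlnmu0_dr
            - n_f * Pi_f) / n_c
    return xi_c, xi_f, Pi_c, Pi_f
```

The returned quantities satisfy all four interface conditions by construction.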
At the two fluid-single fluid transition in the crust just above neutron drip, baryon conservation and continuity of the tractions must be imposed. These two conditions can be expressed as
\begin{align}
(n_{\rm f}\xi^r_{\rm f}+n_{\rm c}\xi^r_{\rm c})^+={}&(n_{\rm b}\xi^r_{\rm b})^-,
\label{eq:TFSFCondition1}\\
(n_{\rm f}\Pi_{\rm f}+n_{\rm c}\Pi_{\rm c})^+={}&(n_{\rm b}\Pi_{\rm b})^-,\label{eq:TFSFCondition2}
\end{align}
where $+/-$ indicate the high density (two fluid) and low density (single fluid) regions respectively.
We also require another boundary condition at the two fluid-single fluid transition (SFT). In a very thin region where the superfluid neutron fraction $f$ falls from one to zero, $\xi_{\rm f}^r=f\xi_{\rm sf}^r+(1-f)\xi_{\rm nf}^r$, where $\xi_{\rm sf}^r$ and $\xi_{\rm nf}^r$ are the radial components of the superfluid neutron and normal fluid neutron displacement fields. If the normal neutrons couple perfectly to the charged component, then $f\xi_{\rm sf}^r=\xi_{\rm f}^r-(1-f)\xi_{\rm c}^r\to\xi_{\rm f}^r-\xi_{\rm c}^r$ for $f\to 0$. The current carried by the superfluid component should vanish where the superfluid neutrons disappear, which is true if
\begin{equation}
(\xi^r_{\rm f})^+=(\xi^r_{\rm c})^+
\label{eq:XifXicequal}
\end{equation}
at the surface where $f=0$. Combined with Eq.~(\ref{eq:TFSFCondition1}), this implies that
\begin{equation}
(\xi^r_{\rm f})^+=(\xi^r_{\rm c})^+=(\xi^r_{\rm b})^-.
\label{eq:ThreeComovingXis}
\end{equation}
A boundary condition is required at the outer surface of the star, which we approximate to occur at the neutron drip line: the outer crust contains so little of the star's mass (less than $0.01$\%) that we assume its effect on the modes is negligible. We impose a form of the condition expressed in Eq.~(\ref{eq:InterfaceCondition3}), but applied to the displacement field of the single normal fluid which exists just above neutron drip (ND)
\begin{align}
\left(\Pi_{\rm b} + {\rm e}^{-\lambda/2}\xi^r_{\rm b}\frac{{\rm d}\ln\mu_0}{{\rm d}r}\right)_{\text{at ND}}=0.
\label{eq:DeltaP}
\end{align}
As a check, we also compute a few $g$-modes and $p$-modes while integrating out to lower densities in the crust, imposing Eq.~(\ref{eq:DeltaP}) at $n_{\rm b}/n_{\text{nuc}}=1\times10^{-8}$. The $g$-mode frequencies obtained when doing so agree to within 0.1\% of those found when we stopped the integration at neutron drip. There is no discernible change in the core displacement fields for $g$-modes for these two boundary conditions, and the changes in the displacement fields in the crust are larger than in the core but still very small. The $p$-mode frequencies obtained in this way are within 2\% of those calculated with neutron drip as the stopping point for the integration. The $p$-mode displacement fields in the core are weakly affected by this shift in the minimum density, but the modes in the crust can differ significantly, particularly for the higher frequency modes which can have additional oscillations in the crust.
\section{Normal mode calculations}
\label{sec:NormalModes}
\subsection{WKB solutions}
Since the leptonic Brunt--V\"{a}is\"{a}l\"{a} frequency does not exist in the crust, we expect that the $g$-mode displacement fields in the crust will be evanescent and nearly zero. We thus employed the WKB approximation to calculate approximate $g$-mode displacement fields and mode frequencies, assuming no propagation into the crust, and also use the resulting approximate $p$-mode dispersion relations in discussing the $p$-mode displacement fields in the core. First, we convert Eqs.~(\ref{eq:XiRN},\ref{eq:PerturbedContinuityN}--\ref{eq:PerturbedContinuityQ},\ref{eq:XiRQ2}) into two second-order equations for $\Pi_{\rm n}$ and $\Pi_{\rm q}$, neglecting curvature terms, derivatives of the metric, $f$, the $\mu_{ab}$ and $N_{\rm q}$, and ignoring entrainment. We obtain
\begin{align}
&\frac{{\rm d}^2\Pi_{\rm n}}{{\rm d}r^2}+\frac{{\rm d}\ln n_{\rm n}}{{\rm d}r}\frac{{\rm d}\Pi_{\rm n}}{{\rm d}r}+\left[-k_{\perp}^2{\rm e}^{\lambda}+\frac{\mu_0\mu_{\rm qq}{\rm e}^{\lambda-\nu}\omega^2}{n_{\rm n}D}\right]\Pi_{\rm n}
\nonumber \\
{}&= \frac{\mu_{\rm nq}\mu_0{\rm e}^{\lambda-\nu}\omega^2}{n_{\rm n}D}\Pi_{\rm q},
\label{eq:2ndOrderN1} \\
&\frac{{\rm d}^2\Pi_{\rm q}}{{\rm d}r^2}+\frac{{\rm d}\ln n_{\rm q}}{{\rm d}r}\frac{{\rm d}\Pi_{\rm q}}{{\rm d}r}+\left(1-\frac{N_{\rm q}^2}{\omega^2}\right)\left[-k_{\perp}^2{\rm e}^{\lambda} +\frac{\mu_0\mu_{\rm nn}{\rm e}^{\lambda-\nu}\omega^2}{n_{\rm q}D}\right]\Pi_{\rm q} \nonumber \\
{}&= \frac{\mu_{\rm nq}\mu_0{\rm e}^{\lambda-\nu}(\omega^2-N_{\rm q}^2)}{n_{\rm q}D}\Pi_{\rm n}, \label{eq:2ndOrderQ1}
\end{align}
where we have used Eq.~(\ref{eq:BruntVaisalaFrequency1}) to replace ${\rm d}\mu_0/{\rm d}r$.
We define $\Psi_a=\sqrt{n_a}\Pi_a$ and assume that the $\Psi_a$ have a slowly-varying amplitude $C_a(r)$ and a rapidly-oscillating phase $S(r)=\int k_r{\rm d}r$. Inserting this ansatz into Eqs.~(\ref{eq:2ndOrderN1}--\ref{eq:2ndOrderQ1}) gives
\begin{align}
(S')^2C_{\rm n} = M_{\rm nn}C_{\rm n} + M_{\rm nq}C_{\rm q}, \label{eq:WKB0O_1}\\
(S')^2C_{\rm q} = M_{\rm qq}C_{\rm q} + M_{\rm qn}C_{\rm n}, \label{eq:WKB0O_3}
\end{align}
where a prime denotes ${\rm d}/{\rm d}r$ and
\begin{align}
M_{\rm nn} &= -k^2_{\perp}{\rm e}^{\lambda}+\frac{{\rm e}^{\lambda-\nu}\omega^2\mu_0\mu_{\rm qq}}{n_{\rm n}D}-\frac{1}{\sqrt{n_{\rm n}}}\frac{{\rm d}^2\sqrt{n_{\rm n}}}{{\rm d}r^2}, \\
M_{\rm qq} &= -\left(1-\frac{N_{\rm q}^2}{\omega^2}\right)k^2_{\perp}{\rm e}^{\lambda}+\frac{{\rm e}^{\lambda-\nu}\omega^2\mu_0\mu_{\rm nn}}{n_{\rm q}D}-\frac{1}{\sqrt{n_{\rm q}}}\frac{{\rm d}^2\sqrt{n_{\rm q}}}{{\rm d}r^2}, \\
M_{\rm nq} &= M_{\rm qn} = -\frac{{\rm e}^{\lambda-\nu}\omega^2\mu_0\mu_{\rm nq}}{\sqrt{n_{\rm n}n_{\rm q}}D}.
\end{align}
Eqs.~(\ref{eq:WKB0O_1}--\ref{eq:WKB0O_3}) have solutions
\begin{equation}
(S')^2 =\frac{1}{2}\left[(M_{\rm nn}+M_{\rm qq}) \pm \sqrt{(M_{\rm nn}-M_{\rm qq})^2+4M_{\rm nq}^2} \right].
\label{eq:ZerothOrderPhase}
\end{equation}
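The two roots in Eq.~(\ref{eq:ZerothOrderPhase}) are simply the eigenvalues of the symmetric matrix with entries $M_{ab}$. A short numerical sketch (with arbitrary illustrative entries) verifying this, and the weak-coupling behavior discussed next:

```python
import math

def wkb_roots(M_nn, M_qq, M_nq):
    """The two solutions (S')^2 of the coupled zeroth-order WKB system,
    i.e. the eigenvalues of the symmetric matrix [[M_nn, M_nq], [M_nq, M_qq]].
    Inputs are arbitrary illustrative numbers."""
    s = 0.5 * (M_nn + M_qq)
    d = 0.5 * math.sqrt((M_nn - M_qq)**2 + 4.0 * M_nq**2)
    return s + d, s - d

# When |M_nq| << |M_nn|, |M_qq|, the two roots stay close to the
# diagonal entries, giving nearly decoupled n- and q-branches.
kp, km = wkb_roots(-4.0, 9.0, 0.05)
```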
As $|M_{\rm nq}| \ll |M_{\rm nn}|, |M_{\rm qq}|$, we can identify a neutron-dominated mode with $(S')^2=(k_r^2)_+\approx M_{\rm nn}$ and a charged fluid-dominated mode with $(S')^2=(k_r^2)_-\approx M_{\rm qq}$. In the low frequency $\omega^2\lesssim N^2_{\rm q}$ limit, $M_{\rm nn}\sim-k_{\perp}^2{\rm e}^{\lambda}$, so the low-frequency neutron-dominated mode is nonpropagating. The charged fluid-dominated mode does propagate, however, since $M_{\rm qq}\sim (N_{\rm q}^2/ \omega^2-1)k_{\perp}^2 {\rm e}^{\lambda}$ in the low-frequency limit, thus giving a dispersion relation for the $g$-modes
\begin{equation}
\omega^2_g\approx\frac{N_{\rm q}^2k_{\perp}^2{\rm e}^{\lambda}}{k^2},
\label{eq:ApproximateGModeDispersion}
\end{equation}
where $k^2=k_r^2+k^2_{\perp}{\rm e}^{\lambda}$, in agreement with the standard result of~\citet{McDermott1983}. The high frequency limit $\omega^2\gg N_{\rm q}^2$, keeping the $M_{\rm nq}$ contribution, gives the $p$-mode dispersion relation in the crust
\begin{align}
{}& \omega^2_{\rm p}\approx c_{\rm s\pm}^2k^2, \nonumber \\
{}&c_{\rm s\pm}^2={\rm e}^{\nu-\lambda}\frac{n_{\rm n}n_{\rm q}}{2\mu_0}\left[\left(\frac{\mu_{\rm qq}}{n_{\rm n}}+\frac{\mu_{\rm nn}}{n_{\rm q}}\right)\pm\sqrt{\left(\frac{\mu_{\rm qq}}{n_{\rm n}}-\frac{\mu_{\rm nn}}{n_{\rm q}}\right)^2+\frac{4\mu_{\rm nq}^2}{n_{\rm n}n_{\rm q}}}\right],
\label{eq:ApproximatePModeDispersion}
\end{align}
suggesting two sets of $p$-modes, one associated with each superfluid. This is similar to the simplified $p$-mode dispersion relation given by~\citet{Passamonti2016}. Here we have implicitly assumed that the phases of the two fluids are the same. If the thermodynamic coupling $\mu_{\rm nq}$ is ignored, Eq.~(\ref{eq:ApproximatePModeDispersion}) gives two completely separate dispersions, one for the charged fluid and one for the neutron fluid.
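As a numerical check of Eq.~(\ref{eq:ApproximatePModeDispersion}), the sketch below (with illustrative, non-physical inputs) confirms that the two sound speeds reduce to the uncoupled single-fluid values when $\mu_{\rm nq}=0$, and that the two speeds repel each other once the coupling is switched on:

```python
import math

def p_mode_speeds(mu0, n_n, n_q, mu_nn, mu_qq, mu_nq, lam, nu):
    """The two p-mode sound speeds squared from the dispersion relation.
    All inputs in the examples below are illustrative numbers,
    not values from our EOS."""
    a = mu_qq / n_n + mu_nn / n_q
    b = math.sqrt((mu_qq / n_n - mu_nn / n_q)**2 + 4.0 * mu_nq**2 / (n_n * n_q))
    pre = math.exp(nu - lam) * n_n * n_q / (2.0 * mu0)
    return pre * (a + b), pre * (a - b)
```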
In the inner region of the star $r<r_t$, $k_r^2<0$ and the normal modes are exponentially damped. In the outer region $r_t<r<r_{\text{out}}$, $k_r^2>0$ and the modes are oscillatory. Matching the oscillatory solution at $r_t$ to the exponential solution in the inner region and imposing $\Psi_a(r_{\text{out}})=0$ (no propagation into the crust), the allowed $g$-modes have $k_r$ satisfying
\begin{equation}
\int_{r_t}^{r_{\text{out}}}k_r(r')dr'=\left(n_r-\frac{1}{4}\right)\pi, \quad n_r=1,2,3,...,
\label{eq:krCondition}
\end{equation}
where $n_r$ is the radial node number. Since $k_r(r)$ is a function of $\omega$, this condition determines the allowed frequencies. The radial index $n_r$ sets the number of radial nodes of $\Psi_a$, and by extension of $\Pi_a$ and $\xi^r_a$.
\subsection{Numerical results}
\subsubsection{$g$-modes}
To obtain solutions for the displacement fields $\xi^i_a$ and the $\Pi_a$, we numerically integrated the system of four first-order equations given in the core by Eqs.~(\ref{eq:XiRN}),~(\ref{eq:PerturbedContinuityN}--\ref{eq:PerturbedContinuityQ}) and~(\ref{eq:XiRQ2}), in the crust by Eqs.~(\ref{eq:XiRF}),~(\ref{eq:XiRC}) and~(\ref{eq:PerturbedContinuityF}--\ref{eq:PerturbedContinuityC}), and in the crust just above neutron drip by Eqs.~(\ref{eq:XiRB}--\ref{eq:PerturbedContinuityB}). We use a standard energy normalization to set the amplitude of the displacement fields. Reinserting factors of $c$, this condition is
\begin{align}
\frac{\omega^2}{c^2}\int_0^{R_{\text{cc}}}\int_{\Omega} {\rm d}V\mu_0\left[ (1-\epsilon_{\rm p})n_{\rm q}\xi^{*i}_{\rm q}\xi^{\rm q}_i +(1-\epsilon_{\rm n})n_{\rm n}\xi^{*i}_{\rm n}\xi^{\rm n}_{\rm i} \right. \nonumber \\
\left. + n_{\rm q}\epsilon_{\rm p}(\xi^{*i}_{\rm q}\xi^{\rm n}_{\rm i}+\xi^{*i}_{\rm n}\xi^{\rm q}_i)\right] \nonumber \\
+\frac{\omega^2}{c^2}\int_{R_{\text{cc}}}^R\int_{\Omega} {\rm d}V\mu_0(n_{\rm f}\xi^{*i}_{\rm f}\xi^{\rm f}_i+n_{\rm c}\xi^{*i}_{\rm c}\xi^{\rm c}_i) =\frac{GM^2}{R},
\end{align}
where $M$ and $R$ are the mass and radius of the star, $R_{\text{cc}}$ is the coordinate radius of the crust-core transition, $\Omega$ indicates integration over the solid angle of a sphere and ${\rm d}V={\rm e}^{\lambda/2}r^2\sin\theta {\rm d}r{\rm d}\theta {\rm d}\phi$. In the very thin single fluid region at densities just above neutron drip, $\xi^i_{\rm f}=\xi^i_{\rm c}=\xi^i_{\rm b}$. Each function ($\xi^r_{\rm n}$,$\xi^r_{\rm q}$,$\xi^r_{\rm c}$,$\xi^r_{\rm f}$,$\Pi_{\rm n}$,$\Pi_{\rm q}$,$\Pi_{\rm c}$,$\Pi_{\rm f}$,$\xi^r_{\rm b}$,$\Pi_{\rm b}$) is scaled by the same amount, since they are all linearly related.
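Because the energy functional is quadratic in the perturbation functions and all of them are linearly related, this normalization amounts to applying one common scale factor to every function. A minimal sketch (the function name and inputs are ours, not from any released code):

```python
import math

def normalize_mode(fields, mode_energy, target_energy):
    """Rescale all perturbation functions by a single factor s so that
    the mode energy equals target_energy (= GM^2/R in the text).  The
    energy is quadratic in the fields, so scaling every field by s
    scales the energy by s^2, hence s = sqrt(target/actual)."""
    s = math.sqrt(target_energy / mode_energy)
    return {name: s * value for name, value in fields.items()}, s
```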
Figures~\ref{fig:GModesMass} and~\ref{fig:GModesEntrainment} show the $l=2$ $g$-mode frequency spectrum as a function of the stellar mass and the entrainment parameter $\epsilon_{\rm p}$, respectively, for the four different EOS parametrizations described in Table~\ref{tab:EOSParameters}. The WKB frequencies for the $1.4M_{\odot}$ star with no entrainment are shown in Figure~\ref{fig:GModesEntrainment} to illustrate that they are extremely close to the exact frequencies for $n_{r,{\rm q}}\gtrsim 2$, even though we did not permit propagation into the crust in the WKB approximation. This indicates that the crust is largely unimportant to the mode frequency for the $g$-modes.
We find that our frequencies, which include general relativity, are redshifted compared to those of YW17. For example, they use the approximate $g$-mode frequency $\omega_g/2\pi\approx 590/n_{r,{\rm q}}$ Hz for their $1.40M_{\odot}$, $m_{\rm p}^*/m_{\rm N}=0.8$ and $K=230.9$ MeV~\citep{RikovskaStone2003} star, while we obtain an approximate frequency spectrum
\begin{equation}
\frac{\omega_g}{2\pi}\approx\frac{608-0.83(K-240\text{ MeV})-90\frac{M}{M_{\odot}}+297\epsilon_{\rm p}}{n_{r,{\rm q}}}\text{ Hz}, \label{eq:OmegaGApprox}
\end{equation}
which is accurate to within $\lesssim$5\% for $n_{r,{\rm q}}>2$. This formula gives $\omega_g/2\pi\approx 549/n_{r,{\rm q}}$ Hz for a $1.40M_{\odot}$, $\epsilon_{\rm p}=1-m_{\rm p}^*/m_{\rm N}=0.2$ and $K=230.9$ MeV star in our model. Our frequencies are also lower than those of KG14, who did include general relativity. In this case, the differences in the frequencies are due to the different equations of state used, which also contributed to the differences between the results in this paper and those in YW17. Eq.~(\ref{eq:OmegaGApprox}) also indicates that the $g$-mode frequency is relatively insensitive to the nuclear compressibility $K$, with the numerator changing by only $41$ Hz over the range of $K$ values used here.
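The arithmetic behind the quoted $\approx 549/n_{r,{\rm q}}$ Hz value can be checked directly from Eq.~(\ref{eq:OmegaGApprox}); a small sketch:

```python
def omega_g_fit(n_rq, K=240.0, M=1.4, eps_p=0.0):
    """Fitting formula for the g-mode frequency in Hz
    (K in MeV, M in solar masses, eps_p the entrainment parameter):
    (608 - 0.83*(K - 240) - 90*M + 297*eps_p) / n_rq."""
    return (608.0 - 0.83 * (K - 240.0) - 90.0 * M + 297.0 * eps_p) / n_rq
```

For $M=1.40$, $K=230.9$ MeV and $\epsilon_{\rm p}=0.2$ the numerator evaluates to $608+7.6-126+59.4\approx 549$ Hz, as quoted.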
As expected from the $(1+y)=(1-Y(1+\epsilon_{\rm p}))/(1-\epsilon_{\rm p}-Y)$ proportionality of the Brunt--V\"{a}is\"{a}l\"{a} frequency, the $g$-mode frequencies increase as the entrainment parameter $\epsilon_{\rm p}$ is increased. We find that the frequencies increase by a factor of $\sim 1.4$ from the $\epsilon_{\rm p}=0$ to the $\epsilon_{\rm p}=0.5$ values, in agreement with an expected scaling factor of $1/\sqrt{1-\epsilon_{\rm p}}=\sqrt{2}$ for low $Y$. However, we do not find an increase as large as found in YW17, with our frequencies with $\epsilon_{\rm p}=0.5$ being a factor of $\sim1.5$ lower than theirs with $m^*_{\rm p}/m_{\rm N}=0.4$. Possible reasons for the disagreement are the differences in the equation of state and in the structure of the star, which we compute by solving the TOV equation. The decrease in $\omega_g$ with $K$ is also expected from the inverse relationship between $N_{\rm q}$ and $K$ as seen in Figure~\ref{fig:BVFrequencyVarK}, though this decrease is small (hence the relative insensitivity to $K$) because the $g$-modes can propagate over a longer distance in higher $K$ stars due to their greater radii. The decrease in $\omega_g$ with $M$, even though the maximum value of $N_{\rm q}$ in the star increases with $M$, is explained as follows. Figure~\ref{fig:BVFrequency} shows that $N_{\rm q}$ becomes more peaked as a function of mass, meaning that the region of the star where $k_r$ is real (between $r_t$ and $r_{\text{out}}$ in Eq.~(\ref{eq:krCondition})) is smaller for large $M$. For large $n_{r,{\rm q}}$, Eq.~(\ref{eq:krCondition}) becomes
\begin{equation}
\omega_g \approx \frac{l}{n_{r,{\rm q}}\pi} \int_{r_t}^{r_{\text{out}}} \frac{N_{\rm q}}{r}dr,
\label{eq:ApproximatekrCondition}
\end{equation}
so $\omega_g$ is smaller for a particular $n_{r,{\rm q}}$ when the range of integration is smaller, or when $M$ is larger.
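Eq.~(\ref{eq:ApproximatekrCondition}) makes this argument quantitative: shrinking the propagation region $[r_t,r_{\text{out}}]$ lowers $\omega_g$ at fixed $n_{r,{\rm q}}$. A sketch with a toy (constant) $N_{\rm q}$ profile and trapezoid-rule quadrature (all numbers illustrative):

```python
import math

def omega_g_wkb(l, n_rq, N_q, r_t, r_out, steps=2000):
    """Large-n_rq WKB estimate of the g-mode frequency:
    omega_g = l / (n_rq * pi) * integral of N_q(r)/r from r_t to r_out,
    evaluated with the trapezoid rule.  N_q is any callable profile."""
    h = (r_out - r_t) / steps
    rs = [r_t + i * h for i in range(steps + 1)]
    vals = [N_q(r) / r for r in rs]
    integral = h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
    return l * integral / (n_rq * math.pi)
```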
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{CombinedGModePlot.pdf}
\caption{$l=2$ $g$-mode (cyclical) frequencies for different values of the stellar mass and grouped by the EOS parametrization, denoted in the bottom left corner of each subplot. The entrainment in the core was set to zero when computing these frequencies.}
\label{fig:GModesMass}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{EntrainmentGModePlot.pdf}
\caption{$l=2$ $g$-mode (cyclical) frequencies for different values of the entrainment parameter $\epsilon_{\rm p}$, grouped by the EOS parametrization, denoted in the bottom left corner of each subplot. The WKB frequencies for a zero entrainment, $1.4M_{\odot}$ star with EOS parametrization PC1 are included in the upper left subplot. All stellar models used in this plot are of mass $1.4M_{\odot}$.}
\label{fig:GModesEntrainment}
\end{figure*}
Figure~\ref{fig:GModesDisplacementFields} shows the displacement fields $\xi^r(r)$ and $\xi^{\perp}(r)$ for a few representative $l=2$ $g$-modes in the $1.40M_{\odot}$ star. Since the leptonic Brunt--V\"{a}is\"{a}l\"{a} frequency only acts on the charged fluid, the amplitude of the charged component is two orders of magnitude larger than that of the neutron component, and the neutron component is pulled along by the charged component through the thermodynamic coupling term $\mu_{\rm nq}$ (and also by the entrainment if $\epsilon_{\rm p}\neq 0$). In the crust, the $g$-mode frequencies are too low to excite oscillatory motion, and thus both the nuclear and neutron fluid displacements damp.
In the core, the charged component displacement fields are in reasonable agreement with YW17, but the neutron components have important differences. Our crust-core transition conditions change the oscillatory structure of the neutron component displacement fields, shifting them away from $\xi^r=0$ in the outer part of the core. This justifies our specification of the $g$-modes using $n_{r,{\rm q}}$, the radial node number of the charged fluid in the core. This is in contrast to the results of YW17, who assumed a single normal fluid in the crust and imposed a crust-core transition condition (Eq.~(B40) in YW17) that is equivalent to making both superfluid displacement fields equal. As the entrainment is increased in strength and the neutron fluid is forced to move along with the charged fluid to an even greater extent, we find that the radial nodes of the neutron fluid reappear at the locations of the charged fluid nodes. We cannot compare our results to YW17 in the crust because, unlike them, we treat the crust as two fluids. We also do not compare our results to KG14, who do not show any displacement fields and who also use a single-fluid crust.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{GModesPlot.pdf}
\caption{Displacement fields $\xi^r$ and $\xi^{\perp}$ for four $l=2$ $g$-modes in a $1.40M_{\odot}$, zero entrainment star with EOS parametrization PC1: ($n_{r,{\rm q}}$, $\omega/2\pi$)=(1, 435.2 Hz), (3, 155.8 Hz), (5, 95.34 Hz) and (10, 48.84 Hz). The crust-core interface is indicated by the thin line at $11.27$ km. To the left of this line, the displacement fields are $(\xi^r_{\rm n},\xi^r_{\rm q},\xi^{\perp}_{\rm n},\xi^{\perp}_{\rm q})$, while to the right they are $(\xi^r_{\rm f},\xi^r_{\rm c},\xi^{\perp}_{\rm f},\xi^{\perp}_{\rm c})$. $(\xi^r_{\rm b},\xi^{\perp}_{\rm b})$, which do not vary much over the very thin region ($\sim 10$ m) of single fluid above neutron drip, are not shown.}
\label{fig:GModesDisplacementFields}
\end{figure*}
\subsubsection{$p$-modes}
Figure~{\ref{fig:PModesDisplacementFields}} shows four distinct $l=2$ $p$-modes for a zero entrainment ($\epsilon_{\rm p}=0$), $1.4M_{\odot}$ star, the first of which is actually an $n_{r,{\rm n}}=n_{r,{\rm q}}=0$ $f$-mode. These illustrate that 1) there are twice as many $p$-modes since there are two fluids, a result which is well-known \citep{Lindblom1994,Lee1995,Gualtieri2014}, including multiple modes with the same radial node number for one or both fluids, and 2) the fluids need not oscillate in phase, meaning the $n$ and $q$ fluids can have different numbers of radial nodes. In fact, we find that, for $\epsilon_{\rm p}=0$, most of the $p$-modes for a two-superfluid star behave as if the two fluids are (almost) uncoupled. This agrees with previous work \citep{Gusakov2011,Gualtieri2014}. This means that the core WKB result Eq.~(\ref{eq:ZerothOrderPhase}) does not apply for all $p$-modes, since it assumes that the two fluids have identical phase, which is not necessarily true. In contrast to the $q$-led $g$-modes, the amplitudes of the $n$ and $q$ components of the $p$-modes are comparable. Additionally, the crust displacement fields can have multiple radial nodes, even with the crust constituting only a few percent of the star's radial extent, since the wave number for the $p$-mode is often significantly smaller in the crust than in the core. We can classify $p$-modes by calculating the baryon current $Y\xi^r_{\rm q}+(1-Y)\xi^r_{\rm n}$: those with small baryon current compared with $\xi_{\rm n}^r-\xi_{\rm q}^r$ are classified as superfluid modes, denoted ``$s_i$'', while all others are classified as normal fluid modes, denoted ``$p_i$''. This is similar to the classification schemes of~\citet{Lindblom1994} and~\citet{Lee1995}, which are based on quantities related to our $\Pi_{\rm q}$ and $\Pi_{\rm n}$.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{PModesPlot.pdf}
\caption{Displacement fields $\xi^r$ and $\xi^{\perp}$ for the $l=2$ $f$-mode and three $p$-modes in a $1.40M_{\odot}$, zero entrainment star: ($x_i$, $n_{r,{\rm q}}$, $n_{r,{\rm n}}$, $\omega/2\pi$)=($f$, 0, 0, 2302 Hz), ($p_1$, 1, 1, 6576 Hz), ($p_2$, 4, 2, 9703 Hz) and ($s_3$, 3, 2, 10378 Hz), where $x_i$ refers to the standard classification of the mode and its order as a subscript. The crust-core interface is indicated by the thin line at $11.27$ km. To the left of this line, the displacement fields are $(\xi^r_{\rm n},\xi^r_{\rm q},\xi^{\perp}_{\rm n},\xi^{\perp}_{\rm q})$, while to the right they are $(\xi^r_{\rm f},\xi^r_{\rm c},\xi^{\perp}_{\rm f},\xi^{\perp}_{\rm c})$. $(\xi^r_{\rm b},\xi^{\perp}_{\rm b})$, which do not vary much over the very thin region ($\sim 10$ m) of single fluid above neutron drip, are not shown. The radial node numbers $n_{r,{\rm q}}$ and $n_{r,{\rm n}}$ for each fluid for each mode are indicated in the upper left of each plot. The displacement fields have been scaled by factors of $\overline{n}^{1/2}_a=(n_a/n_{\text{nuc}})^{1/2}$, which accounts for the abrupt jumps occurring at the crust-core transition.}
\label{fig:PModesDisplacementFields}
\end{figure*}
Figure~\ref{fig:PModesScatter} plots the radial node numbers in the core for each $p$-mode as a function of the mode frequency with zero entrainment. We plot $n_{r,{\rm n}}$ and $n_{r,{\rm q}}$ separately for modes in which they are not identical and only one of them for modes in which they are the same. For the $p$-modes with $n_{r,{\rm n}}\neq n_{r,{\rm q}}$, the two components of the mode each roughly obey the uncoupled fluid dispersion relations $(k^{\rm n}_r)^2\approx M_{\rm nn}$ and $(k^{\rm q}_r)^2\approx M_{\rm qq}$, while those with $n_{r,{\rm n}}=n_{r,{\rm q}}$ roughly obey one of the two (coupled) WKB results given by Eq.~(\ref{eq:ZerothOrderPhase}). The modes of the latter type are labeled distinctly based on which solution $(k_r^2)_{\pm}$ they follow most closely. The separate, uncoupled dispersion relations obeyed by most $p$-modes suggest that they are formed from separate $n$ and $q$ oscillations which are paired together through the weak thermodynamic coupling (in the case of zero entrainment) due to having similar frequencies, with the pairing shifting the mode away from either of the exact frequencies that the uncoupled fluid modes would have. This means that the $n$ and $q$ components of each mode are not required to have the same node number, which is what we observe. The frequency residuals $\Delta\omega_{\rm p}$ compared to the uncoupled fluid or WKB result are shown in the right panel. For the nearly uncoupled modes, these were obtained by comparing the numerically calculated frequency to the expected frequency for the separate fluid components for a given $n_{r,{\rm n}}$ or $n_{r,{\rm q}}$ as calculated using $(k^{\rm n}_r)^2\approx M_{\rm nn}$ and $(k^{\rm q}_r)^2\approx M_{\rm qq}$ and Eq.~(\ref{eq:krCondition}).
For the $n_{r,{\rm n}}=n_{r,{\rm q}}$ modes, the expected frequency was calculated for a given $n_r$ by using Eq.~(\ref{eq:krCondition}) and the WKB solution from Eq.~(\ref{eq:ZerothOrderPhase}) which gave the smallest frequency difference for each mode. $\Delta\omega_{\rm p}$ is small for most modes, indicating that they are well-described by either the nearly uncoupled or standard WKB dispersions. Many of the residuals for the high-frequency uncoupled-type neutron fluid modes are large, suggesting that for these modes the charged part of the mode could be ``pulling'' the neutron part towards being an $n_{r,{\rm n}}=n_{r,{\rm q}}$, charged fluid-like mode obeying the dispersion relation $(k_r^2)_{-}$.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{CombinedPModePlot.pdf}
\caption{Left: $l=2$ $p$-mode radial node numbers $n_{r,{\rm n}}$ and $n_{r,{\rm q}}$ plotted as a function of the frequency of the corresponding mode for a $1.40M_{\odot}$, zero entrainment star. The $f$ and $s_0$ ($\omega/2\pi=28687$ Hz) modes are not shown. Modes where $n_{r,{\rm n}}\neq n_{r,{\rm q}}$ have the radial node numbers of the $n$ and $q$ displacement fields denoted separately, but are paired, i.e. there are two ticks at the same frequency $\omega_{\rm p}$, one ($+$) denoting the value of $n_{r,{\rm n}}$ and the other (x) denoting $n_{r,{\rm q}}$. Modes where $n_{r,{\rm n}}=n_{r,{\rm q}}$ are denoted by distinct symbols depending on whether they more closely follow the $(k_r^2)_{+}$ ($n$-dominated, denoted by a triangle) or $(k_r^2)_{-}$ ($q$-dominated, denoted by a square) WKB dispersion relation.
Right: Residuals $\Delta\omega_{\rm p}$ between the full numerically calculated $p$-mode frequencies and those that an uncoupled $n$ or $q$ mode of identical $n_{r,{\rm n}}$/$n_{r,{\rm q}}$ would have (for $n_{r,{\rm n}}\neq n_{r,{\rm q}}$) \textit{or} between the fully numerically calculated $p$-mode frequencies and the nearest coupled WKB frequency corresponding to the same radial node number $n_{r,{\rm n}}=n_{r,{\rm q}}$.}
\label{fig:PModesScatter}
\end{figure*}
We also calculated the $p$-modes for a $1.40M_{\odot}$ star with strong entrainment $\epsilon_{\rm p}=0.5$. As expected, this drastic increase in the entrainment reduces the difference in the radial node number between the two fluids to at most $\pm 1$. It additionally forces the modes towards the neutron-dominated WKB dispersion $(k_r^2)_{+}\approx M_{\rm nn}$, as shown in Figure~\ref{fig:PModesEntrainment}. That it is this solution that is selected, as opposed to the charged fluid-dominated one, suggests that the $p$-modes can be thought of as neutron-dominated in the same way that the $g$-modes can be considered charged fluid-dominated, with this shift arising because the entrainment coefficient in the neutron equations, $\epsilon_{\rm n}=(n_{\rm q}/n_{\rm n})\epsilon_{\rm p}$, is about an order of magnitude smaller than $\epsilon_{\rm p}$, which appears in the charged fluid equations.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{EntrainmentPModesScatter.pdf}
\caption{$p$-modes for a $1.40M_{\odot}$, $\epsilon_{\rm p}=0.5$ star with parametrization PC1 for the EOS, and $n_r$ (including fractional values) as a function of $\omega$ determined from Eq.~(\ref{eq:krCondition}) using $(k_r^2)_{+}\approx M_{\rm nn}$.}
\label{fig:PModesEntrainment}
\end{figure}
A final point of interest concerning the $p$-modes is the possible existence of pairs of distinct $p$-modes which are closely spaced in frequency, as opposed to the nearly uniform frequency spacing expected in the single fluid or WKB two-fluid cases. An example of such a mode pair, found for the $1.40M_{\odot}$, $K=230$ MeV, $\epsilon_{\rm p}=0$ star, is ($n_{r,{\rm q}}=6$, $n_{r,{\rm n}}=6$, $\omega_{\rm p}/2\pi=24116$ Hz) and ($n_{r,{\rm q}}=10$, $n_{r,{\rm n}}=9$, $\omega_{\rm p}/2\pi=24264$ Hz), which have a frequency spacing of the order of the $g$-mode frequencies. A similar phenomenon is observed in the finite temperature calculation of \citet{Gualtieri2014}, where the $p$-mode frequencies become very similar at certain ``resonance'' temperatures, though our results indicate that nearly-resonant $p$-modes can occur at any temperature. These mode pairs could provide a source of large nonlinear mode couplings for the two-superfluid version of the $p$--$g$ instability discussed in recent papers~\citep{Weinberg2013,Venumadhav2014,Weinberg2016}. These instabilities may be observable through phase shifts in the gravitational waveforms of binary neutron star mergers~\citep{Essick2016,Andersson2018}.
\section{Conclusions}
We have calculated the $g$- and $p$-modes of a superfluid star with leptonic buoyancy using a specific model for nuclear matter in the core and the crust. We have included general relativity and a two-fluid crust when computing the normal modes, finding that the crust-core interface conditions for the displacement fields change the neutron components of the $g$-mode displacement fields in the core by removing many of their radial nodes. In order to compute the modes, we have developed a simple but flexible equation of state for both crust and core which contains all of the thermodynamics required by our formalism. This allowed us to compute oscillation modes for a range of stellar and nuclear physics parameters, and our EOS can be easily adjusted to agree with new neutron star or nuclear physics measurements. In general our leptonic buoyancy $g$-mode frequencies are similar to those found previously, considering differences in the equations of state used to model the star and redshift factors, and are dominated by the charged fluid in which the buoyancy exists. We find that the $g$-mode frequencies increase with entrainment and decrease with stellar mass and nuclear compressibility, with only weak dependence on the latter. Our decomposition of the fluid into neutron and charged components clearly illustrates that the neutrons are pulled along by the charged fluid in the $g$-modes through thermodynamic coupling and entrainment, and otherwise would not participate in the $g$-mode.
In contrast, for zero entrainment, we reproduce earlier results~\citep{Gusakov2011,Gualtieri2014} that most of the $p$-modes behave as nearly uncoupled fluids, with the weak coupling between the two superfluids leading to pairing between uncoupled $n$- and $q$-fluid modes with similar frequencies. This results in $p$-modes whose charged and neutron components can have widely-differing radial node numbers, and in $p$-modes with frequency differences on the order of the $g$-mode frequencies. These could thus contribute to the recently proposed tidal $p$--$g$ or related instabilities, which depend on nonlinear couplings between $p$- and $g$-modes. For large entrainment, we find ``neutron-dominated'' $p$-modes, in which the phases of the two superfluids in the core are nearly the same so that they almost behave as a single neutron fluid.
As mentioned briefly by YW17 and incorporated in a recent paper~\citep{Yu2017a}, we should include hyperons in the neutron star core, as the chemical potential above $\sim 3n_{\text{nuc}}$ reaches the bare rest mass of the $\Lambda$ hyperon. This will lead to a softening of the equation of state and may provide additional hyperon superfluids which couple thermodynamically to the neutron and charged fluids, or if the hyperons are not superfluid, a hyperonic Brunt--V\"{a}is\"{a}l\"{a} frequency which can modify the $g$-modes in the inner core~\citep{Dommes2016}. If the star is able to contain $\Xi^{-}$ hyperons it could have a hyperonic Brunt--V\"{a}is\"{a}l\"{a} frequency even if the hyperons are superfluid, since the $\Xi^{-}$ would be expected to comove with the protons to which they are electrostatically coupled. Such hyperonic buoyancy would shift the $g$-mode frequencies obtained from leptonic buoyancy alone, which could be used as an indicator of the presence of hyperons in neutron stars if the resulting gravitational waveform phase shifts from the resonant excitation of these $g$-modes in binary neutron star inspirals could be measured. However, if the EOS is softened too much by the hyperons, it could become difficult for it to allow stars of mass $>2M_{\odot}$, as reaching this mass already required large nuclear compressibilities or high central densities.
\section*{Acknowledgements}
This work was supported in part by NASA ATP grant NNX13AH42G. PBR was also supported in part by the Boochever Fellowship at Cornell for fall 2017. We also thank the referee for many helpful comments that improved our paper.
\section{Introduction}
Grasping is one of the most fundamental manipulation skills in everyday life. Robotic grasping has developed rapidly in recent years, but it still falls far behind human performance and remains an unsolved problem. For example, when humans encounter a stack of objects as shown in Fig. \ref{fig1}, they instinctively know how to grasp them. For a robot planning to grasp a stack of objects, the process includes detecting the objects and their grasps, determining the grasping order and finally planning the grasping motion. These problems remain challenging and hinder the widespread use of robots in everyday life.
Recently, with its rapid development, deep learning has proved to be a powerful tool in computer vision, making impressive breakthroughs in many research fields such as image classification\cite{imagenet} and object detection\cite{SSD,c5,c4}. The reason deep learning has made so much progress is that, through training, deep networks can extract features of objects or images that are far superior to hand-designed ones \cite{DL}. These advantages have encouraged researchers to apply deep learning to robotics. Some recent works have proved the effectiveness of deep learning and convolutional neural networks (CNNs) in robotic perception \cite{spatialae,handeyegrasp} and control \cite{opendoor,endtoend}. In particular, deep learning has achieved unprecedented performance in robotic grasp detection \cite{lenz,redmongrasp,selfsupervisedgrasp,resnetgrasp}. Most current robotic grasp detection methods take RGB or RGB-D images as input and output vectorized, standardized grasps. These algorithms essentially solve an object detection problem: they treat grasps as a special kind of object and detect them with a neural network.
\begin{figure}[tb]
\center{\includegraphics[scale=0.32]{Motivation.pdf}}
\caption{Importance of manipulation order. Left: a cup is on a book. Right: a phone is on a box. As shown in the two scenes, if manipulation relationships are ignored and the robot chooses to pick up the book or the box first, the cup or the phone may be dumped or even broken. }
\label{fig1}
\end{figure}
However, grasp detection algorithms of this type can only deal with scenes containing a single target: the robot simply executes the grasp with the highest confidence score. Doing so can have a devastating effect in some multi-object scenes. For example, as shown in Fig. \ref{fig1}, a cup is placed on a book; if the detected grasp with the highest confidence score belongs to the book, meaning the robot chooses to pick up the book first, the cup may fall and break. Therefore, in this paper, we focus on helping the robot find the right grasping order when it faces a stack of objects, which we define as {\bf manipulation relationship prediction}.
Some recent works have used convolutional neural networks to predict the relationships between objects rather than merely detect them\cite{c1,c2,rel1,rel2}. These works show that convolutional neural networks have the potential to understand the relationships between objects. Therefore, we aim to establish a neural-network-based method so that the robot can understand the manipulation relationships between objects in multi-object scenes, helping it accomplish more complicated grasping tasks.
In our work, we design a new network architecture named Visual Manipulation Relationship Network (VMRN) to simultaneously detect objects and predict manipulation relationships. The architecture has two stages: the output of the first stage is the object detection result, and the output of the second stage is the predicted manipulation relationships. To train our network, we contribute a new dataset called the Visual Manipulation Relationship Dataset (VMRD). It contains 5185 images of hundreds of objects with 51530 manipulation relationships, together with the category and location information of each object; the annotation format follows the PASCAL VOC dataset. In summary, the contributions of our work are threefold:
\begin{itemize}
\item We design a new convolutional neural network architecture to simultaneously detect objects and predict manipulation relationships, which meets the real-time requirements in robot tasks.
\item We collect a new dataset of hundreds of graspable objects, which includes the location and category information and the manipulation relationships between pairs of objects.
\item To the best of our knowledge, this is the first end-to-end architecture that predicts robotic manipulation relationships directly from images with a convolutional neural network.
\end{itemize}
The rest of this paper is organized as follows: Section II reviews the background and related works; Section III introduces the problem formulation, including object detection and the representation of manipulation relationships; Section IV details our approach and network, including the training method; Section V presents the experimental results of manipulation relationship prediction along with image-based and object-based tests, including some subjective results; finally, Section VI concludes and discusses future work.
\section{Background}
\subsection{Object Detection}
Object detection is the process of taking an image containing several objects as input and locating and classifying as many target objects as possible in the image. Sliding windows used to be the most common detection approach: features of the target objects, such as HOG\cite{HOG} and SIFT\cite{SIFT}, are first extracted and then used to train a classifier, such as a Support Vector Machine, to classify the candidates produced by the sliding-window stage. The Deformable Parts Model (DPM)\cite{DPM} is the most successful algorithm of this type.
Recently, object detection algorithms based on deep features, such as the Region-based CNN (RCNN) family\cite{c4,c5} and the Single Shot Detector (SSD) family\cite{SSD}, have been shown to drastically outperform previous algorithms based on hand-designed features. Based on the detection process, the main algorithms fall into two types: one-stage algorithms such as SSD\cite{SSD} and two-stage algorithms such as Faster RCNN\cite{c5}. One-stage algorithms are usually faster, while two-stage algorithms often achieve better results \cite{objdetecreview}.
Our work focuses not only on object detection but also on manipulation relationship prediction. The challenge is how to combine the relationship prediction stage with the object detection stage. To solve this problem, we design the Object Pairing Pooling Layer, which generates the input of the manipulation relationship predictor from the object detection results and the convolutional feature maps. The details are described in the following sections.
\subsection{Visual Relationship Detection}
Visual relationship detection means understanding the relationships between objects in an image. Some previous works try to learn spatial relationships\cite{relseg,relspatial}. Later, researchers attempted to collect relationships from images and videos and to help models map these relationships from images to language\cite{rel3,rel4,rel5,rel6}. Recently, with the help of deep learning, relationship prediction between objects has made great progress\cite{c1,c2,rel1,rel2}. Lu et al. \cite{c1} collect a new dataset of object relationships called the Visual Relationship Dataset and propose a relationship prediction model consisting of visual and language parts, which outperforms previous methods. Liang et al. \cite{c2} are the first to combine deep reinforcement learning with relationship prediction; their model sequentially outputs the relationships between objects in an image. Yu et al. \cite{rel1} use internal and external linguistic knowledge to compute the conditional probability distribution of a predicate given a $(subject, object)$ pair, achieving better performance. Dai et al. \cite{rel2} propose an integrated framework called the Deep Relational Network for exploiting the statistical dependencies between objects and their relationships.
These works focus on relationships between objects represented by linguistic information, not manipulation relationships. In our work, we adapt relationship detection methods to help the robot find the right order in which objects should be manipulated. Because of the real-time requirements of robot systems, we also need to accelerate the prediction of manipulation relationships. Therefore, we propose an end-to-end architecture different from all previous works.
\section{Problem Formulation}
\begin{figure}[b]
\center{\includegraphics[scale=0.25]{Example.pdf}}
\caption{An example of a manipulation relationship tree. Left: an image including several objects. Middle: all pairs of objects and their manipulation relationships. Right: the manipulation relationship tree, in which the leaf nodes should be manipulated before the other nodes.}
\label{fig2}
\end{figure}
\begin{figure*}[tb]
\center{\includegraphics[scale=0.32]{Network.pdf}}
\caption{Network architecture of VMRN. Input of the network is $3 \times 300 \times 300$ images including several graspable objects. Feature extractor is a stack of convolution layers ($e.g.$ VGG \cite{c6} or ResNet \cite{c7}), which output feature maps with size of $512 \times 38 \times 38$. These features are used by object detector and OP$^2$L to respectively detect objects and generate the feature groups of all possible object pairs which are used to predict manipulation relationships by manipulation relationship predictor.}
\label{fig3}
\end{figure*}
\subsection{Object Detection}
The output of object detection is the location and category of each object. The location is represented by an axis-aligned bounding box, a 4-dimensional vector ${loc} = (x_{min}, y_{min}, x_{max}, y_{max})$, where $(x_{min}, y_{min})$ and $(x_{max}, y_{max})$ are the coordinates of the upper-left and lower-right vertices, respectively. The category is encoded as an integer, the index of the maximum classification confidence score: ${cls} = index(max(conf_{1},...,conf_{c}))$, where $c$ is the number of categories and $(conf_{1},...,conf_{c})$ is the confidence score vector, each element standing for the likelihood that the object belongs to the corresponding category.
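The encoding above can be sketched in a few lines; this is an illustrative example (not the authors' code), with hypothetical confidence scores and box coordinates:

```python
# Sketch of encoding a detection as the vector (cls, x_min, y_min, x_max, y_max).
def encode_detection(conf, box):
    """conf: per-category confidence scores; box: (x_min, y_min, x_max, y_max)."""
    cls = max(range(len(conf)), key=lambda k: conf[k])  # index of the max score
    return (cls,) + tuple(box)

conf = [0.1, 0.7, 0.2]              # hypothetical scores for 3 categories
box = (12, 30, 96, 150)             # hypothetical bounding box
print(encode_detection(conf, box))  # (1, 12, 30, 96, 150)
```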
\subsection{Manipulation Relationship Representation}
In this paper, the manipulation relationship is the grasping order. Therefore, we need an objective criterion to determine this order, described as follows: if moving one object would affect the stability of another object, that object should not be moved first. Since we focus only on the manipulation relationships between objects and not on linguistic information, a tree-like structure (two nodes may share a child), called the \textbf{manipulation relationship tree} in the following, can be constructed to represent the manipulation relationships of all objects in each image. Objects are represented by nodes, and parent-child relationships between nodes indicate manipulation relationships: the object represented by a parent node should be grasped after the objects represented by its child nodes. Fig. \ref{fig2} shows an example of the manipulation relationship tree. A pen is on a remote controller, and the remote controller, an apple and a stapler are on a book. Therefore, the pen is the child of the remote controller, and the remote controller, the apple and the stapler are children of the book in the manipulation relationship tree.
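A grasping order can be read off the tree by repeatedly removing leaf nodes (objects that are the parent of no remaining object). A minimal sketch, assumed and not from the paper, using the Fig. \ref{fig2} example:

```python
# Derive a grasping order from parent-child pairs: parents are grasped after
# their children, so objects with no remaining children (leaves) go first.
def grasp_order(objects, parent_child):
    """parent_child: iterable of (parent, child) pairs."""
    remaining = set(objects)
    pairs = set(parent_child)
    order = []
    while remaining:
        leaves = {o for o in remaining if not any(p == o for p, _ in pairs)}
        order.append(sorted(leaves))  # leaves may be grasped in any order
        remaining -= leaves
        pairs = {(p, c) for p, c in pairs if p not in leaves and c not in leaves}
    return order

# Fig. 2 scene: a pen on a remote; remote, apple and stapler on a book.
objs = ["pen", "remote", "apple", "stapler", "book"]
rels = [("remote", "pen"), ("book", "remote"),
        ("book", "apple"), ("book", "stapler")]
print(grasp_order(objs, rels))
# [['apple', 'pen', 'stapler'], ['remote'], ['book']]
```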
\section{Proposed Approach}
The proposed network architecture is shown in Fig. \ref{fig3}. The inputs of our network are images and outputs are object detection results and manipulation relationship trees. Our network consists of three parts: feature extractor, object detector and manipulation relationship predictor, with parameters denoted by $\Phi$, $\Omega$ and $\Theta$ respectively.
In our work, taking into account the real-time requirements of object detection, we use the Single Shot Detector (SSD)\cite{SSD} as our object detector. SSD is a one-stage CNN-based object detection algorithm that utilizes multi-scale feature maps to regress and classify bounding boxes, adapting to object instances of different sizes. The input of the object detector is the convolutional feature maps (in our work, VGG \cite{c6} or ResNet50 \cite{c7} features). Through object classification and multi-scale location regression, we obtain the final object detection results, each a 5-dimensional vector $(cls, x_{min}, y_{min}, x_{max}, y_{max})$. The Object Pairing Pooling Layer (OP$^2$L) then takes the object detection results and the convolutional features as input; by traversing all possible pairs of objects, its outputs are concatenated into a mini-batch for predicting manipulation relationships. Finally, the manipulation relationship between each pair of objects is predicted by the manipulation relationship predictor.
\subsection{Object Pairing Pooling Layer}
OP$^2$L is designed to enable end-to-end training of the whole network. In our work, the weights of the feature extractor $\Phi$ are shared by the manipulation relationship predictor and the object detector. OP$^2$L is inserted between the feature extractor and the manipulation relationship predictor as in Fig. \ref{fig3}, taking object locations ($e.g.$ the online output of object detection or the offline ground truth bounding boxes) and the shared feature maps $CNN(I;\Phi)$ as input, where $I$ is the input image. It finds all possible pairs of objects ($n$ objects yield $n(n-1)$ pairs) and assembles their features into a mini-batch to train the manipulation relationship predictor. In complex visual relationship prediction tasks, traversing all possible object pairs is time-consuming \cite{relpn} because of the large number of objects in the scene and the sparsity of the relationships between them. In our manipulation relationship prediction task, however, there are only a few objects in the scene, so traversing all object pairs does not take long.
Let $O_i$ and $O_j$ stand for an object pair. OP$^2$L generates the features of $O_i$ and $O_j$, denoted by $CNN(O_i,O_j;\Phi)$, which include the features of the two objects and of their union. In detail, the features are cropped from the shared feature maps and adaptively pooled into a fixed spatial size $H \times W$ ($e.g.$ $7 \times 7$). The gradients with respect to the same object or union bounding box coming from the manipulation relationship predictor are accumulated and propagated backward to the front layers.
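The crop-and-pool step can be sketched as follows. This is a hedged illustration, not the paper's implementation: the feature-map shape follows Fig. \ref{fig3}, but the choice of average pooling and the bin boundaries (in the style of adaptive average pooling) are assumptions.

```python
import numpy as np

# Crop a box from a shared (C, H, W) feature map and adaptively average-pool
# it to a fixed out_h x out_w grid, as OP^2L does for each object and union box.
def crop_and_pool(feat, box, out_h=7, out_w=7):
    """feat: (C, H, W) array; box: (x_min, y_min, x_max, y_max) in feature coords."""
    x0, y0, x1, y1 = box
    region = feat[:, y0:y1, x0:x1]
    c, h, w = region.shape
    pooled = np.empty((c, out_h, out_w))
    for i in range(out_h):          # adaptive bins: each output cell averages
        for j in range(out_w):      # over a roughly h/out_h x w/out_w patch
            ys, ye = i * h // out_h, max((i + 1) * h // out_h, i * h // out_h + 1)
            xs, xe = j * w // out_w, max((j + 1) * w // out_w, j * w // out_w + 1)
            pooled[:, i, j] = region[:, ys:ye, xs:xe].mean(axis=(1, 2))
    return pooled

feat = np.random.rand(512, 38, 38)            # shared feature map as in Fig. 3
pooled = crop_and_pool(feat, (5, 5, 20, 25))
print(pooled.shape)                           # (512, 7, 7)
```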
\begin{figure}[tb]
\center{\includegraphics[scale=0.35]{onlinelabel.pdf}}
\caption{Method to label online data. First, we match the predicted bounding boxes to the ground truth by areas of overlap. Then we use the manipulation relationship between ground truth bounding boxes as the ground truth manipulation relationship between predicted bounding boxes to generate online data used to train manipulation relationship predictor. }
\label{fig4}
\end{figure}
\subsection{Training Data of Relation Predictor}
An extra CNN branch is cascaded after OP$^2$L to predict the manipulation relationships between objects. The training data for the manipulation relationship predictor, $D_{RP}$, is generated by OP$^2$L and includes two parts: online data $D_{on}$ and offline data $D_{off}$, coming from the object detection results and the ground truth bounding boxes respectively; that is, $D_{RP}=D_{on} \cup D_{off}$. For each image, $D_{RP}$ is a set of CNN features $CNN(O_i,O_j;\Phi)$ of all possible object pairs together with their labels $(O_i,R,O_j)$, where $R$ is the manipulation relationship between $O_i$ and $O_j$. We mix online and offline data because online data can be seen as an augmentation of the offline data, while offline data can be seen as a correction of the online data. Manipulation relationships between online object instances are labeled according to the manipulation relationships between the ground truth bounding boxes that maximally overlap them. As shown in Fig. \ref{fig4}, with the object detection result on the right, the manipulation relationship between the mobile phone and the box is determined in two steps: 1) match the detected bounding boxes of the mobile phone and the box to the ground truth ones by overlap; 2) use the manipulation relationship between the ground truth bounding boxes to label the manipulation relationship of the detected bounding boxes.
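The two-step online labeling can be sketched as below. This is an assumed illustration: boxes are $(x_{min}, y_{min}, x_{max}, y_{max})$ tuples, matching is by maximum IoU, and the toy boxes and labels are made up.

```python
# Intersection-over-union between two axis-aligned boxes.
def iou(a, b):
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

# Step 1: match each detected box to the max-overlap ground-truth box.
# Step 2: copy the ground-truth relationship labels onto detected pairs.
def label_online(detected, gt_boxes, gt_rel):
    """gt_rel: dict mapping (gt_i, gt_j) index pairs to a relationship label."""
    match = [max(range(len(gt_boxes)), key=lambda g: iou(d, gt_boxes[g]))
             for d in detected]
    return {(i, j): gt_rel[(match[i], match[j])]
            for i in range(len(detected))
            for j in range(len(detected)) if i != j}

gt = [(0, 0, 10, 10), (20, 20, 40, 40)]       # toy ground-truth boxes
det = [(1, 1, 11, 11), (19, 21, 41, 39)]      # toy detections
rel = {(0, 1): "parent", (1, 0): "child"}
print(label_online(det, gt, rel))             # {(0, 1): 'parent', (1, 0): 'child'}
```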
\subsection{Loss Function of Relation Predictor}
In our work, there are three manipulation relationship types between any two objects in an image:
\begin{itemize}
\item object 1 is the parent of object 2;
\item object 2 is the parent of object 1;
\item object 1 and object 2 have no manipulation relationship.
\end{itemize}
Therefore, manipulation relationship prediction is essentially a three-way classification problem for any object pair $CNN(O_i,O_j;\Phi)$. Let $\Theta$ denote the weights of the relationship prediction branch. Note that because exchanging the subject and object can change the manipulation relationship type ($e.g.$ from $parent$ to $child$), the predictions for $CNN(O_i,O_j;\Phi)$ and $CNN(O_j,O_i;\Phi)$ may differ. The likelihood of manipulation relationship $R$ is defined as:
\begin{equation}
P(R|O_{i},O_{j};\Theta) = \frac {e^{h_{\Theta}^{R}(CNN(O_i,O_j;\Phi))}} {\sum_{k=1}^{3}e^{h_{\Theta}^{k}(CNN(O_i,O_j;\Phi))}}
\end{equation}
where $h_{\Theta}^{k}$ denotes the predictor's output (logit) for relationship type $k$. We choose the multi-class cross-entropy function as the loss for manipulation relationship prediction:
\begin{equation}
L_{rp}(R|O_{i},O_{j};\Theta) =-\log(P(R|O_{i},O_{j};\Theta))
\end{equation}
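Numerically, Eqs. (1)-(2) are a softmax over the three relationship logits followed by a negative log-likelihood. A small sketch with made-up logit values:

```python
import math

# Softmax over relationship logits, shifted by the max for numerical stability.
def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Negative log-likelihood of the ground-truth relationship class (Eq. 2).
def rel_loss(logits, true_rel):
    """true_rel: index of the ground-truth relationship class (0, 1 or 2)."""
    return -math.log(softmax(logits)[true_rel])

logits = [2.0, 0.5, -1.0]            # hypothetical h_Theta outputs for one pair
probs = softmax(logits)
print(probs)                         # probabilities summing to 1
print(rel_loss(logits, 0))           # small loss, since class 0 dominates
```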
For each image, the manipulation relationship prediction loss includes two parts: the online data loss $L_{on}$ and the offline data loss $L_{off}$. The loss for the whole image is:
\begin{equation}
\begin{split}
L_{RP}(D_{RP};\Theta) = &\lambda L_{on} + (1-\lambda) L_{off}\\
=&\lambda \sum_{D_{on}} L_{rp}(R|O_{i},O_{j};\Theta) + \\
&(1-\lambda) \sum_{D_{off}} L_{rp}(R|O_{i},O_{j};\Theta)
\end{split}
\end{equation}
where $\lambda$ balances the importance of the online data $D_{on}$ and the offline data $D_{off}$; in our work, $\lambda$ is set to 0.5.
\subsection{Training Method}
The whole network is trained end-to-end, which means that the object detector and manipulation relationship predictor are trained simultaneously.
Let $\Omega$ be the weights of the object detector and $D_{OD}$ the training data of the object detector, consisting of the shared features of the whole image $CNN(I;\Phi)$ and the object detection ground truth $(\overline {cls},\overline {loc})$. The loss function for the object detector is the same as that described by Liu et al. in \cite{SSD}:
\begin{equation}
L_{OD}(D_{OD};\Omega) = L_{loc}+\alpha L_{conf}
\end{equation}
where $\alpha$ is set to 1 empirically. As in \cite{SSD}, $default$ $bounding$ $boxes$ are a set of predetermined bounding boxes with a few fixed sizes, which serve as references during detection. The location loss ${L_{loc}}$ is the smooth L1 loss between the ground truth bounding box and the matched default bounding box, with all bounding boxes encoded as offsets. The classification confidence loss $L_{conf}$ is also a multi-class cross-entropy loss.
\begin{algorithm}[t]
\caption{Training Algorithm}
\label {alg1}
\hspace*{0.02in} {\bf Input:}
Training set of images with object location $\overline {loc}$, category $\overline {cls}$ and manipulation relationship $(O_i,R,O_j)$, pretrained VGG \cite {c6} or ResNet50 parameters \cite {c7}\\
\hspace*{0.02in} {\bf Output:}
Feature extractor $\Phi$, object detector $\Omega$ and manipulation relationship predictor $\Theta$
\begin{algorithmic}[1]
\State Initialize feature extractor $\Phi$ using pretrained VGG or ResNet50 and object detector $\Omega$, manipulation relationship predictor $\Theta$ randomly.
\State Pretrain feature extractor $\Phi$ and object detector $\Omega$ for 10k iterations on images\cite {SSD}.
\While{$iter<maxiter$}
\State Randomly sample a mini-batch.
\State Extract CNN features $CNN(I;\Phi)$
\State Detect objects and get a set of online predicted bounding boxes $B_{on}$
\State Get offline bounding boxes $B_{off}$
\State Feed $\{CNN(I;\Phi),B_{on}\}$ and $\{CNN(I;\Phi),B_{off}\}$ into OP$^2$L and get manipulation relationship training data $D_{RP}$
\State Update object detector $\Omega$ and manipulation relationship predictor $\Theta$ using $\{D_{OD},L_{OD}\}$ and $\{D_{{RP}},L_{RP}\}$
\State Update feature extractor using $L(I;\Phi)$
\EndWhile
\State \Return $\Phi,\Omega,\Theta$
\end{algorithmic}
\end{algorithm}
Loss function of manipulation relationship prediction $L_{RP}$ is detailed in section IV.B. Combining $L_{RP}$ and $L_{OD}$, the complete loss for shared layers is:
\begin{equation}
\begin{split}
L(I;\Phi) = \mu L_{OD}(D_{OD};\Omega) +
&(1-\mu) L_{RP}(D_{RP};\Theta)
\end{split}
\end{equation}
where $\mu$ balances the importance of $L_{OD}$ and $L_{RP}$; in our work, $\mu$ is set to 0.5. By the chain rule:
\begin{equation}
\begin{split}
\frac {\partial L}{\partial \Phi} =& \mu \frac {\partial L_{OD}} {\partial CNN(I;\Phi)} \frac {\partial CNN(I;\Phi)}{\partial \Phi} +\\
&(1-\mu) \frac {\partial L_{RP}}{\partial CNN(O_i,O_j;\Phi)} \frac {\partial CNN(O_i,O_j;\Phi)}{\partial \Phi}
\end{split}
\end{equation}
Complete training algorithm is shown in Algorithm \ref{alg1}.
\section{Dataset}
\subsection {Data Collection}
Different from the visual relationship dataset\cite{c1}, we focus on manipulation relationships, so the objects in our dataset should be manipulable or graspable. Moreover, a manipulation relationship dataset should contain not only localized objects but also a rich variety of positional relationships.
Our data are collected and labeled using hundreds of objects from 31 categories. There are 5185 images in total, including 17688 object instances and 51530 manipulation relationships. The category and manipulation relationship distribution is shown in Fig. \ref{fig:subfig:a}. The annotation format is similar to PASCAL VOC: each object node includes category information and bounding box location and, unlike PASCAL VOC, the index of the current node and the indexes of its parent and child nodes. Some examples of our dataset are shown in Fig. \ref{fig:subfig:b}.
During training, we randomly split the dataset into training and testing sets in a ratio of nine to one: the training set includes 4656 images, 15911 object instances and 46934 manipulation relationships, and the testing set contains the rest.
\begin{figure}[tb]
\centering
\subfigure[Category and manipulation relationship distribution]{
\label{fig:subfig:a}
\includegraphics[scale=0.3]{datadistribution.pdf}}
\subfigure[Dataset examples]{
\label{fig:subfig:b}
\includegraphics[scale=0.1]{dataset.pdf}}
\caption{Visual Manipulation Relationship Dataset. (a) Category and manipulation relationship distribution of our dataset. (b) Some dataset examples}
\label{fig:subfig}
\end{figure}
\begin{table*}[h]
\caption{Accuracy of Object Detection and Visual Manipulation Relationship Prediction}
\label{resultstable}
\begin{center}
\begin{tabular}{|l|l|l|c|c|cc|c|c|}
\hline
\bf {Author} & \bf {Algorithm} & \bf{Training Data} & \begin{minipage}{1cm} \vspace{0.1cm}\centering \bf {mAP} \vspace{0.1cm}\end{minipage} & \bf{Rel.} & \bf{Obj.Rec.} & \bf{Obj.Prec.} & \bf{Img.} & \begin{minipage}{1.5cm} \centering \bf{Speed (ms)} \end{minipage} \\
\hline
\hline
\bf{Lu et al.}\cite{c1} &\begin{minipage}{3.7cm} \vspace{0.1cm} VGG-SSD, VAM \vspace{0.1cm}\end{minipage} &$D_{on}\cup D_{off}$&93.01 & 88.76 & 75.50 & 71.28 & 46.88 & \multirow{2}{*}{$>$100}\\
&\begin{minipage}{3.7cm} \vspace{0.1cm} ResNet-SSD, VAM\vspace{0.1cm}\end{minipage} &$D_{on}\cup D_{off}$ &91.72 & 88.76 & 74.33 &75.19 & 49.72 & \\
\hline
\hline
\bf{Ours}&\begin{minipage}{3.7cm} \vspace{0.1cm} VGG-VMRN (No Rel. Grad.) \vspace{0.1cm}\end{minipage}&$D_{on}\cup D_{off}$& 93.01 & 88.36 & 77.28 & 73.04 & 50.66 & \\
&\begin{minipage}{3.7cm} \vspace{0.1cm} ResNet-VMRN (No Rel. Grad.)\vspace{0.1cm}\end{minipage} &$D_{on}\cup D_{off}$& 91.72 & 90.73 & 77.68 & 77.55 & 53.12 &\\
&\begin{minipage}{3.7cm} \vspace{0.1cm} VGG-VMRN\vspace{0.1cm}\end{minipage} & $D_{on}$ & 94.18 & 92.80 & \bf{82.64} & 77.76 & 60.49 & \\
& \begin{minipage}{3.7cm} \vspace{0.1cm} VGG-VMRN\vspace{0.1cm}\end{minipage} &$D_{off}$ & \bf{94.36} & 92.75 & 81.55 & 76.09 & 58.60 & \multirow{2}{*}{\bf{28}} \\
&\begin{minipage}{3.7cm} \vspace{0.1cm} VGG-VMRN\vspace{0.1cm}\end{minipage} &$D_{on}\cup D_{off}$ & 94.09 &\bf{93.36} & 82.29 & \bf{78.01} &\bf{63.14} & \\
&\begin{minipage}{3.7cm} \vspace{0.1cm} ResNet-VMRN\vspace{0.1cm}\end{minipage} &$D_{on}$ & 91.81 & 92.01 & 79.03 & 72.55 & 54.44 & \\
&\begin{minipage}{3.7cm} \vspace{0.1cm} ResNet-VMRN\vspace{0.1cm}\end{minipage} &$D_{off}$ & 92.71 & 91.86 & 79.33 & 74.71 & 55.95 & \\
&\begin{minipage}{3.7cm} \vspace{0.1cm} ResNet-VMRN\vspace{0.1cm}\end{minipage} &$D_{on}\cup D_{off}$ & 92.67 & 92.19 & 80.55 & 76.02 & 57.28 & \\
\hline
\end{tabular}
\end{center}
\end{table*}
\subsection {Labeling Criterion}
Because our dataset focuses on manipulation relationships without linguistic or positional information, instead of directly giving positional relationships ($e.g.$ under, on, beside and so on) between objects, we only give the order in which objects should be manipulated: the manipulation relationship tree. This has several advantages over giving positional relationships: 1) the manipulation relationships are simpler, which makes the relationship prediction task easier; 2) the prediction directly gives the manipulation relationships between objects, without the need to reconstruct them from positional relationships.
During labeling, there should be a criterion that can be strictly enforced. Therefore, in our work, we set the following {\bf labeling criterion}: when moving an object would affect the stability of other objects, that object should not be a leaf node of the manipulation relationship tree, which means it should not be moved first. For example, as shown in the upper-left image of Fig. \ref{fig:subfig:b}, there are three objects: on the left an orange can, and on the right a red box on top of a green box. If the green box is moved first, it will affect the stability of the red box, so the green box should not be a leaf node. If the red box or the can is moved first, it will not affect the stability of any other object, so they should be leaf nodes.
\section{Experiments}
\subsection{Training Settings}
Our models are trained on a Titan Xp with 12 GB memory. We train two Visual Manipulation Relationship Network (VMRN) models, based on VGGNet and ResNet, called VGG-VMRN and ResNet-VMRN. Because the random object detection results are unstable at the beginning, both VMRN models are pretrained without $D_{on}$ for the first 10k iterations. Detailed training settings are listed in Table \ref{trainsettings}.
\begin{table}[t]
\caption{Training Settings}
\label{trainsettings}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\bf{Hyperparam.} & \begin{minipage}{2cm}\vspace{0.1cm} \centering \bf{VGG-VMRN} \vspace{0.1cm}\end{minipage} & \begin{minipage}{2cm} \centering \bf{ResNet-VMRN}\end{minipage}\\
\hline
\hline
\multirow{2}{*}{Learning Rate} & 0-80k iters: 1e-3 & 0-80k iters: 1e-3 \\
& 80k-120k iters: 1e-4 & 80k-120k iters: 1e-4 \\
\hline
Learning Rate Decay& 0 & 0\\
\hline
Weight Decay & 3e-3 & 1e-4 \\
\hline
Batch Size & 8 & 8 \\
\hline
Momentum & 0.9 & 0.9 \\
\hline
Nesterov & True & True \\
\hline
Max Epochs & 120 & 120 \\
\hline
Iters per Epoch & 1000 & 1000 \\
\hline
Framework & Torch7 & Torch7 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}[t]
\center{\includegraphics[scale=0.28]{Results.pdf}}
\caption{\label{fig6} Result examples. Upper: examples with right object detection and manipulation relationship prediction. Lower: examples with wrong results (from left to right: redundant object detection, failing object detection, redundant manipulation relationship, failing manipulation relationship)}
\end{figure*}
\subsection{Testing Settings}
{\bf Comparison Model} To the best of our knowledge, there has been no prior research on robotic manipulation relationships. We therefore compare our results with the Visual Appearance Model (VAM) of Lu et al. \cite{c1}, modified to fit our task. VAM takes the union bounding box as input and outputs the relationship, but in our task exchanging the subject and object may change the manipulation relationship. Therefore, instead of using only the union bounding box, we feed the subject, object and union bounding boxes in parallel to obtain the final manipulation relationship.
{\bf Self Comparison} To study the contribution of OP$^2$L and end-to-end training, we also evaluate models trained with no gradients propagated back from the manipulation relationship predictor (\{VGG-SSD, VMRN (No Rel. Grad.)\} and \{ResNet-SSD, VMRN (No Rel. Grad.)\}). To explore the benefits of online and offline data, we also train our models with only online ($D_{on}$) or only offline ($D_{off}$) training data.
{\bf Metrics} Three metrics are used in our experiment:
1) Manipulation Relationship Testing (Rel.): this metric focuses on the accuracy of the manipulation relationship model on ground truth object pairs, where the input features or image patches of the manipulation relationship predictor are obtained from the offline ground truth bounding boxes. 2) Object-based Testing (Obj. Rec. and Obj. Prec.): this metric evaluates accuracy over object pairs. Here the triplet $(O_i,R,O_j)$ is treated as a whole: a prediction is correct only if both objects are detected correctly (the category is right and the IoU between the predicted and ground truth bounding boxes exceeds 0.5) and the predicted manipulation relationship is correct. We compute the recall (Obj. Rec.) and precision (Obj. Prec.) of our models in this setting. 3) Image-based Testing (Img.): this metric evaluates accuracy over whole images. An image is considered correct only when all possible triplets in it are predicted correctly.
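The object-based precision and recall can be sketched as follows. This is an assumed illustration: the triplet representation and the matching predicate are simplified stand-ins for the full box-IoU-plus-category check described above.

```python
# Precision/recall over (O_i, R, O_j) triplets: a predicted triplet is a true
# positive when it corresponds to some ground-truth triplet under `match`.
def triplet_prec_recall(pred_triplets, gt_triplets, match):
    """match(p, g) -> True when predicted triplet p corresponds to ground truth g."""
    tp = sum(any(match(p, g) for g in gt_triplets) for p in pred_triplets)
    prec = tp / float(len(pred_triplets)) if pred_triplets else 0.0
    rec = tp / float(len(gt_triplets)) if gt_triplets else 0.0
    return prec, rec

# Toy triplets as (subject_id, relation, object_id); exact equality as `match`.
pred = [(0, "parent", 1), (1, "child", 0), (0, "none", 2)]
gt = [(0, "parent", 1), (1, "child", 0)]
print(triplet_prec_recall(pred, gt, lambda p, g: p == g))  # prec=2/3, recall=1.0
```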
\subsection{Analysis}
Results are shown in Table \ref{resultstable}. Compared with VAM, we can conclude that:
1) {\bf Better performance:} VAM performs worse than the proposed VMRN models in all three experimental settings. The gains mainly come from end-to-end training, which substantially improves the accuracy of manipulation relationship prediction (from 88.76\% to 93.36\%). This is confirmed in the self comparison below.
2) {\bf Faster speed:} The proposed VMRN models (VGG-VMRN and ResNet-VMRN) are both less time-consuming than VAM. The forward pass of OP$^2$L and the manipulation relationship predictor takes 5.5 ms per image on average. As described in \cite{SSD}, the SSD object detector takes 21.74 ms per image on a Titan X with a mini-batch of 1 image, so manipulation relationship prediction has little effect on the speed of the whole network. By contrast, because of its large architecture and sequential prediction process, VAM spends 122 ms per image on average to predict all the manipulation relationships; even when all possible triplets of an image are batched together, it still spends 86 ms per image.
Self-comparison results indicate that the proposed VMRN models trained end-to-end outperform the models trained without the gradients from the manipulation relationship predictor. This mainly benefits from the manipulation relationship prediction loss $L_{RP}$: the parameters of the network are adjusted to better predict visual manipulation relationships, and the network is trained more holistically. As explored by Pinto et al. \cite{multilearn}, multi-task learning can improve performance because of the diversity of data and its regularizing effect on learning. Finally, we observe that using online and offline data simultaneously may help improve the performance of the network because the two sources of data complement each other.
The difference between the performance of VGG-VMRN and ResNet-VMRN is also interesting. Gradients coming from the manipulation relationship prediction loss $L_{RP}$ improve both networks, but the improvement on ResNet-VMRN is smaller than that on VGG-VMRN, as shown in Table \ref{resultstable}. Note that the VGG-based feature extractor has 7.63 million parameters while the ResNet-based feature extractor has only 1.45 million, so the number of parameters may limit the performance ceiling of ResNet-VMRN. In the future, we will try a deeper ResNet as our base network.
Some qualitative results are shown in Fig. \ref{fig6}. From the four examples in the first line, we can see that our model can simultaneously detect objects and manipulation relationships in one image. From the four examples in the second line, we can conclude that occlusion, similarity between different categories, and visual illusions can have a negative influence on the predicted results.
\section{Conclusions}
In this paper, we focus on the problem of visual manipulation relationship prediction, which helps robots manipulate things in the right order. We propose a new network architecture named Visual Manipulation Relationship Network (VMRN) and collect a dataset called the Visual Manipulation Relationship Dataset to simultaneously detect objects and predict manipulation relationships while meeting the real-time requirements of robot platforms. The proposed Object Pairing Pooling Layer (OP$^2$L) not only accelerates manipulation relationship prediction by replacing sequential prediction with a single forward pass, but also improves the performance of the whole network by back-propagating the gradients from the manipulation relationship predictor.
However, due to the limited number of objects used in training, it is difficult for the object detector to generalize to objects whose appearance differs greatly from our dataset. In future work, we will expand our dataset with more graspable objects and combine grasp detection with VMRN to implement an all-in-one network that simultaneously detects objects and their grasp positions and predicts the correct manipulation relationships.
\addtolength{\textheight}{0cm}
\section*{ACKNOWLEDGMENT}
This work was supported in part by NSFC No. 91748208, the National Key Research and Development Program of China under grant No. 2017YFB1302200 and 2016YFB1000903, NSFC No. 61573268, and Shaanxi Key Laboratory of Intelligent Robots.
\bibliographystyle{unsrt}
\section*{Abstract}
The relationship between brain structure and function has been probed using a variety of approaches, but how the underlying structural connectivity of the human brain drives behavior is far from understood. To investigate the effect of anatomical brain organization on human task performance, we use a data-driven computational modeling approach and explore the functional effects of naturally occurring structural differences in brain networks. We construct personalized brain network models by combining anatomical connectivity estimated from diffusion spectrum imaging of individual subjects with a nonlinear model of brain dynamics. By performing computational experiments in which we measure the excitability of the global brain network and spread of synchronization following a targeted computational stimulation, we quantify how individual variation in the underlying connectivity impacts both local and global brain dynamics. We further relate the computational results to individual variability in the subjects' performance of three language-demanding tasks both before and after transcranial magnetic stimulation to the left-inferior frontal gyrus. Our results show that task performance correlates with either local or global measures of functional activity, depending on the complexity of the task. By emphasizing differences in the underlying structural connectivity, our model serves as a powerful tool to predict individual differences in task performances, to dissociate the effect of targeted stimulation in tasks that differ in cognitive complexity, and to pave the way for the development of personalized therapeutics.
\section*{Author summary}
How the organization of the brain drives human behavior is an important but open question in neuroscience. Recent advances in non-invasive imaging techniques and analytical tools allow one to build personalized brain network models that simulate an individual's brain activity based on the underlying anatomical connectivity. Here, we use these computational models to perform virtual experiments in order to predict individual performance of three different language-demanding tasks. For each individual, we build a subject-specific brain network model and measure the global brain activation and patterns of synchronized activity after a targeted computational stimulation. We find that depending on the complexity and type of the language task performed, either global or local measures of brain dynamics are able to predict individual performance. By emphasizing individual differences in human brain structure, the model serves as a powerful tool to predict cognitive task performance and to promote the development of personalized therapeutics.
\newpage
\section*{Introduction}
Cognitive responses and human behavior have been hypothesized to be the outcome of complex interactions between regional populations of neurons \cite{{Mcintosh1999},{Misic2016}} and show significant variability across individuals. While certain patterns of brain activity are robust \cite{Bressler2010}, many patterns change with learning and aging \cite{{Zatorre2012},{Bassett2011b},{Zacks2006},{Nestor2009},{TELESFORD2016}}, and an underlying inter-subject variability in neural activity has been observed \cite{{Mueller2013},{Schmaelzle2017},{Telesford2017},{GARCIA2017},{Nestor2013}}. Importantly, the underlying anatomical connectivity of the brain provides a crucial backbone that drives neuronal dynamics and thus behavior \cite{{Zatorre2012},{Roberts2013},{Reijmer2013}}. Given recent and ongoing advancements in imaging techniques, such as diffusion spectrum imaging (DSI), which estimates the presence of white matter tracts connecting brain regions, mesoscale maps of anatomical brain connectivity can now be obtained \cite{{Roberts2013},{Vettel2017}}. While differences in brain connectivity have long been known to exist between diseased and healthy populations \cite{{Bassett2009},{vandenHeuvel2013},{Gong2015}}, recent findings indicate measurable differences in patterns of white matter connectivity across healthy individuals \cite{{BASSETT2011},{Yeh2016},{Kahn2017},{Powell2017}}. Although work is beginning to link individual variability in white matter structure, functional activity, and task performance \cite{{Medaglia2017},{KELLER2016},{vandenBos2015},{Mckenna2015},{Brown2015},{Muraskin2016},{Muraskin2017}}, there is currently no standard methodology for evaluating the interplay between the brain's structural topology, dynamics, and function, and many open questions remain about how these features are coupled.
The new field of network neuroscience \cite{Bassett2017} provides a coherent framework in which to model and investigate this coupling. In the context of modeling human brain networks, network nodes can be chosen to represent brain regions and network edges can represent either physical connections (anatomical networks) or statistical relationships between nodal dynamics (functional networks) \cite{{Bassett2009},{FELDT2011225},{Bassett2017}}. Analytical tools from network science have been successfully utilized to quantify both structural and functional brain network organization and to gain insight into topics as diverse as brain development \cite{CAO201476}, disease states \cite{{vandenHeuvel2013},{Gong2015},{MENON2011483}}, learning \cite{Zatorre2012}, and intelligence \cite{Cole2012}.
Combining a network representation of the brain with computational modeling of brain dynamics allows one to further investigate the links between brain structure, dynamics, and performance by providing a controlled environment in which to perform \emph{in silico} experiments and make predictions about real-world brain function. Using experimentally obtained structural brain network data combined with biologically motivated computational models of brain dynamics \cite{Breakspear2017}, one can build personalized brain network models of human brain activity \cite{{Honey2009},{SANZLEON2015},{Bansal:2018ug}}. This modeling approach has been used to gain insight into structure-function relationships in disease populations \cite{{Stefanovski2016},{Adhikari2015}}, perform virtual lesioning or resection experiments \cite{{Alstott2009},{Hutchings2015},{Sinha2017}}, and assess the differential impact of stimulation to different brain regions \cite{{Muldoon2016},{Spiegler2016}}.
Here, we use this data-driven modeling approach to investigate the interplay between structural variation, brain activity, and task performance. Our computational model is built upon subject-specific connectomes that are combined with biologically informed Wilson-Cowan nonlinear oscillators (WCOs) \cite{{Wilson1972},{Wilson1973}} to produce simulated patterns of personalized brain activity. Additionally, this controlled computational environment allows us to perform targeted stimulation experiments -- motivated by actual laboratory experiments -- and quantify the emerging neural activity patterns that represent both global brain network activation and task-specific local subnetwork activation. Using this model, we construct computational measures in order to relate structure and individual performance variability across a cohort of ten healthy individuals who performed three language-demanding cognitive tasks before and after transcranial magnetic stimulation (TMS) \cite{Ferreri2013} targeted at the left inferior frontal gyrus (L-IFG) of the brain. The performed tasks involved verb generation, sentence completion, and number reading, and are known to vary in their cognitive complexity \cite{{KRIEGERREDWOOD201424},{Snyder2008}}.
Based upon the patterns of simulated brain activity driven by the structural networks in our model, we find that task performance can be explained and predicted by either local or global circuitry depending on the complexity of the task. Further, we observe that, post experimentally applied TMS, the correlation between model output and task performance is weakened. Finally, we show that the eigenspectrum of the observed structural brain networks plays a key role in global brain dynamics which can additionally provide predictive insight into performance of some, but not all, tasks. Taken together, our results reveal that using personalized brain network models to simulate brain dynamics provides an important tool for understanding and predicting human performance in cognitively demanding tasks and represents an important step towards the development of personalized therapeutics.
\section*{Results}
In order to assess the link between variability in brain connectivity, activity patterns, and behavior, we build data-driven personalized brain network models of ten individuals who performed three language-demanding cognitive tasks before and after TMS targeted at the L-IFG (see Materials and Methods). Subject-specific anatomical connectivity was derived from DSI imaging data, combined with a parcellation of the brain into 234 regions (network nodes) based on the Lausanne atlas \cite{Hagmann2008} (Fig. \ref{fig:schematic}a). As a result, we obtained a weighted connectivity matrix whose entries represent the strength of the connection between two brain regions (Fig. \ref{fig:schematic}b). The dynamics of each brain region are then modeled using nonlinear Wilson-Cowan oscillators (WCOs), coupled through a subject-specific anatomical brain network (Materials and Methods). A WCO is a biologically motivated model of local brain activity, developed to describe the mean behavior of small neuronal populations \cite{Wilson1972}. The model therefore simulates a specific individual's spatiotemporal macro-level brain activity.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth, keepaspectratio]{Bansal_Fig1}
\caption{{\bf Data-driven brain network models.}
(a) The brain connectivity used as a basis for the computational model is obtained by combining tractography estimates from diffusion spectrum imaging data of a specific individual's brain and a parcellation of the brain into 234 regions. (b) The resulting anatomical connectivity matrix where entries indicate the density of connections between two brain regions. (c) The dynamic state of brain activity can be tuned by the global coupling parameter, $c_5$. As this parameter crosses a threshold, a sudden transition to an excited state is observed as marked by the red arrow. (d) Representative excitatory dynamics of selected brain regions in the excited state, demonstrating a rich spectrum of temporal activity that is driven by the underlying anatomical connectivity.}
\label{fig:schematic}
\end{figure}
The only model feature that varies across subjects is the underlying anatomical connectivity matrix, $A$, providing a causal link between differences in regional brain dynamics and structural organization. The global strength of the coupling between different brain regions via $A$ is controlled by a global coupling parameter ($c_5$; Materials and Methods). The dynamical state of the brain can be tuned by varying this parameter as shown in Fig. \ref{fig:schematic}c which depicts the average excitatory dynamics as a function of global coupling. When the global coupling parameter exceeds a threshold value ($c_5^{T}$), the brain dynamics abruptly transition to an active state (Fig. \ref{fig:schematic}d). Previous work has shown that the computational model is particularly sensitive to the point at which model dynamics undergo this transition: the value of $c_5^{T}$ at which the transition takes place is subject-specific, and the inter-subject variability in $c_5^{T}$ is greater than that seen using anatomical networks derived from different scans of the same individual \cite{Muldoon2016}. The transition value $c_5^{T}$ quantifies the ease (or difficulty) with which a brain network can be excited, and we use this as a parameter to measure differences in global network dynamics between individuals.
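A minimal sketch of this setup is given below. It uses the classical Wilson-Cowan constants rather than the exact parameterization of the published model, and the network size, initial conditions, and explicit-Euler integration are illustrative assumptions. Sweeping the coupling strength in `mean_activity` is one simple way to locate a transition value such as $c_5^{T}$.

```python
import numpy as np

def sigmoid(x, a, theta):
    """Wilson-Cowan response function, shifted so that S(0) = 0."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta))) - 1.0 / (1.0 + np.exp(a * theta))

def simulate_wc_network(A, c5, T=1000, dt=0.05,
                        c1=16.0, c2=12.0, c3=15.0, c4=3.0,
                        a_e=1.3, th_e=4.0, a_i=2.0, th_i=3.7):
    """Euler-integrate N coupled Wilson-Cowan units. The structural
    coupling A enters the excitatory input, scaled by the global
    coupling parameter c5. Returns the excitatory time series."""
    n = A.shape[0]
    E = np.full(n, 0.1)  # small initial excitatory activity
    I = np.full(n, 0.1)  # small initial inhibitory activity
    trace = np.empty((T, n))
    for t in range(T):
        coupling = c5 * (A @ E)
        dE = -E + (1.0 - E) * sigmoid(c1 * E - c2 * I + coupling, a_e, th_e)
        dI = -I + (1.0 - I) * sigmoid(c3 * E - c4 * I, a_i, th_i)
        E = E + dt * dE
        I = I + dt * dI
        trace[t] = E
    return trace

def mean_activity(A, c5):
    """Time-averaged excitatory activity over the final portion of the run;
    low values indicate the quiescent state, high values the excited state."""
    trace = simulate_wc_network(A, c5)
    return float(trace[-200:].mean())
```

With weak coupling the network relaxes to the quiescent fixed point, while sufficiently strong coupling sustains a high-activity state, mirroring the abrupt transition described above.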
\subsection*{Individual variability: model and experiments}
Because we are interested in linking variability in brain structure and behavior, we first measured the extent of individual variability in anatomical connectivity, simulated brain activity, and task performance in our data set. Across our cohort of subjects, we observed measurable variability in anatomical network structure as seen in Fig. \ref{fig:variability}a, which shows the standard deviation in edge weights between two brain regions, normalized by the mean edge weight. This structural variability is also manifested in the simulated brain activity depicted in Fig. \ref{fig:variability}b-c. We observe variability across individuals in the global coupling transition values ($c_5^{T}$) (Fig. \ref{fig:variability}b) as well as in the specific patterns of brain activity at this transition (Fig. \ref{fig:variability}c). Since the nonlinear WCOs are all identical in the model, these observed differences in simulated brain activity are a direct result of variation in the underlying anatomical connectivity.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth, keepaspectratio]{Bansal_Fig2}
\caption{{\bf Assessing variability in structure, dynamics, and performance.}
(a) Variability in the connection strengths between pairs of brain regions across ten subjects, measured as the standard deviation of edge weights between brain regions across individuals, normalized by the mean of edge weights. Although there are few regions of low variability (blue), there are a significant number of regions with moderate to high variability (green to yellow). (b) The global coupling transition value ($c_5^{T}$) varies across our cohort of ten individuals. (c) Spatial patterns of excitatory activity vary across individuals due to regional variations in the structural connectivity between subjects. Each column represents the temporal average of the excitatory activity across brain regions for a given individual in the excited state and is unique in terms of the overall activity pattern. (d) Cognitive task performance for the ten subjects was assessed by experimentally measuring the response times for three language-demanding tasks: verb generation (VG), sentence completion (SC), and number reading (NR), both before (solid lines) and after (dotted lines) a targeted transcranial magnetic stimulation (TMS) to the left-inferior frontal gyrus (L-IFG) of the brain.}
\label{fig:variability}
\end{figure}
Finally, we assessed the extent of variability in the cognitive performances of individuals across three language-demanding tasks: (i) verb-generation (VG); (ii) sentence-completion (SC); and (iii) number-reading (NR). Performance was measured as the median response time across $50$ trials (see Materials and Methods) and shows variability across both subjects and tasks (Fig. \ref{fig:variability}d). Moreover, performances were altered after transcranial magnetic stimulation (TMS) was applied to the left inferior frontal gyrus (L-IFG). The L-IFG, also known as Broca's area, is traditionally believed to play an important role in language comprehension and syntactic processing, and specifically in the selection and retrieval of words \cite{Dronkers2004}. It is therefore expected that external stimulation to this region should affect task performance. However, it is important to note that while subject performance on a task did often change after TMS, we did not consistently observe an improvement (or degradation) in task performance across individuals, nor was the effect consistent between tasks within a given subject. This suggests that although the L-IFG plays an important role in the context of language comprehension, the actual cognitive response reflects contributions from a larger part of the brain network.
Given the observed variability in the structural, dynamical, and behavioral aspects of our data, we next focused on assessing how this variability was related across these three domains. We therefore examined network features measured at both the global network level (using the entire brain network) and within task-specific subnetworks that were selected to represent the specific circuitry involved in task completion and asked how these measures related to task performance.
\subsection*{Global network features predict certain task performances}
We first examined the relationship between global network properties and task performance by estimating the correlation between global brain activation and task performance. For each subject, we measured the threshold value of the global coupling parameter ($c_5^{T}$) at which the individual's brain transitions to the excited state, and we calculated the correlation between this value and their performance on each task (before the application of TMS). As seen in Fig. \ref{fig:pre_tms1} and Table \ref{table:corr_pre_tms}, we observed a significant positive correlation ($r = 0.86$, $p = 0.001$) between model transition values and task performance in the sentence completion (SC) task. Thus for the SC task, individuals with a lower value of $c_5^{T}$ (more easily excitable brain) are likely to perform better (as measured through a short response time). However, we did not observe a significant correlation in the verb generation (VG) or number reading (NR) tasks, indicating that the performance of these tasks cannot be predicted by a global network property. To ensure that these results were dependent upon the organization of the subject-specific anatomical connectivity used as a basis for the computational model, for each individual, we created randomized brain networks by preserving the distribution of edge weights but randomly reassigning connection strengths between brain regions (Materials and Methods). We recalculated the $c_5^{T}$ values for simulations using these randomized connectivity matrices, but did not observe any significant correlations between transition values and task performance in this case.
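The weight-preserving randomization and the correlation measure can be sketched as follows. This is an illustrative implementation that assumes symmetric connectivity matrices, not the exact randomization code used in the study.

```python
import numpy as np

def shuffle_weights(A, rng):
    """Null model: keep the distribution of edge weights but randomly
    reassign them among the off-diagonal entries of a symmetric network."""
    n = A.shape[0]
    iu = np.triu_indices(n, k=1)
    w = A[iu].copy()
    rng.shuffle(w)
    R = np.zeros_like(A)
    R[iu] = w
    return R + R.T  # restore symmetry

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D samples."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))
```

Recomputing $c_5^{T}$ on such shuffled matrices and correlating with response times reproduces the control analysis described above.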
\begin{figure}
\centering
\includegraphics[width=0.85\linewidth] {Bansal_Fig3}
\caption{{\bf Correlation between model transition threshold and task performance.}
Task performance versus model transition values, $c_5^{T}$, for three tasks: (a) verb generation (VG), (b) sentence completion (SC), and (c) number reading (NR). The red lines represent a linear fit to the data for visual guidance. The corresponding Pearson correlation coefficients are given in Table \ref{table:corr_pre_tms}. There is a significant correlation between the global transition value and task performance for the SC task ($r=0.86$, $p=0.001$).}
\label{fig:pre_tms1}
\end{figure}
\begin{table}[!ht]
\centering
\caption{
{\bf Correlations between model features and task performance.}}
\begin{tabular}{|l|cc|cc|cc|}
\hline
&\multicolumn{2}{c|}{VG} & \multicolumn{2}{c|}{SC} & \multicolumn{2}{c|}{NR}\\
Model feature& r & p & r & p& r & p \\ \hline
Transition value& 0.30& 0.39& \textbf{0.86} & \textbf{0.001} & 0.27 & 0.45\\ \hline
Functional effect (global brain) & -0.04& 0.91&0.39 & 0.26 & 0.23 & 0.52\\ \hline
Functional effect (task circuit) & 0.42& 0.22 & \textbf{0.73} & \textbf{0.017} & \textbf{0.74} & \textbf{0.016}\\ \hline
Functional effect (outside the task circuit) &-0.06& 0.87 & 0.38 &0.29 & 0.18 & 0.62\\ \hline
\end{tabular}
\begin{flushleft} The variables $r$ and $p$ denote the Pearson correlation coefficient and associated \textit{p}-value, respectively. VG = verb generation, SC = sentence completion, and NR = number reading.
\end{flushleft}
\label{table:corr_pre_tms}
\end{table}
To further explore the link between global brain dynamics and behavior, we additionally assessed the relationship between specific patterns of brain activity and task performance. Since the L-IFG is involved in controlled language processing \cite{{Whitney2011},{Costafreda2006},{Hannah2007}}, one can argue that the pattern of brain synchronization as a result of targeted stimulation to the region might also be predictive of task performance. We therefore computationally stimulated the brain regions comprising the L-IFG (Fig. \ref{fig:functional}a), and quantified how the stimulation spread throughout the global brain network (see Materials and Methods for details). As shown in Fig. \ref{fig:functional}b, computational stimulation pushes the dynamics of the region into oscillatory activity which then drives the functional dynamics of other brain regions through the underlying structural connections. We measure the resulting pattern of brain activity by calculating the pairwise \textit{functional connectivity} using the maximum normalized correlation between brain regions \cite{{Muldoon2016},{Kramer2009},{Feldt2007}} (Materials and Methods). Due to the variability in the subject-specific structural connectivity matrices, the resulting patterns of functional connectivity differ between individuals. Fig. \ref{fig:functional}c shows the functional connectivity for three subjects as a result of L-IFG stimulation. The change in the patterns of how the stimulation spreads through the network is evident. We measure the extent of the global spread of synchronization by calculating the functional effect \cite{Muldoon2016} which measures the average value of synchronization across the entire brain (Materials and Methods). Interestingly, unlike our observation with the global coupling parameter, the global functional effect does not show a significant correlation with task performance for any of the cognitive tasks (Table \ref{table:corr_pre_tms}).
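These two measures can be sketched as follows; the lag range and the explicit pairwise loop are illustrative choices and not necessarily the exact implementation used in the study.

```python
import numpy as np

def max_norm_xcorr(x, y, max_lag=20):
    """Maximum of the normalized cross-correlation between two signals
    over integer lags in [-max_lag, max_lag]."""
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    n = len(x)
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:n - lag]
        else:
            a, b = x[:n + lag], y[-lag:]
        best = max(best, float(a @ b) / len(a))
    return best

def functional_effect(traces, region_subset=None, max_lag=20):
    """Mean pairwise functional connectivity. Pass region_subset (a list of
    node indices) to restrict the average to a task circuit; otherwise the
    whole network (global functional effect) is used."""
    idx = list(range(traces.shape[0])) if region_subset is None else list(region_subset)
    vals = []
    for a in range(len(idx)):
        for b in range(a + 1, len(idx)):
            vals.append(max_norm_xcorr(traces[idx[a]], traces[idx[b]], max_lag))
    return float(np.mean(vals))
```

Applied to the simulated post-stimulation time series, `functional_effect` with no subset gives the global measure used here, and with a subset gives the circuit-specific measure used in the next section.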
\begin{figure}
\centering
\includegraphics[width=.7\linewidth, keepaspectratio]{Bansal_Fig4}
\caption{{\bf Effects of targeted stimulation.}
(a) The left-inferior frontal gyrus (L-IFG) is composed of four brain regions in the parcellation scheme used in this study. These regions are \textit{pars orbitalis} (POB), \textit{pars triangularis} (PTA) and \textit{pars opercularis} (POC, two regions). (b) Effect of computational stimulation to a single brain region. The stimulation pushes the dynamics of the region into a limit cycle (oscillations). (c) Functional connectivity, calculated as the pairwise maximum normalized correlation between brain regions for three subjects as a result of stimulating the L-IFG. The observed connectivity patterns were highly variable across subjects.}
\label{fig:functional}
\end{figure}
\subsection*{Localized task-specific circuits predict other task performance}
While we did observe a significant correlation between the global threshold value and task performance for the SC task, we saw no correlations between global brain dynamics and task performance in the remaining two tasks, and the global functional effect was not correlated with performance in any of the tasks. However, the three language tasks performed in this study differ in semantic demands, and the absence of a significant correlation for the VG and NR tasks could be due to either a drastically different cognitive mechanism for performing these tasks, or the dependence of these tasks on a more localized brain circuit. To investigate the latter possibility, we examined the role of task-specific subnetworks in task performance.
To construct a task-dependent, spatially localized measure, we follow the work of Roux et al. \cite{Roux2008}, who used experimental electrostimulation to spatially map the brain regions differentially involved in reading alphabets and numbers. We mapped these regions to the Lausanne atlas and constructed two task circuits: one involved in VG and SC (alphabet-related, Fig. \ref{fig:circuits}a), and one involved in NR (number-related, Fig. \ref{fig:circuits}b). These circuits are also consistent with other studies mapping brain regions involved in language processing \cite{{Dronkers2004},{Turken2011},{NathanielJames2002},{Holland2001}}. Note that both of these subnetworks are contained entirely in the left-hemisphere language network.
\begin{figure}
\centering
\includegraphics[width=0.85\linewidth]{Bansal_Fig5}
\caption{{\bf Task-specific circuits.}
(a) Alphabet reading task circuit used in the analysis of the VG and SC tasks. (b) Number reading task circuit used in the analysis of the NR task. Brain regions in the circuits are 1: \textit{pars orbitalis}, 2: \textit{pars triangularis}, 3: \textit{pars opercularis-1}, 4: \textit{pars opercularis-2}, 5: \textit{superiorfrontal-9}, 6: \textit{caudal middle frontal-1}, 7: \textit{postcentral-7}, 8-12: \textit{supramarginal}, 13: \textit{inferiorparietal-3}, 14-15: \textit{fusiform-2 and 3}, 16-17: \textit{inferior temporal-2 and 3}, 18: \textit{temporal pole}, 19-22: \textit{middle temporal}, 23-27: \textit{superior temporal}. (c)-(e) Task performance versus the functional effect within task-specific circuits for three tasks: (c) verb generation (VG), (d) sentence completion (SC), and (e) number reading (NR). The red lines represent a linear fit to the data for visual guidance. The corresponding Pearson correlation coefficients are given in Table \ref{table:corr_pre_tms}. There is a significant correlation between the functional effect and task performance for the NR task ($r=0.74$, $p=0.016$).
\label{fig:circuits}
\end{figure}
We then calculated the functional effect within these task circuits (averaging the functional connectivity values only within the subnetwork as opposed to the entire brain network as done previously) and correlated this local measure with task performances (Fig. \ref{fig:circuits}c-e and Table \ref{table:corr_pre_tms}). We observed a significant positive correlation between the task-specific functional effect and task performance for the NR task ($r=0.74$, $p=0.016$), indicating that performance on the NR task depends on the localized spread of activation throughout the task sub-network and is less dependent on the global brain network structure. Individuals with a lower functional effect (less synchronization within the task circuit) also have a lower response time (better performance). When synchronization within the task circuit is increased (a high functional effect), task performance degrades, suggesting that high levels of synchronized activity within the task circuit could potentially impede the ability of the circuit to perform localized computations necessary for task completion.
If we compare the functional effect measured only within brain regions outside of the task circuit, the significant correlation with the NR task is lost (Table \ref{table:corr_pre_tms}), indicating the specificity of the task circuit in the model. Although we also observed a significant correlation between the task-specific functional effect and task performance for the SC task, this result is driven by a single subject and does not hold if this subject is removed from the data set. Performing the analysis on a larger data set would therefore be necessary to confirm this finding for the SC task. No significant correlations were observed for the VG task.
To validate the specificity of our selected task circuits, we constructed 10,000 random sub-networks by randomly selecting the same number of brain regions as in each task circuit and then calculated the functional effect within these random circuits after stimulating the L-IFG (Materials and Methods). The randomized circuits had a variable degree of overlap with the actual task circuit (0 to 35\%). We observed that only 1.8\% of the randomly selected circuits gave significant correlations ($r>0.5$ and $p<0.05$) between the circuit-specific functional effect and task performance, which we estimate to be the false positive rate in our computational predictions. This low error rate signifies that the observed significant correlation in the NR task is due to the selection of brain regions in the task-specific circuit.
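The random-circuit control can be sketched as follows. For brevity this illustrative version checks only the correlation-coefficient threshold (the study also required $p<0.05$), and the input format, one per-subject functional connectivity matrix, is our own assumption.

```python
import numpy as np

def random_circuit_test(fc_matrices, performance, circuit_size,
                        n_draws=1000, r_thresh=0.5, seed=0):
    """Estimate the false-positive rate of the circuit-specific analysis:
    draw random sub-networks of the same size as the task circuit, compute
    each subject's within-circuit functional effect (mean pairwise
    functional connectivity), and count how often its Pearson correlation
    with task performance exceeds the threshold."""
    rng = np.random.default_rng(seed)
    n_regions = fc_matrices[0].shape[0]
    iu = np.triu_indices(circuit_size, k=1)
    hits = 0
    for _ in range(n_draws):
        circuit = rng.choice(n_regions, size=circuit_size, replace=False)
        fe = [fc[np.ix_(circuit, circuit)][iu].mean() for fc in fc_matrices]
        if np.corrcoef(fe, performance)[0, 1] > r_thresh:
            hits += 1
    return hits / n_draws
```

A low returned rate, as in the 1.8\% observed above, indicates that a significant correlation within the actual task circuit is unlikely to arise from an arbitrary selection of regions.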
We also verified that the observed effects were related to our choice of stimulating the L-IFG as opposed to some other brain structure. We chose different sets of brain regions (equal in size to the number of brain regions that compose the L-IFG) that were randomly distributed within the task circuit and applied targeted computational stimulation to these randomly selected regions. When randomly selected brain regions were stimulated, we did not observe a significant correlation between the task performance and functional effect, confirming the importance of specifically targeting the L-IFG in our \textit{in silico} experiments. Additionally, consistent with previous findings \cite{Muldoon2016}, we observed that the patterning of activation as a result of stimulation of randomly selected brain regions varied with the selection of brain regions and across subjects. These findings support the possibility that in the future our modeling approach can be used to help design and optimize therapeutic strategies that use external activation such as TMS to treat neurological disorders \cite{Eldaief2013}.
\subsection*{Predicting task performance post-TMS}
We also asked if our computational model could predict individual performance after the application of experimentally applied TMS targeted at the L-IFG. The underlying mechanisms of how TMS affects the brain are not well understood, but it is believed that TMS locally influences neuronal firing which can then propagate within the brain through inter-regional neuroanatomical pathways \cite{{Gu2015},{Muldoon2016}}. We therefore examined the correlation between model features and behavioral performance during the post-TMS task. As seen in Fig. \ref{fig:tms} and Table \ref{table:corr_post_tms}, while we still observe statistically significant positive correlations between the global coupling parameter and performance in the SC task ($r=0.68$, $p=0.03$) and between the functional effect within the task circuit and performance in the NR task ($r=0.63$, $p=0.05$), in both cases the strength of the correlation is decreased when compared to correlations with task performance before TMS. Speculatively, the fact that the correlation strength is weakened post-TMS potentially reflects contributions of noise to the system mediated through inhibitory stimulation to the L-IFG.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\linewidth]{Bansal_Fig6}
\caption{{\bf Correlations between model features and task performance post-TMS.}
(a-c) Task performance post-TMS versus the model transition value for three tasks: (a) verb generation (VG), (b) sentence completion (SC), and (c) number reading (NR). (d-f) Task performance post-TMS versus the functional effect within task-specific circuits for the same three tasks: (d) verb generation (VG), (e) sentence completion (SC), and (f) number reading (NR). The red lines represent a linear fit to the data for visual guidance. The corresponding Pearson correlation coefficients are given in Table \ref{table:corr_post_tms}.
There is a significant correlation between the global transition value and task performance for the SC task ($r=0.68$, $p=0.03$) and between the functional effect and task performance of the NR task ($r=0.63$, $p=0.05$).}
\label{fig:tms}
\end{figure}
\begin{table}[!ht]
\centering
\caption{
{\bf Correlations between post-TMS task performances and model features.}}
\begin{tabular}{|l|cc|cc|cc|}
\hline
&\multicolumn{2}{c|}{VG} & \multicolumn{2}{c|}{SC} & \multicolumn{2}{c|}{NR}\\
Model feature& r & p & r & p& r & p \\ \hline
Transition value& 0.35 & 0.33& \textbf{0.68} & \textbf{0.03} & 0.45& 0.20\\ \hline
Functional effect (global brain) & 0.19&0.59&0.39 &0.27 &0.08 & 0.82\\ \hline
Functional effect (task circuit) &0.23 & 0.52 & 0.52 & 0.12 & \textbf{0.63}& \textbf{0.05}\\ \hline
Functional effect (outside the task circuit) &0.18& 0.62 & 0.37 &0.29 & 0.04& 0.91\\ \hline
\end{tabular}
\begin{flushleft} The variables $r$ and $p$ denote the Pearson correlation coefficient and associated \textit{p}-value, respectively. VG = verb generation, SC = sentence completion, and NR = number reading.
\end{flushleft}
\label{table:corr_post_tms}
\end{table}
\subsection*{Correlations with graph theoretical measures of network connectivity}
Finally, to be able to draw a direct connection between anatomical brain connectivity and behavior, we asked if the predictions made by our computational model of brain dynamics could be revealed using only graph theoretical measures of network structure applied directly to the anatomical connectivity matrices alone (in the absence of model dynamics). We therefore calculated three measures of network structure that have been shown to be related to the spread of synchronization in networks: the average degree, inverse spectral radius, and synchronizability (see Materials and Methods). The average degree measures the average strength of connections per brain region, while the spectral radius and synchronizability are related to the eigenspectra of the adjacency matrix and graph Laplacian, respectively, and relate to the ease and extent to which the network is expected to synchronize under the assumption that regions behave as oscillators.
In Table \ref{table:network_features}, we report the Pearson correlation coefficients for these measures of network structure with task performance both before and after TMS was applied. Both the average degree and the inverse spectral radius show significant correlations with task performance for the SC task, and as observed earlier, show a decrease in their correlation magnitude after the application of TMS. Both of these measures are known to relate to the ease of global synchronization in a network \cite{{Restrepo2005},{Meghanathan2014}}, and therefore this finding is somewhat expected given our prior observations about $c_5^{T}$. Interestingly, we find that the synchronizability of the network does not correlate with performance of the SC task. Instead, this measure, which has also been shown to be more sensitive to changes in local network structure \cite{{Huang2008},{Chen2008}}, shows a correlation with performance of the NR task that increases after the application of TMS. However, this correlation is weaker than that observed with the local measurements of the functional effect within the task circuit reported earlier in Table \ref{table:corr_pre_tms}.
\begin{table}[!ht]
\centering
\caption{
{\bf Correlations between task performances and structural network features.}}
\begin{tabular}{|l|cc|cc|cc|cc|cc|cc|}
\hline
& \multicolumn{4}{c|}{VG} & \multicolumn{4}{c|}{SC} & \multicolumn{4}{c|}{NR} \\
Network feature& \multicolumn{2}{c|}{Before-TMS} & \multicolumn{2}{c|}{Post-TMS} & \multicolumn{2}{c|}{Before-TMS} & \multicolumn{2}{c|}{Post-TMS} & \multicolumn{2}{c|}{Before-TMS} & \multicolumn{2}{c|}{Post-TMS} \\
& r & p & r & p & r & p & r & p & r & p & r & p \\
\hline
Average degree & -0.25&0.48&-0.14&0.70& \textbf{-0.84}&0.002&\textbf{-0.65}& 0.04& -0.10&0.78& -0.26&0.47\\
Inverse spectral radius & 0.29&0.42&0.36&0.31& \textbf{0.87}&0.001&\textbf{0.70}&0.02& 0.31& 0.38& 0.48& 0.16 \\
Synchronizability &-0.02&0.96&0.51&0.13& 0.58& 0.08&\textbf{0.64}&0.05& \textbf{0.68}&0.03&\textbf{0.73}&0.02\\
\hline
\end{tabular}
\begin{flushleft} The variables $r$ and $p$ denote the Pearson correlation coefficient and associated \textit{p}-value, respectively. VG = verb generation, SC = sentence completion, and NR = number reading.
\end{flushleft}
\label{table:network_features}
\end{table}
\section*{Discussion}
We presented a computational model of individualized brain dynamics that allows us to make predictions about cognitive task performance based on the variability of anatomical brain connectivity between people. By analyzing global model excitability and localized patterns in the spread of a targeted stimulation in our data-driven model, we explain individual variability in the performance of cognitively demanding tasks and establish the significant role of brain network organization in driving individual behavior.
We examined three cognitive tasks and found that different measures of model dynamics gave rise to correlations with task performance. Our results indicate that as the complexity of the cognitive task increases, a larger portion of the brain network, including interhemispheric connections, contributes to determining the overall response. The SC task, which has additional language processing demands compared to the other two tasks, requires understanding different words, constructing the overall meaning of the sentence, and determining the appropriate word to fit in the sentence. For this complex task, a global parameter of the model, $c_5^{T}$, which could be related to the overall excitability of the network, predicted individual task performance. We also observed significant correlations between global network properties of the anatomical connectivity matrices (specifically the average degree and inverse spectral radius) and performance of the SC task. The fact that these global measures are correlated with performance in the SC task but not the other two tasks possibly reflects the global nature and complexity of the network of brain regions required to complete a sentence \cite{Just1996}.
On the other hand, performances on the NR task, which has been established to be more localized in terms of the brain regions involved \cite{Roux2008}, can be predicted by tracking the spread of a targeted stimulation within a task-specific circuit in the brain. Further, we observed that a higher functional effect of the stimulation correlates with a higher response time (poorer performance) on the NR task. In our model, a high functional effect indicates a high degree of synchronization within the task circuit. This heightened level of synchronization could effectively constrain the degrees of freedom for brain circuit activity and computation, resulting in poorer task performance. Better task performance might be achieved through an integration and segregation mechanism which allows brain regions to mutually communicate but does not require synchrony over a longer time scale. Such a modular functional organization has been associated with efficient learning of motor skills \cite{Bassett2011b}.
Interestingly, no tested measures of model dynamics or structural network features were able to explain the performance of the VG task. It is likely that VG requires a different coupling mechanism of global and local dynamics which is not captured in the model features constructed here. For example, while verb generation and sentence completion are both ``open ended'' language tasks, SC requires our ability to accumulate the probability of a response over an entire sentence, whereas VG is cued by a single word. Thus, even if both involve the L-IFG, SC might recruit quite a few more distributed resources, while VG might require more focal resources. It is possible that our selection of regions comprising the language task circuit was not sensitive enough to these focal areas. We propose that this problem could potentially be resolved by a more extensive research effort that combines experimental and theoretical approaches from cognitive and network neuroscience in order to better understand the specific circuitry involved in task completion.
A strength of our modeling approach is that the construction of localized task circuits is not limited to those involved in the tasks studied here, and could easily be extended to represent different brain circuitry. Ultimately, our model magnifies the effects of differences in the observed anatomical brain structure, and it is not always feasible to measure differences in network structure in terms of general network statistics. While some network statistics also provided predictive power of task performance, the strength of the correlation was only comparable for measures of global network structure, and our computational model can generate measures of structure and dynamics specific to the task that is being performed.
It should also be noted that the functional measures examined here (for both global and task circuit specific calculations) involved measuring the spread of synchronization after computational stimulation of the L-IFG. The choice to stimulate the L-IFG represents essential \emph{a priori} knowledge from cognitive neuroscience studies, and stimulating randomly chosen sets of brain regions within the task circuits did not produce significant correlations with task performances. The ability to integrate knowledge from cognitive neuroscience into computational experiments represents a strength of our data-driven modeling approach, as it makes the model flexible and opens the door to studies across a range of cognitive paradigms.
Although the current study was constrained by its small subject population size, it is encouraging that despite this limitation, we were still able to make certain predictions about task performance. These findings should therefore promote the use of similar data-driven modeling approaches in larger data sets with more subjects and a higher diversity of task conditions. The use of personalized brain network models will serve as an increasingly valuable tool to establish explicit links between brain connectivity, dynamics, and behavior, and to develop personalized therapeutic strategies.
\section*{Materials and Methods}
\subsection*{Subjects and cognitive tasks}
Ten healthy individuals (mean age = 25.4, St.D. = 4.5, 6 female) from a larger neuroimaging study \cite{Betzel_2016} returned to participate in the present study. All procedures were approved in a convened review by the University of Pennsylvania's Institutional Review Board and were carried out in accordance with the guidelines of the Institutional Review Board/Human Subjects Committee, University of Pennsylvania. All participants volunteered with informed consent in writing prior to data collection.
Participants performed two open-ended language tasks and one closed-ended number naming task. The language tasks included a verb generation task \cite{KRIEGERREDWOOD201424} and a sentence completion task \cite{Snyder2008}. For the verb generation task, subjects were instructed to generate the first verb that came to mind when presented with a noun stimulus (e.g., `cat'). The verb could be either something that the noun does or something that can be done with the noun. Response times (RTs) were collected from the onset of the noun cue to the onset of the verb response. For the sentence completion task, subjects were presented with a sentence, for example ``They left the dirty dishes in the -----?'', and were instructed to generate a single word that appropriately completes the sentence, such as 'sink'. Words in the sentences were presented serially in 1 s segments consisting of one or two words. RTs were computed as the latency between the onset of the last segment, which always contained two words (i.e., a word and an underline), and the onset of the participant's response. For all items in the sentence completion task, items in the high vs. low selection demand conditions were matched on retrieval demands (association strength) \cite{Snyder2008}. For both language tasks, each trial began with the presentation of a fixation point (+) for 500 ms, followed by the presentation of the target stimulus, which remained on the screen for 10 s until the subject made a response. Subjects were given an example and five practice trials in the first administration of each language task (i.e., before TMS), and were reminded of the instructions before performing the task a second time (i.e., after TMS). In each of the before and after TMS conditions, subjects completed 50 trials for a total of 100 trials.
In the number reading task, participants produced the English names for strings of Arabic numerals presented on the screen. On each trial, a randomized number (from tens of thousands to millions; e.g., 56395, 614592, 7246856) was presented in black text on a white background. The numbers were uniformly distributed over three lengths (17 per length for each task administration). The position of items on the screen was randomized between the center, left, and right of the screen to reduce the availability of visual cues to number length and syntax \cite{KRIEGERREDWOOD201424}. RTs were collected from the onset of the stimulus presentations to the onset of the subject's response. The number appeared in gray following the detection of a response (i.e., voice key trigger), and remained on the screen thereafter to reduce the working memory demands required for remembering the digit string. At the start of the experiment, subjects performed 50 trials of the number naming task to account for initial learning effects \cite{KRIEGERREDWOOD201424}. Prior to performing the task for the first time, subjects were given an example and five practice trials, and were later reminded of the instructions before performing the task a second (i.e., before TMS) and a third (i.e., after TMS) time. In each of the before and after TMS conditions, subjects completed 51 trials for a total of 102 experimental trials. The items for the verb generation task were identical to those used in \cite{Snyder2011} and the items for the sentence completion task were those from \cite{Snyder2014}. The difficulty of items was sampled to cover a distribution of values computed via latent semantic analysis (LSA) applied to corpus data.
Verbal responses for all tasks were collected from a computer headset microphone. The microphone was calibrated to reduce sensitivity to environment background noise prior to the collection of data for each session such that the recording software was not triggered without clear verbalizations. List order (before or after TMS) was counterbalanced across participants. Item presentation order within each task was fully randomized across participants. Task performance was assessed based on the subject's median response time across all the trials.
\subsection*{Transcranial Magnetic Stimulation}
The Brainsight system (Rogue Research, Montreal) was used to co-register MRI data with the location of the subject and the TMS coil. The stimulation site was defined as the posterior extent of the pars triangularis in each individual subject's registered T1 image. A Magstim Super Rapid2 Plus1 stimulator (Magstim; Whitland, UK) was used to deliver continuous theta burst stimulation (cTBS) via a 70 mm diameter figure-eight coil. cTBS was delivered at 80\% of each participant's active motor threshold \cite{Huang2005}. Each participant's threshold was determined prior to the start of the experimental session using a standard up-down staircase procedure with stimulation to the motor cortex (M1).
\subsection*{Human DSI data acquisition and preprocessing}
Diffusion spectrum images (DSI) were acquired for a total of 10 subjects along with a T1-weighted anatomical scan at each scanning session, in line with previous work \cite{Gu2015}. DSI scans sampled 257 directions using a Q5 half-shell acquisition scheme with a maximum b-value of 5,000 and an isotropic voxel size of 2.4 mm. We utilized an axial acquisition with the following parameters: repetition time (TR) = 5 s, echo time (TE) = 138 ms, 52 slices, field of view (FoV) (231, 231, 125 mm). DSI data were reconstructed in DSI Studio (www.dsi-studio.labsolver.org) using q-space diffeomorphic reconstruction (QSDR) \cite{YEH20111054}. QSDR first reconstructs diffusion-weighted images in native space and computes the quantitative anisotropy (QA) in each voxel. These QA values are used to warp the brain to a template QA volume in Montreal Neurological Institute (MNI) space using the statistical parametric mapping (SPM) nonlinear registration algorithm. Once in MNI space, spin density functions were again reconstructed with a mean diffusion distance of 1.25 mm using three fiber orientations per voxel. Fiber tracking was performed in DSI Studio with an angular cutoff of $35^{\circ}$, step size of 1.0 mm, minimum length of 10 mm, spin density function smoothing of 0.0, maximum length of 400 mm and a QA threshold determined by the DWI signal in the cerebrospinal fluid. Deterministic fiber tracking using a modified FACT algorithm was performed until 1,000,000 streamlines were reconstructed for each individual.
Anatomical (T1) scans were segmented using FreeSurfer \cite{FISCHL2012774} and parcellated using the connectome mapping toolkit \cite{CAMMOUN2012386}. A parcellation scheme including n=234 regions was registered to the B0 volume from each subject's DSI data. The B0 to MNI voxel mapping produced via QSDR was used to map region labels from native space to MNI coordinates. To extend region labels through the grey-white matter interface, the atlas was dilated by 4 mm \cite{Cieslak2014}. Dilation was accomplished by filling non-labelled voxels with the statistical mode of their neighbors' labels. In the event of a tie, one of the modes was arbitrarily selected. Each streamline was labeled according to its terminal region pair.
\subsection*{Construction of anatomical brain networks}
To construct the subject-specific anatomical connectivity networks used as a basis for our computational model, we segmented the brain into 234 regions (network nodes) based on the Lausanne atlas \cite{{CAMMOUN2012386},{Hagmann2008}}. As in prior studies, we define pairwise connection weights between nodes based on the number of streamlines connecting brain regions and normalized by the sum of the volumes of the nodes \cite{{Muldoon2016}, {Hagmann2008}}. This procedure results in a sparse, weighted, undirected structural brain network for each subject ($N=10$), where network connections represent the density of white matter tracts between brain regions (Fig. \ref{fig:schematic}b).
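The weight construction described above can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the authors' code: the function name and the exact handling of the diagonal are our assumptions; the normalization (streamline count divided by the sum of the two regional volumes) follows the text.

```python
import numpy as np

def build_adjacency(streamline_counts, region_volumes):
    """Weighted, undirected structural network: streamline counts
    normalized by the summed volumes of the two endpoint regions."""
    vols = np.asarray(region_volumes, dtype=float)
    norm = vols[:, None] + vols[None, :]
    A = np.asarray(streamline_counts, dtype=float) / norm
    np.fill_diagonal(A, 0.0)        # no self-connections
    return (A + A.T) / 2.0          # enforce symmetry (undirected graph)
```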
\subsection*{Computational model of brain dynamics}
In our data-driven network model, regional brain dynamics are given by Wilson-Cowan oscillators \cite{{Wilson1972},{Muldoon2016}}. In this biologically motivated model of neuronal populations, the fraction of excitatory and inhibitory neurons active at time $t$ in the $i^{th}$ brain region are denoted by $E_i(t)$ and $I_i(t)$ respectively, and their temporal dynamics are given by:
\begin{equation}
\tau\frac{dE_i}{dt}=-E_i(t)+(S_{E_m} - E_i(t))S_E\Big(c_1E_i(t)-c_2I_i(t)+ c_5\sum\limits_{j}A_{ij}E_j(t-\tau_d^{ij})+P_i(t)\Big)+\sigma w_i(t),
\end{equation}
\begin{equation}
\tau\frac{dI_i}{dt}=-I_i(t)+(S_{I_m} - I_i(t))S_I\Big(c_3E_i(t)-c_4I_i(t)+ c_6\sum\limits_{j}A_{ij}I_j(t-\tau_d^{ij})\Big)+\sigma v_i(t),
\end{equation}
where
\begin{equation}
S_{E,I}(x) = \frac{1}{1+e^{-a_{E,I}(x-\theta_{E,I})}} - \frac{1}{1+e^{a_{E,I}\theta_{E,I}}}.
\end{equation}
\noindent We note that $A_{ij}$ is an element of the subject-specific coupling matrix, $A$, whose value is the connection strength between brain regions $i$ and $j$ as determined from DSI data (see above). The global strength of coupling between brain regions is tuned by excitatory and inhibitory coupling parameters $c_5$ and $c_6$ respectively. We fix $c_6=c_5/4$, representing the approximate ratio of excitatory to inhibitory coupling. $P_i(t)$ represents the external inputs to excitatory state activity and is used to perform computational stimulation experiments (see below). The parameter $\tau_d^{ij}$ represents the communication delay between regions $i$ and $j$. If the spatial distance between regions $i$ and $j$ is $d_{ij}$, $\tau_d^{ij}=d_{ij}/t_d$, where $t_d = 10m/s$ is the signal transmission velocity. Additive noise is input to the system through the parameters $w_i(t)$ and $v_i(t)$ which are derived from a normal distribution and $\sigma = 10^{-5}$. Other constants in the model are biologically derived: $c_1 = 16$, $c_2 = 12$, $c_3 = 15$, $c_4 = 3$, $a_E = 1.3$, $a_I = 2$, $\theta_E = 4$, $\theta_I = 3.7$, $\tau = 8$ as described in references \cite{{Wilson1972},{Muldoon2016}}.
To numerically simulate the dynamics of the system, we use a second-order Runge--Kutta method with step size $0.1$ and initial conditions $E_i(0)=I_i(0)=0.1$. All analysis is performed after allowing the system to stabilize for $1$s.
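The model equations above can be integrated as sketched below. This is a simplified, hypothetical NumPy implementation, not the authors' code: we omit the transmission delays $\tau_d^{ij}$, set the saturation constants $S_{E_m}=S_{I_m}=1$, and add the noise term per step outside the Heun (second-order Runge--Kutta) update; all function names are ours. The parameter values are those given in the text.

```python
import numpy as np

# Biologically derived constants from the text
c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0
aE, aI = 1.3, 2.0
thE, thI = 4.0, 3.7
tau, sigma = 8.0, 1e-5

def S(x, a, th):
    """Sigmoid response function S_{E,I}(x)."""
    return 1.0/(1.0 + np.exp(-a*(x - th))) - 1.0/(1.0 + np.exp(a*th))

def derivs(E, I, A, c5, P):
    """Right-hand sides of the coupled Wilson-Cowan equations (no delays)."""
    c6 = c5/4.0  # fixed ratio of excitatory to inhibitory coupling
    dE = (-E + (1.0 - E)*S(c1*E - c2*I + c5*(A @ E) + P, aE, thE))/tau
    dI = (-I + (1.0 - I)*S(c3*E - c4*I + c6*(A @ I), aI, thI))/tau
    return dE, dI

def simulate(A, c5, P=0.0, dt=0.1, steps=1000, seed=0):
    """Heun integration; returns the excitatory trajectory (steps x n)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    E, I = np.full(n, 0.1), np.full(n, 0.1)
    traj = np.empty((steps, n))
    for t in range(steps):
        dE1, dI1 = derivs(E, I, A, c5, P)
        dE2, dI2 = derivs(E + dt*dE1, I + dt*dI1, A, c5, P)
        E = E + dt*(dE1 + dE2)/2.0 + sigma*rng.standard_normal(n)
        I = I + dt*(dI1 + dI2)/2.0 + sigma*rng.standard_normal(n)
        traj[t] = E
    return traj
```

Targeted stimulation (see below) corresponds to setting a positive external input, e.g. `P = 1.15`, for the stimulated regions.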
\subsection*{Assessing individual variability}
In order to calculate the model transition values for each individual, we ran $1$s simulations (after allowing the system to stabilize) for a range of $c_5$ parameters ($0.05\leq c_5\leq 0.25$, with a step-size of $0.001$) in which no external input was applied ($P = 0$). The average excitatory activity was recorded for each region as a function of $c_5$ as shown in Fig. \ref{fig:schematic}c. The value of $c_5$ at which we observed a sudden increase in the average activity (marked by an arrow in Fig. \ref{fig:schematic}c) was identified as the transition value, $c_5^T$.
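Given a precomputed sweep of average excitatory activity over the $c_5$ grid, the transition value can be located as sketched here. This is a hypothetical illustration: the text identifies $c_5^T$ visually as a "sudden increase," and taking the largest first difference of the activity curve is our simplifying assumption.

```python
import numpy as np

def transition_value(c5_grid, mean_activity):
    """Return the c5 value at which the average excitatory activity
    jumps most sharply (a proxy for the transition value c5^T)."""
    jumps = np.diff(mean_activity)          # first differences along the sweep
    return c5_grid[np.argmax(jumps) + 1]    # grid point just after the jump
```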
Structural variability was measured by calculating the standard deviation of a given connection strength $A_{ij}$ between brain regions $i$ and $j$ across all subjects and then normalizing by the average connection strength, $\langle A_{ij} \rangle$.
\subsection*{Targeted computational activation}
To activate (stimulate) a particular brain region, we applied a constant external input $P_i = 1.15$ which drives the regional activity into a limit cycle as shown in Fig. \ref{fig:functional}b. Before targeted activation, the global coupling parameter $c_5$ was fixed just below its transition value $c_5^T$ (which differs between subjects) to place the system in a state of maximum sensitivity to perturbations. When stimulating the L-IFG, we simultaneously activated four brain regions (\textit{pars orbitalis} (POB), \textit{pars triangularis} (PTA) and \textit{pars opercularis} (POC, two regions), Fig. \ref{fig:functional}a) by applying an external input of $P=1.15$.
\subsection*{Functional connectivity and functional effect of stimulation}
To quantify the spread of computational stimulation, functional connectivity was determined by calculating the pairwise maximum normalized cross-correlation \cite{{Kramer2009},{Feldt2007}} between the excitatory activity $E_i(t)$ and $E_j(t)$, for brain regions $i$ and $j$. We used a window size of $1$s and calculated correlations over a maximum lag of $250$ms.
In order to quantify the functional effect of stimulation, we considered the regional dynamics within a window of $2$s once the system is stabilized after the initial transient period. The system was first allowed to evolve without any external input ($P_i = 0$) for $1$s and then an external input of $P_i=1.15$ was applied for $1$s to the set of brain regions selected for activation or stimulation. Functional connectivity was calculated separately for the stimulation-free period and the period of stimulation. We then calculated the difference between the pairwise values of functional connectivity measured during and before stimulation, resulting in a matrix that describes the pairwise changes in functional connectivity resulting from the stimulation. The functional effect of stimulation \cite{Muldoon2016} was then calculated using this matrix either globally by averaging over the entire matrix, or within a task circuit by averaging within the task circuit, or outside of a task circuit by averaging over regions outside of the task circuit.
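The functional connectivity and functional-effect computations above can be sketched as follows. This is a hypothetical NumPy version, not the authors' code: the naive lag loop and the averaging over the full difference matrix (rather than only unique off-diagonal pairs) are our simplifications, and all function names are ours.

```python
import numpy as np

def max_xcorr(x, y, max_lag):
    """Maximum normalized cross-correlation of x and y over lags in
    [-max_lag, max_lag]. Assumes max_lag is small vs. the signal length."""
    x = (x - x.mean())/(x.std() + 1e-12)
    y = (y - y.mean())/(y.std() + 1e-12)
    best = -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.mean(x[lag:]*y[:len(y) - lag]) if lag > 0 else np.mean(x*y)
        else:
            c = np.mean(x[:lag]*y[-lag:])
        best = max(best, c)
    return best

def fc_matrix(E, max_lag):
    """Pairwise functional connectivity from excitatory time series (T x n)."""
    n = E.shape[1]
    F = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            F[i, j] = F[j, i] = max_xcorr(E[:, i], E[:, j], max_lag)
    return F

def functional_effect(E_pre, E_stim, max_lag, circuit=None):
    """Mean change in FC during vs. before stimulation, globally or
    restricted to a task circuit (list of region indices)."""
    dF = fc_matrix(E_stim, max_lag) - fc_matrix(E_pre, max_lag)
    if circuit is not None:
        dF = dF[np.ix_(circuit, circuit)]
    return dF.mean()
```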
\subsection*{Defining task circuits}
To construct a task dependent localized measure, we followed the work of Roux et al. \cite{Roux2008} to identify the possible brain regions involved in reading letters and numbers. We extended their findings to propose that the brain regions involved in reading letters contribute towards performing VG and SC tasks, and the brain regions involved in reading numbers contribute towards performing the NR task. We mapped these regions to the Lausanne atlas and constructed possible task circuits involved in VG and SC (letter-related, Fig. \ref{fig:circuits}a), and NR (number-related, Fig. \ref{fig:circuits}b). These circuits are also consistent with other studies mapping brain regions involved in language processing \cite{{Dronkers2004},{Turken2011},{NathanielJames2002},{Holland2001}}.
\subsection*{Randomizing network structure and stimulation effect}
To assess the effect of global network organization, we repeated our entire computational analysis with randomized anatomical connectivity matrices. Anatomical connectivity matrices were randomized separately for each individual in order to alter the original brain network organization. Here, we preserved the overall connectivity distribution but randomly reassigned connection strength values to each pair of the brain regions from this distribution.
To test the specificity of our results to the choice of regions comprising the task circuit, we constructed $10,000$ sub-networks by randomly picking the same number of brain regions as in the NR circuit ($22$) from all of the $234$ brain regions. We then stimulated the L-IFG and calculated the functional effect within these randomly constructed sub-networks for each individual. These sub-networks had varying degrees of overlap with the actual task circuit, ranging from $0$ to $35\%$. We observed that only $1.8\%$ of the random sub-networks produced a significant correlation between the localized functional effect and task performance with $r > 0.5$ and $p < 0.05$, indicating that our results are indeed related to the specific construction of the task circuit.
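The random-circuit null model can be sketched as below. This is a hypothetical illustration, not the authors' code: we flag a draw as "significant" using only the $r > 0.5$ threshold (the $p < 0.05$ check is omitted for brevity), and the function names and data layout (one FC-change matrix per subject) are our assumptions.

```python
import numpy as np

def circuit_effect(dF, circuit):
    """Mean pairwise functional effect within a set of regions
    (off-diagonal entries of the FC-change matrix dF)."""
    sub = dF[np.ix_(circuit, circuit)]
    n = len(circuit)
    return (sub.sum() - np.trace(sub))/(n*(n - 1))

def random_circuit_rate(dF_mats, scores, circuit_size, n_regions=234,
                        n_draws=10_000, r_thresh=0.5, seed=0):
    """Fraction of random sub-networks whose within-circuit functional
    effect correlates with task performance above r_thresh."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_draws):
        circuit = rng.choice(n_regions, size=circuit_size, replace=False)
        effects = [circuit_effect(dF, circuit) for dF in dF_mats]
        if np.corrcoef(effects, scores)[0, 1] > r_thresh:
            hits += 1
    return hits/n_draws
```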
In order to assess the importance of stimulating the L-IFG for the cognitive tasks chosen in this particular work, we compared our findings with those obtained when stimulation was applied to a group of regions randomly chosen (under spatial constraints) within the actual task circuits with a regional brain volume equivalent to the L-IFG ($4$ brain regions). The $4$ brain regions were chosen such that they formed a continuous spatial volume within the brain. We could identify $7$ such volumes within the task circuits that we constructed, and used these $7$ alternate stimulation sites to assess the specificity of our findings to the choice of stimulation site. The model did not produce significant predictions when any of these $7$ volumes was stimulated, indicating that the model is indeed sensitive to the choice of stimulation site.
\subsection*{Network statistics}
We calculated network statistics for each subject using the structural matrices derived from their DSI data. The weighted degree centrality, $k_i$, for a given region $i$ is defined as $k_i= \sum_{j=1}^{234}A_{ij}$ \cite{Newman_book}. The average across the degree centralities of all network nodes was used to obtain the average degree of a given subject. The spectral radius, $S_r$, is given by the maximum eigenvalue of the connectivity matrix $A$, $S_r=\lambda_{\max}$, and is a global measure of network structure that is related to the spread of synchronization in a network \cite{{Restrepo2005},{Meghanathan2014}}. Synchronizability, $S$, is defined as $S=\frac{\lambda_2^L}{\lambda_{\max}^L}$, where $\lambda_2^L$ and $\lambda_{\max}^L$ denote the second smallest and the largest eigenvalue of the Laplacian matrix $L$ ($L = D - A$, where $D$ is the degree matrix of $A$) \cite{{Huang2008},{Chen2008}}.
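These three structural measures follow directly from the adjacency matrix; a minimal NumPy sketch (function name ours) is:

```python
import numpy as np

def network_stats(A):
    """Average weighted degree, inverse spectral radius, and
    synchronizability of a weighted adjacency matrix A."""
    k = A.sum(axis=1)                                  # weighted degree k_i
    avg_degree = k.mean()
    spec_radius = np.max(np.abs(np.linalg.eigvals(A))) # lambda_max of A
    L = np.diag(k) - A                                 # graph Laplacian L = D - A
    lam = np.sort(np.linalg.eigvalsh(L))
    sync = lam[1]/lam[-1]                              # lambda_2^L / lambda_max^L
    return avg_degree, 1.0/spec_radius, sync
```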
\section*{Acknowledgments}
This work is supported by the U.S. Army Research Laboratory through contract numbers W911NF-10-2-0022 and W911NF-16-2-0158 from the U.S. Army research office. DSB acknowledges MacArthur Foundation and the Alfred P. Sloan Foundation. JDM acknowledges funding from NIH and NIMH (award no. 1-DP5 OD021352-01). The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies.
\section{Introduction}
The characteristics of heterogeneity across economic units are informative for many econometric applications.
For example, there is an interest in heterogeneity in the dynamics of price deviations or changes (e.g., \citealp{klenow2010microeconomic}; \citealp{CruciniShintaniTsuruga15}).
As another example, allowing for the presence of heterogeneity may make a crucial difference in the identification and estimation of production functions (e.g., \citealp{AckerbergBenkardBerryPakes07}; \citealp{Kasahara2017}).
Thus, there are many econometric studies that investigate the degree of heterogeneity using panel data (e.g., \citealp{HsiaoPesaranTahmiscioglu99}; \citealp{FernandezValLee13}; \citealp{JochmansWeidner2018}; \citealp{Okui2017}).
This paper proposes kernel-smoothing estimation for panel data to analyze heterogeneity across cross-sectional units.\footnote{An {\ttfamily R} package to implement the proposed procedure is available from the authors' websites.
}
After estimating the mean, autocovariances, and autocorrelations of each unit, we compute the kernel densities based on these estimated quantities.
This easy-to-implement procedure provides useful visual information for heterogeneity in a model-free manner.
For example, the densities of the heterogeneous mean, variance, and first-order autocorrelation of the price deviations indicate visually the characteristics of heterogeneity in the long-run level, variance, and persistence of the price deviations across items (goods and services) that are cross-sectional units in this example.
We show the consistency and asymptotic normality of the kernel density estimator based on the double asymptotics under which both the cross-sectional size $N$ and the time-series length $T$ tend to infinity with the bandwidth $h$ shrinking to zero (denoted $N, T \to \infty$ and $h \to 0$).\footnote{More precisely,
the double asymptotics $N, T \to \infty$ are any monotonic sequence $T=T(N) \to \infty$ as $N \to \infty$ and the bandwidth $h \to 0$ is any monotonic sequence $h = h(N, T(N)) = h(N) \to 0$ as $N, T \to \infty$ in our setting. Note that each theoretical result in this paper specifies additional conditions on the relative magnitudes of $N$, $T$, and $h$.
}
The asymptotic properties exhibit several unique features that have not been well examined in the literature.
Most importantly, asymptotic bias of even very high order affects the conditions on the relative magnitudes of $N$, $T$, and $h$ required for consistency and asymptotic normality. As a result, different orders of asymptotic expansion yield different relative magnitude conditions.
This unique feature contrasts our analysis with the existing analyses where the required relative magnitude conditions do not depend on the order of expansions (e.g., $N / T^2 \to 0$ for asymptotic normality in \citealp{HsiaoPesaranTahmiscioglu99} and \citealp{Okui2017}).
The weakest condition (i.e., how small $T$ can be compared with $N$) can be obtained by executing an infinite order expansion.
Even in that case, the required condition is stronger than those typically observed in the long-panel literature ($N/T^2 \to 0$).
Moreover, it requires nontrivial technical discussions for the expansion (e.g., the summability of the infinite-order series).
We clarify that these unique features are caused by the presence of the bandwidth $h$ and by using the estimated quantities.
Based on an infinite-order expansion, we show three asymptotic biases for the density estimation.
The first is the standard kernel-smoothing bias of order $O(h^2)$ (see, e.g., \citealp{LiRacine2007}).
The second is caused by the incidental parameter problem (\citealp{NeymanScott48} and \citealp{Nickell1981}) and is of order $O(1/T)$.
The third results from the nonlinearity of the kernel function and the difference between the estimated quantity and the true quantity.
We show that this is of order $O(1/(Th^2)) + \sum_{j=3}^{\infty} O(1/\sqrt{T^j h^{2j}})$, which is obtained only if we execute an infinite-order expansion.
By showing these asymptotic biases, we prove that the relative magnitude conditions for consistency and asymptotic normality are $N^2 / T^5 \to 0$ and $N^2 / T^3 \to 0$, respectively, when using the standard bandwidth $h \asymp N^{-1/5}$ in the density estimation with second-order kernels.
We propose to apply a split-panel jackknife method in \citet{DhaeneJochmans15} to reduce these biases.
In particular, we formally show that the half-panel jackknife (HPJ) corrects the incidental parameter bias and the second-order nonlinearity bias without inflating the asymptotic variance.
While the jackknife is useful in bias reduction especially when $T$ is small, we also show that it does not weaken the relative magnitude conditions for consistency and asymptotic normality.
We also develop confidence interval estimation and selection of bandwidth.
To construct confidence intervals, we extend the robust bias-corrected procedure in \citet{calonico2018effect} to split-panel jackknife bias-corrected estimation. This method explicitly corrects all three biases above.
While an alternative and more traditional approach is to avoid smoothing bias by undersmoothing, it is not suitable in our context because undersmoothing makes the conditions on $N$ and $T$ stronger.
For the bandwidth selection, we can apply any standard procedures in the literature.
This is because, under the relative magnitude conditions, the asymptotic mean squared error (AMSE) and asymptotic distribution of the split-panel jackknife bias-corrected estimator are the same as those of the infeasible estimator based on the true quantity.
We also examine the properties of the cumulative distribution function (CDF) estimator constructed by integrating the kernel density estimator.
This kernel CDF estimator also exhibits asymptotic bias that varies with the order of the asymptotic expansion that we execute.
We also derive the closed form formula for the asymptotic bias.
This is an interesting result because the formula for asymptotic bias for the empirical distribution is available only for Gaussian errors \citep{JochmansWeidner2018} and has not been derived in general form \citep{Okui2017}.
However, the required conditions on $N$ and $T$ for the kernel CDF estimation turn out to be stronger than those for the empirical CDF estimation derived in those studies.
We illustrate our procedures by an empirical application on heterogeneity of price deviations from the law of one price (LOP) and Monte Carlo simulations.
\paragraph{Related literature.}
Our setting and motivation closely relate to \citet{Okui2017}, but there are several important distinctions between our paper and theirs in both theoretical and practical aspects.
First, the relative magnitude conditions for our kernel estimation vary with the order of expansions we can execute, and our relative magnitude conditions differ from those in \citet{Okui2017}, in which second-order expansions suffice to derive the conditions for estimating the moments of the quantities (e.g., the variance of the heterogeneous mean).
This feature in particular contrasts the theoretical contributions in both papers, and indeed our relative magnitude conditions are new in the literature.
Second, we show the new insight that the split-panel jackknife is applicable even to kernel estimation.
Third, because it is well known that bootstrap inferences do not capture kernel-smoothing bias (see, e.g., \citealp{hall2013simple}), we extend the robust bias-corrected inference in \citet{calonico2018effect} instead of the cross-sectional bootstrap in \citet{Okui2017}.\footnote{The failure of the cross-sectional bootstrap inference in our kernel estimation is formally shown in the previous version of this study uploaded to arXiv (arXiv:1802.08825v2).}
Finally, while \citet{Okui2017} do not clarify asymptotic biases for their empirical CDFs, we formalize those of our kernel estimators.
Our CDF estimation relates to \citet{JochmansWeidner2018} who derive the bias of the empirical distribution based on noisy measurements (e.g., estimated quantities) for the true variables of interest.
Their results are complementary to ours because both papers consider CDF estimation, but they consider a situation where observations exhibit Gaussian errors.
We do not assume such errors, and the kernel smoothing allows us to derive bias under much weaker distributional assumptions.
Instead, the kernel smoothing creates additional higher-order biases.
Many econometric studies examine heterogeneity in panel data (e.g., \citealp{PesaranSmith95}; \citealp{HorowitzMarkatou96}; \citealp{HsiaoPesaranTahmiscioglu99}; \citealp{PesaranShinSmith99}; \citealp{ArellanoBonhomme12}; \citealp{FernandezValLee13}; \citealp{MavroeidisSasakiWelch15}).
Compared with them, we propose model-free kernel-smoothing estimation.
No existing theoretical study examines kernel-smoothing for heterogeneity in panel data, while several applied studies have used such estimation (e.g., \citealp[Figure 2]{Kasahara2017} and \citealp[Figure 8]{roca2017learning}).
Several studies propose model-free analyses for panel data, but do not focus on the degree of heterogeneity in the dynamics.
For example, \citet{Okui08, Okui11, Okui14} and \citet{LeeOkuiShintani13} consider homogeneous dynamics, and \citet{GalvaoKato14} study the properties of the possibly misspecified fixed effects estimator in the presence of heterogeneous dynamics.
Kernel density estimation using estimated quantities is also examined in the literature on structural estimation of auction models.
For example, \citet{mamarmershneyerov2018} and \citet{gpv2000} estimate the density of individual evaluations of auctioned goods.
In their first stage, individual evaluations of auctioned goods are estimated nonparametrically and their second stage is the kernel density estimation applied to estimated evaluations.
They also observe that the estimation errors from the first stage affect the asymptotic behavior of the second stage estimator in a nonstandard way.
However, their problems are different from ours.
Their main issue is the cross-sectional correlation caused by the use of the same set of observations to estimate individual evaluations.
As a result, their estimation errors affect the precision and the convergence rate of the second stage estimator.
In our case, estimation errors in the first stage are cross-sectionally independent and affect the bias but not the (first-order) variance of the second stage estimator.
\paragraph{Paper organization.}
Section \ref{sec-setup} introduces our setting and density estimation.
Section \ref{sec-theory} develops the asymptotic theory, bias correction, confidence interval estimation, bandwidth selection, and CDF estimation.
Sections \ref{sec-application} and \ref{sec-simulation} present the application and simulations.
Section \ref{sec-conclusion} concludes.
The supplementary appendix contains the proofs of the theorems, technical lemmas, and other technical discussions.
\section{Kernel density estimation} \label{sec-setup}
This section describes the setting and the proposed estimation.
We explain our setting and motivation in a succinct manner because they are similar to those in \citet{Okui2017}.\footnote{Several remarks and possible extensions can be found in the previous version of this study and \citet{Okui2017}.
For example, we can consider the presence of covariates and time effects and estimation based on other heterogeneous quantities with minor modifications.
In this paper, we explain our estimation briefly to save space.}
We observe panel data $\{ \{ y_{it} \}_{t=1}^T \}_{i=1}^N$ where $y_{it}$ is a scalar random variable.
We assume that $y_{it}$ is strictly stationary across time and that each individual time series $\{y_{it}\}_{t=1}^T$ is generated from some unknown probability distribution $\mathcal{L}(\{y_{it}\}_{t=1}^T; \alpha_i)$ that may depend on the unit-specific unobserved variable $\alpha_i$.
We denote the conditional expectation given $\alpha_i$ by $E(\cdot | i)$.
Our goal is to examine the degree of heterogeneity of the dynamics of $y_{it}$ across units in a model-free manner.
To this end, we focus on estimating the density of the mean $\mu_i \coloneqq E(y_{it} | i)$, $k$-th autocovariance $\gamma_{k,i} \coloneqq E( (y_{it} - \mu_i)(y_{i,t-k} - \mu_i) | i )$, and $k$-th autocorrelation $\rho_{k,i} \coloneqq \gamma_{k, i} / \gamma_{0,i}$.
We first estimate $\mu_i$, $\gamma_{k,i}$, and $\rho_{k,i}$ by the sample analogues: $\hat \mu_i \coloneqq \bar y_i \coloneqq T^{-1} \sum_{t=1}^{T} y_{it}$, $\hat \gamma_{k,i} \coloneqq (T - k)^{-1} \sum_{t=k+1}^T (y_{it} - \bar y_i) (y_{i, t-k} - \bar y_i)$, and $\hat \rho_{k,i} \coloneqq \hat \gamma_{k,i} / \hat \gamma_{0,i}$.
Throughout the paper, we use the notation $\xi_i$ to represent one of $\mu_i$, $\gamma_{k,i}$, or $\rho_{k,i}$ and the notation $\hat \xi_i$ for the corresponding estimator.
The kernel estimator for the density $f_{\xi}(x)$ is given by:
\begin{align} \label{eq-dens-cdf}
\hat f_{\hat \xi}(x) \coloneqq \frac{1}{Nh} \sum_{i=1}^N K \left( \frac{x - \hat \xi_{i}}{h}\right),
\end{align}
where $x \in \mathbb{R}$ is a fixed point, $K:\mathbb{R} \to \mathbb{R}$ is a kernel function, and $h > 0$ is a bandwidth satisfying $h \to 0$.\footnote{We can consider estimating the joint density for $\mu_i$, $\gamma_{k, i}$, and $\rho_{k,i}$ in the same manner.}
This is a standard estimator except that we replace the true $\xi_i$ with the estimated $\hat \xi_i$.
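To fix ideas, the two-step procedure can be sketched numerically as follows. This is our own minimal illustration with a Gaussian kernel; the function and variable names are not from the paper:

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def unit_statistics(y_i, k=1):
    """Sample analogues of mu_i, gamma_{k,i}, and rho_{k,i} for one unit."""
    mu_hat = y_i.mean()
    w = y_i - mu_hat
    gamma0_hat = np.mean(w**2)            # hat gamma_{0,i}, divisor T
    gammak_hat = np.mean(w[k:] * w[:-k])  # hat gamma_{k,i}, divisor T - k
    return mu_hat, gammak_hat, gammak_hat / gamma0_hat

def kernel_density(xi_hat, x, h):
    """Feasible kernel density estimator hat f_{hat xi}(x) at grid points x."""
    u = (np.atleast_1d(x)[:, None] - np.asarray(xi_hat)[None, :]) / h
    return gaussian_kernel(u).mean(axis=1) / h
```

Here `unit_statistics` is applied to each row of the $N \times T$ panel, and `kernel_density` then smooths the resulting cross-section of estimated quantities.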
\section{Asymptotic theory} \label{sec-theory}
This section develops our asymptotic theory, confidence interval estimation, bandwidth selection, and CDF estimation based on the density estimator $\hat f_{\hat \xi}(x)$.
We define $w_{it} \coloneqq y_{it} - \mu_i = y_{it} - E(y_{it}|i)$ and $\bar w_i \coloneqq T^{-1} \sum_{t=1}^T w_{it}$.
By construction, $y_{it} = \mu_i + w_{it}$.
Note that $\hat \mu_i = \bar y_i = \mu_i + \bar w_i$, $E(w_{it}|i) = 0$, and $\gamma_{k,i} = E(w_{it} w_{i,t-k}|i)$.
\subsection{Unique features in asymptotic investigations} \label{sec-unique}
Before formally showing the asymptotic properties, we explore the unique features of our asymptotic investigations in an informal manner.
By doing so, we clarify the mechanism behind the observation that even very high orders of asymptotic bias matter for our asymptotic analysis.
We here focus on the density estimator for $\hat \mu_i$, but similar discussions are also relevant for $\hat \gamma_{k,i}$ and $ \hat \rho_{k,i}$.
Noting that $\hat \mu_i - \mu_i = \bar w_i$, we examine the $J$-th order Taylor expansion of $\hat f_{\hat \mu}(x)$:
\begin{align}\label{eq:issue}
\begin{split}
\hat f_{\hat \mu}(x)
=& \frac{1}{Nh} \sum_{i=1}^N K \left( \frac{ x - \mu_i }{h}\right)\\
& + \sum_{j = 1}^{J-1} \frac{(-1)^j}{j! Nh^{j+1}} \sum_{i=1}^N (\bar w_i)^j K^{(j)} \left( \frac{x - \mu_i }{h}\right)\\
& + \frac{(-1)^J}{J! Nh^{J+1}} \sum_{i=1}^N (\bar w_i)^J K^{(J)} \left( \frac{x - \tilde \mu_i }{h}\right),
\end{split}
\end{align}
where $K^{(j)}$ denotes the $j$-th order derivative and $\tilde \mu_i$ is between $\mu_i$ and $\hat \mu_i$.
The first term in \eqref{eq:issue} is the infeasible density estimator based on the true $\mu_i$, and its asymptotic behavior is standard and well known in the kernel-smoothing literature.
It converges in probability to the density of interest $f_{\mu}(x)$ as $N\to \infty$ and $h\to 0$ with $Nh \to \infty$.
In addition, when $Nh^5 \to C \in [0, \infty)$ also holds, we have:
\begin{align*}
\sqrt{Nh} \left( \frac{1}{Nh} \sum_{i=1}^N K \left( \frac{ x - \mu_i }{h}\right) -f_{\mu}(x) - h^2 \frac{\kappa_1 f_{\mu}^{''} (x)}{2} \right) \stackrel{d}{\longrightarrow} \mathcal{N} \big( 0, \kappa_2 f_{\mu}(x) \big),
\end{align*}
where $\kappa_1 \coloneqq \int s^2 K(s) ds$ and $\kappa_2 \coloneqq \int K^2(s) ds$ and $\mathcal{N}(\mu, \sigma^2)$ is a normal distribution with mean $\mu$ and variance $\sigma^2$.
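For concreteness, with the Gaussian kernel these constants have closed forms, $\kappa_1 = 1$ and $\kappa_2 = 1/(2\sqrt{\pi})$; a quick symbolic check (our own verification, not part of the paper):

```python
import sympy as sp

s = sp.symbols('s', real=True)
K = sp.exp(-s**2 / 2) / sp.sqrt(2 * sp.pi)  # Gaussian kernel

kappa1 = sp.integrate(s**2 * K, (s, -sp.oo, sp.oo))  # int s^2 K(s) ds
kappa2 = sp.integrate(K**2, (s, -sp.oo, sp.oo))      # int K(s)^2 ds
```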
The unique features in our situation arise from the second and third terms in \eqref{eq:issue}.
For the second term, under regularity conditions, the mean can be evaluated as:
\begin{align*}
E\left( \frac{(-1)^j}{j!Nh^{j+1}} \sum_{i=1}^N (\bar w_i)^j K^{(j)} \left( \frac{x - \mu_i }{h}\right) \right)
&= \frac{(-1)^j}{j!h^{j+1}} E\left( E\left( (\bar w_i)^j | \mu_i \right) K^{(j)} \left( \frac{x - \mu_i }{h}\right) \right)\\
&= \frac{(-1)^j}{j!h^j} E( (\bar w_i)^j | \mu_i = x ) f_{\mu}(x) \int K^{(j)}(s) ds + o\left(\frac{1}{\sqrt{T^j h^{2j}}} \right) \\
&= O\left(\frac{1}{\sqrt{T^j h^{2j}}}\right),
\end{align*}
where we used $E( (\bar w_i)^j | \mu_i = x) = O(T^{-j/2})$ (see Assumption \ref{as-limit} and Lemma \ref{lem-wbar} below).
Noting that $E(\bar w_i| \mu_i) = 0$, the bias arising from the second term in \eqref{eq:issue} can be written as $\sum_{j = 2}^{J-1} O(1/\sqrt{T^j h^{2j}})$.
This bias is negligible when $1/(Th^2) \to 0$, which is identical to the relative magnitude condition $N^2/T^5 \to 0$ when using the standard bandwidth $h \asymp N^{-1/5}$ in the density estimation with second-order kernels.
For the third term in \eqref{eq:issue}, the absolute mean can be evaluated as:
\begin{align*}
E\left| \frac{(-1)^J}{J! Nh^{J+1}} \sum_{i=1}^N (\bar w_i)^J K^{(J)} \left( \frac{x - \tilde \mu_i }{h}\right) \right|
\le \frac{M}{h^{J+1}} E|\bar w_i|^J
= O\left( \frac{1}{\sqrt{T^Jh^{2J+2}}} \right),
\end{align*}
where $M > 0$ denotes a generic constant and we use $E|\bar w_i|^J = O(T^{-J/2})$ (see Lemma \ref{lem-wbar}).
Hence, the third term in \eqref{eq:issue} is of order $O_p(1/\sqrt{T^J h^{2J+2}})$ by Markov's inequality.
Remarkably, this term does not vanish even when $1/(Th^2) \to 0$, the condition under which the lower-order terms are negligible.
This term can be negligible only if $1/(Th^{2+2/J}) \to 0$, which implies $N^{(2 + 2/J)} / T^5 \to 0$ when $h \asymp N^{-1/5}$.
Note that $N^{(2 + 2/J)} / T^5 \to 0$ is ``stronger'' than $N^2 / T^5 \to 0$.
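To spell out the rate arithmetic behind these conditions, substituting the standard bandwidth $h \asymp N^{-1/5}$ directly gives:
\begin{align*}
\frac{1}{T h^{2 + 2/J}} \asymp \frac{N^{(2+2/J)/5}}{T} \to 0
\quad \Longleftrightarrow \quad
\frac{N^{2 + 2/J}}{T^{5}} \to 0,
\end{align*}
so each finite $J$ imposes a strictly stronger requirement than the limiting case, and letting $J \to \infty$ recovers $1/(Th^2) \asymp N^{2/5}/T \to 0$, that is, $N^2/T^5 \to 0$.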
The asymptotic investigation above exhibits several unique features.
First, it implies that the relative magnitude condition for consistency (and also that for asymptotic normality) varies with the order of the expansion.
Specifically, we need $1/(Th^{2+2/J}) \to 0$ to achieve the consistency of $\hat f_{\hat \mu}(x)$ based on the $J$-th order expansion, in contrast to the existing literature in which the relative magnitude conditions for consistency or asymptotic normality do not depend on the order of expansions.
Second, we can obtain the ``weakest'' relative magnitude condition $1/(Th^2) \to 0$ for consistency, only if we execute the infinite-order expansion (that is, as $J \to \infty$).
Finally, while we can derive the suitable condition $1/(Th^2) \to 0$ via the infinite-order expansion, it requires the existence of higher-order moments of $w_{it}$.
The above investigation implies that the evaluation based on the infinite-order expansion demands the existence of $E|w_{it}|^j$ for any $j$.
Hence, there is a trade-off between the relative magnitude condition and the existence of higher-order moments.
Asymptotic normality requires a further stronger condition.
Because the rate of convergence of the kernel estimator is $\sqrt{Nh}$, it requires $N/(T^J h^{2J+1})\to 0$.
This condition is at best $N^2/T^3 \to 0$, which is obtained under an infinite-order expansion with the standard bandwidth ($h \asymp N^{-1/5}$).
Note that, as in the density estimation above, the highest order of the expansion determines the required condition for asymptotic normality.
Such a very high order of bias cannot be corrected in practice, even though methods to correct the first few orders of bias are available in the long-panel literature (e.g., \citealp{DhaeneJochmans15}).
This result is in stark contrast to the existing studies in which bias correction improves the conditions on the relative magnitudes of $N$ and $T$.
The main reason behind these unique features is that the curvature of the summand (i.e., $K((x - \hat \mu_i)/h)$) depends on the bandwidth $h$.
Roughly speaking, as $h \to 0$, the summand function becomes steeper and more ``nonlinear.''
It exaggerates the bias caused by the nonlinearity and it turns out that even a very high order derivative of $K$ affects the bias.
Alternatively, we may also interpret this problem based on the equation $K((x -\hat \mu_i)/h) = K((x - \mu_i)/h + (\mu_i -\hat \mu_i)/h)$.
The contribution of the error by using the estimated $\hat \mu_i$ is $(\mu_i - \hat \mu_i)/h$ and it increases as $h\to 0$.
Hence, the bias of the density estimator heavily depends on the magnitude of $h$ and the nonlinearity of $K$.
\subsection{Asymptotic biases for the density estimation}
We here formally show the presence of asymptotic biases of the kernel density estimator in \eqref{eq-dens-cdf}.
We conduct asymptotic investigations based on an infinite-order expansion under which the weakest possible condition on the relative magnitude of $N$ and $T$ is obtained.
We assume the following basic conditions for the data-generating process.
These are essentially the same as the assumptions in \citet{Okui2017}.
\begin{assumption}\label{as-basic}
The sample space of $\alpha_i$ is some Polish space and $y_{it} \in \mathbb{R}$ is a scalar real random variable.
$\{(\{y_{it}\}_{t=1}^T, \alpha_i)\}_{i=1}^N$ is i.i.d. across $i$.
\end{assumption}
\begin{assumption}\label{as-mixing-c}
For each $i$, $\{y_{it}\}_{t=1}^{\infty}$ is strictly stationary and $\alpha$-mixing given $\alpha_i$ with mixing coefficients $\{\alpha (m|i)\}_{m=0}^\infty$.
For any natural number $r_m \in \mathbb{N}$, there exists a sequence $\{ \alpha (m) \}_{m=0}^\infty$ such that for any $i$ and $m$, $\alpha (m|i) \le \alpha (m)$ and $\sum_{m=0}^{\infty} (m+1)^{r_m/2-1} \alpha(m) ^{\delta / (r_m+\delta)} < \infty$ for some $\delta>0$.
\end{assumption}
\begin{assumption}\label{as-w-moment-c}
For any natural number $r_d \in \mathbb{N}$, it holds that $E|w_{it}|^{r_d+\delta} < \infty$ for some $\delta > 0$.
\end{assumption}
\begin{assumption}\label{as-rho}
There exists a constant $\epsilon > 0$ such that $\gamma_{0,i} > \epsilon$ almost surely.
\end{assumption}
Assumptions \ref{as-basic} and \ref{as-mixing-c} require that the individual time series given $\alpha_i$ is strictly stationary across time but i.i.d. across units.
Note that the i.i.d. assumption does not exclude the presence of heterogeneity in panel data.
In our setting, heterogeneity is caused by differences in the realized values of $\{\alpha_i\}_{i=1}^N$ across units.
Assumption \ref{as-mixing-c} also restricts the degree of persistence of the individual time series.
The conditions for stationarity and degree of persistence require that the time series for each unit is not a unit root process and that the initial value of each time series is generated from a stationary distribution.
Assumption \ref{as-w-moment-c} requires the existence of the moments of $w_{it}$, and it allows us to derive the asymptotic biases of the estimators.
While we can develop the theoretical properties of the estimators in situations where Assumptions \ref{as-mixing-c} and \ref{as-w-moment-c} do not hold for some numbers $r_m$ and $r_d$, we cannot derive the higher-order biases based on infinite-order expansions in such situations.
As a result, in such situations, we demand stronger conditions on the relative magnitudes as discussed in the previous section.
Assumption \ref{as-rho} allows us to derive the asymptotic properties of the kernel estimators for $\rho_{k,i}$.
All of the assumptions can be satisfied in popular panel data models.
For example, they all hold when $y_{it}$ follows a heterogeneous stationary panel autoregressive moving average model with a Gaussian error term.
We also assume the following additional conditions.
\begin{assumption}\label{as-kernel}
The kernel function $K:\mathbb{R} \to \mathbb{R}$ is bounded, symmetric, and infinitely differentiable.
It satisfies $\int K(s) ds = 1$, $\int |K^{(j)}(s)| ds < \infty$, $\int |s K^{(j)}(s)| ds < \infty$, $\int |s^2 K^{(j)}(s)| ds < \infty$, and $\int |s^3 K^{(j)}(s)| ds < \infty$ for any nonnegative integer $j$.
\end{assumption}
Assumption \ref{as-kernel} includes the standard conditions for the kernel function in kernel-smoothing estimation, except for infinite differentiability.
We require the differentiability in order to expand the kernel estimator evaluated at the estimated $\hat \xi_i$ around the true $\xi_i$ via the infinite-order expansion.
Note that the symmetry of $K$ implies that $\int K^{(j)}(s) d s = 0$ for any odd $j$.
For example, the Gaussian kernel function satisfies this assumption.
\begin{assumption}\label{as-density-kernel}
The random variables $\mu_i \in \mathbb{R}$, $\gamma_{k,i} \in \mathbb{R}$, and $\rho_{k,i} \in (-1,1)$ are continuously distributed.
The densities $f_{\xi}$ with $\xi=\mu$, $\gamma_k$, and $\rho_k$ are bounded away from zero near $x$ and three-times boundedly continuously differentiable near $x$.
\end{assumption}
Assumption \ref{as-density-kernel} requires that $\xi_i$ is continuously distributed without a mass of probability.
The continuity of the random variable is essential for implementing kernel-smoothing estimation as it rules out situations where there is no heterogeneity for $\xi_i$ (that is, the situation where $\xi_i = \xi$ for any $i$ with some constant $\xi$) and where there is finite grouped heterogeneity (that is, $\xi_{i_1} = \xi_{i_2}$ for any $i_1, i_2 \in \mathbb{I}_g$ with some sets $\mathbb{I}_1, \mathbb{I}_2, \dots, \mathbb{I}_G$ satisfying $ \bigoplus_{g=1}^G \mathbb{I}_g = \{1, 2, \dots, N\}$).
\begin{assumption}\label{as-limit}
The following functions are twice boundedly continuously differentiable near $x$ for any $T \in \mathbb{N}$ with finite limits at $x$ as $T \to \infty$:
\begin{align*}
& \sqrt{T^j} E\left( (\bar w_i)^j \middle| \mu_i =\cdot \right),
\qquad
\sqrt{T^j} E \left( (\bar w_i)^j \middle| \gamma_{k,i} = \cdot \right),
\qquad
\sqrt{T^j} E \left( (\bar w_i)^j \middle| \rho_{k,i} =\cdot \right), \\
& \frac{1}{\sqrt{T^j}} E \left( \left( \sum_{t=k+1}^T (w_{it}w_{i,t-k}-\gamma_{k,i}) \right)^j \middle| \gamma_{k,i}=\cdot \right),\\
& \frac{1}{\sqrt{T^{j_1 + j_2}}} E\left( \left( \sum_{t=k+1}^T (w_{it}w_{i,t-k}-\gamma_{k,i}) \right)^{j_1} \left( \sum_{t=1}^T (w_{it}^2-\gamma_{0,i}) \right)^{j_2} \frac{\gamma_{k,i}^{j_3}}{\gamma_{0,i}^{j_4}} \middle| \rho_{k,i}=\cdot \right),
\end{align*}
for any nonnegative integers $j, j_1, j_2, j_3, j_4$.
\end{assumption}
Assumption \ref{as-limit} states the existence and smoothness of the conditional expectations.
This assumption allows us to derive the exact forms of the asymptotic biases.
The convergence rates of the terms are standard and guaranteed by Lemmas \ref{lem-wbar} and \ref{lem-wk} in Appendix \ref{sec-lemma}.
For example, the assumption requires that $T \cdot E((\bar w_i)^2 | \mu_i = \cdot) = O(1)$ and the convergence rate is consistent with the result in Lemma \ref{lem-wbar}.
The following theorem shows that the kernel density estimators are consistent and asymptotically normal but exhibit asymptotic biases.
While the theorem assumes an infinite-order Taylor expansion and the summability of the infinite series of the asymptotic biases directly, we can show their validity under unrestrictive regularity conditions.
Because these discussions are highly technical and demand lengthy explanations, they appear in Appendices \ref{sec-expansion} and \ref{sec-series}.
\begin{theorem}\label{thm-dens}
Let $x \in \mathbb{R}$ be an interior point in the support of $\xi_i = \mu_i$, $\gamma_{k,i}$, or $\rho_{k,i}$.
Suppose that Assumptions \ref{as-basic}, \ref{as-mixing-c}, \ref{as-w-moment-c}, \ref{as-kernel}, \ref{as-density-kernel}, and \ref{as-limit} hold.
In addition, if $\xi_i = \rho_{k, i}$, suppose that Assumption \ref{as-rho} also holds.
Suppose that the infinite-order Taylor expansion of $\hat f_{\hat \xi}(x) = (Nh)^{-1} \sum_{i=1}^N K((x - \hat \xi_i) / h)$ at $\xi_i$ holds and that the infinite series of the asymptotic biases below is well defined.
When $N,T \to \infty$ and $h \to 0$ with $Nh \to \infty$, $Nh^5 \to C \in [0, \infty)$, and $T h^2 \to \infty$, it holds that:
\begin{align*}
\hat f_{\hat \xi}(x) - f_{\xi} (x) = \frac{1}{Nh} \sum_{i=1}^{N} K\left( \frac{x - \xi_i }{h} \right) - f_{\xi}(x) + \frac{A_{\xi, 1}(x)}{T} + \frac{A_{\xi, 2}(x)}{Th^2} + \sum_{j = 3}^{\infty} \frac{A_{\xi, j}(x)}{\sqrt{T^j h^{2j}}} + o_p\left( \frac{1}{Th^2} \right),
\end{align*}
where $A_{\xi, j}(x)$ is a nonrandom bias term that depends on $x$ and satisfies $A_{\mu, 1}(x) = 0$ for any $x$.
As a result, when $N / (T^3 h^5) \to 0$ also holds, it holds that:
\begin{align*}
\sqrt{Nh} \left( \hat f_{\hat \xi}(x) - f_{\xi} (x) - h^2 \frac{\kappa_1 f_{\xi}^{''}(x)}{2} - \frac{A_{\xi, 1}(x)}{T} - \frac{A_{\xi, 2}(x)}{Th^2} \right) \stackrel{d}{\longrightarrow} \mathcal{N} \big( 0, \kappa_2 f_{\xi}(x) \big).
\end{align*}
\end{theorem}
The density estimator can be written as the sum of the infeasible estimator based on the true $\xi_i$, say $\hat f_{\xi}(x) \coloneqq (Nh)^{-1} \sum_{i=1}^N K((x - \xi_i)/h)$, and the asymptotic biases.
The convergence rate of the estimator is the standard order of $O_p(1/ \sqrt{Nh})$, and the asymptotic distribution is the same as that of the infeasible estimator $\hat f_{\xi}(x)$.
However, the feasible estimator exhibits asymptotic biases.
These results also require the relative magnitude conditions of $N$, $T$, and $h$; that is, $1/(Th^2) \to 0$ and $N / (T^3 h^5) \to 0$ for consistency and asymptotic normality, respectively.
The estimator for $\mu_i$ has two main asymptotic biases given $A_{\mu, 1}(x) = 0$, but the estimators for $\gamma_{k,i}$ and $\rho_{k,i}$ have three main asymptotic biases, in addition to the higher-order biases.
The first bias of the form $h^2 \kappa_1 f_\xi''(x)/2$ is the standard kernel-smoothing bias.
The second bias of the form $A_{\xi, 1}(x) / T$ is the incidental parameter bias caused by estimating $\gamma_{k,i}$ and $\rho_{k,i}$ by $\hat \gamma_{k,i}$ and $\hat \rho_{k,i}$, respectively.
The estimation of $\hat \gamma_{k,i}$ and $\hat \rho_{k,i}$ involves estimating $\mu_i$ by $ \hat \mu_i = \bar y_i$ for each $i$, which becomes a source of the incidental parameter bias.
The third bias of the form $A_{\xi, 2}(x) / (Th^2)$ is the second-order nonlinearity bias caused by Taylor expanding $K((x - \hat \xi_i )/h)$ around $K(( x - \xi_i )/h)$.
Moreover, the $j$-th order nonlinearity bias exhibits the form $A_{\xi, j}(x) / \sqrt{T^j h^{2j}}$ for $j \ge 3$.
We need the two conditions, $1/(Th^2) \to 0$ and $N / (T^3 h^5) \to 0$, to ensure the asymptotic negligibility of the higher-order nonlinearity biases.
If we use the standard bandwidth $h \asymp N^{-1/5}$ with second-order kernels, the conditions $1 / (T h^2) \to 0$ and $N / (T^3 h^5) \to 0$ imply that $N^2 / T^5 \to 0$ and that $N^2 / T^3 \to 0$, respectively, which are integrated to $N^2 / T^3 \to 0$.
Note that while the incidental parameter bias and the second-order nonlinearity bias are also asymptotically negligible under these conditions, the practical magnitudes of these biases would be larger than those of the higher-order nonlinearity biases.
We have already discussed the source of the nonlinearity bias in Section \ref{sec-unique} so here we provide a slightly more detailed discussion of the incidental parameter bias.
It does not appear in $\hat f_{\hat \mu}(x)$ because the estimation error in $\hat \mu_i$ (that is, $\bar w_i$) has zero mean.
However, the estimation errors in $\hat \gamma_{k,i}$ and $\hat \rho_{k,i}$ are not mean-zero.
For example, $\hat \gamma_{0,i} = \sum_{t=1}^T (y_{it} - \bar y_i)^2 /T = \gamma_{0,i} + \sum_{t=1}^T (w_{it}^2 - \gamma_{0,i}) /T - (\bar w_i)^2$ and $(\bar w_i)^2$ is not mean-zero although it converges to zero at the rate $1/T$.
This is the source of the incidental parameter bias $A_{\gamma_0,1}/T$ and the order $1/T$ comes from the fact that $(\bar w_i)^2$ is of order $O_p(1/T)$.
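The order of this bias can be checked numerically in the simplest case: for i.i.d. data, $E[\hat\gamma_{0,i}] = \gamma_{0,i}(T-1)/T$, so $E[\hat\gamma_{0,i} - \gamma_{0,i}] = -\gamma_{0,i}/T$ exactly. A small Monte Carlo sketch, with parameter values chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
T, R, gamma0 = 20, 100_000, 1.0  # short panel, many replications

# R independent units, each an i.i.d. series of length T with variance gamma0
w = rng.normal(0.0, np.sqrt(gamma0), size=(R, T))
gamma0_hat = ((w - w.mean(axis=1, keepdims=True))**2).mean(axis=1)

bias = gamma0_hat.mean() - gamma0
print(bias)  # close to -gamma0 / T = -0.05
```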
\begin{remark} \label{rem-diff}
There are important differences between our result and \cite{Okui2017} in terms of the relative magnitude conditions.
They show that the asymptotic normality of estimators for moments without bias correction requires the condition that $N/T^2 \to 0$.
As a result, some might surmise that our kernel smoothing requires a ``weaker'' condition such as $Nh / T^2 \to 0$, because the kernel estimation essentially averages over observations in a local neighborhood that contains $Nh$ observations on average.
However, this conjecture is not true: the asymptotic normality of our smoothed density requires somewhat stronger conditions than those in \citet{Okui2017}.
The failure of the conjecture stems from the fact that, as $h\to 0$, the summands, $K ((x - \hat \xi_i)/h)$, become more nonlinear, which increases the nonlinear biases and necessitates imposing a stronger assumption to ignore higher-order nonlinear biases. This feature is not observed in the empirical distribution estimation in \citet{Okui2017}.
\end{remark}
\begin{remark}
When using higher-order kernels, the relative magnitude conditions of $N$ and $T$ for consistency and asymptotic normality are altered.
For example, when using fourth-order kernels, the optimal bandwidth is $h \asymp N^{-1/9}$ (see, e.g., \citealp[Section 1.11]{LiRacine2007}).
Then, the conditions $1/(Th^2) \to 0$ and $N/(T^3 h^5) \to 0$ are identical to $N^2 / T^9 \to 0$ and $N^{14} / T^{27} \to 0$, respectively, which are weaker than the relative magnitude conditions with second-order kernels.
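To make the algebra explicit, substituting $h \asymp N^{-1/9}$ gives:
\begin{align*}
\frac{1}{Th^2} \asymp \frac{N^{2/9}}{T} \to 0 \iff \frac{N^{2}}{T^{9}} \to 0,
\qquad
\frac{N}{T^3 h^5} \asymp \frac{N^{14/9}}{T^{3}} \to 0 \iff \frac{N^{14}}{T^{27}} \to 0.
\end{align*}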
Thus, one may employ higher-order kernels especially when $T$ is much smaller than $N$.
Nonetheless, our Monte Carlo simulations show that the performance of the jackknife bias-corrected estimator with a second-order kernel is satisfactory even when $T$ is small.
\end{remark}
\subsection{HPJ bias correction for density estimation}
As the incidental parameter bias and the nonlinearity biases in $\hat f_{\hat \xi}(x)$ may be severe in practice, we propose adopting the split-panel jackknife to correct them.
Among split-panel jackknives, here we consider half-panel jackknife (HPJ) bias correction.
For simplicity, suppose that $T$ is even.\footnote{The bias correction with odd $T$ is similar.
See \citet[page 999]{DhaeneJochmans15} for details.
}
For $\xi_i = \mu_i$, $\gamma_{k,i}$, or $\rho_{k,i}$, we obtain the estimators $\hat f_{\hat \xi,(1)}(x)$ and $\hat f_{\hat \xi,(2)}(x)$ of $f_\xi(x)$ based on two half-panel data $\{\{y_{it}\}_{t=1}^{T/2}\}_{i=1}^N$ and $\{\{y_{it}\}_{t=T/2+1}^T \}_{i=1}^N$, respectively.
The HPJ bias-corrected estimator is $\hat f_{\hat \xi}^H(x) \coloneqq \hat f_{\hat \xi}(x) - (\bar f_{\hat \xi}(x) - \hat f_{\hat \xi}(x) )$ where $\bar f_{\hat \xi}(x) \coloneqq [\hat f_{\hat \xi,(1)}(x)+ \hat f_{\hat \xi,(2)}(x)]/2$.
The term $\bar f_{\hat \xi}(x) - \hat f_{\hat \xi}(x)$ estimates the bias in the original estimator $\hat f_{\hat \xi}(x)$.
Importantly, the bandwidths for computing $\hat f_{\hat \xi,(1)}(x)$ and $\hat f_{\hat \xi,(2)}(x)$ must be the same as that for the original estimator $\hat f_{\hat \xi}(x)$ to reduce the biases.
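This construction can be sketched as follows (a minimal sketch with illustrative helper names and a Gaussian kernel of our choosing; not the paper's code):

```python
import numpy as np

def kde(xi_hat, x, h):
    """Kernel density estimate at points x (Gaussian kernel, our choice)."""
    u = (x - xi_hat[:, None]) / h                    # shape (N, len(x))
    return (np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)).mean(axis=0) / h

def hpj_density(y, x, h, stat=lambda s: s.var(axis=1)):
    """HPJ bias-corrected density of a unit-level statistic xi_i = stat(y_i)."""
    T = y.shape[1]
    f_full = kde(stat(y), x, h)                      # hat f from the full panel
    f_half1 = kde(stat(y[:, : T // 2]), x, h)        # first half-panel
    f_half2 = kde(stat(y[:, T // 2 :]), x, h)        # second half-panel
    # hat f^H = hat f - (bar f - hat f) = 2 hat f - (f_half1 + f_half2) / 2,
    # with the SAME bandwidth h in all three estimates
    return 2.0 * f_full - 0.5 * (f_half1 + f_half2)
```

Using the same bandwidth $h$ for the full-panel and half-panel estimates, as emphasized above, is what makes the subtraction remove the $O(1/T)$ bias terms.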
The next theorem formally shows that the HPJ bias-corrected estimator $\hat f_{\hat \xi}^H(x)$ suffers from neither the incidental parameter bias nor the second-order nonlinearity bias, and that the correction does not alter the asymptotic variance of the estimator.
Similar results are observed by \citet{GalvaoKato14}, \citet{DhaeneJochmans15}, and \citet{Okui2017} for other situations.
\begin{theorem}\label{thm-dens-HPJ}
Suppose that the assumptions in Theorem \ref{thm-dens} hold.
When $N,T \to \infty$ and $h \to 0$ with $Nh \to \infty$, $Nh^5 \to C \in [0, \infty)$, $Th^2 \to \infty$, and $N / (T^3 h^5) \to 0$, it holds that:
\begin{align*}
\sqrt{Nh} \left( \hat f_{\hat \xi}^H(x) - f_{\xi} (x) - h^2 \frac{\kappa_1 f_{\xi}^{''}(x)}{2} \right) \stackrel{d}{\longrightarrow} \mathcal{N} \big( 0, \kappa_2 f_{\xi}(x) \big).
\end{align*}
\end{theorem}
Note that HPJ bias correction does not weaken the relative magnitude condition of $N$, $T$, and $h$ for asymptotic normality in Theorem \ref{thm-dens}; that is, $N / (T^3 h^5) \to 0$.
This is because HPJ bias correction cannot eliminate higher-order nonlinearity biases.
This result is in stark contrast to the existing literature where bias correction typically weakens the condition on the relative magnitudes of $N$ and $T$ \citep[see, e.g.,][]{DhaeneJochmans15}.
\begin{remark}
We can also consider higher-order jackknives to eliminate higher-order biases, as in \citet{DhaeneJochmans15} and \citet{Okui2017}.
For example, we can consider the third-order jackknife (TOJ) in the same manner as in \citet{Okui2017}, which is slightly different from the original TOJ in \citet{DhaeneJochmans15} because the two studies treat different higher-order biases.
We investigate its performance by Monte Carlo simulations below, which show that the TOJ can work better than the HPJ, especially when the naive estimator without bias correction exhibits a large bias.
Hence, for practical situations, we recommend the adoption of higher-order jackknives as well as HPJ bias correction.
\end{remark}
\subsection{Confidence interval and bandwidth selection for density estimation}
This section considers confidence interval estimation and the selection of optimal bandwidth for density estimation based on the results in the previous sections.
\paragraph{Confidence interval estimation.}
We propose to apply the robust bias-corrected procedure in \citet{calonico2018effect} for confidence interval estimation.
It allows us to construct a valid $1 - \alpha$ confidence interval of $f_{\xi}(x)$ while correcting the kernel-smoothing bias $\mathcal{B}_{\xi}(x) \coloneqq h^2 \kappa_1 f_{\xi}^{''}(x) / 2$.
The robust bias-corrected procedure based on the naive estimator $\hat f_{\hat \xi}(x)$ is almost the same as the original procedure in \citet{calonico2018effect}.
We first note that the kernel-smoothing bias can be estimated by $\hat{\mathcal{B}}_{\hat \xi}(x) \coloneqq h^2 \kappa_1 \hat f_{\hat \xi}''(x) / 2$ where $\hat f_{\hat \xi}''(x) \coloneqq (Nb^3)^{-1} \sum_{i=1}^{N} L''((x - \hat \xi_i)/b)$ with a kernel function $L$ and a bandwidth $b \to 0$.
Then, the estimator that corrects the kernel-smoothing bias is:
\begin{align*}
\hat f_{\hat \xi}(x) - \hat{\mathcal{B}}_{\hat \xi}(x)
= \frac{1}{Nh} \sum_{i=1}^{N} \left( K\left(\frac{x - \hat \xi_i}{h}\right) - \frac{\kappa_1 \lambda^3}{2} L''\left(\frac{x - \hat \xi_i}{b}\right) \right)
\eqqcolon \frac{1}{Nh} \sum_{i=1}^{N} \mathcal{K}_i(x),
\end{align*}
where $\lambda \coloneqq h / b$ and $\lambda = 1$ should be set following the suggestion in \citet{calonico2018effect}.
The robust bias-corrected $t$ statistic is given by:
\begin{align*}
T_{rbc}(x) \coloneqq \frac{[\hat f_{\hat \xi}(x) - \hat{\mathcal{B}}_{\hat \xi}(x)] - f_{\xi}(x)}{\hat \sigma_{rbc}(x)},
\end{align*}
where $\hat \sigma_{rbc}^2(x)$ is the estimator of the nonasymptotic variance of $\hat f_{\hat \xi}(x) - \hat{\mathcal{B}}_{\hat \xi}(x)$:
\begin{align*}
\hat \sigma_{rbc}^2(x) \coloneqq \frac{1}{Nh^2} \left[ \frac{1}{N} \sum_{i=1}^{N} \mathcal{K}_i^2(x) - \left( \frac{1}{N} \sum_{i=1}^{N} \mathcal{K}_i(x) \right)^2 \right].
\end{align*}
It holds that $T_{rbc}(x) \stackrel{d}{\longrightarrow} \mathcal{N}(0, 1)$ under conditions similar to those in Theorem \ref{thm-dens}, so that we can construct the $1 - \alpha$ confidence interval of $f_{\xi}(x)$ in the usual manner.
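A minimal sketch of this computation, assuming Gaussian kernels $K = L$ (so $\kappa_1 = 1$) and $\lambda = h/b = 1$ as suggested; the function names and setup are ours, not the paper's implementation:

```python
import numpy as np

def phi(u):
    """Standard normal density, used here for both K and L (so kappa_1 = 1)."""
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def rbc_t(xi_hat, x, h, f_true):
    """Robust bias-corrected t statistic at a point x, with lambda = h/b = 1."""
    N = xi_hat.size
    u = (x - xi_hat) / h
    # curly K_i(x) = K(u_i) - (kappa_1 lambda^3 / 2) L''(u_i), with L''(u) = (u^2 - 1) phi(u);
    # the subtracted term estimates the smoothing bias B(x) = h^2 kappa_1 f''(x) / 2
    Ki = phi(u) - 0.5 * (u ** 2 - 1.0) * phi(u)
    f_rbc = Ki.mean() / h                                        # hat f(x) - hat B(x)
    var_rbc = (np.mean(Ki ** 2) - np.mean(Ki) ** 2) / (N * h ** 2)
    return (f_rbc - f_true) / np.sqrt(var_rbc)
```

Since the statistic is approximately standard normal, a $1 - \alpha$ confidence interval follows in the usual manner from the bias-corrected point estimate and $\hat \sigma_{rbc}(x)$.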
The robust bias-corrected procedure based on the split-panel jackknife bias-corrected estimator demands some modifications.
To see this, note that the HPJ bias-corrected estimator that also reduces the kernel-smoothing bias can be written as follows:
\begin{align*}
\hat f_{\hat \xi}^H(x) - \hat{\mathcal{B}}_{\hat \xi}(x)
&= \left[ 2 \hat f_{\hat \xi}(x) - \frac{1}{2} \left( \hat f_{\hat \xi,(1)}(x) + \hat f_{\hat \xi,(2)}(x) \right) \right] - \hat{\mathcal{B}}_{\hat \xi}(x)\\
&= \frac{1}{Nh} \sum_{i=1}^N \left[ 2 K \left( \frac{x-\hat \xi_i}{h} \right) - \frac{1}{2} \left( K \left( \frac{x - \hat \xi_i^{(1)}}{h} \right) + K \left( \frac{x - \hat \xi_i^{(2)}}{h} \right)\right) - \frac{\kappa_1 \lambda^3}{2} L''\left(\frac{x - \hat \xi_i}{b}\right) \right]\\
&\eqqcolon \frac{1}{Nh} \sum_{i=1}^N \mathcal{K}_i^H(x),
\end{align*}
where $\hat \xi_i^{(1)}$ and $\hat \xi_i^{(2)}$ are the estimators based on the half-series $\{y_{it}\}_{t=1}^{T/2}$ and $\{y_{it}\}_{t=T/2+1}^T$, respectively.
Then, the nonasymptotic variance of $\hat f_{\hat \xi}^H(x) - \hat{\mathcal{B}}_{\hat \xi}(x)$ can be estimated by:
\begin{align*}
(\hat \sigma_{rbc}^H(x))^2 \coloneqq \frac{1}{Nh^2} \left[ \frac{1}{N} \sum_{i=1}^{N} (\mathcal{K}_i^H(x))^2- \left( \frac{1}{N} \sum_{i=1}^{N} \mathcal{K}_i^H(x) \right)^2 \right].
\end{align*}
As a result, the robust bias-corrected $t$ statistic based on the HPJ estimator is:
\begin{align*}
\hat T_{rbc}^H(x) \coloneqq \frac{[\hat f_{\hat \xi}^H(x) - \hat{\mathcal{B}}_{\hat \xi}(x)] - f_{\xi}(x)}{\hat \sigma_{rbc}^H(x)}.
\end{align*}
Note that $\hat \sigma_{rbc}^H(x)$ is different from $\hat \sigma_{rbc}(x)$ above because the former also captures the finite-sample variability of HPJ bias correction.
We can construct the $1 - \alpha$ confidence interval of $f_{\xi}(x)$ based on $\hat T_{rbc}^H(x)$ in the usual manner.
We can also consider similar robust bias-corrected procedures based on higher-order split-panel jackknife bias correction.
\begin{remark}
Undersmoothing is often used to construct confidence intervals for the kernel density estimator.
However, it is not desirable in our context.
Undersmoothing means using a bandwidth that converges to zero faster than $N^{-1/5}$ so that the smoothing bias does not appear in the asymptotic distribution.
In our setting, the smaller the bandwidth, the larger the higher-order nonlinearity biases, which in turn call for a stronger assumption on the relative magnitude of $N$ and $T$.
We thus prefer the method based on \citet{calonico2018effect} because we can still use the bandwidth of order $N^{-1/5}$.
Note also that \citet{calonico2018effect} demonstrate that their method performs better than undersmoothing.
\end{remark}
\paragraph{Bandwidth selection.}
We can select the bandwidth $h$ for the density estimation using any standard procedures based on the estimated $\hat \xi_i$.
This is because Theorem \ref{thm-dens-HPJ} shows that the AMSE and asymptotic distribution of the HPJ bias-corrected estimator $\hat f_{\hat \xi}^H(x)$ are identical to those of the infeasible estimator $\hat f_{\xi}(x) = (Nh)^{-1} \sum_{i=1}^N K((x - \xi_i)/h)$.
In our application and Monte Carlo simulations, we apply the coverage error optimal bandwidth selection procedure of \citet{calonico2018effect} because of its desirable properties, as shown in that paper. Furthermore, their bandwidth tends to be larger than the AMSE-minimizing bandwidth and is thus more suitable in our context, because a larger bandwidth makes the nonlinearity biases smaller.
Our Monte Carlo simulations also confirm the appropriate finite-sample properties of the procedure.
\subsection{Asymptotic biases for CDF estimation}
In this section, we consider the smoothed CDF estimator and derive its asymptotic biases.
The CDF $F_{\xi}(x) \coloneqq \Pr(\xi_i \le x)$ can be estimated by integrating the kernel density estimator:
$\hat F_{\hat \xi}(x) = \int^{x}_{-\infty} \hat f_{\hat \xi}(v) dv$.
It is convenient to write this kernel CDF estimator as:
\begin{align*}
\hat F_{\hat \xi} (x) \coloneqq \frac{1}{N} \sum_{i=1}^N \mathbb{K} \left( \frac{x - \hat \xi_i}{h}\right),
\end{align*}
where $x \in \mathbb{R}$ is a fixed point, and $\mathbb{K}:\mathbb{R} \to [0,1]$ is a Borel-measurable CDF (i.e., $\mathbb{K} (a) = \int^{a}_{-\infty} K(v) dv$).
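As an illustration, the Epanechnikov kernel $K(u) = 0.75(1 - u^2) \mathbf{1}\{|u| \le 1\}$ integrates to the closed form $\mathbb{K}(a) = (2 + 3a - a^3)/4$ for $a \in [-1, 1]$, which gives the following sketch of the smoothed CDF estimator (illustrative code, not the paper's implementation):

```python
import numpy as np

def epa_icdf(a):
    """Integrated Epanechnikov kernel: IK(a) = int_{-inf}^{a} K(v) dv."""
    a = np.clip(a, -1.0, 1.0)        # IK = 0 below -1 and IK = 1 above 1
    return 0.25 * (2.0 + 3.0 * a - a ** 3)

def kernel_cdf(xi_hat, x, h):
    """Smoothed CDF estimate: hat F(x) = N^{-1} sum_i IK((x - hat xi_i) / h)."""
    return epa_icdf((x - xi_hat[:, None]) / h).mean(axis=0)
```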
For the CDF estimation, we need the following condition instead of Assumption \ref{as-density-kernel}.
The continuity of the random variables is essential even for the kernel-smoothed CDF estimation.
\begin{assumption}
\label{as-cdf}
The random variables $\mu_i \in \mathbb{R}$, $\gamma_{k,i} \in \mathbb{R}$, and $\rho_{k,i} \in (-1,1)$ are continuously distributed.
The CDFs $F_{\xi}$ with $\xi = \mu$, $\gamma_k$, and $\rho_k$ are three-times boundedly continuously differentiable near $x$.
\end{assumption}
The following theorem shows the presence of asymptotic biases for the kernel CDF estimator.
\begin{theorem}\label{thm-CDF}
Let $x \in \mathbb{R}$ be an interior point in the support of $\xi_i = \mu_i$, $\gamma_{k,i}$, or $\rho_{k,i}$.
Suppose that Assumptions \ref{as-basic}, \ref{as-mixing-c}, \ref{as-w-moment-c}, \ref{as-kernel}, \ref{as-limit}, and \ref{as-cdf} hold.
In addition, if $\xi_i = \rho_{k, i}$, suppose that Assumption \ref{as-rho} also holds.
Suppose that the infinite-order Taylor expansion of $\hat F_{\hat \xi}(x) = N^{-1} \sum_{i=1}^N \mathbb{K}((x - \hat \xi_i) / h)$ at $\xi_i$ holds and that the infinite series of the asymptotic biases below is well defined.
When $N,T \to \infty$ and $h \to 0$ with $Nh^3 \to C \in [0, \infty)$ and $T h^2 \to \infty$, it holds that:
\begin{align*}
\hat F_{\hat \xi}(x) - F_{\xi}(x) = \frac{1}{N} \sum_{i=1}^N \mathbb{K} \left( \frac{x - \xi_i}{h} \right) - F_{\xi}(x) + \frac{B_{\xi, 1}(x)}{T} + \frac{B_{\xi, 2}(x)}{T} + \sum_{j=3}^{\infty} \frac{B_{\xi, j}(x)}{\sqrt{T^j h^{2j - 2}}} + o_p\left( \frac{1}{\sqrt{T^3 h^4}} \right),
\end{align*}
where $B_{\xi, j}(x)$ is a nonrandom bias term that depends on $x$ and that satisfies $B_{\mu, 1}(x) = 0$ for any $x$.
As a result, when $N / (T^3 h^4) \to 0$ also holds, it holds that:
\begin{align*}
\sqrt{N} \left( \hat F_{\hat \xi}(x) - F_{\xi} (x) - \frac{B_{\xi, 1}(x)}{T} - \frac{B_{\xi, 2}(x)}{T} \right) \stackrel{d}{\longrightarrow} \mathcal{N} \big( 0, F_{\xi}(x) [1-F_{\xi}(x)] \big).
\end{align*}
\end{theorem}
The CDF estimator can be rearranged as the sum of the infeasible estimator based on the true $\xi_i$ and the asymptotic biases.
We present the result based on an infinite-order expansion because it yields the best possible condition on the relative magnitudes of $N$ and $T$.
This requires the validity of the infinite-order expansion, in particular the summability of the infinite series, which holds under technical regularity conditions, as in the case of the density estimation in Theorem \ref{thm-dens}.
The biases of the forms $B_{\xi, 1}(x) / T$ and $B_{\xi, 2}(x) / T$ are the incidental parameter bias and the second-order nonlinearity bias, respectively.
Note that $\hat F_{\hat \mu} (x)$ does not exhibit the incidental parameter bias as in the case of the density estimation.
We also note that the standard kernel-smoothing bias of order $O(h^2)$ does not appear in the asymptotic distribution because it is asymptotically negligible under $Nh^3 \to C$
(see Lemma \ref{lem-CDF} in Appendix \ref{sec-lemma}).
Consistency and asymptotic normality require the conditions $1 / (T h^2) \to 0$ and $N / (T^3 h^4) \to 0$, respectively, which asymptotically eliminate the higher-order biases.
When using the standard bandwidth $h \asymp N^{-1/3}$ in the CDF estimation with second-order kernels, the conditions $1 / (T h^2) \to 0$ and $N / (T^3 h^4) \to 0$ are the same as $N^2 / T^3 \to 0$ and $N^7 / T^9 \to 0$, respectively, which combine into the single condition $N^7 / T^9 \to 0$.
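The consolidation can be verified directly: with $h \asymp N^{-1/3}$,
\begin{align*}
\frac{1}{Th^2} \asymp \frac{N^{2/3}}{T} \to 0 \iff \frac{N^2}{T^3} \to 0,
\qquad
\frac{N}{T^3 h^4} \asymp \frac{N^{7/3}}{T^3} \to 0 \iff \frac{N^7}{T^9} \to 0,
\end{align*}
and the second condition implies the first because $N^2/T^3 = (N^7/T^9)^{2/7} \, T^{-3/7}$.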
Note that we can weaken the relative magnitude condition by using higher-order kernels, which leads to a larger bandwidth, as in the density estimation.
The relative magnitude conditions for the kernel CDF estimation with second-order kernels are stronger than those for the empirical CDF estimation in \citet{JochmansWeidner2018} and \citet{Okui2017}.
The empirical CDF estimation is also easier to implement in practice.
Hence, one should probably employ empirical CDF estimation in practice, and we do not explore the split-panel jackknife, confidence interval estimation, or bandwidth selection for the kernel CDF estimation (although they are feasible).
Nonetheless, the asymptotic biases for the kernel CDF estimation in Theorem \ref{thm-CDF} are new in the literature, and they would be interesting in their own right.
\begin{remark}
The bias of order $O(1/T)$ for $\hat{F}_{\hat \mu} (x)$ corresponds to the result in \citet{JochmansWeidner2018}.
They derive the asymptotic bias of the empirical distribution under Gaussian errors.
Suppose that $\hat \mu_i \sim \mathcal{N}( \mu_i, \sigma_i^2 /T)$ as in \citet{JochmansWeidner2018}.
Note that $B_{\mu, 1} (x) =0$.
The formula for $B_{\mu ,2} (x)$ is available in the proof of Theorem \ref{thm-CDF} and becomes $0.5 \cdot \partial ( E(\sigma_i^2 | \mu_i = x) f_{\mu} (x) ) / \partial x $ in this case.\footnote{To derive this result, note that integration by parts yields $\int s K'(s) ds = [sK(s)]_{-\infty}^{\infty} - \int K(s) ds = -1$ for kernels with $sK(s) \to 0$ as $|s| \to \infty$.}
It is identical to the bias formula in \citet{JochmansWeidner2018}.
Note that the bias of order $O(1/T)$ for $\hat{F}_{\hat \gamma_{k}} (x)$ and $\hat{F}_{\hat \rho_{k}} (x)$ includes $B_{\gamma_k, 1}(x)$ and $B_{\rho_k, 1}(x)$, respectively, which do not appear in \citet{JochmansWeidner2018}.
\end{remark}
\begin{remark}
While we obtain the same bias formula of order $O(1/T)$ for the CDF estimator for $\hat \mu_i$ as that in \citet{JochmansWeidner2018}, it is still not clear whether the bias formulas including higher-order terms in the two papers correspond to each other.
The empirical CDF can be regarded as the kernel CDF by letting $h\to 0$ in the given sample (i.e., when $h \to 0$ while keeping $N$ and $T$ fixed).
However, the higher-order biases of our kernel CDF are derived under the joint asymptotics (i.e., $N, T \to \infty$ and $h \to 0$) and explode as $h\to 0$, so that it is not trivial how those higher-order asymptotic biases behave as $h\to 0$ while keeping $N$ and $T$ fixed (or as $N, T \to \infty$ after $h \to 0$).
Therefore, although we obtain the same bias formulas of order $O(1/T)$, we hesitate to conclude definitively that our bias formula, including higher-order terms, corresponds exactly to that in \citet{JochmansWeidner2018}.
\end{remark}
\section{Empirical application} \label{sec-application}
We apply our procedure to panel data on prices of items in US cities.
Our procedure allows us to examine the heterogeneous properties of the deviations of prices from the LOP across items and cities, and the difference in the degree of heterogeneity between goods and services.
Many empirical studies examine the heterogeneous properties of the level and variance of price deviations and the speed of price adjustment toward the long-run LOP deviation (see \citealp{AndersonVanWincoop04} for a review).
For example, \cite{EngelRogers01}, \cite{ParsleyWei01}, and \cite{CruciniShintaniTsuruga15} examine such heterogeneous properties and find that the LOP deviation dynamics are significantly heterogeneous across items and cities based on regression models.
Our investigation below complements such empirical analyses by using our model-free procedure, as it provides visual information concerning the degree of heterogeneity.
We estimate the densities of the mean $\mu_i$, variance $\gamma_{0,i}$, and first-order autocorrelation $\rho_{1,i}$.
We use the Epanechnikov kernel with the coverage error optimal bandwidth in \citet{calonico2018effect}.\footnote{We also observed similar results with different kernels and different bandwidths.}
The codes to compute the confidence intervals and the optimal bandwidths are developed based on the {\ttfamily nprobust} package for {\ttfamily R} \citep{nprobust}.
\paragraph{Data.}
We use data from the American Chamber of Commerce Researchers Association Cost of Living Index produced by the Council of Community and Economic Research.\footnote{Mototsugu Shintani kindly provided us with the data set ready for analysis.}
The same data set is used by \cite{ParsleyWei96}, \cite{YazganYilmazkuday11}, \cite{CruciniShintaniTsuruga15}, \cite{LeeOkuiShintani13}, and \citet{Okui2017}.
The data set contains quarterly price series of 48 consumer price index items (goods and services) for 52 US cities from 1990Q1 to 2007Q4.\footnote{While the original data source contains price information for more items in additional cities, we restrict the observations to obtain a balanced panel data set, as in \cite{CruciniShintaniTsuruga15}.}
The categorization of goods and services can be found in \citet[Table 2]{Okui2017}.
We define the LOP deviation for item $k$ in city $i$ at time $t$ as $y_{i k t}=\ln P_{i k t}-\ln P_{0kt}$, where $P_{ikt}$ is the price of item $k$ in city $i$ at time $t$ and $P_{0kt}$ is that for the benchmark city of Albuquerque, NM.
We regard each item--city pair as a cross-sectional unit, such that we focus on the degree of heterogeneity of the LOP deviations across item--city pairs.
The number of cross-sectional units is $N = 48 \times (52 - 1) = 2448$ and the length of the time series is $T = 18 \times 4 = 72$.
\paragraph{Results.}
Figure \ref{fig:density} depicts the density estimates for $\mu_i$, $\gamma_{0,i}$, and $\rho_{1,i}$.
In each panel, the solid black line indicates the density estimates without split-panel jackknife bias correction, the red dashed line shows the HPJ estimates, and the blue dotted line shows the TOJ estimates.
\begin{figure}[!t]
\begin{minipage}{0.33\hsize}
\includegraphics[width=55mm, bb = 0 0 841 595]{dens_mean.pdf}
\end{minipage}
\begin{minipage}{0.33\hsize}
\includegraphics[width=55mm, bb = 0 0 841 595]{dens_acov.pdf}
\end{minipage}
\begin{minipage}{0.33\hsize}
\includegraphics[width=55mm, bb = 0 0 841 595]{dens_acor.pdf}
\end{minipage}
\caption{The densities of $\mu$, $\gamma_0$, and $\rho_1$.
In each figure, the solid black line indicates the estimates without split-panel jackknife correction, the dashed red line shows the HPJ bias-corrected estimates, and the dotted blue line shows the TOJ bias-corrected estimates.}
\label{fig:density}
\end{figure}
\begin{figure}[!t]
\begin{minipage}{0.33\hsize}
\includegraphics[width=55mm, bb = 0 0 841 595]{ci_mean.pdf}
\end{minipage}
\begin{minipage}{0.33\hsize}
\includegraphics[width=55mm, bb = 0 0 841 595]{ci_acov.pdf}
\end{minipage}
\begin{minipage}{0.33\hsize}
\includegraphics[width=55mm, bb = 0 0 841 595]{ci_acor.pdf}
\end{minipage}
\caption{The HPJ bias-corrected densities of $\mu$, $\gamma_0$, and $\rho_1$ with 95\% point-wise confidence intervals.}
\label{fig:ci}
\end{figure}
The estimation results with and without bias correction show that the LOP deviation dynamics are significantly heterogeneous across items.
The density estimates without bias correction for $\mu_i$ are similar to those with bias correction.
The results for $\mu_i$ also show that the mode of the heterogeneous long-run LOP deviations is close to zero, with a nearly symmetric, unimodal distribution.
In contrast, the estimates without bias correction for $\gamma_{0,i}$ and $\rho_{1,i}$ are very different from the bias-corrected estimates.
The bias-corrected estimates for $\gamma_{0,i}$ demonstrate larger variances for the LOP deviation dynamics, while the bias-corrected estimates for $\rho_{1,i}$ show more persistent dynamics with a more left-skewed distribution.
These results suggest the severe impact of the incidental parameter biases, which highlights the importance of bias correction methods.
Figure \ref{fig:ci} depicts 95\% point-wise confidence bands based on the HPJ estimates.
The confidence bands are narrow, suggesting that our HPJ estimates are precise and reliable.
Figure \ref{fig:gs} illustrates the HPJ estimates of $\mu_i$, $\gamma_{0,i}$, and $\rho_{1,i}$ for goods and services separately.
The solid black lines are the HPJ estimates for goods, and the dashed red lines are those for services.
The estimated densities and CDFs show that the heterogeneous properties are significantly different between goods and services.
The densities for $\mu_i$ show that the long-run LOP deviation for goods generally tends to be larger than that for services (in an absolute sense).
The estimation results for $\gamma_{0,i}$ and $\rho_{1,i}$ show that the LOP deviation for goods tends to be more volatile but less persistent than that for services.
These results suggest that goods tend to have more volatile processes with faster adjustment speeds toward the nonnegligible long-run LOP deviation.
\begin{figure}[!t]
\begin{minipage}{0.33\hsize}
\includegraphics[width=55mm, bb = 0 0 841 595]{gs_mean.pdf}
\end{minipage}
\begin{minipage}{0.33\hsize}
\includegraphics[width=55mm, bb = 0 0 841 595]{gs_acov.pdf}
\end{minipage}
\begin{minipage}{0.33\hsize}
\includegraphics[width=55mm, bb = 0 0 841 595]{gs_acor.pdf}
\end{minipage}
\caption{The HPJ bias-corrected densities of $\mu$, $\gamma_0$, and $\rho_1$ for goods and services separately. In each figure, the solid black line indicates the HPJ bias-corrected estimates for goods, and the dashed red line shows the HPJ bias-corrected estimates for services.}
\label{fig:gs}
\end{figure}
If we seek to examine the degree of heterogeneity of the LOP deviations across items and cities as in \citet{CruciniShintaniTsuruga15}, our model-free results are informative in their own right.
There are several possible sources of differences in the degree of heterogeneity, including the differences in trade costs across items (e.g., \citealp{AndersonVanWincoop04}) and differences in sale and nonsale prices across goods and services (e.g., \citealp{NakamuraSteinsson08}).
Furthermore, our model-free results also suggest how we should model heterogeneity when implementing structural estimation of price deviations or price changes.
For example, as our procedure demonstrates that the heterogeneous properties of goods and services differ, we should model unobserved heterogeneity differently for goods and services.
\section{Monte Carlo simulations} \label{sec-simulation}
This section presents the results of the Monte Carlo simulations.
We here focus on the density estimation only.
The number of simulation replications is 5,000.
\paragraph{Design.}
We generate the data using the AR(1) process $y_{it}= (1-\phi_i)\varsigma_i + \phi_i y_{i,t-1} + \sqrt{(1-\phi_i^2)\sigma_i^2} u_{it}$ where $u_{it} \sim \mathcal{N}(0,1)$, $y_{i0} \sim \mathcal{N}(\varsigma_i , \sigma_i^2)$, and $u_{i0} \sim \mathcal{N}(0, 1)$.
Note that this design satisfies $\mu_i = \varsigma_i$, $\gamma_{0,i} = \sigma_i^2$, and $\rho_{1,i} = \phi_i$.
We generate the unit-specific random variables $\varsigma_i \sim \mathcal{N}(-1, 1)$, $\phi_i \sim 2 \cdot Beta(2, 4) - 1$, and $\sigma_i^2 \sim 3 \cdot Beta(3, 2)$.
We consider $N = 250, 500, 1000$ and $T = 12, 24, 48, 96$.
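This data-generating process can be sketched as follows (illustrative code, not the paper's replication files):

```python
import numpy as np

def simulate_panel(N, T, rng):
    """Heterogeneous AR(1) panel following the Monte Carlo design."""
    varsigma = rng.normal(-1.0, 1.0, size=N)        # mu_i
    phi = 2.0 * rng.beta(2.0, 4.0, size=N) - 1.0    # rho_{1,i}, in (-1, 1)
    sigma2 = 3.0 * rng.beta(3.0, 2.0, size=N)       # gamma_{0,i}
    y = np.empty((N, T + 1))
    y[:, 0] = rng.normal(varsigma, np.sqrt(sigma2)) # stationary initial condition
    sd = np.sqrt((1.0 - phi ** 2) * sigma2)         # innovation std. dev.
    for t in range(1, T + 1):
        y[:, t] = (1.0 - phi) * varsigma + phi * y[:, t - 1] + sd * rng.normal(size=N)
    return y[:, 1:], varsigma, phi, sigma2
```

Scaling the innovation variance by $1 - \phi_i^2$ makes the stationary variance of $y_{it}$ equal to $\sigma_i^2$, which is what delivers $\gamma_{0,i} = \sigma_i^2$ in this design.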
\paragraph{Estimators.}
We estimate the densities of $\mu_i$, $\gamma_{0,i}$, and $\rho_{1,i}$ at their 20\%, 40\%, 60\%, and 80\% quantiles based on four estimators.
The first is the naive estimator (NE) without split-panel jackknife bias correction.
The second and third are the HPJ and TOJ estimators.
The fourth is the infeasible estimator (IE) based on the true $\mu_i$, $\gamma_{0,i}$, and $\rho_{1,i}$.
For all estimators, we use the Epanechnikov kernel and the coverage error optimal bandwidth in \citet{calonico2018effect}.
\paragraph{Results.}
Tables \ref{table:mean}, \ref{table:acov}, and \ref{table:acor} present the simulation results for the densities of $\mu_i$, $\gamma_{0,i}$, and $\rho_{1,i}$, respectively.
The tables report the true values of the parameters and the bias and standard deviation (std) of each estimator.
They also report the coverage probability (cp) of the 95\% confidence interval computed by the robust bias-corrected procedure based on each estimator.
Table \ref{table:band} also reports the mean and the standard deviation of the selected bandwidths for the NE and IE.
Note that, as discussed above, we use the same bandwidth for the HPJ and TOJ as for the NE.
The NE exhibits large biases, especially with small $T$.
In particular, the biases of the density estimates for $\gamma_{0,i}$ and $\rho_{1,i}$ are severe because of the incidental parameter biases.
As a result, the coverage probabilities of the NE are much smaller than 0.95.
The performance of the NE improves as $T$ grows, but it is unsatisfactory for several parameters even when $T = 96$.
These results highlight the importance of bias correction even with relatively large $T$.
The performances of the HPJ and TOJ are significantly better than that of the NE.
The HPJ and TOJ perform well especially when the NE exhibits large biases.
The TOJ outperforms the HPJ when the HPJ exhibits relatively large biases, which result from relatively large higher-order nonlinearity biases.
Furthermore, for several parameters, the TOJ performs as well as the IE in terms of bias reduction and coverage probability.
Note that the TOJ may inflate the estimation variability, especially when $T$ is small, but such a cost is inevitable when our goal is to conduct unbiased inferences.
These simulation results demonstrate the severity of the incidental parameter biases and the nonlinearity biases and the success of the split-panel jackknife and the robust bias-corrected inference.
We thus recommend the robust bias-corrected inference based on the split-panel jackknife bias-corrected estimation.
\section{Conclusion} \label{sec-conclusion}
This paper presented nonparametric kernel-smoothing estimation to examine the degree of heterogeneity in panel data.
The kernel density and CDF estimators are consistent and asymptotically normal under the relative magnitude conditions on the cross-sectional size $N$, time-series length $T$, and bandwidth $h$.
Because of the presence of the incidental parameter bias and the nonlinearity biases, the relative magnitude conditions vary with the order of the expansions.
Via infinite-order expansions, we derived the relative magnitude conditions that are suitable for microeconometric applications.
We discussed the split-panel jackknife to correct biases, the construction of confidence intervals, and the selection of bandwidth.
We also illustrated our procedure with an empirical application to price deviations and Monte Carlo simulations.
\section*{Acknowledgments}
The authors greatly appreciate the assistance of Mototsugu Shintani in providing the price panel data.
The authors would also like to thank Stephane Bonhomme, Kazuhiko Hayakawa, Koen Jochmans, Shin Kanaya, Hiroyuki Kasahara, Yoonseok Lee, Oliver Linton, Jun Ma, Yukitoshi Matsushita, Martin Weidner, Yohei Yamamoto, Yu Zhu and the participants of many conferences for helpful discussions and comments. Sebastian Calonico kindly helped us better understand the use of the {\ttfamily nprobust} package.
All remaining errors are our own.
Part of this research was conducted while Okui was at Kyoto University and while Yanagi was at Hitotsubashi University.
Okui acknowledges financial support from the Japan Society for the Promotion of Science (JSPS) under KAKENHI Grant Nos. 25780151, 25285067, 15H03329, and 16K03598.
Yanagi acknowledges the financial support of KAKENHI Grant Nos. 15H06214 and 17K13715.
\begin{landscape}
\begin{table}[!t]
{\scriptsize
\caption{Monte Carlo simulation results for $\mu$}
\label{table:mean}
\begin{center}
\begin{tabular}{lrrrcrrrcrrrcrrrcrrr}
\hline\hline
\multicolumn{1}{l}{\bfseries }&\multicolumn{3}{c}{\bfseries }&\multicolumn{1}{c}{\bfseries }&\multicolumn{3}{c}{\bfseries NE}&\multicolumn{1}{c}{\bfseries }&\multicolumn{3}{c}{\bfseries HPJ}&\multicolumn{1}{c}{\bfseries }&\multicolumn{3}{c}{\bfseries TOJ}&\multicolumn{1}{c}{\bfseries }&\multicolumn{3}{c}{\bfseries IE}\tabularnewline
\cline{6-8} \cline{10-12} \cline{14-16} \cline{18-20}
\multicolumn{1}{l}{}&\multicolumn{1}{c}{true}&\multicolumn{1}{c}{$N$}&\multicolumn{1}{c}{$T$}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{bias}&\multicolumn{1}{c}{std}&\multicolumn{1}{c}{cp}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{bias}&\multicolumn{1}{c}{std}&\multicolumn{1}{c}{cp}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{bias}&\multicolumn{1}{c}{std}&\multicolumn{1}{c}{cp}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{bias}&\multicolumn{1}{c}{std}&\multicolumn{1}{c}{cp}\tabularnewline
\hline
at $\mu$'s 20\%Q&$0.280$&$ 250$&$12$&&$-0.013$&$0.032$&$0.914$&&$-0.008$&$0.039$&$0.938$&&$-0.007$&$0.051$&$0.945$&&$-0.005$&$0.034$&$0.941$\tabularnewline
&$0.280$&$ 250$&$24$&&$-0.009$&$0.032$&$0.940$&&$-0.006$&$0.038$&$0.950$&&$-0.005$&$0.047$&$0.951$&&$-0.004$&$0.034$&$0.946$\tabularnewline
&$0.280$&$ 250$&$48$&&$-0.007$&$0.033$&$0.936$&&$-0.006$&$0.037$&$0.942$&&$-0.006$&$0.043$&$0.947$&&$-0.004$&$0.034$&$0.946$\tabularnewline
&$0.280$&$ 250$&$96$&&$-0.007$&$0.034$&$0.938$&&$-0.006$&$0.036$&$0.938$&&$-0.006$&$0.040$&$0.942$&&$-0.005$&$0.034$&$0.943$\tabularnewline
&$0.280$&$ 500$&$12$&&$-0.011$&$0.024$&$0.916$&&$-0.007$&$0.030$&$0.940$&&$-0.006$&$0.042$&$0.948$&&$-0.004$&$0.026$&$0.949$\tabularnewline
&$0.280$&$ 500$&$24$&&$-0.008$&$0.024$&$0.938$&&$-0.005$&$0.029$&$0.947$&&$-0.005$&$0.037$&$0.951$&&$-0.003$&$0.026$&$0.950$\tabularnewline
&$0.280$&$ 500$&$48$&&$-0.006$&$0.026$&$0.931$&&$-0.005$&$0.030$&$0.936$&&$-0.005$&$0.036$&$0.938$&&$-0.004$&$0.026$&$0.940$\tabularnewline
&$0.280$&$ 500$&$96$&&$-0.005$&$0.025$&$0.944$&&$-0.004$&$0.028$&$0.946$&&$-0.004$&$0.031$&$0.946$&&$-0.004$&$0.026$&$0.944$\tabularnewline
&$0.280$&$1000$&$12$&&$-0.011$&$0.019$&$0.900$&&$-0.006$&$0.024$&$0.936$&&$-0.005$&$0.034$&$0.945$&&$-0.002$&$0.019$&$0.952$\tabularnewline
&$0.280$&$1000$&$24$&&$-0.007$&$0.019$&$0.933$&&$-0.004$&$0.023$&$0.948$&&$-0.003$&$0.030$&$0.953$&&$-0.003$&$0.020$&$0.948$\tabularnewline
&$0.280$&$1000$&$48$&&$-0.005$&$0.019$&$0.945$&&$-0.003$&$0.022$&$0.953$&&$-0.003$&$0.027$&$0.953$&&$-0.003$&$0.019$&$0.950$\tabularnewline
&$0.280$&$1000$&$96$&&$-0.004$&$0.019$&$0.947$&&$-0.003$&$0.021$&$0.950$&&$-0.003$&$0.024$&$0.952$&&$-0.003$&$0.019$&$0.949$\tabularnewline
\hline
at $\mu$'s 40\%Q&$0.386$&$ 250$&$12$&&$-0.033$&$0.035$&$0.743$&&$-0.020$&$0.042$&$0.848$&&$-0.014$&$0.054$&$0.890$&&$-0.008$&$0.038$&$0.899$\tabularnewline
&$0.386$&$ 250$&$24$&&$-0.022$&$0.036$&$0.830$&&$-0.013$&$0.041$&$0.883$&&$-0.009$&$0.050$&$0.902$&&$-0.008$&$0.038$&$0.906$\tabularnewline
&$0.386$&$ 250$&$48$&&$-0.016$&$0.036$&$0.865$&&$-0.010$&$0.040$&$0.897$&&$-0.008$&$0.046$&$0.910$&&$-0.007$&$0.037$&$0.909$\tabularnewline
&$0.386$&$ 250$&$96$&&$-0.013$&$0.037$&$0.888$&&$-0.009$&$0.039$&$0.901$&&$-0.008$&$0.043$&$0.908$&&$-0.008$&$0.037$&$0.905$\tabularnewline
&$0.386$&$ 500$&$12$&&$-0.031$&$0.027$&$0.693$&&$-0.018$&$0.034$&$0.847$&&$-0.011$&$0.045$&$0.899$&&$-0.006$&$0.029$&$0.910$\tabularnewline
&$0.386$&$ 500$&$24$&&$-0.021$&$0.027$&$0.811$&&$-0.012$&$0.031$&$0.892$&&$-0.008$&$0.040$&$0.913$&&$-0.006$&$0.028$&$0.916$\tabularnewline
&$0.386$&$ 500$&$48$&&$-0.015$&$0.028$&$0.856$&&$-0.009$&$0.031$&$0.895$&&$-0.007$&$0.036$&$0.912$&&$-0.006$&$0.029$&$0.907$\tabularnewline
&$0.386$&$ 500$&$96$&&$-0.011$&$0.028$&$0.884$&&$-0.008$&$0.030$&$0.904$&&$-0.007$&$0.033$&$0.911$&&$-0.007$&$0.028$&$0.914$\tabularnewline
&$0.386$&$1000$&$12$&&$-0.030$&$0.020$&$0.620$&&$-0.016$&$0.026$&$0.850$&&$-0.010$&$0.036$&$0.911$&&$-0.005$&$0.022$&$0.917$\tabularnewline
&$0.386$&$1000$&$24$&&$-0.020$&$0.021$&$0.781$&&$-0.010$&$0.025$&$0.888$&&$-0.005$&$0.033$&$0.921$&&$-0.004$&$0.022$&$0.911$\tabularnewline
&$0.386$&$1000$&$48$&&$-0.013$&$0.022$&$0.847$&&$-0.007$&$0.025$&$0.905$&&$-0.005$&$0.030$&$0.921$&&$-0.004$&$0.022$&$0.912$\tabularnewline
&$0.386$&$1000$&$96$&&$-0.010$&$0.022$&$0.879$&&$-0.006$&$0.023$&$0.910$&&$-0.005$&$0.026$&$0.918$&&$-0.005$&$0.022$&$0.911$\tabularnewline
\hline
at $\mu$'s 60\%Q&$0.386$&$ 250$&$12$&&$-0.033$&$0.035$&$0.752$&&$-0.019$&$0.043$&$0.854$&&$-0.014$&$0.055$&$0.888$&&$-0.007$&$0.037$&$0.909$\tabularnewline
&$0.386$&$ 250$&$24$&&$-0.023$&$0.036$&$0.820$&&$-0.014$&$0.042$&$0.878$&&$-0.010$&$0.051$&$0.904$&&$-0.008$&$0.038$&$0.907$\tabularnewline
&$0.386$&$ 250$&$48$&&$-0.016$&$0.036$&$0.870$&&$-0.010$&$0.039$&$0.900$&&$-0.008$&$0.045$&$0.912$&&$-0.008$&$0.037$&$0.907$\tabularnewline
&$0.386$&$ 250$&$96$&&$-0.012$&$0.037$&$0.887$&&$-0.009$&$0.040$&$0.900$&&$-0.007$&$0.043$&$0.907$&&$-0.008$&$0.037$&$0.912$\tabularnewline
&$0.386$&$ 500$&$12$&&$-0.031$&$0.027$&$0.691$&&$-0.018$&$0.033$&$0.849$&&$-0.012$&$0.044$&$0.898$&&$-0.006$&$0.029$&$0.911$\tabularnewline
&$0.386$&$ 500$&$24$&&$-0.022$&$0.027$&$0.809$&&$-0.012$&$0.032$&$0.883$&&$-0.008$&$0.040$&$0.905$&&$-0.006$&$0.029$&$0.904$\tabularnewline
&$0.386$&$ 500$&$48$&&$-0.015$&$0.028$&$0.858$&&$-0.008$&$0.032$&$0.893$&&$-0.006$&$0.037$&$0.908$&&$-0.006$&$0.029$&$0.902$\tabularnewline
&$0.386$&$ 500$&$96$&&$-0.010$&$0.029$&$0.888$&&$-0.006$&$0.031$&$0.904$&&$-0.005$&$0.034$&$0.914$&&$-0.005$&$0.029$&$0.914$\tabularnewline
&$0.386$&$1000$&$12$&&$-0.030$&$0.020$&$0.628$&&$-0.016$&$0.026$&$0.851$&&$-0.009$&$0.036$&$0.912$&&$-0.005$&$0.022$&$0.918$\tabularnewline
&$0.386$&$1000$&$24$&&$-0.020$&$0.021$&$0.777$&&$-0.010$&$0.025$&$0.887$&&$-0.006$&$0.033$&$0.919$&&$-0.004$&$0.022$&$0.916$\tabularnewline
&$0.386$&$1000$&$48$&&$-0.013$&$0.022$&$0.855$&&$-0.007$&$0.024$&$0.907$&&$-0.005$&$0.030$&$0.920$&&$-0.004$&$0.022$&$0.919$\tabularnewline
&$0.386$&$1000$&$96$&&$-0.009$&$0.022$&$0.881$&&$-0.005$&$0.024$&$0.898$&&$-0.004$&$0.027$&$0.911$&&$-0.004$&$0.022$&$0.914$\tabularnewline
\hline
at $\mu$'s 80\%Q&$0.280$&$ 250$&$12$&&$-0.012$&$0.032$&$0.924$&&$-0.008$&$0.039$&$0.939$&&$-0.007$&$0.051$&$0.946$&&$-0.004$&$0.033$&$0.951$\tabularnewline
&$0.280$&$ 250$&$24$&&$-0.009$&$0.033$&$0.932$&&$-0.006$&$0.039$&$0.942$&&$-0.006$&$0.049$&$0.947$&&$-0.004$&$0.034$&$0.942$\tabularnewline
&$0.280$&$ 250$&$48$&&$-0.007$&$0.033$&$0.945$&&$-0.006$&$0.036$&$0.945$&&$-0.006$&$0.043$&$0.943$&&$-0.005$&$0.033$&$0.947$\tabularnewline
&$0.280$&$ 250$&$96$&&$-0.005$&$0.033$&$0.943$&&$-0.005$&$0.036$&$0.946$&&$-0.005$&$0.040$&$0.945$&&$-0.004$&$0.034$&$0.944$\tabularnewline
&$0.280$&$ 500$&$12$&&$-0.011$&$0.024$&$0.915$&&$-0.007$&$0.031$&$0.938$&&$-0.005$&$0.041$&$0.948$&&$-0.003$&$0.026$&$0.946$\tabularnewline
&$0.280$&$ 500$&$24$&&$-0.008$&$0.025$&$0.932$&&$-0.005$&$0.030$&$0.944$&&$-0.005$&$0.038$&$0.947$&&$-0.003$&$0.025$&$0.946$\tabularnewline
&$0.280$&$ 500$&$48$&&$-0.006$&$0.025$&$0.940$&&$-0.004$&$0.028$&$0.946$&&$-0.004$&$0.034$&$0.952$&&$-0.003$&$0.025$&$0.946$\tabularnewline
&$0.280$&$ 500$&$96$&&$-0.004$&$0.026$&$0.946$&&$-0.003$&$0.028$&$0.949$&&$-0.003$&$0.031$&$0.945$&&$-0.003$&$0.026$&$0.948$\tabularnewline
&$0.280$&$1000$&$12$&&$-0.010$&$0.018$&$0.912$&&$-0.006$&$0.023$&$0.948$&&$-0.004$&$0.033$&$0.949$&&$-0.002$&$0.019$&$0.948$\tabularnewline
&$0.280$&$1000$&$24$&&$-0.007$&$0.018$&$0.934$&&$-0.004$&$0.022$&$0.949$&&$-0.004$&$0.030$&$0.949$&&$-0.003$&$0.019$&$0.951$\tabularnewline
&$0.280$&$1000$&$48$&&$-0.005$&$0.019$&$0.942$&&$-0.004$&$0.022$&$0.946$&&$-0.003$&$0.027$&$0.950$&&$-0.003$&$0.020$&$0.944$\tabularnewline
&$0.280$&$1000$&$96$&&$-0.004$&$0.019$&$0.944$&&$-0.003$&$0.021$&$0.950$&&$-0.002$&$0.024$&$0.955$&&$-0.003$&$0.019$&$0.950$\tabularnewline
\hline
\end{tabular}\end{center}}
\end{table}
\begin{table}[!t]
{\scriptsize
\caption{Monte Carlo simulation results for $\gamma_0$}
\label{table:acov}
\begin{center}
\begin{tabular}{lrrrcrrrcrrrcrrrcrrr}
\hline\hline
\multicolumn{1}{l}{\bfseries }&\multicolumn{3}{c}{\bfseries }&\multicolumn{1}{c}{\bfseries }&\multicolumn{3}{c}{\bfseries NE}&\multicolumn{1}{c}{\bfseries }&\multicolumn{3}{c}{\bfseries HPJ}&\multicolumn{1}{c}{\bfseries }&\multicolumn{3}{c}{\bfseries TOJ}&\multicolumn{1}{c}{\bfseries }&\multicolumn{3}{c}{\bfseries IE}\tabularnewline
\cline{6-8} \cline{10-12} \cline{14-16} \cline{18-20}
\multicolumn{1}{l}{}&\multicolumn{1}{c}{true}&\multicolumn{1}{c}{$N$}&\multicolumn{1}{c}{$T$}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{bias}&\multicolumn{1}{c}{std}&\multicolumn{1}{c}{cp}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{bias}&\multicolumn{1}{c}{std}&\multicolumn{1}{c}{cp}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{bias}&\multicolumn{1}{c}{std}&\multicolumn{1}{c}{cp}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{bias}&\multicolumn{1}{c}{std}&\multicolumn{1}{c}{cp}\tabularnewline
\hline
at $\gamma_0$'s 20\%Q&$0.646$&$ 250$&$12$&&$ 0.173$&$0.072$&$0.302$&&$ 0.197$&$0.106$&$0.524$&&$ 0.109$&$0.165$&$0.905$&&$-0.013$&$0.068$&$0.921$\tabularnewline
&$0.646$&$ 250$&$24$&&$ 0.131$&$0.074$&$0.564$&&$ 0.083$&$0.097$&$0.879$&&$ 0.011$&$0.137$&$0.944$&&$-0.014$&$0.067$&$0.920$\tabularnewline
&$0.646$&$ 250$&$48$&&$ 0.078$&$0.071$&$0.830$&&$ 0.025$&$0.084$&$0.943$&&$-0.007$&$0.108$&$0.947$&&$-0.013$&$0.068$&$0.918$\tabularnewline
&$0.646$&$ 250$&$96$&&$ 0.038$&$0.070$&$0.926$&&$ 0.001$&$0.077$&$0.942$&&$-0.010$&$0.090$&$0.942$&&$-0.012$&$0.068$&$0.919$\tabularnewline
&$0.646$&$ 500$&$12$&&$ 0.174$&$0.058$&$0.114$&&$ 0.211$&$0.087$&$0.284$&&$ 0.127$&$0.142$&$0.858$&&$-0.009$&$0.053$&$0.928$\tabularnewline
&$0.646$&$ 500$&$24$&&$ 0.136$&$0.058$&$0.331$&&$ 0.093$&$0.080$&$0.789$&&$ 0.017$&$0.119$&$0.948$&&$-0.010$&$0.053$&$0.928$\tabularnewline
&$0.646$&$ 500$&$48$&&$ 0.083$&$0.057$&$0.702$&&$ 0.030$&$0.070$&$0.933$&&$-0.005$&$0.095$&$0.944$&&$-0.009$&$0.054$&$0.926$\tabularnewline
&$0.646$&$ 500$&$96$&&$ 0.042$&$0.055$&$0.897$&&$ 0.004$&$0.062$&$0.948$&&$-0.009$&$0.075$&$0.939$&&$-0.009$&$0.053$&$0.928$\tabularnewline
&$0.646$&$1000$&$12$&&$ 0.176$&$0.046$&$0.014$&&$ 0.223$&$0.071$&$0.091$&&$ 0.145$&$0.120$&$0.777$&&$-0.007$&$0.041$&$0.932$\tabularnewline
&$0.646$&$1000$&$24$&&$ 0.140$&$0.045$&$0.102$&&$ 0.099$&$0.064$&$0.669$&&$ 0.019$&$0.102$&$0.949$&&$-0.007$&$0.042$&$0.928$\tabularnewline
&$0.646$&$1000$&$48$&&$ 0.085$&$0.045$&$0.512$&&$ 0.032$&$0.057$&$0.922$&&$-0.004$&$0.081$&$0.946$&&$-0.007$&$0.042$&$0.933$\tabularnewline
&$0.646$&$1000$&$96$&&$ 0.046$&$0.043$&$0.834$&&$ 0.007$&$0.050$&$0.950$&&$-0.007$&$0.063$&$0.945$&&$-0.007$&$0.040$&$0.939$\tabularnewline
\hline
at $\gamma_0$'s 40\%Q&$0.701$&$ 250$&$12$&&$-0.094$&$0.057$&$0.558$&&$ 0.008$&$0.087$&$0.917$&&$ 0.030$&$0.142$&$0.915$&&$-0.010$&$0.068$&$0.929$\tabularnewline
&$0.701$&$ 250$&$24$&&$-0.026$&$0.061$&$0.888$&&$ 0.027$&$0.089$&$0.926$&&$ 0.021$&$0.134$&$0.927$&&$-0.011$&$0.070$&$0.918$\tabularnewline
&$0.701$&$ 250$&$48$&&$-0.003$&$0.064$&$0.933$&&$ 0.015$&$0.085$&$0.935$&&$ 0.002$&$0.118$&$0.934$&&$-0.010$&$0.068$&$0.933$\tabularnewline
&$0.701$&$ 250$&$96$&&$ 0.000$&$0.067$&$0.929$&&$ 0.002$&$0.081$&$0.926$&&$-0.005$&$0.101$&$0.927$&&$-0.012$&$0.070$&$0.918$\tabularnewline
&$0.701$&$ 500$&$12$&&$-0.097$&$0.047$&$0.382$&&$ 0.014$&$0.071$&$0.918$&&$ 0.043$&$0.118$&$0.903$&&$-0.008$&$0.052$&$0.931$\tabularnewline
&$0.701$&$ 500$&$24$&&$-0.024$&$0.047$&$0.877$&&$ 0.034$&$0.070$&$0.907$&&$ 0.029$&$0.108$&$0.929$&&$-0.008$&$0.054$&$0.924$\tabularnewline
&$0.701$&$ 500$&$48$&&$ 0.000$&$0.050$&$0.941$&&$ 0.021$&$0.066$&$0.936$&&$ 0.007$&$0.094$&$0.942$&&$-0.008$&$0.053$&$0.928$\tabularnewline
&$0.701$&$ 500$&$96$&&$ 0.001$&$0.052$&$0.930$&&$ 0.003$&$0.064$&$0.931$&&$-0.005$&$0.081$&$0.937$&&$-0.010$&$0.053$&$0.926$\tabularnewline
&$0.701$&$1000$&$12$&&$-0.098$&$0.037$&$0.224$&&$ 0.021$&$0.058$&$0.913$&&$ 0.059$&$0.102$&$0.885$&&$-0.006$&$0.040$&$0.935$\tabularnewline
&$0.701$&$1000$&$24$&&$-0.025$&$0.036$&$0.847$&&$ 0.035$&$0.055$&$0.883$&&$ 0.032$&$0.086$&$0.929$&&$-0.007$&$0.040$&$0.927$\tabularnewline
&$0.701$&$1000$&$48$&&$ 0.000$&$0.038$&$0.936$&&$ 0.020$&$0.053$&$0.930$&&$ 0.006$&$0.076$&$0.940$&&$-0.008$&$0.041$&$0.929$\tabularnewline
&$0.701$&$1000$&$96$&&$ 0.003$&$0.039$&$0.941$&&$ 0.006$&$0.049$&$0.943$&&$-0.004$&$0.066$&$0.941$&&$-0.006$&$0.040$&$0.928$\tabularnewline
\hline
at $\gamma_0$'s 60\%Q&$0.623$&$ 250$&$12$&&$-0.221$&$0.062$&$0.064$&&$-0.114$&$0.102$&$0.795$&&$-0.054$&$0.187$&$0.938$&&$-0.011$&$0.068$&$0.950$\tabularnewline
&$0.623$&$ 250$&$24$&&$-0.134$&$0.062$&$0.471$&&$-0.054$&$0.094$&$0.919$&&$-0.026$&$0.164$&$0.950$&&$-0.011$&$0.068$&$0.946$\tabularnewline
&$0.623$&$ 250$&$48$&&$-0.072$&$0.063$&$0.814$&&$-0.025$&$0.088$&$0.943$&&$-0.019$&$0.141$&$0.952$&&$-0.010$&$0.070$&$0.942$\tabularnewline
&$0.623$&$ 250$&$96$&&$-0.042$&$0.066$&$0.904$&&$-0.018$&$0.085$&$0.945$&&$-0.021$&$0.123$&$0.951$&&$-0.012$&$0.068$&$0.947$\tabularnewline
&$0.623$&$ 500$&$12$&&$-0.220$&$0.050$&$0.005$&&$-0.110$&$0.085$&$0.733$&&$-0.044$&$0.159$&$0.933$&&$-0.009$&$0.052$&$0.947$\tabularnewline
&$0.623$&$ 500$&$24$&&$-0.131$&$0.050$&$0.280$&&$-0.046$&$0.079$&$0.912$&&$-0.011$&$0.143$&$0.952$&&$-0.008$&$0.052$&$0.950$\tabularnewline
&$0.623$&$ 500$&$48$&&$-0.072$&$0.049$&$0.740$&&$-0.020$&$0.070$&$0.949$&&$-0.011$&$0.120$&$0.952$&&$-0.007$&$0.051$&$0.950$\tabularnewline
&$0.623$&$ 500$&$96$&&$-0.040$&$0.050$&$0.891$&&$-0.014$&$0.067$&$0.948$&&$-0.015$&$0.102$&$0.949$&&$-0.010$&$0.051$&$0.952$\tabularnewline
&$0.623$&$1000$&$12$&&$-0.222$&$0.039$&$0.000$&&$-0.111$&$0.067$&$0.614$&&$-0.042$&$0.125$&$0.934$&&$-0.006$&$0.039$&$0.953$\tabularnewline
&$0.623$&$1000$&$24$&&$-0.130$&$0.040$&$0.092$&&$-0.040$&$0.065$&$0.914$&&$-0.001$&$0.120$&$0.949$&&$-0.006$&$0.039$&$0.951$\tabularnewline
&$0.623$&$1000$&$48$&&$-0.071$&$0.038$&$0.598$&&$-0.015$&$0.057$&$0.952$&&$-0.004$&$0.100$&$0.955$&&$-0.006$&$0.039$&$0.950$\tabularnewline
&$0.623$&$1000$&$96$&&$-0.037$&$0.038$&$0.861$&&$-0.009$&$0.054$&$0.947$&&$-0.010$&$0.087$&$0.944$&&$-0.006$&$0.040$&$0.951$\tabularnewline
\hline
at $\gamma_0$'s 80\%Q&$0.433$&$ 250$&$12$&&$-0.205$&$0.045$&$0.004$&&$-0.142$&$0.074$&$0.546$&&$-0.103$&$0.135$&$0.885$&&$-0.008$&$0.060$&$0.934$\tabularnewline
&$0.433$&$ 250$&$24$&&$-0.144$&$0.050$&$0.183$&&$-0.086$&$0.079$&$0.817$&&$-0.056$&$0.142$&$0.932$&&$-0.006$&$0.060$&$0.938$\tabularnewline
&$0.433$&$ 250$&$48$&&$-0.094$&$0.055$&$0.603$&&$-0.050$&$0.080$&$0.905$&&$-0.036$&$0.136$&$0.943$&&$-0.006$&$0.059$&$0.943$\tabularnewline
&$0.433$&$ 250$&$96$&&$-0.056$&$0.057$&$0.832$&&$-0.026$&$0.078$&$0.936$&&$-0.021$&$0.122$&$0.946$&&$-0.004$&$0.059$&$0.945$\tabularnewline
&$0.433$&$ 500$&$12$&&$-0.206$&$0.034$&$0.000$&&$-0.143$&$0.058$&$0.322$&&$-0.104$&$0.108$&$0.846$&&$-0.004$&$0.043$&$0.958$\tabularnewline
&$0.433$&$ 500$&$24$&&$-0.143$&$0.038$&$0.026$&&$-0.084$&$0.062$&$0.761$&&$-0.052$&$0.113$&$0.933$&&$-0.004$&$0.045$&$0.949$\tabularnewline
&$0.433$&$ 500$&$48$&&$-0.093$&$0.042$&$0.403$&&$-0.046$&$0.063$&$0.897$&&$-0.028$&$0.111$&$0.947$&&$-0.004$&$0.046$&$0.936$\tabularnewline
&$0.433$&$ 500$&$96$&&$-0.055$&$0.044$&$0.764$&&$-0.023$&$0.062$&$0.933$&&$-0.018$&$0.100$&$0.952$&&$-0.003$&$0.045$&$0.945$\tabularnewline
&$0.433$&$1000$&$12$&&$-0.205$&$0.026$&$0.000$&&$-0.142$&$0.044$&$0.097$&&$-0.101$&$0.083$&$0.786$&&$-0.003$&$0.034$&$0.950$\tabularnewline
&$0.433$&$1000$&$24$&&$-0.143$&$0.030$&$0.000$&&$-0.082$&$0.049$&$0.649$&&$-0.049$&$0.091$&$0.920$&&$-0.002$&$0.034$&$0.951$\tabularnewline
&$0.433$&$1000$&$48$&&$-0.091$&$0.032$&$0.155$&&$-0.041$&$0.049$&$0.887$&&$-0.020$&$0.088$&$0.946$&&$-0.002$&$0.034$&$0.946$\tabularnewline
&$0.433$&$1000$&$96$&&$-0.054$&$0.033$&$0.642$&&$-0.021$&$0.048$&$0.940$&&$-0.012$&$0.081$&$0.947$&&$-0.003$&$0.034$&$0.949$\tabularnewline
\hline
\end{tabular}\end{center}}
\end{table}
\begin{table}[!t]
{\scriptsize
\caption{Monte Carlo simulation results for $\rho_1$}
\label{table:acor}
\begin{center}
\begin{tabular}{lrrrcrrrcrrrcrrrcrrr}
\hline\hline
\multicolumn{1}{l}{\bfseries }&\multicolumn{3}{c}{\bfseries }&\multicolumn{1}{c}{\bfseries }&\multicolumn{3}{c}{\bfseries NE}&\multicolumn{1}{c}{\bfseries }&\multicolumn{3}{c}{\bfseries HPJ}&\multicolumn{1}{c}{\bfseries }&\multicolumn{3}{c}{\bfseries TOJ}&\multicolumn{1}{c}{\bfseries }&\multicolumn{3}{c}{\bfseries IE}\tabularnewline
\cline{6-8} \cline{10-12} \cline{14-16} \cline{18-20}
\multicolumn{1}{l}{}&\multicolumn{1}{c}{true}&\multicolumn{1}{c}{$N$}&\multicolumn{1}{c}{$T$}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{bias}&\multicolumn{1}{c}{std}&\multicolumn{1}{c}{cp}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{bias}&\multicolumn{1}{c}{std}&\multicolumn{1}{c}{cp}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{bias}&\multicolumn{1}{c}{std}&\multicolumn{1}{c}{cp}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{bias}&\multicolumn{1}{c}{std}&\multicolumn{1}{c}{cp}\tabularnewline
\hline
at $\rho_1$'s 20\%Q&$0.609$&$ 250$&$12$&&$ 0.148$&$0.089$&$0.457$&&$ 0.089$&$0.152$&$0.862$&&$-0.004$&$0.288$&$0.931$&&$-0.007$&$0.080$&$0.945$\tabularnewline
&$0.609$&$ 250$&$24$&&$ 0.066$&$0.085$&$0.825$&&$-0.018$&$0.131$&$0.947$&&$-0.076$&$0.222$&$0.938$&&$-0.011$&$0.080$&$0.952$\tabularnewline
&$0.609$&$ 250$&$48$&&$ 0.027$&$0.083$&$0.930$&&$-0.017$&$0.110$&$0.946$&&$-0.019$&$0.160$&$0.954$&&$-0.009$&$0.080$&$0.943$\tabularnewline
&$0.609$&$ 250$&$96$&&$ 0.008$&$0.082$&$0.944$&&$-0.014$&$0.098$&$0.947$&&$-0.015$&$0.124$&$0.947$&&$-0.010$&$0.081$&$0.943$\tabularnewline
&$0.609$&$ 500$&$12$&&$ 0.151$&$0.067$&$0.302$&&$ 0.094$&$0.115$&$0.805$&&$-0.002$&$0.228$&$0.930$&&$-0.007$&$0.060$&$0.944$\tabularnewline
&$0.609$&$ 500$&$24$&&$ 0.070$&$0.065$&$0.717$&&$-0.013$&$0.102$&$0.947$&&$-0.068$&$0.176$&$0.941$&&$-0.006$&$0.061$&$0.949$\tabularnewline
&$0.609$&$ 500$&$48$&&$ 0.029$&$0.063$&$0.909$&&$-0.014$&$0.087$&$0.951$&&$-0.016$&$0.134$&$0.950$&&$-0.006$&$0.062$&$0.944$\tabularnewline
&$0.609$&$ 500$&$96$&&$ 0.010$&$0.062$&$0.942$&&$-0.010$&$0.076$&$0.947$&&$-0.009$&$0.098$&$0.954$&&$-0.007$&$0.060$&$0.947$\tabularnewline
&$0.609$&$1000$&$12$&&$ 0.154$&$0.050$&$0.149$&&$ 0.099$&$0.088$&$0.665$&&$-0.004$&$0.184$&$0.933$&&$-0.004$&$0.046$&$0.950$\tabularnewline
&$0.609$&$1000$&$24$&&$ 0.073$&$0.048$&$0.539$&&$-0.008$&$0.078$&$0.950$&&$-0.060$&$0.138$&$0.933$&&$-0.005$&$0.046$&$0.949$\tabularnewline
&$0.609$&$1000$&$48$&&$ 0.031$&$0.048$&$0.868$&&$-0.012$&$0.069$&$0.944$&&$-0.014$&$0.108$&$0.949$&&$-0.005$&$0.046$&$0.946$\tabularnewline
&$0.609$&$1000$&$96$&&$ 0.014$&$0.047$&$0.937$&&$-0.006$&$0.058$&$0.950$&&$-0.005$&$0.079$&$0.958$&&$-0.005$&$0.047$&$0.943$\tabularnewline
\hline
at $\rho_1$'s 40\%Q&$0.823$&$ 250$&$12$&&$ 0.054$&$0.093$&$0.903$&&$ 0.139$&$0.156$&$0.801$&&$ 0.105$&$0.283$&$0.915$&&$-0.015$&$0.091$&$0.942$\tabularnewline
&$0.823$&$ 250$&$24$&&$ 0.041$&$0.094$&$0.919$&&$ 0.027$&$0.146$&$0.935$&&$-0.051$&$0.244$&$0.940$&&$-0.016$&$0.089$&$0.949$\tabularnewline
&$0.823$&$ 250$&$48$&&$ 0.012$&$0.090$&$0.949$&&$-0.017$&$0.124$&$0.952$&&$-0.039$&$0.183$&$0.951$&&$-0.015$&$0.091$&$0.944$\tabularnewline
&$0.823$&$ 250$&$96$&&$-0.004$&$0.094$&$0.942$&&$-0.021$&$0.114$&$0.944$&&$-0.024$&$0.144$&$0.951$&&$-0.015$&$0.090$&$0.946$\tabularnewline
&$0.823$&$ 500$&$12$&&$ 0.056$&$0.070$&$0.869$&&$ 0.142$&$0.119$&$0.673$&&$ 0.104$&$0.219$&$0.903$&&$-0.011$&$0.067$&$0.950$\tabularnewline
&$0.823$&$ 500$&$24$&&$ 0.045$&$0.070$&$0.895$&&$ 0.031$&$0.112$&$0.934$&&$-0.048$&$0.192$&$0.947$&&$-0.010$&$0.069$&$0.947$\tabularnewline
&$0.823$&$ 500$&$48$&&$ 0.016$&$0.069$&$0.941$&&$-0.012$&$0.099$&$0.952$&&$-0.034$&$0.153$&$0.944$&&$-0.012$&$0.070$&$0.944$\tabularnewline
&$0.823$&$ 500$&$96$&&$ 0.003$&$0.069$&$0.953$&&$-0.013$&$0.087$&$0.953$&&$-0.015$&$0.115$&$0.953$&&$-0.011$&$0.068$&$0.951$\tabularnewline
&$0.823$&$1000$&$12$&&$ 0.059$&$0.053$&$0.770$&&$ 0.148$&$0.091$&$0.459$&&$ 0.105$&$0.167$&$0.876$&&$-0.008$&$0.052$&$0.951$\tabularnewline
&$0.823$&$1000$&$24$&&$ 0.045$&$0.054$&$0.834$&&$ 0.030$&$0.088$&$0.931$&&$-0.049$&$0.153$&$0.940$&&$-0.009$&$0.052$&$0.950$\tabularnewline
&$0.823$&$1000$&$48$&&$ 0.019$&$0.053$&$0.932$&&$-0.008$&$0.077$&$0.953$&&$-0.029$&$0.123$&$0.950$&&$-0.009$&$0.052$&$0.947$\tabularnewline
&$0.823$&$1000$&$96$&&$ 0.003$&$0.052$&$0.952$&&$-0.013$&$0.067$&$0.951$&&$-0.016$&$0.095$&$0.957$&&$-0.009$&$0.051$&$0.952$\tabularnewline
\hline
at $\rho_1$'s 60\%Q&$0.889$&$ 250$&$12$&&$-0.083$&$0.089$&$0.744$&&$ 0.087$&$0.143$&$0.910$&&$ 0.103$&$0.248$&$0.924$&&$-0.016$&$0.092$&$0.931$\tabularnewline
&$0.889$&$ 250$&$24$&&$-0.014$&$0.091$&$0.928$&&$ 0.056$&$0.135$&$0.933$&&$ 0.028$&$0.216$&$0.944$&&$-0.013$&$0.095$&$0.930$\tabularnewline
&$0.889$&$ 250$&$48$&&$-0.005$&$0.091$&$0.943$&&$ 0.005$&$0.121$&$0.946$&&$-0.025$&$0.172$&$0.945$&&$-0.013$&$0.092$&$0.935$\tabularnewline
&$0.889$&$ 250$&$96$&&$-0.009$&$0.093$&$0.940$&&$-0.011$&$0.110$&$0.942$&&$-0.016$&$0.132$&$0.943$&&$-0.014$&$0.094$&$0.931$\tabularnewline
&$0.889$&$ 500$&$12$&&$-0.078$&$0.067$&$0.701$&&$ 0.095$&$0.110$&$0.867$&&$ 0.132$&$0.196$&$0.889$&&$-0.011$&$0.069$&$0.946$\tabularnewline
&$0.889$&$ 500$&$24$&&$-0.011$&$0.070$&$0.928$&&$ 0.060$&$0.106$&$0.918$&&$ 0.029$&$0.173$&$0.949$&&$-0.012$&$0.069$&$0.944$\tabularnewline
&$0.889$&$ 500$&$48$&&$-0.003$&$0.069$&$0.947$&&$ 0.006$&$0.094$&$0.953$&&$-0.023$&$0.139$&$0.936$&&$-0.011$&$0.070$&$0.943$\tabularnewline
&$0.889$&$ 500$&$96$&&$-0.007$&$0.071$&$0.938$&&$-0.009$&$0.086$&$0.943$&&$-0.014$&$0.108$&$0.945$&&$-0.011$&$0.069$&$0.944$\tabularnewline
&$0.889$&$1000$&$12$&&$-0.077$&$0.051$&$0.587$&&$ 0.097$&$0.084$&$0.793$&&$ 0.154$&$0.162$&$0.804$&&$-0.010$&$0.052$&$0.944$\tabularnewline
&$0.889$&$1000$&$24$&&$-0.008$&$0.053$&$0.929$&&$ 0.063$&$0.082$&$0.886$&&$ 0.030$&$0.138$&$0.944$&&$-0.010$&$0.053$&$0.942$\tabularnewline
&$0.889$&$1000$&$48$&&$ 0.000$&$0.052$&$0.947$&&$ 0.009$&$0.073$&$0.950$&&$-0.021$&$0.111$&$0.947$&&$-0.008$&$0.053$&$0.944$\tabularnewline
&$0.889$&$1000$&$96$&&$-0.004$&$0.053$&$0.946$&&$-0.005$&$0.065$&$0.948$&&$-0.010$&$0.086$&$0.942$&&$-0.009$&$0.053$&$0.941$\tabularnewline
\hline
at $\rho_1$'s 80\%Q&$0.790$&$ 250$&$12$&&$-0.247$&$0.075$&$0.099$&&$-0.048$&$0.120$&$0.886$&&$-0.069$&$0.194$&$0.911$&&$-0.015$&$0.086$&$0.919$\tabularnewline
&$0.790$&$ 250$&$24$&&$-0.105$&$0.084$&$0.656$&&$ 0.021$&$0.119$&$0.930$&&$ 0.052$&$0.179$&$0.938$&&$-0.011$&$0.087$&$0.921$\tabularnewline
&$0.790$&$ 250$&$48$&&$-0.045$&$0.086$&$0.863$&&$ 0.014$&$0.108$&$0.934$&&$ 0.007$&$0.145$&$0.938$&&$-0.012$&$0.087$&$0.920$\tabularnewline
&$0.790$&$ 250$&$96$&&$-0.025$&$0.086$&$0.901$&&$-0.003$&$0.097$&$0.934$&&$-0.010$&$0.113$&$0.935$&&$-0.012$&$0.086$&$0.921$\tabularnewline
&$0.790$&$ 500$&$12$&&$-0.240$&$0.058$&$0.026$&&$-0.031$&$0.094$&$0.888$&&$-0.082$&$0.156$&$0.887$&&$-0.009$&$0.065$&$0.928$\tabularnewline
&$0.790$&$ 500$&$24$&&$-0.103$&$0.064$&$0.555$&&$ 0.027$&$0.092$&$0.929$&&$ 0.063$&$0.144$&$0.930$&&$-0.010$&$0.067$&$0.922$\tabularnewline
&$0.790$&$ 500$&$48$&&$-0.043$&$0.066$&$0.848$&&$ 0.018$&$0.086$&$0.935$&&$ 0.008$&$0.121$&$0.938$&&$-0.009$&$0.066$&$0.926$\tabularnewline
&$0.790$&$ 500$&$96$&&$-0.022$&$0.066$&$0.900$&&$-0.001$&$0.076$&$0.938$&&$-0.010$&$0.094$&$0.934$&&$-0.009$&$0.065$&$0.931$\tabularnewline
&$0.790$&$1000$&$12$&&$-0.237$&$0.043$&$0.001$&&$-0.024$&$0.071$&$0.891$&&$-0.110$&$0.109$&$0.836$&&$-0.007$&$0.050$&$0.936$\tabularnewline
&$0.790$&$1000$&$24$&&$-0.097$&$0.048$&$0.455$&&$ 0.037$&$0.071$&$0.931$&&$ 0.073$&$0.116$&$0.921$&&$-0.007$&$0.051$&$0.932$\tabularnewline
&$0.790$&$1000$&$48$&&$-0.039$&$0.050$&$0.839$&&$ 0.021$&$0.066$&$0.948$&&$ 0.009$&$0.099$&$0.950$&&$-0.008$&$0.050$&$0.937$\tabularnewline
&$0.790$&$1000$&$96$&&$-0.021$&$0.050$&$0.905$&&$-0.001$&$0.059$&$0.946$&&$-0.013$&$0.077$&$0.936$&&$-0.008$&$0.051$&$0.924$\tabularnewline
\hline
\end{tabular}\end{center}}
\end{table}
\begin{table}[!t]
{\scriptsize
\caption{Monte Carlo simulation results for bandwidths}
\label{table:band}
\begin{center}
\begin{tabular}{lrrcrrcrrcrrcrrcrrcrr}
\hline\hline
\multicolumn{1}{l}{\bfseries }&\multicolumn{2}{c}{\bfseries }&\multicolumn{1}{c}{\bfseries }&\multicolumn{2}{c}{\bfseries $\mu$ NE}&\multicolumn{1}{c}{\bfseries }&\multicolumn{2}{c}{\bfseries $\mu$ IE}&\multicolumn{1}{c}{\bfseries }&\multicolumn{2}{c}{\bfseries $\gamma_0$ NE}&\multicolumn{1}{c}{\bfseries }&\multicolumn{2}{c}{\bfseries $\gamma_0$ IE}&\multicolumn{1}{c}{\bfseries }&\multicolumn{2}{c}{\bfseries $\rho_1$ NE}&\multicolumn{1}{c}{\bfseries }&\multicolumn{2}{c}{\bfseries $\rho_1$ IE}\tabularnewline
\cline{5-6} \cline{8-9} \cline{11-12} \cline{14-15} \cline{17-18} \cline{20-21}
\multicolumn{1}{l}{}&\multicolumn{1}{c}{$N$}&\multicolumn{1}{c}{$T$}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{mean}&\multicolumn{1}{c}{std}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{mean}&\multicolumn{1}{c}{std}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{mean}&\multicolumn{1}{c}{std}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{mean}&\multicolumn{1}{c}{std}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{mean}&\multicolumn{1}{c}{std}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{mean}&\multicolumn{1}{c}{std}\tabularnewline
\hline
at 20\% Q&$ 250$&$12$&&$0.889$&$0.277$&&$0.807$&$0.255$&&$0.374$&$0.025$&&$0.387$&$0.053$&&$0.316$&$0.092$&&$0.320$&$0.098$\tabularnewline
&$ 250$&$24$&&$0.850$&$0.259$&&$0.807$&$0.254$&&$0.370$&$0.024$&&$0.388$&$0.052$&&$0.311$&$0.096$&&$0.317$&$0.097$\tabularnewline
&$ 250$&$48$&&$0.831$&$0.261$&&$0.805$&$0.254$&&$0.373$&$0.030$&&$0.386$&$0.052$&&$0.314$&$0.096$&&$0.318$&$0.097$\tabularnewline
&$ 250$&$96$&&$0.819$&$0.260$&&$0.810$&$0.260$&&$0.379$&$0.039$&&$0.388$&$0.055$&&$0.317$&$0.098$&&$0.318$&$0.098$\tabularnewline
&$ 500$&$12$&&$0.786$&$0.237$&&$0.714$&$0.232$&&$0.316$&$0.015$&&$0.324$&$0.032$&&$0.282$&$0.082$&&$0.285$&$0.086$\tabularnewline
&$ 500$&$24$&&$0.759$&$0.235$&&$0.716$&$0.229$&&$0.311$&$0.014$&&$0.325$&$0.034$&&$0.276$&$0.085$&&$0.283$&$0.087$\tabularnewline
&$ 500$&$48$&&$0.734$&$0.230$&&$0.716$&$0.232$&&$0.313$&$0.017$&&$0.324$&$0.032$&&$0.278$&$0.085$&&$0.283$&$0.086$\tabularnewline
&$ 500$&$96$&&$0.727$&$0.227$&&$0.717$&$0.235$&&$0.317$&$0.022$&&$0.324$&$0.032$&&$0.282$&$0.087$&&$0.284$&$0.087$\tabularnewline
&$1000$&$12$&&$0.699$&$0.219$&&$0.636$&$0.207$&&$0.267$&$0.009$&&$0.273$&$0.019$&&$0.252$&$0.072$&&$0.254$&$0.078$\tabularnewline
&$1000$&$24$&&$0.671$&$0.209$&&$0.630$&$0.202$&&$0.263$&$0.009$&&$0.273$&$0.019$&&$0.247$&$0.076$&&$0.253$&$0.077$\tabularnewline
&$1000$&$48$&&$0.656$&$0.206$&&$0.637$&$0.208$&&$0.265$&$0.011$&&$0.273$&$0.019$&&$0.247$&$0.076$&&$0.253$&$0.078$\tabularnewline
&$1000$&$96$&&$0.650$&$0.210$&&$0.633$&$0.206$&&$0.267$&$0.013$&&$0.273$&$0.020$&&$0.251$&$0.076$&&$0.251$&$0.076$\tabularnewline
\hline
at 40\% Q&$ 250$&$12$&&$0.850$&$0.179$&&$0.792$&$0.175$&&$0.510$&$0.151$&&$0.434$&$0.097$&&$0.323$&$0.084$&&$0.314$&$0.095$\tabularnewline
&$ 250$&$24$&&$0.824$&$0.174$&&$0.789$&$0.164$&&$0.479$&$0.103$&&$0.432$&$0.096$&&$0.318$&$0.091$&&$0.313$&$0.094$\tabularnewline
&$ 250$&$48$&&$0.810$&$0.169$&&$0.790$&$0.171$&&$0.451$&$0.090$&&$0.435$&$0.099$&&$0.312$&$0.093$&&$0.313$&$0.093$\tabularnewline
&$ 250$&$96$&&$0.800$&$0.170$&&$0.788$&$0.166$&&$0.443$&$0.096$&&$0.431$&$0.096$&&$0.313$&$0.094$&&$0.314$&$0.095$\tabularnewline
&$ 500$&$12$&&$0.731$&$0.141$&&$0.679$&$0.131$&&$0.433$&$0.147$&&$0.383$&$0.086$&&$0.286$&$0.074$&&$0.280$&$0.084$\tabularnewline
&$ 500$&$24$&&$0.710$&$0.133$&&$0.681$&$0.129$&&$0.427$&$0.096$&&$0.380$&$0.082$&&$0.280$&$0.079$&&$0.279$&$0.085$\tabularnewline
&$ 500$&$48$&&$0.699$&$0.135$&&$0.682$&$0.131$&&$0.399$&$0.082$&&$0.381$&$0.084$&&$0.278$&$0.084$&&$0.278$&$0.085$\tabularnewline
&$ 500$&$96$&&$0.692$&$0.134$&&$0.684$&$0.131$&&$0.390$&$0.077$&&$0.382$&$0.085$&&$0.276$&$0.082$&&$0.279$&$0.084$\tabularnewline
&$1000$&$12$&&$0.625$&$0.098$&&$0.586$&$0.105$&&$0.340$&$0.139$&&$0.339$&$0.074$&&$0.256$&$0.067$&&$0.247$&$0.075$\tabularnewline
&$1000$&$24$&&$0.610$&$0.102$&&$0.588$&$0.107$&&$0.387$&$0.094$&&$0.339$&$0.074$&&$0.247$&$0.071$&&$0.247$&$0.075$\tabularnewline
&$1000$&$48$&&$0.598$&$0.102$&&$0.584$&$0.099$&&$0.356$&$0.072$&&$0.341$&$0.077$&&$0.244$&$0.073$&&$0.246$&$0.075$\tabularnewline
&$1000$&$96$&&$0.595$&$0.104$&&$0.587$&$0.099$&&$0.345$&$0.070$&&$0.341$&$0.076$&&$0.246$&$0.075$&&$0.247$&$0.075$\tabularnewline
\hline
at 60\% Q&$ 250$&$12$&&$0.842$&$0.165$&&$0.788$&$0.165$&&$0.308$&$0.061$&&$0.415$&$0.131$&&$0.323$&$0.070$&&$0.327$&$0.082$\tabularnewline
&$ 250$&$24$&&$0.823$&$0.173$&&$0.786$&$0.165$&&$0.364$&$0.132$&&$0.411$&$0.129$&&$0.322$&$0.071$&&$0.324$&$0.081$\tabularnewline
&$ 250$&$48$&&$0.811$&$0.174$&&$0.787$&$0.164$&&$0.410$&$0.149$&&$0.413$&$0.129$&&$0.322$&$0.074$&&$0.327$&$0.080$\tabularnewline
&$ 250$&$96$&&$0.801$&$0.171$&&$0.784$&$0.156$&&$0.420$&$0.144$&&$0.414$&$0.131$&&$0.324$&$0.078$&&$0.325$&$0.080$\tabularnewline
&$ 500$&$12$&&$0.731$&$0.139$&&$0.682$&$0.136$&&$0.253$&$0.028$&&$0.363$&$0.117$&&$0.282$&$0.057$&&$0.290$&$0.072$\tabularnewline
&$ 500$&$24$&&$0.710$&$0.128$&&$0.680$&$0.129$&&$0.290$&$0.098$&&$0.363$&$0.117$&&$0.281$&$0.060$&&$0.291$&$0.072$\tabularnewline
&$ 500$&$48$&&$0.698$&$0.138$&&$0.683$&$0.135$&&$0.344$&$0.131$&&$0.362$&$0.118$&&$0.287$&$0.066$&&$0.290$&$0.073$\tabularnewline
&$ 500$&$96$&&$0.689$&$0.134$&&$0.678$&$0.129$&&$0.363$&$0.130$&&$0.364$&$0.119$&&$0.288$&$0.070$&&$0.290$&$0.072$\tabularnewline
&$1000$&$12$&&$0.623$&$0.098$&&$0.586$&$0.099$&&$0.212$&$0.012$&&$0.317$&$0.105$&&$0.249$&$0.049$&&$0.258$&$0.065$\tabularnewline
&$1000$&$24$&&$0.609$&$0.108$&&$0.585$&$0.103$&&$0.233$&$0.049$&&$0.320$&$0.107$&&$0.250$&$0.053$&&$0.258$&$0.065$\tabularnewline
&$1000$&$48$&&$0.597$&$0.097$&&$0.584$&$0.106$&&$0.281$&$0.105$&&$0.320$&$0.106$&&$0.255$&$0.057$&&$0.258$&$0.063$\tabularnewline
&$1000$&$96$&&$0.593$&$0.102$&&$0.584$&$0.099$&&$0.312$&$0.116$&&$0.318$&$0.105$&&$0.256$&$0.061$&&$0.258$&$0.063$\tabularnewline
\hline
at 80\% Q&$ 250$&$12$&&$0.889$&$0.271$&&$0.809$&$0.257$&&$0.373$&$0.116$&&$0.404$&$0.137$&&$0.338$&$0.082$&&$0.312$&$0.057$\tabularnewline
&$ 250$&$24$&&$0.851$&$0.260$&&$0.806$&$0.250$&&$0.370$&$0.131$&&$0.407$&$0.139$&&$0.317$&$0.060$&&$0.310$&$0.054$\tabularnewline
&$ 250$&$48$&&$0.832$&$0.261$&&$0.809$&$0.258$&&$0.384$&$0.145$&&$0.404$&$0.135$&&$0.309$&$0.054$&&$0.311$&$0.056$\tabularnewline
&$ 250$&$96$&&$0.817$&$0.251$&&$0.808$&$0.256$&&$0.397$&$0.144$&&$0.409$&$0.139$&&$0.308$&$0.053$&&$0.310$&$0.054$\tabularnewline
&$ 500$&$12$&&$0.787$&$0.243$&&$0.715$&$0.233$&&$0.323$&$0.097$&&$0.358$&$0.121$&&$0.297$&$0.064$&&$0.265$&$0.040$\tabularnewline
&$ 500$&$24$&&$0.761$&$0.239$&&$0.719$&$0.230$&&$0.319$&$0.109$&&$0.358$&$0.121$&&$0.272$&$0.046$&&$0.265$&$0.040$\tabularnewline
&$ 500$&$48$&&$0.748$&$0.237$&&$0.718$&$0.228$&&$0.331$&$0.125$&&$0.358$&$0.123$&&$0.262$&$0.038$&&$0.264$&$0.039$\tabularnewline
&$ 500$&$96$&&$0.730$&$0.233$&&$0.717$&$0.223$&&$0.348$&$0.129$&&$0.362$&$0.122$&&$0.262$&$0.039$&&$0.264$&$0.039$\tabularnewline
&$1000$&$12$&&$0.704$&$0.210$&&$0.637$&$0.205$&&$0.287$&$0.091$&&$0.317$&$0.111$&&$0.258$&$0.050$&&$0.223$&$0.026$\tabularnewline
&$1000$&$24$&&$0.677$&$0.214$&&$0.635$&$0.205$&&$0.274$&$0.089$&&$0.315$&$0.108$&&$0.227$&$0.031$&&$0.225$&$0.029$\tabularnewline
&$1000$&$48$&&$0.658$&$0.210$&&$0.633$&$0.208$&&$0.287$&$0.105$&&$0.317$&$0.112$&&$0.220$&$0.024$&&$0.224$&$0.027$\tabularnewline
&$1000$&$96$&&$0.655$&$0.215$&&$0.639$&$0.208$&&$0.300$&$0.112$&&$0.314$&$0.107$&&$0.221$&$0.024$&&$0.225$&$0.028$\tabularnewline
\hline
\end{tabular}\end{center}}
\end{table}
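Each table reports, for every design, the mean bias, the standard deviation (std) across replications, and the empirical coverage probability (cp) of the associated confidence intervals (presumably at the nominal 95\% level, given the values clustering near 0.95). A minimal sketch of how such summaries are typically computed from simulation output; the function name and toy numbers are hypothetical, not the paper's actual code:

```python
import numpy as np

def mc_summary(estimates, std_errors, true_value, z=1.96):
    """Bias, standard deviation, and coverage probability (cp)
    across Monte Carlo replications, as reported in the tables."""
    estimates = np.asarray(estimates, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    bias = estimates.mean() - true_value          # mean estimation error
    std = estimates.std(ddof=1)                   # dispersion across replications
    # cp: fraction of replications whose interval est +/- z*se covers the truth
    covered = np.abs(estimates - true_value) <= z * std_errors
    cp = covered.mean()
    return bias, std, cp

# toy check with four hypothetical replications
bias, std, cp = mc_summary([0.27, 0.29, 0.28, 0.30], [0.03] * 4, 0.28)
```

With these toy inputs, every interval covers the true value, so cp is 1.0 and the bias is the mean estimate minus the truth, 0.005.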
\end{landscape}
\clearpage
\bibliographystyle{abbrvnat}
\section{Introduction}
In a sphere of 11\,Mpc radius around the Milky Way reside more than one thousand galaxies, most of them dwarf galaxies (M$_B>-17.7$\,mag).
This so-called Local Volume \citep{1979AN....300..181K,2004AJ....127.2031K,2013AJ....145..101K} contains many prominent galaxy aggregates, e.g. our own Local Group (LG), the Sculptor filament, the Centaurus group, the M\,81 group, the Canes Venatici cloud, the M\,101 group complex, and the Leo-I group \citep{1988ang..book.....T}. In recent years many teams have taken up the challenge of searching for new dwarf galaxies in the local universe and measuring their distances \citep{2009AJ....137.3009C,2013AJ....146..126C,2014ApJ...787L..37M,2014MNRAS.441.2124B,2014ApJ...795L..35C,2016ApJ...823...19C,2015ApJ...804L..44K,2015A&A...583A..79M,2017A&A...597A...7M,2017A&A...602A.119M,2016ApJ...828L...5C,2016A&A...588A..89J,2017ApJ...837..136D,2017MNRAS.465.5026C,2017A&A...603A..18H,2017ApJ...848...19P,2018MNRAS.474.3221M}.
These studies can be used to test the theoretical predictions of the standard model of cosmology ($\Lambda$CDM). For the LG there is indeed a serious tension between observation and theory, namely the long-standing missing satellite problem \citep{1999ApJ...524L..19M}, the too-big-to-fail (TBTF) problem \citep{2010A&A...523A..32K,2011MNRAS.415L..40B}, and the plane-of-satellites problem \citep{2005A&A...431..517K,2012MNRAS.423.1109P,2013Natur.493...62I,2018arXiv180202579P}; see \citet{2017ARA&A..55..343B} for a recent review of small-scale challenges.
Such studies are now being extended to other nearby galaxy groups, e.g. Cen\,A \citep{2015ApJ...802L..25T,2016A&A...595A.119M,2018Sci...359..534M}, addressing the plane-of-satellites problem, or M\,101 \citep{2017ApJ...837..136D,2017A&A...602A.119M}, addressing the TBTF and missing satellite problems.
Using public data from the Sloan Digital Sky Survey (SDSS) we have started to systematically search for new, hitherto undetected dwarf galaxies in the Local Volume, beginning with the M\,101 group complex, covering 330\,deg$^2$ around the spiral galaxies M\,101, M\,51, and M\,63. We found 15 new dwarf galaxy candidates \citep{2017A&A...602A.119M}. We now continue our optical search for dwarf galaxies in an area that covers 500\,deg$^2$ around the Leo-I group (Fig.\,1).
The Leo-I group, with a mean distance of 10.7\,Mpc \citep{2004AJ....127.2031K,2013AJ....145..101K}, consists of seven bright galaxies: NGC\,3351 (= M\,95), NGC\,3368 (= M\,96), NGC\,3377, NGC\,3379 (= M\,105), NGC\,3384, NGC\,3412, and NGC\,3489 \citep{2004ARep...48..267K}. Another four bright galaxies, NGC\,3623 (= M\,65), NGC\,3627 (= M\,66), NGC\,3628, and NGC\,3593 -- the Leo Triplet, about six degrees to the east of the main aggregate -- are possibly also part of the group based on their common distances and systemic velocities \citep{2000ApJS..128..431F}.
Note that about eight degrees to the north-east is another quartet of bright galaxies (NGC\,3599, NGC\,3605, NGC\,3607, and NGC\,3608), which shares the same systemic velocity but lies farther behind and is arguably not associated with the group \citep{2000ApJS..128..431F}.
A spectacular feature of the Leo-I group in HI is the so-called Leo ring \citep{1985ApJ...288L..33S} around NGC\,3384/M\,105, one of the largest HI structures in the nearby universe. \citet{2010ApJ...717L.143M} followed this up with a deep optical survey using MegaCam on the CFHT and found no diffuse stellar component down to a surface brightness of 28\,mag\,arcsec$^{-2}$. Using gas/dark matter simulations, the authors suggest that the ring originated in a collision between NGC\,3384 and M\,105, which can explain its structure together with the absence of apparent light. Deeper images ($\mu_V > 29.5$\,mag\,arcsec$^{-2}$) taken by \citet{2014ApJ...791...38W} still revealed no optical counterpart of the ring; however, they found some stream-like features associated with the ring, which are possibly of tidal origin.
In the Leo Triplet another intriguing feature, this time in the optical, is a stellar stream associated with \om{the} boxy spiral NGC\,3628 \citep{1956ErNW...29..344Z}, which hosts a tidal dwarf galaxy \citep{2014ApJ...786..144N} and an ultra compact dwarf galaxy \citep{2015ApJ...812L..10J}.
For the central part of the Leo-I group (i.e. the M\,96 subgroup) an initial catalogue of 50 dwarf galaxy candidates was produced by \citet{1990AJ....100....1F}. Based on morphological properties, the authors argued that half of them are group members. Another collection of dwarf galaxies was discovered by \citet{2002MNRAS.335..712T}, who surveyed a $10\times 10$ \om{deg$^2$} field partially covering the Leo-I group. Using the digitized sky survey, \citet{2004ARep...48..267K} refined and extended this list to 50 likely members. For many members, HI velocities were derived \citep{2009AJ....138..338S}, making it possible to distinguish between actual Leo-I members and background galaxies belonging to the more distant Leo cloud (see Fig.\,1 in \citealt{2002MNRAS.335..712T} for the difference in velocity space). A very deep but spatially limited image, based on amateur telescopes, was obtained for NGC\,3628 in the Leo Triplet and revealed another faint dwarf galaxy \citep{2016A&A...588A..89J}.
To follow a consistent naming convention in this paper, from now on we use the term M\,96 subgroup to describe the main galaxy aggregate around M\,96, and the term Leo-Triplet (Leo-T) for the aggregate around M\,66. Both subgroups together are called the Leo-I group (see Fig.~\ref{fieldImage}).\\
In this work we present a search for unresolved dwarf galaxies using publicly available SDSS data in 500\,deg$^2$, covering the extended Leo-I group region. In Section\,\ref{search} we summarize our search strategy, and in Section\,\ref{phot} we present the surface photometry performed for all known and newly found members of the Leo-I group. In Section\,\ref{disc} we discuss our candidate list and potential background contamination.
Finally, in Section\,\ref{concl} we draw our conclusions and give a brief outlook.
\section{Discovery of new dwarf \om{galaxy candidates}}
\label{search}
\om{In recent years, different automatic detection approaches have been proposed to search for low surface brightness galaxies \citep[e.g.][]{2014ApJ...787L..37M,2014ApJ...788..188S,2016A&A...590A..20V,2017ApJ...850..109B} with encouraging results. On the other hand, these pipelines have only been applied to small areas of the sky ($<$10\,deg$^2$) and still suffer from a considerable rate of false detections, or rely on a large number of existing galaxies to study galaxy groups on a statistical basis. It remains to be seen how these methods perform on large-field surveys covering several hundred square degrees, and how time-consuming the task of rejecting false positives will be. We, as well as other authors \citep[e.g.][]{2017ApJ...848...19P,2017MNRAS.470.1512W}, argue that a visual search on images is still on par with algorithm-based detections.\\}
\om{In this work, we} follow the same methods as described in \citet{2017A&A...602A.119M} to search for dwarf galaxies in an area of $\sim$500\,deg$^2$ around the Leo-I group using data taken from the SDSS. In summary, this involves the creation of 1\,deg$^2$ mosaics of $g$ and $r$ images, the use of several image processing algorithms (e.g. binning and Gaussian convolution) to enhance low surface brightness features within the images, and a final visual search for dwarf galaxies in these processed images. Once an object is detected, surface photometry is applied to derive its structural parameters, which are compared to the properties of known dwarf galaxies of the LG and other groups. Based on this morphological comparison, a detection is accepted or rejected as a dwarf galaxy candidate. To estimate our detection rate we conducted an experiment in which we injected artificial galaxies into the SDSS images and derived the recovery rate of these objects (Fig.\,3 in \citealt{2017A&A...602A.119M}). \\
In Fig.\,\ref{fieldImage} we present the survey footprint, the known galaxies in this field (black and gray dots), and the new dwarf galaxy candidates (red dots) found in our search. In the up-to-date online version\footnote{Last checked: 19 December 2017.} of the LV catalog, 63 dwarf galaxies are listed within our footprint, four of which (open circles) have a distance estimate smaller than 7\,Mpc. In Table\,\ref{table:1} we present the coordinates of the 36 dwarf galaxy candidates found in the survey, together with our galaxy type classification and comments on the objects. We indicate whether the objects are found in the vicinity of M\,96, the Leo Triplet, or in the surrounding field.
\begin{figure*}[ht]
\centering
\hspace*{-0.7cm}
\includegraphics[width=20cm]{field.pdf}
\caption{ Survey area of $\approx 500$ \om{deg$^2$} in the Leo-I group region. The squares correspond to the created 1 \om{deg$^2$} mosaics.
The small black dots are previously known members based on their photometric properties, compiled from the Local Volume Catalog \citep{2004AJ....127.2031K,2013AJ....145..101K}. The large gray dots are the major galaxies in the M\,96 subgroup and the Leo Triplet. The red dots indicate the positions of the 36 new dwarf candidates. Open circles are confirmed foreground ($<7$\,Mpc) galaxies taken from the LV Catalog.
}
\label{fieldImage}
\end{figure*}
\begin{table}[ht]
\centering
\setlength{\tabcolsep}{3pt}
\caption{Names, coordinates, and morphological types of the 36 new dwarf galaxy candidates of the Leo-I group.}
\label{table:1}
\begin{tabular}{lccll}
\hline\hline
& $\alpha$ & $\delta$ & \\
Name & (J2000) & (J2000) & Type & Notes\\
\hline \\[-2mm]
dw1013+18 & 10:13:29 & $+$18:36:44 & dSph & field \\
dw1037+09 & 10:37:40 & $+$09:06:20 & dIrr & M\,96 \\
dw1040+06 & 10:40:30 & $+$06:56:27 & dSph & field \\
dw1044+11 & 10:44:33 & $+$11:16:10 & dSph & M\,96 \\
dw1045+14a & 10:45:01 & $+$14:06:20 & dSph & M\,96 \\
dw1045+14b & 10:45:56 & $+$14:13:37 & dSph & M\,96 \\
dw1045+16 & 10:45:56& $+$16:55:00 & dSph, bg? & M\,96 \\
dw1045+13 & 10:45:58 & $+$13:32:52 & dSph & M\,96 \\
dw1047+16 & 10:47:00 & $+$16:08:50 & dSph,N & M\,96 \\
dw1048+13 & 10:48:36 & $+$13:03:34 & dSph & M\,96 \\
dw1049+12a& 10:49:11 & $+$12:47:34 & dSph & M\,96 \\
dw1049+15 & 10:49:14 &$+$15:58:20& dSph/dIrr & M\,96 \\
dw1049+12b& 10:49:26 & $+$12:33:08 & dSph/dIrr? & M\,96 \\
dw1051+11 & 10:51:03 & $+$11:01:13 & dSph, UDG? & M\,96 \\
dw1055+11 & 10:55:43 & $+$11:58:05 & dSph,N, UDG? & M\,96 \\
dw1059+11 & 10:59:51 & $+$11:25:38 & dSph & M\,96 \\
dw1101+11 & 11:01:22 & $+$11:45:12 & dSph & M\,96 \\
dw1109+18 & 11:09:08 & $+$18:54:20 & dIrr & field \\
dw1110+18 & 11:10:55 & $+$18:58:52 & dSph & field \\
dw1116+14 & 11:16:14& $+$14:38:17 & dSph, bg? & Leo-T\\
dw1116+15a & 11:16:17 & $+$15:04:02 & dSph, bg? &Leo-T \\
dw1116+15b & 11:16:46 & $+$15:54:19 & dSph, bg? & Leo-T\\
dw1117+15 &11:17:02 & $+$15:10:17 & dSph, UDG?, bg? &Leo-T\\
dw1117+12 & 11:17:44 & $+$12:50:10 & dSph & Leo-T\\
dw1118+13a& 11:18:15 & $+$13:30:53 & dSph & Leo-T \\
dw1118+13b & 11:18:53 & $+$13:48:18 & dSph & Leo-T \\
dw1123+13 & 11:23:56 & $+$13:46:41 & dSph & Leo-T\\
dw1127+13 & 11:27:13 & $+$13:46:50 & dSph &Leo-T \\
dw1130+20 & 11:30:32 & $+$20:45:41 & dIrr & field \\
dw1131+14 & 11:31:01 & $+$15:54:52 & dSph & field \\
dw1137+16 & 11:37:46& $+$16:31:09 & dSph, UDG?& field \\%343 bg of UGC 6594?
dw1140+17 & 11:40:43 & $+$17:38:36 & dSph & field\\
dw1145+14 &11:45:32 & $+$15:52:50 & dSph&field \\
dw1148+12 & 11:48:09 & $+$12:48:43 & dSph &field\\
dw1148+16 & 11:48:45& $+$16:44:24 & dSph &field \\
dw1151+16 & 11:51:15& $+$16:00:20 & dSph & field\\
\hline\hline
\end{tabular}
\end{table}
\begin{figure*}
\includegraphics[width=3.6cm]{dw1013+18.png}
\includegraphics[width=3.6cm]{dw1037+09.png}
\includegraphics[width=3.6cm]{dw1040+06.png}
\includegraphics[width=3.6cm]{dw1044+11.png}
\includegraphics[width=3.6cm]{dw1045+14a.png}\\
\includegraphics[width=3.6cm]{dw1045+14b.png}
\includegraphics[width=3.6cm]{dw1045+16.png}
\includegraphics[width=3.6cm]{dw1045+13.png}
\includegraphics[width=3.6cm]{dw1047+16.png}
\includegraphics[width=3.6cm]{dw1048+13.png}\\
\includegraphics[width=3.6cm]{dw1049+12a.png}
\includegraphics[width=3.6cm]{dw1049+15.png}
\includegraphics[width=3.6cm]{dw1049+12b.png}
\includegraphics[width=3.6cm]{dw1051+11.png}
\includegraphics[width=3.6cm]{dw1055+11.png}\\
\includegraphics[width=3.6cm]{dw1059+11.png}
\includegraphics[width=3.6cm]{dw1101+11.png}
\includegraphics[width=3.6cm]{dw1109+18.png}
\includegraphics[width=3.6cm]{dw1110+18.png}
\includegraphics[width=3.6cm]{dw1116+14.png}
\caption{Gallery showing SDSS $r$-band images of the new Leo-I group member candidates.
One side of an image corresponds to 80\,arcsec, or 3.88\,kpc at a distance of 10\,Mpc. \om{North is to the top, east to the right.} }
\label{sample0}
\end{figure*}
\begin{figure*}
\includegraphics[width=3.6cm]{dw1116+15a.png}
\includegraphics[width=3.6cm]{dw1116+15b.png}
\includegraphics[width=3.6cm]{dw1117+15.png}
\includegraphics[width=3.6cm]{dw1117+12.png}
\includegraphics[width=3.6cm]{dw1118+13a.png}\\
\includegraphics[width=3.6cm]{dw1118+13b.png}
\includegraphics[width=3.6cm]{dw1123+13.png}
\includegraphics[width=3.6cm]{dw1127+13.png}
\includegraphics[width=3.6cm]{dw1130+20.png}
\includegraphics[width=3.6cm]{dw1131+14.png}\\
\includegraphics[width=3.6cm]{dw1137+16.png}
\includegraphics[width=3.6cm]{dw1140+17.png}
\includegraphics[width=3.6cm]{dw1145+14.png}
\includegraphics[width=3.6cm]{dw1148+12.png}
\includegraphics[width=3.6cm]{dw1148+16.png}\\
\includegraphics[width=3.6cm]{dw1151+16.png}
\caption{Fig.\,\ref{sample0} continued.}
\label{sample1}
\end{figure*}
\section{Surface photometry}
\label{phot}
We computed the total apparent magnitude $m$, the mean effective surface brightness $\langle \mu\rangle_{eff}$, and the effective radius $r_{eff}$ in the $g$ and $r$ bands for each dwarf galaxy candidate, as well as for already known group members, as many of them lack accurate photometry. To measure the surface brightness profiles we used circular apertures (step size of $0\farcs396$, corresponding to 1 pixel).
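The mean effective surface brightness follows directly from the total apparent magnitude and the effective radius (half the total flux spread over the area within $r_{eff}$); as an illustrative check against the tabulated values (the function name is ours):

```python
import math

def mean_eff_sb(m_tot, r_eff_arcsec):
    """Mean surface brightness within the effective radius, in mag/arcsec^2:
    half the total flux (hence the factor 2) spread over pi*r_eff^2."""
    return m_tot + 2.5 * math.log10(2.0 * math.pi * r_eff_arcsec ** 2)

# dw1037+09: r_tot = 17.09 mag, r_eff = 18.7 arcsec -> ~25.44 mag/arcsec^2,
# matching column (11) of the table of new candidates
```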
S\'ersic profiles \citep{1968adga.book.....S} were fitted to the derived profiles using the equation
$$\mu_{sersic}(r)= \mu_0+1.0857\cdot\left(\frac{r}{r_0}\right)^{n},$$
where $\mu_0$ is the S\'ersic central surface brightness, $r_0$ the S\'ersic scale length, and $n$ the S\'ersic curvature index. The total extinction corrected absolute magnitude $M$ is calculated with a distance modulus of $m-M=30.06$\,mag, corresponding to $D=10.4$\,Mpc, as adopted in the LV catalog for Leo-I members without distance estimates.
See Fig.\,\ref{sbp} for all surface brightness profiles in the $r$ band and the associated S\'ersic fits.
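A S\'ersic fit of this form can be sketched with a standard least-squares routine; the synthetic profile and parameter values below are illustrative only, not taken from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def sersic_mu(r, mu0, r0, n):
    """Sersic profile in magnitude units: mu(r) = mu0 + 1.0857*(r/r0)**n."""
    return mu0 + 1.0857 * (r / r0) ** n

# synthetic noisy profile sampled at the SDSS pixel scale (0.396 arcsec)
rng = np.random.default_rng(42)
r = np.arange(0.396, 30.0, 0.396)
mu_obs = sersic_mu(r, 24.5, 8.0, 1.2) + rng.normal(0.0, 0.05, r.size)

# least-squares fit with a rough initial guess
popt, pcov = curve_fit(sersic_mu, r, mu_obs, p0=(24.0, 5.0, 1.0))
mu0_fit, r0_fit, n_fit = popt
perr = np.sqrt(np.diag(pcov))  # 1-sigma parameter uncertainties
```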
In Table\,\ref{table2} we provide the derived photometry for the new candidates, in Tables\,\ref{table3} and \ref{table4} for the previously known (dwarf) members of the Leo-I group.
The magnitude uncertainties are estimated to be $\approx$\,0.3\,mag \citep{2017A&A...602A.119M}. The main contributions to the error budget are the uncertainties related to foreground star removal ($\approx$\,0.2\,mag) and sky background estimation ($\approx$\,0.2\,mag). The uncertainties for $\langle \mu\rangle_{eff}$ are driven by the uncertainties in the measured total apparent magnitude ($\approx$\,0.3\,mag\,arcsec$^{-2}$). The error for $r_{eff}$ ($\approx$\,1.3\,arcsec) is given by the determination of the growth curve. Numerical uncertainties for the S\'ersic parameters are provided in the corresponding tables.
\begin{figure*}
\includegraphics[width=3.6cm]{dw1013+18_r.pdf}
\includegraphics[width=3.6cm]{dw1037+09_r.pdf}
\includegraphics[width=3.6cm]{dw1040+06_r.pdf}
\includegraphics[width=3.6cm]{dw1044+11_r.pdf}
\includegraphics[width=3.6cm]{dw1045+13_r.pdf}\\
\includegraphics[width=3.6cm]{dw1045+14a_r.pdf}
\includegraphics[width=3.6cm]{dw1045+14b_r.pdf}
\includegraphics[width=3.6cm]{dw1045+16_r.pdf}
\includegraphics[width=3.6cm]{dw1047+16_r.pdf}
\includegraphics[width=3.6cm]{dw1048+13_r.pdf}\\
\includegraphics[width=3.6cm]{dw1049+12a_r.pdf}
\includegraphics[width=3.6cm]{dw1049+12b_r.pdf}
\includegraphics[width=3.6cm]{dw1049+15_r.pdf}
\includegraphics[width=3.6cm]{dw1051+11_r.pdf}
\includegraphics[width=3.6cm]{dw1055+11_r.pdf}\\
\includegraphics[width=3.6cm]{dw1059+11_r.pdf}
\includegraphics[width=3.6cm]{dw1101+11_r.pdf}
\includegraphics[width=3.6cm]{dw1109+18_r.pdf}
\includegraphics[width=3.6cm]{dw1110+18_r.pdf}
\includegraphics[width=3.6cm]{dw1116+14_r.pdf}\\
\includegraphics[width=3.6cm]{dw1116+15a_r.pdf}
\includegraphics[width=3.6cm]{dw1116+15b_r.pdf}
\includegraphics[width=3.6cm]{dw1117+12_r.pdf}
\includegraphics[width=3.6cm]{dw1117+15_r.pdf}
\includegraphics[width=3.6cm]{dw1118+13a_r.pdf}\\
\includegraphics[width=3.6cm]{dw1118+13b_r.pdf}
\includegraphics[width=3.6cm]{dw1123+13_r.pdf}
\includegraphics[width=3.6cm]{dw1127+13_r.pdf}
\includegraphics[width=3.6cm]{dw1130+20_r.pdf}
\includegraphics[width=3.6cm]{dw1131+14_r.pdf}\\
\includegraphics[width=3.6cm]{dw1137+16_r.pdf}
\includegraphics[width=3.6cm]{dw1140+17_r.pdf}
\includegraphics[width=3.6cm]{dw1145+14_r.pdf}
\includegraphics[width=3.6cm]{dw1148+12_r.pdf}
\includegraphics[width=3.6cm]{dw1148+16_r.pdf}\\
\includegraphics[width=3.6cm]{dw1151+16_r.pdf}
\caption{Surface brightness profiles of all new dwarf galaxy candidates in the $r$ band and the best-fitting S\'ersic profiles with $1\sigma$ confidence intervals.}
\label{sbp}
\end{figure*}
\begin{table*}[ht]
\caption{Photometric and structural parameters of the new dwarf candidates in the surveyed region of the Leo-I group.}
\small
\centering
\setlength{\tabcolsep}{3pt}
\begin{tabular}{lccrrrrcrccrr}
\hline\hline
Name & ${g_{tot}}$ & ${r_{tot}}$ & $A_g$ & $A_r$ & $M_{r}$ & $(g-r)_{0,tot}$ & $\mu_{0,r}$ & $r_{0,r}$ & $n_r$ & $\langle\mu\rangle_{eff,r}$ & $r_{eff,r}$ & $\log r_{eff,r}$\\
& mag & mag & mag & mag & mag & mag & mag arcsec$^{-2}$ &arcsec & & mag arcsec$^{-2}$ &arcsec & $\log$ pc \\
(1)& (2) & (3) & (4) & (5) & (6) & (7) &(8) & (9) & (10) & (11) & (12) & (13)\\
\hline \\
M\,96 subgroup & & & & & & & & &\\
dw1037+09 & 17.61 & 17.09 & 0.080 & 0.056 & -13.04 & 0.496 & $24.04 \pm 0.05 $ & $9.96 \pm 0.38 $ & $1.68 \pm 0.14 $ & 25.44 & 18.7 & 2.97\\
dw1044+11 & 19.39 & 19.17 & 0.088 & 0.061 & -10.97 & 0.200 & $25.42 \pm 0.45 $ & $10.87 \pm 5.89 $ & $0.87 \pm 0.40 $ & 26.59 & 12.1 & 2.78\\
dw1045+14a & 19.02 & 18.68 & 0.097 & 0.067 & -11.46 & 0.313 & $22.79 \pm 0.63 $ & $1.25 \pm 1.14 $ & $0.56 \pm 0.16 $ & 24.87 & 6.92 & 2.54\\
dw1045+14b & 19.79 & 19.29 & 0.094 & 0.065 & -10.85 & 0.470 & $24.50 \pm 0.17 $ & $5.24 \pm 0.83 $ & $1.13 \pm 0.18 $ & 25.39 & 6.61 & 2.52\\
dw1045+16 & 18.50 & 17.62 & 0.085 & 0.059 & -12.51 & 0.854 & $24.59 \pm 0.30 $ & $8.39 \pm 3.22 $ & $0.81 \pm 0.24 $ & 26.43 & 23.0 & 3.06\\
dw1045+13 & 18.81 & 18.08 & 0.110 & 0.076 & -12.07 & 0.696 & $24.96 \pm 0.15 $ & $7.41 \pm 0.70 $ & $1.92 \pm 0.52 $ & 26.59 & 20.0 & 3.00\\
dw1047+16 & 18.07 & 17.91 & 0.091 & 0.063 & -12.23 & 0.128 & $22.77 \pm 0.55 $ & $1.32 \pm 1.18 $ & $0.49 \pm 0.13 $ & 25.17 & 11.2 & 2.75\\
dw1048+13 & 19.83 & 18.67 & 0.111 & 0.077 & -11.48 & 1.126 & $25.53 \pm 0.10 $ & $13.63 \pm 0.64 $ & $2.94 \pm 0.78 $ & 26.19 & 12.6 & 2.80\\
dw1049+12a & 19.39 & 18.98 & 0.088 & 0.061 & -11.16 & 0.386 & $23.78 \pm 0.49 $ & $2.92 \pm 1.91 $ & $0.67 \pm 0.19 $ & 25.54 & 8.18 & 2.61\\
dw1049+15 & 18.56 & 17.88 & 0.088 & 0.061 & -12.26 & 0.655 & $24.30 \pm 0.12 $ & $9.38 \pm 0.90 $ & $1.42 \pm 0.23 $ & 25.08 & 10.9 & 2.74\\
dw1049+12b & 19.10 & 18.05 & 0.085 & 0.059 & -12.08 & 1.020 & $24.74 \pm 0.28 $ & $9.76 \pm 3.03 $ & $0.96 \pm 0.36 $ & 26.04 & 15.7 & 2.90\\
dw1051+11 & 17.85 & 16.95 & 0.092 & 0.063 & -13.19 & 0.872 & $25.34 \pm 0.07 $ & $16.76 \pm 0.63 $ & $4.15 \pm 1.20 $ & 26.20 & 28.2 & 3.15\\
dw1055+11 & 17.59 & 16.40 & 0.066 & 0.046 & -13.72 & 1.169 & $24.88 \pm 0.28 $ & $18.86 \pm 3.90 $ & $0.97 \pm 0.54 $ & 26.18 & 36.0 & 3.25\\
dw1059+11 & 18.98 & 18.60 & 0.060 & 0.041 & -11.51 & 0.359 & $24.61 \pm 0.11 $ & $9.73 \pm 0.72 $ & $1.68 \pm 0.26 $ & 25.02 & 7.65 & 2.58\\
dw1101+11 & 19.47 & 19.45 & 0.058 & 0.040 & -10.66 & 0.005 & $23.33 \pm 1.62 $ & $1.16 \pm 2.78 $ & $0.50 \pm 0.32 $ & 25.56 & 6.64 & 2.52\\
\\
Leo Triplet & & & & & & & & &\\
dw1116+14 & 20.33 & 19.57 & 0.071 & 0.049 & -10.56 & 0.742 & $25.67 \pm 0.13 $ & $10.63 \pm 0.58 $ & $3.28 \pm 1.11 $ & 25.81 & 7.08 & 2.55\\
dw1116+15a & 20.26 & 19.80 & 0.076 & 0.052 & -10.33 & 0.437 & $25.11 \pm 0.32 $ & $6.79 \pm 2.43 $ & $0.95 \pm 0.28 $ & 25.98 & 6.88 & 2.54\\
dw1116+15b & 20.42 & 19.31 & 0.068 & 0.047 & -10.81 & 1.091 & $25.33 \pm 0.26 $ & $8.85 \pm 2.61 $ & $1.00 \pm 0.32 $ & 27.02 & 13.8 & 2.84\\
dw1117+15 & 17.56 & 17.25 & 0.082 & 0.057 & -12.88 & 0.280 & $25.57 \pm 0.07 $ & $17.11 \pm 0.51 $ & $3.79 \pm 0.92 $ & 27.31 & 40.9 & 3.31\\
dw1117+12 & 21.22 & 19.87 & 0.073 & 0.050 & -10.25 & 1.322 & $25.24 \pm 0.52 $ & $7.29 \pm 4.29 $ & $0.93 \pm 0.51 $ & 26.10 & 7.02 & 2.54\\
dw1118+13a & 19.49 & 19.59 & 0.077 & 0.053 & -10.54 & -0.11 & $25.88 \pm 0.24 $ & $14.36 \pm 1.94 $ & $1.74 \pm 1.02 $ & 26.36 & 9.04 & 2.65\\
dw1118+13b & 18.15 & 17.78 & 0.069 & 0.047 & -12.34 & 0.341 & $25.33 \pm 0.13 $ & $15.09 \pm 1.66 $ & $1.35 \pm 0.24 $ & 26.45 & 21.5 & 3.03\\
dw1123+13 & 19.62 & 19.08 & 0.079 & 0.054 & -11.05 & 0.513 & $24.95 \pm 0.16 $ & $8.74 \pm 1.05 $ & $1.55 \pm 0.31 $ & 25.38 & 7.26 & 2.56\\
dw1127+13 & 19.76 & 18.85 & 0.093 & 0.064 & -11.28 & 0.872 & $25.51 \pm 0.13 $ & $12.85 \pm 0.95 $ & $1.72 \pm 0.52 $ & 26.10 & 11.2 & 2.75\\
\\
Field & & & & & & & & &\\
dw1013+18 & 18.02 & 17.65 & 0.106 & 0.073 & -12.50 & 0.340 & $22.67 \pm 0.10 $ & $3.29 \pm 0.38 $ & $0.76 \pm 0.04 $ & 24.16 & 7.99 & 2.60\\
dw1040+06 & 17.96 & 18.22 & 0.120 & 0.083 & -11.94 & -0.29 & $24.36 \pm 0.12 $ & $8.18 \pm 0.88 $ & $1.20 \pm 0.17 $ & 25.37 & 10.7 & 2.73\\
dw1109+18 & 17.73 & 17.18 & 0.077 & 0.054 & -12.95 & 0.523 & $23.25 \pm 0.06 $ & $7.35 \pm 0.41 $ & $1.07 \pm 0.06 $ & 24.17 & 9.94 & 2.70\\
dw1110+18 & 18.00 & 17.39 & 0.077 & 0.053 & -12.74 & 0.587 & $24.30 \pm 0.11 $ & $12.15 \pm 1.16 $ & $1.20 \pm 0.20 $ & 25.15 & 14.2 & 2.85\\
dw1130+20 & 17.53 & 17.28 & 0.068 & 0.047 & -12.84 & 0.220 & $22.63 \pm 0.14 $ & $2.75 \pm 0.58 $ & $0.61 \pm 0.05 $ & 24.49 & 10.9 & 2.74\\
dw1131+14 & 19.51 & 18.87 & 0.171 & 0.118 & -11.32 & 0.581 & $24.59 \pm 0.09 $ & $7.39 \pm 0.42 $ & $1.97 \pm 0.24 $ & 25.22 & 7.43 & 2.57\\
dw1137+16 & 17.32 & 16.77 & 0.097 & 0.067 & -13.37 & 0.523 & $24.49 \pm 0.12 $ & $14.19 \pm 1.99 $ & $0.89 \pm 0.10 $ & 25.89 & 26.6 & 3.12\\
dw1140+17 & 18.54 & 17.89 & 0.098 & 0.068 & -12.25 & 0.623 & $24.85 \pm 0.20 $ & $13.96 \pm 3.30 $ & $0.87 \pm 0.23 $ & 25.67 & 14.3 & 2.85\\
dw1145+14 & 19.86 & 19.20 & 0.147 & 0.101 & -10.97 & 0.613 & $24.02 \pm 0.12 $ & $4.94 \pm 0.49 $ & $1.39 \pm 0.17 $ & 24.65 & 4.89 & 2.39\\
dw1148+12 & 17.95 & 17.91 & 0.119 & 0.082 & -12.25 & 0.002 & $24.58 \pm 0.20 $ & $9.02 \pm 1.75 $ & $1.02 \pm 0.29 $ & 25.78 & 14.9 & 2.87\\
dw1148+16 & 17.49 & 17.34 & 0.150 & 0.104 & -12.84 & 0.109 & $22.85 \pm 0.06 $ & $3.77 \pm 0.32 $ & $0.78 \pm 0.04 $ & 24.61 & 11.3 & 2.75\\
dw1151+16 & 18.04 & 18.27 & 0.109 & 0.075 & -11.88 & -0.25 & $22.92 \pm 0.07 $ & $3.92 \pm 0.25 $ & $1.09 \pm 0.05 $ & 23.86 & 5.24 & 2.42\\
\hline\hline
\end{tabular}
\tablefoot{The quantities listed are as follows:
(1) name of candidate;
(2+3) total apparent magnitude in the $g$ and $r$ bands;
(4+5) galactic extinction in the $g$ and $r$ bands \citep{2011ApJ...737..103S};
(6) extinction corrected absolute $r$ band magnitude, using a distance modulus of $m-M=30.06$\,mag;
(7) integrated and extinction corrected $g-r$ color;
(8) S\'ersic central surface brightness in the $r$ band;
(9) S\'ersic scale length in the $r$ band;
(10) S\'ersic curvature index in the $r$ band;
(11) mean effective surface brightness in the $r$ band;
(12) effective radius in the $r$ band;
(13) the logarithm of the effective radius in the $r$ band, converted to pc with a distance modulus of $m-M=30.06$\,mag. }
\label{table2}
\end{table*}
\section{Discussion}
\label{disc}
In the following we discuss the membership of the candidates based on their morphological parameters, the contamination of the field by nearby background galaxies, and the potential discovery of ultra diffuse galaxies (UDGs).
\subsection{Membership estimation}
The standard approach to establish membership based on morphological properties is to compare the structural parameters of the candidates with those of known dwarf galaxies \citep[e.g.,][]{2000AJ....119..593J,2009AJ....137.3009C,2014ApJ...787L..37M,2017A&A...597A...7M,2017A&A...602A.119M}. If the objects fit into the ($\langle \mu\rangle_{eff} $ -- $M$), ($r_{eff}$ -- $M$), ($\mu_0$ -- $M$), and ($n$ -- $M$) scaling relations defined by the known dwarf galaxies in the local Universe, it is reasonable to consider them as dwarf galaxy candidates. The ($\langle \mu\rangle_{eff} $ -- $M$) and ($\mu_0$ -- $M$) relations are especially crucial because the surface brightness is independent of the assumed distance of the object, making it possible to assess membership at a given distance (see \citealt{2017A&A...597A...7M}, Fig.\,11, for what happens to galaxies with unreasonable distance estimates in these relations).
To transform our $gr$ photometry to the Johnson system we used the following equations \citep{SloanConv}:
$$V = g - 0.5784\cdot(g - r)_0 - 0.0038 $$
$$B = r + 1.3130\cdot(g - r)_0 + 0.2271$$
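These transformations are simple linear relations in the $(g-r)_0$ color; as an illustrative sketch (the function name is ours):

```python
def gr_to_johnson(g, r):
    """Transform SDSS g, r magnitudes to Johnson B, V using the linear
    relations quoted above; inputs are assumed extinction corrected."""
    gr = g - r
    v = g - 0.5784 * gr - 0.0038
    b = r + 1.3130 * gr + 0.2271
    return b, v
```

Note that the $B$ relation is algebraically equivalent to $B = g + 0.3130\,(g-r)_0 + 0.2271$, since $g = r + (g-r)$.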
The structural parameters of the newly found dwarf candidates, together with the previously discovered Leo-I members and the Local Group dwarf population are plotted in Fig.\,\ref{relations}.
\om{The structural parameters of the dwarf candidates fall into the relations defined by the Local Group dwarfs, thus we can assume that the candidates are indeed dwarf members of the Leo-I group.}
Additionally, we show the 44 ultra-diffuse galaxy (UDG) candidates in the Coma Cluster discovered by \citet{2015ApJ...798L..45V} (only $g$ band photometry is given, therefore we assume a color index of $(g - r) = 0.6$\,mag to transform them into $V$-band magnitudes). UDGs typically have an effective radius $r_{eff}>1.5$\,kpc and a central surface brightness fainter than $\mu_g=24.0$\,mag\,arcsec$^{-2}$ \citep{2015ApJ...798L..45V}.
\begin{figure*}[ht]
\centering
\includegraphics[width=9cm]{reffM.pdf}
\includegraphics[width=9cm]{muM.pdf}
\caption{The ($r_{eff}$ -- $M$) and ($\mu_0$ -- $M$) scaling relations for the newly discovered dwarf candidates (red squares), previously discovered dwarf members (gray squares), and the Local Group dwarf galaxy population (gray dots). The estimated \om{conservative} completeness limit\om{, as derived in \citet{2017A&A...602A.119M},} is indicated by the line. The UDG candidates discovered in Coma \citep{2015ApJ...798L..45V} are overlaid as black dots in the ($r_{eff}$ -- $M$) diagram\om{, together with the 1.5\,kpc size cut for UDGs (dashed line)}. }
\label{relations}
\end{figure*}
Dwarf galaxies can also be characterized by their color using the color-magnitude relation \citep[e.g.][]{2008AJ....135..380L,2017A&A...608A.142V}. Here we compare the $(g-r)_0$ colors of the Leo-I group dwarfs with those of other well studied systems in the LV where $gr$ photometry is available, i.e. the Centaurus group \citep{2015A&A...583A..79M,2017A&A...597A...7M} and the M\,101 group complex \citep{2017A&A...602A.119M}. The calculated mean $(g-r)_0$ colors and standard deviations for the three group populations are: $(g-r)_{0,Leo-I}=0.491\pm0.282$\,mag, $(g-r)_{0,Cen\,A}=0.463\pm0.258$\,mag, and $(g-r)_{0,M\,101}=0.472\pm0.190$\,mag. In Fig.\,\ref{colors} we show the color distribution as a function of total absolute $V$-magnitude for these different groups. \om{The dwarfs in the different galaxy groups follow a similar color distribution. We note that the extremely bluish colors ($g-r<0$) of some objects -- uncommon for dwarf galaxies -- as well as the scatter at the faint end, can arise from the photometric uncertainties.}
\begin{figure}[H]
\centering
\includegraphics[width=9cm]{colors.pdf}
\caption{Color-magnitude relation for the previously known Leo-I dwarf members (gray squares), the new Leo-I members (red squares), the Centaurus group members \citep[black dots,][]{2015A&A...583A..79M,2017A&A...597A...7M}, and the M\,101 group members \citep[blue crosses,][]{2017A&A...602A.119M}. Both early and late type dwarf galaxies were considered. }
\label{colors}
\end{figure}
In the following we discuss some individual candidates which show interesting features.\\
\textbf{dw1037+09:} This candidate has several knots within and around the galaxy, which could either be bright giant stars or globular clusters (GCs).\\
\textbf{dw1110+18:} As for dw1037+09, there are several knots sprinkled over the object, which could be bright giant stars or GCs. \\
\textbf{dw1130+20:} This galaxy has some bright knots, which could correspond to HII regions.
Under the assumption that all candidates are members of the Leo-I group, we can determine the galaxy luminosity function (see Fig.\,\ref{LF})
and compare it to other nearby galaxy group environments, i.e. the Centaurus group \citep{2015A&A...583A..79M,2017A&A...597A...7M}, the LG \citep{2012AJ....144....4M}, the M\,101 group \citep{1999A&AS..137..337B,2017A&A...602A.119M}, and the NGC\,2784 group \citep{2017ApJ...848...19P}. Among these five groups, the Leo-I group is the richest galaxy aggregate, with approximately 100 galaxies down to an absolute magnitude of $M_V=-10$, provided that all candidates are confirmed as members. \om{The Leo-I group has approximately twice as many dwarfs as the LG and a steeper faint-end slope of the LF, comparable to that of Cen\,A. The M\,101 and NGC\,2784 groups have shallower faint-end slopes. This indicates that galaxy groups with massive hosts have steeper faint ends of the LF. While the faint-end slopes of Leo-I and Cen\,A are comparable, the Leo-I group contains more bright galaxies in the range of $-16$ to $-14$\,mag in the $V$ band, making it richer (down to $M_V=-10$). In this range, the LF of Leo-I is comparable to that of the LG.}
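Constructing a cumulative LF like the one in Fig.\,\ref{LF} amounts to counting galaxies brighter than each magnitude; a minimal sketch (illustrative, not the paper's pipeline):

```python
import numpy as np

def cumulative_lf(abs_mags):
    """Return magnitudes sorted from bright (most negative) to faint,
    together with the cumulative count N(<M) of galaxies brighter than
    each magnitude."""
    m = np.sort(np.asarray(abs_mags, dtype=float))
    return m, np.arange(1, m.size + 1)
```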
\begin{figure}
\centering
\includegraphics[width=9cm]{LF.pdf}
\caption{Cumulative galaxy luminosity functions
for different galaxy groups in the Local Volume. Data taken from: Leo-I (this work), Centaurus group \citep{2015A&A...583A..79M,2017A&A...597A...7M}, LG \citep{2012AJ....144....4M}, NGC\,2784 group \citep{2017ApJ...848...19P}, and M\,101 group \citep{1999A&AS..137..337B,2017A&A...602A.119M}.
}
\label{LF}
\end{figure}
\subsection{Background contamination}
\label{back}
One fundamental challenge when searching for new dwarf galaxies is that survey fields are almost always contaminated by galaxy groups in the background. A prime example of such confusion is the massive elliptical galaxy NGC\,5485
with its many dwarf companions \citep{2011MNRAS.412.2498M}, situated $\approx$20\,Mpc behind the Local Volume galaxy M\,101 (7\,Mpc, \citealt{2015MNRAS.449.1171N}). Fig.\,8 in \citet{2016ApJ...833..168M} shows M\,101, the background elliptical NGC\,5485, and former M\,101 dwarf candidates \citep{2014ApJ...787L..37M} that actually belong to the background galaxy population. Of the seven dwarf candidates reported by \citet{2014ApJ...787L..37M}, only three were confirmed as M\,101 members with HST follow-up observations \citep{2017ApJ...837..136D}. Recently, more new dwarf candidates were reported around M\,101 \citep{2017ApJ...850..109B,2017A&A...602A.119M}, now awaiting confirmation as members by means of distance or velocity measurements. Some will potentially be associated with the background elliptical NGC\,5485.\\
The possibility of contamination prompted us to study the background of the Leo-I group in more detail. In \citet{2017A&A...597A...7M} we used the Cosmicflows-2 catalog \citep{2013AJ....146...86T} to determine the background contamination of the Centaurus group. Here we query the Cosmicflows-2 catalog for bright galaxies with absolute magnitudes $M_B<-19$ and radial velocities $v_{rad}<2000$\,km\,s$^{-1}$ within our survey footprint. Excluding the Leo-I galaxies, this search yielded 24 bright host galaxies potentially contaminating our survey.
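The selection of background hosts reduces to two simple cuts; as an illustrative sketch (the entries and field names below are invented stand-ins, not values from the real Cosmicflows-2 table):

```python
# Toy stand-in for the Cosmicflows-2 catalog; names and values are
# illustrative assumptions for demonstration only.
cf2 = [
    {"name": "NGC3607",  "M_B": -20.1, "v_rad": 930.0},
    {"name": "FaintGal", "M_B": -18.2, "v_rad": 800.0},   # too faint
    {"name": "FastGal",  "M_B": -19.5, "v_rad": 2500.0},  # too fast
]

# Selection used in the text: M_B < -19 and v_rad < 2000 km/s
hosts = [g["name"] for g in cf2 if g["M_B"] < -19 and g["v_rad"] < 2000]
```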
To test how these background galaxies pollute our detections, we searched for dwarf galaxies within 300\,kpc of each such host (approximately the virial radius) with the same methods as used in our search for Leo-I dwarfs, but without removing candidates that are near a background galaxy. Essentially, we searched for the candidates we had rejected as background sources. In Table\,\ref{table:app} we compiled the coordinates of the objects that would be considered dwarf candidates based on their morphology. In total we found 26 additional dwarf candidates, of which 20 are clustered around NGC\,3607 at a distance of $\sim20$\,Mpc. This indicates (a) that it is not feasible to include every object in the survey footprint as a Leo-I dwarf, and (b) that there will likely be some confusion between foreground and background, either by rejecting a foreground dwarf or by including a background dwarf.\\
\om{Some Leo-I dwarf candidates are close to both a background host and a Leo-I host. In such cases we added a note to Table\,\ref{table:1}. To the north of the Leo Triplet there are four Leo-I candidates (dw1116+14, dw1116+15a, dw1116+15b, and dw1117+15) clustered around NGC\,3596 (15\,Mpc). See Fig.\,\ref{fieldImageBack} for the distribution of the background dwarf galaxies. Distance and velocity measurements will be crucial to determine their memberships. Until then, the faint end of the LF will be affected by these uncertain cases.}
\begin{figure*}[ht]
\centering
\hspace*{-0.7cm}
\includegraphics[width=20cm]{fieldBack.pdf}
\caption{\om{Same as Fig.\,\ref{fieldImage}, but with background host galaxies (black squares) and background dwarf galaxies (blue crosses) clustered around these hosts, which would be considered Leo-I dwarfs based on their morphology were it not for their apparent proximity to these background hosts. See Section\,\ref{back} for details.}
}
\label{fieldImageBack}
\end{figure*}
\begin{table}[ht]
\centering
\setlength{\tabcolsep}{3pt}
\caption{Coordinates of the possible background dwarf galaxies in our survey footprint around bright host galaxies with $v<2000$\,km\,s$^{-1}$.}
\label{table:app}
\begin{tabular}{lcc}
\hline\hline
& $\alpha$ & $\delta$ \\
Name & (J2000) & (J2000) \\
\hline \\[-2mm]
NGC\,3227\_1&10:22:53 &$+$19:34:36 \\%411
NGC\,3227\_2 &10:24:43 &$+$19:57:16\\%440
NGC\,3227\_3& 10:25:50&$+$19:43:22\\%441
NGC\,3666\_1&11:24:45 &$+$11:20:04 \\%195
NGC\,3666\_2&11:24:10 &$+$11:25:12 \\%195
NGC\,3370\_1&10:46:47 &$+$17:16:18\\%359
NGC\,3607\_1 & 11:14:22&$+$18:02:38\\%395
NGC\,3607\_2 &11:14:26 &$+$18:22:30 \\%395
NGC\,3607\_3 &11:15:35 &$+$18:25:21\\%395
NGC\,3607\_4 & 11:15:36& $+$18:01:04\\%395
NGC\,3607\_5&11:15:48 & $+$18:04:40\\%395
NGC\,3607\_6&11:15:52 &$+$17:54:04 \\%395
NGC\,3607\_7&11:15:57 & $+$17:56:25\\%395
NGC\,3607\_8&11:16:11 &$+$17:57:04 \\%395
NGC\,3607\_9&11:16:18 &$+$18:35:39 \\%395
NGC\,3607\_10&11:16:28 &$+$18:11:35 \\%395
NGC\,3607\_11& 11:16:30&$+$18:19:27 \\%395
NGC\,3607\_12& 11:17:01 & $+$18:18:07\\%396
NGC\,3607\_13& 11:17:07&$+$17:19:09 \\%367
NGC\,3607\_14&11:17:16 &$+$18:46:27 \\%425
NGC\,3607\_15&11:17:22 & $+$17:59:50\\%396
NGC\,3607\_16& 11:18:21&$+$17:41:50 \\%396
NGC\,3607\_17&11:19:13 &$+$18:05:47 \\%396
NGC\,3607\_18&11:19:21 &$+$17:32:09 \\%367
NGC\,3607\_19&11:24:08 &$+$18:13:16\\%397
NGC\,3607\_20&11:31:01 &$+$15:54:48 \\%341
\hline\hline
\end{tabular}
\end{table}
\subsection{UDG candidates}
Originally discovered by \citet{1984AJ.....89..919S} and described as ``a new type of very large diameter (10,000 pc), low central surface brightness (>25 B mag/arcsec$^2$) galaxy, that comes in both early (i.e., dE) and late (i.e., Im V) types", this class of galaxies has since been renamed ultra diffuse galaxies \citep{2015ApJ...798L..45V} and has been found in many different environments \citep{2016A&A...590A..20V}, e.g., in clusters \citep{2015ApJ...798L..45V,2015ApJ...807L...2K} and in groups \citep{2016ApJ...833..168M}. Different possible formation scenarios have been proposed \citep[e.g.][]{2016MNRAS.459L..51A,2017MNRAS.466L...1D} and are under intense debate. \citet{2015ApJ...798L..45V} suggested classifying dwarf galaxies with $r_{eff}>1.5$\,kpc and a central surface brightness fainter than $\mu_g=24.0$\,mag\,arcsec$^{-2}$ as UDGs; however, this boundary is rather arbitrary and should be considered more as a guideline.\\ Studying the properties of the Leo-I members, we consider dw1055+11, dw1117+15, dw1051+11, KK\,96, and AGC\,215415 as UDG candidates. With $r_{eff}=1.3$\,kpc, dw1137+16 is still considerably large and could be of UDG type. Better photometry is needed to derive the structural parameters more accurately.
However, we note that if these objects lie in the foreground (e.g., in the Canes Venatici-I cloud), they would be closer to us and would therefore have smaller intrinsic sizes, making them common-sized dwarf galaxies.\\
\om{The UDG candidates are distributed in the outskirts of the aggregates and not in the central parts of the group. This is similar to what is found in galaxy clusters, where the UDG number density drops to nearly zero in the central regions because UDGs cannot survive the tidal forces there \citep{2016A&A...590A..20V}. We note, however, that it is not feasible to robustly assess the UDG distribution in Leo-I with only five or six candidates.}
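The UDG guideline above is easy to state operationally. Below is a minimal Python sketch (not from the paper; the function names and illustrative numbers are ours) that applies the \citet{2015ApJ...798L..45V} size and surface-brightness thresholds, and shows how the same angular size falls below the size criterion if the object lies in the foreground:

```python
import math

def r_eff_pc(r_eff_arcsec, distance_mpc):
    """Convert an angular effective radius to a physical one.

    Small-angle approximation: 1 arcsec subtends
    distance * pi / (180 * 3600) at distance D.
    """
    return r_eff_arcsec * distance_mpc * 1e6 * math.pi / (180.0 * 3600.0)

def is_udg_candidate(r_eff_arcsec, mu_central, distance_mpc):
    """Apply the (admittedly arbitrary) van Dokkum et al. (2015)
    guideline: r_eff > 1.5 kpc and a central surface brightness
    fainter than 24.0 mag/arcsec^2."""
    return r_eff_pc(r_eff_arcsec, distance_mpc) > 1500.0 and mu_central > 24.0

# The same object at the Leo-I distance (~10.3 Mpc, i.e. m-M = 30.06)
# versus moved to the foreground (e.g. ~4 Mpc):
print(is_udg_candidate(35.0, 24.5, 10.3))  # True  (~1750 pc)
print(is_udg_candidate(35.0, 24.5, 4.0))   # False (~680 pc)
```

This makes explicit the caveat in the text: membership in the UDG class depends directly on the assumed distance.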
\section{Conclusion}
\label{concl}
We have surveyed 500 square degrees of $gr$ images taken from SDSS within the extended region of the Leo-I group and found 36 new dwarf galaxy candidates. For every known member and new candidate we derived surface brightness photometry. Based on a comparison of their structural properties and morphology with those of other known dwarf galaxies in the nearby universe, we consider these candidates members of the Leo-I group, lying either in the vicinity of the M\,96 subgroup, the Leo Triplet, or in the nearby field.
To confirm their membership, follow-up observations are required to measure their radial velocities, their distances, or both. Some of the candidates are exceptionally large with low surface brightness, a characteristic of ultra diffuse galaxies. If these UDG candidates are confirmed to be Leo-I members, they would be among the closest UDGs to Earth and valuable targets for improving our understanding of this type of galaxy.
\begin{acknowledgements}
OM and BB are grateful to the Swiss National Science Foundation for financial support. HJ acknowledges the support of the Australian Research Council through Discovery Project DP150100862.
\end{acknowledgements}
\begin{sidewaystable*}
\caption{Photometric and structural parameters of the previously known Leo-I members.}
\small
\centering
\setlength{\tabcolsep}{3pt}
\begin{tabular}{lccccrrrrcrccrrlll}
\hline\hline
Name & && ${g_{tot}}$ & ${r_{tot}}$ & $A_g$ & $A_r$ & $M_{r}$ & $(g-r)_{0,tot}$ & $\mu_{0,r}$ & $r_{0,r}$ & $n_r$ & $\langle\mu\rangle_{eff,r}$ & $r_{eff,r}$ & $\log r_{eff,r}$ & Ref & $D$ & $v$\\
& && mag & mag & mag & mag & mag & mag & mag arcsec$^{-2}$ &arcsec & & mag arcsec$^{-2}$ &arcsec & $\log$ pc & & Mpc & km\,s$^{-1}$ \\
(1)& && (2) & (3) & (4) & (5) & (6) & (7) &(8) & (9) & (10) & (11) & (12) & (13) & (14) & (15) & (16)\\
\hline \\
M\,96 subgroup & & & & & & & & &\\
AGC\,205156 & 10:30:53 & $+$12:26:48 & 18.65 & 18.22 & 0.102 & 0.071 & -11.94 & 0.398 & $21.22 \pm 0.12 $ & $1.80 \pm 0.17 $ & $1.12 \pm 0.08 $ & 22.13 & 2.41 & 2.08 & (AA) & 10.43 (LV) & 915 (AA) \\
AGC\,202248& 10:34:56 & $+$11:29:31 & 17.17 & 16.92 & 0.103 & 0.071 & -13.23 & 0.220 & $22.10 \pm 0.09 $ & $5.15 \pm 0.36 $ & $1.28 \pm 0.07 $ & 22.87 & 6.17 & 2.49 & (AA) & & 1177 (AA) \\
NGC\,3299 & 10:36:24 & $+$12:42:25 & 13.08 & 12.48 & 0.082 & 0.057 & -17.66 & 0.575 & $21.06 \pm 0.02 $ & $22.48 \pm 0.55 $ & $0.77 \pm 0.02 $ & 22.09 & 33.3 & 3.22 & & & 604 (AA)\\
AGC\,205165 & 10:37:05&$+$15:20:13 & 15.92 & 15.39 & 0.123 & 0.085 & -14.77 & 0.490 & $21.85 \pm 0.02 $ & $7.83 \pm 0.16 $ & $0.97 \pm 0.01 $ & 22.97 & 13.0 & 2.81 & (AA) & & 724 (AA)\\
AGC\,200499 & 10:38:08 & $+$10:22:52 & 14.32 & 13.90 & 0.094 & 0.065 & -16.24 & 0.390 & $20.25 \pm 0.05 $ & $5.35 \pm 0.19 $ & $0.98 \pm 0.02 $ & 20.15 & 7.10 & 2.55 & (AA) & & 1175 (AA) \\
LeG\,04 &10:39:40&$+$12:44:07& 18.10 & 17.53 & 0.090 & 0.062 & -12.61 & 0.551 & $23.10 \pm 0.11 $ & $4.97 \pm 0.59 $ & $0.90 \pm 0.07 $ & 24.43 & 9.58 & 2.68 & (LV) & & \\
FS\,01 (LeG\,05) & 10:39:43 &$+$12:38:04 &16.74 & 16.14 & 0.084 & 0.058 & -13.99 & 0.573 & $21.50 \pm 0.03 $ & $4.04 \pm 0.15 $ & $0.82 \pm 0.02 $ & 22.91 & 8.99 & 2.65 & (FS) & & 780 (AA) \\
LeG\,06 &10:39:56&$+$13:54:33 & 17.12 & 16.60 & 0.117 & 0.081 & -13.56 & 0.481 & $23.90 \pm 0.03 $ & $14.32 \pm 0.36 $ & $1.50 \pm 0.06 $ & 24.59 & 15.7 & 2.90 & (LV) & & 1007 (AA) \\
UGC\,05812 &10:40:56&$+$12:28:21 & 15.07 & 14.56 & 0.080 & 0.056 & -15.58 & 0.486 & $21.97 \pm 0.02 $ & $13.08 \pm 0.22 $ & $1.17 \pm 0.03 $ & 22.97 & 19.1 & 2.98 & & & 1008 (AA)\\
FS\,04 &10:42:00&$+$12:20:05 &15.53 & 15.13 & 0.084 & 0.058 & -15.00 & 0.373 & $21.55 \pm 0.06 $ & $7.96 \pm 0.42 $ & $1.01 \pm 0.03 $ & 22.57 & 12.2 & 2.79 & (FS) & & 772 (AA)\\
LeG\,09 & 10:42:34&$+$12:09:02&17.03 & 16.41 & 0.082 & 0.057 & -13.72 & 0.587 & $24.38 \pm 0.08 $ & $18.37 \pm 1.46 $ & $1.05 \pm 0.09 $ & 25.29 & 23.8 & 3.07 & (LV) & & \\
LeG\,10 &10:43:55&$+$12:08:00 & 19.16 & 18.78 & 0.089 & 0.061 & -11.35 & 0.352 & $23.85 \pm 0.45 $ & $3.86 \pm 2.31 $ & $0.71 \pm 0.22 $ & 25.10 & 7.30 & 2.56 & (LV) & & \\
LeG\,11 & 10:44:02&$+$15:35:21 & 18.18 & 17.78 & 0.104 & 0.072 & -12.37 & 0.366 & $24.13 \pm 0.08 $ & $10.22 \pm 0.57 $ & $1.63 \pm 0.18 $ & 24.53 & 8.92 & 2.65 & (LV) & & \\
LeG\,12 & 10:44:07&$+$11:32:03 &19.20 & 18.75 & 0.098 & 0.068 & -11.40 & 0.421 & $23.60 \pm 0.75 $ & $2.07 \pm 2.36 $ & $0.57 \pm 0.25 $ & 25.07 & 7.33 & 2.56 & (LV) & & \\
AGC\,205445 & 10:44:35&$+$13:56:22 &16.11 & 15.59 & 0.103 & 0.072 & -14.56 & 0.487 & $21.71 \pm 0.03 $ & $6.48 \pm 0.20 $ & $0.92 \pm 0.02 $ & 22.93 & 11.7 & 2.77 & (AA) & & 633 (LV) \\
FS\,09 (LeG\,13) & 10:44:57&$+$11:55:00 & 17.59 & 17.10 & 0.073 & 0.050 & -13.03 & 0.466 & $23.13 \pm 0.11 $ & $8.16 \pm 0.67 $ & $1.39 \pm 0.15 $ & 23.79 & 8.70 & 2.64 & (FS) & & 871 (AA)\\
FS\,13 (LeG\,14) &10:46:14&$+$12:57:38 &18.49 & 17.83 & 0.081 & 0.056 & -12.31 & 0.638 & $24.25 \pm 0.05 $ & $9.57 \pm 0.34 $ & $1.70 \pm 0.13 $ & 24.92 & 10.4 & 2.72 & (FS) & & \\
FS\,14 (KK\,93) & 10:46:25&$+$14:01:25 &16.99 & 16.48 & 0.098 & 0.068 & -13.66 & 0.471 & $23.59 \pm 0.05 $ & $11.06 \pm 0.55 $ & $1.00 \pm 0.04 $ & 24.71 & 17.5 & 2.94 & (FS)& & \\
FS\,15 (LeG\,16) & 10:46:30&$+$11:45:21 &18.63 & 18.03 & 0.085 & 0.058 & -12.10 & 0.569 & $24.35 \pm 0.14 $ & $10.64 \pm 0.75 $ & $2.00 \pm 0.49 $ & 24.59 & 8.16 & 2.61 & (FS) & & \\
FS\,17 (LeG\,17) & 10:46:41&$+$12:19:37 &16.44 & 15.84 & 0.080 & 0.055 & -14.30 & 0.578 & $22.62 \pm 0.03 $ & $8.74 \pm 0.30 $ & $1.02 \pm 0.03 $ & 23.86 & 16.0 & 2.90 & (FS) & & 1030 (S+) \\
LeG\,18 &10:46:53&$+$12:44:26& 18.05 & 17.72 & 0.073 & 0.051 & -12.40 & 0.304 & $24.44 \pm 0.52 $ & $4.65 \pm 4.62 $ & $0.47 \pm 0.16 $ & 26.31 & 20.8 & 3.02 & (TT) & & 636 (AA) \\
FS\,20 (LeG\,19) &10:46:55&$+$12:47:19 &18.37 & 17.12 & 0.074 & 0.052 & -13.01 & 1.228 & $23.67 \pm 0.11 $ & $7.00 \pm 0.73 $ & $1.11 \pm 0.11 $ & 25.43 & 18.3 & 2.96 & (FS) & & \\
FS\,21 (KK94) & 10:46:57&$+$12:59:54 & 17.72 & 16.88 & 0.098 & 0.068 & -13.26 & 0.809 & $24.47 \pm 0.05 $ & $17.14 \pm 0.70 $ & $1.37 \pm 0.09 $ & 25.14 & 17.9 & 2.95 & (FS) & & 832 (H+)\\
Le\,G21 &10:47:01&$+$12:57:39& 18.60 & 18.18 & 0.096 & 0.066 & -11.96 & 0.391 & $24.46 \pm 0.07 $ & $10.03 \pm 0.46 $ & $1.86 \pm 0.23 $ & 24.90 & 8.81 & 2.64 & (TT) & & 843 (AA) \\
DDO\,088 & 10:47:22&$+$14:04:15 &13.85 & 13.35 & 0.114 & 0.079 & -16.16 & 0.465 & $22.07 \pm 0.02 $ & $25.52 \pm 0.36 $ & $1.14 \pm 0.02 $ & 22.95 & 33.1 & 3.09 & & 7.73 (LV) & 573 (H+) \\
CGCG\,066-026 &10:48:54&$+$14:07:28 &15.27 & 14.64 & 0.132 & 0.092 & -15.52 & 0.580 & $20.35 \pm 0.02 $ & $3.76 \pm 0.09 $ & $0.73 \pm 0.01 $ & 22.15 & 12.6 & 2.80 & & & 541 (LV) \\
FS\,40 (LeG\,22) &10:49:37&$+$11:21:06 &17.79 & 17.16 & 0.102 & 0.071 & -12.99 & 0.599 & $24.40 \pm 0.09 $ & $13.55 \pm 1.03 $ & $1.27 \pm 0.14 $ & 25.20 & 16.2 & 2.91 & (FS) & & \\
LeG\,23 & 10:50:09&$+$13:29:02&19.66 & 19.25 & 0.104 & 0.072 & -10.89 & 0.379 & $24.37 \pm 0.19 $ & $5.35 \pm 0.76 $ & $1.44 \pm 0.31 $ & 25.09 & 5.85 & 2.47 & (LV) & & \\
UGC\,05944 &10:50:19&$+$13:16:18 &14.94 & 14.35 & 0.099 & 0.068 & -15.93 & 0.565 & $21.50 \pm 0.01 $ & $8.49 \pm 0.11 $ & $0.79 \pm 0.02 $ & 23.06 & 22.0 & 3.07 & & 11.07 (R+) & 1073 (LV) \\
KK\,96 &10:50:27&$+$12:21:34 &16.65 & 16.07 & 0.084 & 0.058 & -14.06 & 0.547 & $24.32 \pm 0.06 $ & $19.46 \pm 1.24 $ & $1.05 \pm 0.09 $ & 25.37 & 28.8 & 3.16 & (LV) & & \\
LeG\,26 & 10:51:21&$+$12:50:56&16.30 & 15.75 & 0.079 & 0.054 & -14.38 & 0.534 & $22.48 \pm 0.03 $ & $9.52 \pm 0.25 $ & $1.05 \pm 0.02 $ & 23.52 & 14.3 & 2.85 & (LV) & & 630 (LV)\\
AGC\,205540& 10:51:31&$+$14:06:53 & 17.75 & 17.18 & 0.107 & 0.074 & -12.97 & 0.540 & $22.06 \pm 0.04 $ & $4.27 \pm 0.15 $ & $1.24 \pm 0.04 $ & 22.99 & 5.77 & 2.46 & (AA) & & 832 (LV)\\
AGC\,205544 & 10:52:05&$+$15:01:50 & 16.86 & 16.27 & 0.072 & 0.050 & -13.85 & 0.561 & $21.53 \pm 0.02 $ & $3.94 \pm 0.11 $ & $0.85 \pm 0.01 $ & 22.91 & 8.48 & 2.63 & (AA) & & 828 (LV)\\
AGC\,202456 & 10:52:19&$+$11:02:35&15.94 & 15.37 & 0.078 & 0.054 & -14.76 & 0.547 & $20.95 \pm 0.02 $ & $4.88 \pm 0.09 $ & $0.90 \pm 0.01 $ & 22.24 & 9.44 & 2.67 & (AA) & &824 (LV) \\
LeG\,27 & 10:52:20&$+$14:42:26&18.15 & 17.27 & 0.073 & 0.050 & -12.86 & 0.856 & $23.40 \pm 0.05 $ & $7.29 \pm 0.33 $ & $1.38 \pm 0.08 $ & 24.52 & 11.2 & 2.75 & (LV) & & \\
LeG\,28 &10:53:01&$+$10:22:43 &17.11 & 16.43 & 0.083 & 0.057 & -13.70 & 0.646 & $23.03 \pm 0.04 $ & $10.15 \pm 0.34 $ & $1.17 \pm 0.04 $ & 23.83 & 12.0 & 2.78 & (LV) & & \\
LSBCD\,640-12 &10:55:56&$+$12:20:22 &17.21 & 16.68 & 0.060 & 0.042 & -13.44 & 0.512 & $23.51 \pm 0.08 $ & $11.14 \pm 0.81 $ & $1.15 \pm 0.09 $ & 24.37 & 13.7 & 2.84 & (S+) & & 847 (AA) \\
LSBCD\,640-13 &10:56:14&$+$12:00:35 & 16.20 & 15.85 & 0.065 & 0.045 & -14.27 & 0.332 & $22.79 \pm 0.03 $ & $11.42 \pm 0.26 $ & $1.59 \pm 0.06 $ & 23.69 & 14.7 & 2.87 & (S+) & & 989 (S+) \\
LSBCD\,640-14 &10:58:10&$+$11:59:53& 17.79 & 17.16 & 0.055 & 0.038 & -12.95 & 0.611 & $24.13 \pm 0.06 $ & $11.72 \pm 0.48 $ & $1.68 \pm 0.17 $ & 24.95 & 14.3 & 2.86 & (S+) & & \\
AGC\,205278 &10:58:52&$+$14:07:47& 16.66 & 16.32 & 0.060 & 0.042 & -14.07 & 0.317 & $21.87 \pm 0.03 $ & $4.62 \pm 0.14 $ & $0.87 \pm 0.02 $ & 23.17 & 9.35 & 2.72 & (AA) & & 686 (AA)\\
LeG\,33 &11:00:45&$+$14:10:21& 18.98 & 18.29 & 0.061 & 0.042 & -11.83 & 0.667 & $23.78 \pm 0.13 $ & $5.69 \pm 0.65 $ & $1.19 \pm 0.16 $ & 24.70 & 7.64 & 2.58 & (LV) & & \\
LSBCD\,640-08 &11:00:52&$+$13:52:53& 16.19 & 15.64 & 0.053 & 0.037 & -14.47 & 0.530 & $22.25 \pm 0.04 $ & $7.25 \pm 0.31 $ & $0.87 \pm 0.02 $ & 23.70 & 16.2 & 2.91 & (S+) & & \\
CGCG\,066-109 & 11:04:26&$+$11:45:20& 15.69 & 15.31 & 0.050 & 0.034 & -14.77 & 0.356 & $22.26 \pm 0.05 $ & $11.88 \pm 0.43 $ & $1.26 \pm 0.04 $ & 23.00 & 13.7 & 2.83 & & & 777 (AA) \\
\hline\hline
\end{tabular}
\tablefoot{The quantities listed are as follows:
(1) name of galaxy;
(2+3) total apparent magnitude in the $g$ and $r$ bands;
(4+5) galactic extinction in the $g$ and $r$ bands \citep{2011ApJ...737..103S};
(6) extinction corrected absolute $r$ band magnitude, using a distance modulus of $M-m=30.06$\,mag;
(7) integrated and extinction corrected $g-r$ color;
(8) S\'ersic central surface brightness in the $r$ band;
(9) S\'ersic scale length in the $r$ band;
(10) S\'ersic curvature index in the $r$ band;
(11) mean effective surface brightness in the $r$ band;
(12) effective radius in the $r$ band;
(13) the logarithm of the effective radius in the $r$ band, converted to pc with a distance modulus of $M-m=30.06$\,mag;
(14+15+16) reference for original discovery, distance measurement, and velocity measurement: (AA) \citep{2011AJ....142..170H}, (LV) \citep{2004ARep...48..267K,2004AJ....127.2031K,2013AJ....145..101K}, (FS) \citep{1990AJ....100....1F}, (S+) \citep{1997ApJS..111..233S}, (R+) \citep{2005A&A...437..823R},
(S+) \citep{1992MNRAS.258..334S}, (TT) \citep{2002MNRAS.335..712T}, and (H+) \citep{2003A&A...401..483H}
}
\label{table3}
\end{sidewaystable*}
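The photometric conversions described in the table notes can be summarized in a short sketch (Python; illustrative only — where a per-object distance is available, it is adopted instead of the group distance modulus, so individual table entries may differ slightly):

```python
import math

DIST_MOD = 30.06  # adopted Leo-I distance modulus (mag), from the table notes

def absolute_mag(m_app, extinction, dist_mod=DIST_MOD):
    """Extinction-corrected absolute magnitude: M = m - A - (m - M)."""
    return m_app - extinction - dist_mod

def log_r_eff_pc(r_eff_arcsec, dist_mod=DIST_MOD):
    """log10 of the effective radius in pc, via the distance implied
    by the distance modulus: D[pc] = 10**((dist_mod + 5) / 5)."""
    d_pc = 10.0 ** ((dist_mod + 5.0) / 5.0)
    r_pc = r_eff_arcsec * d_pc * math.pi / (180.0 * 3600.0)
    return math.log10(r_pc)

# Round, illustrative numbers (not a specific table row):
print(round(absolute_mag(17.0, 0.06), 2))  # -13.12
print(round(log_r_eff_pc(10.0), 2))        # 2.7
```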
\begin{sidewaystable*}
\caption{Table\,\ref{table3} continued.}
\small
\centering
\setlength{\tabcolsep}{3pt}
\begin{tabular}{lccccrrrrcrccrrlll}
\hline\hline
Name & && ${g_{tot}}$ & ${r_{tot}}$ & $A_g$ & $A_r$ & $M_{r}$ & $(g-r)_{0,tot}$ & $\mu_{0,r}$ & $r_{0,r}$ & $n_r$ & $\langle\mu\rangle_{eff,r}$ & $r_{eff,r}$ & $\log r_{eff,r}$ & Ref & $D$ & $v$\\
& && mag & mag & mag & mag & mag & mag & mag arcsec$^{-2}$ &arcsec & & mag arcsec$^{-2}$ &arcsec & $\log$ pc & & Mpc & km\,s$^{-1}$ \\
(1)& && (2) & (3) & (4) & (5) & (6) & (7) &(8) & (9) & (10) & (11) & (12) & (13) & (14) & (15) & (16)\\
\hline \\
Leo Triplet & & & & & & & & &\\
AGC\,202256 & 11:14:45&$+$12:38:52& 17.26 & 16.87 & 0.063 & 0.044 & -13.37 & 0.368 & $21.73 \pm 0.12 $ & $4.07 \pm 0.38 $ & $1.08 \pm 0.06 $ & 22.71 & 5.86 & 2.49 & (AA) & 11.0 (LV) & 630 (AA) \\
IC\,2684 & 11:17:01&$+$13:05:57& 15.38 & 14.79 & 0.091 & 0.063 & -15.35 & 0.566 & $20.91 \pm 0.04 $ & $3.75 \pm 0.18 $ & $0.65 \pm 0.01 $ & 23.12 & 18.5 & 2.97 & & & 588 (AA) \\
AGC\,215354 &11:19:16&$+$14:17:24& 17.23 & 16.69 & 0.073 & 0.051 & -13.44 & 0.522 & $21.33 \pm 0.04 $ & $2.92 \pm 0.13 $ & $0.85 \pm 0.02 $ & 22.76 & 6.54 & 2.51 & (AA) & & 790 (LV)\\
DGSAT-1$^*$ & 11:21:37&$+$13:26:50 & 19.63 & 18.92 & 0.085 & 0.059 & -11.21 & 0.683 & $24.81 \pm 0.18 $ & $8.52 \pm 1.23 $ & $1.36 \pm 0.38 $ & 25.27 & 7.43 & 2.57 & (J+) & &\\
AGC\,213436 & 11:22:24&$+$12:58:46& 16.40 & 15.85 & 0.082 & 0.057 & -14.28 & 0.522 & $20.78 \pm 0.03 $ & $2.05 \pm 0.08 $ & $0.63 \pm 0.01 $ & 22.85 & 9.98 & 2.70 & (AA) & & 626 (LV) \\
IC\,2787 & 11:23:19&$+$13:37:47& 15.02 & 14.45 & 0.085 & 0.058 & -15.68 & 0.538 & $20.49 \pm 0.02 $ & $3.11 \pm 0.09 $ & $0.60 \pm 0.01 $ & 22.73 & 18.0 & 2.95 & & & 708 (LV) \\
IC\,2791 & 11:23:38&$+$12:53:46& 16.74 & 16.35 & 0.088 & 0.061 & -13.79 & 0.363 & $21.51 \pm 0.06 $ & $3.73 \pm 0.23 $ & $0.87 \pm 0.03 $ & 22.97 & 8.40 & 2.62 && & 666 (AA)\\
AGC\,215415 &11:24:34&$+$12:40:30& 18.18 & 17.26 & 0.125 & 0.087 & -12.91 & 0.881 & $25.63 \pm 0.08 $ & $21.10 \pm 0.88 $ & $3.59 \pm 1.06 $ & 26.49 & 28.0 & 3.15 & (AA) & & 1002 (AA)\\
KKH\,68 &11:30:53&$+$14:08:46&16.22 & 15.83 & 0.126 & 0.087 & -13.89 & 0.351 & $22.84 \pm 0.07 $ & $11.03 \pm 0.64 $ & $1.07 \pm 0.05 $ & 23.89 & 16.3 & 2.82 & (KKH) &8.50 (LV) & 880 (AA) \\
\\
Field & & & & & & & & &\\
UGC\,05456 &10:07:19&$+$10:21:48& 13.41 & 13.13 & 0.133 & 0.092 & -17.06 & 0.241 & $20.31 \pm 0.04 $ & $10.54 \pm 0.51 $ & $0.88 \pm 0.04 $ & 21.31 & 17.2 & 2.94 & & 10.52 (LV) & 536 (AA)\\
CGCG\,095-078 &10:58:02&$+$19:30:19& 15.34 & 14.99 & 0.084 & 0.058 & -15.40 & 0.322 & $21.03 \pm 0.02 $ & $6.87 \pm 0.12 $ & $1.12 \pm 0.01 $ & 21.95 & 9.80 & 2.74 & & 11.70 (LV) &652 (LV)\\
KKH\,67 & 11:23:03&$+$21:19:18& 17.49 & 17.34 & 0.077 & 0.053 & -12.70 & 0.130 & $24.75 \pm 0.04 $ & $18.15 \pm 0.48 $ & $1.97 \pm 0.13 $ & 25.08 & 14.1 & 2.83 & (KKH) & & \\
AGC\,213091 &11:29:35&$+$10:48:34&17.68 & 17.19 & 0.119 & 0.083 & -12.46 & 0.447 & $23.10 \pm 0.03 $ & $7.71 \pm 0.18 $ & $1.49 \pm 0.04 $ & 23.75 & 8.15 & 2.51 & (AA) & 8.22 (LV) &743 (LV) \\
KKH\,69 &11:34:53&$+$11:01:07& 16.51 & 16.17 & 0.082 & 0.057 & -13.22 & 0.313 & $23.54 \pm 0.02 $ & $16.55 \pm 0.27 $ & $1.77 \pm 0.05 $ & 24.02 & 14.8 & 2.72 & (KKH) & 7.4 (LV) & 881 (LV)\\
LV\,J1149+1715 &11:49:06&$+$17:15:20& 17.44 & 17.43 & 0.127 & 0.088 & -11.91 & -0.02 & $23.37 \pm 0.10 $ & $7.03 \pm 0.65 $ & $1.13 \pm 0.10 $ & 24.38 & 9.82 & 2.52 & (LV) & 7.11 (LV) & 623 (LV) \\
AGC\,215145 &11:54:12&$+$12:26:04& 17.95 & 17.83 & 0.101 & 0.070 & -11.80 & 0.096 & $24.39 \pm 0.05 $ & $11.36 \pm 0.30 $ & $2.42 \pm 0.20 $ & 24.71 & 9.51 & 2.57 & (HI) & 8.20 (i) & 1004 (AA)\\
\hline\hline
\end{tabular}
\tablefoot{
*: full name: NGC3628-DGSAT-1.
The quantities listed are as follows:
(1) name of galaxy;
(2+3) total apparent magnitude in the $g$ and $r$ bands;
(4+5) galactic extinction in the $g$ and $r$ bands \citep{2011ApJ...737..103S};
(6) extinction corrected absolute $r$ band magnitude, using a distance modulus of $M-m=30.06$\,mag;
(7) integrated and extinction corrected $g-r$ color;
(8) S\'ersic central surface brightness in the $r$ band;
(9) S\'ersic scale length in the $r$ band;
(10) S\'ersic curvature index in the $r$ band;
(11) mean effective surface brightness in the $r$ band;
(12) effective radius in the $r$ band;
(13) the logarithm of the effective radius in the $r$ band, converted to pc with a distance modulus of $M-m=30.06$\,mag;
(14+15+16) reference for original discovery, distance measurement, and velocity measurement: (J+) \citep{2016A&A...588A..89J}, (AA) \citep{2011AJ....142..170H}, (LV) \citep{2004ARep...48..267K,2004AJ....127.2031K,2013AJ....145..101K}, (KKH) \citep{2001A&A...366..428K}, and (HI) \citep{2006MNRAS.371.1855W}.
}
\label{table4}
\end{sidewaystable*}
\bibliographystyle{aa}
\section{Introduction}
\label{sec:intro}
Automatic summarization has enjoyed wide popularity in natural
language processing due to its potential for various information
access applications. Examples include tools which help users navigate
and digest web content (e.g.,~news, social media, product reviews),
question answering, and personalized recommendation engines. Single
document summarization --- the task of producing a shorter version of
a document while preserving its information content --- is perhaps the
most basic of summarization tasks that have been identified over the
years (see \citeauthor{Nenkova:McKeown:2011},
\citeyear{Nenkova:McKeown:2011} for a comprehensive overview).
Modern approaches to single document summarization are data-driven,
taking advantage of the success of neural network architectures and
their ability to learn continuous features without recourse to
preprocessing tools or linguistic annotations. \emph{Abstractive}
summarization involves various text rewriting operations
(e.g.,~substitution, deletion, reordering) and has been recently
framed as a sequence-to-sequence problem
\cite{sutskever-nips14}. Central in most approaches
\cite{rush-acl15,chenIjcai-16,nallapati-signll16,see-acl17,tanwan-acl17,paulus-socher-arxiv17}
is an encoder-decoder architecture modeled by recurrent neural
networks. The encoder reads the source sequence into a list of
continuous-space representations from which the decoder generates the
target sequence. An attention mechanism \cite{bahdanau-arxiv14} is
often used to locate the region of focus during decoding.
\emph{Extractive} systems create a summary by identifying (and
subsequently concatenating) the most important sentences in a
document. A few recent approaches
\cite{jp-acl16,nallapati17,narayan-arxiv17,Yasunaga:2017:gcn}
conceptualize extractive summarization as a sequence labeling task in
which each label specifies whether each document sentence should be
included in the summary. Existing models rely on recurrent neural
networks to derive a meaning representation of the document which is
then used to label each sentence, taking the previously labeled
sentences into account. These models are typically trained using
cross-entropy loss in order to maximize the likelihood of the
ground-truth labels and do not necessarily \emph{learn to rank}
sentences based on their importance due to the absence of a
ranking-based objective. Another discrepancy comes from the mismatch
between the learning objective and the evaluation criterion, namely
ROUGE \cite{rouge}, which takes the entire summary into account.
In this paper we argue that cross-entropy training is not optimal for
extractive summarization. Models trained this way are prone to
generating verbose summaries with unnecessarily long sentences and
redundant information. We propose to overcome these difficulties by
globally optimizing the ROUGE evaluation metric and learning to rank
sentences for summary generation through a reinforcement learning
objective. Similar to previous work
\cite{jp-acl16,narayan-arxiv17,nallapati17}, our neural summarization
model consists of a hierarchical document encoder and a hierarchical
sentence extractor. During training, it combines the
maximum-likelihood cross-entropy loss with rewards from policy
gradient reinforcement learning to directly optimize the evaluation
metric relevant for the summarization task. We show that this global
optimization framework renders extractive models better at
discriminating among sentences for the final summary; a sentence is
ranked high for selection if it often occurs in high scoring
summaries.
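The training signal just described can be illustrated with a toy policy-gradient sketch (Python/NumPy; `rouge1_like`, `reinforce_step`, the learning rate, and the unigram-recall reward are our simplifications for self-containedness — the actual model optimizes real ROUGE scores and combines this objective with the cross-entropy loss):

```python
import numpy as np

def rouge1_like(candidate, gold):
    """Toy unigram-recall reward standing in for ROUGE."""
    cand = set(" ".join(candidate).split())
    ref = set(" ".join(gold).split())
    return len(cand & ref) / max(len(ref), 1)

def reinforce_step(scores, sentences, gold, lr=0.5, rng=None):
    """One REINFORCE update on per-sentence logits: sample extraction
    labels, score the sampled summary, and scale the log-likelihood
    gradient of the sampled actions by the reward."""
    rng = rng if rng is not None else np.random.default_rng(0)
    p = 1.0 / (1.0 + np.exp(-scores))           # p(y_i = 1)
    y = (rng.random(len(p)) < p).astype(float)  # sampled labels
    summary = [s for s, yi in zip(sentences, y) if yi]
    r = rouge1_like(summary, gold)
    # Gradient of log Bernoulli(y | p) w.r.t. the logits is (y - p).
    return scores + lr * r * (y - p), r

sents = ["police hunt the driver", "weather was mild", "the driver fled"]
gold = ["police hunt the driver who fled"]
scores, rng = np.zeros(3), np.random.default_rng(0)
for _ in range(200):
    scores, r = reinforce_step(scores, sents, gold, rng=rng)
print(scores)  # sentences overlapping the gold summary get pushed up
```

Sentences that frequently appear in high-scoring sampled summaries accumulate positive updates, which is exactly the ranking behavior the global objective is meant to induce.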
We report results on the CNN and DailyMail news highlights datasets
\cite{hermann-nips15} which have been recently used as testbeds for
the evaluation of neural summarization systems. Experimental results
show that when evaluated automatically (in terms of ROUGE), our model
outperforms state-of-the-art extractive \emph{and} abstractive
systems. We also conduct two human evaluations in order to assess
(a)~which type of summary participants prefer (we compare extractive
and abstractive systems) and (b)~how much key information from the
document is preserved in the summary (we ask participants to answer
questions pertaining to the content in the document by reading system
summaries). Both evaluations overwhelmingly show that human subjects
find our summaries more informative and complete.
Our contributions in this work are three-fold: a novel application
of reinforcement learning to sentence ranking for extractive
summarization; analysis and empirical results showing that
cross-entropy training is not well-suited to the summarization task;
and large-scale user studies following two evaluation paradigms which
demonstrate that state-of-the-art abstractive systems lag behind
extractive ones when the latter are globally trained.
\begin{figure*}[t!]
\center{\footnotesize
\begin{tikzpicture}[scale=0.55]
\fill [gray,opacity=0.1,rounded corners=3mm] (21.2,-1.4) rectangle (25.8,4.9);
\draw [rounded corners=5pt,solid,->,line width=0.75mm]
(19.5,5.5) -- (22.5, 5.5) node
[above,midway,label={[align=center]above:Candidate\\summary}]
{} -- (22.5, 4.75) ;
\draw [rounded corners=5pt,solid,->,line width=0.75mm] (24.5,
5.5) -- (24.5, 4.75) node [above,near
start,label={[align=center]above:Gold\\summary}] {} ;
\draw (21.5,3.5) rectangle (25.5,4.5);
\node at (23.5,4) {REWARD};
\draw [rounded corners=5pt,solid,->,line width=0.75mm]
(23.5,3.25) -- (23.5, 0.25);
\draw (21.5,-1) rectangle (25.5,0);
\node at (23.5,-0.5) {REINFORCE};
\draw [rounded corners=5pt,solid,->,line width=0.75mm]
(23.5,-1.25) -- (23.5, -2) -- (19.5, -2) node [below,midway]
{Update agent};
\fill [gray,opacity=0.1,rounded corners=3mm] (-3,-5.3) rectangle (19,7.3);
\begin{scope}[shift={(0,0)}]
\draw [lightgray] (0,0) -- (0, 3.5) -- (2,3.5) -- (2,0) -- (0,0);
\draw [lightgray] (0,0.5) -- (2, 0.5); \draw [lightgray] (0,1) -- (2, 1);
\draw [lightgray] (0,1.5) -- (2, 1.5); \draw [lightgray] (0,2) -- (2, 2);
\draw [lightgray] (0,2.5) -- (2, 2.5); \draw [lightgray] (0,3) -- (2, 3);
\draw [lightgray] (0.5,0) -- (0.5, 3.5); \draw [lightgray] (1,0) -- (1, 3.5);
\draw [lightgray] (1.5,0) -- (1.5, 3.5);
\node at (-0.8,3.25) {Police};
\node at (-0.5,2.75) {are};
\node at (-0.6,2.25) {still};
\node at (-1,1.75) {hunting};
\node at (-0.5,1.25) {for};
\node at (-0.5,0.75) {the};
\node at (-0.8,0.25) {driver};
\fill [red,opacity=0.2] (0,0) rectangle (2,1);
\draw [red] (0,0) rectangle (2,1);
\fill [red,opacity=0.2] (3.5,-1) rectangle (5,2);
\draw [lightgray] (3.5,-1) rectangle (5,2);
\draw [lightgray] (3.5,-0.5) -- (5, -0.5); \draw [lightgray] (3.5,0) -- (5, 0);
\draw [lightgray] (3.5, 0.5) -- (5, 0.5); \draw [lightgray] (3.5,1) -- (5, 1);
\draw [lightgray] (3.5,1.5) -- (5, 1.5);
\draw [lightgray] (4,-1) -- (4, 2); \draw [lightgray] (4.5,-1) -- (4.5, 2);
\fill [red,opacity=0.5] (3.5,-1) rectangle (4,-0.5);
\draw [red] (2,0) -- (3.5,-1); \draw [red] (2,1) -- (3.5,-0.5);
\fill [blue,opacity=0.2] (0,1.5) rectangle (2,3.5);
\draw [blue] (0,1.5) rectangle (2,3.5);
\fill [blue,opacity=0.2] (3.5,2.5) rectangle (5,4.5);
\draw [lightgray] (3.5,2.5) rectangle (5,4.5);
\draw [lightgray] (3.5,3) -- (5, 3); \draw [lightgray] (3.5,3.5) -- (5, 3.5);
\draw [lightgray] (3.5, 4) -- (5, 4);
\draw [lightgray] (4,2.5) -- (4, 4.5); \draw [lightgray] (4.5,2.5) -- (4.5, 4.5);
\fill [blue,opacity=0.5] (3.5,4) rectangle (4,4.5);
\draw [blue] (2,3.5) -- (3.5,4.5); \draw [blue] (2,1.5) -- (3.5,4);
\fill [blue,opacity=0.2] (6.5,3.25) rectangle (7,1.75);
\draw [lightgray] (6.5,3.25) rectangle (7,1.75);
\draw [lightgray] (6.5,2.75) -- (7, 2.75); \draw [lightgray] (6.5,2.25) -- (7, 2.25);
\fill [blue,opacity=0.5] (6.5,2.75) rectangle (7,3.25);
\fill [red,opacity=0.2] (6.5,1.75) rectangle (7,0.25);
\draw [lightgray] (6.5,1.75) rectangle (7,0.25);
\draw [lightgray] (6.5,1.25) -- (7, 1.25); \draw [lightgray] (6.5,0.75) -- (7, 0.75);
\fill [red,opacity=0.5] (6.5,0.75) rectangle (7,0.25);
\draw [blue] (4.5,2.5) rectangle (5,4.5);
\draw [lightgray,->] (4.75,4.5) -- (4.75, 5) -- (5.75, 5) --
(5.75, 3) -- (6.5, 3);
\draw [red] (4.5,-1) rectangle (5,2);
\draw [lightgray,->] (4.75,-1) -- (4.75, -1.5) -- (5.75, -1.5)
-- (5.75, 0.5) -- (6.5, 0.5);
\node at (2.75,-2) {[convolution]};
\node at (6,-2) {[max pooling]};
\draw [lightgray] (-2.3,-2.5) rectangle (9.5, 5.2);
\node at (-0.2,4.8) {\textbf{Sentence encoder}};
\draw [->] (3.25, -3) -- (3.25, -2.5) ;
\node at (3.25,-3.5) {\doc};
\draw [lightgray] (8,3) rectangle (11,3.5);
\fill [red,opacity=0.2] (8,3) rectangle (9.5,3.5);
\fill [blue,opacity=0.2] (9.5,3) rectangle (11,3.5);
\draw [lightgray] (8.5,3) -- (8.5,3.5);
\draw [lightgray] (9,3) -- (9,3.5);
\draw [lightgray] (9.5,3) -- (9.5,3.5);
\draw [lightgray] (10,3) -- (10,3.5);
\draw [lightgray] (10.5,3) -- (10.5,3.5);
\node at (7.75,3.25) {$s_4$};
\draw [lightgray] (8,2) rectangle (11,2.5);
\fill [red,opacity=0.2] (8,2) rectangle (9.5,2.5);
\fill [blue,opacity=0.2] (9.5,2) rectangle (11,2.5);
\draw [lightgray] (8.5,2) -- (8.5,2.5);
\draw [lightgray] (9,2) -- (9,2.5);
\draw [lightgray] (9.5,2) -- (9.5,2.5);
\draw [lightgray] (10,2) -- (10,2.5);
\draw [lightgray] (10.5,2) -- (10.5,2.5);
\node at (7.75,2.25) {$s_3$};
\draw [lightgray] (8,1) rectangle (11,1.5);
\fill [red,opacity=0.2] (8,1) rectangle (9.5,1.5);
\fill [blue,opacity=0.2] (9.5,1) rectangle (11,1.5);
\draw [lightgray] (8.5,1) -- (8.5,1.5);
\draw [lightgray] (9,1) -- (9,1.5);
\draw [lightgray] (9.5,1) -- (9.5,1.5);
\draw [lightgray] (10,1) -- (10,1.5);
\draw [lightgray] (10.5,1) -- (10.5,1.5);
\node at (7.75,1.25) {$s_2$};
\draw [rounded corners=5pt,->] (11.1, 1.35) -- (12, 1.35) --
(12, 0.6) -- (16.6, 0.6) -- (16.6, 1.7);
\draw [rounded corners=5pt,->] (11.1, 1.15) -- (11.7, 1.15) --
(11.7, -4.9) -- (16.6, -4.9) -- (16.6, -4.3);
\draw [lightgray] (8,0) rectangle (11,0.5);
\fill [red,opacity=0.2] (8,0) rectangle (9.5,0.5);
\fill [blue,opacity=0.2] (9.5,0) rectangle (11,0.5);
\draw [lightgray] (8.5,0) -- (8.5,0.5);
\draw [lightgray] (9,0) -- (9,0.5);
\draw [lightgray] (9.5,0) -- (9.5,0.5);
\draw [lightgray] (10,0) -- (10,0.5);
\draw [lightgray] (10.5,0) -- (10.5,0.5);
\node at (7.75,0.25) {$s_1$};
\draw (11.1, 0.35) -- (11.6, 0.35);
\draw (11.6, 0.2) -- (11.6, 0.5);
\draw (11.8, 0.2) -- (11.8, 0.5);
\draw [rounded corners=5pt,->] (11.8, 0.35) -- (17.9, 0.35) -- (17.9, 1.7);
\draw [rounded corners=5pt,->] (11.1, 0.15) -- (11.5, 0.15) --
(11.5, -5.1) -- (17.9, -5.1) -- (17.9, -4.3);
\end{scope}
\begin{scope}[shift={(12.5,-3)}]
\draw [lightgray] (-0.2,-1.5) rectangle (5.9, 3);
\node at (3,2.7) {\textbf{Document encoder}};
\draw [lightgray,fill=green,opacity=0.3] (0,0) rectangle (0.75,2);
\draw [lightgray,fill=green,opacity=0.3] (1.25,0) rectangle (2,2);
\draw [lightgray,fill=green,opacity=0.3] (2.50,0) rectangle (3.25,2);
\draw [lightgray,fill=green,opacity=0.3] (3.75,0) rectangle (4.5,2);
\draw [lightgray,fill=green,opacity=0.3] (5,0) rectangle (5.75,2);
\draw [->] (0.375, -0.5) -- (0.375, 0) ;
\node at (0.375,-1) {$s_5$};
\draw [->] (1.625, -0.5) -- (1.625, 0) ;
\node at (1.625,-1) {$s_4$};
\draw [->] (2.875, -0.5) -- (2.875, 0) ;
\node at (2.875,-1) {$s_3$};
\draw [->] (4.125, -0.5) -- (4.125, 0) ;
\node at (4.125,-1) {$s_2$};
\draw [->] (5.375, -0.5) -- (5.375, 0) ;
\node at (5.375,-1) {$s_1$};
\draw [->] (0.75, 1) -- (1.25, 1) ;
\draw [->] (2, 1) -- (2.50, 1) ;
\draw [->] (3.25, 1) -- (3.75, 1) ;
\draw [->] (4.5, 1) -- (5, 1) ;
\draw [rounded corners=5pt,->] (5.75, 1) -- (6.3, 1) --
(6.3,7) -- (5.75,7);
\end{scope}
\begin{scope}[shift={(12.5,3)}]
\draw [lightgray] (-0.2,-1.5) rectangle (5.9, 4);
\node at (3,3.7) {\textbf{Sentence extractor}};
\draw [lightgray,fill=yellow,opacity=0.3] (0,0) rectangle (0.75,2);
\draw [lightgray,fill=yellow,opacity=0.3] (1.25,0) rectangle (2,2);
\draw [lightgray,fill=yellow,opacity=0.3] (2.50,0) rectangle (3.25,2);
\draw [lightgray,fill=yellow,opacity=0.3] (3.75,0) rectangle (4.5,2);
\draw [lightgray,fill=yellow,opacity=0.3] (5,0) rectangle (5.75,2);
\draw [->] (0.375, -0.5) -- (0.375, 0) ;
\node at (0.375,-1) {$s_5$};
\draw [->] (1.625, -0.5) -- (1.625, 0) ;
\node at (1.625,-1) {$s_4$};
\draw [->] (2.875, -0.5) -- (2.875, 0) ;
\node at (2.875,-1) {$s_3$};
\draw [->] (4.125, -0.5) -- (4.125, 0) ;
\node at (4.125,-1) {$s_2$};
\draw [->] (5.375, -0.5) -- (5.375, 0) ;
\node at (5.375,-1) {$s_1$};
\draw [<-] (0.75, 1) -- (1.25, 1) ;
\draw [<-] (2, 1) -- (2.50, 1) ;
\draw [<-] (3.25, 1) -- (3.75, 1) ;
\draw [<-] (4.5, 1) -- (5, 1) ;
\draw [->] (0.375, 2) -- (0.375, 2.50) ;
\node at (0.375,3) {$y_5$};
\draw [->] (1.625, 2) -- (1.625, 2.50) ;
\node at (1.625,3) {$y_4$};
\draw [->] (2.875, 2) -- (2.875, 2.50) ;
\node at (2.875,3) {$y_3$};
\draw [->] (4.125, 2) -- (4.125, 2.50) ;
\node at (4.125,3) {$y_2$};
\draw [->] (5.375, 2) -- (5.375, 2.50) ;
\node at (5.375,3) {$y_1$};
\end{scope}
\end{tikzpicture}
}
\caption{Extractive summarization model with reinforcement learning:
a hierarchical encoder-decoder model ranks sentences for their
extract-worthiness and a candidate summary is assembled from the
top ranked sentences; the REWARD generator compares the candidate
against the gold summary to give a reward which is used in the
REINFORCE algorithm \protect\cite{Williams:1992} to update the
model.} \label{fig:architecture}
\end{figure*}
\section{Summarization as Sentence Ranking}
\label{sec:extr-summ-as}
Given a document \doc consisting of a sequence of sentences \sentseq, an
extractive summarizer aims to produce a summary $\mathcal{S}$ by
selecting~$m$ sentences from \doc (where $m < n$). For each sentence
\mbox{$s_i \in \doc$}, we predict a label $y_i \in \{0,1\}$ (where~$1$
means that $s_i$ should be included in the summary) and assign a score
$p(y_i| s_i,\doc,\theta)$ quantifying $s_i$'s relevance to the
summary. The model learns to assign $p(1| s_i,\doc,\theta) > p(1|
s_j,\doc,\theta)$ when sentence~$s_i$ is more relevant
than~$s_j$. Model parameters are denoted by $\theta$. We
estimate~$p(y_i| s_i,\doc,\theta)$ using a neural network model and
assemble a summary $\mathcal{S}$ by selecting~$m$ sentences with top
$p(1| s_i,\doc,\theta)$ scores.
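The ranking-and-selection step above can be sketched in a few lines of Python (a minimal sketch; the scores are made-up values standing in for $p(1|s_i,\doc,\theta)$, and the function name is illustrative):

```python
# Sketch of summary assembly from per-sentence relevance scores.
# The probabilities here are hypothetical; in the model they come
# from p(1 | s_i, D, theta).
def assemble_summary(sentences, scores, m):
    """Select the m sentences with the highest scores,
    keeping their original document order."""
    ranked = sorted(range(len(sentences)),
                    key=lambda i: scores[i], reverse=True)
    chosen = sorted(ranked[:m])  # restore document order
    return [sentences[i] for i in chosen]

sents = ["s0", "s1", "s2", "s3", "s4"]
probs = [0.9, 0.2, 0.7, 0.1, 0.8]
print(assemble_summary(sents, probs, m=3))  # -> ['s0', 's2', 's4']
```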
Our architecture resembles those previously proposed in the literature
\cite{jp-acl16,nallapati17,narayan-arxiv17}. The main components
include a sentence encoder, a document encoder, and a sentence
extractor (see the left block of Figure~\ref{fig:architecture}) which
we describe in more detail below.
\paragraph{Sentence Encoder}
A core component of our model is a convolutional sentence encoder
which encodes sentences into continuous representations. In recent
years, CNNs have proven useful for various NLP tasks
\cite{nlpscratch,kim-emnlp14,kalchbrenner-acl14,zhang-nips15,lei-emnlp15,kim-aaai16,jp-acl16}
because of their effectiveness in identifying salient patterns in the
input \cite{showattendtell}. In the case of summarization, CNNs can
identify named-entities and events that correlate with the gold
summary.
We use temporal narrow convolution by applying a kernel filter~$K$ of
width~$h$ to a window of~$h$ words in sentence $s$ to produce a new
feature. This filter is applied to each possible window of words
in~$s$ to produce a feature map $f \in R^{k-h+1}$ where $k$ is the
sentence length. We then apply max-pooling over time over the feature
map~$f$ and take the maximum value as the feature corresponding to
this particular filter~$K$. We use multiple kernels of various sizes
and each kernel multiple times to construct the representation of a
sentence.
In Figure~\ref{fig:architecture}, kernels of size~$2$ (red) and~$4$
(blue) are applied three times each. Max-pooling over time yields two
feature lists $f^{K_{2}}$ and $f^{K_{4}} \in R^3$. The final sentence
embeddings have six dimensions.
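The encoder's basic operation can be sketched as follows (a pure-Python illustration; the toy embeddings and kernel weights are made up, and a real implementation would use a deep learning library with learned parameters):

```python
# Sketch of temporal narrow convolution with max-over-time pooling:
# one kernel of width h slides over all windows of h words, giving a
# feature map f in R^{k-h+1}; pooling keeps the maximum activation.
def narrow_conv_feature(word_vecs, kernel):
    h = len(kernel)          # kernel width
    k = len(word_vecs)       # sentence length
    feature_map = []
    for t in range(k - h + 1):
        window = word_vecs[t:t + h]
        # dot product of the window with the kernel weights
        act = sum(w * x for row, krow in zip(window, kernel)
                  for w, x in zip(row, krow))
        feature_map.append(act)
    return max(feature_map)  # max-over-time pooling

# A 4-word "sentence" with 2-dim embeddings and a width-2 kernel.
sent = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]
kern = [[1.0, -1.0], [1.0, -1.0]]
print(narrow_conv_feature(sent, kern))  # -> 0.0
```

Using multiple kernels (and several copies of each) and concatenating the pooled features yields the fixed-size sentence embedding.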
\paragraph{Document Encoder}
The document encoder composes a sequence of sentences to obtain a
document representation. We use a recurrent neural network with Long
Short-Term Memory (LSTM) cells to avoid the vanishing gradient problem
when training long sequences \cite{lstm}. Given a document \doc
consisting of a sequence of sentences $(s_1, s_2, \ldots , s_n)$, we
follow common practice and feed sentences in reverse order
\cite{sutskever-nips14,lijurafsky-acl15,katja-emnlp15,narayan-arxiv17}. This
way we make sure that the network also considers the top sentences of
the document which are particularly important for summarization
\cite{rush-acl15,nallapati-signll16}.
\paragraph{Sentence Extractor}
Our sentence extractor sequentially labels each sentence in a document
with~$1$ (relevant for the summary) or~$0$ (otherwise). It is
implemented with another RNN with LSTM cells and a softmax layer. At
time $t_i$, it reads sentence $s_i$ and makes a binary prediction,
conditioned on the document representation (obtained from the document
encoder) and the previously labeled sentences. This way, the sentence
extractor is able to identify locally and globally important sentences
within the document. We rank the sentences in a document~\doc by
\mbox{$p(y_i=1 | s_i,\doc,\theta)$}, the confidence scores assigned by
the softmax layer of the sentence extractor.
We learn to rank sentences by training our network in a reinforcement
learning framework, directly optimizing the final evaluation metric,
namely ROUGE \cite{rouge}. Before we describe our training algorithm,
we elaborate on why the maximum-likelihood cross-entropy objective
could be deficient for ranking sentences for summarization
(Section~\ref{sec:crossent}). Then, we define our reinforcement
learning objective in Section~\ref{sec:reinforced} and show that our
new way of training allows the model to better discriminate amongst
sentences, i.e., a sentence is ranked higher for selection if it often
occurs in high scoring summaries.
\begin{table*}[t!]
\center{\fontsize{8.5}{6.2}\selectfont
\begin{tabular}{| l | p{9.6cm} | c | c | c || r |}
\hline
\multirow{7}{*}{\rotatebox[origin=c]{90}{sent. pos.}}
& \multicolumn{1}{|c|}{\multirow{4}{*}{CNN article}}
&
\multirow{7}{*}{\rotatebox[origin=c]{90}{\parbox[t]{1.2cm}{Sent-level\\\textsc{rouge}}}}
&
\multirow{7}{*}{\rotatebox[origin=c]{90}{\parbox[t]{1.2cm}{Individual\\Oracle}}}
&
\multirow{7}{*}{\rotatebox[origin=c]{90}{\parbox[t]{1.2cm}{Collective\\Oracle}}}
&
\multicolumn{1}{|c|}{\multirow{7}{*}{\parbox[t]{1.8cm}{Multiple\\Collective\\Oracle}}}\\
& & & & & \\
& & & & & \\
& & & & & \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{Sentences}} & & & & \\
& & & & & \\
& & & & & \\
\hline
0 & A debilitating, mosquito-borne virus called Chikungunya has
made its way to North Carolina, health officials say. & 21.2 & 1 &
1 & \multirow{41}{*}{\parbox[t]{1.9cm}{ (0,11,13) : 59.3 \\ (0,13)
: 57.5 \\ (11,13) : 57.2 \\ (0,1,13) : 57.1 \\ (1,13) : 56.6
\\ (3,11,13) : 55.0 \\ (13) : 54.5 \\ (0,3,13) : 54.2
\\ (3,13) : 53.4 \\ (1,3,13) : 52.9 \\ (1,11,13) : 52.0
\\ (0,9,13) : 51.3 \\ (0,7,13) : 51.3 \\ (0,12,13) : 51.0
\\ (9,11,13) : 50.4 \\ (1,9,13) : 50.1 \\ (12,13) : 49.3
\\ (7,11,13) : 47.8 \\ (0,10,13) : 47.8 \\ (11,12,13):47.7
\\ (7,13) : 47.6 \\ (9,13) : 47.5 \\ (1,7,13) : 46.9
\\ (3,7,13) : 46.0 \\ (3,12,13) : 46.0 \\ (3,9,13) : 45.9
\\ (10,13) : 45.5 \\ (4,11,13) : 45.3 \\ ... \\ \\
}}\\
1 & It's the state's first reported case of the virus. & 18.1 & 1 &
0 & \\
2 & The patient was likely infected in the Caribbean, according to
the Forsyth County Department of Public Health. & 11.2 & 1 & 0 & \\
3 & Chikungunya is primarily found in Africa, East Asia and the
Caribbean islands, but the Centers for Disease Control and
Prevention has been watching the virus, for fear that it could
take hold in the United States -- much like West Nile did more
than a decade ago. & 35.6 & 1 & 0 & \\
4 & The virus, which can cause joint pain and arthritis-like
symptoms, has been on the U.S. public health radar for some
time. & 16.7 & 1 & 0 & \\
5 & About 25 to 28 infected travelers bring it to the United
States each year, said Roger Nasci, chief of the CDC's Arboviral
Disease Branch in the Division of Vector-Borne Diseases. & 9.7 & 0
& 0 & \\
6 & ``We haven't had any locally transmitted cases in the U.S. thus
far,'' Nasci said. & 7.4 & 0 & 0 & \\
7 & But a major outbreak in the Caribbean this year -- with more
than 100,000 cases reported -- has health officials concerned. &
16.4 & 1 & 0 & \\
8 & Experts say American tourists are bringing Chikungunya back
home, and it's just a matter of time before it starts to spread
within the United States. & 10.6 & 0 & 0 & \\
9 & After all, the Caribbean is a popular one with American
tourists, and summer is fast approaching. & 13.9 & 1 & 0 & \\
10 & ``So far this year we've recorded eight travel-associated
cases, and seven of them have come from countries in the Caribbean
where we know the virus is being transmitted,'' Nasci said. & 18.4
& 1 & 0 & \\
11 & Other states have also reported cases of Chikungunya. & 13.4 &
0 & 1 & \\
12 & The Tennessee Department of Health said the state has had
multiple cases of the virus in people who have traveled to the
Caribbean. & 15.6 & 1 & 0 & \\
13 & The virus is not deadly, but it can be painful, with symptoms
lasting for weeks. & 54.5 & 1 & 1 & \\
14 & Those with weak immune systems, such as the elderly, are more
likely to suffer from the virus' side effects than those who are
healthier. & 5.5 & 0 & 0 & \\
\hline \hline
\multicolumn{6}{|l|}{Story Highlights} \\
\multicolumn{6}{|l|}{\parbox[t]{15cm}{\textbullet \hspace{0.1cm}
North Carolina reports first case of mosquito-borne virus called
Chikungunya \hspace{0.2cm} \textbullet \hspace{0.1cm}
Chikungunya is primarily found in Africa, East Asia and the
Caribbean islands \hspace{0.2cm} \textbullet \hspace{0.1cm}
Virus is not deadly, but it can be painful, with symptoms
lasting for weeks}} \\
\hline
\end{tabular}
}
\caption{An abridged CNN article (only first~15 out of~31 sentences are
shown) and its ``story highlights''. The
latter are typically written by journalists to
allow readers to quickly gather information on stories.
Highlights are often used as gold standard abstractive summaries
in the summarization literature.} \label{tab:cnnexample}
\end{table*}
\section{The Pitfalls of Cross-Entropy Loss}
\label{sec:crossent}
Previous work
optimizes summarization
models by maximizing $p(y | \doc,\theta) = \prod_{i=1}^n p(y_i |
s_i,\doc,\theta)$, the likelihood of the ground-truth labels
\mbox{\labels = \labelseq} for sentences $(s_1,s_2,\dots,s_n)$, given
document~\doc and model parameters~$\theta$. This objective can be
achieved by minimizing the cross-entropy loss at each decoding step:
\begin{equation}
L(\theta) = -\sum_{i=1}^n \log p(y_i | s_i,\doc,\theta). \label{eq:celoss}
\end{equation}
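For concreteness, the loss in Equation~\eqref{eq:celoss} for a single document can be sketched as follows (the probabilities are illustrative; in practice a framework's cross-entropy op computes this from the softmax layer):

```python
import math

# Sketch of the per-document cross-entropy loss of Equation (1),
# with made-up model probabilities p(y_i = 1 | s_i, D, theta).
def cross_entropy_loss(p_pos, gold_labels):
    """Sum of -log p(y_i | ...) over sentences; p_pos[i] is the
    model's probability that sentence i belongs in the summary."""
    loss = 0.0
    for p, y in zip(p_pos, gold_labels):
        loss -= math.log(p if y == 1 else 1.0 - p)
    return loss

probs = [0.8, 0.3, 0.9]
labels = [1, 0, 1]
print(round(cross_entropy_loss(probs, labels), 4))  # -> 0.6852
```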
Cross-entropy training leads to two kinds of discrepancies in the
model. The first discrepancy comes from the disconnect between the
task definition and the training objective. While MLE in
Equation~\eqref{eq:celoss} aims to maximize the likelihood of the
ground-truth labels, the model is (a)~expected to rank sentences to
generate a summary and (b)~evaluated using $\mbox{ROUGE}$ at test
time. The second discrepancy comes from the reliance on ground-truth
labels. Document collections for training summarization systems do not
naturally contain labels indicating which sentences should be
extracted. Instead, they are typically accompanied by abstractive
summaries from which sentence-level labels are
extrapolated. \newcite{jp-acl16} follow \newcite{woodsend-acl10} in
adopting a rule-based method which assigns labels to each sentence in
the document \emph{individually} based on their semantic
correspondence with the gold summary (see the fourth column in
Table~\ref{tab:cnnexample}). An alternative method
\cite{svore-emnlp07,Cao:2016,nallapati17} identifies the set of
sentences which \emph{collectively} gives the highest ROUGE with
respect to the gold summary. Sentences in this set are labeled with~1
and 0~otherwise (see column~5 in Table~\ref{tab:cnnexample}).
Labeling sentences individually often generates too many positive
labels causing the model to overfit the data. For example, the
document in Table~\ref{tab:cnnexample} has 12~positively labeled
sentences out of 31 in total (only the first 15 are shown). Collective
labels present a better alternative since they only pertain to the few
sentences deemed most suitable to form the summary. However, a model
trained with cross-entropy loss on collective labels will underfit the
data as it will only maximize probabilities $p(1 | s_i,\doc,\theta)$
for sentences in this set (e.g., sentences $\{0,11,13\}$ in
Table~\ref{tab:cnnexample}) and ignore all other sentences. We found
that there are many candidate summaries with high ROUGE scores which
could be considered during training.
Table~\ref{tab:cnnexample} (last column) shows candidate summaries
ranked according to the mean of \mbox{ROUGE-1}, \mbox{ROUGE-2}, and
\mbox{ROUGE-L} F$_1$ scores.
Interestingly, multiple top ranked summaries have reasonably high
ROUGE scores. For example, the average ROUGE for the summaries ranked
second (0,13), third (11,13), and fourth (0,1,13) is~57.5\%, 57.2\%,
and~57.1\%, and all top 16~summaries have ROUGE scores greater than
or equal to 50\%. A few sentences are indicative of important content and
appear frequently in the summaries: sentence~13 occurs in all
summaries except one, while sentence~0 appears in several summaries
too. Also note that summaries (11,13) and (1,13) yield better ROUGE
scores compared to longer summaries, and may be as informative, yet
more concise, alternatives.
These discrepancies render the model less efficient at ranking
sentences for the summarization task. Instead of maximizing the
likelihood of the ground-truth labels, we could train the model to
predict the individual ROUGE score for each sentence in the document
and then select the top $m$~sentences with the highest scores. But
sentences with high individual ROUGE scores do not necessarily lead to a
high scoring summary, e.g.,~they may convey overlapping content and
form verbose and redundant summaries. For example, sentence~3, despite
having a high individual ROUGE score (35.6\%), does not occur in any
of the top~5 summaries. We next explain how we address these issues
using reinforcement learning.
\section{Sentence Ranking with Reinforcement Learning}
\label{sec:reinforced}
Reinforcement learning \cite{Sutton98a} has been proposed as a way of
training sequence-to-sequence generation models in order to directly
optimize the metric used at test time, e.g.,~BLEU or ROUGE
\cite{ranzato-arxiv15-bias}. We adapt reinforcement learning to our
formulation of extractive summarization to rank sentences for summary
generation.
We propose an objective function that combines the maximum-likelihood
cross-entropy loss with rewards from policy gradient reinforcement
learning to globally optimize ROUGE. Our training algorithm allows us to
explore the space of possible summaries, making our model more robust
to unseen data. As a result, reinforcement learning helps extractive
summarization in two ways: (a)~it directly optimizes the evaluation
metric instead of maximizing the likelihood of the ground-truth labels
and (b)~it makes our model better at discriminating among sentences; a
sentence is ranked high for selection if it often occurs in high
scoring summaries.
\subsection{Policy Learning}
We cast the neural summarization model introduced in
Figure~\ref{fig:architecture} in the Reinforcement Learning paradigm
\cite{Sutton98a}. Accordingly, the model can be viewed as an ``agent''
which interacts with an ``environment'' consisting of documents. At
first, the agent is initialized randomly; it reads document~\doc and
predicts a relevance score for each sentence $s_i \in \doc$ using
``policy'' $p(y_i | s_i,\doc,\theta)$, where $\theta$~are model
parameters. Once the agent is done reading the document, a summary
with labels $\hat{y}$ is sampled out of the ranked sentences. The
agent is then given a ``reward''~$r$ commensurate with how well the
extract resembles the gold-standard summary. Specifically, as reward
function we use mean F$_1$ of $\mbox{ROUGE-1}$, $\mbox{ROUGE-2}$,
and $\mbox{ROUGE-L}$. Unigram and bigram overlap ($\mbox{ROUGE-1}$ and
$\mbox{ROUGE-2}$) are meant to assess informativeness, whereas the
longest common subsequence ($\mbox{ROUGE-L}$) is meant to assess
fluency.
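One ingredient of this reward can be sketched as a unigram-overlap F$_1$ (i.e., ROUGE-1 F$_1$) between a candidate and a reference; this is only an illustration, and the actual reward averages ROUGE-1, ROUGE-2, and ROUGE-L F$_1$ computed by a full ROUGE implementation:

```python
from collections import Counter

# Sketch of ROUGE-1 F1: unigram overlap between candidate and
# reference token lists (clipped counts via Counter intersection).
def rouge1_f1(candidate_tokens, reference_tokens):
    cand, ref = Counter(candidate_tokens), Counter(reference_tokens)
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

cand = "the virus is not deadly but painful".split()
ref = "virus is not deadly but it can be painful".split()
print(round(rouge1_f1(cand, ref), 3))  # -> 0.75
```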
We update the agent using the REINFORCE algorithm \cite{Williams:1992}
which aims to minimize the negative expected reward:
\begin{equation}
L(\theta) = -\mathbb{E}_{\hat{y} \sim\ p_{\theta}}
[r(\hat{y})] \label{eq:reinloss}
\end{equation}
where $p_{\theta}$ stands for $p(y|\doc,\theta)$. REINFORCE is based
on the observation that the expected gradient of a non-differentiable
reward function (ROUGE, in our case) can be computed as
follows:
\begin{equation}
\nabla L(\theta) = -\mathbb{E}_{\hat{y} \sim\ p_{\theta}} [r(\hat{y})
\nabla \log p(\hat{y} | \doc,\theta)] \label{eq:reinlossgrad}
\end{equation}
While MLE in Equation~\eqref{eq:celoss} aims to maximize the
likelihood of the training data, the objective in
Equation~\eqref{eq:reinloss} learns to discriminate among sentences
with respect to how often they occur in high scoring summaries.
\subsection{Training with High Probability Samples}
Computing the expectation term in Equation~\eqref{eq:reinlossgrad} is
prohibitive, since there is a large number of possible extracts. In
practice, we approximate the expected gradient using a single
sample~$\hat{y}$ from~$p_{\theta}$ for each training example in a
batch:
\begin{align}
\nabla L(\theta) &\approx -r(\hat{y}) \nabla \log
p(\hat{y} | \doc,\theta) \\ &\approx - r(\hat{y}) \sum_{i=1}^n \nabla
\log p(\hat{y_i} | s_i,\doc,\theta) \label{eq:reinlossgrad-singsample}
\end{align}
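The single-sample estimate can be implemented as a surrogate loss whose gradient matches Equation~\eqref{eq:reinlossgrad-singsample}: scale the sample's log-likelihood by its reward and let an autodiff framework differentiate with respect to~$\theta$. A minimal sketch with illustrative probabilities and reward:

```python
import math

# Sketch of the single-sample REINFORCE surrogate loss:
# -r(y_hat) * sum_i log p(y_hat_i | s_i, D, theta).
# Differentiating this w.r.t. theta yields the gradient estimate
# of Equation (4). Probabilities and reward are illustrative.
def reinforce_surrogate_loss(p_sampled, reward):
    return -reward * sum(math.log(p) for p in p_sampled)

# Per-sentence probabilities of the sampled labels, and the mean
# ROUGE F1 reward of the sampled extract.
p_hat = [0.7, 0.9, 0.6]
r = 0.575
print(round(reinforce_surrogate_loss(p_hat, r), 4))  # -> 0.5594
```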
Presented in its original form, the REINFORCE algorithm starts
learning with a random policy which can make model training
challenging for complex tasks like ours where a single document can
give rise to a very large number of candidate summaries. We therefore
limit the search space of~$\hat{y}$ in
Equation~\eqref{eq:reinlossgrad-singsample} to the set of largest
probability samples~$\hat{\mathbb{Y}}$. We
approximate~$\hat{\mathbb{Y}}$ by the~$k$ extracts which receive
highest ROUGE scores. More concretely, we assemble candidate summaries
efficiently by first selecting $p$~sentences from the document which
on their own have high ROUGE scores. We then generate all possible
combinations of $p$~sentences subject to maximum length $m$ and
evaluate them against the gold summary. Summaries are ranked according
to~F$_1$ by taking the mean of $\mbox{ROUGE-1}$, $\mbox{ROUGE-2}$, and
$\mbox{ROUGE-L}$. $\hat{\mathbb{Y}}$ contains these top~$k$ candidate
summaries. During training, we sample~$\hat{y}$
from~$\hat{\mathbb{Y}}$ instead of~$p(\hat{y} | \doc,\theta)$.
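The construction of~$\hat{\mathbb{Y}}$ can be sketched as below (the `rouge_score` argument is a stand-in for the mean ROUGE F$_1$ against the gold summary; the toy scorer is purely illustrative):

```python
from itertools import combinations

# Sketch of assembling the candidate set Y_hat: take p high-scoring
# sentences, enumerate all extracts up to length m, score each
# against the gold summary, and keep the top k.
def build_candidate_set(top_sents, m, k, rouge_score):
    candidates = []
    for size in range(1, m + 1):
        for combo in combinations(top_sents, size):
            candidates.append((combo, rouge_score(combo)))
    candidates.sort(key=lambda c: c[1], reverse=True)
    return [c[0] for c in candidates[:k]]

# Toy scorer: pretend sentence 13 is highly rewarded (cf. Table 1).
toy_rouge = lambda combo: sum(0.5 if i == 13 else 0.05 for i in combo)
print(build_candidate_set([0, 11, 13], m=3, k=3, rouge_score=toy_rouge))
# -> [(0, 11, 13), (0, 13), (11, 13)]
```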
\newcite{ranzato-arxiv15-bias} proposed an alternative to REINFORCE
called MIXER (Mixed Incremental Cross-Entropy Reinforce) which first
pretrains the model with the cross-entropy loss using ground truth
labels and then follows a curriculum learning strategy
\cite{bengio-nips2015-curriculum} to gradually teach the model to
produce stable predictions on its own. In our experiments, MIXER
performed worse than the model of \newcite{nallapati17} just trained
on collective labels. We conjecture that this is due to the unbounded
nature of our ranking problem. Recall that our model assigns relevance
scores to sentences rather than words. The space of sentential
representations is vast and fairly unconstrained compared to other
prediction tasks operating with fixed vocabularies
\cite{li-emnlp-16,paulus-socher-arxiv17,xingxing-arxiv-17}. Moreover,
our approximation of the gradient allows the model to converge much
faster to an optimal policy. Advantageously, we do not require an
online reward estimator; we pre-compute $\hat{\mathbb{Y}}$, which
leads to a significant speedup during training compared to MIXER
\cite{ranzato-arxiv15-bias} and related training schemes
\cite{shenMRT-acl16}.
\section{Experimental Setup}
\label{sec:experiments}
In this section we present our experimental setup for assessing the
performance of our model which we call \refresh\ as a shorthand for
\textbf{RE}in\textbf{F}o\textbf{R}cement Learning-based
\textbf{E}xtractive \textbf{S}ummarization. We describe our datasets,
discuss implementation details, our evaluation protocol, and the
systems used for comparison.
\paragraph{Summarization Datasets}
We evaluated our models on the CNN and DailyMail news highlights
datasets \cite{hermann-nips15}. We used the standard splits of
\newcite{hermann-nips15} for training, validation, and testing
(90,266/1,220/1,093 documents for CNN and 196,961/12,148/10,397 for
DailyMail). We did not anonymize entities or lower case tokens. We
followed previous studies
\cite{jp-acl16,nallapati-signll16,nallapati17,see-acl17,tanwan-acl17}
in assuming that the ``story highlights'' associated with each article
are gold-standard abstractive summaries. During training we use these
to generate high scoring extracts and to estimate rewards for them,
but during testing, they are used as reference summaries to evaluate
our models.
\paragraph{Implementation Details}
We generated extracts by selecting three sentences ($m=3$) for CNN
articles and four sentences ($m=4$) for DailyMail articles. These
decisions were informed by the fact that gold highlights in the
CNN/DailyMail validation sets are on average 2.6/4.2~sentences long. For both
datasets, we estimated high-scoring extracts using 10~document
sentences ($p=10$) with highest ROUGE scores. We tuned the
initialization parameter $k$~for $\hat{\mathbb{Y}}$ on the validation
set: we found that our model performs best with $k=5$~for the CNN
dataset and $k=15$~for the DailyMail dataset.
We used the One Billion Word Benchmark corpus \cite{billionbenchmark}
to train word embeddings with the skip-gram model \cite{word2vec}
using context window size~6, negative sampling size~10, and
hierarchical softmax set to~1. Known words were initialized with pre-trained
embeddings of size~200. Embeddings for unknown words were initialized
to zero, but estimated during training. Sentences were padded with
zeros to a length of~100. For the sentence encoder, we used a list of
kernels of widths 1 to 7, each with output channel size of 50
\cite{kim-aaai16}. The sentence embedding size in our model was~350.
For the recurrent neural network component in the document encoder and
sentence extractor, we used a single-layered LSTM network with
size~600. All input documents were padded with zeros to a maximum
document length of~120. We performed minibatch cross-entropy training
with a batch size of 20~documents for 20~training epochs. Training
took around 12~hours on a single GPU. After each epoch, we evaluated
our model on the validation set and chose the best performing model
for the test set. During training we used the Adam optimizer
\cite{adam-14} with initial learning rate~$0.001$. Our system is
implemented in TensorFlow \cite{tensorflow2015-whitepaper}.
\begin{figure}[t!]
\center{\fontsize{8.5}{6.2}\selectfont
\begin{tabular}{|@{~}c@{~}| p{6.8cm} |}
\hline
\multirow{10}{*}{\rotatebox[origin=c]{90}{\textsc{Lead}}} &
\textbullet \hspace{0.1cm} A SkyWest Airlines flight made an
emergency landing in Buffalo, New York, on Wednesday after a
passenger lost consciousness, officials said. \\
& \textbullet \hspace{0.1cm} The passenger received medical
attention before being released, according to Marissa Snow,
spokeswoman for SkyWest. \\
& \textbullet \hspace{0.1cm} She said the airliner expects to
accommodate the 75 passengers on another aircraft to their
original destination -- Hartford, Connecticut -- later Wednesday
afternoon. \\ \hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{See et al.}} &
\textbullet \hspace{0.1cm} Skywest Airlines flight made an
emergency landing in Buffalo, New York, on Wednesday after a
passenger lost consciousness. \\
& \textbullet \hspace{0.1cm} She said the airliner expects to
accommodate the 75 passengers on another aircraft to their
original destination -- Hartford, Connecticut. \\ \hline
\multirow{8}{*}{\rotatebox[origin=c]{90}{\refresh}} &
\textbullet \hspace{0.1cm} A SkyWest Airlines flight made an
emergency landing in Buffalo, New York, on Wednesday after a
passenger lost consciousness, officials said. \\
& \textbullet \hspace{0.1cm} The passenger received medical
attention before being released, according to Marissa Snow,
spokeswoman for SkyWest. \\
& \textbullet \hspace{0.1cm} The Federal Aviation Administration
initially reported a pressurization problem and said it would
investigate. \\ \hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{\textsc{Gold}}}
& \textbullet \hspace{0.1cm} FAA backtracks on saying crew reported a pressurization problem \\
& \textbullet \hspace{0.1cm} One passenger lost consciousness \\
& \textbullet \hspace{0.1cm} The plane descended 28,000 feet in three minutes \\ \hline \hline
Q$_1$ & Who backtracked on saying crew reported a pressurization problem? (\emph{FAA}) \\
Q$_2$ & How many passengers lost consciousness in the incident? (\emph{One}) \\
Q$_3$ & How far did the plane descend in three minutes? (\emph{28,000 feet}) \\ \hline
\end{tabular}
}
\caption{Summaries produced by the \textsc{Lead} baseline, the
abstractive system of \protect\newcite{see-acl17} and \refresh\ for
a CNN (test) article. \textsc{Gold} presents the human-authored
summary; the bottom block shows manually written questions using
the gold summary and their answers in
parentheses.}\label{fig:summaries}
\vspace{-0.2cm}
\end{figure}
\paragraph{Evaluation}
We evaluated summarization quality using F$_1$ $\mbox{ROUGE}$
\cite{rouge}. We report unigram and bigram overlap ($\mbox{ROUGE-1}$
and $\mbox{ROUGE-2}$) as a means of assessing informativeness and the
longest common subsequence ($\mbox{ROUGE-L}$) as a means of assessing
fluency.\footnote{We used pyrouge, a Python package, to compute all
ROUGE scores with parameters ``-a -c 95 -m -n 4 -w 1.2.''} We
compared \refresh\ against a baseline which simply selects the first
$m$~leading sentences from each document (\textsc{Lead}) and two
neural models similar to ours (see left block in
Figure~\ref{fig:architecture}), both trained with cross-entropy loss.
\newcite{jp-acl16} train on individual labels, while
\newcite{nallapati17} use collective labels. We also compared our
model against the abstractive systems of \newcite{chenIjcai-16},
\newcite{nallapati-signll16}, \newcite{see-acl17}, and
\newcite{tanwan-acl17}.\footnote{\newcite{jp-acl16} report ROUGE
recall scores on the DailyMail dataset only. We used their code
(\url{https://github.com/cheng6076/NeuralSum}) to produce ROUGE
F$_1$ scores on both CNN and DailyMail datasets. For other systems,
all results are taken from their papers.}
In addition to ROUGE which can be misleading when used as the only
means to assess the informativeness of summaries
\cite{schluter:2017:EACLshort}, we also evaluated system output by
eliciting human judgments in two ways. In our first experiment,
participants were presented with a news article and summaries
generated by three systems: the \textsc{Lead} baseline, abstracts from
\newcite{see-acl17}, and extracts from \refresh. We also included the
human-authored highlights.\footnote{We are grateful to Abigail See for
providing us with the output of her system. We did not include
output from \newcite{nallapati17}, \newcite{chenIjcai-16},
\newcite{nallapati-signll16}, or \newcite{tanwan-acl17} in our human
evaluation study, as these models are trained on a named-entity
anonymized version of the CNN and DailyMail datasets, and as a result
produce summaries which are not comparable to ours. We
did not include extracts from \newcite{jp-acl16} either as they were
significantly inferior to \textsc{Lead} (see
Table~\ref{tab:cnndm}).} Participants read the articles and were
asked to rank the summaries from best (1) to worst (4) in order of
informativeness (does the summary capture important information in the
article?) and fluency (is the summary written in well-formed
English?). We did not allow any ties. We randomly selected 10~articles
from the CNN test set and 10~from the DailyMail test set. The study
was completed by five participants, all native or proficient English
speakers. Each participant was presented with the 20 articles. The
order of summaries to rank was randomized per article and the order of
articles per participant. Examples of summaries our subjects ranked
are shown in Figure~\ref{fig:summaries}.
Our second experiment assessed the degree to which our model retains
key information from the document following a question-answering (QA)
paradigm which has been previously used to evaluate summary quality
and text compression \cite{MorrisKA92,Mani:1999,Clarke:Lapata:2010}.
We created a set of questions based on the gold summary under the
assumption that it highlights the most important document content. We
then examined whether participants were able to answer these questions
by reading system summaries alone without access to the article. The
more questions a system can answer, the better it is at summarizing
the document as a whole.
We worked on the same 20 documents used in our first elicitation
study. We wrote multiple fact-based question-answer pairs for each
gold summary without looking at the document. Questions were
formulated so as to not reveal answers to subsequent questions. We
created 71~questions in total varying from two to six questions per
gold summary. Example questions are given in
Figure~\ref{fig:summaries}. Participants read the summary and answered
all associated questions as best they could without access to the
original document or the gold summary. Subjects were shown summaries
from three systems: the \textsc{Lead} baseline, the abstractive system
of \newcite{see-acl17}, and \refresh. Five participants answered
questions for each summary. We used the same scoring mechanism from
\newcite{Clarke:Lapata:2010}, i.e., a correct answer was marked with a
score of one, partially correct answers with a score of~0.5, and zero
otherwise. The final score for a system is the average of all its
question scores. Answers were elicited using Amazon's Mechanical Turk
crowdsourcing platform. We uploaded data in batches (one system at a
time) on Mechanical Turk to ensure that the same participant does not
evaluate summaries from different systems on the same set of
questions.
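The scoring scheme can be sketched as follows (the judgment labels are illustrative placeholders for annotator marks):

```python
# Sketch of the QA-based scoring of Clarke and Lapata (2010):
# 1 point for a correct answer, 0.5 for a partially correct one,
# 0 otherwise; a system's score is the mean over all judgments.
def qa_score(judgments):
    """judgments: list of 'correct' / 'partial' / 'wrong' marks."""
    points = {"correct": 1.0, "partial": 0.5, "wrong": 0.0}
    return sum(points[j] for j in judgments) / len(judgments)

marks = ["correct", "partial", "wrong", "correct"]
print(qa_score(marks))  # -> 0.625
```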
\begin{table*}[t!]
\center{\small
\begin{tabular}{ |l |c c c| c c c| c c c| }
\hline
\multicolumn{1}{|c|}{Models}& \multicolumn{3}{c}{CNN} &
\multicolumn{3}{c}{DailyMail} &
\multicolumn{3}{c|}{CNN$+$DailyMail}\\
& R1 & R2 & RL & R1 & R2 & RL & R1 & R2 & RL \\ \hline \hline
\textsc{Lead} (ours) & 29.1 & 11.1 & 25.9 & 40.7 & 18.3 & 37.2 &
39.6 & 17.7 & 36.2 \\
\textsc{Lead}$^{\ast}$ \cite{nallapati17} & --- & --- & --- & --- & --- & --- & 39.2 & 15.7 & 35.5 \\
\textsc{Lead} \cite{see-acl17} & --- & --- & --- & --- & --- & --- & \textbf{40.3} & 17.7 & \textbf{36.6} \\
\newcite{jp-acl16} & 28.4 & 10.0 & 25.0 & 36.2 & 15.2 & 32.9 & 35.5 & 14.7 & 32.2 \\
\newcite{nallapati17}$^{\ast}$ & --- & --- & --- & --- & --- & --- & 39.6 & 16.2 & 35.3 \\
\refresh\ & \textbf{30.4} & \textbf{11.7} & \textbf{26.9} & \textbf{41.0} & \textbf{18.8} & \textbf{37.7} & 40.0 & \textbf{18.2} & \textbf{36.6} \\ \hline \hline
\newcite{chenIjcai-16}$^{\ast}$ & 27.1 & 8.2 & 18.7 & --- & --- & --- & --- & --- & --- \\
\newcite{nallapati-signll16}$^{\ast}$ & --- & --- & --- & --- & --- & --- & 35.4 & 13.3 & 32.6 \\
\newcite{see-acl17} & --- & --- & --- & --- & --- & --- & 39.5 & 17.3 & 36.4 \\
\newcite{tanwan-acl17}$^{\ast}$ & 30.3 & 9.8 & 20.0 & --- & --- & --- & 38.1 & 13.9 & 34.0 \\ \hline
\end{tabular}
}
\caption{Results on the CNN and DailyMail test sets. We report
ROUGE-1 (R1), ROUGE-2 (R2), and ROUGE-L (RL) F$_1$
scores. Extractive systems are in the first block and
abstractive systems in the second. Table cells are filled with ---
whenever results are not available. Models marked with $^{\ast}$
are not directly comparable to ours as they are based on
an anonymized version of the dataset.\label{tab:cnndm}}
\vspace{-0.2cm}
\end{table*}
\section{Results}
\label{sec:results}
We report results using automatic metrics in
Table~\ref{tab:cnndm}. The top part of the table compares \refresh\
against related extractive systems. The bottom part reports the
performance of abstractive systems. We present three variants of
\textsc{Lead}: one computed by us, and the other two reported in
\newcite{nallapati17} and \newcite{see-acl17}. Note that
they vary slightly due to differences in the preprocessing of the
data. We report results on the CNN and DailyMail datasets and their
combination (CNN$+$DailyMail).
\paragraph{Cross-Entropy vs Reinforcement Learning}
The results in Table~\ref{tab:cnndm} show that \refresh\ is superior
to our \textsc{Lead} baseline and extractive systems across datasets
and metrics. It outperforms the extractive system of
\newcite{jp-acl16} which is trained on individual labels. \refresh\
is not directly comparable with \newcite{nallapati17} as they generate
anonymized summaries. Their system lags behind their \textsc{Lead}
baseline on ROUGE-L on the CNN+DailyMail dataset (35.5\% vs 35.3\%).
Also note that their model is trained on collective labels and has a
significant lead over \newcite{jp-acl16}. As discussed in
Section~\ref{sec:crossent} cross-entropy training on individual labels
tends to overgenerate positive labels leading to less informative and
verbose summaries.
\paragraph{Extractive vs Abstractive Systems}
Our automatic evaluation results further demonstrate that
\refresh\ is superior to abstractive systems
\cite{chenIjcai-16,nallapati-signll16,see-acl17,tanwan-acl17} which
are all variants of an encoder-decoder architecture
\cite{sutskever-nips14}. Despite being more faithful to the actual
summarization task (hand-written summaries combine several pieces of
information from the original document), abstractive systems lag
behind the \textsc{Lead} baseline. \newcite{tanwan-acl17} present a
graph-based neural model, which manages to outperform \textsc{Lead} on
ROUGE-1 but falters when higher order ROUGE scores are used. Amongst
abstractive systems \newcite{see-acl17} perform best. Interestingly,
their system is mostly extractive, exhibiting a small degree of
rewriting; it copies more than 35\% of the sentences in the source
document, 85\% of 4-grams, 90\% of 3-grams, 95\% of bigrams, and 99\%
of unigrams.
\begin{table}[t!]
\center{\footnotesize
\begin{tabular}{|@{~}l@{~~}| c@{~~} c@{~~} c@{~~} c@{~~} |c@{~}| }
\hline
Models & 1st & 2nd & 3rd & 4th & QA\\ \hline
\textsc{Lead} & 0.11 & 0.21 & \textbf{0.34} & {0.33} & 36.33\\
\newcite{see-acl17} & 0.14 & 0.18 & 0.31 & \textbf{0.36} & 28.73 \\
\refresh\ & 0.35 &\textbf{0.42} & 0.16 & 0.07 & \textbf{66.34} \\
\textsc{Gold} & \textbf{0.39} & 0.19 & 0.18 & 0.24 & ---\\
\hline
\end{tabular}}
\caption{System ranking and QA-based evaluations. Rankings (1st,
2nd, 3rd and 4th) are shown as proportions. Rank 1 is the best and
Rank 4, the worst. The column QA shows the percentage of questions
that participants answered correctly by reading system
summaries.\label{tab:heval}}
\vspace{-0.2cm}
\end{table}
\paragraph{Human Evaluation: System Ranking}
Table~\ref{tab:heval} shows, proportionally, how often participants
ranked each system, 1st, 2nd, and so on. Perhaps unsurprisingly,
human-authored summaries are considered best (and ranked 1st 39\% of
the time). \refresh\ is ranked 2nd best followed by \textsc{Lead} and
\newcite{see-acl17} which are mostly ranked in 3rd and 4th places. We
carried out pairwise comparisons between all models in
Table~\ref{tab:heval} to assess whether system differences are
statistically significant. There is no significant difference between
\textsc{Lead} and \newcite{see-acl17}, and \refresh\ and \textsc{Gold}
(using a one-way ANOVA with post-hoc Tukey HSD tests; \mbox{$p <
0.01$}). All other differences are statistically significant.
\paragraph{Human Evaluation: Question Answering}
The results of our QA evaluation are shown in the last column of
Table~\ref{tab:heval}. Based on summaries generated by \refresh,
participants can answer 66.34\% of questions correctly. Summaries
produced by \textsc{Lead} and the abstractive system of
\newcite{see-acl17} provide answers for 36.33\% and 28.73\% of the
questions, respectively. Differences between systems are all
statistically significant (\mbox{$p < 0.01$}) with the exception of
\textsc{Lead} and \newcite{see-acl17}.
Although the QA results in Table~\ref{tab:heval} follow the same
pattern as ROUGE in Table~\ref{tab:cnndm}, differences among systems
are now greatly amplified. QA-based evaluation is more focused and a
closer reflection of users' information need (i.e.,~to find out what
the article is about), whereas ROUGE simply captures surface
similarity (i.e.,~\mbox{$n$-gram} overlap) between output summaries
and their references. Interestingly, \textsc{Lead} is considered
better than \newcite{see-acl17} in the QA evaluation, whereas we find
the opposite when participants are asked to rank systems. We
hypothesize that \textsc{Lead} is indeed more informative than
\newcite{see-acl17} but humans prefer shorter summaries. The average
length of \textsc{Lead} summaries is 105.7~words compared to~61.6 for
\newcite{see-acl17}.
\section{Related Work}
Traditional summarization methods manually define features to rank
sentences for their salience in order to identify the most important
sentences in a document or set of documents
\cite{Kupiec:1995binary,mani2001automatic,radev-lrec2004,filatova-04event,nenkova-06,SparckJones:2007}.
A vast majority of these methods learn to score each sentence
independently
\cite{Barzilay97usinglexical,Teufel97sentenceextraction,erkan:2004:lexrank,Mihalcea04TextRank,Shen:2007:IJCAI,Schilder:2008:fastsum,Wan:2010:urank}
and a summary is generated by selecting top-scored sentences in a way
that is not incorporated into the learning process. Summary quality
can be improved heuristically \cite{Yih:2007:MSM}, via max-margin
methods \cite{Carbonell:1998:UMD,Li:2009:EDC}, or via integer linear
programming
\cite{woodsend-acl10,berg:2011,Woodsend:2012:emnlp,Almeida:acl13,parveen:2015:tgraph}.
Recent deep learning methods
\cite{krageback-cvsc14,Yin-ijcai15,jp-acl16,nallapati17} learn
continuous features without any linguistic preprocessing (e.g., named
entities). Like traditional methods, these approaches also suffer from
the mismatch between the learning objective and the evaluation
criterion (e.g., ROUGE) used at test time. In comparison, our
neural model globally optimizes the ROUGE evaluation metric through a
reinforcement learning objective: sentences are highly ranked if they
occur in highly scoring summaries.
Reinforcement learning has been previously used in the context of
traditional multi-document summarization as a means of selecting a
sentence or a subset of sentences from a document
cluster. \newcite{Ryang:2012} cast the sentence selection task as a
search problem. Their agent observes a state (e.g.,~a candidate
summary), executes an action (a transition operation that produces a
new state selecting a not-yet-selected sentence), and then receives a
delayed reward based on $\mbox{tf}*\mbox{idf}$. Follow-on work
\cite{Rioux:emnlp14} extends this approach by employing ROUGE as part
of the reward function, while \newcite{hens-gscl15} further experiment
with \mbox{$Q$-learning}. \newcite{MollaAliod:2017:ALTA2017} adapted
this approach to query-focused summarization. Our model differs from
these approaches both in application and formulation. We focus solely
on extractive summarization, in our case states are documents (not
summaries) and actions are relevance scores which lead to sentence
ranking (not sentence-to-sentence transitions). Rather than employing
reinforcement learning for sentence selection, our algorithm performs
sentence ranking using ROUGE as the reward function.
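To make the formulation concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of REINFORCE-style training of a sentence ranker: a linear scorer samples an extract, receives a summary-level reward, and is updated with a baseline-subtracted policy gradient. Unigram F$_1$ overlap with the reference stands in for ROUGE, and the three-sentence document, its features, and the reference are all hypothetical.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def unigram_f1(selected, reference):
    """Toy stand-in for ROUGE-1 F1: unigram overlap with the reference."""
    if not selected:
        return 0.0
    overlap = len(selected & reference)
    p, r = overlap / len(selected), overlap / len(reference)
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

# Hypothetical 3-sentence document: word sets plus hand-made 2-d
# features (bias, content score) -- purely illustrative.
sentences = [{"summit", "leaders", "agree"},
             {"weather", "was", "mild"},
             {"deal", "signed", "today"}]
features = [[1.0, 0.9], [1.0, 0.1], [1.0, 0.8]]
reference = {"summit", "leaders", "deal", "signed"}

w = [0.0, 0.0]                 # linear sentence scorer
lr, baseline = 0.2, 0.0
for _ in range(3000):
    probs = [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for x in features]
    actions = [1 if random.random() < p else 0 for p in probs]
    picked = set().union(*[s for s, a in zip(sentences, actions) if a])
    reward = unigram_f1(picked, reference)          # summary-level reward
    baseline = 0.9 * baseline + 0.1 * reward        # moving-average baseline
    adv = reward - baseline
    for j in range(len(w)):                         # REINFORCE update
        w[j] += lr * adv * sum((a - p) * x[j]
                               for x, a, p in zip(features, actions, probs))

probs = [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for x in features]
```

After training, the off-topic second sentence should score below the two salient ones, mirroring how a summary-level reward ranks sentences by their contribution to good extracts rather than by individual labels.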
The REINFORCE algorithm \cite{Williams:1992} has been shown to improve
encoder-decoder text-rewriting systems by allowing direct
optimization of a non-differentiable objective
\cite{ranzato-arxiv15-bias,li-emnlp-16,paulus-socher-arxiv17} or to
inject task-specific constraints
\cite{xingxing-arxiv-17,nogueira-cho:2017:EMNLP2017}. However, we are
not aware of any attempts to use reinforcement learning for training a
sentence ranker in the context of extractive summarization.
\section{Conclusions}
\label{sec:conclusions}
In this work we developed an extractive summarization model which is
globally trained by optimizing the ROUGE evaluation metric. Our
training algorithm explores the space of candidate summaries while
learning to optimize a reward function which is relevant for the task
at hand. Experimental results show that reinforcement learning offers
an effective means of steering our model towards generating
informative, fluent, and concise summaries, outperforming
state-of-the-art
extractive and abstractive systems on the CNN and DailyMail
datasets. In the future we would like to focus on smaller discourse
units \cite{rst} rather than individual sentences, modeling
compression and extraction
jointly.
\paragraph{Acknowledgments} \begin{footnotesize} We gratefully
acknowledge the support of the European Research Council (Lapata;
award number 681760), the European Union under the Horizon 2020
SUMMA project (Narayan, Cohen; grant agreement 688139), and Huawei
Technologies (Cohen). The research is based upon work supported by
the Office of the Director of National Intelligence (ODNI),
Intelligence Advanced Research Projects Activity (IARPA), via
contract FA8650-17-C-9118. The views and conclusions contained
herein are those of the authors and should not be interpreted as
necessarily representing the official policies or endorsements,
either expressed or implied, of the ODNI, IARPA, or the
U.S. Government. The U.S. Government is authorized to reproduce and
distribute reprints for Governmental purposes notwithstanding any
copyright annotation thereon.\end{footnotesize}
\section{Introduction}
\label{intro}
Breaking of the quantum Hall state in a 2D electron gas (2-DEG) is an intriguing topic as it mixes issues of edge and bulk transport. Several different models for quantum Hall state breakdown mechanisms have been proposed, and they can be grouped into two main categories: bootstrap electron heating (BSEH) and quasi-elastic inter-Landau level scattering (QUILLS). There are experimental results supporting both of these views and no consensus has been reached yet. While BSEH is usually associated with the edge states, the QUILLS mechanism is founded on the temperature dependence of the conductivity $\sigma_{xx}$ in the bulk of the 2-DEG. Conveniently, in the Corbino sample geometry there are no continuous edge states across the sample, which makes this geometry particularly suitable for testing theories dependent on the bulk properties. Therefore, the Corbino geometry, implemented using an ultimate 2D material such as monolayer graphene, forms an excellent platform for studying the bias-induced breakdown of the quantum Hall state.
The present paper extends the work of Ref. \cite{Laitinen2017} and concentrates on the cross-over from the gyrotropic Zener tunneling (inter-LL Zener tunneling) to avalanche type of transport, and finally to nearly ohmic behavior. In our IV characteristics, the Zener tunneling regime ends in a sharp increase of current with the bias voltage, which is quite well in line with the expectations from the BSEH theories. We address this cross-over regime using low frequency current fluctuations (frequency $f\leq 10$ Hz) and shot noise $S_I$ experiments (over the microwave band $f=650-900$ MHz) in order to obtain further information on the charge transfer processes underlying these phenomena. Our results indicate strong low-frequency noise in the rapidly growing, steep sections of the IV curves, which we interpret as bunching of electrons in the avalanche generation of the BSEH regime. This low-frequency noise grows as the average current squared $\langle I \rangle^2$, which is typical for switching type of noise, as well as for resistance fluctuations with a uniform energy spectrum. In our region of interest, the inevitable contact resistance fluctuations can be neglected, and we analyze the observed low-frequency Lorentzian noise spectrum as a switching process. This analysis allows us to determine characteristic values for the basic transition rates in the switching process. At microwave frequencies, on the other hand, the noise is close to regular shot noise, although we find levels predominantly below the full Poissonian value. The amount of shot noise at microwave frequencies depends on the correlations among the charge carriers: with full temporal correlations the zero-frequency shot noise vanishes \cite{Blanter2000}. Our results indicate that close to the onset of the avalanche regime there are substantial correlations between the carriers, whereas approximately Poissonian noise is reached when the avalanche pulses start to overlap each other.
\subsection{Inter-Landau level Zener tunneling}
The zero-energy Landau level (see, \textit{e.g}. Ref. \cite{ezawa}) in suspended graphene provides a well-isolated setting for investigations of the breakdown of the quantum Hall state at magnetic fields $B$ around 1 T and even below. At small $B$, the energy scales in this unique quantum Hall state can be kept small, which allows sensitive quasi-elastic studies of the breakdown at mK temperatures. At higher temperatures, however, inelastic processes such as phonon-assisted scattering processes may start to dominate the breakdown. The zero-energy level is four-fold degenerate with respect to the two values of true electron spin and two values of pseudospin that describes the distribution of electrons between two valleys in the graphene Brillouin zone. The degeneracy, however, is lifted by Coulomb interactions and the original four-fold degenerate zero-energy states split into two states with a gap between them. According to theoretical considerations \cite{ezawa}, the insulator state was expected to be ferromagnetic, although the experiments of Refs. \cite{Giesbers2009} and \cite{young} did not confirm this. Independently from the character of the gap, the zero-energy Landau state is an insulator with the gap $\Delta$ of the order of the characteristic Coulomb energy equal to $e^2/\epsilon \ell_B$ in graphene. Here $\epsilon$ is the dielectric constant, $\ell_B =\sqrt{\Phi_0/2\pi B}$ is the magnetic length, $B$ is the magnetic field, and $\Phi_0=h/e$ is the single-electron flux quantum.
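For orientation, the scales quoted above are easy to evaluate numerically. The sketch below uses SI units, so the Coulomb scale is written $e^2/4\pi\epsilon_0\epsilon_r\ell_B$, and the effective dielectric constant $\epsilon_r \approx 5$ is an illustrative assumption rather than a value taken from the text.

```python
import math

e    = 1.602176634e-19      # C
h    = 6.62607015e-34       # J s
eps0 = 8.8541878128e-12     # F/m
kB   = 1.380649e-23         # J/K

def magnetic_length(B):
    """l_B = sqrt(Phi_0 / (2*pi*B)) with Phi_0 = h/e, in meters."""
    phi0 = h / e
    return math.sqrt(phi0 / (2 * math.pi * B))

def coulomb_energy_K(B, eps_r=5.0):
    """Characteristic Coulomb scale e^2/(4*pi*eps0*eps_r*l_B), expressed
    in kelvin; eps_r ~ 5 is an assumed effective dielectric constant."""
    E = e ** 2 / (4 * math.pi * eps0 * eps_r * magnetic_length(B))
    return E / kB

# l_B ~ 26 nm at B = 1 T, shrinking as 1/sqrt(B); the Coulomb scale is
# then of order 100 K, far above mK temperatures.
```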
Any insulator can be driven to a conducting state using a high voltage $V>V_{cr}$, where $V_{cr}$ denotes the threshold for dielectric breakdown. At bias voltages $V \gg V_{cr}$, the $IV$ curve becomes nearly linear (ohmic regime). At $V \ll V_{cr}$, on the other hand, the conductance is strongly suppressed and distinctly nonlinear. At low temperatures, an exponentially small current $I$ emerges due to Zener tunneling between the two bands \cite{Zener1934}, creating an electron in the empty upper band and a hole in the full lower band \cite{ziman}. Recently, we demonstrated \cite{Laitinen2017} that the regular equation for the Zener tunneling current $I\propto e^{-V_Z/V}$, with $V_Z \propto \Delta^{3/2}$, has to be replaced by a gyrotropic law given by
\begin{equation}
I\propto e ^{-(V_Z/V)^2 },
\ee{vor}
where $V_Z=\frac{eBd}{\sqrt{8\pi}\epsilon\Phi_0}$. The formula follows from the gyrotropic Zener tunneling theory developed for the zero-energy Landau level of graphene in a strong magnetic field \cite{Laitinen2017}. This behavior provides evidence that the quantum tunneling processes here are governed - instead of the particle mass - by the gyrotropic force on a particle. Such a behavior is similar to the motion of quantized vortices in superfluids, for example in the nucleation of vortices at a plane boundary \cite{Vol72,Son73}, with the Magnus force (analog of the Lorentz force on an electron) balancing the external force \cite{EBS}. The Zener tunneling processes in the quantum Hall regime of a 2-DEG are central in the quasi-elastic inter-Landau level scattering (QUILLS) \cite{Heinonen1984,Eaves1984} and in the magnetoresistance oscillations in the ohmic regime \cite{Zener,Bykov2012}. The gyrotropic theory, however, is distinct from them in its implications for the IV-characteristics. In small constrictions QUILLS type of behavior has been observed at one-micron dimensions \cite{Bliek1996,Makarovsky2002}, i.e. at similar length scales where we observe the gyrotropic behavior in our experiments.
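A practical consequence is that $\ln I$ is linear in $1/V^2$ for the gyrotropic law, rather than in $1/V$ as for the standard Zener form, so $V_Z$ can be read off a slope. A short sketch with synthetic (not experimental) numbers illustrates this:

```python
import math

# Synthetic IV following the gyrotropic law I = I0 * exp(-(V_Z/V)^2),
# with illustrative parameters (not fitted experimental values).
I0, V_Z = 1e-9, 0.05                       # A, V
V = [0.02 + 0.002 * k for k in range(40)]
I = [I0 * math.exp(-(V_Z / v) ** 2) for v in V]

# ln(I) is linear in 1/V^2 with slope -V_Z^2; a two-point estimate
# of the slope recovers V_Z exactly for noiseless data.
x = [1.0 / v ** 2 for v in V]
y = [math.log(i) for i in I]
slope = (y[-1] - y[0]) / (x[-1] - x[0])
V_Z_est = math.sqrt(-slope)
```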
In the studies of the quantum Hall state breakdown, the Corbino geometry has a distinct advantage over the Hall bar geometry, because no edge states persist and the breakdown is dominated by transport processes in the bulk. The Corbino geometry has been used for numerous studies of phenomena related to transitions between Landau levels in 2D electron gases, most notably in the works of microwave-induced resistance oscillations and zero-resistance states \cite{Yang2003}, as well as phonon-induced resistance oscillations \cite{Liu2014}. It has also been employed to investigate standard-type Zener tunneling between different Landau levels \cite{Goran2013} and the bootstrap electron heating (BSEH) model \cite{Komiyama1985}, which has experimentally been found to be a reasonable route to breakdown of the quantum Hall effect (QHE) \cite{Ebert1983,Cage1983,Nachtwei1999,Komiyama2000a}. Recently, the Corbino geometry has also been employed for measuring current fluctuations in order to investigate the nature of bulk transport phenomena at the onset of the integer QHE breakup. In their low-frequency current noise experiments on the first Landau level of a 2-DEG heterostructure, Kobayashi and coworkers \cite{Chida2014,Hata2016} found strong bunching of electrons at the onset of the breakdown of the quantum Hall state, supporting the BSEH view.
\subsection{Avalanche type of transport} \label{AVtheory}
Avalanche type of current transport can be considered as one type of switching transport. Such transport is illustrated in Fig. \ref{fig:switchingNoise}a, where the current consists of a random sequence of two current levels 0 and $I_0$. The average switching rate upwards is denoted by $1/\tau_0$ while the rate downwards is $1/\tau_s$, which means that the average duration of avalanche pulses is $\tau_s$. Following e.g. Ref. \cite{Kogan}, one can derive for the avalanche type switching noise spectral density $S_I^{AV}$ the equation:
\begin{equation}
S_I^{AV}(\omega) = 4\frac{(\tau_0 \tau_s)^2}{(\tau_0 + \tau_s)^3}I_0^2\frac{1}{1+\omega^2\left( \frac{\tau_0 \tau_s}{\tau_0 + \tau_s} \right)^2} .
\end{equation}
We are interested in the cross-over from Zener tunneling events to avalanche type of transport. Initially at small currents, the avalanche events are rare and we can make the approximation $\tau_0 \gg \tau_s$. Using this approximation and recognizing that $\left\langle I\right\rangle = \left(\frac{\tau_s}{\tau_0}\right) I_0$, we obtain the spectrum:
\begin{equation} \label{limit1}
S_I^{AV}(\omega) = 4\tau_0 \left\langle I\right\rangle^2 \frac{1}{1+\omega^2\tau_s^2},
\end{equation}
which is illustrated schematically in Fig. \ref{fig:switchingNoise}b. The length of the avalanche pulse determines the corner frequency $\omega_c \propto 1/\tau_s$ in this regime and this possible variation in spectral density is indicated in Fig. \ref{fig:switchingNoise}b at three values of $\omega_c$.
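The expressions above can be checked numerically. The sketch below implements the full switching spectrum, verifies that its zero-frequency value approaches the rare-pulse limit of Eq. \ref{limit1} when $\tau_0 \gg \tau_s$, and runs a short event-driven simulation of the telegraph signal to confirm the mean current; the time constants and $I_0$ are illustrative only.

```python
import math
import random

random.seed(1)

def S_switch(omega, tau0, taus, I0):
    """Spectral density of an asymmetric random telegraph signal
    switching between 0 and I0 with rates 1/tau0 (up) and 1/taus (down)."""
    tau_c = tau0 * taus / (tau0 + taus)          # correlation time
    return (4 * (tau0 * taus) ** 2 / (tau0 + taus) ** 3 * I0 ** 2
            / (1 + (omega * tau_c) ** 2))

tau0, taus, I0 = 1e-6, 10e-9, 20e-9              # illustrative values only

# Rare-pulse limit (tau0 >> taus): S(0) ~ 4*tau0*<I>^2, <I> = (taus/tau0)*I0
I_mean = taus / tau0 * I0
S0_approx = 4 * tau0 * I_mean ** 2

# Event-driven simulation of the two-level signal: sanity check on <I>
t_total, t_on, state = 0.0, 0.0, 0
while t_total < 1e-3:                            # 1 ms of simulated time
    dwell = random.expovariate(1 / (taus if state else tau0))
    if state:
        t_on += dwell
    t_total += dwell
    state ^= 1
I_sim = I0 * t_on / t_total
```

The half-power point of the Lorentzian sits at $\omega = 1/\tau_c$ with $\tau_c = \tau_0\tau_s/(\tau_0+\tau_s) \approx \tau_s$ in the rare-pulse limit, which is the corner frequency indicated in the figure.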
\begin{figure}[htb]
\includegraphics[width=0.8\textwidth]{Fig1.pdf}
\centering
\caption{a) Model of low-frequency avalanche current noise derived from a switching sequence of rectangular current pulses with the nominal height of $I_0$ having asymmetric upward and downward transition rates $1/\tau_0$ and $1/\tau_s$, respectively. This corresponds to pulses with mean duration $\tau_s$ separated by a waiting time $\tau_0$ on average. b) Schematic illustration of Eq. \ref{limit1} at a few different lengths of the avalanche pulse $\tau_s$ (inverse of transition rate): the chosen specific transition rates result in a spectrum which is flat at low frequencies and rolls off linearly on the log-log scale of the illustration.}
\label{fig:switchingNoise}
\end{figure}
The time trace of Fig.~\ref{fig:switchingNoise}a assumes that there is only one scenario for producing the avalanche sequence of charge carriers. In reality, there might be several parallel scenarios, which would then lead to a sum of Lorentzian spectra, each with a different weighting factor. Such a scenario could eventually lead to a $1/f$ type of noise spectrum.
The above low-frequency model neglects fluctuations within the avalanche pulse. The current during the avalanche pulse will consist of individual charge carrier events, the spacing of which may vary in time. These fluctuations lead to wide-band shot noise which becomes important at frequencies $\omega \gg 1/\tau_s$ where the Lorentzian fluctuator spectrum of Eq. \ref{limit1} has already decreased below the shot noise level. The shot noise intrinsic to the avalanche pulse will depend on the correlations between charge carriers in the pulse: if the pulse is fully correlated, there is no shot noise \footnote{However, there will be a peak in the noise power spectrum at the frequency corresponding to the inverse of the arrival period of the correlated charge carriers.}, while without any correlation there should be independent carrier emissions with a full Poissonian shot noise like in a tunnel junction. Hence, the Fano factor for the current during the avalanche pulse is expected to vary between $F = 0$ and $F = 1$. Using the average current $\langle I \rangle=I_0\frac{\tau_s}{\tau_s+\tau_0}$, the shot noise spectral density arising from the current pulses can be written as:
\begin{equation}\label{noise_shot_avalanche}
S_{sh}^{AV}(\omega)=F 2e\langle I \rangle.
\end{equation}
A further possibility would be to have an effective tunneling charge which is different from one electron. Here we assume this possibility to be included in the value of the Fano factor.
\section{Experimental techniques}
Our measurements down to 10 mK were performed in a BlueFors LD-400 dilution refrigerator. The cryostat was equipped with DC/audio-frequency measurement circuitry for IV characteristics and low-frequency noise. The bias leads were connected via microwave bias-T components to the sample, which facilitated simultaneous microwave noise measurements along a separate 50\,$\Omega$ output channel. The low frequency measurement leads were twisted pair phosphor-bronze wires supplemented by three-stage $RC$ filters with a nominal cut-off given by $R=150$ $\Omega$ and $C=10$ nF. However, due to the high impedance of the measured quantum Hall devices, the actual cutoff depends on the resistance of the sample. For magneto-conductance scans (see Fig. \ref{fig:basics}), we used an AC peak-to-peak current excitation of $0.1$ nA at $f=3.333$ Hz.
\subsection{Samples and their characterization}
Resists with different selectivities (lift-off-resist LOR for support and PMMA for lithography) were employed to facilitate our sample fabrication (see the supplemental material of Ref. \cite{Kumar2016}). We exfoliated graphene (Graphenium, NGS Naturgraphit GmbH) using a heat-assisted technique~\cite{Huang2015}. Monolayer graphene flakes were located on the basis of their optical contrast and the identification was verified using a Raman spectrometer with He-Ne laser (633 nm). The first contact defining the outer rim of the Corbino disk (see Fig. \ref{fig:basics}a) was made in the regular manner \cite{Tombros2011}, and later the inner contact was fabricated together with a self-standing bridge to connect the inner electrode to a bonding pad. The strongly doped silicon Si++ substrate with a 285-nm layer of thermally grown SiO$_2$ provided the back gating electrode for the sample.
\begin{figure}[htb]
\includegraphics[width=0.6\textwidth]{Fig2a.png}
\includegraphics[width=1.3\textwidth]{Fig2b.pdf}
\centering
\caption{a) Scanning electron micrograph of sample EV3 (on the left side); the scale is indicated by the white bar. b) A magneto-conductance Landau ``fan plot'' recorded on sample C2 on the plane spanned by charge density $n$ and magnetic field $B$. The charge density range corresponds to gate voltages $V_g \in$ [-60,60] V. The main filling factors are indicated in the picture.}
\label{fig:basics}
\end{figure}
Initially after the fabrication process, our suspended devices tend to be $p$-doped in the first resistance $R$ \textit{vs.} gate voltage $V_g$ scans. Following the initial characterization, the samples were cooled down to the $T = 10$ mK base temperature of the dilution refrigerator. Prior to DC and noise characterization, all devices were current annealed at the base temperature. These samples on LOR were typically annealed at a bias voltage of 1.6$\pm$0.1 V, which is comparable with the optimal annealing voltage of our HF etched, rectangular two-lead samples \cite{Laitinen2014}. Subsequently, the samples were characterized by lock-in conductance measurement $G(V_g)$ to determine the mobility, which was found to be $\mu > 10^5$ cm$^2$/Vs. The gate voltage was converted into charge carrier density by $n = (V_g-V_g^D)C_g/e$, where $V_g^D$ denotes the offset of the Dirac point from $V_g = 0$ V; typically these samples had $V_g^D \simeq -2$ V. A Landau fan plot, a conductance map on the $n-B$ plane, is displayed in Fig. \ref{fig:basics}b for sample C2. The gate capacitance $C_g$ was obtained by fitting this Landau fan plot to the calculated locations of the higher Landau levels on the $n-B$ plane.
The results of this paper cover two measured samples, C2 and EV3, with practically identical results. The inner and outer diameters of the EV3 Corbino ring were $d=0.8$ $\mu$m and $D=3.2$ $\mu$m, respectively. For the sample C2 the corresponding dimensions were $d=0.9$ $\mu$m and $D=2.8$ $\mu$m. The air gap between the graphene and the substrate surface, i.e. the LOR double layer thickness, was around 500 nm for both samples. The distance between the back gate and graphene was compared with the value obtained from the gate capacitance by using the parallel plate capacitor model, which resulted in a reasonable agreement.
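As a consistency check on the gate coupling described above, one can compare the fitted $C_g$ with a simple series-capacitor model of the 285 nm SiO$_2$ layer plus the $\sim$500 nm vacuum gap. The sketch below uses the textbook $\epsilon_{ox} = 3.9$ for SiO$_2$ (an assumed value, not taken from the text) and gives densities of order $10^{11}$--$10^{12}$ cm$^{-2}$ at the largest gate voltages.

```python
eps0 = 8.8541878128e-12   # F/m
e = 1.602176634e-19       # C

def series_gate_capacitance(d_vac=500e-9, d_ox=285e-9, eps_ox=3.9):
    """Per-area gate capacitance (F/m^2) of the ~500 nm vacuum gap and
    the 285 nm SiO2 layer in series; eps_ox = 3.9 is an assumption."""
    c_vac = eps0 / d_vac
    c_ox = eps_ox * eps0 / d_ox
    return 1.0 / (1.0 / c_vac + 1.0 / c_ox)

def carrier_density(Vg, VgD=-2.0):
    """n = (Vg - Vg^D) * C_g / e in 1/m^2, with Vg^D ~ -2 V as quoted."""
    return (Vg - VgD) * series_gate_capacitance() / e

# At Vg = 60 V this gives n ~ 6e15 m^-2, i.e. ~6e11 cm^-2.
```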
\subsection{Noise measurements}
The correlations in electron dynamics can be quantified using the Fano factor $F=S_I/S_I^P$ for the relative magnitude of current fluctuations \cite{Blanter2000}. The value $F = 1$ corresponds to Poissonian noise $S_I^P$, indicating non-correlated electron motion without any interactions between the electrons. For correlated motion, for example due to the Pauli exclusion principle, the Fano factor becomes $F < 1$ and ultimately vanishes altogether for fully ballistic transport channels. Superpoissonian noise, on the other hand, is an indication of bunching of particles \cite{Blanter2000}, which can be observed e.g. in avalanche diodes \cite{Reulet2009}. In avalanche type of transport, charge carriers are grouped together through carrier multiplication. Such avalanche pulses can be considered at long time scales as a single charge carrier, which means that the resulting shot noise (assuming random triggering of the pulses) can be written as $S_I = F_{AV}2e\langle I \rangle$, where $F_{AV}$ is related to the average charge of the pulse $N_{tot}e = \tau_s I_0$ by $F_{AV} = 2N_{tot}$ according to Eq. \ref{limit1}.
On short time scales, the avalanche pulses do have fluctuations that originate from the specific excitation processes leading to the carrier multiplication. It is assumed in BSEH theories that thermal excitation is relevant in the carrier multiplication processes. These thermal processes are strongly temperature dependent, but they can last on the order of microseconds at the lowest temperatures (See, \textit{e.g.} Ref. \cite{Walsh2017}). Consequently, shot noise experiments around frequencies of 1 GHz are able to probe possible thermal-relaxation-induced correlations in the avalanche regime as the temperature due to the Joule heating of the sample increases with current.
Our low-frequency noise measurements were carried out using voltage bias up to $100$ mV. The current was recorded using a transimpedance amplifier (Stanford Research SR570, gain $10^6$ V/A) and its fluctuations were measured using a SRS 785 FFT analyzer. For the low-frequency noise power spectral density, we employed fast Fourier transform (FFT) with $800$ points spanning the range $125$ mHz - $100$ Hz, using a total measurement time of $150$ s per bias point. In our experiments, we are limited in the measurement bandwidth by the $RC$ cut-off, dependent on the resistance of the sample $R$ and the total capacitance of the lines $C\simeq 30$ nF. Hence, in the results section, we characterize the bias dependence of the low-frequency noise at $10$ Hz which is the maximum frequency not yet appreciably influenced by the $RC$ cut-off frequency.
The high frequency noise was measured over frequencies $650 - 900$ MHz. Our microwave noise techniques follow the basic principles outlined in Refs. \cite{wu2006,wu2007,danneau2008,Nieminen2016}. In this work, we measured the excess shot noise $S_I(V) - S_I(0)$ using DC bias alone, while the zero-bias value $S_I(0)$ was measured intermittently in the middle of each bias sweep for a drift correction. In order to avoid external spurious disturbances from mobile phones, the set-up was enclosed in a Faraday cage. The noise signal was first amplified by a cryogenic low-noise SiGe amplifier (Caltech CITLF3, gain 36 dB) with a nominal noise temperature of $T_{noise} \approx 4$ K. A circulator was used in both channels to block the amplifier noise from reaching the sample, see Fig. \ref{fig:shotNoiseSchem}. The noise detection channel ends with two room-temperature amplifiers (Mini-Circuits ZRL-1150LN+, gain 32 dB), yielding a total gain of 92 dB, including an 8 dB attenuator used to limit the power to a suitable level. Finally, the signal was mixed down by a 780 MHz local oscillator and digitized using ADL-5380 IQ-mixers and a 125 MS/s AlazarTech ATS9440 digitizer card. The digitized data were auto- and cross-correlated in real time on a computer with GPU-accelerated data processing. Custom software for the processing was written in CUDA C.
\begin{figure}[t]
\includegraphics[width=0.8\textwidth]{Fig3.pdf}
\centering
\caption{Schematic of the measurement setup. The low- and high-frequency circuits are separated from each other by bias-Tees at low temperatures. The high-frequency side contains a microwave shot noise measurement hooked up to the contact electrodes of the Corbino ring, while the low frequency side is employed for DC-biasing as well as for measuring low-frequency noise from the outer contact. There are separate high frequency coaxial cables with 50 $\Omega$ terminators acting as thermal noise sources, which can be coupled to the amplification chain through a microwave switch. The cooled low-noise amplifiers (LNA, Caltech CITLF3) and the room temperature $\mu$W-amplifiers (2 $\times$ Mini-Circuits ZRL-1150LN+) provide a gain of 100 dB in total.}
\label{fig:shotNoiseSchem}
\end{figure}
A microwave switch allowed us to measure either shot noise from the graphene sample or thermal noise from $Z_0=50$ $\Omega$ resistors located at the mixing chamber (10 mK) and at the still plate (800 mK). These resistors allowed us to calculate the noise temperature of the measurement channels, and thereby calibrate the shot noise level. The system noise temperature of a measurement line amounts to:
\begin{equation}
T_{N} = \frac{T_{hot}-\frac{P_{hot}}{P_{cold}}T_{cold}}{\frac{P_{hot}}{P_{cold}}-1},
\end{equation}
where $P$ and $T$ refer to the thermal noise of a 50 $\Omega$ resistor and its physical temperature, while the labels ``hot''/``cold'' refer to the still plate (800 mK) and mixing chamber (10 mK) positions in the refrigerator, respectively.
One can use the system noise temperature $T_N$ to estimate the equivalent shot noise temperature from the excess noise correlation factor $C_{12}' = \frac{C_{12}}{\sqrt{C_1^2C_2^2}}$, where $C_i$ is the autocorrelation of channel $i$ and $C_{12}$ is the un-normalized cross-correlation between channels 1 and 2. $C_{12}'$ is normalized using auto-correlations $C_i$ which are dominated by the system noise temperature ($2eIZ_0 \ll T_N$). Then, the noise temperature corresponding to the excess noise is given by:
\begin{equation}
T_e = C_{12}'T_N.
\end{equation}
Thus, the calibrated current noise spectral density ($S_I$ couples fully to $Z_0$ due to the circulator) is obtained from $S_I = 4 k_B T_e/Z_{0}$, where $Z_0=50$ $\Omega$ is the characteristic impedance of the microwave system.
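The calibration chain above amounts to two one-line formulas. A small sketch (with a hypothetical $T_N = 4$ K channel, matching the nominal amplifier noise temperature, and an illustrative value of $C_{12}'$) shows the Y-factor estimate and the conversion of the excess cross-correlation into a current noise density:

```python
kB = 1.380649e-23   # J/K
Z0 = 50.0           # ohm

def system_noise_temperature(P_hot, P_cold, T_hot=0.8, T_cold=0.01):
    """Y-factor estimate of T_N from noise powers measured with the
    800 mK (hot, still plate) and 10 mK (cold, mixing chamber) loads."""
    Y = P_hot / P_cold
    return (T_hot - Y * T_cold) / (Y - 1)

def excess_current_noise(C12_norm, T_N):
    """Map the normalized excess cross-correlation C12' to a noise
    temperature T_e = C12' * T_N, then to S_I = 4 kB T_e / Z0."""
    return 4 * kB * C12_norm * T_N / Z0

# With ideal powers proportional to (T_load + T_N), a T_N = 4 K channel
# gives Y = (0.8 + 4) / (0.01 + 4), from which T_N is recovered exactly.
```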
The validity of the calibration was tested by comparing the obtained small-bias Fano factor $F(V_g)$ at $B=0$ T to the theoretically predicted values \cite{Rycerz2009}. We reached an agreement within $\pm 20$ \%, but this discrepancy may reflect the insufficiency of the theoretical modelling rather than any inaccuracy of our calibration.
\section{Results}
\subsection{Low frequency switching noise}
The $IV$ characteristics of sample EV3 measured at $B$ = 5.6 T are displayed in Fig. \ref{fig:pinknoise}. The data are compared with the theoretical model for the gyrotropic tunneling of Eq. \ref{vor}, illustrated by the red curve. The model agrees with the data only at small bias, above which there is an abrupt increase in the current. Compared with the data at 2 T presented in Ref. \cite{Laitinen2017}, the current increase takes place at a smaller current, and the agreement with the gyrotropic Zener tunneling extends over a smaller range. As already tentatively discussed in Ref. \cite{Laitinen2017}, the steep increase in current signifies the onset of the avalanche type of transport, which is verified by the low-frequency noise behavior. The beginning and the end of the avalanche regime are denoted by arrows at 500 pA and 30 nA, respectively.
\begin{figure}[htb]
\includegraphics[width=0.43\textwidth]{Fig4a.pdf}
\includegraphics[width=0.56\textwidth]{Fig4b.pdf}
\centering
\caption{a) IV characteristics measured on the sample EV3 at $B$ = 5.6 T and a fit to the data using the gyrotropic Zener tunneling model of Eq. \ref{vor}; positive bias was applied to the inner Corbino contact while the outer was connected to virtual ground. b) The low-frequency noise at $f$ = 10 Hz measured at the same time as the IV data. The black dashed line denotes the quadratic $I^2$ behavior.}
\label{fig:pinknoise}
\end{figure}
The low-frequency noise recorded at 10 Hz is depicted in Fig. \ref{fig:pinknoise}b as a function of the bias current, extending across the avalanche type of transport towards ohmic behavior. The steep section of the IV curve between $0.5 - 30$ nA is seen to display noise that increases as $\langle I \rangle^2$ with bias current. This kind of current dependence is typical of switching noise as well as of resistance fluctuation noise~\cite{Kogan}. Because of the sharp cut-off of the $\langle I \rangle^2$ dependence, we argue that the observed behavior can be assigned to avalanche transport, which results in a switching type of noise. This type of pulse sequence was illustrated in Fig. \ref{fig:switchingNoise}a in Sect. \ref{AVtheory}. By fitting the maximum value of the low-frequency noise $S_I^{\textrm{max}} \simeq 10^{-22}$ A$^2$/Hz at 10--20 nA to the model of Eq. \ref{limit1}, we obtain $\tau_0 \sim 100$ ns for the time separation between the pulses in the fully developed switching sequence (assuming $\omega \tau_s \ll 1$). Using the earlier definitions, we observe that the low-frequency Fano factor corresponds to bunching with $F_{AV} \approx 1.3 \times 10^4$ at the noise peak \footnote{Here one needs to remember that this approximation assumes a white spectrum for the low-$f$ noise}.
Initially, only the size of the avalanche pulses grows with bias and produces an $\langle I \rangle^2$-dependent increase in the low-$f$ noise. However, towards the end of the avalanche regime at $I=30$ nA, the upwards transition rate $1/\tau_0$ also starts to change. This is seen as a decrease in the current noise by an order of magnitude, before the noise starts growing again as $\sim \langle I \rangle^2$. The ten-fold increase in the upwards transition rate brings the value of $\tau_0$ close to the value of $\tau_s$ (see below), which means a gradual overlapping of individual avalanche pulses. The second increase in low-$f$ noise is assigned to the contacts, which are known to have low-frequency resistance fluctuations even in the case of the best suspended samples \cite{Kumar2015}.
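The order of magnitude of the effective low-frequency Fano factor quoted above can be checked with a simple sketch, assuming the conventional definition $F_{AV} = S_I/(2e\langle I\rangle)$ (the exact value depends on the precise bias point and on $S_I^{\textrm{max}}$):

```python
# Sketch of the effective low-frequency Fano factor estimate, assuming
# the conventional definition F_AV = S_I / (2 e <I>). The values below
# are the order-of-magnitude numbers quoted in the text.
e = 1.602e-19          # elementary charge (C)
S_I_max = 1e-22        # maximum low-frequency noise (A^2/Hz)
I_bias = 15e-9         # bias current within the 10-20 nA noise peak (A)

F_AV = S_I_max / (2 * e * I_bias)
print(F_AV)            # ~2e4, of the same order as the quoted 1.3e4
```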
\subsection{Shot noise at microwave frequencies}\label{muWaveNoise}
To supplement the information on the low-frequency noise of the avalanche pulse, we have recorded shot noise at microwave frequencies $f=650-900$ MHz. The microwave shot noise data obtained on sample C2 at $B = 6$ T are illustrated in Fig. \ref{fig:shotnoise} together with the measured IV characteristics. The red curve in Fig. \ref{fig:shotnoise}a illustrates the behavior according to the gyrotropic Zener tunneling model at low bias. The arrows mark the beginning and the end of the avalanche transport regime at 200 pA and 20 nA, respectively.
\begin{figure}[htb]
\includegraphics[width=0.44\textwidth]{Fig5a.pdf}
\includegraphics[width=0.55\textwidth]{Fig5b.pdf}
\centering
\caption{a) IV characteristics measured on the sample C2 at $B$ = 6 T and a fit to the data using the gyrotropic Zener tunneling model of Eq. \ref{vor}. The arrows indicate the beginning and the end of the avalanche transport regime. b) The shot noise spectral density (excess noise) measured at microwave frequencies $f = 650-900$ MHz and recorded simultaneously with the IV data in the panel 5a. The arrows denote the same spots as in the IV picture; the increase of the shot noise across the avalanche regime is clearly seen. }
\label{fig:shotnoise}
\end{figure}
Fig. \ref{fig:shotnoise}b displays the shot noise spectral density (excess shot noise $S_I(V) - S_I(0)$) measured at microwave frequencies and recorded simultaneously with the IV data of Fig. \ref{fig:shotnoise}a; the arrows denote the same spots as in the IV picture. The shot noise displays a clear increase in the spectral density across the avalanche regime, in which a large enhancement in $I$ and bunching of electrons takes place. At the end of the avalanche regime, the Fano factor reaches $F \sim 1$, which points towards random Poissonian noise with uncorrelated charge carriers. Above the avalanche regime, there is a decrease in the shot noise power which is similar to, although weaker than, that observed in the low-frequency noise. Thus, we conclude that when bunching decreases, as deduced from the effective low-$f$ noise Fano factor $F_{AV}$, the microwave shot noise also decreases. The variation in the shot noise power at microwave frequencies, however, remains much weaker than in the observations at low frequencies, and we find $F \lesssim 1$ for the microwave excess noise in the avalanche regime, as well as above it.
The Fano factors for magnetic fields $B=6$ and 8 T as deduced from the noise power spectral density are displayed in Fig. \ref{fig:shotnoise2}a and b, respectively. Both data sets indicate an increase of the Fano factor when going deeper into the avalanche regime. Initially, just above the avalanche threshold, the Fano factor is small, $F \sim 0.2$ as expected for a time-correlated sequence of electrons within the avalanche pulse. The increase of $F \rightarrow 1$ within the avalanche regime is in line with a development of a single multiplication site to multiple ones, which would lead to reduced temporal correlations between electrons in the generated charge pulse, and thereby to an enhanced Fano factor; the observed maximum value $F = 1.2 \pm 0.2$, however, would suggest partly simultaneous triggering events of the multisite generation. Above the avalanche regime, the Fano factor clearly decreases to a level $F \simeq 0.5$ at bias voltages $\sim 40$ mV. This plateau is close to what one would expect for hot-electron transport in a diffusive conductor, but such theories exist only for the half-filled 2-DEG Landau level case, \textit{i.e.} for a composite fermion Fermi sea \cite{vonOppen1997}. At high bias, the Fano factor starts to diminish again, which indicates the presence of inelastic processes, such as the electron - phonon coupling, leading to a suppression of the noise. Our data display $F\simeq0.2$ at 100 mV and a power-law-like decrease as $V^{-1} \ldots V^{-2}$.
\begin{figure}[htb]
\centering
\includegraphics[width=0.6\textwidth]{Fig6.pdf}
\centering
\caption{The Fano factor as a function of the bias current $I$ for sample C2 at $B = $ 6 T and 8 T, obtained from the results such as in Fig. \ref{fig:shotnoise}b. The displayed data sets start from the beginning of the avalanche region, below which the noise measurement displays just the background noise of the measurement setup and $F$ cannot be defined. The end of the avalanche region is marked by the arrows for both data sets. Note the increase of the Fano factor within the avalanche regime.}
\label{fig:shotnoise2}
\end{figure}
In the BSEH model, heating of the electron gas plays a central role. The Joule heating of the electron gas has to be transported away either via phonons or via electronic thermal conduction. In the quantum Hall regime, the electronic conductance is weak, which makes the electronic thermal conductivity negligible in our breakdown experiments. Therefore, the thermal balance is governed by the electron - phonon coupling. In our suspended graphene, the main cooling channel at small energies is via acoustic phonons, while at large energies supercollisions with flexural phonons are the dominant process \cite{Laitinen2014}. In our present experiment, the electron - phonon coupling will increase with field as the density of states of electrons increases linearly with the magnetic field, assuming that the width of the Landau level remains fixed. However, there is an opposite tendency arising from the shrinking of the electron wave function with $B$, which makes it difficult to couple the long-wave-length acoustic phonons to the high-field electrons, resulting in a reduced increase in the coupling \footnote{In a regular 2-DEG heterostructure, an increase in the electron - phonon coupling by a factor of two is found between 2 and 9 T. \protect{\cite{Prasad1984}}}.
We have investigated the power required to initiate the avalanche type of transport by measuring both the critical current $I_c$ and the critical voltage $V_c$ for breaking the gyrotropic tunneling regime. In our data at fields $B > 2$ T displayed in Fig. \ref{fig:AvalancheBeginning}, we find a decrease in the heating power $P=I_c V_c$, \textit{i.e.} a smaller power $P$ is required to initiate the avalanche regime. In fact, the quantity $I_c V_c^2$ appears to be independent of the magnetic field. According to our shot noise results around the Dirac point, the electron-phonon coupling is approximately independent of the magnetic field at $B > 2$ T. These results together suggest that if the BSEH processes become active at some fixed temperature, then the heating power needs to be deposited into an area that decreases with the bias voltage. One possibility is that the power is dissipated into a region that extends between the two tilted Landau level bands: their spatial separation decreases with increasing bias.
\begin{figure}[tb]
\centering
\includegraphics[width=0.49\textwidth]{Fig7a.pdf}
\includegraphics[width=0.49\textwidth]{Fig7b.pdf}
\centering
\caption{Take-off point of the avalanche region, i.e. the point where the low-frequency noise begins to grow: a) the critical current $I_c$ as a function of magnetic field $B$, b) the corresponding critical voltage $V_c(B)$. Note that the data in Fig. \ref{fig:shotnoise2} deviate slightly from the data here; the data sets derive from different measurements. The deviation does not influence the conclusions on $I_c$ and $V_c$ at magnetic fields $B > 2$ T, which are based on the fitted solid lines given by $I_c=I_0 \times \exp{(-B/B_I)}$ and $V_c=V_0 \times \exp{(B/B_V)}$, where $I_0=1.1 \times 10^{-8}\textrm{ A}$, $V_0=5.8 \textrm{ mV}$, $B_I=2.1 \textrm{ T}$ and $B_V=4.8\textrm{ T}$. The product of these fits indicates that $I_c V_c^2$ is approximately constant at large magnetic fields.}
\label{fig:AvalancheBeginning}
\end{figure}
\section{Discussion}\label{disc}
To our knowledge, our results are the first ones dealing in detail with the crossover from Zener tunneling of single electrons to BSEH behavior leading to avalanche type of transport. The crossover is easily visible at moderately low magnetic fields ($\sim 2$ T), but it becomes exceedingly difficult to distinguish the gyrotropic Zener tunneling regime at magnetic fields above 8 T. The maximum observable Zener tunneling current decreases by two orders of magnitude between 1 and 9 Tesla in our experiments. This indicates that avalanche type of breakdown of the zero-energy Landau level becomes very easily triggered at high magnetic fields.
Our low-frequency noise results in the avalanche regime are similar to those of Kobayashi and coworkers on GaAs Corbino rings~\cite{Chida2014,Hata2016}. In both experiments, very strong bunching of charge carriers is observed, which supports the view of a BSEH type of carrier excitation. We note that our sample size is smaller by approximately a factor of 50 than that of the GaAs devices investigated in Refs. \cite{Chida2014,Hata2016}, and yet the carrier excitation seems to be nearly equally efficient, as judged from our observed avalanche-regime Fano factor $F_{AV} = 1.3 \times 10^4$ at 5.6 T, in comparison to $F_{AV} = 10^3 - 10^5$ found in Refs. \cite{Chida2014,Hata2016}.
In addition to the low-frequency noise, we also probed the noise at microwave frequencies and found predominantly sub-Poissonian shot noise in this case, with a Fano factor varying in the range $F = 0.2 - 1.2$ at $B =$ 8 T. The Fano factor at microwave frequencies measured at the end of the avalanche regime seems to be quite independent of the magnetic field. The value of the Fano factor suggests that the Lorentzian spectrum caused by the switching noise has to decay below the shot noise before the GHz frequency range. If we take $F_{AV}=10^4$, then $\omega \tau_s > 100$ is a necessary condition, which means that the avalanche pulse duration has to satisfy $\tau_s > 20$ ns. As this pulse contains $5 \times 10^3$ electrons, the average time separation between the electrons becomes $\geq 4$ ps, which corresponds to an average current of only $\leq 50$ nA during the avalanche pulse. Note that this value is not far from the current value at the end of the avalanche regime, which would then just correspond to the beginning of the overlap of the avalanche pulses.
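The chain of order-of-magnitude estimates in this paragraph can be verified directly (a sketch; we assume a representative detection frequency of 800 MHz within the 650--900 MHz band):

```python
# Order-of-magnitude check of the avalanche-pulse estimate quoted above.
import math

e = 1.602e-19                    # elementary charge (C)
f = 800e6                        # representative frequency within 650-900 MHz (Hz)
omega = 2 * math.pi * f

# The condition omega * tau_s > 100 gives a lower bound on the pulse duration
tau_s_min = 100 / omega          # ~2e-8 s = 20 ns
N_electrons = 5e3                # electrons per avalanche pulse

dt = tau_s_min / N_electrons     # average time separation, ~4 ps
I_pulse = e / dt                 # average current during the pulse, ~40 nA < 50 nA
print(tau_s_min, dt, I_pulse)
```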
In the light of the rather weak conductance during the avalanche pulses, the observed values of $F \simeq 1$ at the end of the avalanche regime may signal that the charge carrier emission events within the avalanche pulse become quite random at large currents. This could indicate several parallel transport paths where random carrier escape is supported by elevated local temperature. Such a state would transform smoothly to ohmic behavior with increased Joule heating by the bias.
One possible theoretical framework for a single transport path at high bias is provided by transport in a 1-dimensional array of tunneling junctions \cite{Golubev2004}. In this case, the Fano factor is around $F=1/3$, but it may vary substantially depending on the properties of each scattering/tunneling element (\textit{i.e.} their $F$ and $R$). Furthermore, array models can be extended to two dimensions, where solitons may produce avalanche-like behavior and a suitable increase of the Fano factor around Coulomb blockade energies $E_c$, matching with our finding \cite{Sverdlov2001}. The increase in the Coulomb blockade voltage $E_c/e$ as $1/\ell_B$ is also in agreement with the observed upward voltage shift in the microwave Fano factor $F(V)$ with magnetic field.
Since the theoretical models are able to give reasonable explanations for $F \gtrsim 0.3$ in the coherent transport regime, we conclude from our results that strong correlations between the electrons within an avalanche pulse of electrons exist only near the onset of the avalanche regime (where $F \lesssim 0.23$). The number of electrons in such a pulse is $N_{\textrm{tot}} \simeq 10^3$ according to the low-frequency noise measurements.
\section{Conclusions}
The low-frequency noise clearly distinguishes between the gyrotropic Zener tunneling regime and the avalanche type of transport in the 0$^{th}$ Landau level of graphene. With an increasing magnetic field, the avalanche type of behavior becomes more favorable, and above $B =$ 8 Tesla it is hard for us to distinguish any Zener tunneling regime. The low-frequency noise in the avalanche regime displays features characteristic of switching noise, i.e., the noise grows quadratically with the bias current. At the largest noise levels, this noise corresponds to an effective Fano factor on the order of $F_{AV} \simeq 10^4$, which also yields an estimate of 10 MHz for the switching rate of the avalanche pulses with a duration of $> 20$ ns. Our measurements of the high-frequency microwave shot noise indicate clear correlations within the one-thousand-electron avalanche pulses at the onset of the avalanche transport. However, we also find that these charge carriers within the avalanche pulses become less and less correlated with increasing bias. This is seen as a growth of the microwave $F$ across the avalanche region, at the end of which we obtain $F = 1.2 \pm 0.2$. In the high bias transport regime, the Fano factor is lowered as $V^{-1} \ldots V^{-2}$ and amounts to $F \simeq 0.2$ at 100 mV, which is in line with inelastic processes caused by electron - phonon interactions.
\begin{acknowledgements}
We thank C. Flindt, A. Harju, T. Ojanen, S. Paraoanu, and B. Pla\c{c}ais for fruitful discussions. This work has been supported in part by the EU Framework Programme (FP7 and H2020 Graphene Flagship), by ERC (grant no. 670743), and by the Academy of Finland (project no. 250280 LTQ CoE). A.L. is grateful to the V{\"a}is{\"a}l{\"a} Foundation of the Finnish Academy of Science and Letters for a scholarship. This research project made use of the Aalto University OtaNano/LTL infrastructure, which is part of the European Microkelvin Platform.
\end{acknowledgements}
\section{Introduction}
\subsection{Original motivations and background}
\emph{Lagrangian submanifolds} are natural objects, arising in the context of \emph{Hamiltonian mechanics} and \emph{dynamical systems}. Their prominent role in symplectic topology and geometry should not come as a surprise. In spite of tremendous efforts, the classification of Lagrangian submanifolds, up to \emph{Hamiltonian isotopy}, is generally an open
problem: for instance, Lagrangian tori of the Euclidean symplectic space ${\mathbb{R}}^4$ are not classified up to Hamiltonian isotopy.
Lagrangian submanifolds are also key objects of various gauge theories. For example,
the \emph{Lagrangian Floer theory} is defined by counting pseudoholomorphic discs with boundary contained in some prescribed Lagrangian submanifolds.
Many examples of \emph{smooth} Lagrangian submanifolds are known. They are easy
to construct and to deform. In a nutshell, Lagrangian submanifolds are typical, rather \emph{flexible} objects, from symplectic
topology.
An elementary construction of Lagrangian submanifold is provided by
considering the $0$-section of the cotangent bundle of a smooth
manifold $T^*L$,
endowed with its natural symplectic structure $\omega=d\lambda$,
where $\lambda$ is the \emph{Liouville form}. More generally, it is
well known that any section of $T^*L$ given by a closed $1$-form
is a Lagrangian submanifold. Furthermore, such Lagrangian submanifolds are Hamiltonian isotopic to the $0$-section if, and only if, the corresponding $1$-form is exact.
These examples provide a large class of Lagrangian submanifolds which admit as many Hamiltonian deformations as there are smooth functions on $L$ modulo constants.
By the \emph{Lagrangian
neighborhood theorem}, every Lagrangian submanifold $L$ of a symplectic manifold admits a neighborhood symplectomorphic to a
neighborhood of the $0$-section of $T^*L$. It follows that the local
Hamiltonian deformations discussed above (in the case of
$T^*L$) also provide deformations for Lagrangian
submanifolds of \emph{any} symplectic manifold.
The geometric notion of \emph{stationary Lagrangian submanifolds} was introduced by Oh~\cite{O90, O93} in order to seek canonical representatives in a given isotopy class of Lagrangian submanifolds. Stationary Lagrangian submanifolds can be thought of as analogues of \emph{minimal submanifolds} in the framework of symplectic geometry. Stationary Lagrangians are expected to be canonical in some sense; Oh conjectured, for instance, that the Clifford tori of $\mathbb{CP}^2$ should minimize the volume in their Hamiltonian isotopy class.
As in the case of minimal surfaces, one can define various modified versions of the \emph{mean curvature flow}, which are expected to converge toward stationary Lagrangian submanifolds in a given isotopy class.
In an attempt to implement numerical versions of these flows~\cite{JR},
we ended up facing theoretical problems of
a \emph{discrete geometric nature}.
Indeed, from a numerical point of view, surfaces are
usually understood as some type of~\emph{mesh} and their mathematical
counterpart is \emph{discrete geometry} and sometimes \emph{piecewise
linear} geometry.
Two obstacles arose in order to provide a sound numerical
simulation of geometric flows for Lagrangian submanifolds, namely:
\begin{enumerate}
\item To the best of our knowledge, discrete Lagrangian surfaces of ${\mathbb{R}}^4$, and more generally discrete isotropic surfaces of ${\mathbb{R}}^{2n}$, are poorly understood, in fact hardly studied. We had no available examples of discrete Lagrangian tori in ${\mathbb{R}}^4$ in our toolbox, save some discrete analogues of product or Chekanov tori (cf.~\S\ref{sec:examples}). Furthermore, we had no deformation theory that we could rely upon, contrary to the smooth case. Implementing a geometric evolution equation for discrete Lagrangian surfaces with so few examples to start the flow was not an enticing project.
\item Since any program based on a numerical implementation uses floating point numbers, it is not natural to check whether a symplectic form vanishes exactly along a plane. It only makes sense to test whether the symplectic density is rather small, which means that we have an approximate solution of our problem. From an experimental point of view, we feared that our numerical flow would exhibit some spurious drift of the symplectic density, and that such instabilities might jeopardize our numerical simulations for flowing Lagrangian submanifolds.
\end{enumerate}
These issues led us to consider an auxiliary flow. Ideally, the auxiliary flow should attract \emph{any} discrete surface toward Lagrangian discrete surfaces. The utility of the auxiliary flow would be twofold: its limits would provide examples of Lagrangian discrete surfaces for our experiments, and it could also be used to prevent instabilities of the evolution equation along the moduli space of discrete Lagrangian surfaces.
These questions are part of a larger ongoing project. They have not been fully
investigated yet but stirred many questions
of a discrete differential geometric nature, in the context of
symplectic geometry.
This paper delivers a few answers to some of the simplest questions
arising, as a spin-off to our initial motivations.
\subsection{Statement of results}
We consider smooth maps $\ell:\Sigma\to{\mathbb{R}}^{2n}$, where $\Sigma$ is a surface and $n\geq 2$. The Euclidean space ${\mathbb{R}}^{2n}$ is endowed with its standard symplectic form $\omega$. A map $\ell$ is said to be \emph{isotropic} if $\ell^*\omega=0$. Lagrangian tori of ${\mathbb{R}}^4$ are the submanifolds obtained as the image of $\Sigma$ by $\ell$, in the particular case where $2n=4$, $\Sigma$ is diffeomorphic to a torus and $\ell$ is an isotropic embedding.
In this paper, we construct approximations of smooth isotropic immersions of the torus in ${\mathbb{R}}^{2n}$ by \emph{piecewise linear isotropic maps}. The idea is to consider a discretization of the torus by a square grid and to approximate the smooth map by a quadrangular mesh. This mesh is almost isotropic, in a suitable sense. A perturbative argument shows that there exists a nearby isotropic quadrangular mesh, which is used to build a piecewise linear map. We provide a more precise statement of the above claims in the rest of the introduction.
\subsubsection{Piecewise linear isotropic maps}
We recall some usual definitions before stating one of our main results. A \emph{triangulation} of ${\mathbb{R}}^2$ is a locally finite \emph{simplicial complex} that covers ${\mathbb{R}}^2$ entirely. In this paper, points, line segments and triangles of triangulations are understood as geometrical Euclidean objects of the plane. Similarly, we shall consider triangulations of quotients of ${\mathbb{R}}^2$ by a lattice $\Gamma$, obtained as quotients of $\Gamma$-invariant triangulations of ${\mathbb{R}}^2$. A \emph{piecewise linear map} $f:{\mathbb{R}}^2\to{\mathbb{R}}^m$ is a continuous map such that, for some triangulation of ${\mathbb{R}}^2$, the restriction of $f$ to any triangle is an affine map to ${\mathbb{R}}^m$.
We consider smooth isotropic immersions $\ell:\Sigma\to {\mathbb{R}}^{2n}$, where $\Sigma$ is diffeomorphic to a $2$-dimensional torus and $n\geq 2$. The Euclidean metric $g$ of ${\mathbb{R}}^{2n}$ induces a conformal structure on $\Sigma$. The uniformization theorem implies that the conformal structure of $\Sigma$ actually comes from a quotient of ${\mathbb{R}}^2$, with its canonical conformal structure, by a lattice. Thus, we have a conformal covering map
$$
p:{\mathbb{R}}^2\to\Sigma,
$$
with group of deck transformations $\Gamma$, a lattice of ${\mathbb{R}}^2$.
A triangulation (resp. quadrangulation) of $\Sigma$ is called a \emph{Euclidean triangulation} (resp. \emph{quadrangulation}) of $\Sigma$ if the boundary of every face lifts to a Euclidean triangle (resp. quadrilateral) of ${\mathbb{R}}^2$ via $p$. Similarly, a function $f:\Sigma\to{\mathbb{R}}^m$ is a \emph{piecewise linear map} if it lifts to a piecewise linear map ${\mathbb{R}}^2\to{\mathbb{R}}^m$ via $p$. Given a piecewise linear map $\hat \ell:\Sigma\to {\mathbb{R}}^{2n}$, the pull-back of the symplectic form $\omega$ of ${\mathbb{R}}^{2n}$ makes sense on each triangle of the triangulation subordinate to $\hat \ell$. We say that $\hat\ell$ is an \emph{isotropic piecewise linear map} if the pull-back of $\omega$ vanishes along each face of the triangulation. A piecewise linear map which is locally injective is called a \emph{piecewise linear immersion.}
The main result of this paper can be stated as follows:
\begin{theointro}
\label{theo:maindiscr}
Let $\ell:\Sigma\to{\mathbb{R}}^{2n}$ be a smooth isotropic immersion, where
$\Sigma$ is a
surface diffeomorphic to a compact torus and $n\geq 2$.
Then, for every $\epsilon >0$,
there exists a piecewise linear isotropic map
$\hat\ell:\Sigma\to{\mathbb{R}}^{2n}$ such that for every $x\in \Sigma $, we have
$$\|\ell(x)-\hat\ell(x)\|\leq \epsilon.$$
Furthermore, if $n\geq 3$, we may assume that $\hat \ell$ is an
immersion.
If $n=2$, we may assume that $\hat\ell$ is an immersion away from a
finite union of embedded circles in $\Sigma$.
\end{theointro}
Loosely stated, Theorem~\ref{theo:maindiscr} says that every isotropic immersion $\ell$ of a torus into ${\mathbb{R}}^{2n}$ can be approximated by a piecewise linear isotropic map arbitrarily ${\mathcal{C}}^0$-close to~$\ell$. For $n\geq 3$, the last statement of the theorem provides the following corollary:
\begin{corintro}
\label{cor:corintro}
Let $n$ be an integer such that $n\geq 3$. Let $\Sigma$ be a smoothly immersed surface in ${\mathbb{R}}^{2n}$, which is isotropic and diffeomorphic to a compact torus. Then, there exist piecewise linear immersed surfaces in ${\mathbb{R}}^{2n}$ which are isotropic, homeomorphic to a compact torus and arbitrarily close to $\Sigma$ with respect to the Hausdorff distance.
\end{corintro}
\begin{rmk}
Our technique does not allow us to obtain much better results than a rather rough ${\mathcal{C}}^0$-closeness between $\ell$ and its approximation $\hat\ell$.
The best evidence for this weakness is the existence of a certain
\emph{shear action} on the space of isotropic quadrangular meshes
(cf. \S\ref{sec:shear}).
It would be most interesting to understand whether
these limitations are inherent to the techniques we employed here,
or if there are geometric obstructions to get better
estimates.
\end{rmk}
\subsubsection{Isotropic quadrangular meshes}
The main tool used to prove Theorem~\ref{theo:maindiscr} relies on \emph{quadrangulations} of $\Sigma$ and \emph{quadrangular meshes}.
Quadrangulations of ${\mathbb{R}}^2$ are particular CW-complex decompositions of ${\mathbb{R}}^2$, where edges are line segments of ${\mathbb{R}}^2$ and the boundary of every face is a Euclidean quadrilateral. Nevertheless, the precise general definition of quadrangulations is unimportant for our purpose. Indeed, we shall only work with particular standard quadrangulations $\mathcal{Q}_N({\mathbb{R}}^2)$ of ${\mathbb{R}}^2$, pictured as a regular grid with step size $N^{-1}$ tiled by Euclidean squares.
Particular Euclidean quadrangulations $\mathcal{Q}_N(\Sigma)$ of $\Sigma$ are defined in \S\ref{sec:quadconv}. They are obtained as quotients of $\mathcal{Q}_N({\mathbb{R}}^2)$ by certain lattices $\Gamma_N$ of ${\mathbb{R}}^2$. The associated moduli space of \emph{quadrangular meshes} is by definition
$$
{\mathscr{M}}_N = C^0(\mathcal{Q}_N(\Sigma))\otimes{\mathbb{R}}^{2n}.
$$
A mesh $\tau\in{\mathscr{M}}_N$ is an object that associates ${\mathbb{R}}^{2n}$-coordinates to every vertex of the quadrangulation $\mathcal{Q}_N(\Sigma)$.
We would like to say that any quadrilateral of ${\mathbb{R}}^{2n}$ contained in an isotropic plane is an \emph{isotropic quadrilateral}. However, quadrilaterals are generally not contained in a $2$-dimensional plane. This tentative definition can be extended to non-flat quadrilaterals via the Stokes theorem: a quadrilateral of ${\mathbb{R}}^{2n}$ is called an \emph{isotropic quadrilateral} if the integral of the Liouville form $\lambda$ along the quadrilateral --- that is, along its four oriented line segments --- vanishes (cf. \S\ref{sec:lagquad}).
\begin{rmk}
In particular, for any compact embedded oriented surface $S$ of ${\mathbb{R}}^{2n}$ whose boundary is an isotropic quadrilateral, we have $\int_S\omega=0$ by the Stokes theorem.
\end{rmk}
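To make this criterion concrete, here is a small numerical illustration (our own sketch, not one of the paper's constructions). With coordinates $(x_1,y_1,\dots,x_n,y_n)$ on ${\mathbb{R}}^{2n}$ and $\lambda=\sum_i x_i\,dy_i$, the integral of $\lambda$ along a straight segment from $a$ to $b$ has the closed form $\sum_i \frac{x_i(a)+x_i(b)}{2}\,(y_i(b)-y_i(a))$, so checking isotropy of a quadrilateral amounts to summing this expression over its four oriented edges.

```python
import numpy as np

def liouville_integral(vertices):
    """Integral of the Liouville form lambda = sum_i x_i dy_i along the
    closed polygonal path through `vertices` (rows are points of R^{2n},
    coordinates interleaved as (x_1, y_1, ..., x_n, y_n))."""
    V = np.asarray(vertices, dtype=float)
    W = np.roll(V, -1, axis=0)          # next vertex, cyclically
    x, y = V[:, 0::2], V[:, 1::2]       # x_i and y_i coordinates
    xn, yn = W[:, 0::2], W[:, 1::2]
    # exact integral of lambda on each straight edge, summed over edges
    return float(np.sum((x + xn) / 2.0 * (yn - y)))

def is_isotropic(vertices, tol=1e-12):
    return abs(liouville_integral(vertices)) < tol
```

For instance, a quadrilateral lying in the isotropic plane $\{y_1=y_2=0\}\subset{\mathbb{R}}^4$ gives integral $0$, while the unit square in the $(x_1,y_1)$-plane gives its symplectic area $1$.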
By extension, we say that a mesh $\tau\in
{\mathscr{M}}_N$ is isotropic if the quadrilateral in ${\mathbb{R}}^{2n}$ associated to each face of
$\mathcal{Q}_N(\Sigma)$ via $\tau$ is isotropic.
The main strategy for proving Theorem~\ref{theo:maindiscr} involves the following approximation result:
\begin{theointro}
\label{theo:mainquad}
Given an isotropic immersion $\ell:\Sigma\to{\mathbb{R}}^{2n}$, there exists a
family of isotropic quadrangular meshes $\rho_N\in{\mathscr{M}}_N$ defined for every $N$ sufficiently large, with the following property:
for every $\epsilon>0$, there exists $N_0>0$ such that for every $N\geq N_0$ and every
vertex $v$ of $\mathcal{Q}_N(\Sigma)$, we have
$$
\|\rho_N(v)-\ell(v)\|\leq \epsilon.
$$
\end{theointro}
An isotropic quadrilateral of ${\mathbb{R}}^{2n}$ is always the base of
an isotropic pyramid in ${\mathbb{R}}^{2n}$ (cf. \S\ref{sec:pyramid}), which is
easily found as the solution
of a linear system. This remark allows us to pass from an
isotropic quadrangular mesh to an isotropic triangular mesh. Together with Theorem~\ref{theo:mainquad}, this essentially provides the proof of Theorem~\ref{theo:maindiscr}.
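As an illustration of this remark (our own sketch, under the conventions of the previous code fragment, not the paper's actual construction): for a side triangle with apex $p$ over an edge $[a,b]$, the terms of $\int\lambda$ that are quadratic in $p$ cancel, so the four isotropy conditions are affine-linear in $p$ and an apex can be computed by solving a small linear system.

```python
import numpy as np

def seg(a, b):
    """Integral of lambda = sum_i x_i dy_i along the segment a -> b,
    coordinates interleaved as (x_1, y_1, ..., x_n, y_n)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sum((a[0::2] + b[0::2]) / 2.0 * (b[1::2] - a[1::2])))

def pyramid_apex(quad):
    """Apex p of an isotropic pyramid over the isotropic quadrilateral
    `quad`: each side triangle (p, v_k, v_{k+1}) must satisfy
    seg(p, v_k) + seg(v_k, v_{k+1}) + seg(v_{k+1}, p) = 0, a condition
    that is affine-linear in p. We assemble the resulting 4 x 2n linear
    system and solve it in the least-squares sense."""
    quad = np.asarray(quad, float)
    dim = quad.shape[1]
    def cond(p, k):
        a, b = quad[k], quad[(k + 1) % 4]
        return seg(p, a) + seg(a, b) + seg(b, p)
    c = np.array([cond(np.zeros(dim), k) for k in range(4)])
    A = np.array([[cond(e, k) - c[k] for e in np.eye(dim)] for k in range(4)])
    return np.linalg.lstsq(A, -c, rcond=None)[0]
```

For an isotropic quadrilateral the four conditions sum to the (vanishing) boundary integral of the base, so the system is consistent and the least-squares solution is an exact apex; for example `pyramid_apex([(0,0,0,0),(1,0,0,1),(1,1,1,1),(1,2,1,0)])` caps a non-flat isotropic quadrilateral of ${\mathbb{R}}^4$.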
\subsubsection{Flow for quadrangular meshes}
Our approach for proving Theorem~\ref{theo:mainquad} has been
inspired to a large extent by the beautiful~\emph{moment map geometry}
introduced by Donaldson~\cite{Don99}. We shall provide a careful
presentation of this infinite dimensional geometry at
\S\ref{sec:dream}, and merely state a few facts in this introduction:
the moduli space of maps
$$
{\mathscr{M}}=\{f :\Sigma\to{\mathbb{R}}^{2n}\},
$$
from a surface $\Sigma$ endowed with a volume form $\sigma$ into
${\mathbb{R}}^{2n}$ admits a natural formal K\"ahler structure, with a formal Hamiltonian action of $\mathrm{Ham}(\Sigma,\sigma)$. The moment map of the action is given by
$$
\mu(f)=\frac{f^*\omega}{\sigma}.
$$
Zeroes of the moment map are precisely isotropic maps.
It is tempting to make an
analogy with the
Kempf-Ness theorem, which holds in the finite dimensional setting.
We may conjecture that a
map $f$ admits an isotropic representative in its
\emph{complexified orbit} provided some algebro-geometric stability
hypothesis is satisfied.
Furthermore, one can also define a \emph{moment map flow}, which is
naturally defined in the context of a K\"ahler manifold endowed with a
Hamiltonian group action. Such a flow is
essentially the downward gradient of the function $\|\mu\|^2$ on the
moduli space, which is expected, in favorable circumstances, to converge
toward a zero of the moment map in a prescribed orbit.
\begin{rmk}
We shall not state any significant results aside from the description of
this geometric framework. For instance, it is an open question whether
the moment map flow exists for short time in this context, which is
part of a broader ongoing program.
\end{rmk}
In an attempt to define a finite dimensional analogue of
this infinite dimensional moment map picture,
we define a flow analogous to the moment map flow
on the moduli space of meshes ${\mathscr{M}}_N$, called the discrete moment
map flow. This flow is now just an ODE and its behavior can readily be explored
from a numerical perspective, using the Euler method.
We provide a computer program called DMMF, available on the homepage
\begin{center}
\href{http://www.math.sciences.univ-nantes.fr/~rollin/index.php?page=flow}{http://www.math.sciences.univ-nantes.fr/\textasciitilde{}rollin/},
\end{center}
which is
a numerical simulation of the discrete moment map flow.
From an experimental point of view,
the flow seems to converge quickly toward isotropic
quadrangular meshes, for any initial quadrangular mesh (cf. \S\ref{sec:dmmf}).
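For readers who wish to experiment, here is a toy version of such a flow (a minimal sketch of our own, not the DMMF program): take as energy $\frac12\sum_F A_F^2$, where $A_F$ is the signed symplectic area of the quadrilateral attached to a face $F$ of a periodic grid mesh, and run explicit Euler steps of its downward gradient, computed here crudely by finite differences.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4          # grid size of the periodic quadrangulation (a torus)
DIM = 4        # target dimension 2n, here n = 2

def face_area(tau, i, j):
    """Signed symplectic area of the face with lower-left vertex (i, j):
    integral of lambda = sum_i x_i dy_i along its four oriented edges."""
    quad = [tau[i, j], tau[(i + 1) % N, j],
            tau[(i + 1) % N, (j + 1) % N], tau[i, (j + 1) % N]]
    total = 0.0
    for a, b in zip(quad, quad[1:] + quad[:1]):
        total += np.sum((a[0::2] + b[0::2]) / 2.0 * (b[1::2] - a[1::2]))
    return total

def energy(tau):
    return 0.5 * sum(face_area(tau, i, j) ** 2
                     for i in range(N) for j in range(N))

def euler_step(tau, dt=1e-3, h=1e-6):
    """One explicit Euler step of the downward gradient flow of `energy`,
    with a central finite-difference gradient (a toy stand-in for the
    discrete moment map flow)."""
    grad = np.zeros_like(tau)
    flat, g = tau.ravel(), grad.ravel()   # views into tau and grad
    for k in range(flat.size):
        old = flat[k]
        flat[k] = old + h; ep = energy(tau)
        flat[k] = old - h; em = energy(tau)
        flat[k] = old
        g[k] = (ep - em) / (2 * h)
    return tau - dt * grad

tau = rng.standard_normal((N, N, DIM))    # random initial mesh
e0 = energy(tau)
for _ in range(10):
    tau = euler_step(tau)
print(e0, "->", energy(tau))              # the energy decreases along the flow
```

The grid size, target dimension, and step size here are arbitrary choices for the demonstration; an actual simulation would use an analytic gradient rather than finite differences.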
\subsection{Open questions}
Theorem~\ref{theo:maindiscr}
is a fundamental tool for the discrete geometry of isotropic tori,
since it provides a vast class of examples of piecewise linear
objects by approximation of the smooth ones.
Here is a list of questions that arise immediately in this new
territory of discrete symplectic geometry:
\begin{enumerate}
\item Is there a converse to Theorem~\ref{theo:maindiscr} or
Corollary~\ref{cor:corintro}? Given a piecewise linear isotropic
surface in ${\mathbb{R}}^{2n}$, is it possible to find a nearby smooth
isotropic surface?
\item
More generally, to what extent does the moduli space of piecewise linear Lagrangian submanifolds
retain the properties of the moduli space of smooth Lagrangian submanifolds?
In spite of groundbreaking progress in symplectic topology, the classification of Lagrangian submanifolds up to Hamiltonian
isotopy remains open. It is known that there exist several types of
Lagrangian tori in ${\mathbb{R}}^4$ which are not Hamiltonian
isotopic: namely, product tori and Chekanov
tori~\cite{Chekanov96}. On the other hand, Luttinger found infinitely many
obstructions in~\cite{Luttinger95} to the existence
of certain types of knotted Lagrangian tori in ${\mathbb{R}}^4$. In
particular, spin knots provide knotted tori in ${\mathbb{R}}^4$ which cannot
be isotopic to Lagrangian tori according to Luttinger's theorem.
This thread of ideas led to the conjecture that product and
Chekanov tori are the only classes of Lagrangian tori in ${\mathbb{R}}^4$,
up to Hamiltonian isotopy. Although the result was claimed before,
the conjecture is still open for the time being~\cite{D17}. However, it was proved by Dimitroglou Rizell, Goodman and Ivrii that all embedded Lagrangian tori of ${\mathbb{R}}^4$ are isotopic through Lagrangian isotopies~\cite{DGI}. Perhaps an interesting approach to
tackle such a conjecture, and more generally any question involving
some type of $h$-principle, would be to recast the question in the finite
dimensional framework of piecewise linear Lagrangian tori of ${\mathbb{R}}^4$.
\item The moment map framework, in an infinite dimensional context,
presented at \S\ref{sec:dream}, has been a great source of inspiration for
proving our main results and introducing a finite dimensional version
of the moment map flow. However, only a faint shadow of the moment
map geometry is recovered in the finite dimensional world. More
precisely, there exists a finite dimensional analogue $\mu_N^r$ of the moment map
$\mu$ on ${\mathscr{M}}_N$, but it is not clear whether $\mu_N^r$ is
actually a moment map and, if so, for which group action on ${\mathscr{M}}_N$. It would be most interesting to
define a finite dimensional analogue of the group $\mathrm{Ham}(\Sigma,\sigma)$,
and try to make sense of the Kempf-Ness theorem in this setting.
\end{enumerate}
\subsection{Comments on the proofs -- future works}
The proofs given in this paper come with a strong differential
geometric flavor, involving the \emph{uniformization theorem} for Riemann
surfaces, \emph{discrete analysis}, \emph{discrete elliptic operators},
\emph{discrete Schauder estimates}, \emph{Riemannian geometry},
a \emph{discrete spectral gap theorem}, and \emph{Gau{\ss} curvature} and
its discrete analogues.
Many of the techniques employed here may be adapted to more general
settings. Working with tori seems to be a key
fact however: indeed, Fourier and discrete Fourier transforms
are well adapted for the analysis on tori and their quadrangulations
and do not seem to extend easily. The discrete Schauder estimates
derived by Thom\'ee~\cite{Thomee68}, which are a crucial ingredient of our fixed point principle, are
proved using Fourier transforms.
Although geometric analysis is quite often a powerful tool for proving topological theorems,
symplectic topologists may still
expect some \emph{more flexible constructions}. Boldly stated, \emph{there may be a shorter proof based on more conventional techniques of symplectic topology},
stemming from some local ansatz, some jiggling lemma, or the spirit of the $h$-principle. Such proofs might be shorter, more elementary and, perhaps, lead to some stronger regularity results. These statements are difficult to disprove, especially since our attempts to deliver alternate proofs of, say, Theorem~\ref{theo:maindiscr} along these lines have failed so far.
One of the original motivations for this work was to obtain effective constructions, even for \emph{rough} PL surfaces, that is, when $N$ is quite small. It is unlikely that any of the \emph{flexible constructions} could shed some light on this case. On the contrary, one of the outcomes of this paper is a new flow for quadrangular meshes (cf. \S\ref{sec:dmmf}) that provides large families of PL isotropic surfaces. Many questions about this finite dimensional flow remain open, and we would like to tackle them in future research. For instance, the completeness of the finite dimensional flow is unclear at the moment, although it is expected, based on numerical evidence.
Another open problem concerns the naturality of our finite dimensional flow as a \emph{good approximation} of the infinite dimensional flow as $N$ goes to infinity:
let $\tau_N(t)\in{\mathscr{M}}_N$ be families of solutions of the finite dimensional flow \eqref{eq:discrf}, for every $N>0$, and $f_t:\Sigma\to {\mathbb{R}}^{2n}$ a solution of the infinite dimensional flow~\eqref{eq:mmf}. Assume that these flows are defined on the same interval $t\in [0,T]$, and that the initial conditions $\tau_N(0)$ converge towards $f_0$ in a suitable sense. Is it true that $\tau_N(t)$ converges uniformly towards $f_t$ in a suitable sense?
Under some strong regularity assumptions on $f_t$, a scheme of proof of such a convergence result would depend on the following ingredients:
\begin{itemize}
\item Open problem: show that the sequence of finite dimensional evolution equations converges in
a suitable sense to a \emph{nice} evolution equation in the smooth setting --- perhaps, in some sense, a parabolic equation. Such a result could be understood as an analogue of the study of the limit operator $\Xi$ carried out at \S\ref{sec:splitlapl} of this paper.
\item Open problem: relying on the discrete Schauder estimates, show that for suitable norms (perhaps weak discrete H\"older norms), the finite dimensional flow has some type of regularizing properties similar to parabolic flows. The answer to this question could be seen as an analogue of the spectral gap Theorem~\ref{theo:specgap}.
\end{itemize}
At the moment of writing, we have no interpretation of the smooth moment map flow as some type of parabolic flow, and the above questions remain wide open.
\subsection{Organization of the paper}
Section~\ref{sec:dream} of this paper is a presentation of the moment map
geometry of certain infinite dimensional moduli spaces introduced by
Donaldson. Finite dimensional analogues of this geometry are used in
the rest of the paper. For instance,
a discrete
version of the moment map flow is introduced at~\S\ref{sec:dmmf} and
implemented on a computer, in order to obtain examples of Lagrangian
piecewise linear surfaces from an experimental point of view.
In \S\ref{sec:anal}, we introduce suitable spaces of discrete functions
on tori, together with the analysis suited for implementing
the fixed point principle. This section contains the definition of
quadrangulations, discrete functions,
discrete H\"older norms, together with the relevant notions of
convergence, culminating with a type of Ascoli-Arzela theorem
(cf. Theorem~\ref{theo:ascoli}).
The equations for Lagrangian quadrangular meshes are introduced
at~\S\ref{sec:pert}, where their linearization is also computed. As
the dimension of the discrete problem goes to infinity, we
show that the finite dimensional linearized problem
converges toward
a smooth differential operator at~\S\ref{sec:limit}.
Some
uniform estimates on the spectrum of the finite dimensional linearized problem are derived as a corollary.
The proof of Theorem~\ref{theo:mainquad} is completed
at~\S\ref{sec:fpt}, using the fixed point principle.
The proof of Theorem~\ref{theo:maindiscr} follows and is completed at
\S\ref{sec:quadtri} after introducing some generic perturbations in
order to obtain piecewise linear immersions.
\subsection{Acknowledgements}
The authors would like to thank Vestislav Apostolov, Gilles Carron,
Stéphane Guillermou,
Xavier Saint-Raymond, Pascal Romon and Carl Tipler for some useful discussions.
A significant part of this research was carried out while the second author was
working at the CNRS UMI-CRM
research lab in Montr\'eal, during the academic year 2017-2018.
We are grateful to all the colleagues from l'Universit\'e du Qu\'ebec
\`a Montr\'eal (UQAM) and le Centre Inter-universitaire de Recherche
en G\'eom\'etrie et Topologie (CIRGET), for providing such a great scientific and human environment.
The authors benefited from the support of the
French government
\emph{investissements d’avenir} program
ANR-11-LABX-0020-01 and from
the
\emph{défi de tous les savoirs} EMARKS research grant ANR-14-CE25-0010.
\tableofcontents
\newpage
\section{Dreaming of the smooth setting}
\label{sec:dream}
The main results of this work (for instance
Theorem~\ref{theo:mainquad}) are
of discrete geometric nature.
Yet the main ideas of our proof were
obtained via an analogy with the moment map geometry
of the space of maps from a torus endowed with a volume form into
${\mathbb{R}}^{2n}$.
This section is independent of the others, but we think it is important to show how
smooth and discrete geometry analogies can be used to unravel unexpected ideas.
\subsection{Donaldson's moment map geometry}
The moment map geometry presented here was introduced by Donaldson,
although our
specific setting is not emphasized in~\cite{Don99}.
All the notions of moduli spaces shall be discussed from a purely
\emph{formal
perspective}.
With some additional effort, it may be possible to define
infinite dimensional manifold structures on the moduli spaces of
interest, by using suitable Sobolev or H\"older spaces.
Let $M$ be a smooth manifold endowed with a K\"ahler
structure~$(M,J,\omega,g)$. The K\"ahler structure of $M$ is given by
an integrable almost complex structure~$J$, a K\"ahler form~$\omega$
and the corresponding K\"ahler metric~$g$. Recall that the K\"ahler
form is related to the metric via
the usual formula
$$
\omega(v_1,v_2)=g(Jv_1,v_2), \mbox { for all $v_1,v_2 \in T_m M$}.
$$
The reader may keep in mind the simplest example provided by $M={\mathbb{R}}^{2n}\simeq \CC^n$
with its induced Euclidean K\"ahler structure. In this case,
$\omega=d\lambda$, where $\lambda$ is the Liouville form, which
implies that the cohomology class $[\omega]\in H^2(M,{\mathbb{R}})$ vanishes.
Let $\Sigma$ be a closed surface with orientation induced by a volume
form~$\sigma$. In real dimension $2$, a volume form is also a
symplectic form. Thus, the symplectic surface $(\Sigma,\sigma)$
admits an infinite dimensional Lie group of Hamiltonian transformations denoted
$${\mathcal{G}}=\mathrm{Ham}(\Sigma,\sigma).$$
One can consider the moduli space of smooth
maps
$$
{\mathscr{M}}=\{ f : \Sigma \to M \quad | \quad f^*[\omega] = 0\};
$$
notice that in the case of $M={\mathbb{R}}^{2n}$ endowed with the standard
symplectic form, the condition $f^*[\omega]=0$
is satisfied by every smooth map.
The tangent space $T_f {\mathscr{M}}$ is identified with the space of tangent vector
fields $V$ along~$f$, that is, the space of smooth maps $V:\Sigma\to TM$
such that $V(x)\in T_{f(x)}M$.
There is an obvious right-action of $\mathrm{Ham}(\Sigma,\sigma)$ on ${\mathscr{M}}$
by precomposition.
As pointed out by Donaldson, the geometry of the target space induces a formal Kähler structure on ${\mathscr{M}}$ denoted
$({\mathscr{M}},{\mathfrak{g}},\Omega,{\mathfrak{J}})$ given by
$$
({\mathfrak{J}} V)|_x = JV_x, \quad {\mathfrak{g}}(V,V') = \int_\Sigma g(V,V')\sigma , \quad \Omega(V,V') = \int_\Sigma \omega(V,V')\sigma
$$
for any pair of tangent vector fields $V,V'$ along $f:\Sigma\to M$.
By definition, the action of $\mathrm{Ham}(\Sigma,\sigma)$ preserves the
Kähler structure of ${\mathscr{M}}$.
The canonical $L^2$-inner product on $\Sigma$, given by
$$\ipp{h,h'}=\int_\Sigma hh'\sigma ,$$
allows us to define the space $C^\infty_0(\Sigma)$ of smooth functions orthogonal to constants, which, in turn,
can be identified with the Lie algebra $\mathrm{Lie}({\mathcal{G}})$ of ${\mathcal{G}}=\mathrm{Ham}(\Sigma,\sigma)$ via the map
$h\mapsto X_h$. Here $X_h$ is the Hamiltonian vector field with
respect to the symplectic form $\sigma$, which satisfies
$$
\iota_{X_h}\sigma = dh.
$$
The $L^2$-inner product $\ipp{h,h'}$ also provides an isomorphism between the
Lie algebra of $\mathrm{Ham}(\Sigma,\sigma)$ and its dual. The Lie algebra and
its dual will be identified throughout this section without any further warning.
Since $\mathrm{Ham}(\Sigma,\sigma)$ acts on ${\mathscr{M}}$, any element of the Lie algebra
$h\in \mathrm{Lie}({\mathcal{G}})\simeq C^\infty_0(\Sigma)$ induces a \emph{fundamental
vector field} $Y_h$ on
${\mathscr{M}}$ defined by
$$
Y_h(f)=f_* X_h \in T_f{\mathscr{M}}.
$$
For $f\in{\mathscr{M}}$, we have $f^*[\omega]=0$, hence
$$
\int_\Sigma
\frac{f^*\omega}\sigma \; \sigma= \int_\Sigma
f^*\omega =0,
$$
so that we may consider the map
\begin{equation}
\mu:\left\{\begin{array}{rcll}
{\mathscr{M}} & \longrightarrow & C^\infty_0(\Sigma) &\\
f& \longmapsto & \mu(f) = & \frac{f^*\omega}\sigma
\end{array}\right.
\end{equation}
By definition, we have the obvious property
$$
\mu(f)=0 \Leftrightarrow f^*\omega = 0 \Leftrightarrow \mbox { $f$ is an isotropic map}.
$$
But we have much more than an equation:
\begin{prop}[Donaldson]
\label{prop:donaldson}
The action of $\mathrm{Ham}(\Sigma,\sigma)$ on ${\mathscr{M}}$ is formally
Hamiltonian and admits $\mu$ as a moment map. More precisely:
\begin{enumerate}
\item $\mu:{\mathscr{M}}\to C^\infty_0(\Sigma)$ is
$\mathrm{Ham}(\Sigma,\sigma)$-equivariant in the sense that for every
$f\in{\mathscr{M}}$ and $u\in\mathrm{Ham}(\Sigma,\sigma)$
$$
\ipp{\mu(f \circ u),h\circ u} = \ipp{\mu(f ),h} ;
$$
\item for every variation $V$ of $f$
$$
\ipp{D\mu|_f\cdot V, h} = \Omega(V,Y_h(f)),
$$
where $D$ denotes the differentiation of functions on ${\mathscr{M}}$.
\end{enumerate}
\end{prop}
\begin{proof}
Only the second property requires some explanation.
We pick a smooth family of maps $f_t:\Sigma\to M$ such that
$\frac{\partial}{\partial t} f_t|_{t=0}=V$ and $f_0=f$. The family is
understood as a smooth map
$$
F:I\times\Sigma\to M
$$
where $I$ is a neighborhood of $0$ in ${\mathbb{R}}$ and
$F(t,x)=f_t(x)$. We denote by $j_0:\Sigma \hookrightarrow
I\times \Sigma$ the canonical embedding given by $j_0(x)=(0,x)$. Notice
that by definition $F\circ j_0 =f$.
Then
\begin{align*}
\ipp{D\mu|_f\cdot V, h} & = \left . \frac \partial{\partial t}\right |_{t=0} \int_\Sigma
hf_t^*\omega\\
&= \int_\Sigma h j_0^*{\mathcal{L}}_{\partial_t} F^*\omega \\
&= \int_\Sigma h j_0^*(d\iota_{\partial_t}F^*\omega +
\iota_{\partial_t}dF^*\omega)
\end{align*}
where the last line comes from the Cartan formula.
The symplectic form is closed, hence $dF^*\omega = F^*d\omega=0$. In
addition $F_*\partial_t$ agrees with $V$ along $\{0\}\times\Sigma$, so
that $j_0^* d\iota_{\partial_t} F^*\omega = d j_0^* \iota_{\partial_t} F^*\omega = d\,\omega(V,(F\circ j_0)_* \cdot\,)= df^*\iota_V\omega$.
It
follows that
\begin{align*}
\ipp{D\mu|_f \cdot V, h} & = \int_\Sigma hd f^*\iota_V\omega\\
&= - \int_\Sigma dh\wedge f^*\iota_V\omega\\
&= - \int_\Sigma \iota_{X_h}\sigma \wedge f^*\iota_V\omega
\end{align*}
The interior product is an antiderivation. In particular
$$
\iota_{X_h}(\sigma \wedge f^*\iota_V\omega)= (\iota_{X_h}\sigma)
\wedge f^*\iota_V\omega + (\iota_{X_h} f^*\iota_V\omega) \sigma.
$$
The LHS of the above identity must vanish, since $\sigma \wedge f^*\iota_V\omega$ is a
$3$-form over a surface, and we obtain the identity
\begin{align*}
\ipp{D\mu|_f\cdot V, h} &= \int_\Sigma (\iota_{X_h}
f^*\iota_V\omega) \sigma\\
&= \int_\Sigma \omega(V,Y_h(f))\sigma\\
&=\Omega(V,Y_h(f))
\end{align*}
which proves the proposition.
\end{proof}
\subsection{A moment map flow}
\label{sec:mmf}
From this point, gauge theorists may dream of generalizations of the
Kempf-Ness Theorem, which is only known to hold in the finite
dimensional setting. The Kempf-Ness theorem asserts that the existence of a zero of the moment map in a given
complexified orbit of the group action is equivalent to an algebraic property of
stability, in the sense of geometric invariant theory. Under the
hypothesis of stability, the zeroes of the moment map are unique up to
the action of the real group. Unfortunately the Kempf-Ness Theorem
does not apply immediately in the infinite
dimensional setting and the conjectural isomorphism
$$
{\mathscr{M}} /\!\!/ {{\mathcal{G}}}^{\CC} \simeq \mu^{-1}(0)/{\mathcal{G}},
$$
where the LHS is some type of GIT quotient,
is out of reach for the moment. To start with, the complexification of
${\mathcal{G}}$ is
not even well defined and the quotient ${\mathscr{M}} /\!\!/ {{\mathcal{G}}}^{\CC}$ does not make sense.
Nevertheless, a significant part of this thread of ideas can be implemented.
For instance, we may define a natural \emph{moment map flow}.
\begin{dfn}
\label{dfn:mmf}
Let $f_t\in{\mathscr{M}}$ be a family of maps, for
$t$ in an open interval of ${\mathbb{R}}$. We say that the family $f_t$ is a solution of the
moment map flow if
\begin{equation}
\label{eq:mmf}
\frac {df}{dt} = {\mathfrak{J}} Y_{\mu(f)}(f).
\end{equation}
\end{dfn}
\begin{rmk}
In the finite dimensional setting, such a moment map flow preserves the complexified
orbits and converges to a zero of the moment map under a suitable
assumption of stability. It is natural to conjecture that this flow
should converge generically to an isotropic map in a prescribed
complexified orbit. We shall not tackle this problem here and only
prove some very down-to-earth properties of the flow.
\end{rmk}
By construction, we have
\begin{align*}
{\mathfrak{g}}({\mathfrak{J}} Y_{\mu(f)}(f),V) &= \Omega(Y_{\mu(f)},V) \\
&= -\ipp{D\mu|_f\cdot V,\mu(f)}\\
& =-\frac 12 D(\|\mu\|^2)|_f\cdot V
\end{align*}
so that
$$
{\mathfrak{J}} Y_{\mu(\cdot)} = -\frac 12 \mathrm{grad} \|\mu \|^2,
$$
which proves the following lemma:
\begin{lemma}
\label{lemma:flowdown}
Smooth maps $f:\Sigma\to M$ are zeroes of the moment map if, and
only if they are isotropic.
In addition, the moment map flow is a downward gradient flow of the functional
$f\mapsto \|\mu(f)\|^2$ on ${\mathscr{M}}$. More precisely, the evolution equation of the moment map flow can be written
$$
\frac{df}{dt} = -\frac 12 \mathrm{grad} \|\mu(f)\|^2,
$$
where $\mathrm{grad}$ is the gradient of a function on ${\mathscr{M}}$ endowed with its Riemannian metric ${\mathfrak{g}}$.
\end{lemma}
As a corollary, we see that the functional decreases along flow lines:
\begin{cor}
If $f_t$ is a smooth family of maps solution of the moment map flow, then
$$
\frac {d}{dt}\|\mu(f_t)\|^2 <0
$$
unless $f_t$ is isotropic, in which case $\frac {d}{dt}\|\mu(f_t)\|^2=0$ and the flow is stationary.
\end{cor}
\begin{proof}
Assume that $f_t$ is not isotropic. In particular there exists
$x\in \Sigma$ such that the differential of $\mu(f_t)$ does not
vanish at $x$: otherwise $\mu(f_t)$ would be constant, and since
$f_t^*\omega$ is exact, $\int_\Sigma \mu(f_t)\sigma=\int_\Sigma f_t^*\omega$
would vanish, forcing $\mu(f_t)=0$. By
definition $X_{\mu(f_t)}$ is a non-vanishing vector field at $x$,
since it is the symplectic dual of $d\mu(f_t)$. It follows that
$Y_{\mu(f_t)}$ does not vanish, hence
\begin{align*}
\frac{d}{dt}\|\mu(f_t)\|^2
&= -2{\mathfrak{g}}({\mathfrak{J}} Y_{\mu(f_t)}, {\mathfrak{J}} Y_{\mu(f_t)}) \\
&= -2{\mathfrak{g}}( Y_{\mu(f_t)}, Y_{\mu(f_t)}) <0.
\end{align*}
\end{proof}
\subsubsection{Laplacian and related operators}
\label{sec:laplrel}
For $f\in{\mathscr{M}}$, we define the operator
$$\delta_f:T_f{\mathscr{M}}\to C^\infty_0(\Sigma)
$$
by
\begin{equation}
\label{eq:deltaf}
\delta_f V = - D\mu|_f\cdot JV.
\end{equation}
We see that the adjoint $\delta_f^\star$ of $\delta_f$ satisfies
\begin{align*}
{\mathfrak{g}}(\delta^\star_fh,V) & = \ipp{\delta_f V, h}\\
&= -\ipp{D\mu|_f\cdot JV,h} \\
&= \Omega(Y_h(f), JV)\\
& = {\mathfrak{g}}(Y_h(f),V)
\end{align*}
so that
\begin{equation}
\label{eq:hamdstar}
\delta^\star_f h = Y_h(f).
\end{equation}
For each $f\in{\mathscr{M}}$, we may define a natural Laplacian
\begin{equation}
\label{eq:Deltaf}
\Delta_f = \delta_f\delta^\star_f
\end{equation}
acting on smooth functions on $\Sigma$.
\begin{rmk}
It seems likely that the moment map flow of Definition~\ref{dfn:mmf}
can be interpreted as a parabolic flow, once a suitable
analytical framework
and gauge condition have been set up. Although we shall not prove
anything about short time existence of the moment map flow in this work, we provide at least some heuristic
evidence. In the next section,
we compute the variation of the moment map and
show that the variation of $\mu(f)$,
when
$f$ is deformed in the direction of the complexified action ${\mathfrak{J}} Y_h$, is expressed as a
Laplacian of $h$. However, the
systematic study of the moment map flow in the smooth setting is not our purpose here, and
we shall return to this question in a sequel to this paper~\cite{JRT}.
\end{rmk}
\subsection{Variations of the moment map}
The operator
$f\mapsto \mu(f)$ is a first order differential operator.
This section is devoted to calculating its linearization.
\subsubsection{General variations}
Let $f_s:\Sigma\to M$ be a smooth family of maps, with
parameter $s\in I$, where $I$ is an open interval of ${\mathbb{R}}$.
We use the notation,
$$
V_s=\frac{\partial f_s}{\partial s},
$$
for the infinitesimal variation $V_s\in T_{f_s}{\mathscr{M}}$
of the
family $f_s$.
We consider the map $F:I\times \Sigma\to M$ given by $F(s,x)=f_s(x)$ and the canonical injection
$j_{s_0}:\Sigma \hookrightarrow I\times\Sigma$, defined by $j_{s_0}(x)=(s_0,x)$ for some $s_0\in I$.
We compute, using the Cartan formula
\begin{align*}
\left .\frac{\partial f_s^*\omega}{\partial s}\right |_{s=s_0} & = j_{s_0}^* {\mathcal{L}}_{\partial_s} F^*\omega \\
& = j_{s_0}^*(d\circ \iota_{\partial_s} +\iota_{\partial_s}\circ d)F^*\omega \\
& = j_{s_0}^* d\circ \iota_{\partial_s} F^*\omega \\
& = j_{s_0}^* d F^*\iota_{V_s}\omega \\
& = df_{s_0}^*\iota_{V_{s_0}}\omega
\end{align*}
where we have used the fact that $d\omega=0$, that $d$ commutes with
pullbacks and that $F\circ j_{s_0} = f_{s_0}$.
The form
$$\alpha_{s_0} = f^*_{s_0}\iota_{V_{s_0}}\omega
$$
is called the
\emph{Maslov form} of the deformation $f_s$ at $s=s_0$.
The above computation shows that
$$
\frac{\partial f_s^*\omega}{\partial s} = d\alpha_s,
$$
which reads
$$
\frac{\partial \mu(f_s)}{\partial s} = \delta \alpha_s
$$
where $\delta$ is the operator given by
\begin{equation}
\label{eq:deltadef}
\delta \gamma = \frac{d\gamma}\sigma,
\end{equation}
for every $1$-form $\gamma$ on $\Sigma$.
Thus we have proved the following result:
\begin{lemma}
\label{lemma:genvarmmp}
Let $f:\Sigma\to M$ be a smooth map and $V\in T_f{\mathscr{M}}$ be an infinitesimal variation of $f$.
Then
$$
D\mu|_f\cdot V = \delta \alpha_V
$$
where $\alpha_V= f^*\iota_V\omega$ is the Maslov form of the deformation and $\delta$ is the operator defined by~\eqref{eq:deltadef}.
\end{lemma}
\subsubsection{Variations at an immersion}
We assume now that $f:\Sigma\to M$ is a smooth immersion. In
particular the pullback $g_\Sigma=f^*g$ is a Riemannian metric on
$\Sigma$. Its volume form $\mathrm{vol}_\Sigma$ may not agree with $\sigma$,
but the $2$-forms are related by a conformal factor
$$
\mathrm{vol}_\Sigma = \theta\sigma
$$
where $\theta:\Sigma\to {\mathbb{R}}$ is a positive smooth function.
We introduce a conformal metric $g_\sigma$ that satisfies the equation
$$
g_\Sigma =\theta g_\sigma,
$$
and the Hodge operator on forms of
$\Sigma$ associated to the metric $g_\sigma$ is denoted $*_\sigma$.
\begin{lemma}
\label{lemma:varmu}
Assume that $f:\Sigma\to M$ is a smooth immersion. Then
$\Sigma$ has an induced Riemannian metric $g_\Sigma$. Let
$g_\sigma$ be the Riemannian metric with volume form $\sigma$, conformal to $g_\Sigma$.
Let
$Y_h$ be the fundamental vector field on ${\mathscr{M}}$ associated to the Hamiltonian function $h$.
Then
$$
\Delta_fh=\delta_f\delta^\star_f h= -D\mu|_{f}\cdot {\mathfrak{J}} Y_h(f) = d^{*_\sigma}\theta d h = \theta\Delta_\sigma
h - g_{\sigma}(d\theta,dh),
$$
where $\Delta_\sigma$ is the Laplacian associated to the Riemannian
metric $g_\sigma$ and $\theta$ is the conformal factor such that
$g_\Sigma=\theta g_\sigma$.
\end{lemma}
\begin{rmk}
In particular, if $\mathrm{vol}_\Sigma$ agrees with $\sigma$, then
$\theta=1$, $g_\sigma=g_\Sigma$ and the above formula says that
$$
\Delta_f h = \Delta_\Sigma h.
$$
\end{rmk}
\begin{proof}
Let $f_s\in {\mathscr{M}}$ be a smooth family of maps for $s\in
I=(-\epsilon,\epsilon)$, with the
property that $f_0=f$ and $V_0 = JY_h = Jf_*X_h$.
Then
$\frac{\partial\mu(f_s)}{\partial s} = \delta \alpha_s$ by Lemma~\ref{lemma:genvarmmp}.
But $\alpha_0(U)= f^*\omega(V_0,U)=
\omega(Jf_*X_h,f_* U)= -g( f_*X_h,f_*
U)=-g_\Sigma(X_h,U)$.
It follows that
$\alpha_0(U)= - \theta
g_\sigma(X_h,U) = -\theta \sigma(X_h,*_\sigma U)$, hence $\alpha_0 = \theta *_\sigma dh$.
Then we conclude that
$$
\left .\frac{\partial \mu(f_s)}{\partial s}\right |_{s=0} = *_\sigma
d\theta *_\sigma dh = - d^{*_\sigma}\theta d h = -\theta\Delta_\sigma
h + g_{\sigma}(d\theta,dh) ,
$$
which proves the Lemma.
\end{proof}
The next lemma shows that $\Delta_f$ is essentially an isomorphism.
\begin{lemma}
\label{lemma:isodelta}
The operator $h\mapsto d^{*_\sigma}\theta dh$ is an elliptic
operator of order $2$, which is an isomorphism modulo constants.
\end{lemma}
\begin{proof}
The fact that the operator is elliptic of order $2$ follows from the
formula
$$
d^{*_\sigma}\theta dh = \theta\Delta_\sigma h -g_\sigma(d\theta,dh).
$$
The operator is self-adjoint since
$$
\ipp{d^{*_\sigma}\theta dh , h'}= \ipp{\theta dh,d h'} = \ipp{h,d^{*_\sigma}\theta dh'}.
$$
If $h$ belongs to the kernel of the operator, then
$$
0=\ipp{d^{*_\sigma}\theta dh,h} = \ipp{\theta dh,dh},
$$
which implies that $dh=0$, since $\theta>0$, hence that $h$ is constant. Because the operator is
elliptic and self-adjoint, the $L^2$-orthogonal complement of its image is identified with its
kernel. So the operator is an isomorphism when restricted to functions
which are $L^2$-orthogonal to constants.
\end{proof}
\subsection{Application}
We know that ${\mathscr{M}}$ is acted on by ${\mathcal{G}}=\mathrm{Ham}(\Sigma,\sigma)$. The
${\mathcal{G}}$-orbit of $f\in{\mathscr{M}}$ is denoted ${\mathcal{O}}_f$. The group of Hamiltonian transformations does not
admit a natural complexification. Nevertheless, it is possible to make sense of its complexified orbits.
The space of vector fields $Y_u$ defined by
$Y_u(f)=f_*X_u$ over ${\mathscr{M}}$ defines an integrable distribution
${\mathcal{D}}\subset T{\mathscr{M}}$ which is the tangent space to ${\mathcal{G}}$-orbits.
We can consider the complexified distribution of the tangent bundle to ${\mathscr{M}}$
$$
{\mathcal{D}}^\CC = {\mathcal{D}} +{\mathfrak{J}} {\mathcal{D}}
$$
given by vector fields of the form $Y_u+{\mathfrak{J}} Y_v$.
The fact
that ${\mathcal{G}}$ preserves the complex structure ${\mathfrak{J}}$ of ${\mathscr{M}}$ implies
that the distribution is formally integrable
into a holomorphic foliation. A leaf of the foliation, obtained by
integrating the distribution, is referred to as a
\emph{complexified orbit of ${\mathcal{G}}$}. The complexified orbit of an
element $f\in{\mathscr{M}}$ is denoted ${\mathcal{O}}_f^\CC$.
We now assume for simplicity that $M$ is the K\"ahler manifold
${\mathbb{R}}^{2n}$, identified with $\CC^{n}$. In this case, given $f\in {\mathscr{M}}$, we may consider a type of exponential map given by
$$
\exp_f (u+iv)= f + Y_u(f) +{\mathfrak{J}} Y_v(f),
$$
for $u, v\in C^\infty_0(\Sigma)$.
This type of exponential map does not come from a Lie group exponential
map. However, $\exp_f(u+iv)$
provides perturbations of $f$ in directions tangent to the
complexified orbit ${\mathcal{O}}_f^\CC$ at $f$.
We have now all the tools necessary to prove the following result:
\begin{theo}
\label{theo:perttoy}
We choose $M={\mathbb{R}}^{2n}$ for the construction of ${\mathscr{M}}$.
Let $\ell\in{\mathscr{M}}$ be a smooth isotropic immersion. If $f\in{\mathscr{M}}$
is sufficiently close to $\ell$ in ${\mathcal{C}}^{1,\alpha}$-norm, there exists a nearby
perturbation of the form $\tilde \ell=\exp_f(ih)$, where $h$ is a
${\mathcal{C}}^{2, \alpha}$ function on $\Sigma$, such that
$\tilde \ell$ is an
isotropic immersion.
\end{theo}
\begin{proof}
We denote by ${\mathcal{C}}^{k,\alpha}$, for some Hölder parameter $\alpha>0$, the
usual Hölder spaces. The moduli space ${\mathscr{M}}$ is replaced with
${\mathscr{M}}^{k,\alpha}$, which consists of ${\mathcal{C}}^{k,\alpha}$-maps $f:\Sigma\to{\mathbb{R}}^{2n}$.
Since ${\mathscr{M}}^{k,\alpha}$ is an affine space modeled on
${\mathcal{C}}^{k,\alpha}$, it is naturally endowed with an infinite dimensional
manifold structure. In particular, the map $\exp_f$ defines a smooth
map
$$
\exp : {\mathcal{C}}^{k+1,\alpha}(\Sigma,\CC) \times
{\mathscr{M}}^{k,\alpha}\longrightarrow {\mathscr{M}}^{k,\alpha}
$$
given by $(h,f)\mapsto \exp_f(h)$.
We denote by ${\mathcal{C}}^{k,\alpha}_0(\Sigma)$ the subspace of
${\mathcal{C}}^{k,\alpha}(\Sigma)$ that consists of real valued functions
$h:\Sigma\to {\mathbb{R}}$
such that $\int_{\Sigma}h\sigma =0$ (i.e. functions orthogonal to
constants for the inner product $\ipp{\cdot,\cdot}$).
We consider the map
$$
Z:\left\{\begin{array}{ccl}
{\mathcal{C}}^{2,\alpha}_0(\Sigma)\times {\mathscr{M}}^{1,\alpha} &\longrightarrow & {\mathscr{M}}^{0,\alpha} \\
(h,f) &\longmapsto &\exp_{f}(-ih)
\end{array}\right.
$$
whose differential at $(0,\ell)$ satisfies
$$
\frac{\partial Z}{\partial h}|_{(0,\ell)} \cdot \dot h = -{\mathfrak{J}} Y_{\dot h}(\ell)
$$
by definition of the exponential map.
In particular
\begin{equation}
\label{eq:implicit}
\frac{\partial(\mu\circ Z)}{\partial h}|_{(0,\ell)}\cdot \dot h=
- D\mu|_{\ell}\cdot{\mathfrak{J}} Y_{\dot h}(\ell) = \delta_\ell
\delta^\star_\ell \dot h = \Delta_\ell \dot h
\end{equation}
by Lemma~\ref{lemma:varmu}. This operator is an isomorphism modulo
constants by Lemma~\ref{lemma:isodelta}.
We consider the map
$$
F:{\mathcal{C}}^{2,\alpha}_0(\Sigma)\times{\mathscr{M}}^{1,\alpha}\to {\mathcal{C}}^{0,\alpha}_0(\Sigma)
$$
given by $F=\mu\circ Z$. We have proved that the differential
$$
\frac{\partial F}{\partial h}|_{(0,\ell)}: {\mathcal{C}}^{2,\alpha}_0(\Sigma)\longrightarrow {\mathcal{C}}^{0,\alpha}_0(\Sigma)
$$
is an isomorphism.
The rest of the proof follows from the implicit function theorem: for
every $f\in {\mathscr{M}}^{1,\alpha}$ sufficiently close to $\ell$ in
${\mathcal{C}}^{1,\alpha}$-norm, there exists a unique $\tilde h=h(f)\in {\mathcal{C}}^{2,\alpha}(\Sigma)$
in a small neighborhood of the origin, such that
$$
F(\tilde h, f)=0.
$$
By definition $\tilde \ell=\exp_{f}(-i\tilde h)$
satisfies $\mu(\tilde\ell)=0$, so that $\tilde\ell$ is of the stated form. By assumption $\ell$ is smooth. If $f$
is also smooth, elliptic regularity and a standard bootstrapping
argument show that $\tilde h$, and in turn $\tilde \ell$, must be smooth as well. This proves the theorem.
\end{proof}
\begin{rmk}
In section~\ref{sec:pert}, we will develop a perturbation theory on
the space of quadrangular meshes ${\mathscr{M}}_N$ that mimics
Theorem~\ref{theo:perttoy}.
We shall define an analogue $\delta_\tau$ of the operator $\delta_f$
in the context of discrete geometry (cf. Formula~\eqref{eq:deltatau}).
The operator $\delta_\tau$, and more precisely its adjoint
$\delta^\star_\tau$, could be used to define an analogue of
Hamiltonian vector fields in the context of discrete differential geometry, in
view of Formula~\eqref{eq:hamdstar}. This could be relied upon to define a
discrete analogue of the gauge group action
${\mathcal{G}}=\mathrm{Ham}(\Sigma,\sigma)$. This idea will be explored in a sequel
to this work~\cite{JRT}.
\end{rmk}
\subsubsection{Outlook}
In \S\ref{sec:pert} we shall define finite dimensional analogues of the
infinite dimensional moment map picture presented in the current section.
This will provide the incomplete dictionary below, where the right column is
conjecturally a finite dimensional approximation of the left column:
\begin{center}
\begin{tabular}{|c|c|}
\hline
Infinite dimensional case & Finite dimensional case \\
\hline
Area form $\sigma$ on $\Sigma$ & Quadrangulation $\mathcal{Q}_N(\Sigma)$ \\
\hline
${\mathscr{M}}=\{f:\Sigma\to{\mathbb{R}}^{2n}\}$ & ${\mathscr{M}}_N =
C^0(\mathcal{Q}_N(\Sigma))\otimes{\mathbb{R}}^{2n}$ \\
\hline
Canonical K\"ahler structure & Canonical K\"ahler structure\\
\hline
$\mathrm{Ham}(\Sigma,\sigma)$-action & ???\\
\hline
Fundamental vector field $Y_h(f)=\delta^\star_f h$ & $\delta^\star_\tau \phi$\\
\hline
A moment map $\mu:{\mathscr{M}}\to C^\infty(\Sigma)$ & $\mu_N^r:{\mathscr{M}}_N
\to C^2(\mathcal{Q}_N(\Sigma))$\\
\hline
The moment map flow \eqref{eq:mmf} & The discrete flow
\eqref{eq:discrf} \\
\hline
\end{tabular}
\end{center}
Many aspects of the above dictionary remain unclear. First, the finite
dimensional picture does not come with a Lie group action that would,
in some sense, approximate $\mathrm{Ham}(\Sigma,\sigma)$. In particular
$\mu_N^r$ is not a moment map and $C^2(\mathcal{Q}_N(\Sigma))$ is not
interpreted as a Lie algebra. The flows are defined on both sides and
we would like to compare them as $N$ goes to infinity. Unfortunately, we do not
even know whether the infinite dimensional flow exists for short time. The
discrete flow is an ODE, but it is not completely understood at this stage. For fixed $N$, does
the flow converge, or does it blow up? Does the sequence
of discrete flows converge to the moment map flow as $N$ goes to infinity? Can we use the above sketch
of correspondence to make sense of some type of Kempf-Ness theorem in
the infinite dimensional setting?
All these gripping questions are postponed
to a later work. In this paper, we focus on the discrete flow on ${\mathscr{M}}_N$,
for a given $N$, and merely provide a computer simulation of the
discrete flow at \S\ref{sec:dmmf}.
\section{Discrete analysis}
\label{sec:anal}
In this section, we consider a real surface $\Sigma$, diffeomorphic to a
torus. We denote by $g$ the canonical Euclidean metric of
${\mathbb{R}}^{2n}$ and by $J$ the standard complex structure deduced from the
identification ${\mathbb{R}}^{2n}\simeq \CC^n$. The standard symplectic form of ${\mathbb{R}}^{2n}$ is
given by $\omega(\cdot,\cdot)=g(J\cdot,\cdot)$ and $\ell:\Sigma\to {\mathbb{R}}^{2n}$ is an
\emph{isotropic immersion}.
\subsection{Conformally flat metric}
\label{sec:confflat}
Every Riemannian metric on a surface diffeomorphic to a torus is \emph{conformally flat}.
In particular, $\Sigma$ carries a pullback Riemannian metric
$$
g_\Sigma=\ell^*g,
$$
which must be conformally flat. In other words, there exists a
covering map
\begin{equation}
\label{eq:p}
p:{\mathbb{R}}^2\to\Sigma
\end{equation}
with deck transformations given by a lattice $\Gamma\subset
{\mathbb{R}}^2$. The Euclidean metric $g_{\mathrm{euc}}$ of ${\mathbb{R}}^2$ descends as a flat
metric $g_\sigma$ on $\Sigma$. In addition there exists a positive
smooth function $\theta :\Sigma\to (0,+\infty)$,
known as the \emph{conformal factor},
such that
$$g_\Sigma = \theta g_\sigma.$$
The projection $p$, which descends to the quotient ${\mathbb{R}}^2/\Gamma$, provides a preferred diffeomorphism
\begin{equation}
\label{eq:ptilde}
\Phi : {\mathbb{R}}^2/\Gamma \to \Sigma,
\end{equation}
which is also an isometry from $({\mathbb{R}}^2/\Gamma,g_{\mathrm{euc}})$ to $(\Sigma,g_\sigma)$.
\subsection{Square lattice and checkers board}
\label{sec:sqlat}
Let $e_1=(1,0)$ and $e_2=(0,1)$ be the canonical basis of
${\mathbb{R}}^2$. The basis $(e_1,e_2)$ is orthonormal with respect to the
canonical Euclidean metric $g_{\mathrm{euc}}$ of ${\mathbb{R}}^2$ and it is
positively oriented, by convention.
For every positive integer $N$, we introduce the lattice
$\Lambda_N\subset {\mathbb{R}}^2$ spanned by
$e_1/N$ and $e_2/N$:
$$
\Lambda_N = {\mathbb{Z}}\cdot \frac{e_1}N\oplus {\mathbb{Z}}\cdot \frac{e_2}N \subset {\mathbb{R}}^2.
$$
The lattice $\Lambda_N$ provides the familiar picture of a square grid
in ${\mathbb{R}}^2$ with step size~$N^{-1}$.
The lattice $\Gamma$, introduced at \S\ref{sec:confflat}, admits a basis
$(\gamma_1,\gamma_2)$, compatible with the canonical orientation of ${\mathbb{R}}^2$. The lattice $\Gamma$ is generally not a sublattice
of $\Lambda_N$. Indeed, the components of the vectors $\gamma_1, \gamma_2\in{\mathbb{R}}^2$ may not be rational. This fact causes a technical difficulty
for constructing quadrangulations of $\Sigma$. Luckily this difficulty is easily overcome,
as we shall explain below.
The \emph{checkers board} sublattice
$\Lambda^{ch}_N\subset \Lambda_N$
is spanned by the vectors $\frac
{e_1+e_2}N$ and $\frac
{e_2-e_1}N$:
$$
\Lambda_N^{ch}= {\mathbb{Z}}\cdot \frac{e_1+e_2}N \oplus {\mathbb{Z}}\cdot \frac{e_2-e_1}N \subset\Lambda_N.
$$
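In coordinates, a point $\mathbf{v}=(k/N,l/N)$ of $\Lambda_N$ belongs to the checkers sublattice exactly when $k+l$ is even, since $a\frac{e_1+e_2}N+b\frac{e_2-e_1}N$ has coordinates $k=a-b$, $l=a+b$ with $k+l=2a$. A minimal sketch of this membership test (the function names are ours, not from the paper):

```python
def in_lambda_ch(k, l):
    """Membership of v = (k/N, l/N) in Lambda_N^{ch}: writing
    v = a*(e1+e2)/N + b*(e2-e1)/N gives k = a-b, l = a+b,
    so v lies in Lambda_N^{ch} exactly when k + l is even."""
    return (k + l) % 2 == 0

def ch_coordinates(k, l):
    """Coordinates (a, b) of v in the basis ((e1+e2)/N, (e2-e1)/N),
    assuming k + l is even."""
    return ((k + l) // 2, (l - k) // 2)
```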
The elements of $\Lambda_N\subset {\mathbb{R}}^2$ may be thought of as the
positions of a standard
checkers board. Then $\Lambda_N^{ch}$ acts on $\Lambda_N$ by translations, spanned by
diagonal motions as in a checkers game. One can easily see that the quotient
$\Lambda_N/ \Lambda^{ch}_N$ is isomorphic to ${\mathbb{Z}}_2$, which
corresponds to the partition of the positions of the
checkers board into black and white.
For each $N>0$ and $i=1,2$, we choose $\gamma_i^N\in\Lambda^{ch}_N$ which is a best approximation of
$\gamma_i$ in~$\Lambda_{N}^{ch}$, for the Euclidean distance in
${\mathbb{R}}^2$. Since $\gamma_i^N$ converges to $\gamma_i$, the vectors $\gamma_1^N$ and $\gamma_2^N$ are linearly
independent for all sufficiently large $N$. We define the lattice $\Gamma_N$, at least for sufficiently large $N$, as
$$
\Gamma_N={\mathbb{Z}} \cdot \gamma_1^N \oplus {\mathbb{Z}}\cdot \gamma_2^N \subset \Lambda_N^{ch}\subset \Lambda_N.
$$
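Since the basis $(\frac{e_1+e_2}N,\frac{e_2-e_1}N)$ of $\Lambda_N^{ch}$ is orthogonal, a best approximation $\gamma_i^N$ can be computed by rounding each coordinate in that basis; a sketch of this computation (our own helper, not code from the paper):

```python
def best_approx_ch(gamma, N):
    """Best approximation of gamma in Lambda_N^{ch} for the Euclidean
    distance.  In the orthogonal basis u1 = (e1+e2)/N, u2 = (e2-e1)/N,
    the nearest lattice point is obtained by rounding each coordinate;
    the error is at most 1/N (half the diagonal of a lattice cell)."""
    gx, gy = gamma
    a = round(N * (gx + gy) / 2)  # coordinate along u1
    b = round(N * (gy - gx) / 2)  # coordinate along u2
    return ((a - b) / N, (a + b) / N)
```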
We summarize our construction in Figure~\ref{figure:gammaN}. The red and blue bullets
represent the elements of $\Lambda_N$, where the red bullets are in
$\Lambda_N^{ch}$. We draw the generators $\gamma_i$ of
$\Gamma$ and their best approximations, in red, by elements
$\gamma_i^N$ of~$\Lambda_N^{ch}$:
\begin{figure}[H]
\begin{pspicture}[showgrid=false](0,0)(4,4)
\psset{linecolor=green, linestyle=dashed}
\pscircle(1.6,3.2){0.8}
\pscircle(2.5,1.2){0.7}
\color{black}
\rput(2.5,1.5){$\gamma_1$}
\rput(1.6,3.4){$\gamma_2$}
\color{red}
\rput(3.5,.8){$\gamma^N_1$}
\rput(0.5,2.6){$\gamma^N_2$}
\psset{linecolor=red, fillstyle=solid , fillcolor=red, linestyle=solid}
\pscircle (0,0){.1}
\pscircle (0,2){.1}
\pscircle (0,4){.1}
\pscircle (1,1){.1}
\pscircle (1,3){.1}
\pscircle (2,0){.1}
\pscircle (2,2){.1}
\pscircle (2,4){.1}
\pscircle (3,1){.1}
\pscircle (3,3){.1}
\pscircle (4,0){.1}
\pscircle (4,2){.1}
\pscircle (4,4){.1}
\psset{linecolor=blue, fillstyle=solid , fillcolor=blue, linestyle=solid}
\pscircle(0,1){.1}
\pscircle (0,3){.1}
\pscircle (1,0){.1}
\pscircle (1,2){.1}
\pscircle (1,4){.1}
\pscircle (2,1){.1}
\pscircle (2,3){.1}
\pscircle (3,0){.1}
\pscircle (3,2){.1}
\pscircle (3,4){.1}
\pscircle (4,1){.1}
\pscircle (4,3){.1}
\psset{linecolor=black, arrows=->,arrowsize=.3,linestyle=solid}
\psline(0,0)(1.6,3.2)
\psline(0,0)(2.5,1.2)
\psset{linecolor=red}
\psline(0,0)(1,3)
\psline(0,0)(3,1)
\end{pspicture}
\caption{Construction of $\Gamma_N$}
\label{figure:gammaN}
\end{figure}
By construction, $\Gamma_N$ is a sublattice of
$\Lambda_N^{ch}$; this choice has been designed so that the checkers
graph splits into exactly two connected components~(cf. \S\ref{sec:check}).
Furthermore, the lattices $\Gamma_N$ converge towards $\Gamma$,
in a sense to be made more precise now:
the linear transformation $U_N$ of
${\mathbb{R}}^2$ defined by
$$
U_N(\gamma_i^N)= \gamma_i
$$
identifies the lattices
$\Gamma_N$ and $\Gamma$ by an automorphism of ${\mathbb{R}}^2$.
Using an operator norm for linear transformations of ${\mathbb{R}}^2$, we have
\begin{equation}
\label{eq:unasympt}
\|U_N - \mathrm{id}_{{\mathbb{R}}^2}\| = {\mathcal{O}}\left (N^{-1}\right ).
\end{equation}
In conclusion, $U_N$ converges towards the identity and
$U_N(\Gamma_N)=\Gamma$, in which sense $\Gamma_N$ converges
towards $\Gamma$.
By construction, $U_N$ descends to the
quotient as a (locally linear) diffeomorphism
$$
u_N:{\mathbb{R}}^2/\Gamma_N\to {\mathbb{R}}^2/\Gamma.
$$
The linear transformation $U_N$ need not belong to the orthogonal
group. Therefore neither $U_N$ nor $u_N$ is an isometry. However,
they are isometries in the limit, since $U_N$ converges to the
identity. This fact will be sufficient for our purpose.
The quotients ${\mathbb{R}}^2/\Gamma$ and ${\mathbb{R}}^2/\Gamma_N$ are canonically identified with $\Sigma$ via the diffeomorphisms
$$
{\mathbb{R}}^2/\Gamma_N
\stackrel{u_N}\longrightarrow {\mathbb{R}}^2/\Gamma \stackrel{\Phi}\longrightarrow \Sigma.
$$
$$
There are now several competing covering maps: we defined $p:{\mathbb{R}}^2\to
\Sigma$ at~\eqref{eq:p}, but we may also consider the covering maps
\begin{equation}
\label{eq:pN}
p_N= p\circ U_N:{\mathbb{R}}^2\to \Sigma.
\end{equation}
The group of deck transformations of $p$ is $\Gamma$, whereas the group of deck transformations of $p_N$ is $\Gamma_N$.
There are also several
flat metrics descending on $\Sigma$ via $p$ and $p_N$. The first one,
$g_\sigma$, is induced by the Euclidean metric and the diffeomorphism
$\Phi : {\mathbb{R}}^2/\Gamma \to \Sigma$. The other flat metrics $g_{\sigma}^N$ are induced by the
Euclidean metric and the diffeomorphisms
\begin{equation}
\label{eq:ptildeN}
\Phi_N :{\mathbb{R}}^2/\Gamma_N
\to \Sigma
\end{equation}
induced by~\eqref{eq:pN}.
According to \eqref{eq:unasympt} we have
$$
g_{\sigma}^ N= g_\sigma + {\mathcal{O}}\left (N^ {-1}\right ).
$$
\subsection{Quadrangulations}
\label{sec:quadconv}
Instead
of linear triangulations, we shall work with particular
\emph{linear quadrangulations} $\mathcal{Q}_N(\Sigma)$ of $\Sigma$. The
current section is devoted to the definition of these CW-complexes.
\subsubsection{Quadrangulations of the plane}
\label{sec:quadnot}
For $k,l\in{\mathbb{Z}}$, the points of ${\mathbb{R}}^2$ given by
$$
\mathbf{v}_{kl} = \frac kN e_1 + \frac lN e_2 ,
$$
are the
elements of the lattice $\Lambda_N\subset{\mathbb{R}}^2$.
The elements of the lattice $\Lambda_N$ are also the vertices of a
nice quadrangulation $\mathcal{Q}_N({\mathbb{R}}^2)$ of the plane ${\mathbb{R}}^2$, pictured as
the usual square grid with step $N^{-1}$.
More precisely, the quadrangulation $\mathcal{Q}_N({\mathbb{R}}^2)$ is a particular
CW-complex decomposition of ${\mathbb{R}}^2$,
characterized by the following properties:
\begin{itemize}
\item The edges $\mathbf{e}_{1,kl}$ and $\mathbf{e}_{2,kl}$ of the quadrangulation are
the oriented line segments of ${\mathbb{R}}^2$ with
oriented boundary
$$\partial \mathbf{e}_{1,kl}= \mathbf{v}_{k+1,l}-\mathbf{v}_{kl} \mbox{ and } \partial
\mathbf{e}_{2,kl}=\mathbf{v}_{k,l+1}-\mathbf{v}_{kl}.
$$
\item
The faces $\mathbf{f}_{kl}$ of the quadrangulation are oriented squares of
${\mathbb{R}}^2$ with oriented boundary
$$ \partial \mathbf{f}_{kl}=\mathbf{e}_{1,kl}+\mathbf{e}_{2,k+1,l} - \mathbf{e}_{1,k,l+1} - \mathbf{e}_{2,kl}.$$
\end{itemize}
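The two boundary relations above determine a chain complex, and one can check mechanically that $\partial\circ\partial=0$ on every face. A small sketch of this verification (the dictionary encoding of chains is ours):

```python
from collections import defaultdict

def boundary_face(k, l):
    """Oriented boundary of the face f_{kl} as a chain of edges.
    Keys are ('e1'|'e2', k, l); values are integer coefficients."""
    return {('e1', k, l): 1, ('e2', k + 1, l): 1,
            ('e1', k, l + 1): -1, ('e2', k, l): -1}

def boundary_edge(kind, k, l):
    """Oriented boundary of an edge as a chain of vertices (k, l)."""
    if kind == 'e1':
        return {(k + 1, l): 1, (k, l): -1}
    return {(k, l + 1): 1, (k, l): -1}

def boundary_of_boundary(k, l):
    """Compose the two boundary maps on the face f_{kl}; the result
    should be the empty chain."""
    out = defaultdict(int)
    for (kind, a, b), c in boundary_face(k, l).items():
        for v, d in boundary_edge(kind, a, b).items():
            out[v] += c * d
    return {v: c for v, c in out.items() if c != 0}
```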
Figure~\ref{figure:tiling} shows the familiar picture of the plane
tiled by squares together with the notations introduced above.
\begin{figure}[H]
\begin{pspicture}[showgrid=false](-3,-3)(3,3)
\psscalebox{1.0}
{
\psline (-3,3) (3,3)
\psline (-3,1) (3,1)
\psline (-3,-1) (3,-1)
\psline (-3,-3) (3,-3)
\psline (-3,3) (-3,-3)
\psline (-1,3) (-1,-3)
\psline (1,3) (1,-3)
\psline (3,3) (3,-3)
\psset{linecolor=blue, arrows=->,arrowsize=.3, linewidth=2pt,linestyle=solid}
\psline{->}(-1,-1) (-1,1)
\psline{->}(-1,-1) (1,-1)
\psline{->}(-1,1) (1,1)
\psline{->}(1,-1) (1,1)
\psset{linecolor=red, linewidth=2pt,linestyle=solid, fillstyle=solid, fillcolor=red}
\rput(-2,-2){$\mathbf{f}_{k-1,l-1}$}
\rput(0,-2){$\mathbf{f}_{k,l-1}$}
\rput(2,-2){$\mathbf{f}_{k+1,l-1}$}
\rput(-2,0){$\mathbf{f}_{k-1,l}$}
\rput(0,0){$\mathbf{f}_{kl}$}
\rput(2.1,0){$\mathbf{f}_{k+1,l}$}
\rput(-2,+2){$\mathbf{f}_{k-1,l+1}$}
\rput(0,+2){$\mathbf{f}_{k,l+1}$}
\rput(2,+2){$\mathbf{f}_{k+1,l+1}$}
\color{red}
\pscircle(-1,-1){.1}
\rput[tr](-1.1,-1.1){$\mathbf{v}_{k,l}$}
\pscircle(1,-1){.1}
\rput[tl](1.1,-1.1){$\mathbf{v}_{k+1,l}$}
\pscircle(-1,+1){.1}
\rput[br](-1.1,1.1){$\mathbf{v}_{k,l+1}$}
\pscircle(1,1){.1}
\rput[bl](1.1,1.1){$\mathbf{v}_{k+1,l+1}$}
\color{blue}
\rput[t](0.0,-1.1){$\mathbf{e}_{1,kl}$}
\rput[t](0.0,+0.9){$\mathbf{e}_{1, k,l+1}$}
\rput[t]{90}(-0.9,0.0){$\mathbf{e}_{2,kl}$}
\rput[t]{90}(1.1,0.0){$\mathbf{e}_{2, k+1,l}$}
}
\end{pspicture}
\caption{Quadrangulation $\mathcal{Q}_N({\mathbb{R}}^2)$}
\label{figure:tiling}
\end{figure}
\subsubsection{Quadrangulations of the torus}
The lattice $\Lambda_N$ acts on itself by translation. It follows that $\Lambda_N$ also acts naturally on the vertices, the edges and the faces of the quadrangulation $\mathcal{Q}_N({\mathbb{R}}^2)$.
Since $\Gamma_N\subset\Lambda_N^{ch}\subset \Lambda_N$, the lattice
$\Gamma_N$ acts on $\mathcal{Q}_N({\mathbb{R}}^2)$ as well. Thus, the quadrangulation descends
to a quadrangulation $\mathcal{Q}_N(\Sigma)$ of the quotient $\Sigma$, via the covering map $p_N:{\mathbb{R}}^2\to\Sigma$.
When this is clear from the context, the vertices, edges and faces of $\mathcal{Q}_N(\Sigma)$ will still be denoted $\mathbf{v}_{kl}$, $\mathbf{e}_{1,kl}$,
$\mathbf{e}_{2,kl}$ and $\mathbf{f}_{kl}$.
\subsubsection{Alternate quadrangulation of the plane}
\label{sec:altquad}
Our construction involves the various diffeomorphisms
$\Phi:{\mathbb{R}}^2/\Gamma\to\Sigma$ and $\Phi_N:{\mathbb{R}}^2/\Gamma_N\to\Sigma$.
For the purpose of analysis and, more specifically, the notion of convergence, it is convenient to identify $\Sigma$ with a single reference quotient, say ${\mathbb{R}}^2/\Gamma$, using $\Phi$.
Lifting $\mathcal{Q}_N(\Sigma)$ via the covering map $p:{\mathbb{R}}^2\to \Sigma$ provides a quadrangulation different from $\mathcal{Q}_N({\mathbb{R}}^2)$. We denote by $\hat\mathcal{Q}_N({\mathbb{R}}^2)$ the quadrangulation obtained as the image of $\mathcal{Q}_N({\mathbb{R}}^2)$ by the isomorphism $U_N:{\mathbb{R}}^2\to {\mathbb{R}}^2$. We also denote by $\hat\Lambda_N$ and $\hat\Lambda^{ch}_N$
the images of $\Lambda_N$ and $\Lambda^{ch}_N$ by $U_N$.
By definition, $\Gamma$ is a sublattice of $\hat\Lambda^{ch}_N$ and we have a sequence of canonical inclusions
$$
\Gamma\subset \hat\Lambda^{ch}_N \subset \hat\Lambda_N
$$
which is nothing else but the image of the inclusions
$$
\Gamma_N\subset \Lambda^{ch}_N \subset \Lambda_N
$$
by $U_N$.
By construction, the quadrangulation $\hat\mathcal{Q}_N({\mathbb{R}}^2)$ has vertices given by the elements of the lattice $\hat\Lambda_N$.
Furthermore $\hat\mathcal{Q}_N({\mathbb{R}}^2)$ descends to the quotient via the covering map $p:{\mathbb{R}}^2\to\Sigma$ into a quadrangulation that coincides with~$\mathcal{Q}_N(\Sigma)$.
\subsection{Checkers graph}
\label{sec:check}
We associate a graph ${\mathcal{G}}_N({\mathbb{R}}^2)$ to the quadrangulation $\mathcal{Q}_N({\mathbb{R}}^2)$,
called the \emph{checkers graph} of $\mathcal{Q}_N({\mathbb{R}}^2)$. Combinatorially, the vertices $\mathbf{z}_{kl}$ of ${\mathcal{G}}_N({\mathbb{R}}^2)$ correspond to the
faces $\mathbf{f}_{kl}$ of $\mathcal{Q}_N({\mathbb{R}}^2)$. However, a vertex $\mathbf{z}_{kl}$ of the graph ${\mathcal{G}}_N({\mathbb{R}}^2)$ shall
be thought of as the
barycenter of the face $\mathbf{f}_{kl}$ of $\mathcal{Q}_N({\mathbb{R}}^2)$, understood as a square of
${\mathbb{R}}^2$. The fact that vertices of the graph correspond to points in
${\mathbb{R}}^2$ will be most helpful for defining the notion of convergence at \S\ref{sec:conv}.
Two barycenters are connected by an edge if, and only if, they
belong to faces having exactly one vertex in common. For instance, the
faces $\mathbf{f}_{kl}$ and $\mathbf{f}_{k+1,l+1}$ of $\mathcal{Q}_N({\mathbb{R}}^2)$ have exactly one vertex in
common. An edge between two vertices of ${\mathcal{G}}_N({\mathbb{R}}^2)$ is the straight
line segment of ${\mathbb{R}}^2$ between the two vertices.
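In coordinates, the vertex $\mathbf{z}_{kl}$ is the barycenter $((k+\tfrac12)/N,(l+\tfrac12)/N)$ and its neighbors correspond to the four diagonal faces $\mathbf{f}_{k\pm1,l\pm1}$; since diagonal moves preserve the parity of $k+l$, this already hints at the splitting into two components discussed below. A minimal sketch (our own encoding, not code from the paper):

```python
def barycenter(k, l, N):
    """Barycenter of the face f_{kl}, i.e. the vertex z_{kl} of the
    checkers graph, as a point of R^2."""
    return ((k + 0.5) / N, (l + 0.5) / N)

def checkers_neighbors(k, l):
    """Faces sharing exactly one vertex with f_{kl}: the four faces
    reached by a diagonal move."""
    return [(k + dk, l + dl) for dk in (-1, 1) for dl in (-1, 1)]

def component(k, l):
    """Connected component of z_{kl}: '+' if k+l is even (the component
    of z_00), '-' otherwise.  Diagonal moves preserve this parity."""
    return '+' if (k + l) % 2 == 0 else '-'
```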
Figure~\ref{fig:CG} shows
the quadrangulation $\mathcal{Q}_N({\mathbb{R}}^2)$ using dashed lines and
the corresponding checkers graph ${\mathcal{G}}_N({\mathbb{R}}^2)$. The graph has two connected components, painted
in red and blue. The bullets correspond to vertices of the
graph.
\begin{figure}[H]
\begin{pspicture}[showgrid=false](-2,-2)(2,2)
\psgrid[griddots=5,gridlabels=0, subgriddiv=1]
\psset{linecolor=blue }
\psline (-2,1) (-1,2)
\psline (-2,-1) (1,2)
\psline (-1,-2) (2,1)
\psline (1,-2) (2,-1)
\psline (2,-2) (-2,2)
\psline (0,2) (2,0)
\psline (-2,0) (0,-2)
\psset{linecolor=blue, fillstyle=solid , fillcolor=blue, linestyle=solid}
\pscircle (-1.5,1.5){.12}
\pscircle (0.5,1.5){.12}
\pscircle (-0.5,0.5){.12}
\pscircle (1.5,0.5){.12}
\pscircle (-1.5,-0.5){.12}
\pscircle(0.5,-0.5){.12}
\pscircle (0.5,-0.5){.12}
\pscircle(-0.5,-1.5){.12}
\pscircle(1.5,-1.5){.12}
\psset{linecolor=red}
\psline(-2,0)(0,2)
\psline(-2,-2)(2,2)
\psline(0,-2)(2,0)
\psline(-2,-1)(-1,-2)
\psline(-2,1)(1,-2)
\psline(-1,2)(2,-1)
\psline(1,2)(2,1)
\psset{linecolor=red, fillstyle=solid , fillcolor=red,
linestyle=solid}
\pscircle (-0.5,1.5){.12}
\pscircle (1.5,1.5){.12}
\pscircle (-1.5,0.5){.12}
\pscircle (0.5,0.5){.12}
\pscircle (-0.5,-0.5){.12}
\pscircle (1.5,-0.5){.12}
\pscircle (-0.5,-0.5){.12}
\pscircle (-1.5,-1.5){.12}
\pscircle (0.5,-1.5){.12}
\end{pspicture}
\caption{Graph ${\mathcal{G}}_N({\mathbb{R}}^2)$}
\label{fig:CG}
\end{figure}
\subsubsection{Splitting of the checkers graph}
The graph ${\mathcal{G}}_N({\mathbb{R}}^2)$ splits into two connected components denoted
$$
{\mathcal{G}}_N({\mathbb{R}}^2) = {\mathcal{G}}^+_N({\mathbb{R}}^2) \cup {\mathcal{G}}_N^-({\mathbb{R}}^2),
$$
where ${\mathcal{G}}_N^+({\mathbb{R}}^2)$ contains the vertex $\mathbf{z}_{00}$ corresponding to the face
$\mathbf{f}_{00}$, by convention.
The lattice $\Lambda_N$ acts by translation on $\mathcal{Q}_N({\mathbb{R}}} \def\LL{{\mathbb{L}}} \def\TT{{\mathbb{T}}} \def\EE{{\mathbb{E}}} \def\FF{{\mathbb{F}}^2)$ and on
${\mathcal{G}}_N({\mathbb{R}}} \def\LL{{\mathbb{L}}} \def\TT{{\mathbb{T}}} \def\EE{{\mathbb{E}}} \def\FF{{\mathbb{F}}^2)$. The action on the vertices of
${\mathcal{G}}_N({\mathbb{R}}} \def\LL{{\mathbb{L}}} \def\TT{{\mathbb{T}}} \def\EE{{\mathbb{E}}} \def\FF{{\mathbb{F}}^2)$
(or, equivalently the faces of $\mathcal{Q}_N({\mathbb{R}}} \def\LL{{\mathbb{L}}} \def\TT{{\mathbb{T}}} \def\EE{{\mathbb{E}}} \def\FF{{\mathbb{F}}^2)$) is transitive.
However the sublattice $\Lambda^{ch}_N$ does not act transitively: in fact it
preserves the connected components of the
graph ${\mathcal{G}}_N({\mathbb{R}}} \def\LL{{\mathbb{L}}} \def\TT{{\mathbb{T}}} \def\EE{{\mathbb{E}}} \def\FF{{\mathbb{F}}^2)$ and acts transitively on each component.
The quotient $\Lambda_N/\Lambda^{ch}_N
\simeq {\mathbb{Z}}} \def\DD{{\mathbb{D}}} \def\SS{{\mathbb{S}}} \def\WW{{\mathbb{W}}} \def\QQ{{\mathbb{Q}}} \def\CC{{\mathbb{C}}_2$ is the residual action of the lattice $\Lambda_N$ on the connected
components of ${\mathcal{G}}_N({\mathbb{R}}} \def\LL{{\mathbb{L}}} \def\TT{{\mathbb{T}}} \def\EE{{\mathbb{E}}} \def\FF{{\mathbb{F}}^2)$.
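The splitting above can be sketched concretely. In the sketch below (illustrative only, under the hypothetical convention that the component of a vertex $\mathbf{z}_{kl}$ is determined by the parity of $k+l$, with $\mathbf{z}_{00}$ in the positive component), the diagonal moves defining the edges of the checkers graph visibly preserve each component:

```python
# Illustrative sketch: we assume the component of z_{kl} is determined by
# the parity of k + l, with z_{00} in the "+" component.
def component(k, l):
    return '+' if (k + l) % 2 == 0 else '-'

# The edges of the checkers graph join diagonal neighbors, so the moves
# (k, l) -> (k +/- 1, l +/- 1) preserve the parity of k + l, hence the component.
def diagonal_neighbors(k, l):
    return [(k + 1, l + 1), (k - 1, l - 1), (k + 1, l - 1), (k - 1, l + 1)]
```

In this picture, a translation shifting $(k,l)$ by a vector with even coordinate sum preserves each component, while a translation shifting it by an odd sum exchanges the two components, matching the residual ${\mathbb{Z}}_2$-action.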
\subsubsection{Checkers graph of the quotient}
By construction $\Gamma_N\subset \Lambda^{ch}_N$, so that the action of $\Gamma_N$
preserves the connected components of ${\mathcal{G}}_N({\mathbb{R}}^2)$. It follows that the
graphs ${\mathcal{G}}_N({\mathbb{R}}^2)$, ${\mathcal{G}}_N^+({\mathbb{R}}^2)$ and ${\mathcal{G}}_N^-({\mathbb{R}}^2)$ descend
as graphs ${\mathcal{G}}_N(\Sigma)$, ${\mathcal{G}}_N^+(\Sigma)$ and ${\mathcal{G}}_N^-(\Sigma)$ on the
quotient $\Sigma\simeq {\mathbb{R}}^2/\Gamma_N$ via the covering map $p_N:{\mathbb{R}}^2\to \Sigma$. Furthermore, the graph
${\mathcal{G}}_N(\Sigma)$ splits into two connected components
${\mathcal{G}}_N^+(\Sigma)$ and ${\mathcal{G}}_N^-(\Sigma)$:
$$
{\mathcal{G}}_N(\Sigma) = {\mathcal{G}}_N^+(\Sigma)\cup {\mathcal{G}}_N^-(\Sigma).
$$
\subsubsection{Alternate checkers graph on the plane}
A discussion similar to the case of the quadrangulations $\mathcal{Q}_N({\mathbb{R}}^2)$ and $\hat\mathcal{Q}_N({\mathbb{R}}^2)$ applies here (cf. \S\ref{sec:altquad}). We introduce the checkers graphs
$\hat{\mathcal{G}}_N({\mathbb{R}}^2)$, $\hat{\mathcal{G}}^+_N({\mathbb{R}}^2)$ and $\hat{\mathcal{G}}_N^-({\mathbb{R}}^2)$ obtained as the images of ${\mathcal{G}}_N({\mathbb{R}}^2)$, ${\mathcal{G}}^+_N({\mathbb{R}}^2)$ and ${\mathcal{G}}_N^-({\mathbb{R}}^2)$ under $U_N$. As in the non-hat case, these graphs can also be understood as the checkers graphs of $\hat\mathcal{Q}_N({\mathbb{R}}^2)$. They descend via the covering map $p_N:{\mathbb{R}}^2\to\Sigma$, where we recover ${\mathcal{G}}_N(\Sigma)$, ${\mathcal{G}}_N^+(\Sigma)$ and ${\mathcal{G}}_N^-(\Sigma)$.
Since the vertices of the checkers graph ${\mathcal{G}}_N({\mathbb{R}}^2)$ are the barycenters $\mathbf{z}_{kl}$ of the faces $\mathbf{f}_{kl}$, their images under $U_N$, denoted $\hat \mathbf{z}_{kl}$, are the vertices of $\hat{\mathcal{G}}_N({\mathbb{R}}^2)$.
\subsection{Examples}
\label{sec:examples}
The lattices of ${\mathbb{R}}^2$ defined in \S\ref{sec:sqlat} come with canonical inclusions
$$
\Lambda_1\subset \cdots\subset\Lambda_N\subset\Lambda_{N+1}\subset\cdots
$$
and
$$
\Lambda_1^{ch}\subset \cdots\subset\Lambda_N^{ch}\subset\Lambda_{N+1}^{ch}\subset\cdots
$$
If $\Gamma$ is a sublattice of $\Lambda_1^{ch}$, then its approximations $\Gamma_N$ coincide with $\Gamma$, which makes the construction of $\mathcal{Q}_N(\Sigma)$ somewhat simpler.
For example, we may consider the lattice
$$
\Gamma' = {\mathbb{Z}} (e_1+e_2)\oplus {\mathbb{Z}}(e_2-e_1) \subset \Lambda_1^{ch},
$$
or the lattice
$$
\Gamma'' = {\mathbb{Z}} e_1\oplus {\mathbb{Z}} e_2 = \Lambda_1.
$$
In the latter case, $\Gamma''\subset \Lambda_N^{ch}$ if and only if $N$ is even, and we shall only consider $\mathcal{Q}_N(\Sigma'')$ when $N$ is even.
The quotients $\Sigma'={\mathbb{R}}^2/\Gamma'$ and $\Sigma''={\mathbb{R}}^2/\Gamma''$ are conformally isomorphic, but the quadrangulations $\mathcal{Q}_N(\Sigma')$ and $\mathcal{Q}_N(\Sigma'')$ are not isomorphic through a conformal mapping.
Let $\ell_1:{\mathbb{S}}^1\to{\mathbb{C}}$ and $\ell_2:{\mathbb{S}}^1\to{\mathbb{C}}$ be two smooth embeddings of the circle into the complex plane ${\mathbb{C}}$. This provides an embedding of the torus
\begin{align*}
\ell:{\mathbb{S}}^1\times{\mathbb{S}}^1 &\longrightarrow {\mathbb{C}}^2 \simeq {\mathbb{R}}^4\\
(\varphi_1,\varphi_2)&\longmapsto (\ell_1(\varphi_1),\ell_2(\varphi_2))
\end{align*}
which is isotropic since both maps $\ell_i$ are. The image of $\ell$
is usually called a \emph{product Lagrangian torus of ${\mathbb{R}}^4$}.
The map $\ell$ can be approximated by piecewise linear maps. The idea
is to approximate the two embedded circles by polygons of ${\mathbb{C}}$. We
obtain a product of two polygons approximating the product torus.
More precisely, we define
$$
\ell_i^N:(N^{-1}{\mathbb{Z}})/{\mathbb{Z}} \to {\mathbb{C}}
$$
by $\ell_i^N(v)=\ell_i(v)$. The map $\ell_i^N$ can be extended as a
piecewise linear map denoted
$$
\ell_i^N:{\mathbb{R}}/{\mathbb{Z}} \to {\mathbb{C}}
$$
as well. If $N$ is sufficiently large, the maps $\ell^N_i$ are
piecewise linear embeddings.
For the same reasons as before, the product embedding
$$
\ell_N:{\mathbb{S}}^1\times{\mathbb{S}}^1\to {\mathbb{R}}^4
$$
defined by $\ell_N(\varphi_1,\varphi_2)=(\ell_1^N(\varphi_1),\ell_2^N(\varphi_2))$
is isotropic, and it is a piecewise linear isotropic approximation of
$\ell$.
Notice that the maps $\ell_N$ can be recovered from the
${\mathbb{R}}^4$-coordinates of the images of the points of
$\Lambda_N/{\mathbb{Z}}^2$ alone. These points are by definition the vertices of
the quadrangulation $\mathcal{Q}_N(\Sigma'')$,
modulo the isomorphism
$$
{\mathbb{S}}^1\times{\mathbb{S}}^1\simeq {\mathbb{R}}^2/\Gamma'' =\Sigma'',
$$
where $\Gamma''=\Lambda_1$ is the standard lattice described above.
Notice that each face of the quadrangulation is mapped to a
quadrilateral of ${\mathbb{R}}^4$ contained in a Lagrangian plane.
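As a numerical illustration, the vertices of $\ell_N$ can be computed by sampling the two circles at the points of $(N^{-1}{\mathbb{Z}})/{\mathbb{Z}}$. The sketch below is illustrative only: it takes round circles of radii $r_1$, $r_2$ as stand-ins for the embeddings $\ell_1$, $\ell_2$.

```python
import math

# Sketch: sample two round circles (hypothetical choices for ell_1, ell_2)
# at N equally spaced angles to get the vertices of the piecewise linear
# product torus ell_N in R^4 ~ C^2.
def product_torus_vertices(N, r1=1.0, r2=2.0):
    vertices = []
    for j in range(N):
        for m in range(N):
            t1, t2 = 2 * math.pi * j / N, 2 * math.pi * m / N
            p1 = (r1 * math.cos(t1), r1 * math.sin(t1))  # vertex of ell_1^N
            p2 = (r2 * math.cos(t2), r2 * math.sin(t2))  # vertex of ell_2^N
            vertices.append(p1 + p2)                      # a point of R^4
    return vertices
```

Each quadruple of adjacent vertices spans a quadrilateral lying in a Lagrangian plane of ${\mathbb{R}}^4$, as noted above.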
\begin{rmk}
The piecewise linear isotropic
embeddings $\ell_N$ of
the torus described above were essentially the only examples
known to us at the beginning of this research project.
If $\ell$ is any smooth isotropic map, one can
construct samples $\ell_N$ as above (cf. \S\ref{sec:quadsamp}).
Strictly speaking, these samples
are quadrangular meshes. In general these samples are not isotropic
on the nose. From this point of view, the examples described above are very special, because in
this case the samples are isotropic. In general, one needs a suitable perturbation theory so that they
become isotropic; this is the technical task of this paper.
\end{rmk}
\subsection{A splitting for discrete functions}
The vector space of discrete functions on the faces of the quadrangulation $\mathcal{Q}_N({\mathbb{R}}^2)$ is denoted $C^2(\mathcal{Q}_N({\mathbb{R}}^2))$. A discrete function $\phi$ is defined by its values on faces, given by
$$
\phi_{kl}=\phi(\mathbf{f}_{kl})= \ip{\phi,\mathbf{f}_{kl}}.
$$
In the above notation, $\ip{\cdot, \cdot}$ is the duality bracket, and a discrete function is understood as a linear form on the vector space $C_2(\mathcal{Q}_N({\mathbb{R}}^2))$ spanned by the faces of the quadrangulation.
By construction there is a canonical identification between
the faces $\mathbf{f}_{kl}$ of $\mathcal{Q}_N({\mathbb{R}}^2)$ and the vertices
$\mathbf{z}_{kl}$ of the graph ${\mathcal{G}}_N({\mathbb{R}}^2)$.
Therefore, a discrete function $\phi$ can be understood either as a function on the
faces $\mathbf{f}_{kl}$ of $\mathcal{Q}_N({\mathbb{R}}^2)$, or as a function on the vertices $\mathbf{z}_{kl}$ of ${\mathcal{G}}_N({\mathbb{R}}^2)$.
The above identification leads to an isomorphism of spaces of discrete functions
\begin{equation}
\label{eq:splitA}
C^2(\mathcal{Q}_N({\mathbb{R}}^2)) \simeq C^0({\mathcal{G}}_N({\mathbb{R}}^2)) = C^0({\mathcal{G}}^+_N({\mathbb{R}}^2)) \oplus C^0({\mathcal{G}}^-_N({\mathbb{R}}^2)).
\end{equation}
The same decomposition holds for the \emph{hat} version of these
objects, and
we have a canonical isomorphism
\begin{equation}
\label{eq:splitB}
C^2(\hat\mathcal{Q}_N({\mathbb{R}}^2)) \simeq C^0(\hat{\mathcal{G}}_N({\mathbb{R}}^2)) =
C^0(\hat{\mathcal{G}}^+_N({\mathbb{R}}^2)) \oplus C^0(\hat{\mathcal{G}}^-_N({\mathbb{R}}^2)).
\end{equation}
The isomorphism \eqref{eq:splitA} descends to the quotient $\Sigma$ via $p_N$ and may be expressed as an
isomorphism
\begin{equation}
\label{eq:splitC}
C^2(\mathcal{Q}_N(\Sigma)) \simeq C^0({\mathcal{G}}_N(\Sigma)) = C^0({\mathcal{G}}^+_N(\Sigma)) \oplus C^0({\mathcal{G}}^-_N(\Sigma)).
\end{equation}
Any discrete function $\phi$, in one of the
three kinds of spaces $C^0({\mathcal{G}}_N(\cdot))$
as above, admits a unique decomposition according to the
splittings~\eqref{eq:splitA}, \eqref{eq:splitB} or \eqref{eq:splitC}:
$$
\phi = \phi^+ +\phi ^-
$$
where $\phi^\pm \in C^0({\mathcal{G}}_N^\pm(\cdot))$.
The induced splitting of
$C^2(\mathcal{Q}_N(\cdot))$ via the isomorphisms~\eqref{eq:splitA}, \eqref{eq:splitB} or \eqref{eq:splitC} is also denoted
\begin{equation}
\label{eq:splitD}
C^2(\mathcal{Q}_N(\cdot)) = C^2_+(\mathcal{Q}_N(\cdot))\oplus C^2_-(\mathcal{Q}_N(\cdot)).
\end{equation}
When the discrete function $\phi$ is regarded as a constant function on faces of the quadrangulation, we also write $\phi = \phi^+ +\phi^-$ according to the above splitting.
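In concrete terms, the decomposition $\phi = \phi^+ + \phi^-$ amounts to restricting $\phi$ to each family of vertices and extending by zero. A minimal sketch (illustrative data structures only, reusing the hypothetical convention that the positive component consists of the vertices with $k+l$ even):

```python
# Illustrative only: phi is a dict mapping an index (k, l) to phi_{kl};
# we assume the "+" component consists of the vertices with k + l even.
def split(phi):
    plus = {kl: val for kl, val in phi.items() if sum(kl) % 2 == 0}
    minus = {kl: val for kl, val in phi.items() if sum(kl) % 2 == 1}
    return plus, minus
```

Recombining the two restrictions recovers $\phi$, reflecting the directness of the sum in the splitting.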
\begin{convention}
In the sequel we shall use a shorthand
in order to make statements that hold either for cocycles
of the graph ${\mathcal{G}}_N^+(\Sigma)$, or for cocycles
of the graph ${\mathcal{G}}_N^-(\Sigma)$. For this purpose, we will use
the notation ${\mathcal{G}}_N^\pm(\Sigma)$ and the convention below:
For every statement using the symbols $\pm$ and $\mp$, the reader should either
\begin{itemize}
\item replace all symbols $\pm$ (resp. $\mp$) consistently with $+$ (resp. $-$), or
\item replace all symbols $\pm$ (resp. $\mp$) consistently with $-$ (resp. $+$).
\end{itemize}
\end{convention}
\subsection{Discrete H\"older norms}
\label{sec:dhn}
In this section we define particular norms on the space
$C^2(\mathcal{Q}_N({\mathbb{R}}^2))$ (or, equivalently, on the space $C^0({\mathcal{G}}_N({\mathbb{R}}^2))$), which are
discrete analogues of the H\"older norms. The norms are defined first
on each component of the splitting~\eqref{eq:splitD} (or~\eqref{eq:splitA}).
\subsubsection{${\mathcal{C}}^0$-norm}
\label{sec:findif}
Given $\phi \in C^0({\mathcal{G}}_N^+({\mathbb{R}}^2))$ we
define its ${\mathcal{C}}^0$-norm by
\begin{equation}
\label{eq:C0}
\|\phi \|_{{\mathcal{C}}^0} = \sup_{\mathbf{z} \in {\mathfrak{C}}_0({\mathcal{G}}_N^+({\mathbb{R}}^2))}
|\ip{\phi,\mathbf{z}}|.
\end{equation}
We define a similar norm on $C^0({\mathcal{G}}_N^-({\mathbb{R}}^2))$ (resp. $C^0({\mathcal{G}}_N({\mathbb{R}}^2))$) by taking the $\sup$
over the vertices of ${\mathcal{G}}_N^-({\mathbb{R}}^2)$ (resp. ${\mathcal{G}}_N({\mathbb{R}}^2)$). We deduce a norm, denoted with the same notation $\|\cdot\|_{{\mathcal{C}}^0}$, on
$C^2_\pm(\mathcal{Q}_N({\mathbb{R}}^2))$ via the isomorphisms~\eqref{eq:splitA} and
\eqref{eq:splitD}.
These quantities may be infinite. Later we shall restrict to periodic
functions, which are bounded and have a well-defined ${\mathcal{C}}^0$-norm.
\subsubsection{Finite differences}
The canonical basis $(e_1,e_2)$ of ${\mathbb{R}}^2$, with canonical coordinates $(x,y)$, is
not the most convenient for our situation.
Most of the time, we shall rotate the plane ${\mathbb{R}}^2$ by an angle
$\pi/4$. For this purpose we introduce the rotated orthonormal basis
$(e'_1,e'_2)$ of ${\mathbb{R}}^2$
given by
\begin{equation}
e'_1 = \frac {e_1+e_2}{\sqrt 2},\quad e'_2 = \frac {e_2-e_1}{\sqrt 2}.
\end{equation}
The coordinates $(u,v)$ with respect to the basis $(e'_1,e'_2)$ are
deduced from the canonical coordinates $(x,y)\in{\mathbb{R}}^2$ by the formula
\begin{equation}
\label{eq:uv}
u = \frac {x+y}{\sqrt 2},\quad v = \frac {y-x}{\sqrt 2}.
\end{equation}
We define finite differences of $\phi \in C^0({\mathcal{G}}_N^\pm({\mathbb{R}}^2))$,
which are discrete analogues of the partial derivatives of a function
on ${\mathbb{R}}^2$ with respect to $u$ or $v$. These differences are denoted
$$
\frac{\partial\phi}{\partial \cev u},\quad \frac{\partial\phi}{\partial \vec u},\quad
\frac{\partial\phi}{\partial \cev v}\quad \mbox{ and }\quad \frac{\partial\phi}{\partial \vec v} \in C^0({\mathcal{G}}_N^\pm({\mathbb{R}}^2)),
$$
where the forward or retrograde arrows indicate forward or retrograde differences,
defined as follows: for $\phi \in C^0({\mathcal{G}}_N^+({\mathbb{R}}^2))$, we write
$\phi_{kl} = \ip{\phi,\mathbf{z}_{kl}}$ for $\mathbf{z}_{kl}\in
{\mathfrak{C}}_0({\mathcal{G}}_N^+({\mathbb{R}}^2))$ and put
\begin{align}
\ip{\frac{\partial\phi}{\partial \cev u},\mathbf{z}_{kl}} &= \frac{N}{\sqrt 2}(\phi_{kl}-
\phi_{k-1,l-1}) , \\
\ip{\frac{\partial\phi}{\partial \vec u},\mathbf{z}_{kl}} & = \frac{N}{\sqrt 2}(\phi_{k+1,l+1}-
\phi_{kl}), \\
\ip{\frac{\partial\phi}{\partial \cev v},\mathbf{z}_{kl}} & = \frac{N}{\sqrt 2}(\phi_{kl}-
\phi_{k+1,l-1}) \quad \mbox { and }\\
\ip{\frac{\partial\phi}{\partial \vec v},\mathbf{z}_{kl}} & = \frac{N}{\sqrt 2}(\phi_{k-1,l+1}-
\phi_{kl}).
\end{align}
The finite differences are defined with the same formulae if $\phi
\in C^0({\mathcal{G}}_N^-({\mathbb{R}}^2))$. Since all the indices involved in the above formulae correspond to
vertices in the connected component of $\mathbf{z}_{kl}$ in ${\mathcal{G}}_N({\mathbb{R}}^2)$, the
finite differences $\frac{\partial}{\partial \cev u}$, $\frac{\partial}{\partial \vec u}$,
$\frac{\partial}{\partial \cev v}$ and $\frac{\partial}{\partial \vec v}$ define endomorphisms
$$
C^0({\mathcal{G}}_N^+({\mathbb{R}}^2))\oplus C^0({\mathcal{G}}_N^-({\mathbb{R}}^2))\longrightarrow
C^0({\mathcal{G}}_N^+({\mathbb{R}}^2))\oplus C^0({\mathcal{G}}_N^-({\mathbb{R}}^2)),
$$
which respect the above splitting.
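The four finite differences can be sketched as follows, with $\phi$ given as a function of the integer indices $(k,l)$ of the vertices $\mathbf{z}_{kl}$ (the function names are illustrative, not from the text):

```python
import math

SQRT2 = math.sqrt(2)

# Sketch of the four finite differences; phi(k, l) stands for phi_{kl}.
def d_u_fwd(phi, N):   # forward u-difference
    return lambda k, l: (N / SQRT2) * (phi(k + 1, l + 1) - phi(k, l))

def d_u_ret(phi, N):   # retrograde u-difference
    return lambda k, l: (N / SQRT2) * (phi(k, l) - phi(k - 1, l - 1))

def d_v_fwd(phi, N):   # forward v-difference
    return lambda k, l: (N / SQRT2) * (phi(k - 1, l + 1) - phi(k, l))

def d_v_ret(phi, N):   # retrograde v-difference
    return lambda k, l: (N / SQRT2) * (phi(k, l) - phi(k + 1, l - 1))
```

Note that the forward difference at $\mathbf{z}_{kl}$ equals the retrograde difference at $\mathbf{z}_{k+1,l+1}$ (and similarly in the $v$-direction), which is the translation identity derived below.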
Finite differences can also be expressed using the translations of
$\Lambda_N^{ch}$ acting on functions. If $T_u$, $T_v$ are the
translations acting on ${\mathcal{G}}_N({\mathbb{R}}^2)$, given respectively by the
vectors
$\frac {e_1+e_2}N$ and $\frac {e_2-e_1}N$, then
\begin{equation}
\label{eq:fd1}
\frac{\partial\phi}{\partial\vec u} = \frac N{\sqrt 2} ( \phi\circ T_u -\phi ), \quad \frac{\partial\phi}{\partial\cev u} = \frac N{\sqrt 2} ( \phi -\phi \circ T_u^ {-1} )
\end{equation}
and
\begin{equation}
\label{eq:fd2}
\frac{\partial\phi}{\partial\vec v} = \frac N{\sqrt 2} ( \phi\circ T_v -\phi ), \quad \frac{\partial\phi}{\partial\cev v} = \frac N{\sqrt 2} ( \phi -\phi \circ T_v^ {-1} ).
\end{equation}
As an immediate consequence of \eqref{eq:fd1}, we have
\begin{equation}
\label{eq:fd3}
\frac{\partial\phi}{\partial\vec u} = \frac{\partial\phi}{\partial \cev u}\circ T_u,
\end{equation}
so that the functions $\frac{\partial\phi}{\partial\vec u}$ and $\frac{\partial\phi}{\partial \cev u}$
have the same ${\mathcal{C}}^0$-norm.
The same holds for the $v$-coordinate since by~\eqref{eq:fd2}
\begin{equation}
\label{eq:fd4}
\frac{\partial\phi}{\partial\vec v} = \frac{\partial\phi}{\partial \cev v}\circ T_v,
\end{equation}
so that finite differences $\frac{\partial\phi}{\partial \cev v}$ and $\frac{\partial\phi}{\partial \vec v}$
have the same ${\mathcal{C}}^ 0$-norm.
\begin{notation}
As far as the ${\mathcal{C}}^0$-norms of finite
differences are concerned, we may drop the arrow notation over $u$ or $v$, since
the forward and retrograde differences have the same norms.
\end{notation}
\subsubsection{Definition of H\"older norms}
For $\phi\in C^0({\mathcal{G}}_N^+({\mathbb{R}}^2))$, we may define its ${\mathcal{C}}^1$-norm as
$$
\|\phi\|_{{\mathcal{C}}^1} = \|\phi\|_{{\mathcal{C}}^0} + \left \|\frac{\partial \phi}{\partial
u}\right \|_{{\mathcal{C}}^0} + \left \|\frac{\partial \phi}{\partial v}\right
\|_{{\mathcal{C}}^0}
$$
and its ${\mathcal{C}}^2$-norm by
$$
\|\phi\|_{{\mathcal{C}}^2} = \|\phi\|_{{\mathcal{C}}^1} + \left \|\frac{\partial^2 \phi}{\partial
u^2}\right \|_{{\mathcal{C}}^0} + \left \|\frac{\partial^2 \phi}{\partial
v^2}\right \|_{{\mathcal{C}}^0} + \left \|\frac{\partial^2 \phi}{\partial
u\partial v}\right \|_{{\mathcal{C}}^0}.
$$
More generally, we can define a ${\mathcal{C}}^k$-norm on $C^0({\mathcal{G}}_N^+({\mathbb{R}}^2))$ by
induction. Similarly, we define a ${\mathcal{C}}^k$-norm on $C^0({\mathcal{G}}_N^-({\mathbb{R}}^2))$.
For a positive H\"older constant $\alpha\in (0,1)$, we define the
${\mathcal{C}}^{0,\alpha}$-H\"older norm of $\phi\in C^0({\mathcal{G}}^+({\mathbb{R}}} \def\LL{{\mathbb{L}}} \def\TT{{\mathbb{T}}} \def\EE{{\mathbb{E}}} \def\FF{{\mathbb{F}}^2))$ by
\begin{equation}
\label{eq:C0alpha}
\|\phi\|_{{\mathcal{C}}^{0,\alpha}}= \|\phi\|_{{\mathcal{C}}^{0}} +
\sup_{
\substack{{\mathbf{z}_{kl}, \mathbf{z}_{mn} \in
{\mathfrak{C}}_0({\mathcal{G}}_N^+({\mathbb{R}}} \def\LL{{\mathbb{L}}} \def\TT{{\mathbb{T}}} \def\EE{{\mathbb{E}}} \def\FF{{\mathbb{F}}^2))} \\
{\mathbf{z}_{kl}\neq\mathbf{z}_{mn} }
}q
}\frac {|\phi_{kl}-\phi_{mn}|}{\|\mathbf{z}_{kl}-\mathbf{z}_{mn}\|^\alpha},
\end{equation}
where $\|\mathbf{z}_{kl}-\mathbf{z}_{mn}\|$ is the Euclidean distance between $\mathbf{z}_{kl}$
and $\mathbf{z}_{mn}$ in ${\mathbb{R}}} \def\LL{{\mathbb{L}}} \def\TT{{\mathbb{T}}} \def\EE{{\mathbb{E}}} \def\FF{{\mathbb{F}}^2$.
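For a finite family of vertices, the quantity \eqref{eq:C0alpha} can be computed directly. In the sketch below (illustrative data structures only), `verts` maps an index $(k,l)$ to the position $\mathbf{z}_{kl}$ in ${\mathbb{R}}^2$ and `phi` maps the same index to $\phi_{kl}$:

```python
import itertools
import math

# Sketch: the C^{0,alpha} Holder norm of eq. (C0alpha), restricted to a
# finite set of vertices.
def holder_norm(phi, verts, alpha):
    c0 = max(abs(v) for v in phi.values())    # the C^0 part
    semi = 0.0                                # the Holder seminorm
    for i, j in itertools.combinations(verts, 2):
        dist = math.dist(verts[i], verts[j])  # Euclidean distance |z_i - z_j|
        semi = max(semi, abs(phi[i] - phi[j]) / dist ** alpha)
    return c0 + semi
```

For a constant function the seminorm vanishes and the norm reduces to the ${\mathcal{C}}^0$-norm, as expected.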
The ${\mathcal{C}}^{1,\alpha}$-H\"older norm of $\phi\in C^0({\mathcal{G}}^+({\mathbb{R}}} \def\LL{{\mathbb{L}}} \def\TT{{\mathbb{T}}} \def\EE{{\mathbb{E}}} \def\FF{{\mathbb{F}}^2))$ is defined by
$$
\|\phi\|_{{\mathcal{C}}^{1,\alpha}} = \|\phi\|_{{\mathcal{C}}^0} + \left \|\frac{\partial \phi}{\partial
u}\right \|_{{\mathcal{C}}^{0,\alpha}} + \left \|\frac{\partial \phi}{\partial
v}\right \|_{{\mathcal{C}}^{0,\alpha}} ,
$$
and its ${\mathcal{C}}^{2,\alpha}$-H\"older norm is defined by
$$
\|\phi\|_{{\mathcal{C}}^{2,\alpha}} = \|\phi\|_{{\mathcal{C}}^1} + \left \|\frac{\partial^2 \phi}{\partial
u^2}\right \|_{{\mathcal{C}}^{0,\alpha}} + \left \|\frac{\partial^2 \phi}{\partial
v^2}\right \|_{{\mathcal{C}}^{0,\alpha}} + \left \|\frac{\partial^2 \phi}{\partial
u \partial v}\right \|_{{\mathcal{C}}^{0,\alpha}}.
$$
More generally, we can define a ${\mathcal{C}}^{k,\alpha}$-H\"older norm by induction on
$C^0({\mathcal{G}}_N^+({\mathbb{R}}^2))$, in an obvious way. We define
a ${\mathcal{C}}^k$-norm and a ${\mathcal{C}}^{k,\alpha}$-H\"older norm on $C^0({\mathcal{G}}_N^-({\mathbb{R}}^2))$ by taking the
$\sup$ in the above formulae over the vertices
of ${\mathcal{G}}_N^-({\mathbb{R}}^2)$ instead.
\subsubsection{Weak H\"older norms}
\label{sec:won}
For
$\phi \in C^2(\mathcal{Q}_N({\mathbb{R}}^2))\simeq C^0({\mathcal{G}}_N({\mathbb{R}}^2))$ we use the direct sum decomposition
$\phi=\phi^++\phi^-$ of \eqref{eq:splitA} or \eqref{eq:splitD}. We define the \emph{weak} ${\mathcal{C}}^{k,\alpha}_w$-norm of $\phi$ by
$$
\|\phi\|_{{\mathcal{C}}^{k,\alpha}_w} = \left \|\phi^+\right \|_{{\mathcal{C}}^{k,\alpha}}
+\left \|\phi^-\right \|_{{\mathcal{C}}^{k,\alpha}},
$$
where the H\"older norms of the components $\phi^\pm$ are defined in
the previous section.
Similarly, the \emph{weak} ${\mathcal{C}}^k_w$-norm of $\phi $ is defined by
$$
\|\phi\|_{{\mathcal{C}}^{k}_w} = \left \|\phi^+\right \|_{{\mathcal{C}}^{k}}
+\left \|\phi^-\right \|_{{\mathcal{C}}^{k}}.
$$
\begin{rmk}
The discrete ${\mathcal{C}}^{k,\alpha}_w$-H\"older norms and ${\mathcal{C}}^k_w$-norms defined above on
$C^0({\mathcal{G}}_N({\mathbb{R}}^2))$ are called \emph{weak} because only the variations of $\phi$ in the \emph{diagonal directions}, spanned by the vectors $\frac{e_1+e_2}2$ and $\frac{e_2-e_1}2$,
are taken into account. It turns out that these weak norms are the ones appropriate to
set up the fixed point principle, as explained in~\S\ref{sec:fpt}.
In the sequel, we shall drop the term \emph{weak} for the sake of brevity. However, the reader should bear in mind that these norms may allow some unexpected behavior when $N$ goes to infinity (cf. Example~\ref{example:comb}).
\end{rmk}
\subsubsection{Quotient and alternate quadrangulations}
The alternate versions of the quadrangulation $\hat\mathcal{Q}_N({\mathbb{R}}^2)$
and of the checkers graph $\hat{\mathcal{G}}_N({\mathbb{R}}^2)$ are canonically isomorphic to the non-hat
versions $\mathcal{Q}_N({\mathbb{R}}^2)$
and ${\mathcal{G}}_N({\mathbb{R}}^2)$. Thus, we have an isomorphism
$$C^2(\mathcal{Q}_N({\mathbb{R}}^2))\simeq C^2(\hat\mathcal{Q}_N({\mathbb{R}}^2)).$$
This isomorphism allows us to define ${\mathcal{C}}^k_w$- and ${\mathcal{C}}^{k,\alpha}_w$-norms on $C^2(\hat\mathcal{Q}_N({\mathbb{R}}^2))$.
A function $\phi\in C^2(\mathcal{Q}_N(\Sigma))$ admits a lift
$\phi_N = \phi \circ p_N\in C^2(\mathcal{Q}_N({\mathbb{R}}^2))$. We define the norms of $\phi$ as the norms of its lift:
$$
\|\phi\|_{{\mathcal{C}}^{k,\alpha}_w} = \|\phi_N\|_{{\mathcal{C}}^{k,\alpha}_w},\quad \|\phi\|_{{\mathcal{C}}^{k}_w} = \|\phi_N\|_{{\mathcal{C}}^{k}_w}.
$$
\begin{rmk}
The discrete functions on $\Sigma$ have finite H\"older norm since they are bounded, and so are their finite differences.
\end{rmk}
\subsection{Convergence of discrete functions}
\label{sec:conv}
In this section, we introduce a suitable notion of convergence for sequences of discrete functions. This notion is the cornerstone of a version of the Arzelà-Ascoli compactness theorem, which will be an essential tool to obtain the spectral gap results of~\S\ref{sec:specgap}.
\subsubsection{Definition of converging sequences}
\begin{dfn}
\label{def:conv}
Let $(N_k)_{k\in {\mathbb{N}}}$ be an increasing sequence of positive integers. Let $\psi_{N_k}\in C^0(\hat{\mathcal{G}}^\pm_{N_k}({\mathbb{R}}^2))$ be a sequence of discrete functions and
$\phi:{\mathbb{R}}^2\to {\mathbb{R}}$ be a function defined on the plane.
Assume that
for every point $w \in {\mathbb{R}}^2$ and every $\epsilon >0$, there exist
$\delta >0$ and an integer $k_0 >0$ such that,
for every integer $k\geq k_0$ and every
vertex $\mathbf{z}\in{\mathfrak{C}}_0(\hat{\mathcal{G}}_{N_k}^\pm({\mathbb{R}}^2))$
with $\| w-\mathbf{z} \|\leq \delta$, we have
$|\phi(w)-\psi_{N_k}(\mathbf{z})|\leq \epsilon$.
Then we say that the sequence of discrete functions $(\psi_{N_k})$
converges toward the function $\phi:{\mathbb{R}}^2\to {\mathbb{R}}$. This property is denoted by
$$
\psi_{N_k} \to \phi \quad \mbox{ or } \quad \lim \psi_{N_k} = \phi.
$$
If $\psi_{N_k}\in C^0(\hat{\mathcal{G}}_{N_k}({\mathbb{R}}^2))$ is a sequence of discrete
functions with associated decomposition $ \psi_{N_k}= \psi^+_{N_k}+
\psi^-_{N_k}$ and with the property that the components converge to
functions $\phi^+$ and $\phi^-$, in the sense of the above
definition, i.e.
$$
\psi_{N_k}^+ \to \phi^+ \quad \mbox{ and }\quad \psi_{N_k}^- \to \phi^-,
$$
we say that $\psi_{N_k}$ converges toward the pair of functions $(\phi^+,\phi^-)$. This property is denoted by
$$
\psi_{N_k} \to (\phi^+,\phi^ -) \quad \mbox{ or } \quad \lim \psi_{N_k} = (\phi^+,\phi^ -).
$$
\end{dfn}
\begin{rmk}
The above definition may also be stated in a somewhat slicker way:
we say that a sequence $\psi_{N_k} \in
C^0(\hat{\mathcal{G}}^+_{N_k}({\mathbb{R}}^2))$ converges toward a function
$\phi:{\mathbb{R}}^2\to {\mathbb{R}}$ if, at every point $w$ of the plane,
$\psi_{N_k}$ takes values arbitrarily close to $\phi(w)$, for
every $k$ sufficiently large and for all
vertices of $\hat{\mathcal{G}}^+_{N_k}({\mathbb{R}}^2)$ in a sufficiently
small neighborhood of $w$.
\end{rmk}
\begin{example}
\label{example:comb}
The splitting of discrete functions into their positive and negative
components leads to some unusual types of converging sequences
in the sense of Definition~\ref{def:conv}.
For example, we may define a sequence of \emph{discrete comb functions} as follows. We
define $\psi_N^\pm\in C^0(\hat{\mathcal{G}}^\pm_N({\mathbb{R}}^2))$ as the constant
function on each connected component of the graph, equal to $\pm 1$ at
each vertex of $\hat{\mathcal{G}}^\pm_N({\mathbb{R}}^2)$. Let
${\mathbf{1}}:{\mathbb{R}}^2\to{\mathbb{R}}$ be the constant function equal to $1$ at every
point of the plane. Then $\lim \psi_N^+ ={\mathbf{1}}$ whereas $\lim \psi_N^-
=-{\mathbf{1}}$. If $\psi_N:=\psi^+_N+\psi^-_N$, then $\psi_N$ converges and
$$
\lim\psi_N=({\mathbf{1}},-{\mathbf{1}}).
$$
Typically, the sequence $\psi_N$ is uniformly bounded in weak
${\mathcal{C}}^{0,\alpha}_w$-norm. Our notion of convergence is designed to
state a version of the Ascoli-Arzela theorem in this setting.
\end{example}
The notion of convergence of discrete functions extends to discrete functions on $\Sigma$ as follows:
\begin{dfn}
\label{def:convsigma}
Let $\psi_{N_k}\in C^0({\mathcal{G}}^\pm_{N_k}(\Sigma))$ be a sequence of discrete functions and
$\phi:\Sigma\to {\mathbb{R}}$ be a function defined on $\Sigma$.
Let $\hat \psi_{N_k} =\psi_{N_k}\circ p \in C^0(\hat{\mathcal{G}}^\pm_{N_k}({\mathbb{R}}^2))$ be the lift of $\psi_{N_k}$
via the canonical projection $p$ and $\hat
\phi=\phi\circ p:{\mathbb{R}}^2\to {\mathbb{R}}$ be the lift of $\phi$. We say that $(\psi_{N_k})$ converges to
$\phi$ if $(\hat\psi_{N_k})$ converges to $\hat \phi:{\mathbb{R}}^2\to {\mathbb{R}}$ in the sense of Definition~\ref{def:conv}.
This property is denoted by
$$
\psi_{N_k} \to \phi \quad \mbox{ or } \quad \lim \psi_{N_k} = \phi.
$$
If $\psi_{N_k}\in C^0({\mathcal{G}}_{N_k}(\Sigma))$ is a sequence of discrete functions with associated decomposition $ \psi_{N_k}= \psi^+_{N_k}+ \psi^-_{N_k}$ and with the property that both components converge to some functions $\phi^+:\Sigma\to{\mathbb{R}}$ and $\phi^-:\Sigma\to{\mathbb{R}}$ in the sense of the above
definition, we say that $\psi_{N_k}$ converges toward the pair of functions $(\phi^+,\phi^-)$ and denote this by
$$
\psi_{N_k} \to (\phi^+,\phi^ -) \mbox{ or } \lim \psi_{N_k} = (\phi^+,\phi^ -).
$$
\end{dfn}
\subsection{Continuity and limits of discrete functions}
Our notion of convergence for discrete functions is intimately
related to uniform convergence in the case of continuous functions.
Indeed, we have the following result:
\begin{prop}
\label{lemma:cont}
Let $\psi_{N_k}\in C^0(\hat{\mathcal{G}}^\pm_{N_k}({\mathbb{R}}^2))$ be a sequence of discrete
functions converging toward
$\phi:{\mathbb{R}}^2\to {\mathbb{R}}$. Then $\phi$ must be continuous.
\end{prop}
\begin{proof}
The proof goes by contradiction:
assume that $\psi_{N_k}\in C^0(\hat{\mathcal{G}}^\pm_{N_k}({\mathbb{R}}^2))$ is a sequence
converging toward a discontinuous function $\phi$.
Then there exist $\epsilon_0>0$, $w\in {\mathbb{R}}^2$ and a
sequence of points $w_k\in {\mathbb{R}}^2$ such that $\lim_{k\to \infty} w_k = w$ and $|\phi(w_k) -
\phi(w)|\geq \epsilon_0$ for all $k$.
From the definition of convergence
of discrete functions, we can extract a subsequence $N'_k$ of $N_k$
and vertices $\mathbf{z}_k$ of $\hat{\mathcal{G}}^\pm_{N'_k}({\mathbb{R}}^2)$
such that $\|w_k - \mathbf{z}_{k}\|\to
0 $ and $|\psi_{N'_k}(\mathbf{z}_k) - \phi(w_{k})| \to 0$ as
$k\to \infty$.
By construction $\lim \mathbf{z}_k=w$. Furthermore
$$
|\phi(w_{k})-\phi( w)|\leq
|\phi(w_{k})- \psi_{N'_k}(\mathbf{z}_k)| + |\psi_{N'_k}(\mathbf{z}_k)- \phi( w)|.
$$
The LHS is bounded below by $\epsilon_0>0$. The first term of the
RHS converges to $0$ by the choice of the vertices $\mathbf{z}_k$. The second term
of the RHS converges to zero by definition of the convergence of a
sequence of discrete functions, since $\lim \mathbf{z}_k=w$. This is a contradiction, hence
$\phi:{\mathbb{R}}^2\to {\mathbb{R}}$ is continuous.
\end{proof}
\begin{cor}
\label{cor:cont}
Let $\psi_{N_k}\in C^0({\mathcal{G}}^\pm_{N_k}(\Sigma))$ be a sequence of discrete
functions converging toward
$\phi:\Sigma\to {\mathbb{R}}$. Then $\phi$ is continuous.
\end{cor}
\begin{proof}
We use the covering map $p:{\mathbb{R}}^2\to\Sigma$ and apply Proposition~\ref{lemma:cont} to the lifts of the functions.
\end{proof}
\subsection{Samples and convergence of discrete functions}
\begin{dfn}
If $\phi:{\mathbb{R}}^2\to {\mathbb{R}}$ is any real function, we define its
\emph{samples} $\phi^\pm_N\in C^0( \hat{\mathcal{G}}^\pm_N({\mathbb{R}}^2))$ by
$$
\ip{\phi_N^\pm,
\hat \mathbf{z}_{kl}}:= \phi(\hat \mathbf{z}_{kl})
$$
for every $\hat \mathbf{z}_{kl}\in{\mathfrak{C}}_0(\hat{\mathcal{G}}^\pm_N({\mathbb{R}}^2))$.
We define similarly the samples $\phi_N^\pm\in C^0({\mathcal{G}}_N^\pm(\Sigma))$ of
a real function $\phi:\Sigma\to{\mathbb{R}}$.
Let $\hat \phi =\phi \circ p :{\mathbb{R}}^2\to {\mathbb{R}} $
be the lift of $\phi$ via the projection $p$.
Its samples $\hat\phi_N^\pm\in C^0(\hat{\mathcal{G}}_N^\pm({\mathbb{R}}^2))$, as defined above, descend to
discrete functions $\phi_N^\pm\in C^0({\mathcal{G}}_N^\pm(\Sigma))$ on the
quotient, referred to as the samples of $\phi$.
\end{dfn}
The convergence of discrete functions toward a continuous limit is uniform, in the sense of the following proposition:
\begin{prop}
\label{prop:c0nec}
Let $\psi_{N_k}^\pm\in C^0({\mathcal{G}}^\pm_{N_k}(\Sigma))$ be a sequence of discrete
functions converging to
$\phi:\Sigma\to {\mathbb{R}}$ and
$\phi_{N_k}^\pm \in C^0({\mathcal{G}}^\pm_{N_k}(\Sigma))$ be the samples of $\phi$. Then
$$
\lim_{k\to \infty}\left \|\phi^\pm_{N_k} - \psi^\pm_{N_k} \right \|_{{\mathcal{C}}^0} = 0.
$$
\end{prop}
\begin{proof}
Since $\phi:\Sigma\to {\mathbb{R}}$
is a limit of a sequence of discrete functions, it is continuous by
Corollary~\ref{cor:cont}. The surface $\Sigma$ is compact, hence $\phi$ is uniformly
continuous by the Heine--Cantor theorem.
We denote by $\hat\psi^\pm_{N_k}$ and $\hat \phi$ the canonical lifts of
$\psi^\pm_{N_k}$ and $\phi$ via the projection $p:{\mathbb{R}}^2\to \Sigma$.
Since $\phi$ is uniformly continuous, so is $\hat\phi$. Let $\epsilon$ be a positive real number. By uniform continuity, there exists $\delta>0$
such that for every $w, w' \in{\mathbb{R}}^2$
\begin{equation}
\label{eq:unifcontcond}
\|w- w'\|\leq \delta \Rightarrow |\hat\phi(w) -
\hat\phi(w')|\leq \epsilon.
\end{equation}
By definition of the convergence of discrete functions, for each $w\in{\mathbb{R}}^2$, we may choose an integer $k(w)\geq 0$
and a real number $\eta(w)>0$ such that for all $k\geq k(w)$ and $\hat
\mathbf{z}\in
{\mathfrak{C}}_0(\hat {\mathcal{G}}^\pm_{N_k}({\mathbb{R}}^2))$ we have
\begin{equation}
\label{eq:convcond}
\|\hat \mathbf{z}- w\| \leq \eta(w) \Rightarrow |\hat
\psi^\pm_{N_k}(\hat \mathbf{z})-
\hat\phi(w)|\leq \epsilon.
\end{equation}
For each $w\in{\mathbb{R}}^2$, put
$$
\delta(w) = \min(\delta,\eta(w)).
$$
The family of open Euclidean balls $B(w,\delta(w))$, centered at
$w\in{\mathbb{R}}^2$ with radius $\delta(w)$, provides an open cover of ${\mathbb{R}}^2$.
Their images $U_w=p(B(w,\delta(w)))$, by the canonical projection
$p:{\mathbb{R}}^2\to\Sigma$,
provide an open cover of the compact surface $\Sigma$. Hence we can
extract a finite cover $U_i=U_{w_i}$ of $\Sigma$, for a finite collection of
points $\{w_i\in {\mathbb{R}}^2, 1\leq i\leq d\}$. We put $k_0 = \max_{1\leq i \leq d}
k(w_i)$ and consider $k\geq k_0$.
Every $\mathbf{z}\in {\mathfrak{C}}_0({\mathcal{G}}^\pm_{N_k}(\Sigma))$ is an
element of one of the open sets $U_i$. Hence $\mathbf{z}$ admits a lift
$\hat \mathbf{z} \in {\mathfrak{C}}_0(\hat{\mathcal{G}}^\pm_{N_k}({\mathbb{R}}^2))$ contained in one of the balls $B(w_i,\delta(w_i))$. In
particular
$$
| \psi^\pm_{N_k}(\mathbf{z}) - \phi^\pm_{N_k}(\mathbf{z})| =
| \hat \psi^\pm_{N_k}(\hat \mathbf{z}) - \hat \phi^\pm_{N_k}(\hat \mathbf{z})|
\leq |\hat \psi^\pm_{N_k}(\hat \mathbf{z}) -
\hat \phi(w_i)| + |\hat \phi (w_i) - \hat \phi^\pm_{N_k}(\hat \mathbf{z})|.
$$
The first term of the RHS is bounded above by $\epsilon$ by
\eqref{eq:convcond}. By definition $\hat \phi^\pm_{N_k}(\hat \mathbf{z}) = \hat
\phi(\hat \mathbf{z})$, hence the second term of the RHS is bounded above by
$\epsilon$ thanks to \eqref{eq:unifcontcond}.
In conclusion
$$
| \psi^\pm_{N_k}( \mathbf{z}) - \phi^\pm_{N_k}(\mathbf{z})|\leq 2\epsilon,
$$
which shows that
$$
\|\psi^\pm_{N_k} - \phi^\pm_{N_k}\|_{{\mathcal{C}}^0}\leq 2\epsilon
$$
for $k\geq k_0$.
\end{proof}
We also have a sort of converse for Proposition~\ref{prop:c0nec}:
\begin{prop}
\label{prop:c0suf}
Let $\psi^\pm_{N_k}\in C^0({\mathcal{G}}^\pm_{N_k}(\Sigma))$ be a sequence of discrete
functions and
$\phi:\Sigma\to {\mathbb{R}}$ a continuous function such that
$$
\lim_{k\to \infty}\left \|\phi_{N_k}^\pm - \psi_{N_k}^\pm \right\|_{{\mathcal{C}}^0} = 0,
$$
where $\phi^\pm_{N_k}\in C^0({\mathcal{G}}^\pm_{N_k}(\Sigma))$ are the samples of $\phi$.
Then
$$
\lim \psi_{N_k}^\pm = \phi.
$$
\end{prop}
\begin{proof}
The compactness of $\Sigma$ implies the uniform continuity of
$\phi$, which is the key ingredient of a proof closely parallel to that of Proposition~\ref{prop:c0nec}. The details are left to the
interested reader.
\end{proof}
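For the reader's convenience, the heart of the omitted argument is the following schematic two-term estimate, stated at the level of lifts: for a vertex $\hat{\mathbf{z}}$ sufficiently close to a point $w$ of the plane, the defining property of samples, $\hat\phi^\pm_{N_k}(\hat{\mathbf{z}})=\hat\phi(\hat{\mathbf{z}})$, gives

```latex
|\hat\psi^\pm_{N_k}(\hat{\mathbf{z}})-\hat\phi(w)|
  \;\leq\;
  \left\|\psi^\pm_{N_k}-\phi^\pm_{N_k}\right\|_{{\mathcal{C}}^0}
  + |\hat\phi(\hat{\mathbf{z}})-\hat\phi(w)|,
```

where the first term tends to $0$ by hypothesis, while the second is small by the uniform continuity of $\phi$, itself a consequence of the compactness of $\Sigma$.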
Proposition~\ref{prop:c0suf} has the following immediate corollary, which shows that samples of a function are natural approximations:
\begin{cor}
\label{cor:sample} Let
$\phi:\Sigma\to {\mathbb{R}}$ be a continuous function, and
$\phi^\pm_{N} \in C^0({\mathcal{G}}^\pm_{N}(\Sigma))$ its samples.
Then
$$
\lim \phi_{N}^\pm = \phi.
$$
\end{cor}
\subsection{Precompactness}
We denote by $\|\cdot \|_{{\mathcal{C}}^{0,\alpha}}$ the usual Hölder norm on
the space of functions $\phi:\Sigma\to {\mathbb{R}}$, defined with respect to the
Riemannian metric $g_\sigma$, for instance. The corresponding Hölder space is
denoted
${\mathcal{C}}^{0,\alpha}(\Sigma)$.
We may now state a version of the Ascoli-Arzela theorem adapted to our setting:
\begin{theo}[Ascoli-Arzela, first version]
\label{theo:AA1}
Let $\psi^\pm_{N_k}$ be a sequence of discrete functions in
$C^0({\mathcal{G}}^\pm_{N_k}(\Sigma))$,
which are uniformly bounded in ${\mathcal{C}}^{0,\alpha}$-norm.
In other words,
there exists a constant $c>0$ with the property that
$$
\|\psi^\pm_{N_k}\|_{{{\mathcal{C}}}^{0,\alpha}}\leq c
$$
for all $k\in{\mathbb{N}}$.
Then there exists a subsequence $N_k'$ of $N_k$ and a function $\phi^\pm:\Sigma\to
{\mathbb{R}}$ in ${\mathcal{C}}^{0,\alpha}(\Sigma)$, such that
$$
\lim \psi^\pm_{N_k'} = \phi^\pm.
$$
\end{theo}
\begin{proof}
Let
$\psi_{N_k}\in C^0({\mathcal{G}}^+_{N_k}(\Sigma))$ be a sequence of discrete functions bounded in H\"older norm, as in the theorem; we treat the case of ${\mathcal{G}}^+$, the case of ${\mathcal{G}}^-$ being identical.
We start by choosing a countable dense set $Q=\{q_n\in
\Sigma, n\in{\mathbb{N}} \}$ of
$\Sigma$; for instance the projection by $p:{\mathbb{R}}^2\to \Sigma$ of the points with rational
coordinates in ${\mathbb{R}}^2$ is a possible choice. For each $q_n$, we choose a lift $\hat q_n$ such that $p(\hat q_n)=q_n$.
For each $n$ we choose a sequence $\hat \mathbf{z}^n_{N_k}\in
{\mathfrak{C}}_0(\hat{\mathcal{G}}^+_{N_k}({\mathbb{R}}^2))$ such
that
$$
\lim_{k\to\infty} \hat \mathbf{z}_{N_k}^n = \hat q_n.
$$
We denote by $\hat\psi_{N_k}=\psi_{N_k}\circ p\in C^0(\hat{\mathcal{G}}_{N_k}^+({\mathbb{R}}^2))$ the
canonical lift of $\psi_{N_k}$. By assumption, the uniform estimate on the H\"older norms provides a
uniform bound $|\hat \psi_{N_k}(\hat \mathbf{z}_{N_k}^n)|\leq c$. Hence we can choose a
subsequence $N^0_k$ of integers such that
$\hat \psi_{N^0_k}(\hat \mathbf{z}_{N^0_k}^0)$ converges as $k\to \infty$.
By extracting a subsequence $N^1_k$ of $N^0_k$, we may assume that
$\hat \psi_{N^1_k}(\hat \mathbf{z}_{N^1_{k}}^n)$ converges for $n=0$ and $n=1$, as $k\to
\infty$.
Extracting subsequences inductively provides a
family of subsequences $N^m_k$, indexed by $m$, such that
$\hat \psi_{N^m_k}(\hat \mathbf{z}^n_{N^m_{k}})$ converges for every fixed $0\leq n\leq m$ as
$k\to \infty$.
Finally, using the diagonal subsequence $M_k=N^k_k$, we find a
subsequence $\psi_{M_k}$ such that $\hat\psi_{M_k}(\hat \mathbf{z}^n_{M_k})$
converges for every $n\in {\mathbb{N}}$, as $k\to \infty$.
The function
$$
\phi:Q\to {\mathbb{R}}
$$
is defined on the countable dense subset $Q\subset\Sigma$ by
$$\phi(q_n) = \lim_{k\to\infty} \hat \psi_{M_k}(\hat \mathbf{z}^n_{M_k}).
$$
Since the $\psi_{N_k}$ are uniformly bounded with respect to the discrete ${\mathcal{C}}^{0,\alpha}$-norms, it follows that
the function $\phi:Q\to {\mathbb{R}}$ is bounded with respect to the usual ${\mathcal{C}}^{0,\alpha}$-norm. In
particular $\phi$ is uniformly continuous on $Q$, hence it admits a unique
continuous extension $\phi:\Sigma\to {\mathbb{R}}$ which turns out to be in
${\mathcal{C}}^{0,\alpha}(\Sigma)$ as well.
One can readily check, using the uniform Hölder-norm estimates, that
the construction of the function $\phi$ is independent of the choice
of the sequences $\hat \mathbf{z}^n_{N_k}$.
Furthermore the uniform H\"older estimates imply that
$$\lim _{k\to \infty}\|\psi_{M_k}-\phi_{M_k}\|_{{\mathcal{C}}^0}=0,$$ where
$\phi_{M_k}\in C^0({\mathcal{G}}^+_{M_k}(\Sigma))$ are the samples of $\phi$.
This implies by Proposition~\ref{prop:c0suf} that
$$
\lim \psi_{M_k} = \phi.
$$
\end{proof}
\subsection{Higher order convergence}
We are interested in stronger notions of convergence of discrete
functions, taking into account
higher order finite differences. We start by stating the
following elementary result:
\begin{lemma}
\label{lemma:convc1}
Let $\psi_{N_k}\in C^0({\mathcal{G}}^\pm_{N_k}(\Sigma))$ be a sequence of discrete
functions. The finite differences
$\frac{\partial \psi_{N_k}}{\partial \vec
u}$ (resp. $\frac{\partial \psi_{N_k}}{\partial \vec
v}$) converge if, and only if,
the finite differences
$\frac{\partial \psi_{N_k}}{\partial \cev
u}$ (resp. $\frac{\partial \psi_{N_k}}{\partial \cev
v}$)
converge. If they converge, they have the same limits:
$$
\lim \frac{\partial \psi_{N_k}}{\partial \vec
u} = \lim \frac{\partial \psi_{N_k}}{\partial \cev
u}, \quad \lim \frac{\partial \psi_{N_k}}{\partial \vec
v} = \lim \frac{\partial \psi_{N_k}}{\partial \cev
v}.
$$
\end{lemma}
\begin{proof}
This follows
from the fact that finite
differences in the forward and retrograde directions are related by the
translations $T_u$, or $T_v$, spanning the lattice $\Lambda^{ch}_N$,
thanks to Formulae~\eqref{eq:fd3} and \eqref{eq:fd4}.
\end{proof}
\begin{rmk}
According to the above lemma, one can talk about the convergence of
the finite differences of a sequence of discrete functions without
specifying the forward or retrograde direction.
\end{rmk}
\begin{prop}
\label{prop:c1conv}
Let $\psi_{N_k}\in C^0({\mathcal{G}}^\pm_{N_k}(\Sigma))$ be a converging sequence of discrete
functions such that its first order finite differences
converge as well towards the limits
$$ \phi=\lim\psi_{N_k}, \quad \phi_u=\lim \frac{\partial
\psi_{N_k}}{\partial \vec u}
\quad \mbox{ and } \quad \phi_v= \lim
\frac{\partial \psi_{N_k}}{\partial \vec v}.
$$
Then the limit $\phi:\Sigma\to {\mathbb{R}}$ is of class ${\mathcal{C}}^1$ with partial derivatives
given by
$$
\frac{\partial\phi}{\partial u}=\phi_u,\quad \frac{\partial\phi}{\partial v}=\phi_v.
$$
\end{prop}
\begin{proof}
One can readily show that $\phi$ is a primitive function of $\phi_u$
(resp. $\phi_v$) in the
$u$-direction (resp. $v$-direction) using Riemann sums.
The limits $\phi_u$ and $\phi_v$ are continuous by
Proposition~\ref{lemma:cont} and it follows that $\phi$ is continuously differentiable.
\end{proof}
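The Riemann-sum step of the proof above can be sketched as follows; the formula below is only schematic, with $e_u$ a unit vector in the $u$-direction and $h_{N_k}$ the lattice mesh in that direction (whose exact value depends on the normalization of the finite differences fixed earlier). Telescoping along $m$ consecutive translates by $T_u$ gives

```latex
\psi_{N_k}(\mathbf{z}+mT_u)-\psi_{N_k}(\mathbf{z})
  \;=\; h_{N_k}\sum_{j=0}^{m-1}
  \frac{\partial \psi_{N_k}}{\partial \vec u}(\mathbf{z}+jT_u)
  \;\longrightarrow\;
  \int_0^t \phi_u(w+s\,e_u)\,ds,
```

where $\mathbf{z}\to w$ and $m\,h_{N_k}\to t$: the middle term is a Riemann sum of a sequence converging to the continuous function $\phi_u$. Hence $\phi(w+t\,e_u)-\phi(w)=\int_0^t\phi_u(w+s\,e_u)\,ds$, which shows that $\phi$ is a primitive of $\phi_u$ in the $u$-direction.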
Lemma~\ref{lemma:convc1} and Proposition~\ref{prop:c1conv} motivate the following definition:
\begin{dfn}
If a sequence of discrete functions $\psi_{N_j}\in
C^0({\mathcal{G}}^\pm_{N_j}(\Sigma))$ converges together with its
finite differences, up to order $k$, we say that the sequence $(\psi_{N_j})$
converges in the ${\mathcal{C}}^k$-sense toward the function $\phi=\lim
\psi_{N_j}$. We denote
this property by
$$
\psi _{N_j}\stackrel{{\mathcal{C}}^k}\longrightarrow \phi.
$$
If $\psi_{N_j}\in
C^0({\mathcal{G}}_{N_j}(\Sigma))$ is a sequence of discrete functions with
decompositions $\psi_{N_j} =\psi_{N_j}^+ + \psi_{N_j}^-$ and
$\phi^+,\phi^-:\Sigma\to {\mathbb{R}}$ are functions such that
$$
\psi _{N_j}^+ \stackrel{{\mathcal{C}}^k}\longrightarrow \phi^+,
\mbox{ and } \psi _{N_j}^-\stackrel{{\mathcal{C}}^k} \longrightarrow \phi^-,
$$
we say that $\psi_{N_j}$ converges in the weak ${\mathcal{C}}^k$-sense toward
the pair of functions $(\phi^+,\phi^-)$. This property is denoted
$$
\psi _{N_j}\stackrel{{\mathcal{C}}^k_w}\longrightarrow (\phi^+,\phi^-).
$$
\end{dfn}
This definition and Proposition~\ref{prop:c1conv} lead to the
following result:
\begin{prop}
If $\psi_{N_j}\in
C^0({\mathcal{G}}^\pm_{N_j}(\Sigma))$ converges in the ${\mathcal{C}}^k$-sense, the limit $\phi=\lim \psi_{N_j}$ is of class ${\mathcal{C}}^k$. Furthermore the finite differences of $\psi_{N_j}$ converge, up to order $k$, toward the corresponding partial derivatives of $\phi$.
\end{prop}
We may now state an improved version of the Ascoli-Arzela theorem in the
${\mathcal{C}}^k$ setting:
\begin{theo}[Ascoli-Arzela, second version]
\label{theo:ascoli}
Let $\psi_{N_j}$ be a sequence of discrete functions in $C^0({\mathcal{G}}^\pm_{N_j}(\Sigma))$,
which are uniformly bounded in ${\mathcal{C}}^{k,\alpha}$-norm for some $k\geq 0$, in the sense that
there exists a constant $c>0$ with the property that
$$
\|\psi_{N_j}\|_{{\mathcal{C}}^{k,\alpha}}\leq c \quad \mbox{ for all $j\geq 0$.}
$$
Then there exists a subsequence $N_j'$ of $N_j$ and a function $\phi:\Sigma\to
{\mathbb{R}}$ with $\phi\in{\mathcal{C}}^{k,\alpha}(\Sigma)$, such that
$$
\psi_{N'_j}\stackrel{{\mathcal{C}}^{k}}{\longrightarrow} \phi.
$$
\end{theo}
\begin{proof}
We give a sketch of the proof in the case $k=1$. By assumption, the $\psi_{N_j}$ are
uniformly bounded in ${\mathcal{C}}^{1,\alpha}$-norms. Thus the finite
differences of order $1$ are bounded in ${\mathcal{C}}^{0,\alpha}$-norm:
$$
\left \|\frac{\partial\psi_{N_j}}{\partial \vec u}\right \|_{{\mathcal{C}}^{0,\alpha}}\leq c, \quad
\left \|\frac{\partial\psi_{N_j}}{\partial \vec v}\right \|_{{\mathcal{C}}^{0,\alpha}}\leq c,
$$
and we may apply Theorem~\ref{theo:AA1} to
the first order finite differences. After passing to suitable
subsequences, we may assume that
$$
\frac{\partial\psi_{N_j}}{\partial \vec u} \stackrel{{\mathcal{C}}^0}{\longrightarrow}
{\phi_u}, \quad \frac{\partial\psi_{N_j}}{\partial \vec v} \stackrel{{\mathcal{C}}^0}{\longrightarrow}{\phi_v}
$$
where $\phi_u, \phi_v \in {\mathcal{C}}^{0,\alpha}(\Sigma)$.
Since $\psi_{N_j}$ is bounded in ${\mathcal{C}}^{1,\alpha}$-norm, we may apply
Ascoli-Arzela again and assume, up to further extraction, that
$$
\lim \psi_{N_j} = \phi
$$
for some continuous function $\phi$.
The rest of the proof follows from Proposition~\ref{prop:c1conv}.
The general case is proved by induction on $k$.
\end{proof}
\subsection{Examples of discrete convergence}
We present two examples of converging sequences of discrete functions
that will turn out to be useful.
\subsubsection{Samples of continuously differentiable functions}
Corollary~\ref{cor:sample} extends to stronger ${\mathcal{C}}^k$-convergence as
follows:
\begin{prop}
Let $\phi:\Sigma\to {\mathbb{R}}$ be a function of class ${\mathcal{C}}^k$, and
$\phi^\pm_{N} \in C^0({\mathcal{G}}^\pm_{N}(\Sigma))$ its samples.
Then
$$
\phi_{N}^\pm \stackrel{{\mathcal{C}}^k}\longrightarrow \phi.
$$
\end{prop}
\begin{proof}
The Taylor formula ensures that the finite differences of $\phi^\pm_N$ converge
uniformly to the corresponding partial derivatives of $\phi$. It
follows by Proposition~\ref{prop:c0suf} that, up to order $k$, the finite differences
of $\phi_N^\pm$ converge in the sense of
Definition~\ref{def:convsigma}, which proves the proposition.
\end{proof}
\subsubsection{Discrete tangent vector fields}
We may consider discrete functions with values in ${\mathbb{R}}^m$, or more
precisely ${\mathbb{R}}^{2n}$, rather than real valued functions.
It is an easy exercise to check
that all the notions of convergence of discrete functions and H\"older norms introduced
before extend trivially to this setting.
Given a smooth immersion
$\ell:\Sigma\to{\mathbb{R}}^{2n}$, we shall define a sample $\tau_N\in
C^0(\mathcal{Q}_N(\Sigma)) \otimes {\mathbb{R}}^{2n}$ of $\ell$ in \S\ref{sec:lagquad}.
We will show that the discrete tangent vector fields associated to the
diagonals of the
sample $\tau_N$ converge in Proposition~\ref{prop:convell}.
\section{Perturbation theory for isotropic meshes}
\label{sec:pert}
We keep using the notation of the previous section. Recall that
$\ell:\Sigma\to{\mathbb{R}}^{2n}$ is a smooth isotropic immersion and $\Sigma$ a
surface diffeomorphic to a torus. The surface is endowed with the
pullback metric $g_\Sigma$ and the flat metric $g_\sigma$ related by a
conformal factor $g_\Sigma=\theta g_\sigma$. There is also a family of
flat metrics $g_{\sigma}^N$ induced by the diffeomorphism
$\Phi_N: {\mathbb{R}}^2/\Gamma_N \to \Sigma$. We construct
the various versions of quadrangulations and the checkers
graphs as in~\S\ref{sec:anal}.
\subsection{Isotropic quadrangular meshes}
\label{sec:lagquad}
A quadrangular mesh
$$
\tau\in {\mathscr{M}}_N = C^0(\mathcal{Q}_N(\Sigma))\otimes{\mathbb{R}}^{2n}
$$
associates ${\mathbb{R}}^{2n}$-coordinates to each vertex
of $\mathcal{Q}_N(\Sigma)$. One can define a unique piecewise linear map
$$
\ell_\tau:\Sigma^1_N\to {\mathbb{R}}^{2n}
$$
from the $1$-skeleton $\Sigma_N^1$ of the quadrangulation
$\mathcal{Q}_N(\Sigma)$ into ${\mathbb{R}}^{2n}$, which agrees with $\tau$ at vertices.
Contrary to the case of a triangulation, there
is generally no piecewise linear extension to the $2$-skeleton, that is $\Sigma$. Indeed,
quadrilaterals of ${\mathbb{R}}^{2n}$ may not be planar.
There are several options to construct extensions of $\ell_\tau
:\Sigma^1_N\to {\mathbb{R}}^{2n}$ to
$\Sigma$, but this is not a fundamental issue as we shall see.
\begin{dfn}
\label{dfn:isotropicquad}
A Euclidean quadrilateral of ${\mathbb{R}}^{2n}$ is said to be isotropic if
the integral of the Liouville form $\lambda$ along the quadrilateral
vanishes.
Similarly, a mesh $\tau\in{\mathscr{M}}_N$ is called isotropic
if the quadrilaterals of ${\mathbb{R}}^{2n}$ associated to each face of $\mathcal{Q}_N(\Sigma)$ via
$\tau$ are isotropic in the above sense.
The space ${\mathscr{L}}_N\subset {\mathscr{M}}_N$ is the set of all isotropic
quadrangular meshes $\tau\in{\mathscr{M}}_N$.
\end{dfn}
\subsubsection{Equation for isotropic quadrilaterals}
An oriented quadrilateral of ${\mathbb{R}}^{2n}$ can be given by $4$ ordered vertices
$(A_0,$ $A_1,$ $A_2,$ $A_3)$. We introduce the diagonals of the
quadrilateral
\begin{equation}
\label{eq:notdiag}
D_0=\overrightarrow{A_0A_2},\quad D_1=\overrightarrow{A_1A_3}.
\end{equation}
Then we have the following result, which shows that the equation for
an isotropic quadrilateral is quadratic:
\begin{lemma}
\label{lemma:quadiso}
The integral of the Liouville form $\lambda$ along an oriented
quadrilateral $(A_0,\dots,A_3)$ of ${\mathbb{R}}^{2n}$ is given by
$$
\frac 12 \omega(D_0,D_1),
$$
where $D_i$ are the diagonals of the quadrilateral defined by~\eqref{eq:notdiag}.
\end{lemma}
\begin{proof}
We construct a pyramid ${\mathcal{P}}$ with base the quadrilateral $\mathcal{Q}$ and with apex located at the
origin $O\in{\mathbb{R}}^{2n}$, for instance.
By Stokes Theorem
$$
\int_{\mathcal{Q}}\lambda
=\int_{{\mathcal{P}}}\omega. $$
The integral on the RHS is the sum of the symplectic areas of the four
triangles $(OA_iA_{i+1})$, where the index $i$ is considered modulo $4$.
Hence the integral of the Liouville form is given by
$$
\frac
12 \sum_{i=0}^3\omega(\overrightarrow{OA_i},\overrightarrow{OA_{i+1}}) = \frac 12 \omega(D_0,D_1).
$$
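Indeed, the last equality is a direct consequence of the bilinearity and antisymmetry of $\omega$: writing $a_i=\overrightarrow{OA_i}$, so that $D_0=a_2-a_0$ and $D_1=a_3-a_1$, we compute

```latex
\sum_{i=0}^{3}\omega(a_i,a_{i+1})
  = \omega(a_0,a_1)-\omega(a_2,a_1)+\omega(a_2,a_3)-\omega(a_0,a_3)
  = \omega(a_0-a_2,\,a_1-a_3)
  = \omega(D_0,D_1),
```

since $\omega(-D_0,-D_1)=\omega(D_0,D_1)$; dividing by $2$ yields the claimed formula.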
\end{proof}
\subsubsection{Diagonals notation}
\label{sec:diag1}
For $\tau\in{\mathscr{M}}_N$, we consider the lift $\tilde \tau = \tau \circ
p_N \in C^0(\mathcal{Q}_N({\mathbb{R}}^2))\otimes{\mathbb{R}}^{2n}$.
Using the notations of \S\ref{sec:quadnot}, we define the diagonals
$$
D^u_\tau,D^v_\tau \in C^2(\mathcal{Q}_N({\mathbb{R}}^2))\otimes {\mathbb{R}}^{2n}
$$
by
$$
D^{u}_\tau(\mathbf{f}_{kl})= \tilde \tau (\mathbf{v}_{k+1,l+1}) - \tilde \tau(\mathbf{v}_{kl})
$$
and
$$
D^{v}_\tau (\mathbf{f}_{kl})= \tilde \tau(\mathbf{v}_{k,l+1}) - \tilde \tau( \mathbf{v}_{k+1,l}).
$$
Then, $D^u_\tau$ and $D^v_\tau$ descend to the quotient $\Sigma$ and provide
discrete vector fields denoted in the same way
$$
D^u_\tau,D^v_\tau\in C^2(\mathcal{Q}_N(\Sigma))\otimes {\mathbb{R}}^{2n} \simeq C^0({\mathcal{G}}_N(\Sigma))\otimes {\mathbb{R}}^{2n}.
$$
By definition $D^u_\tau$ and $D^v_\tau$ represent certain diagonals of each face
of the quadrangular mesh $\tau$. It is also convenient to introduce
the renormalized discrete vector fields
$$
{\mathscr{U}}_\tau=\frac N{\sqrt 2}D^u_\tau \quad \mbox {and } \quad {\mathscr{V}}_\tau =\frac N{\sqrt 2}D^v_\tau.
$$
\subsubsection{Equation for isotropic mesh}
The problem of finding isotropic meshes can be formulated
using a suitable equation.
Each $\tau\in {\mathscr{M}}_N$ and each
face $\mathbf{f}\in {\mathfrak{C}}_2(\mathcal{Q}_N(\Sigma))$ defines a Euclidean quadrilateral in
${\mathbb{R}}^{2n}$, given by the ${\mathbb{R}}^{2n}$-coordinates of the ordered vertices of
$\mathbf{f}$. Such a
quadrilateral has a symplectic area defined by the
integral of the Liouville form $\lambda$ along the quadrilateral.
We can pack this data
into a map
$$
\mu_N : {\mathscr{M}}_N \longrightarrow C^2(\mathcal{Q}_N(\Sigma))
$$
such that $\ip{\mu_N(\tau),\mathbf{f}}$ is the symplectic area of the
corresponding quadrilateral.
The space of isotropic meshes ${\mathscr{L}}_N$
is by definition the set of solutions of the
equation $\mu_N=0$. In other words
$$
{\mathscr{M}}_N\supset {\mathscr{L}}_N=\mu_N^{-1}(0).
$$
For analytical reasons, it will be convenient to introduce a
renormalized version of $\mu_N$, defined by
$$
\mu_N^r = N^2 \mu_N.
$$
\begin{rmk}
Given $\tau\in{\mathscr{M}}_N$ and $\mathbf{f}\in{\mathfrak{C}}_2(\mathcal{Q}_N(\Sigma))$, the real number $\ip{\mu^r_N(\tau),\mathbf{f}}$ is the
ratio between the symplectic area $\ip{\mu_N(\tau),\mathbf{f}}$ and the Euclidean area
of $\mathbf{f}$ with respect to the metric $g_\sigma^N$, which is
$$
\mathrm{Area}(\mathbf{f},g_\sigma^ N)=\frac 1{N^ 2}.
$$
In this sense $\mu_N^r$ can be regarded as a discrete version of the
moment map $\mu(\ell)=\frac{\ell^*\omega}{\sigma}$ introduced at
\S\ref{sec:dream} and $\ip{\mu_N^r(\tau),\mathbf{f}}$ as the symplectic density
of the face~$\mathbf{f}$ with respect to $\tau$.
\end{rmk}
The space of isotropic meshes ${\mathscr{L}}_N$ is the zero set of
$\mu_N^r$. This subspace is defined by a system of quadratic
polynomials as shown by the following lemma.
\begin{lemma}
\label{lemma:quadratic}
The map $\mu_N : {\mathscr{M}}_N \to C^2(\mathcal{Q}_N(\Sigma))$ is quadratic.
More precisely, we have
\begin{equation}
\label{eq:sympareaquad}
\ip{\mu_N(\tau),\mathbf{f}} = \frac 12 \omega(D^u_\tau(\mathbf{f}),D^v_\tau(\mathbf{f}))
\end{equation}
and
\begin{equation}
\label{eq:notdiagrenq}
\ip{\mu_N^r(\tau),\mathbf{f}} = \omega({\mathscr{U}}_\tau(\mathbf{f}),{\mathscr{V}}_\tau(\mathbf{f})).
\end{equation}
\end{lemma}
\begin{proof}
This is an immediate consequence of Lemma~\ref{lemma:quadiso}.
\end{proof}
\begin{dfn}
\label{dfn:quadratic}
Since $\mu_N:{\mathscr{M}}_N\to C^2(\mathcal{Q}_N(\Sigma))$ is a quadratic map, it is
associated to a unique symmetric bilinear map
$$\Psi_N : {\mathscr{M}}_N \times {\mathscr{M}}_N \to C^2(\mathcal{Q}_N(\Sigma)).
$$
Similarly, $\Psi^r_N$ is the symmetric bilinear map associated to
the quadratic map~$\mu^r_N$.
\end{dfn}
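The polarization defining $\Psi_N$ can be made concrete on a single face. The sketch below (hypothetical helper names; same coordinate conventions as the previous sketch) checks on one quadrilateral that $\Psi(\tau,\tau')=\frac12\left(\mu(\tau+\tau')-\mu(\tau)-\mu(\tau')\right)$ is symmetric, linear in each argument, and satisfies $\Psi(\tau,\tau)=\mu(\tau)$.

```python
import random

def omega(u, v):
    # standard symplectic form on R^4, coordinates ordered (q1, p1, q2, p2)
    return sum(u[2*i+1] * v[2*i] - u[2*i] * v[2*i+1] for i in range(len(u) // 2))

def mu(T):
    # symplectic area of one quadrilateral: a model for <mu_N(tau), f>
    D0 = [x - y for x, y in zip(T[2], T[0])]
    D1 = [x - y for x, y in zip(T[3], T[1])]
    return 0.5 * omega(D0, D1)

def Psi(T1, T2):
    # polarization of the quadratic map mu
    S = [[a + b for a, b in zip(T1[i], T2[i])] for i in range(4)]
    return 0.5 * (mu(S) - mu(T1) - mu(T2))

def rand_quad():
    return [[random.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(4)]

random.seed(1)
T1, T2, T3 = rand_quad(), rand_quad(), rand_quad()
comb = [[2 * a + 3 * b for a, b in zip(T1[i], T2[i])] for i in range(4)]
assert abs(Psi(T1, T2) - Psi(T2, T1)) < 1e-10                            # symmetry
assert abs(Psi(comb, T3) - (2 * Psi(T1, T3) + 3 * Psi(T2, T3))) < 1e-10  # linearity
assert abs(Psi(T1, T1) - mu(T1)) < 1e-10                                 # Psi(t,t) = mu(t)
```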
\subsection{Shear action on meshes}
\label{sec:shear}
The space ${\mathscr{M}}_N$ admits an obvious action induced by the
translations of ${\mathbb{R}}^{2n}$, which preserves the subspace of
isotropic meshes ${\mathscr{L}}_N$.
However, translations belong to a larger group acting on ${\mathscr{M}}_N$ and preserving
isotropic meshes, which we define below.
The space of vertices of $\mathcal{Q}_N({\mathbb{R}}^2)$ admits a splitting similar to
the one for faces. Indeed, $\Lambda_N^{ch}$ acts on the vertices, with exactly two
orbits denoted
${\mathfrak{C}}^+_0({\mathbb{R}}^2)$ and ${\mathfrak{C}}_0^-({\mathbb{R}}^2)$, with the convention that $\mathbf{v}_{00}\in {\mathfrak{C}}^+_0({\mathbb{R}}^2)$.
This splitting descends to the quotient via $p_N:{\mathbb{R}}^2\to\Sigma$,
where we have two sets of vertices (cf. Figure~\ref{figure:shear} for
a picture)
$$
{\mathfrak{C}}_0(\Sigma) = {\mathfrak{C}}^+_0(\Sigma)\cup {\mathfrak{C}}_0^-(\Sigma).
$$
For any mesh $\tau\in{\mathscr{M}}_N$ and vector $T=(T_+,T_-)\in{\mathbb{R}}^{2n}\times {\mathbb{R}}^{2n}$, we define the action of $T$ on $\tau$ by
$$
\ip {T\cdot\tau , \mathbf{v}}=
\left \{
\begin{array}{ll}
\ip{\tau,\mathbf{v}}+T_+ &\mbox { if } \mathbf{v}\in {\mathfrak{C}}^+_0(\Sigma) \\
\ip{\tau,\mathbf{v}}+T_- &\mbox { if } \mathbf{v}\in {\mathfrak{C}}^-_0(\Sigma)
\end{array}
\right .
$$
The above action of ${\mathbb{R}}^{2n}\times{\mathbb{R}}^{2n}$ on ${\mathscr{M}}_N$ is called
the \emph{shear action}. If $T_+=T_-$, the action of $T$ is the usual
action by translations mentioned earlier. However, the shear action
$\tau\mapsto T\cdot\tau$
by a vector of the form $T=(T_+,0)$ pulls apart the positive and negative
vertices of $\tau$.
Nevertheless, the shear action preserves isotropic meshes:
\begin{prop}
The space of isotropic meshes ${\mathscr{L}}_N\subset {\mathscr{M}}_N$ is invariant under the shear action.
\end{prop}
\begin{proof}
The diagonals of the quadrilaterals associated to some mesh $\tau$ are
invariant under the shear action. In particular, any isotropic mesh
remains isotropic under the shear action by Lemma~\ref{lemma:quadiso}.
\end{proof}
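The proof can be illustrated numerically. In the model below (an $N\times N$ periodic grid with axis-aligned faces standing in for $\mathcal{Q}_N(\Sigma)$; helper names are ours), both diagonals of every face join vertices of the same parity class, so the face-by-face symplectic areas are unchanged by an arbitrary shear.

```python
import random

def omega(u, v):
    # standard symplectic form on R^4, coordinates ordered (q1, p1, q2, p2)
    return sum(u[2*i+1] * v[2*i] - u[2*i] * v[2*i+1] for i in range(len(u) // 2))

N = 8
random.seed(1)
# a random periodic mesh: one point of R^4 per vertex v_{kl}
tau = {(k, l): [random.uniform(-1.0, 1.0) for _ in range(4)]
       for k in range(N) for l in range(N)}

def face_area(mesh, k, l):
    # symplectic area of the face f_{kl}: (1/2) omega(D^u, D^v)
    Du = [a - b for a, b in zip(mesh[((k+1) % N, (l+1) % N)], mesh[(k, l)])]
    Dv = [a - b for a, b in zip(mesh[(k, (l+1) % N)], mesh[((k+1) % N, l)])]
    return 0.5 * omega(Du, Dv)

def shear(mesh, T_plus, T_minus):
    # translate the vertices with k + l even by T_plus, the others by T_minus
    return {(k, l): [x + (T_plus if (k + l) % 2 == 0 else T_minus)[i]
                     for i, x in enumerate(p)]
            for (k, l), p in mesh.items()}

sheared = shear(tau, [0.7, -0.3, 0.1, 0.5], [0.0, 0.0, 0.0, 0.0])
assert all(abs(face_area(tau, k, l) - face_area(sheared, k, l)) < 1e-12
           for k in range(N) for l in range(N))
```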
\begin{rmk}
The shear symmetry shows that the space of isotropic quadrangular meshes ${\mathscr{L}}_N =(\mu^r_N)^{-1}(0)$ does not become more regular as $N\to\infty$ in a naive sense.
Intuitively, if $\tau$ is isotropic and close to a
smooth immersed surface (in some $C^1$-sense), the isotropic mesh
$(T_+,0)\cdot\tau$ now looks wild
(cf. Figure~\ref{figure:shear}), even more so as the step size of the
quadrangulation goes to $0$.
This explains why the Schauder estimates for discrete elliptic operators
involve only the
weak Hölder norms introduced at \S\ref{sec:dhn}. In turn,
Theorem~\ref{theo:maindiscr} and Theorem~\ref{theo:mainquad} are only
stated with ${\mathcal{C}}^0$-norms.
On the contrary, one could argue that the shear symmetry can be used to improve regularity rather than destroy it. Indeed, it is possible to obtain good strong ${\mathcal{C}}^1$-estimates in Theorem~\ref{theo:mainquad} at one quadrilateral of the quadrangular mesh $\rho_N$, modulo the shear action. Unfortunately, it seems unlikely that one could pass in general from such a local estimate to a global strong ${\mathcal{C}}^1$-estimate.
\end{rmk}
\begin{figure}[H]
\begin{pspicture}[showgrid=false](-2,-2)(3,2)
\psline(-2,-2) (-1,1)
\psline(-1,-2) (0,1)
\psline(0,-2) (1,1)
\psline(1,-2) (2,1)
\psline(-2,-2)(1,-2)
\psline(-1.66,-1)(1.33,-1)
\psline(-1.33,0)(1.66,0)
\psline(-1,1)(2,1)
\psset{linecolor=red, fillstyle=solid , fillcolor=red, linestyle=solid}
\pscircle (-2,-2){.1}
\pscircle (0,-2){.1}
\pscircle (-0.66,-1){.1}
\pscircle (1.33,-1){.1}
\pscircle (0.66,0){.1}
\pscircle (-1.33,0){.1}
\pscircle (0,1){.1}
\pscircle (2,1){.1}
\psset{linecolor=blue, fillstyle=solid , fillcolor=blue, linestyle=solid}
\pscircle (-1,-2){.1}
\pscircle (1,-2){.1}
\pscircle (-1.66,-1){.1}
\pscircle (0.33,-1){.1}
\pscircle (-0.33,0){.1}
\pscircle (1.66,0){.1}
\pscircle (-1,1){.1}
\pscircle (1,1){.1}
\psline{->}(-1,1)(-1,2)
\end{pspicture}
\begin{pspicture}[showgrid=false](-2,-2)(3,2)
\psline(-2,-2)(-1.66,0)(-1.33,0) (-1,2)
\psline(-1,-1) (-0.66,-1)(-0.33,1)(0,1)
\psline(0,-2) (0.33,0)(0.66,0) (1,2)
\psline(1,-1)(1.33,-1)(1.66,1) (2,1)
\psline(-2,-2)(-1,-1)(0,-2)(1,-1)
\psline(-1.66,0)(-.66,-1)(.33,0)(1.33,-1)
\psline(-1.33,0)(-0.33,1)(0.66,0)(1.66,1)
\psline(-1,2)(-0,1)(1,2)(2,1)
\psset{linecolor=red, fillstyle=solid , fillcolor=red, linestyle=solid}
\pscircle (-2,-2){.1}
\pscircle (0,-2){.1}
\pscircle (-0.66,-1){.1}
\pscircle (1.33,-1){.1}
\pscircle (0.66,0){.1}
\pscircle (-1.33,0){.1}
\pscircle (0,1){.1}
\pscircle (2,1){.1}
\psset{linecolor=blue, fillstyle=solid , fillcolor=blue, linestyle=solid}
\pscircle (-1,-1){.1}
\pscircle (1,-1){.1}
\pscircle (-1.66,0){.1}
\pscircle (0.33,0){.1}
\pscircle (-0.33,1){.1}
\pscircle (1.66,1){.1}
\pscircle (-1,2){.1}
\pscircle (1,2){.1}
\end{pspicture}
\caption{Shear action on the blue vertices of a mesh}
\label{figure:shear}
\end{figure}
\begin{rmk}
We will seldom mention the shear action in the sequel. However, this action
will be crucial at
\S\ref{sec:quadtri} to obtain more generic isotropic
quadrangular meshes.
\end{rmk}
\subsection{Meshes obtained by sampling}
\label{sec:quadsamp}
Given a smooth immersion $\ell:\Sigma\to {\mathbb{R}}^{2n}$, we construct a
canonical sequence of approximations of $\ell$ by quadrangular meshes
$$\tau_N\in
{\mathscr{M}}_N.
$$
The map $\ell:\Sigma \to {\mathbb{R}}^{2n}$ can be restricted to the
vertices of $\mathcal{Q}_N(\Sigma)$. Hence we may define an element
$\tau_N\in {\mathscr{M}}_N$, called \emph{a sample
of $\ell$}, by
$$
\tau_N(\mathbf{v})=\ell(\mathbf{v})
$$
for each $\mathbf{v}\in {\mathfrak{C}}_0(\mathcal{Q}_N(\Sigma))$.
We would like to discuss more precisely the nature of the convergence
of $\tau_N$ towards $\ell$, in the spirit of \S\ref{sec:anal}. This is
possible at the cost of extending all the analysis introduced at
\S\ref{sec:anal} for discrete functions on the faces of $\mathcal{Q}_N(\Sigma)$ to
the case of functions defined at the vertices. Instead of carrying out this
straightforward but lengthy work, we adopt a more direct
approach here.
For the special case $\tau=\tau_N$, where the meshes $\tau_N$ are the samples of
an immersion $\ell:\Sigma\to{\mathbb{R}}^{2n}$, the diagonals
$D_\tau^u$, $D_\tau^v$, ${\mathscr{U}}_\tau$ and ${\mathscr{V}}_\tau$ are denoted $D_N^u,
D_N^v, {\mathscr{U}}_N$ and ${\mathscr{V}}_N$ instead.
Then we have the following result:
\begin{prop}
\label{prop:convell}
The sequence of discrete vector fields ${\mathscr{U}}_N^\pm$ and ${\mathscr{V}}_N^\pm
\in C^0({\mathcal{G}}_N^\pm(\Sigma))\otimes{\mathbb{R}}^{2n}$
converge in the ${\mathcal{C}}^k$-sense, for every $k$. Furthermore
$$
{\mathscr{U}}_N^\pm\stackrel {{\mathcal{C}}^k}\longrightarrow \frac{\partial\ell}{\partial u}
\quad \mbox{ and } \quad
{\mathscr{V}}_N^\pm\stackrel {{\mathcal{C}}^k}\longrightarrow \frac{\partial\ell}{\partial v}.
$$
More precisely, if we denote by ${\mathscr{U}}_N'$ (resp. ${\mathscr{V}}_N'$) the
samples of $\frac{\partial\ell}{\partial u}$ (resp. $\frac{\partial\ell}{\partial v}$),
then
$$
\|{\mathscr{U}}_N-{\mathscr{U}}_N'\|_{{\mathcal{C}}^{k}_w}={\mathcal{O}}(N^{-1}) \mbox{ and } \|{\mathscr{V}}_N-{\mathscr{V}}_N'\|_{{\mathcal{C}}^{k}_w}={\mathcal{O}}(N^{-1}).
$$
\end{prop}
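The convergence can be observed experimentally. The sketch below uses an axis-aligned model grid over the unit square, ignoring the periodic identifications (the immersion and all helper names are ours). With this choice, the renormalized diagonal $(N/\sqrt 2)\,D^u$ converges to the unit directional derivative of $\ell$ along the diagonal direction of the faces, which plays the role of $\partial/\partial u$ in the conventions of the text; for this symmetric sampling the observed rate is in fact $O(N^{-2})$.

```python
import math

def ell(u, v):
    # graph of dS for S(u, v) = sin(u) sin(2 v): an exact isotropic immersion
    # in R^4, coordinates ordered (q1, p1, q2, p2)
    return [u, math.cos(u) * math.sin(2 * v), v, 2 * math.sin(u) * math.cos(2 * v)]

def dell(u, v, hu, hv):
    # directional derivative of ell at (u, v) along (hu, hv)
    du = [1.0, -math.sin(u) * math.sin(2 * v), 0.0, 2 * math.cos(u) * math.cos(2 * v)]
    dv = [0.0, 2 * math.cos(u) * math.cos(2 * v), 1.0, -4 * math.sin(u) * math.sin(2 * v)]
    return [hu * a + hv * b for a, b in zip(du, dv)]

def diag_error(N):
    # sup distance between the renormalized diagonal (N / sqrt 2) D^u of each face
    # and the unit diagonal derivative of ell evaluated at the face center
    r = 1.0 / math.sqrt(2.0)
    worst = 0.0
    for k in range(N):
        for l in range(N):
            Du = [a - b for a, b in
                  zip(ell((k + 1) / N, (l + 1) / N), ell(k / N, l / N))]
            U = [N * r * x for x in Du]
            target = dell((k + 0.5) / N, (l + 0.5) / N, r, r)
            worst = max(worst, max(abs(a - b) for a, b in zip(U, target)))
    return worst

assert diag_error(32) < diag_error(16) / 2   # at least O(1/N) convergence
```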
\subsection{Almost isotropic samples}
The failure of the samples $\tau_N$ to be isotropic is measured by the
sequence of discrete functions
\begin{equation}
\label{eq:etaN}
\eta_N=\mu_N^r(\tau_N) \in C^2(\mathcal{Q}_N(\Sigma)).
\end{equation}
The error
$\eta_N$ is small as $N$ goes to infinity in the sense of the
following proposition:
\begin{prop}
Let $\ell:\Sigma\to{\mathbb{R}}^{2n}$ be a smooth isotropic immersion and
$\tau_N\in{\mathscr{M}}_N$ be the sequence of samples of $\ell$ with respect to the quadrangulations
$\mathcal{Q}_N(\Sigma)$.
Let $\eta_N = \mu^r_N(\tau_N) \in C^2(\mathcal{Q}_N(\Sigma))$ be the isotropic defect
of $\tau_N$.
Then for every integer $k\geq 0$, we have
$$
\|\eta_N \|_{{\mathcal{C}}_w^{k}}={\mathcal{O}}(N^{-1}).
$$
\end{prop}
\begin{proof}
For each face $\mathbf{f}\in {\mathfrak{C}}_2(\mathcal{Q}_N(\Sigma))$, the quantity $\eta_N(\mathbf{f})$
is given by
$$
\eta_N(\mathbf{f})=\frac {N^2}2 \omega(D^u_N(\mathbf{f}),D^v_N(\mathbf{f})) = \omega({\mathscr{U}}_N(\mathbf{f}), {\mathscr{V}}_N(\mathbf{f})).
$$
The formula for discrete differences of a quadratic form and the
${\mathcal{C}}^k$-convergence of Proposition~\ref{prop:convell} prove the proposition.
\end{proof}
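As an illustration, the sketch below samples an explicit isotropic immersion, the graph $\ell(u,v)=(u,\partial_u S,v,\partial_v S)$ of $dS$ for $S(u,v)=\sin u\,\sin 2v$, over the unit square (periodic identifications are ignored; names and conventions are ours). For this symmetric sampling, the observed decay of the sup norm of $\eta_N$ is in fact $O(N^{-2})$, consistent with, and better than, the stated bound.

```python
import math

def omega(u, v):
    # standard symplectic form on R^4, coordinates ordered (q1, p1, q2, p2)
    return sum(u[2*i+1] * v[2*i] - u[2*i] * v[2*i+1] for i in range(len(u) // 2))

def ell(u, v):
    # graph of dS for S(u, v) = sin(u) sin(2 v): an exact isotropic immersion
    return [u, math.cos(u) * math.sin(2 * v), v, 2 * math.sin(u) * math.cos(2 * v)]

def isotropy_defect(N):
    # sup over faces of |eta_N(f)| = |(N^2 / 2) omega(D^u, D^v)|
    worst = 0.0
    for k in range(N):
        for l in range(N):
            p00, p10 = ell(k / N, l / N), ell((k + 1) / N, l / N)
            p01, p11 = ell(k / N, (l + 1) / N), ell((k + 1) / N, (l + 1) / N)
            Du = [a - b for a, b in zip(p11, p00)]
            Dv = [a - b for a, b in zip(p01, p10)]
            worst = max(worst, abs(N**2 / 2 * omega(Du, Dv)))
    return worst

e16, e64 = isotropy_defect(16), isotropy_defect(64)
assert e16 > 0 and e64 < e16 / 4   # decay at least like O(1/N)
```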
\subsection{Inner products}
\label{sec:ip}
The tangent spaces to the space of meshes ${\mathscr{M}}_N$ and the space of
discrete functions come equipped with canonical inner products, which
are crucial for the analysis.
\subsubsection{The case of functions}
The space $C^2(\mathcal{Q}_N(\Sigma))$ of discrete functions comes equipped with a Euclidean inner
product which is a discrete version of the $L^2$-inner product
for smooth functions.
The space $C_2(\mathcal{Q}_N(\Sigma))$ admits a canonical basis,
given by the set of
faces $\mathbf{f} \in {\mathfrak{C}}_2(\mathcal{Q}_N(\Sigma))$. Thus, we have a corresponding
dual basis $\mathbf{f}^*$ of
$C^2(\mathcal{Q}_N(\Sigma))$ defined by
$$\ip{\mathbf{f}^*,\mathbf{f}'}=\left \{
\begin{array}{l}
1 \mbox{ if } \mathbf{f}=\mathbf{f}'\\
0 \mbox{ otherwise}
\end{array}
\right .$$
where $\ip{\cdot,\cdot}$ is the duality bracket.
Recall that the area of a face $\mathbf{f}$ of $\mathcal{Q}_N(\Sigma)$, with respect to the
Riemannian metric $g_{\sigma}^N$, is equal to $N^{-2}$.
The cochain $\mathbf{f}^*$ is understood as the function equal to
$1$ on the face $\mathbf{f}$ and to $0$ on the other faces. This intuition gives
an interpretation of the duality bracket
$$
\ip{\cdot,\cdot} : C^2(\mathcal{Q}_N(\Sigma))\times C_2(\mathcal{Q}_N(\Sigma)) \to {\mathbb{R}}
$$
as the \emph{pointwise} evaluation of functions on faces.
This leads to a discrete analogue
$$\ipp{\cdot,\cdot } : C^2(\mathcal{Q}_N(\Sigma))\times C^2(\mathcal{Q}_N(\Sigma))\to {\mathbb{R}}$$
of the $L^2$-inner
product defined by
$$\ipp{\mathbf{f}^*_{1},\mathbf{f}^*_{2}} =
\left \{
\begin{array}{ll}
0 &\mbox{ if $\mathbf{f}_1\neq \mathbf{f}_2$} \\
\frac 1{N^2} & \mbox{ if $\mathbf{f}_1=\mathbf{f}_2$}
\end{array}
\right . .
$$
The corresponding Euclidean norm on $C^2(\mathcal{Q}_N(\Sigma))$ is
simply denoted $\|\cdot\|$.
Notice that the splitting of $$C^2(\mathcal{Q}_N(\Sigma))\simeq
C^0({\mathcal{G}}_N(\Sigma))= C^0({\mathcal{G}}_N^+(\Sigma)) \oplus C^0({\mathcal{G}}^-_N(\Sigma))$$
is orthogonal for $\ipp{\cdot,\cdot }$.
By construction, we have the following result:
\begin{prop}
Let $\psi_{N_k}^\pm \in C^0({\mathcal{G}}_{N_k}^\pm(\Sigma))$ be converging sequences
of discrete functions with $\lim\psi^\pm_{N_k}=\phi ^\pm$. Then
$$
\lim \|\psi_{N_k}^\pm\|^2 = \frac 12 \|\phi^\pm\|^2_{L^2}
$$
where $\|\phi^\pm\|_{L^2}$ is the $L^2$-norm of $\phi^\pm$ with respect to
the Riemannian \emph{flat} metric $g_\sigma$. In particular if both sequences
converge and $\phi^+=\phi^-=\phi$, then $\lim \|\psi_{N_k} \|^2 = \|\phi
\|^2_{L^2}$.
\end{prop}
\begin{proof}
Let $\phi^\pm_N$ be the sequence of samples of $\phi^\pm$. Then
$\|\phi_N^\pm \|^2$ is understood as a Riemann sum for the integral
$\|\phi^\pm\|^2_{L^2}$. Compared to a usual Riemann sum, we are
throwing away half of the faces of the subdivision, and we have
$$
\lim \|\phi_N^\pm\|^2 = \frac 12 \|\phi^\pm\|^2_{L^2}.
$$
Using the ${\mathcal{C}}^0$-convergence of $\psi_N^\pm$ and Proposition~\ref{prop:c0nec}, we
deduce that
$$\lim \|\phi_N^\pm - \psi_N^\pm\|^2 = 0,$$
and the proposition follows by the triangle inequality.
\end{proof}
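The factor $\frac12$ simply reflects the fact that one checkerboard class contains half of the faces, each of $g_\sigma^N$-area $N^{-2}$. A quick numerical illustration (the test function and the labeling of the classes are ours):

```python
import math

def phi(u, v):
    # a smooth test function on the flat torus; its L^2 norm squared is
    # the integral of phi^2, namely 4 + 1/4 = 4.25 (computed by hand)
    return 2.0 + math.sin(2 * math.pi * u) * math.cos(2 * math.pi * v)

def half_faces_norm_sq(N):
    # ||phi_N^+||^2: phi sampled at the centers of the faces of one checkerboard
    # class (k + l even), each face weighted by its g_sigma^N-area 1 / N^2
    return sum(phi((k + 0.5) / N, (l + 0.5) / N) ** 2 / N**2
               for k in range(N) for l in range(N) if (k + l) % 2 == 0)

# the half-sum converges to (1/2) ||phi||_{L^2}^2; for this trigonometric
# polynomial and even N the midpoint sum is exact up to rounding
assert abs(half_faces_norm_sq(64) - 0.5 * 4.25) < 1e-9
```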
\subsubsection{The case of vector fields}
The space $T_\tau{\mathscr{M}}_N$ consists of tangent vectors $V\in
C^0(\mathcal{Q}_N(\Sigma))\otimes {\mathbb{R}}^{2n}$. Here $V$ is understood as a
family of vectors,
given at each vertex $\mathbf{v}$ of $\mathcal{Q}_N(\Sigma)$ by
$V(\mathbf{v})=\ip{V,\mathbf{v}}\in {\mathbb{R}}^{2n}$.
We deduce a Euclidean inner product on $C^0(\mathcal{Q}_N(\Sigma))\otimes{\mathbb{R}}^{2n}$,
defined by
$$
\ipp{V,V'} = \frac 1{N^2} \sum_{\mathbf{v} \in {\mathfrak{C}}_0(\mathcal{Q}_N(\Sigma))} g(V(\mathbf{v}),V'(\mathbf{v})).
$$
The corresponding Euclidean norm is also denoted $\|\cdot\|$.
\subsection{Linearized equations}
Recall that the moduli space of quadrangular meshes ${\mathscr{M}}_N$ is in fact
the vector space
$C^0(\mathcal{Q}_N(\Sigma))\otimes {\mathbb{R}}^{2n}$.
So for $\tau\in {\mathscr{M}}_N$, the tangent space at
$\tau$ is identified to
$$
T_\tau{\mathscr{M}}_N = C^0(\mathcal{Q}_N(\Sigma))\otimes {\mathbb{R}}^{2n} = {\mathscr{M}}_N.
$$
Hence a tangent vector at $\tau$ is identified to a family of vectors
of ${\mathbb{R}}^{2n}$ defined
at each vertex of the quadrangulation.
The differential of $\mu_N^r:{\mathscr{M}}_N\to C^2(\mathcal{Q}_N(\Sigma))$ at $\tau$,
which is a linear map denoted
$$
D\mu^r_N|_\tau : T_\tau {\mathscr{M}}_N \to C^2(\mathcal{Q}_N(\Sigma)),
$$
is readily computed. Formally, we have
$$
D\mu^r_N|_\tau \cdot V = 2\Psi^r_N(\tau,V),
$$
where $\Psi^r_N$ is the symmetric bilinear map associated to the
quadratic map $\mu_N^r$.
For a more explicit formula, we merely need to compute the variation of the
symplectic area of a
quadrilateral in ${\mathbb{R}}^{2n}$, which is being deformed by moving its
vertices. Let $V\in T_\tau{\mathscr{M}}_N$ be a discrete vector field. We define a path of meshes by
$$
\tau_t = \tau+ tV, \mbox{ for } t\in {\mathbb{R}}.
$$
We would like to express the variation of $\mu_N^r$ along $\tau_t$. In order to state a result, we need some additional notations.
\subsubsection{Other diagonal notations}
We introduced the diagonals $D^u_\tau$ and $D^v_\tau$ at
\S\ref{sec:diag1}. We now need a slightly different indexing in order to
have a simple expression of the differential of $\mu_N^r$.
We denote by $\mathbf{f}_{kl}$, for $k,l\in{\mathbb{Z}}$, the faces of $\mathcal{Q}_N({\mathbb{R}}^2)$. Their images under
the projection $p_N$ are still denoted $\mathbf{f}_{kl} \in
{\mathfrak{C}}_2(\mathcal{Q}_N(\Sigma))$. Similarly, we denote by $\mathbf{v}_{kl}$ the vertices
of $\mathcal{Q}_N({\mathbb{R}}^2)$ and their images by $p_N$ as vertices of
$\mathcal{Q}_N(\Sigma)$.
Let
$V\in T_\tau{\mathscr{M}}_N = C^0(\mathcal{Q}_N(\Sigma))\otimes{\mathbb{R}}^{2n}$ be a vector given
as family of vectors
$$V_{kl} = \ip{V,\mathbf{v}_{kl}}\in
{\mathbb{R}}^{2n}.$$
We define a deformation of the mesh
$\tau$ by $\tau_t =\tau+tV$, or in coordinates
$$\ip{\tau_t,\mathbf{v}_{kl}} = \ip{\tau,\mathbf{v}_{kl}}+
t V_{kl}.$$
Let $\tau\in{\mathscr{M}}_N$, let $\mathbf{f}\in {\mathfrak{C}}_2(\mathcal{Q}_N({\mathbb{R}}^2))$ and let
$\mathbf{v}\in{\mathfrak{C}}_0(\mathcal{Q}_N({\mathbb{R}}^2))$ be one of the vertices of $\mathbf{f}$. We enumerate
the vertices $(\mathbf{v}_0,\mathbf{v}_1,\mathbf{v}_2,\mathbf{v}_3)$ of $\mathbf{f}$ consistently with
the orientation and such that $\mathbf{v}_0=\mathbf{v}$.
The diagonals are defined by
$$D^\tau_{\mathbf{v},\mathbf{f}} = \tau(\mathbf{v}_3) - \tau(\mathbf{v}_1) \in {\mathbb{R}}^{2n}
$$
and if $\mathbf{v}$ is not a vertex of $\mathbf{f}$, we put
$D^\tau_{\mathbf{v},\mathbf{f}} =0$. Figure~\ref{figure:diagonal} shows a
diagrammatic representation of the above construction, with
orientation conventions.
\begin{figure}[H]
\begin{pspicture}[showgrid=false](-2,-1)(2,1.3)
\psset{linestyle=none, fillstyle=solid,
fillcolor=lightgray}
\psline(-1,1) (1,-1) (1,-1) (1,1) (1,1) (-1,1)
\psset{linecolor=black,linestyle=solid, fillstyle=none, arrowsize=.3}
\psline{->}(-1,1) (-1,-1)
\psline{->}(-1,-1) (1,-1)
\psline{->}(1,-1) (1,1)
\psline{->}(1,1) (-1,1)
\psset{fillstyle=solid,fillcolor=black}
\pscircle(1,1){.1}
\psset{linecolor=red}
\psline{->}(-1,1)(1,-1)
\rput(1.3,1.3){$\tau(\mathbf{v})$}
\color{red}
\rput(-.1,-.4){$D^\tau_{\mathbf{v},\mathbf{f}}$}
\end{pspicture}
\caption{A face $\mathbf{f}$ of a mesh $\tau$ with one diagonal and orientations}
\label{figure:diagonal}
\end{figure}
\begin{notation}
The vector $D^\tau_{\mathbf{v},\mathbf{f}}\in {\mathbb{R}}^{2n}$ is called the diagonal opposite
to $\mathbf{v}$ of the face $\mathbf{f}$ with respect to $\tau$.
\end{notation}
With these notations, we have the following expression for the
variation of the symplectic area:
\begin{lemma}
\label{lemma:diffphi}
$$
\frac d{dt}\ip {\mu_N(\tau_t),\mathbf{f}}|_{t=0} = - \frac 12\sum_{\mathbf{v}\in {\mathfrak{C}}_0(\mathcal{Q}_N(\Sigma))}\omega(V(\mathbf{v}),D^\tau_{\mathbf{v},\mathbf{f}}) .
$$
\end{lemma}
\begin{proof}
We use the ordered vertices $(A_0,A_1,A_2,A_3)$ of an oriented quadrilateral in
${\mathbb{R}}^{2n}$ and consider
a variation $(A^t_0,A^t_1,A^t_2,A^t_3)= (A_0,A_1,A_2,A_3)+ t
(V_0,V_1,V_2,V_3)$. We denote by $D_0^t$ and $D_1^t$ the diagonals
of the deformed quadrilateral.
By Lemma~\ref{lemma:quadiso}, its symplectic area is
$$
\frac 12\omega(D_0^t,D_1^t).
$$
Hence, the variation of
symplectic area at $t=0$ is given by
$$
\frac 12 \left ( - \omega (V_0,D_1) + \omega (V_1,D_0) + \omega
(V_2,D_1) - \omega (V_3,D_0) \right ).
$$
Using our conventions for the diagonals of quadrilaterals, this proves
the lemma.
\end{proof}
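Since the symplectic area is a quadratic function of the vertices, a centered finite difference recovers its derivative exactly, up to rounding errors, which gives a direct check of Lemma~\ref{lemma:diffphi}. A sketch with our conventions:

```python
import random

def omega(u, v):
    # standard symplectic form on R^4, coordinates ordered (q1, p1, q2, p2)
    return sum(u[2*i+1] * v[2*i] - u[2*i] * v[2*i+1] for i in range(len(u) // 2))

def area(Q):
    # symplectic area of the quadrilateral Q = (A0, A1, A2, A3)
    D0 = [x - y for x, y in zip(Q[2], Q[0])]
    D1 = [x - y for x, y in zip(Q[3], Q[1])]
    return 0.5 * omega(D0, D1)

random.seed(2)
A = [[random.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(4)]
V = [[random.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(4)]

def moved(t):
    return [[a + t * w for a, w in zip(A[i], V[i])] for i in range(4)]

# D_{v_i, f} = A_{i+3} - A_{i+1}: the diagonal opposite to the vertex v_i
D_opp = [[x - y for x, y in zip(A[(i + 3) % 4], A[(i + 1) % 4])] for i in range(4)]
lemma = -0.5 * sum(omega(V[i], D_opp[i]) for i in range(4))

h = 1e-3  # the area is quadratic in t, so a centered difference is exact up to rounding
fd = (area(moved(h)) - area(moved(-h))) / (2 * h)
assert abs(fd - lemma) < 1e-9
```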
\subsubsection{Computation of the discrete Laplacian}
Any discrete vector field $V\in T_{\tau}{\mathscr{M}}_N$ is given by a
family of vectors
$$
V_{\mathbf{v}} = \ip{V,\mathbf{v}} \in{\mathbb{R}}^{2n}.
$$
The almost complex structure $J$ of ${\mathbb{R}}^{2n}\simeq{\mathbb{C}}^n$ induces a canonical action on
$T_\tau{\mathscr{M}}_N=C^0(\mathcal{Q}_N(\Sigma))\otimes{\mathbb{R}}^{2n}$ that can be expressed as
$$(JV)_{\mathbf{v}}:=J(V_{\mathbf{v}}).$$
Recall that the Euclidean metric $g$ and the symplectic form $\omega$
of ${\mathbb{R}}^{2n}$ are related by the formula
$$
\forall u_1,u_2\in{\mathbb{R}}^{2n}, \quad \omega(u_1,u_2) = g(Ju_1,u_2).
$$
According to Lemma~\ref{lemma:diffphi}, the differential of
$\mu_N$ at $\tau$ satisfies
$$
\ip {D\mu_N|_{\tau}\cdot V,\mathbf{f}} =-\frac 12 \sum_{\mathbf{v}} \omega (V(\mathbf{v}),D^\tau_{\mathbf{v},\mathbf{f}}) .
$$
Hence
$$
\ip {D\mu_N|_{\tau}\cdot JV,\mathbf{f}} =\frac 12 \sum_{\mathbf{v}} g(V(\mathbf{v}),D^\tau_{\mathbf{v},\mathbf{f}}) .
$$
In turn, we have
\begin{equation}
\label{eq:DJ}
\ip {D\mu^r_{N}|_{\tau}\cdot JV,\mathbf{f}} = \frac {N^2}{2}\sum_{\mathbf{v}}
g(V(\mathbf{v}),D^\tau_{\mathbf{v},\mathbf{f}}).
\end{equation}
We introduce the operator (notice the analogy with Formula~\eqref{eq:deltaf})
\begin{equation}
\label{eq:deltatau}
\boxed{\delta_\tau= D\mu^r_{N}|_{\tau}\circ J,}
\end{equation}
so that Formula~\eqref{eq:DJ} reads
\begin{equation}
\label{eq:deltatauB}
\ip {\delta_\tau V , \sum_{\mathbf{f}} \phi(\mathbf{f})\mathbf{f}}
= \frac {N^2}2\sum_{\mathbf{v},\mathbf{f}}
\phi(\mathbf{f}) g(V(\mathbf{v}),D^\tau_{\mathbf{v},\mathbf{f}}) .
\end{equation}
or, equivalently,
\begin{equation}
\label{eq:deltatauB2}
\delta_\tau V =
\frac {N^2}2\sum_{\mathbf{v},\mathbf{f}}
g(V(\mathbf{v}),D^\tau_{\mathbf{v},\mathbf{f}}) \mathbf{f}^* .
\end{equation}
With the above conventions
\begin{equation}
\label{eq:deltatauC}
\ip {\delta_\tau V , \sum_{\mathbf{f}} \phi(\mathbf{f})\mathbf{f}} =
N^2 \ipp {\delta_\tau V , \sum_{\mathbf{f}} \phi(\mathbf{f}) \mathbf{f}^*}
\end{equation}
and it follows from Formulae~\eqref{eq:deltatauB} and \eqref{eq:deltatauC} that
$$
\ipp {\delta_\tau V , \sum_{\mathbf{f}} \phi(\mathbf{f}) \mathbf{f}^* }= \frac 12\sum_{\mathbf{v},\mathbf{f}}
\phi(\mathbf{f}) g(V(\mathbf{v}),D_{\mathbf{v},\mathbf{f}}^\tau) .
$$
We deduce that the adjoint $\delta_\tau^\star$ of $\delta_\tau$ for the inner product
$\ipp{\cdot,\cdot}$ satisfies
\begin{align*}
\ipp {V ,\delta_\tau^\star \sum_{\mathbf{f}} \phi(\mathbf{f}) \mathbf{f}^*} &= \frac 12 \sum_{\mathbf{v}}
\frac 1{N^2}
g\left (V(\mathbf{v}), \sum_{\mathbf{f}} N^2\phi(\mathbf{f})
D^\tau_{\mathbf{v},\mathbf{f}}\right ) \\
&= \ipp{V ,\frac{N^2}2\sum_{\mathbf{v},\mathbf{f}}\phi(\mathbf{f}) D^\tau_{\mathbf{v},\mathbf{f}} \mathbf{v}^*}.
\end{align*}
This proves the following lemma:
\begin{lemma}
\label{lemma:operators}
The operator
$$\delta_\tau : T_{\tau}{\mathscr{M}}_N \to C^2(\mathcal{Q}_N(\Sigma))$$
is given by
\begin{equation}
\label{eq:dN}
\boxed{
\delta_\tau V = \frac {N^2}2 \sum_{\mathbf{v},\mathbf{f}} g(V(\mathbf{v}),
D^\tau_{\mathbf{v},\mathbf{f}}) \mathbf{f}^*,
}
\end{equation}
whereas its adjoint
$$\delta_\tau^\star : C^2(\mathcal{Q}_N(\Sigma))\to T_{\tau}{\mathscr{M}}_N$$
is given by
\begin{equation}
\label{eq:dNstar}
\boxed{
\delta^\star_\tau \phi = \frac {N^2}2 \sum_{\mathbf{v},\mathbf{f}} \phi(\mathbf{f})
D^\tau_{\mathbf{v},\mathbf{f}} \otimes \mathbf{v}^*.
}
\end{equation}
\end{lemma}
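The formulas of Lemma~\ref{lemma:operators} are straightforward to implement. The sketch below assembles $\delta_\tau$ and $\delta_\tau^\star$ on a model $N\times N$ periodic grid with axis-aligned faces standing in for $\mathcal{Q}_N(\Sigma)$ (orientation conventions and names are ours), then checks the adjointness relation for the inner products $\ipp{\cdot,\cdot}$, as well as the fact that constant functions lie in the kernel of $\delta_\tau^\star$ for any mesh.

```python
import random

N, dim = 4, 4   # N x N periodic grid, target R^4 (that is, n = 2)
random.seed(3)
tau = {(k, l): [random.uniform(-1.0, 1.0) for _ in range(dim)]
       for k in range(N) for l in range(N)}

def cycle(k, l):
    # oriented vertices of the face f_{kl}
    return [(k % N, l % N), ((k + 1) % N, l % N),
            ((k + 1) % N, (l + 1) % N), (k % N, (l + 1) % N)]

def diagonals(k, l):
    # D_{v_i, f_{kl}} for the four vertices of the face, in cyclic order
    c = cycle(k, l)
    return c, [[a - b for a, b in zip(tau[c[(i + 3) % 4]], tau[c[(i + 1) % 4]])]
               for i in range(4)]

def delta(V):
    # delta_tau V (f) = (N^2 / 2) sum_v g(V(v), D_{v,f})
    out = {}
    for k in range(N):
        for l in range(N):
            c, D = diagonals(k, l)
            out[(k, l)] = N**2 / 2 * sum(
                sum(V[c[i]][j] * D[i][j] for j in range(dim)) for i in range(4))
    return out

def delta_star(phi):
    # delta_tau^star phi (v) = (N^2 / 2) sum_f phi(f) D_{v,f}
    out = {v: [0.0] * dim for v in tau}
    for k in range(N):
        for l in range(N):
            c, D = diagonals(k, l)
            for i in range(4):
                for j in range(dim):
                    out[c[i]][j] += N**2 / 2 * phi[(k, l)] * D[i][j]
    return out

V = {v: [random.uniform(-1.0, 1.0) for _ in range(dim)] for v in tau}
phi = {(k, l): random.uniform(-1.0, 1.0) for k in range(N) for l in range(N)}
dV, dstar = delta(V), delta_star(phi)
# adjointness for the scaled inner products (both carry the same weight 1 / N^2)
lhs = sum(dV[f] * phi[f] for f in phi) / N**2
rhs = sum(V[v][j] * dstar[v][j] for v in tau for j in range(dim)) / N**2
assert abs(lhs - rhs) < 1e-9
# constant functions always lie in the kernel of delta_star: around each vertex
# the four opposite diagonals cancel pairwise, for any mesh tau
ones = delta_star({f: 1.0 for f in phi})
assert all(abs(x) < 1e-9 for v in tau for x in ones[v])
```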
\begin{rmk}
The operator $\delta_\tau =D\mu^r_N|_\tau\circ J$ is the finite dimensional version of $\delta_f=D\mu|_f\circ{\mathfrak{J}}$ considered in the smooth setting (cf. \S\ref{sec:laplrel}).
In the smooth setting, the adjoint $\delta_f^\star$ allows one to recover the infinitesimal Hamiltonian action of the gauge group ${\mathcal{G}}=\mathrm{Ham}(\Sigma,\sigma)$ on ${\mathscr{M}}$ according to the identity \eqref{eq:hamdstar}. In the finite dimensional approximation, there is no clear group action on ${\mathscr{M}}_N$ for which $\mu_N^r$ would be the corresponding moment map. However, the vector fields $V_\phi(\tau)=\delta_\tau^\star \phi$ define an infinitesimal isometric Hamiltonian action, which should play the role of a finite dimensional approximation of $\mathrm{Ham}(\Sigma,\sigma)$.
\end{rmk}
The kernel of $\delta^\star_\tau$ contains the constant discrete
functions, but it might contain other functions as well.
Generically this is not the case, according to the proposition below:
\begin{prop}
\label{prop:generickernel}
Let $\tau\in{\mathscr{M}}_N$ be a generic quadrangular mesh in the
following sense: for every vertex $\mathbf{v}$ of the quadrangulation
$\mathcal{Q}_N(\Sigma)$, the four possibly nonvanishing diagonals
$D^\tau_{\mathbf{v},\mathbf{f}}$, where $\mathbf{f}$ runs over the faces that contain the vertex $\mathbf{v}$, span a
$3$-dimensional subspace of ${\mathbb{R}}^{2n}$.
Then the kernel of $\delta_\tau^\star$ reduces to constant discrete functions.
\end{prop}
\begin{proof}
The equation $\delta^\star_\tau\phi=0$ provides, at each vertex, a linear system of rank $3$
in the four variables given by the values of $\phi$ on the faces containing the vertex. This implies that
$\phi$ must be locally constant around each vertex, and it follows that
$\phi$ is constant.
\end{proof}
\begin{dfn}
Given a quadrangular mesh $\tau\in{\mathscr{M}}_N$, we define the discrete
Laplacian $\Delta_\tau : C^2(\mathcal{Q}_N(\Sigma)) \to C^2(\mathcal{Q}_N(\Sigma))$ associated to the mesh $\tau$ by
$$
\boxed{
\Delta_\tau = \delta_\tau\delta_\tau ^\star.
}
$$
Given a smooth isotropic immersion $\ell:\Sigma\to {\mathbb{R}}^{2n}$ and its samples $\tau_N\in{\mathscr{M}}_N$,
the operators $\delta_{\tau_N}$,
$\delta_{\tau_N}^\star$ and $\Delta_{\tau_N}$ are denoted $\delta_{N}$,
$\delta_{N}^\star$ and $\Delta_{N}$, for simplicity.
\end{dfn}
\begin{rmk}
Notice the analogy between the operator $\Delta_f$ defined by
Formula~\eqref{eq:Deltaf} and $\Delta_\tau$. The operator
$\Delta_\tau$ will play a central role in the perturbation theory of
quadrangular meshes, as $\Delta_f$ did for smooth isotropic immersions. The reader should already be aware that $\Delta_\tau$ is \emph{not} the classical Laplacian associated to the mesh $\tau$, as will become clear from the sequel.
\end{rmk}
By
Formula~\eqref{eq:dNstar}
$$
\delta_\tau^\star \mathbf{f}^* = \frac {N^2}2 \sum_{\mathbf{v}} D^\tau_{\mathbf{v},\mathbf{f}} \otimes \mathbf{v}^*,
$$
and by Formula~\eqref{eq:dN}
$$
\Delta_\tau \mathbf{f}^* = \frac {N^4}4 \sum_{\mathbf{v},\mathbf{f}_2} g(D^\tau_{\mathbf{v},\mathbf{f}}, D^\tau_{\mathbf{v},\mathbf{f}_2}) \mathbf{f}_{2}^*.
$$
We obtain the following result:
\begin{prop}
\label{prop:laplform}
For $\phi \in C^2(\mathcal{Q}_N(\Sigma))$, we have
$$
\boxed{
\Delta_\tau\phi = \frac {N^4}4 \sum_{\mathbf{v},\mathbf{f},\mathbf{f}_2}
\phi(\mathbf{f}) g(D^\tau_{\mathbf{v},\mathbf{f}}, D^\tau_{\mathbf{v},\mathbf{f}_2})
\mathbf{f}_{2}^*.
}
$$
\end{prop}
\subsection{Coefficients of the discrete Laplacian}
The discrete Laplacian $\Delta_N$ is an endomorphism of $C^2(\mathcal{Q}_N(\Sigma))$ whose
coefficients are explicitly given by Proposition~\ref{prop:laplform}.
When dealing with $\tau_N$, we use the notation
$D_{\mathbf{v},\mathbf{f}}:=D^{\tau_N}_{\mathbf{v},\mathbf{f}}$ for simplicity.
We introduce the coefficients
$$
\beta_{\mathbf{f}_1\mathbf{f}_2} = \frac {N^4}4 \sum_{\mathbf{v}} g(D_{\mathbf{v},\mathbf{f}_1}, D_{\mathbf{v},\mathbf{f}_2}).
$$
By Proposition~\ref{prop:laplform}
$$
\Delta_N \mathbf{f}_1^* = \sum_{\mathbf{f}_2} \beta_{\mathbf{f}_1\mathbf{f}_2} \mathbf{f}_{2}^*.
$$
\subsubsection{Splitting of the Laplacian}
\label{sec:splitlapl}
The matrix $(\beta_{\mathbf{f}\face'})$ is obviously symmetric in $\mathbf{f}$
and $\mathbf{f}'$, which is not
surprising since $\Delta_N$ is
self-adjoint by definition.
The matrix is sparse in the sense that most of the coefficients
$\beta_{\mathbf{f}\face'}$ vanish.
There are three types of possibly nonvanishing coefficients:
\begin{enumerate}
\item $\mathbf{f}=\mathbf{f}'$.
\item $\mathbf{f}$ and $ \mathbf{f}'$ have only one vertex in common.
\item $\mathbf{f}$ and $ \mathbf{f}'$ have exactly one edge (and two vertices) in common.
\end{enumerate}
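The three types above can be illustrated numerically. The following is a minimal sketch, not part of the formal development: faces of an $N\times N$ periodic grid stand for the faces of $\mathcal{Q}_N(\Sigma)$, and random vectors attached to incident vertex-face pairs stand for the diagonals $D_{\mathbf{v},\mathbf{f}}$; the coefficients $\beta_{\mathbf{f}_1\mathbf{f}_2}$ are then assembled from their inner products.

```python
import numpy as np

# Illustrative toy model (names and setup are ours, not the paper's):
# faces f = (k, l) of an N x N periodic grid, vertices v = (i, j);
# D[v, f] is a random stand-in for the diagonal D_{v,f} of an incident pair.
N = 4
rng = np.random.default_rng(0)

def vertices_of_face(k, l):
    # the four corners of the face (k, l), with periodic indices
    return [((k + a) % N, (l + b) % N) for a in (0, 1) for b in (0, 1)]

D = {}
for k in range(N):
    for l in range(N):
        for v in vertices_of_face(k, l):
            D[v, (k, l)] = rng.standard_normal(2)

def beta(f1, f2):
    # beta_{f1 f2} = (N^4/4) * sum over common vertices of g(D_{v,f1}, D_{v,f2})
    common = set(vertices_of_face(*f1)) & set(vertices_of_face(*f2))
    return (N**4 / 4) * sum(D[v, f1] @ D[v, f2] for v in common)

faces = [(k, l) for k in range(N) for l in range(N)]
B = np.array([[beta(f1, f2) for f2 in faces] for f1 in faces])
print(np.allclose(B, B.T))   # the matrix (beta_{f f'}) is symmetric
print(np.count_nonzero(B))   # nine nonzero coefficients per face: types (1)-(3)
```

Each face has exactly nine possibly nonvanishing coefficients, corresponding to itself and its eight neighbours, in agreement with the classification above.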
Using the above observation, we may write the operator $\Delta_N$ as a
sum
$$
\Delta_N = \Delta_N^E + \Delta_N^I.
$$
Here
$$
\Delta^E_N \mathbf{f}^* = \frac {N^4}4 \sum_{
\mathbf{v},\mathbf{f}_2 \in E_{12}(\mathbf{f}) }
g(D_{\mathbf{v},\mathbf{f}}, D_{\mathbf{v},\mathbf{f}_2}) \mathbf{f}_{2}^*
$$
where $E_{12}(\mathbf{f})$ is the set of faces $\mathbf{f}_2$ such that the pair $(\mathbf{f},\mathbf{f}_2)$
is of type (1) or (2), and
$$
\Delta^I_N \mathbf{f}^* = \frac {N^4}4 \sum_{
\mathbf{v},\mathbf{f}_2\in E_3(\mathbf{f})}
g(D_{\mathbf{v},\mathbf{f}}, D_{\mathbf{v},\mathbf{f}_2}) \mathbf{f}_{2}^*
$$
where $E_{3}(\mathbf{f})$ is the set of faces $\mathbf{f}_2$ such that the pair $(\mathbf{f},\mathbf{f}_2)$
is of type (3).
By definition, we have the following lemma:
\begin{lemma}
The operator $\Delta_N^E$ preserves the components of the direct
sum decomposition
$C^2_+(\mathcal{Q}_N(\Sigma))\oplus C^2_-(\mathcal{Q}_N(\Sigma))$, whereas
$\Delta_N^I$ exchanges the components. Accordingly, we have
a block decomposition of the discrete Laplacian
$$
\Delta_N = \left (
\begin{array}{c|c}
\Delta_N^E
& \Delta_N^I \\
\hline
\Delta_N^I& \Delta_N^E
\end{array}
\right ).
$$
\end{lemma}
\subsubsection{Finite difference operators and discrete Laplacian}
The smooth Laplacian~\eqref{eq:Deltaf} is related to a twisted
Riemannian Laplacian by Lemma~\ref{lemma:varmu}.
The goal of this section is to find a similar expression for the discrete
Laplacian $\Delta_N \phi$, using finite difference operators.
The strategy is to compute $\ip{\Delta_N\phi,\mathbf{f}}$ at some face $\mathbf{f}$ of
$\mathcal{Q}_N(\Sigma)$. For this purpose, we will use the notations $\mathbf{f}_{kl}$
and $\mathbf{v}_{ij}$ for faces and vertices of $\mathcal{Q}_N({\mathbb{R}}^2)$, considered
as vertices and faces of $\mathcal{Q}_N(\Sigma)$
(cf. \S\ref{sec:quadnot} and \S\ref{sec:diag1}).
The values of a discrete function are denoted
$$
\phi_{kl}=\ip{\phi,\mathbf{f}_{kl}},
$$
and the diagonals $D_{ijkl}$ are obtained as
$D_{\mathbf{v}_{ij}\mathbf{f}_{kl}}$, with the convention that $D_{ijkl}=0$ if
$\mathbf{v}_{ij}$ is not a vertex of the face $\mathbf{f}_{kl}$ in
$\mathcal{Q}_N({\mathbb{R}}^2)$.
The coefficients $\beta_{\mathbf{f}_{kl}\mathbf{f}_{mn}}$ are denoted $\beta_{klmn}$,
and we choose the integers $k,l$ so that $\mathbf{f}=\mathbf{f}_{kl}$. The
coefficients $\beta_{\mathbf{f}\face'}$ vanish unless $\mathbf{f}=\mathbf{f}'$ or $\mathbf{f}$ and $\mathbf{f}'$ are
contiguous faces. In that case we may choose a unique pair of
integers $(m,n)$ such that
$\mathbf{f}'=\mathbf{f}_{mn}$ with $m\in \{k-1,k,k+1\}$ and $n\in\{l-1,l,l+1\}$.
Under these conditions (cf. \S\ref{sec:splitlapl})
\begin{enumerate}
\item $\mathbf{f}_{kl}$ and $\mathbf{f}_{mn}$ are of type (1) if $(k,l)=(m,n)$,
\item $\mathbf{f}_{kl}$ and $\mathbf{f}_{mn}$ are of
type (2) if $(m,n) = (k\pm 1, l\pm 1)$ or $(k\pm 1, l\mp 1)$,
\item $\mathbf{f}_{kl}$ and $\mathbf{f}_{mn}$ are of
type (3) if $(m,n)= (k\pm 1,l)$ or $(k,l\pm 1)$.
\end{enumerate}
For the first type of coefficients, we find
$$
\beta_{klkl}= \frac {N^4}4 \sum_{ij} \|D_{ijkl}\|^2,
$$
where we may take the sum over all pairs of indices $i,j\in{\mathbb{Z}}$.
For the second type of coefficients, we have
$$
\beta_{klmn}= \frac {N^4}4 g(D_{ijkl},D_{ijmn})
$$
where $\mathbf{v}_{ij}$ is the common vertex of $\mathbf{f}_{kl}$ and
$\mathbf{f}_{mn}$ in $\mathcal{Q}_N({\mathbb{R}}} \def\LL{{\mathbb{L}}} \def\TT{{\mathbb{T}}} \def\EE{{\mathbb{E}}} \def\FF{{\mathbb{F}}^2)$.
In the third case there are two common vertices $\mathbf{v}_{ij}$ and $\mathbf{v}_{i'j'}$
which belong to the same edge. Then
$$
\beta_{klmn} = \frac {N^4}4 g(D_{ijkl},D_{ijmn}) + \frac {N^4}4 g(D_{i'j'kl},D_{i'j'mn}).
$$
For simplicity, we also use the notations $D_{kl}^u$ and
$D_{kl}^v$ for the diagonals (cf. \S\ref{sec:diag1}), which differ only by a sign. We start our
computations with the operator $\Delta^E_N$:
\begin{align*}
4N^{-4} & \ip{\Delta^E_N \phi,\mathbf{f}_{kl}} = - \phi_{k-1,l-1}
g(D_{k-1,l-1}^v, D_{kl}^v) - \phi_{k+1,l+1}
g(D_{k+1,l+1}^v, D_{kl}^v)\\
& - \phi_{k-1,l+1}
g(D_{k-1,l+1}^u, D_{kl}^u) - \phi_{k+1,l-1}
g(D_{k+1,l- 1}^u, D_{kl}^u) \\
&+ 2\phi_{kl}(g(D_{kl}^u, D_{kl}^u) + g(D_{kl}^v, D_{kl}^v))
\end{align*}
One can write
\begin{align*}
\phi_{k+1,l+1}g(D^v_{k+1,l+1}, D^v_{kl})= & \phi_{k+1,l+1}g(D^v_{k,l},
D^v_{kl}) \\
+ & \phi_{kl} g(D^v_{k+1,l+1}- D^v_{kl}, D^v_{kl}) \\
+ & (\phi_{k+1,l+1}-\phi_{k,l})g(D^v_{k+1,l+1}-D^v_{kl}, D^v_{kl})
\end{align*}
together with similar Leibniz-type decompositions for the other terms.
Accordingly, we obtain
\begin{align*}
4N^{-4} \ip{\Delta^E_{N} \phi,\mathbf{f}_{kl}} & = \\
&(- \phi_{k-1,l-1} -
\phi_{k+1,l+1} +2 \phi_{kl}) g(D_{kl}^v, D_{kl}^v) \\
+& (- \phi_{k-1,l+1} -
\phi_{k+1,l-1} +2 \phi_{kl}) g(D_{kl}^u, D_{kl}^u)\\
- & \phi_{kl} g(D_{k-1,l-1}^v + D_{k+1,l+1}^v - 2D_{kl}^v,
D_{kl}^v) \\
-&\phi_{kl} g(D_{k+1,l-1}^u + D_{k-1,l+1}^u - 2D_{kl}^u, D_{kl}^u)
\\
-& (\phi_{k-1,l-1} - \phi_{kl}) g(D^v_{k-1,l-1} -D^v_{kl},D^v_{kl})\\
-& (\phi_{k+1,l+1} - \phi_{kl}) g(D^v_{k+1,l+1}
-D^v_{kl},D^v_{kl}) \\
-& (\phi_{k-1,l+1} - \phi_{kl}) g(D^u_{k-1,l+1} -D^u_{kl},D^u_{kl})\\
-& (\phi_{k+1,l-1} - \phi_{kl}) g(D^u_{k+1,l-1} -D^u_{kl},D^u_{kl})
\end{align*}
We gather the RHS into a sum of four operators.
First we define $\hat \Delta^E_N$; this operator will turn out to be a
discrete version of the Riemannian Laplace-Beltrami operator on $\Sigma$:
\begin{align*}
4N^{-4} \ip{\hat \Delta^E_{N} \phi,\mathbf{f}_{kl}} & = \\
&(- \phi_{k-1,l-1} -
\phi_{k+1,l+1} +2 \phi_{kl}) g(D_{kl}^v, D_{kl}^v) \\
+& (- \phi_{k-1,l+1} -
\phi_{k+1,l-1} +2 \phi_{kl}) g(D_{kl}^u, D_{kl}^u)
\end{align*}
Then we define the operator $K^E_{N}$, which is a kind of discrete
curvature operator, by
\begin{align*}
4N^{-4} \ip{K^E_{N}\phi,\mathbf{f}_{kl}} & = \\
- & \phi_{kl} g(D_{k-1,l-1}^v + D_{k+1,l+1}^v - 2D_{kl}^v,
D_{kl}^v) \\
-&\phi_{kl} g(D_{k+1,l-1}^u + D_{k-1,l+1}^u - 2D_{kl}^u, D_{kl}^u)
\end{align*}
The last four lines can be rearranged into an operator $\Gamma^E_N$
given by
\begin{align*}
4N^{-4} \ip{\Gamma^E_N \phi,\mathbf{f}_{kl}} & = \\
-& \frac 12(\phi_{k-1,l-1} - \phi_{kl}) (|D^v_{k-1,l-1}|_g^2 -|D^v_{kl}|_g^2)\\
-& \frac 12(\phi_{k+1,l+1} - \phi_{kl}) (|D^v_{k+1,l+1}|_g^2
-|D^v_{kl}|_g^2)\\
-& \frac 12(\phi_{k-1,l+1} - \phi_{kl}) (|D^u_{k-1,l+1}|_g^2 -|D^u_{kl}|_g^2)\\
-& \frac 12(\phi_{k+1,l-1} - \phi_{kl}) (|D^u_{k+1,l-1}|_g^2
-|D^u_{kl}|_g^2)
\end{align*}
plus an operator
\begin{align*}
4N^{-4} \ip{{\mathcal{E}}^E_{N} \phi,\mathbf{f}_{kl}} & = \\
+& \frac 12(\phi_{k-1,l-1} - \phi_{kl}) |D^v_{k-1,l-1}-D^v_{kl}|_g^2\\
+& \frac 12(\phi_{k+1,l+1} - \phi_{kl}) |D^v_{k+1,l+1} - D^v_{kl}|_g^2\\
+& \frac 12(\phi_{k-1,l+1} - \phi_{kl}) |D^u_{k-1,l+1} - D^u_{kl}|_g^2\\
+& \frac 12(\phi_{k+1,l-1} - \phi_{kl}) |D^u_{k+1,l-1} -D^u_{kl}|_g^2
\end{align*}
So, we have a decomposition
$$
\Delta_N^E = \hat\Delta_N^E + K^E_N + \Gamma_N^E + {\mathcal{E}}^E_N.
$$
Similar computations can be carried out for $\Delta^I_N$.
\begin{align*}
4N^{-4} & \ip{\Delta^I_N \phi,\mathbf{f}_{kl}} = \\
& \phi_{k+1,l}
(-g(D_{k+1,l}^v, D_{kl}^u) - g(D_{k+1,l}^u, D_{kl}^v))\\
+& \phi_{k,l+1}
(g(D_{k,l+1}^v, D_{kl}^u) + g(D_{k,l+1}^u, D_{kl}^v))\\
+& \phi_{k-1,l}
(-g(D_{k-1,l}^v, D_{kl}^u) - g(D_{k-1,l}^u, D_{kl}^v))\\
+& \phi_{k,l-1}
(g(D_{k,l-1}^v, D_{kl}^u) + g(D_{k,l-1}^u, D_{kl}^v))
\end{align*}
We introduce the averaging operator $\phi\mapsto \bar \phi$ defined by
$$
\bar \phi_{kl}=\ip{\bar\phi, \mathbf{f}_{kl}}= \frac 14 \left (\phi_{k+1,l} +
\phi_{k-1,l} + \phi_{k,l+1} + \phi_{k,l-1}\right )
$$
and we write each term above in the form
\begin{align*}
\phi_{k+1,l}
g(D_{k+1,l}^v, D_{kl}^u) & =\\ & (\bar\phi_{kl} +
(\phi_{k+1,l}-\bar \phi_{kl})) g\left (D_{k,l}^v +
(D_{k+1,l}^v-D_{k,l}^v), D_{kl}^u \right )
\end{align*}
Expanding these expressions leads to
\begin{align*}
4N^{-4} & \ip{\Delta^I_N \phi,\mathbf{f}_{kl}} = \\
& \bar\phi_{kl} g\left ( (D^v_{k,l+1}-D^v_{k+1,l}) -
(D^v_{k-1,l}-D^v_{k,l-1}),D^u_{kl} \right )
\\
+ & \bar\phi_{kl} g\left ( (D^u_{k,l+1}-D^u_{k+1,l}) -
(D^u_{k-1,l}-D^u_{k,l-1}),D^v_{kl} \right )
\\
+&2 ((\phi_{k,l+1} - \phi_{k-1,l}) - (\phi_{k+1,l}-\phi_{k,l-1}))
g(D^v_{kl},D^u_{kl}) \\
+& (\phi_{k+1,l}-\bar \phi_{kl}) (-g(D_{k+1,l}^v - D_{kl}^v,
D_{kl}^u) - g(D_{k+1,l}^u - D_{kl}^u, D_{kl}^v))\\
+& (\phi_{k,l+1}-\bar \phi_{kl}) (g(D_{k,l+1}^v - D_{kl}^v,
D_{kl}^u) + g(D_{k,l+1}^u - D_{kl}^u, D_{kl}^v))\\
+& (\phi_{k-1,l}-\bar \phi_{kl}) (-g(D_{k-1,l}^v - D_{kl}^v,
D_{kl}^u) - g(D_{k-1,l}^u - D_{kl}^u, D_{kl}^v))\\
+& (\phi_{k,l-1}-\bar \phi_{kl}) (g(D_{k,l-1}^v - D_{kl}^v,
D_{kl}^u) + g(D_{k,l-1}^u - D_{kl}^u, D_{kl}^v))\\
\end{align*}
The first two lines can be expressed, using the Chasles relation, as an
operator
\begin{align*}
4N^{-4} \ip{K^I_{N}\phi,\mathbf{f}_{kl}} = &\\
&\bar\phi_{kl}g(D^u_{k-1,l+1}+D^u_{k+1,l-1}-2D^u_{kl},D^u_{kl})\\
+&\bar\phi_{kl}g(D^v_{k+1,l+1}+D^v_{k-1,l-1}-2D^v_{kl},D^v_{kl})
\end{align*}
In particular, we see that if $\phi_{kl}=\bar\phi_{kl}$, then
$\ip{(K^I_{N}+K^E_{N})\phi,\mathbf{f}_{kl}} = 0$. We decompose
$\Delta^I_N$ as a sum
$$
\Delta^I_N= K^I_N +{\mathcal{E}}^I_N.
$$
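The averaging operator introduced above admits a simple coordinate description: a four-point stencil with periodic indices, as in the hedged sketch below (our notation, with discrete functions represented as $N\times N$ arrays of face values).

```python
import numpy as np

# Minimal sketch: phi[k, l] are the face values phi_{kl}, indices taken
# mod N as on the torus.
def average(phi):
    # phi_bar_{kl} = (phi_{k+1,l} + phi_{k-1,l} + phi_{k,l+1} + phi_{k,l-1}) / 4
    return 0.25 * (np.roll(phi, -1, axis=0) + np.roll(phi, 1, axis=0)
                   + np.roll(phi, -1, axis=1) + np.roll(phi, 1, axis=1))

phi = np.ones((4, 4))
print(np.allclose(average(phi), phi))  # constant functions satisfy phi_bar = phi
```

In particular, constant functions are fixed by the averaging, which is the situation where $K^I_N\phi + K^E_N\phi$ vanishes.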
All the above operators may be expressed in terms of finite differences. First
we define analogues $\theta^u_N, \theta_N^v\in C^2({\mathcal{G}}_N(\Sigma))$ of the conformal
factor $\theta$ by
$$
\theta^u_N = \|{\mathscr{U}}_N\|^2_g \mbox { and } \theta^v_N = \|{\mathscr{V}}_N\|^2_g,
$$
and a discrete analogue of the Gau{\ss} curvature plus an energy term, $\kappa_N\in C^2({\mathcal{G}}_N(\Sigma))$, given by
$$
\kappa_N =- g\left (\frac{\partial^2}{\partial\cev u \partial\vec u}{\mathscr{V}}_N
,{\mathscr{V}}_N\right ) - g\left (\frac{\partial^2}{\partial\cev v \partial\vec
v}{\mathscr{U}}_N ,{\mathscr{U}}_N\right ).
$$
\begin{prop}
The operators introduced above satisfy the following identities for
every discrete function $\phi$:
\label{prop:formopl}
$$
\hat\Delta ^E_N\phi= - \left (\theta_N^u \frac{\partial^2}{\partial\cev u \partial\vec u}
+ \theta_N^v \frac{\partial^2}{\partial\cev v \partial\vec v}\right ) \phi
$$
$$
K^E_N\phi = \kappa_N\cdot\phi
$$
\begin{align*}
\Gamma_N^E\phi =& -\frac 12 \left (\frac {\partial\phi}{\partial\cev u} \frac {\partial\theta_N^v}{\partial\cev
u}+\frac {\partial\phi}{\partial\vec u} \frac {\partial\theta_N^v}{\partial\vec
u}+\frac {\partial\phi}{\partial\cev v} \frac {\partial\theta_N^u}{\partial\cev
v}+\frac {\partial\phi}{\partial\vec v} \frac {\partial\theta_N^u}{\partial\vec
v} \right )\\
=&- \mathrm{grad}\phi \cdot \mathrm{grad} \theta_N
\end{align*}
and
$$
K_N^I\phi = -\kappa_N\cdot \bar\phi.
$$
\end{prop}
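The first identity of the Proposition exhibits $\hat\Delta^E_N$ as a weighted second-difference operator. The following minimal sketch is illustrative only: the powers of $N$ are omitted, the two diagonal directions of the face grid stand for the $u$- and $v$-directions, and constant arrays stand for the factors $\theta^u_N$, $\theta^v_N$.

```python
import numpy as np

# Hedged sketch of the stencil of hat-Delta^E_N (scaling constants omitted):
# second differences of the face values along the two diagonal directions
# of the grid, weighted by the discrete conformal factors theta_u, theta_v.
def hat_delta_E(phi, theta_u, theta_v):
    d2_v = 2 * phi - np.roll(phi, (1, 1), axis=(0, 1)) - np.roll(phi, (-1, -1), axis=(0, 1))
    d2_u = 2 * phi - np.roll(phi, (1, -1), axis=(0, 1)) - np.roll(phi, (-1, 1), axis=(0, 1))
    return theta_v * d2_v + theta_u * d2_u

phi = np.ones((8, 8))
print(np.allclose(hat_delta_E(phi, 1.0, 1.0), 0))  # constants lie in the kernel
```

As expected for a Laplace-type operator, constant discrete functions lie in its kernel.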
The operators ${\mathcal{E}}^E_N$ and ${\mathcal{E}}^I_N$ become negligible as $N$ goes to
infinity, in the sense of the following proposition:
\begin{prop}
\label{prop:errors}
There exists a sequence $\epsilon_N={\mathcal{O}}(N^{-1})$ with $\epsilon_N>0$
such that for all $N$ and all
functions $\phi\in C^2(\mathcal{Q}_N(\Sigma))$, we have
$$
\|{\mathcal{E}}^I_N\phi \|_{{\mathcal{C}}^{0,\alpha}}\leq \epsilon_N \|\phi
\|_{{\mathcal{C}}^{2,\alpha}}
\quad \mbox { and } \quad
\|{\mathcal{E}}^E_N\phi \|_{{\mathcal{C}}^{0,\alpha}}\leq \epsilon_N \|\phi \|_{{\mathcal{C}}^{2,\alpha}}
$$
and
$$
\|{\mathcal{E}}^I_N\phi \|_{{\mathcal{C}}^{0}} \leq \epsilon_N \|\phi
\|_{{\mathcal{C}}^{2}}
\quad \mbox { and } \quad
\|{\mathcal{E}}^E_N\phi \|_{{\mathcal{C}}^{0}} \leq \epsilon_N \|\phi \|_{{\mathcal{C}}^{2}}
$$
\end{prop}
\section{Limit operator}
\label{sec:limit}
\subsection{Computation of the limit operator}
We denote by $\Delta_\sigma$ (resp. $\Delta_\Sigma$) the Laplace-Beltrami operator associated to the Riemannian metric $g_\sigma$ (resp. $g_\Sigma$) on $\Sigma$.
\begin{theo}
\label{theo:limit}
Let $k$ be an integer such that $k\geq 2$.
For every sequence of discrete functions $\psi_{N_j}\in
C^2(\mathcal{Q}_{N_j}(\Sigma))$, converging in the ${\mathcal{C}}^k_w$-sense toward a
pair of functions
$(\phi^+,\phi^-)$, we have
$$
\Delta_{N_j}\psi_{N_j} \stackrel{{\mathcal{C}}^{k-2}}{\longrightarrow } \Xi(\phi^+,\phi^-)
$$
where $\Xi(\phi^+,\phi^-)$ is the pair of functions defined by
\begin{align*}
\Xi (\phi^+,\phi^-)= \Big ( & \theta\Delta_{\sigma}\phi^+ - g_\sigma (d\phi^+,d\theta)
+ (K+E)(\phi^+ - \phi^-),\\
& \theta\Delta_{\sigma}\phi^- - g_\sigma (d\phi^-,d\theta)
+ (K+E)(\phi^- - \phi^+) \Big ),
\end{align*}
where $K$ is the Gau{\ss} curvature of $g_\Sigma$ and $E$ is a nonnegative function on $\Sigma$ defined by~\eqref{eq:defE}.
\end{theo}
\begin{proof}
The result is an immediate consequence of
Proposition~\ref{prop:formopl} and the convergence of the
coefficients of the operator.
The only nontrivial fact that must be proved is the following
lemma:
\begin{lemma}
\label{lemma:gauss}
We have the identity
$$
K + E =-g\left (\frac{\partial^3\ell}{\partial u^2\partial v}, \frac{\partial\ell}{\partial v}\right ) - g\left (\frac{\partial^3\ell}{\partial v^2\partial u}, \frac{\partial\ell}{\partial u}\right ),
$$
where $K$ is the Gau{\ss} curvature of the metric $g_\Sigma$ and $E$ is the nonnegative function on $\Sigma$ defined via the second fundamental form ${\mathrm{\mathbf{I\!I}}}$ of $\ell:\Sigma\to {\mathbb{R}}^{2n}$ by
\begin{equation}
\label{eq:defE}
E = 2 g\left (
\frac{\partial^2 \ell}{\partial u\partial v}^\perp,\frac{\partial^2 \ell}{\partial u\partial
v}^\perp \right ) = 2g\left ({\mathrm{\mathbf{I\!I}}} \left (\frac\partial{\partial u},
\frac\partial{\partial v}\right ), {\mathrm{\mathbf{I\!I}}} \left (\frac\partial{\partial u},
\frac\partial{\partial v}\right )\right ).
\end{equation}
\end{lemma}
\begin{proof}
Recall the standard formula for the Gau{\ss} curvature $K$ of the metric $g_\Sigma= \ell^* g = \theta g_\sigma$, conformal to the flat metric $g_\sigma$:
$$
K= \frac 12 \theta\Delta_\sigma \log\theta.
$$
Using the classical identity
$$
\theta\Delta_\sigma \log\theta = \Delta_\sigma\theta + \theta^{-1}g_\sigma (d\theta,d\theta)
$$
and using the fact that
$$
\theta = g\left (\frac{\partial \ell}{\partial u}, \frac{\partial \ell}{\partial u}\right ) = g \left (\frac{\partial \ell}{\partial v}, \frac{\partial \ell}{\partial v}\right ),
$$
we compute
\begin{equation}
\label{eq:K1}
\frac{\partial\theta}{\partial v} = 2g \left (\frac{\partial^2 \ell}{\partial u\partial v},
\frac{\partial \ell}{\partial u}\right ),\quad \frac{\partial\theta}{\partial u} = 2g\left (\frac{\partial^2 \ell}{\partial u\partial v}, \frac{\partial \ell}{\partial v}\right ),
\end{equation}
hence
$$
\frac{\partial^2\theta}{\partial v^2} = 2g\left (\frac{\partial^3 \ell}{\partial u\partial v^2}, \frac{\partial \ell}{\partial u}\right )
+ 2g\left (\frac{\partial^2 \ell}{\partial u\partial v}, \frac{\partial^2 \ell}{\partial u\partial v} \right )
$$
and
$$
\frac{\partial^2\theta}{\partial u^2} = 2g \left (
\frac{\partial^3 \ell}{\partial u^2\partial v}, \frac{\partial \ell}{\partial v}
\right )
+ 2g \left (
\frac{\partial^2 \ell}{\partial u\partial v}, \frac{\partial^2 \ell}{\partial u\partial v}
\right ).
$$
In particular
\begin{align*}
g_\sigma (d\theta,d\theta ) = & \left |\frac {\partial \theta}{\partial u} \right |^2 + \left |\frac {\partial \theta}{\partial v} \right |^2 \\
= & 4g \left (\frac{\partial^2 \ell}{\partial u\partial
v},\frac{\partial \ell}{\partial u} \right ) ^2 +4g \left (\frac{\partial^2 \ell}{\partial u\partial
v}, \frac{\partial \ell}{\partial v} \right )^2
\end{align*}
thanks to Formula~\eqref{eq:K1}. The fact that $\frac{\partial\ell}{\partial u}$ and $\frac{\partial\ell}{\partial v}$ form an orthogonal family of vectors of $g$-norm $\sqrt\theta$ implies that any vector $V\in {\mathbb{R}}^{2n}$ satisfies the identity
$$
g \left (V,\frac{\partial \ell}{\partial u} \right ) ^2 +g \left (V, \frac{\partial \ell}{\partial v} \right )^2 = \theta g
\left (V^T,V^T \right ),
$$
where $V^T$ is the $g$-orthogonal projection of $V$ onto the plane spanned by $\frac{\partial\ell}{\partial u}$ and $\frac{\partial\ell}{\partial v}$. In other words, $V^T$ is the $g$-orthogonal projection onto the tangent plane to $\ell(\Sigma)$.
Therefore
$$ \theta^{-1} g_\sigma (d\theta,d\theta ) = 4 g\left (
\frac{\partial^2 \ell}{\partial u\partial v}^T,\frac{\partial^2 \ell}{\partial u\partial v}^T \right )
$$
and
$$
\Delta_\sigma\theta = -2g\left (\frac{\partial^3 \ell}{\partial u\partial v^2},
\frac{\partial \ell}{\partial u}\right )-2g\left (\frac{\partial^3 \ell}{\partial u^2\partial v},
\frac{\partial \ell}{\partial v}\right ) -
4g \left (\frac{\partial^2 \ell}{\partial u\partial v},\frac{\partial^2 \ell}{\partial u\partial v} \right ).
$$
In conclusion
\begin{align*}
2K = & \theta \Delta_\sigma \log \theta \\
= & \Delta_\sigma \theta + \theta^{-1}g_\sigma(d\theta,d\theta) \\
= & -2g(\frac{\partial^3 \ell}{\partial u\partial v^2}, \frac{\partial \ell}{\partial u})-2g(\frac{\partial^3 \ell}{\partial u^2\partial v}, \frac{\partial \ell}{\partial v}) - 4 g\left (
\frac{\partial^2 \ell}{\partial u\partial v}^\perp,\frac{\partial^2 \ell}{\partial u\partial v}^\perp \right )
\end{align*}
where $\perp$ denotes the component of a vector orthogonal to the tangent space of $\ell(\Sigma)$ at the given point. Adding $2E$ to both sides and using the definition~\eqref{eq:defE} of $E$ yields the identity of the lemma.
\end{proof}
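As a sanity check, the classical identity $\theta\Delta_\sigma\log\theta=\Delta_\sigma\theta+\theta^{-1}g_\sigma(d\theta,d\theta)$ used in the proof can be verified symbolically. The sketch below assumes the geometer's sign convention $\Delta_\sigma=-(\partial_u^2+\partial_v^2)$ in flat local coordinates, with an arbitrary positive test function in place of $\theta$.

```python
import sympy as sp

# Symbolic check of theta*Delta(log theta) = Delta(theta) + |d theta|^2 / theta,
# with the sign convention Delta = -(d^2/du^2 + d^2/dv^2); theta below is an
# arbitrary positive test function chosen for the sake of the sketch.
u, v = sp.symbols('u v', real=True)
theta = 1 + u**2 + v**2

def lap(f):
    return -(sp.diff(f, u, 2) + sp.diff(f, v, 2))

lhs = theta * lap(sp.log(theta))
rhs = lap(theta) + (sp.diff(theta, u)**2 + sp.diff(theta, v)**2) / theta
print(sp.simplify(lhs - rhs))  # 0
```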
\begin{cor}
\label{cor:kappaconv}
For every integer $k\geq 0$,
$$
\kappa_N \stackrel{{\mathcal{C}}^k}\longrightarrow K+E.
$$
\end{cor}
The coefficients of $\Delta_N$ are now all understood asymptotically. This completes the proof of the theorem.
\end{proof}
\begin{dfn}
The operator defined by
\begin{align*}
\Xi(\phi^+,\phi^-)= \Big ( &\theta\Delta_{\sigma}\phi^+ - g_\sigma(d\phi^+,d\theta)
+ (K+E)(\phi^+ - \phi^-),\\
& \theta\Delta_{\sigma}\phi^- - g_\sigma (d\phi^-,d\theta)
+ (K+E)(\phi^- - \phi^+) \Big )
\end{align*}
is called the limit operator of $\Delta_{N}$.
\end{dfn}
\begin{rmk}
In particular, the limit operator $\Xi$ is elliptic. This fact will be crucial to derive uniform discrete Schauder estimates for $\Delta_N$.
\end{rmk}
\subsection{Kernel of the limit operator}
\begin{prop}
\label{prop:kernel}
A pair of smooth functions $(\phi^+,\phi^-)$ is an element of the kernel of the limit
operator $\Xi$ if, and only if, there exist real constants
$c_0$ and $c_1$ such that
$$
\phi^+ = c_0 + c_1\theta^{-1},\quad \phi^- = c_0 - c_1\theta^{-1},
$$
with $c_1=0$, unless the function $E$ vanishes identically on $\Sigma$.
In particular, the kernel of $\Xi$ has dimension $2$ if $E$ vanishes identically and dimension $1$ otherwise.
\end{prop}
\begin{proof}
The Proposition is proved by a straightforward argument using integration by parts. A few formulae are needed in order to give a streamlined proof:
\begin{lemma}
\label{lemma:kernel}
For every smooth function $f:\Sigma\to {\mathbb{R}}$, we have
$$
d^{*_\sigma}(\theta\, df) =\theta\Delta_{\sigma} f - g_\sigma(d f, d \theta ),
$$
where $d^ {*_\sigma}$ is the adjoint of $d$ with respect to the $L^2$-inner product induced by $g_\sigma$.
On the other hand, we have
$$
\theta\, d^{*_\sigma}\left(\theta^{-1} d(\theta f)\right) = \theta\Delta_\sigma f - g_\sigma (d f, d \theta) + 2Kf
$$
where $K$ is the Gau{\ss} curvature of $g_\Sigma$.
\end{lemma}
\begin{proof}
For every $1$-form $\beta$
and every function $w$ on $\Sigma$, we have
$d^ {*_\sigma}(w\beta) = -*_\sigma d*_\sigma (w\beta)= -*_\sigma d(w*_\sigma \beta) = -*_\sigma (dw\wedge
*_\sigma \beta + wd*_\sigma \beta)= wd^{*_\sigma}\beta-*_\sigma g_\sigma(dw,\beta)\mathrm{vol}_\sigma =
wd^{*_\sigma}\beta- g_\sigma (dw,\beta )$. In conclusion
$$d^ {*_\sigma}(w\beta) = wd^{*_\sigma}\beta-
g_\sigma(dw,\beta).$$
The first formula of the lemma follows from the above identity.
For the second identity, we have $\theta^{-1}d(\theta f) = fd\log\theta +
df$. Now, $d^{*_\sigma}\left(\theta^{-1}d(\theta f)\right) = fd^{*_\sigma}d\log \theta -
g_\sigma (df,d\log\theta) +d^{*_\sigma}df$. We use the fact that the Gau{\ss} curvature
of $g_\Sigma$ is given by the formula
$2K= \theta\Delta_\sigma \log\theta$ and deduce the second identity of the lemma.
\end{proof}
We may now complete the proof of Proposition~\ref{prop:kernel}.
Let $\phi^\pm$ be a solution of the system
\begin{align*}
\theta\Delta_{\sigma}\phi^+ - g_\sigma ({d \phi^+, d \theta}) + (K+E)(\phi^+
- \phi^-) &= 0 \\
\theta\Delta_{\sigma}\phi^- - g_\sigma({d \phi^-, d \theta}) + (K+E)(\phi^-
- \phi^+)&= 0.
\end{align*}
Adding up the two equations gives the identity
$$
\theta\Delta_{\sigma}(\phi^++\phi^-) - g_\sigma\left(d (\phi^++\phi^-), d
\theta\right) = d^{*_\sigma}\theta\, d(\phi^++\phi^-) = 0
$$
by Lemma~\ref{lemma:kernel}. Integrating against $\phi^++\phi^-$ using the $L^2$-inner product induced by $g_\sigma$ gives $0=\ip{d^*\theta d(\phi^++\phi^-), \phi^++\phi^-}_{L^2} = \ip{\theta d(\phi^++\phi^-), d(\phi^++\phi^-)}_{L^2}$. Since $\theta$ is positive, this forces
$$
\phi^+ + \phi^- = 2c_0,
$$
for some constant $c_0$.
On the other hand the difference of the two equations provides the identity
$$
\theta\Delta_{\sigma}(\phi^+-\phi^-) - g_\sigma (d (\phi^+-\phi^-),
d\theta ) + 2(K+E)(\phi^+ - \phi^-)=0.
$$
By Lemma~\ref{lemma:kernel}, we deduce that
$$
\theta d^{*_\sigma}\theta^{-1} d\theta(\phi^+-\phi^-) +2E (\phi^+ - \phi^-) =0.
$$
Integrating the above equation against $\theta(\phi^+-\phi^-)$ provides the identity
$$
\ip{\theta d(\phi^+-\phi^-), d(\phi^+-\phi^-) }_{L^2} + \ip{2E (\phi^+-\phi^-), \phi^+-\phi^-}_{L^2}=0.
$$
Now $\theta$ is positive and $E$ is nonnegative, so both terms on the LHS are nonnegative: they must both vanish.
The vanishing of the first term forces
$$
\phi^+ -\phi^- = 2\theta^{-1} c_1 ,
$$
for some real constant $c_1$. The vanishing of the second term implies that $c_1=0$ unless $E$ vanishes identically on $\Sigma$.
\end{proof}
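The kernel description can also be checked symbolically in the degenerate case $E\equiv 0$. The hedged sketch below works in flat local coordinates with an arbitrary positive function in place of the conformal factor $\theta$, and verifies that $\phi^\pm=c_0\pm c_1\theta^{-1}$ solves the first kernel equation (the second follows by exchanging $\phi^+$ and $\phi^-$).

```python
import sympy as sp

# Hedged symbolic check, in flat local coordinates, that
# phi^{+/-} = c0 +/- c1/theta lies in the kernel of Xi when E = 0.
# Conventions assumed here: Delta_sigma = -(d^2/du^2 + d^2/dv^2) and
# 2K = theta * Delta_sigma(log theta); theta is an arbitrary positive function.
u, v, c0, c1 = sp.symbols('u v c0 c1', real=True)
theta = 1 + u**2 + v**2

def lap(f):
    return -(sp.diff(f, u, 2) + sp.diff(f, v, 2))

def pair(f, g):
    # g_sigma(df, dg) in flat coordinates
    return sp.diff(f, u) * sp.diff(g, u) + sp.diff(f, v) * sp.diff(g, v)

K = sp.Rational(1, 2) * theta * lap(sp.log(theta))
phi_p = c0 + c1 / theta
phi_m = c0 - c1 / theta
eq = theta * lap(phi_p) - pair(phi_p, theta) + K * (phi_p - phi_m)
print(sp.simplify(eq))  # 0
```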
\subsection{Degenerate families of quadrangulations}
Proposition~\ref{prop:kernel} leads us to distinguish two types of
constructions.
Recall that the construction of $\mathcal{Q}_N(\Sigma)$ depends on the choice
of a
Riemannian universal cover $p:{\mathbb{R}}^2\to \Sigma$ for the
flat metric $g_\sigma$ on $\Sigma$. Such covers are not unique. They may be, for instance, precomposed with a rotation of ${\mathbb{R}}^2$. Equivalently, we may
replace the canonical basis of ${\mathbb{R}}^2$ by a rotated basis, which
also provides rotated $(u,v)$-coordinates.
We introduce a definition of degeneracy, bearing on pairs
$(p,\ell)$ consisting of an isotropic
immersion $\ell:\Sigma\to{\mathbb{R}}^{2n}$ and a choice of Riemannian cover
$p:{\mathbb{R}}^2\to\Sigma$ for a flat metric $g_\sigma$ in the conformal
class of the induced metric $g_\Sigma$.
\begin{dfn}
\label{dfn:degenerate}
We say that the pair $(p, \ell)$ is degenerate if the function $E:\Sigma\to{\mathbb{R}}$ defined
by~\eqref{eq:defE} vanishes identically. Otherwise, we say that the
pair $(p,\ell)$ is nondegenerate.
\end{dfn}
\begin{example}
An example of degenerate pair is provided by
the map
$$\ell:{\mathbb{R}}^2 \longrightarrow \CC\times \CC\simeq {\mathbb{R}}^{4}$$
defined by
$$\ell(x,y)=(\exp (2\pi iu), \exp(2\pi iv))\in \CC^2,
$$
where $(x,y)$ are the canonical coordinates of ${\mathbb{R}}^2$ and $(u,v)$ are the rotated coordinates defined by~\eqref{eq:uv}.
This map clearly satisfies
\begin{equation}
\label{eq:degex}
\frac{\partial^2\ell}{\partial u\partial v}=0 .
\end{equation}
Moreover, $\ell$ is invariant under the lattice $\Gamma$ spanned by $\frac{e_1+e_2}{\sqrt{2}}$ and $\frac{e_2-e_1}{\sqrt{2}}$. Hence $\ell$ descends to a quotient map denoted $\ell:{\mathbb{R}}^2/\Gamma\to\CC^2$.
We obtain a pair $(p,\ell)$, where $p:{\mathbb{R}}^2\to{\mathbb{R}}^2/\Gamma$ is the canonical projection, which is degenerate in the sense of Definition~\ref{dfn:degenerate} by~\eqref{eq:degex}.
\end{example}
Degenerate pairs can create additional technical
difficulties. Nevertheless, they may be taken care of with some additional
caution (cf. \S\ref{sec:deg}). Alternatively, they can simply be avoided, thanks to the following proposition:
\begin{prop}
\label{prop:avoid}
Given a pair $(p,\ell)$, there always exists a rotation $r$ of
${\mathbb{R}}^2$ such that $(p\circ r,\ell)$ is nondegenerate.
\end{prop}
\begin{proof}
The $(u,v)$ coordinates of ${\mathbb{R}}^2$ induce an
orthonormal basis of tangent vectors of $\Sigma$ for the metric
$g_\sigma$, denoted $\frac{\partial}{\partial u}$, $\frac{\partial}{\partial
v}$. If $(p,\ell)$ is degenerate, ${\mathrm{\mathbf{I\!I}}}$ must vanish identically on this pair of
vector fields. If $(p\circ r,\ell)$ is degenerate for every rotation
$r$ of ${\mathbb{R}}^2$, the second fundamental form must also
vanish for every pair of tangent vectors obtained by rotating
the basis $\frac{\partial}{\partial u}$, $\frac{\partial}{\partial v}$. This means
that ${\mathrm{\mathbf{I\!I}}}$ vanishes on every pair of orthogonal tangent vectors for
$g_\sigma$. Since $g_\Sigma$ is conformal to $g_\sigma$, this means
that ${\mathrm{\mathbf{I\!I}}}$ must vanish for every pair of orthogonal tangent vectors
for $g_\Sigma$. This is a contradiction according to the following
lemma:
\begin{lemma}
For any immersion $\ell:\Sigma\to{\mathbb{R}}^{2n}$, where $\Sigma$ is a
closed surface diffeomorphic to a torus, there exists a point
$x\in\Sigma$ and an orthogonal basis of tangent vectors $U$, $V$
for the induced metric $g_\Sigma$, such that the second
fundamental form satisfies ${\mathrm{\mathbf{I\!I}}}(U,V)\neq 0$.
\end{lemma}
\begin{proof}
We argue by contradiction and assume that, at every point $x\in\Sigma$,
${\mathrm{\mathbf{I\!I}}}(U,V)$ vanishes for every orthonormal basis
$(U,V)$ of $T_x\Sigma$. Notice that in this case $(U+V,U-V)$ is an
orthogonal basis, hence, by assumption ${\mathrm{\mathbf{I\!I}}}(U+V,U-V)=0$, and we have
$$
{\mathrm{\mathbf{I\!I}}}(U,U)={\mathrm{\mathbf{I\!I}}}(U+V,U-V) +{\mathrm{\mathbf{I\!I}}}(V,V) = {\mathrm{\mathbf{I\!I}}}(V,V).
$$
By the Gau{\ss} Theorema Egregium, the curvature $K$ of
$g_\Sigma$ is given by
$$
K = - g({\mathrm{\mathbf{I\!I}}}(U,V),{\mathrm{\mathbf{I\!I}}}(V,U)) +g({\mathrm{\mathbf{I\!I}}}(U,U),{\mathrm{\mathbf{I\!I}}}(V,V)).
$$
According to our discussion, we deduce that
$$
K=g({\mathrm{\mathbf{I\!I}}}(U,U),{\mathrm{\mathbf{I\!I}}}(U,U))\geq 0.
$$
By the Gau{\ss}-Bonnet formula, a torus with nonnegative curvature has
vanishing curvature. Thus $K=0$, and as a corollary ${\mathrm{\mathbf{I\!I}}}(U,U)=0$, which
implies that ${\mathrm{\mathbf{I\!I}}}=0$. In conclusion, the image of $\ell:\Sigma\to{\mathbb{R}}^{2n}$
is totally geodesic. The only totally geodesic surfaces of ${\mathbb{R}}^{2n}$ are
$2$-planes. This forces the image of $\ell$ to be contained in a
plane. This is not possible for an immersion of a compact surface.
\end{proof}
In conclusion there is a choice of rotation $r$ such that $(p\circ
r,\ell)$ is nondegenerate, which proves the proposition.
\end{proof}
\subsection{Schauder Estimates}
The following result is a consequence of a theorem of Thom\'ee, stated in a broader context
\cite{Thomee68}, for various elliptic finite difference operators, in the
case of domains of ${\mathbb{R}}^n$ covered by square lattices of step
$h=N^{-1}$. We provide here a statement adapted to the torus
$\Sigma$, identified with the quotients ${\mathbb{R}}^2/\Gamma_N$ endowed with their spaces
of discrete functions.
\begin{theo}[Thom\'ee type theorem]
\label{theo:thomee}
There exists a constant $c_1>0$ such that for all $N\geq 0$ and for
all functions $\psi\in C^2(\mathcal{Q}_N(\Sigma))$, we have
$$
\| P_N \psi\|_{{\mathcal{C}}^{0,\alpha}_w} + \|\psi\|_{{\mathcal{C}}^0} \geq c_1\|\psi\|_{{\mathcal{C}}^{2,\alpha}_w},
$$
where
$$
P_N =\hat\Delta^E_N + \Gamma^E_N.
$$
\end{theo}
\begin{proof}
Proposition~\ref{prop:formopl} can be readily used to prove an analogue of
Theorem~\ref{theo:limit} for the operators $P_N$. In other words, for
every $k\geq 2$ and for every sequence $\psi_{N_j}\in
C^2_+(\mathcal{Q}_{N_j}(\Sigma))$, we have (cf.~\eqref{eq:Deltaf} and
Lemma~\ref{lemma:varmu} as well)
\begin{equation}
\label{eq:convPN}
\psi_{N_j}\stackrel{{\mathcal{C}}^{k}} \longrightarrow \phi \quad
\Rightarrow \quad
P_{N_j}\psi_{N_j}\stackrel{{\mathcal{C}}^{k-2}}\longrightarrow
\Delta_{\ell}\phi.
\end{equation}
The operators $P_N$ admit canonical lifts $\tilde P_N:C^2_+(\mathcal{Q}_N({\mathbb{R}}^2))
\to C^2_+(\mathcal{Q}_N({\mathbb{R}}^2))$. The elliptic operator $\Delta_\ell$ can
also be lifted as an elliptic operator with smooth coefficients $\tilde\Delta_\ell$ acting on functions on the plane.
By Property~\eqref{eq:convPN}, the discrete operators $\tilde P_N:
C^2_+(\mathcal{Q}_N({\mathbb{R}}^2))\to C^2_+(\mathcal{Q}_N({\mathbb{R}}^2))$ converge toward the elliptic operator
$\tilde \Delta_\ell$. This implies that the sequence of discrete
operators $\tilde P_N$ is consistent with the elliptic operator
$\tilde \Delta_\ell$
and that the operators $\tilde P_N$ must be elliptic, for $N$ sufficiently large, in the sense of Thom\'ee~\cite{Thomee68}.
We consider a fundamental domain ${\mathcal{D}}$ of the action of $\Gamma$ on
${\mathbb{R}}^2$. For $r_0>0$ sufficiently large, ${\mathcal{D}}\subset B(0,r_0)$.
We define
$$\Omega_0 = B(0,r_0+1), \quad \Omega_1 = B(0,r_0+2) \mbox{ and }
\Omega_2={\mathbb{R}}} \def\LL{{\mathbb{L}}} \def\TT{{\mathbb{T}}} \def\EE{{\mathbb{E}}} \def\FF{{\mathbb{F}}^2,
$$
where $B(0,r)$ is an Euclidean ball of ${\mathbb{R}}} \def\LL{{\mathbb{L}}} \def\TT{{\mathbb{T}}} \def\EE{{\mathbb{E}}} \def\FF{{\mathbb{F}}^2$ or radius $r$, centered
at the origin. By definition we have compact embeddings of the domains $\Omega_0 \Subset
\Omega_1 \Subset \Omega_2$.
The finite differences~\eqref{eq:fd1} and~\eqref{eq:fd2} used to define the discrete finite
difference operators $\tilde P_N$ correspond to the finite differences
defined in \cite{Thomee68}, modulo a translation operator for
the backward differences. It follows that~\cite[Theorem 2.1]{Thomee68}
applies in our setting: there exists a constant $c>0$ such that for every $N$
sufficiently large and every $\phi\in C^2_+({\mathbb{R}}^2)$,
\begin{equation}
\label{eq:schau1}
\|\phi\|_{{\mathcal{C}}_T^{2,\alpha}(\Omega_1)} \leq c \Big \{\| \tilde P_N
\phi\|_{{\mathcal{C}}_T^{0,\alpha}(\Omega_2)} + \|\phi\|_{{\mathcal{C}}_T^0(\Omega_2)} \Big
\}.
\end{equation}
\begin{rmk}
\label{rmk:tnorm}
In the above notation, the
${\mathcal{C}}^{k,\alpha}_T(\Omega_i)$-norms on $C^2_+(\mathcal{Q}_N({\mathbb{R}}^2))$ are the norms defined
in~\cite{Thomee68}, using only forward differences.
For $\Omega_2={\mathbb{R}}^2$, these norms coincide with
the ${\mathcal{C}}^{k,\alpha}$-norms introduced in~\S\ref{sec:dhn}.
If $\Omega=B(0,R)$, the definition of the norms given in~\eqref{eq:C0}
and~\eqref{eq:C0alpha} has to be modified slightly for the
${\mathcal{C}}_T^{k,\alpha}(\Omega)$-norm. In order to describe what has to be
modified, assume for a moment that $\phi$ is a discrete
function defined only on the set of vertices of ${\mathcal{G}}^+_N({\mathbb{R}}^2)$
contained in $\overline\Omega$. Notice that the finite differences
$\frac{\partial\phi}{\partial u}$ and $\frac{\partial\phi}{\partial v}$ are defined on
a smaller set, the second order partial derivatives on an even
smaller set, and so on. The ${\mathcal{C}}^{k,\alpha}_T(\Omega)$-norms are defined
similarly to the ${\mathcal{C}}^{k,\alpha}$-norms, by taking the corresponding suprema over a
smaller set of vertices, namely the vertices of ${\mathcal{G}}^+_N({\mathbb{R}}^2)$ contained
in $\overline \Omega$ where the relevant partial derivatives are well
defined.
\end{rmk}
For $\psi\in C^2_+(\mathcal{Q}_N(\Sigma))$, we define the lift
$\phi_N=\psi\circ p_N$.
By Remark~\ref{rmk:tnorm}, since $\Omega_2={\mathbb{R}}^2$, the RHS of~\eqref{eq:schau1} applied to $\phi_N$ is equal to
$$c \Big \{\| \tilde P_N
\phi_N\|_{{\mathcal{C}}^{0,\alpha}} + \|\phi_N\|_{{\mathcal{C}}^0} \Big
\}.$$
By definition of the discrete H\"older norms, $\|\psi\|_{{\mathcal{C}}^0}=
\|\phi_N\|_{{\mathcal{C}}^0}$ and since $\tilde P_N\phi_N =
(P_N\psi)\circ p_N$, we have
$$
\| \tilde P_N
\phi_N\|_{{\mathcal{C}}^{0,\alpha}} = \| P_N
\psi\|_{{\mathcal{C}}^{0,\alpha}}.
$$
Thus by~\eqref{eq:schau1}
\begin{equation}
\label{eq:schau2}
\|\psi\circ p_N\|_{{\mathcal{C}}_T^{2,\alpha}(\Omega_1)}\leq c \Big \{\| P_N
\psi\|_{{\mathcal{C}}^{0,\alpha}} + \|\psi\|_{{\mathcal{C}}^0} \Big \} .
\end{equation}
We conclude using the following result:
\begin{lemma}
There exists a constant $c'>0$
such that for every $N$ sufficiently large and all $\psi\in C^2_+(\mathcal{Q}_N(\Sigma))$
$$
c'\|\psi\|_{{\mathcal{C}}^{2,\alpha}} \leq \|\psi\circ p_N \|_{{\mathcal{C}}_T^{2,\alpha} (\Omega_1)}.
$$
\end{lemma}
\begin{proof}[Proof of the lemma]
By definition, $\Omega_0$ contains a fundamental domain ${\mathcal{D}}$ of
$\Gamma$, and furthermore ${\mathcal{D}}\Subset \Omega_0$. By construction, the lattices $\Gamma_N$ admit fundamental
domains ${\mathcal{D}}_N$ which converge (say in Hausdorff distance) toward
${\mathcal{D}}$. Therefore ${\mathcal{D}}_N \Subset \Omega_0$ for all $N$ sufficiently
large.
In particular, every vertex $\mathbf{z}\in{\mathcal{G}}_N^+(\Sigma)$ admits a lift
$\tilde \mathbf{z}\in{\mathcal{G}}_N^+({\mathbb{R}}^2)$ via $p_N$ such that $\tilde\mathbf{z}\in
\Omega_0$. This shows that
$$
\|\psi\|_{{\mathcal{C}}^0}\leq \|\psi\circ p_N\|_{{\mathcal{C}}^0_T(\Omega_1)}.
$$
If $N$ is sufficiently large, the finite differences of order $1$ or
$2$ of $\psi\circ p_N$ are well defined at $\tilde\mathbf{z}$ and depend only on values taken by
the function on the domain $\overline\Omega_1$. It follows by
Remark~\ref{rmk:tnorm} that
Remark~\ref{rmk:tnorm} that
\begin{equation}
\label{eq:controt1}
\|\psi\circ p_N\|_{{\mathcal{C}}^2}\leq \|\psi\circ p_N\|_{{\mathcal{C}}^2_T(\Omega_1)}.
\end{equation}
If $\xi$ is any discrete function in $C^0({\mathcal{G}}^+_N({\mathbb{R}}^2))$, for every
pair of vertices $\mathbf{v}_0$, $\mathbf{v}'$ of ${\mathcal{G}}_N^+({\mathbb{R}}^2)$ with $\mathbf{v}_0 \in
\overline \Omega_0$ and $\mathbf{v}' \not \in
\overline \Omega_1$, we have
$$
\frac{|\xi(\mathbf{v}_0)-\xi(\mathbf{v} ')|}{\|\mathbf{v}_0-\mathbf{v}'\|^\alpha} \leq
{|\xi(\mathbf{v}_0)-\xi(\mathbf{v} ')|} \leq 2\|\xi\|_{{\mathcal{C}}^0}
$$
since $ \|\mathbf{v}_0-\mathbf{v}'\|\geq 1$. We apply this inequality to the
second order finite differences of $\psi\circ p_N$. This shows that
the ${\mathcal{C}}^{2,\alpha}$-norm of $\psi\circ p_N$ is controlled by its
${\mathcal{C}}^2$-norm and its ${\mathcal{C}}^{2,\alpha}_T(\Omega_1)$-norm. Hence, by
\eqref{eq:controt1}, the ${\mathcal{C}}^{2,\alpha}_T(\Omega_1)$-norm controls the
${\mathcal{C}}^{2,\alpha}$-norm.
\end{proof}
Using the lemma and \eqref{eq:schau2}, we deduce that for every $N$ sufficiently large
and $\psi\in C^2_+(\mathcal{Q}_N(\Sigma))$, we have
$$
c'\|\psi\|_{{\mathcal{C}}^{2,\alpha}} \leq \|\psi\circ p_N\|_{{\mathcal{C}}_T^{2,\alpha}(\Omega_1)}
\leq
c \Big \{\| P_N
\psi\|_{{\mathcal{C}}^{0,\alpha}} + \|\psi\|_{{\mathcal{C}}^0} \Big
\}.
$$
This proves the theorem for $N$
sufficiently large and $\psi\in C^2_+(\mathcal{Q}_N(\Sigma))$.
The same result holds if $\psi \in C^2_-(\mathcal{Q}_N(\Sigma))$. For a general
$\psi$, we use the decomposition into components $\psi=\psi^++\psi^-$
and the theorem follows, for $N$ sufficiently large, by definition of the weak H\"older norms.
Finally, if the theorem holds for all $N$ sufficiently large, it holds for every
$N$: the space $C^2(\mathcal{Q}_N(\Sigma))$ is finite dimensional, so all norms on it
are equivalent.
\end{proof}
\begin{cor}
\label{cor:schauder}
There exists a constant $c_2>0$ such that for all $N\geq 0$ and for
all functions $\phi \in C^2(\mathcal{Q}_N(\Sigma))$, we have
$$
\|\Delta_N \phi \|_{{\mathcal{C}}^{0,\alpha}_w} + \|\phi \|_{{\mathcal{C}}^0} \geq c_2\|\phi\|_{{\mathcal{C}}^{2,\alpha}_w}.
$$
\end{cor}
\begin{proof}
We use the decomposition $\phi=\phi^++\phi^-$ and prove the
Corollary in the case of the operator
$$ \Delta_N' = \hat\Delta^E_N +\Gamma^E_N + K^E_N + K^I_N
$$ first.
Then $\Delta'_N\phi = \psi = \psi^++\psi^-$, where $\psi^+ = P_N\phi^+ +
\kappa_N(\phi ^+ - \bar\phi^-)$ and $\psi^- = P_N\phi^- +
\kappa_N(\phi ^- - \bar\phi^+)$.
Since $\kappa_N$ converges in the sense of Lemma~\ref{cor:kappaconv},
the norm $\|\kappa_N\|_{{\mathcal{C}}^{0,\alpha}_w}$ is uniformly bounded
in $N$. Thus a ${\mathcal{C}}^{0,\alpha}_w$-bound on $\phi$ provides
a ${\mathcal{C}}^{0,\alpha}_w$-bound on $\kappa_N\phi^\pm$. Similarly, a
${\mathcal{C}}^{0,\alpha}_w$-bound on $\phi$ provides a ${\mathcal{C}}^{0,\alpha}_w$-bound on
$\bar\phi^\pm$. In other words, there exists a constant $c'>0$
independent of $N$ and $\phi$ such that
$$
\|\kappa_N(\phi ^+ - \bar\phi^-)\|_{{\mathcal{C}}_w^{0,\alpha}} \leq
c'\|\phi\|_{{\mathcal{C}}^{0,\alpha}_w} \mbox { and }
\|\kappa_N(\phi ^- - \bar\phi^+)\|_{{\mathcal{C}}_w^{0,\alpha}} \leq
c'\|\phi\|_{{\mathcal{C}}_w^{0,\alpha}} .
$$
It follows that
$$
\|\Delta'_N\phi\|_{{\mathcal{C}}_w^{0,\alpha}} + 2c'\|\phi\|_{{\mathcal{C}}_w^{0,\alpha}} \geq \|P_N\phi\|_{{\mathcal{C}}_w^{0,\alpha}} ,
$$
and by Theorem~\ref{theo:thomee}
\begin{equation}
\label{eq:schtrick}
\|\Delta'_N\phi\|_{{\mathcal{C}}_w^{0,\alpha}} + (2c'+1)\|\phi\|_{{\mathcal{C}}_w^{0,\alpha}}
\geq c\|\phi\|_{{\mathcal{C}}_w^{2,\alpha}} .
\end{equation}
We are not quite finished since we have a ${\mathcal{C}}_w^{0,\alpha}$-estimate
for $\phi$ in the above inequality rather than a ${\mathcal{C}}^0$-estimate as in the corollary.
We prove a weaker version of the corollary first: we show that there
exists a constant $c''>0$ such that for all $N$ and for all $\phi$,
\begin{equation}
\label{eq:schtrick2}
\|\Delta'_N\phi\|_{{\mathcal{C}}_w^{0,\alpha}} + \|\phi\|_{{\mathcal{C}}^{0}}
\geq c''\|\phi\|_{{\mathcal{C}}_w^{0,\alpha}} .
\end{equation}
If this is true, the corollary trivially follows in the case of $\Delta'_N$ from
\eqref{eq:schtrick} and \eqref{eq:schtrick2}. Finally,
Proposition~\ref{prop:errors} completes the proof in the case of
$\Delta_N =\Delta'_N+{\mathcal{E}}^E_N+{\mathcal{E}}^I_N$.
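For clarity, let us spell out how \eqref{eq:schtrick} and \eqref{eq:schtrick2} combine, with the same constants $c$, $c'$ and $c''$ as above. Substituting the bound $\|\phi\|_{{\mathcal{C}}_w^{0,\alpha}} \leq (c'')^{-1}\big(\|\Delta'_N\phi\|_{{\mathcal{C}}_w^{0,\alpha}} + \|\phi\|_{{\mathcal{C}}^{0}}\big)$ from \eqref{eq:schtrick2} into \eqref{eq:schtrick} yields
\begin{align*}
c\|\phi\|_{{\mathcal{C}}_w^{2,\alpha}}
&\leq \|\Delta'_N\phi\|_{{\mathcal{C}}_w^{0,\alpha}} + (2c'+1)\|\phi\|_{{\mathcal{C}}_w^{0,\alpha}}\\
&\leq \Big(1+\frac{2c'+1}{c''}\Big)\Big(\|\Delta'_N\phi\|_{{\mathcal{C}}_w^{0,\alpha}} + \|\phi\|_{{\mathcal{C}}^{0}}\Big),
\end{align*}
so the corollary holds for $\Delta'_N$ with the constant $c\,c''/(c''+2c'+1)$.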
Assume that \eqref{eq:schtrick2} does not hold.
Then there exists a sequence of
discrete functions $\phi_{N_k}\in C^2(\mathcal{Q}_{N_k}(\Sigma))$ with the
property that
$$
\|\phi_{N_k}\|_{{\mathcal{C}}_w^{0,\alpha}} = 1,\quad
\|\Delta'_{N_k}\phi_{N_k}\|_{{\mathcal{C}}_w^{0,\alpha}} \to 0 \mbox{ and }
\|\phi_{N_k}\|_{{\mathcal{C}}^0} \to 0.
$$
$$
Using Inequality~\eqref{eq:schtrick}, we obtain a uniform
${\mathcal{C}}^{2,\alpha}_w$-bound on $\phi_{N_k}$.
By the Arzel\`a-Ascoli Theorem~\ref{theo:ascoli}, we may assume, up to
extraction of a subsequence, that $\phi_{N_k}$ converges in the
${\mathcal{C}}^2_w$-sense toward a pair of functions $(\phi^+,\phi^-)$ on
$\Sigma$. Since ${\mathcal{C}}^2_w$-convergence implies ${\mathcal{C}}^0$-convergence, the
condition $\|\phi_{N_k}\|_{{\mathcal{C}}^0}\to 0$ forces $\phi^+=\phi^-=0$.
This implies that $\phi_{N_k}\stackrel{{\mathcal{C}}_w^2}\longrightarrow
(0,0)$, and in particular $\|\phi_{N_k}\|_{{\mathcal{C}}_w^2}\to 0$. Since the
discrete ${\mathcal{C}}_w^2$-norm controls the discrete ${\mathcal{C}}_w^{0,\alpha}$-norm,
this contradicts the assumption $\|\phi_{N_k}\|_{{\mathcal{C}}_w^{0,\alpha}}=1$.
\end{proof}
\subsection{Spectral gap}
\label{sec:specgap}
We define the discrete functions
$$
{\mathbf{1}}_N^\pm \in C^0({\mathcal{G}}_N^\pm(\Sigma)) \simeq C^2_\pm(\mathcal{Q}_N(\Sigma))
$$
by
$$
\ip{{\mathbf{1}}_N^\pm,\mathbf{z}} = \left \{
\begin{array}{ll}
1& \mbox{ if } \mathbf{z}\in{\mathfrak{C}}_0({\mathcal{G}}^\pm_N(\Sigma)) \\
0& \mbox{ if } \mathbf{z}\in{\mathfrak{C}}_0({\mathcal{G}}^\mp_N(\Sigma))
\end{array}
\right .
$$
We also define the discrete functions ${\mathbf{1}}_N, \zeta_N \in C^0({\mathcal{G}}_N(\Sigma))$ by
\begin{align*}
{\mathbf{1}}_N & = {\mathbf{1}}^+_N +{\mathbf{1}}^ -_N\\
\zeta_N &=\theta_N^{-1}\cdot{\mathbf{1}}_N^+ - \theta_N^{-1}\cdot{\mathbf{1}}_N^-
\end{align*}
where $\theta_N$ is any discrete function sufficiently close to
$\theta^u_N$ or $\theta_N^v$. For instance, we put
$$
\theta_N =\frac 12( \theta^u_N + \theta^v_N).
$$
We define the spaces of discrete functions ${\mathscr{K}}_N \subset C^0({\mathcal{G}}_N(\Sigma))$ by
\begin{equation}
\label{eq:almostkernel}
{\mathscr{K}}_N = \left \{
\begin{array}{ll}
{\mathbb{R}}\cdot {\mathbf{1}}_N & \mbox{ if $(p,\ell)$ is nondegenerate,} \\
{\mathbb{R}}\cdot {\mathbf{1}}_N \oplus {\mathbb{R}}\cdot \zeta_N &
\mbox{ in the degenerate case.}
\end{array}
\right .
\end{equation}
In addition, we denote by
$$
{\mathscr{K}}_N^\perp \subset C^2(\mathcal{Q}_N(\Sigma))
$$
the orthogonal complement of ${\mathscr{K}}_N$, with respect to the $\ipp{\cdot,\cdot}$-inner product.
\begin{rmks}
\begin{itemize}
\item The function ${\mathbf{1}}_N$ and, more generally, any constant function is contained in the kernel of
the operator $\Delta_N$. Indeed, $\Delta_N =\delta_N\delta_N^\star$ and
$\delta_N^\star{\mathbf{1}}_N = 0$ by formula~\eqref{eq:dNstar}.
\item The sequence of discrete functions $\zeta_N$ converges toward the
pair of functions $(\theta^{-1},-\theta^{-1})$, at least in the
${\mathcal{C}}^2_w$-sense. Whenever $\ell$ is degenerate, we must have
$$
\Delta_N\zeta_N \stackrel{{\mathcal{C}}^0_w}\longrightarrow (0,0),
$$
by Theorem~\ref{theo:limit} and Proposition~\ref{prop:kernel}.
\item The kernel of $\Delta_N$
is at least $1$-dimensional. If $\ell$ is degenerate, our next result, Theorem~\ref{theo:specgap}, implies that
for $N$ sufficiently large, $\ker\Delta_N$ has dimension at most
$2$. Although $\zeta_N$ may not belong to $\ker\Delta_N$, the previous
remark shows that this function is approximately in the kernel. In
this sense, ${\mathscr{K}}_N$ may be thought of as an approximate kernel of $\Delta_N$.
\end{itemize}
\end{rmks}
\begin{theo}
\label{theo:specgap}
There exists a real constant $c_3>0$ such that, for every positive integer $N$
sufficiently large and every discrete function $\phi\in
{\mathscr{K}}_N^\perp$, we
have
$$
\|\Delta_N\phi\|_{{\mathcal{C}}^{0,\alpha}_w} \geq c_3\|\phi\|_{{\mathcal{C}}^{2,\alpha}_w}.
$$
\end{theo}
\begin{proof}
We assume that $\ell$ is degenerate; the proof in the
nondegenerate case is completely similar, and we leave the details to the reader.
We start by proving a weaker version of the theorem:
\begin{lemma}
\label{lemma:specgap}
There exists a real constant $c_4>0$ such that, for every positive integer $N$
sufficiently large and every discrete function $\phi\in
{\mathscr{K}}_N^\perp$, we
have
$$
\|\Delta_N\phi\|_{{\mathcal{C}}_w^{0,\alpha}} \geq c_4\|\phi\|_{{\mathcal{C}}^{0}}.
$$
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemma:specgap}]
Assume that the result is false. Then there exists a sequence
$\phi_{N_k}\in{\mathscr{K}}_{N_k}^\perp$ such that
$$
\forall k \quad \|\phi_{N_k}\|_{{\mathcal{C}}^0}=1, \mbox{ and }
\|\Delta_{N_k}\phi_{N_k}\|_{{\mathcal{C}}_w^{0,\alpha}}\longrightarrow 0.
$$
Using Corollary~\ref{cor:schauder}, we deduce a
${\mathcal{C}}^{2,\alpha}$-bound on $\phi_{N_k}$. Thanks to the Ascoli-Arzela
Theorem~\ref{theo:ascoli}, we may assume that $\phi_{N_k}$ converges
in the weak ${\mathcal{C}}^2$-sense, up to further extraction:
\begin{equation}
\label{eq:lemmaconvphi}
\phi_{N_k}\stackrel{{\mathcal{C}}^2_w}\longrightarrow (\phi^+,\phi^-).
\end{equation}
By Theorem~\ref{theo:limit}, we conclude that
$$
\Delta_{N_k}\phi_{N_k}\stackrel{{\mathcal{C}}^0}\longrightarrow \Xi(\phi^+,\phi^-).
$$
The condition $\|\Delta_{N_k}\phi_{N_k}\|_{{\mathcal{C}}_w^{0,\alpha}}\to 0$
implies that $\|\Delta_{N_k}\phi_{N_k}\|_{{\mathcal{C}}^{0}}\to 0$, which shows
that the limit is $(0,0)$. Therefore
\begin{equation}
\label{eq:kerxi}
(\phi^+,\phi^-)\in\ker\Xi.
\end{equation}
As before, we assume that we are in the degenerate case; the
nondegenerate case is treated similarly.
By assumption, $\phi_{N_k}$ is orthogonal to ${\mathscr{K}}_{N_k}$, hence
$\ipp{\phi_{N_k},{\mathbf{1}}_{N_k}}=\ipp{\phi_{N_k},\zeta_{N_k}}=0$. Since all
these discrete functions converge in the ${\mathcal{C}}^0$-sense, the
limit also satisfies the orthogonality relations, that is
$$\ipp{(\phi^+,\phi^-), ({\mathbf{1}},{\mathbf{1}})} = \ipp{(\phi^+,\phi^-),
(\theta^{-1},-\theta^{-1})}=0.$$
In other words $(\phi^+,\phi^-)$ is $L^2$-orthogonal to $\ker\Xi$. In
view of~\eqref{eq:kerxi} we deduce that
$$
\phi^+=\phi^-=0,
$$
and by~\eqref{eq:lemmaconvphi}, we deduce that
$$
\|\phi_{N_k}\|_{{\mathcal{C}}^0}\longrightarrow 0
$$
which contradicts the assumption $\|\phi_{N_k}\|_{{\mathcal{C}}^0}=1$.
This completes the proof of the lemma.
\end{proof}
By Lemma~\ref{lemma:specgap}, we have for every
$\phi\in{\mathscr{K}}_N^\perp$
$$
(1+c_4^{-1})\|\Delta_N\phi\|_{{\mathcal{C}}_w^{0,\alpha}} \geq
\|\Delta_N\phi\|_{{\mathcal{C}}_w^{0,\alpha}}+ \|\phi\|_{{\mathcal{C}}^{0}}.
$$
By Corollary~\ref{cor:schauder}, the RHS is an upper bound for
$c_2\|\phi\|_{{\mathcal{C}}_w^{2,\alpha}}$. The constant
$$
c_3 =\frac{c_2}{1+c_4^{-1}}
$$
then satisfies the theorem, which completes the proof.
\end{proof}
\begin{cor}
\label{cor:greenprep}
\begin{enumerate}
\item If $(p,\ell)$ is nondegenerate, then for every $N$ sufficiently large,
the kernel of $\Delta_N$ is given by $\ker\Delta_N ={\mathscr{K}}_N=
{\mathbb{R}}\cdot {\mathbf{1}}_N$.
Furthermore ${\mathscr{K}}_N^\perp$ is
preserved by $\Delta_N$, which induces an isomorphism $\Delta_N:{\mathscr{K}}_N^\perp\to
{\mathscr{K}}_N^\perp$.
\item More generally, including the case where $(p,\ell)$ is degenerate,
there is a direct sum decomposition for every $N$ sufficiently large
$$
C^2(\mathcal{Q}_N(\Sigma)) = {\mathscr{K}}_N\oplus \Delta_N({\mathscr{K}}_N ^\perp),
$$
and a constant $c_6$ independent of $N$, such that for all $\phi \in
C^2(\mathcal{Q}_N(\Sigma))$ decomposed according to the above splitting as
$\phi=\bar\phi +\phi^\Delta$, we have
\begin{equation}
\label{eq:controlgreen}
c_6\|\phi\|_{{\mathcal{C}}_w^{0,\alpha}}\geq \|\bar \phi\|_{{\mathcal{C}}_w^{0,\alpha}} +
\|\phi^\Delta \|_{{\mathcal{C}}_w^{0,\alpha}}.
\end{equation}
\end{enumerate}
\end{cor}
\begin{proof}
The first statement is a consequence of the second: in the
nondegenerate case, ${\mathscr{K}}_N$ is one dimensional and
${\mathscr{K}}_N\subset\ker\Delta_N$. Hence $\Delta_N({\mathscr{K}}_N^\perp)$ has
codimension at most $1$ in $C^2(\mathcal{Q}_N(\Sigma))$. By the second
statement, the codimension is exactly $1$. Therefore $\ker\Delta_N
={\mathscr{K}}_N$. The rest of the statement follows from the fact that
$\Delta_N$ is self-adjoint.
It therefore suffices to prove the second statement of the corollary.
We start by proving that we have a splitting as claimed. Suppose
that the intersection ${\mathscr{K}}_N\cap\Delta_N({\mathscr{K}}_N^\perp)$ is not
reduced to $0$ for arbitrarily large $N$. Then we may find a
sequence $\phi_{N_k}$ contained in these intersections such that
$\|\phi_{N_k}\|_{{\mathcal{C}}^{0}} = 1$.
We notice that $\|{\mathbf{1}}_N\|_{{\mathcal{C}}^{0}} =1$ and that
$\|\zeta_N\|_{{\mathcal{C}}^{0}}$ converges toward a positive constant, since
$\zeta_N$ converges toward the pair of functions $(\theta^{-1},-\theta^{-1})$.
Since $\phi_{N_k}\in {\mathscr{K}}_{N_k}$, we may write
$$
\phi_{N_k}= a_k{\mathbf{1}}_{N_{k}}, \mbox { resp. } \phi_{N_k}=
a_k{\mathbf{1}}_{N_{k}}+b_k\zeta_{N_k} \mbox{ in the degenerate case.}
$$
We deduce that the uniform ${\mathcal{C}}^{0}$-bound on $\phi_{N_k}$
provides a uniform bound on the coefficients $a_k$ and $b_k$.
After extracting a suitable subsequence, we may assume that the
coefficients converge as $k$ goes to infinity. In particular
$\phi_{N_k}$ converges toward an element of $\ker\Xi$, say in the
${\mathcal{C}}^0$-sense.
By construction we
have a uniform ${\mathcal{C}}^1_w$-bound on $\phi_{N_k}$, which provides a
uniform ${\mathcal{C}}^{0,\alpha}_w$-bound.
On the other hand, $\phi_{N_k}\in \Delta_{N_k}({\mathscr{K}}_{N_k}^\perp)$, so
that there exists a sequence $\psi_{N_k}\in{\mathscr{K}}_{N_k}^\perp$ with
$\phi_{N_k}=\Delta_{N_k}\psi_{N_k}$. By Theorem~\ref{theo:specgap},
the uniform ${\mathcal{C}}_w^{0,\alpha}$-bound on $\phi_{N_k}$ provides a
uniform ${\mathcal{C}}_w^{2,\alpha}$-bound on $\psi_{N_k}$. By the Arzel\`a-Ascoli
Theorem~\ref{theo:ascoli}, we may assume, after
extraction, that $\psi_{N_k}$ converges
in the ${\mathcal{C}}^2_w$-sense toward a limit $(\psi^+,\psi^-)$.
It follows that $\phi_{N_k}=\Delta_{N_k}\psi_{N_k}$
converges in the ${\mathcal{C}}^0$-sense toward $\Xi(\psi^+,\psi^-)$.
In conclusion, $\phi_{N_k}$ converges in the ${\mathcal{C}}^0$-sense to an element
of $\Im\Xi\cap\ker\Xi =\{ 0 \}$.
Hence the limit of $\phi_{N_k}$ must be the pair of functions
$(0,0)$, which contradicts the fact that $\| \phi_{N_k}
\|_{{\mathcal{C}}^0}=1$. Thus
$$
{\mathscr{K}}_N \cap \Delta_N({\mathscr{K}}_N^\perp) = \{0\}
$$
for all sufficiently large $N$. By Theorem~\ref{theo:specgap}, we know
that the restriction of $\Delta_N$ to ${\mathscr{K}}_N^\perp$ is injective
provided $N$ is large enough. For dimensional reasons, we have a
splitting
$$
{\mathscr{K}}_N \oplus \Delta_N({\mathscr{K}}_N^\perp) = C^2(\mathcal{Q}_N(\Sigma)).
$$
We now proceed to the last part of the second statement. If the
control~\eqref{eq:controlgreen} does not hold, we find a sequence of
discrete functions
$\phi_{N_k}\in C^2(\mathcal{Q}_{N_k}(\Sigma))$ with decompositions
$$
\phi_{N_k}=\bar\phi_{N_k}+\phi^\Delta_{N_k},
$$
and the property that
$$
\|\phi_{N_k}\|_{{\mathcal{C}}_w^{0,\alpha}}\to 0 \mbox{ and }
\|\bar\phi_{N_k}\|_{{\mathcal{C}}_w^{0,\alpha}}+
\|\phi^\Delta_{N_k}\|_{{\mathcal{C}}_w^{0,\alpha}} = 1.
$$
The ${\mathcal{C}}^{0,\alpha}_w$-bound on $\bar\phi_{N_k}$ provides a uniform
${\mathcal{C}}^0$-bound. As in the first part of the proof, we may use this
bound to show that, up to extraction of a subsequence, $\bar\phi_{N_k}$
converges in the ${\mathcal{C}}^1_w$-sense toward a limit $(\bar\phi^+,\bar\phi^-) \in \ker\Xi$.
Similarly, the ${\mathcal{C}}_w^{0,\alpha}$ bound on $\phi^\Delta_{N_k}$ can be
used to show that, up to extraction of a subsequence, the sequence
converges in the ${\mathcal{C}}^0$-sense toward a limit $(\phi^+_\Delta,\phi^-_\Delta)$
in the image of $\Xi$.
Altogether, $\phi_{N_k}$ converges
in the ${\mathcal{C}}^0$-sense toward a limit $(\bar \phi^+ +\phi^+_\Delta,\bar
\phi^- +\phi^-_\Delta) \in \ker\Xi\oplus \Im \Xi$.
The fact that $\|\phi_{N_k}\|_{{\mathcal{C}}_w^{0,\alpha}} \to 0$ implies that
$\|\phi_{N_k}\|_{{\mathcal{C}}^0}\to 0$, and by
uniqueness of the limit we deduce that $\bar\phi^\pm=0$ and
$\phi^\pm_\Delta=0$.
However, $\bar\phi_{N_k}$ converges in the stronger ${\mathcal{C}}^1_w$-sense,
hence $\|\bar\phi_{N_k}\|_{{\mathcal{C}}^{0,\alpha}_w}\to 0$. We
deduce that
$$\|\phi^\Delta_{N_k}\|_{{\mathcal{C}}^{0,\alpha}_w}=\|\phi_{N_k}-\bar\phi_{N_k}\|_{{\mathcal{C}}^{0,\alpha}_w}\leq
\|\phi_{N_k}\|_{{\mathcal{C}}^{0,\alpha}_w} +\| \bar\phi_{N_k}\|_{{\mathcal{C}}^{0,\alpha}_w}\to 0,$$
which contradicts the assumption $\|\bar\phi_{N_k}\|_{{\mathcal{C}}_w^{0,\alpha}} + \|\phi^\Delta_{N_k}\|_{{\mathcal{C}}_w^{0,\alpha}}=1$.
\end{proof}
\subsection{Modified construction in the degenerate case}
\label{sec:deg}
The situation for degenerate pairs $(p,\ell)$
came as a surprise to us. Our first guess was
that the operators $\Delta_N$ should converge in a reasonable sense
toward
the operator involved in the smooth setting~\eqref{eq:Deltaf}. Consequently, we expected the kernel of $\Delta_N$ to be one dimensional,
at least for $N$ large enough.
The
first clue that this was not true came from a local model: in this model,
we do not choose $\Sigma$ to
be a torus, but a copy of ${\mathbb{R}}^2$ embedded in ${\mathbb{R}}^{2n}$ as an isotropic
Euclidean plane identified with ${\mathbb{R}}^2$ with its quadrangulation $\mathcal{Q}_N({\mathbb{R}}^2)$. Then one can check that the function ${\mathbf{1}}_N^+-{\mathbf{1}}_N^-$
belongs to the kernel of $\delta_N^\star$ directly from the formula~\eqref{eq:dNstar}.
The presence of a $2$-dimensional almost kernel ${\mathscr{K}}_N$ in the
degenerate case creates some difficulties for solving our
problem. We overcome them by slightly modifying our construction.
\subsubsection{The setup}
We start with a degenerate pair $(p_S,\ell_S)$, where
$$
\ell_S : S\longrightarrow {\mathbb{R}}^{2n}
$$
is an isotropic immersion and $S$ is a surface diffeomorphic to an oriented torus.
We carry out the constructions of quadrangulations $\mathcal{Q}_N(S)$ and graphs ${\mathcal{G}}_N(S)$ exactly as
in the case of $\Sigma$ (cf. \S\ref{sec:anal}), except for one crucial detail.
The lattice group $\Gamma(S)$ of the covering map $p_S:{\mathbb{R}}^2\to S$
admits an oriented basis
$(\gamma_1(S),\gamma_2(S))$. This is where
the difference
with~\S\ref{sec:sqlat} arises: we choose a best approximation $\gamma_2^N(S)\in\Lambda_N^{ch}$
of $\gamma_2(S)$ and
$\gamma_1^N(S)\in \Lambda_N\setminus \Lambda_N^{ch}$ for
$\gamma_1(S)$. Notice that in the case of $\Sigma$, both
$\gamma_i^N$ were chosen in $\Lambda^{ch}_N$.
This minor change
still allows us to construct families of quadrangulations and
checkers graphs. The only difference is that the action of the lattice
$$
\Gamma_N(S)=\operatorname{span}\{\gamma_1^N(S),\gamma_2^N(S)\}
$$
does not preserve the connected components of the decomposition
$${\mathcal{G}}_N({\mathbb{R}}^2)= {\mathcal{G}}^+_N({\mathbb{R}}^2) \cup {\mathcal{G}}^-_N({\mathbb{R}}^2),$$
and this splitting
does not descend to a splitting of ${\mathcal{G}}_N(S)$. In particular, discrete
functions $\phi\in C^2(\mathcal{Q}_N(S))$ do not split into a positive and a
negative component. However, we may construct
the constant function ${\mathbf{1}}_N\in C^2(\mathcal{Q}_N(S))$, which is the constant
$1$ on every face of the quadrangulation.
\subsubsection{The double cover}
We define
$$
\Gamma = \operatorname{span} \{ \gamma_1,\gamma_2 \}
$$
where
$$
\gamma_1=2\gamma_1(S) \mbox{ and } \gamma_2=\gamma_2(S).
$$
The quotient $\Sigma = {\mathbb{R}}^2/\Gamma$ comes with a covering map of
index $2$
\begin{equation}
\label{eq:piS}
\Phi^S:\Sigma\to S,
\end{equation}
and an action of $G\simeq \Gamma/\Gamma(S)\simeq {\mathbb{Z}}_2$ on $\Sigma$ by deck transformations.
We define accordingly
$$
\Gamma_N = \operatorname{span} \{ \gamma_1^N,\gamma_2^N \}
$$
where
$$
\gamma_1^N=2\gamma^N_1(S) \mbox{ and } \gamma^N_2=\gamma^N_2(S).
$$
Notice that $2\Lambda_N\subset\Lambda_N^{ch}$, hence by definition
$\gamma_i^N\in\Lambda^{ch}_N$.
We also have double covers
$$
\Phi_N^S:\Sigma\to S
$$
with deck transformations $G_N\simeq \Gamma_N/\Gamma_N(S)\simeq {\mathbb{Z}}_2$
which come from the canonical projections
${\mathbb{R}}^2/\Gamma_N\to{\mathbb{R}}^2/\Gamma_N(S)$.
In particular there are canonical embeddings of discrete functions
spaces induced by pullback
$$
(\Phi^S_N)^*:C^2(\mathcal{Q}_N(S))\longrightarrow C^2(\mathcal{Q}_N(\Sigma)).
$$
The action of $G_N$ induces an action on $C^2(\mathcal{Q}_N(\Sigma))$ and the
image of $(\Phi^S_N)^*$ consists of the discrete functions which are $G_N$-invariant.
\subsubsection{Meshes and operators for the modified construction}
As in the case of $\Sigma$, we may define the samples $\tau_N^S\in
{\mathscr{M}}_N(S)=C^0(\mathcal{Q}_N(S))\otimes{\mathbb{R}}^{2n}$ of the map $\ell_S:S\to{\mathbb{R}}^{2n}$, the
inner product $\ipp{\cdot,\cdot}$, the
operators $\delta_N$, $\delta_N^\star$, and so on. Using the pullback maps
$$
(\Phi^S_N)^*:C^0(\mathcal{Q}_N(S))\otimes {\mathbb{R}}^{2n} \longrightarrow C^0(\mathcal{Q}_N(\Sigma))\otimes{\mathbb{R}}^{2n},
$$
we see that the samples satisfy $\tau_N=(\Phi^S_N)^*\tau_N^S$. In other words, they are also the
samples of the lifted isotropic immersion $\ell =\ell_S \circ \Phi^S:\Sigma \to {\mathbb{R}}^{2n}$. Then
$\tau_N$ also induces operators denoted $\delta_N$, $\delta^\star_N$ and $\Delta_N$, which commute with the pullback operation by naturality of the construction.
\subsubsection{Spectral gap for the degenerate case}
All the norms defined on $C^2(\mathcal{Q}_N(\Sigma))$ induce norms on
$C^2(\mathcal{Q}_N(S))$ via the pullbacks $(\Phi_N^S)^*$, denoted in the same way. For instance,
for $\phi\in C^2(\mathcal{Q}_N(S))$, we have
$$
\|\phi\|_{{\mathcal{C}}^{2,\alpha}} = \|\phi\circ \Phi_N^S\|_{{\mathcal{C}}^{2,\alpha}}.
$$
Then we prove the following result:
\begin{theo}
\label{theo:specgap2}
Let $\ell_S:S\to{\mathbb{R}}^{2n}$ be an isotropic immersion of an oriented
surface diffeomorphic to a torus with a conformal cover $p:{\mathbb{R}}^2\to S$. There exists a constant $c_5>0$
such that for every $N$ sufficiently
large and every $\phi\in C^2(\mathcal{Q}_N(S))$ with $\ipp{\phi,{\mathbf{1}}_N}=0$, we
have
$$
\|\Delta_N\phi\|_{{\mathcal{C}}^{0,\alpha}} \geq c_5\|\phi\|_{{\mathcal{C}}^{2,\alpha}}.
$$
\end{theo}
\begin{proof}
We choose $N$ sufficiently large, so that the assumptions of Theorem~\ref{theo:specgap} are satisfied.
Let $\phi\in C^2(\mathcal{Q}_N(S))$ be a discrete function such that $\ipp{\phi,{\mathbf{1}}_N}=0$. There may be some ambiguity in our notations, so we emphasize that
$(\Phi^S_N)^*{\mathbf{1}}_N$ is equal to ${\mathbf{1}}_N\in C^2(\mathcal{Q}_N(\Sigma))$.
We consider the pullback $\tilde \phi_N = \phi\circ\Phi_N^S$ of $\phi$, regarded as an element of $C^2(\mathcal{Q}_N(\Sigma))$.
By definition of inner products and pullbacks by $2$-fold covers, we have
$$
\ipp{\phi,{\mathbf{1}}_N} = \frac 12\ipp{\tilde \phi_N,{\mathbf{1}}_N} ,
$$
hence, by assumption, $\ipp{\tilde \phi_N,{\mathbf{1}}_N}=0$.
Notice that the action of $G_N=\ip{\Upsilon_N}\simeq{\mathbb{Z}}_2$ on
$C^2(\mathcal{Q}_N(\Sigma))$ respects the inner product $\ipp{\cdot,\cdot}$. Since $\gamma_1^N(S)\not\in \Lambda^{ch}_N$, we also have
$$
\Upsilon_N\cdot {\mathbf{1}}_N^+ = {\mathbf{1}}_N^- \mbox { and conversely } \Upsilon_N\cdot {\mathbf{1}}_N^- = {\mathbf{1}}_N^+.
$$
By construction, the discrete function $\theta_N$ is $G_N$-invariant. Thus
$$
\Upsilon_N\cdot\zeta_N = \Upsilon_N\cdot(\theta_N^{-1}({\mathbf{1}}^+_N -
{\mathbf{1}}^-_N)) = \theta_N^{-1}(-{\mathbf{1}}^+_N + {\mathbf{1}}^-_N) = -\zeta_N.
$$
The above property implies that any $G_N$-invariant discrete function
is orthogonal to $\zeta_N$:
$$
\ipp{\tilde\phi_N,\zeta_N}=\ipp{\Upsilon_N\cdot \tilde\phi_N,\Upsilon_N\cdot\zeta_N}=\ipp{\tilde\phi_N,-\zeta_N}
$$
therefore
$$
\ipp{\tilde\phi_N,\zeta_N}=0.
$$
In conclusion $\tilde\phi_N$ is orthogonal to ${\mathscr{K}}_N$ and we may
apply Theorem~\ref{theo:specgap} to $\tilde\phi_N$, which proves the
theorem with $c_5=c_3$.
\end{proof}
\begin{rmk}
Notice that Theorem~\ref{theo:specgap2} applies whether the pair $(p,\ell_S)$
is degenerate or
nondegenerate. The applications are different from
Theorem~\ref{theo:specgap}, in the sense that we are dealing with different
types of quadrangulations and meshes.
For instance the space of quadrangular meshes ${\mathscr{M}}_N$ admits a shear action whereas ${\mathscr{M}}_N(S)$ does not. Indeed the checkers graph associated to
$\mathcal{Q}_N(S)$ is connected whereas the checkers graph of $\mathcal{Q}_N(\Sigma)$ is not.
\end{rmk}
\section{Fixed point theorem}
\label{sec:fpt}
\subsection{Fixed point equation}
All the tools have been introduced in order to be able to apply the
contraction mapping principle.
We consider an isotropic immersion $\ell:\Sigma\to{\mathbb{R}}^{2n}$, its sequence of samples $\tau_N\in {\mathscr{M}}_N$ as before and the map
$$
F_N:C^2(\mathcal{Q}_N(\Sigma))\longrightarrow C^2(\mathcal{Q}_N(\Sigma))
$$
defined by
$$F_N(\phi) = \mu^r_N\left (\tau_N - J\delta_N^\star\phi \right ).$$
Solving the equation $F_N(\phi) = 0$ provides an isotropic
perturbation of the sample mesh $\tau_N$.
\begin{rmk}
The perturbative approach introduced here is an analogue of
Theorem~\ref{theo:perttoy} in the smooth setting.
Indeed, let us denote by $f_N:\Sigma\to {\mathbb{R}}^{2n}$ a smooth perturbation of
$\ell:\Sigma\to{\mathbb{R}}^{2n}$ and $h$ a smooth function on $\Sigma$.
The perturbation $\tau_N-J\delta_N^\star\phi$ is a discrete analogue
of the smooth perturbation $K(h,f_N)=\exp_{f_N}( - i h)=
f_N-{\mathfrak{J}}\delta_{f_N}^\star h$. Thus the equation $F_N(\phi)=0$ is the discrete analogue of the equation
$F(h,f_N)=0$ (cf. \eqref{eq:implicit}) in the smooth setting.
\end{rmk}
The differential of $F_N$ at the origin is given by
$DF_N|_0 \cdot\phi= - D\mu^r_N|_{\tau_N}\circ J \delta_N^\star (\phi) = \delta_N\delta_N^\star \phi$, hence
$$
DF_N|_0 \cdot\phi= \Delta_N\phi.
$$
As pointed out in Lemma~\ref{lemma:quadratic}, the map $\mu_N^r$ is quadratic. According to Definition~\ref{dfn:quadratic} and~\eqref{eq:etaN} one can write
$$
\Delta_N \phi = -2\Psi_N^r(\tau_N,J\delta_N^\star\phi)
$$
and
$$
F_N(\phi) = \eta_N + \Delta_N \phi + \mu^r_N(J\delta_N^\star \phi),
$$
where $\eta_N=F_N(0)$ is the error term.
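For the reader's convenience, this expansion can be obtained directly from the quadratic structure of $\mu^r_N$; the following is a sketch, assuming the polarization convention $\mu^r_N(\tau)=\Psi^r_N(\tau,\tau)$ of Definition~\ref{dfn:quadratic}:

```latex
\begin{aligned}
F_N(\phi) &= \mu^r_N\big(\tau_N - J\delta_N^\star\phi\big) \\
          &= \mu^r_N(\tau_N) + 2\Psi^r_N\big(\tau_N, -J\delta_N^\star\phi\big)
             + \mu^r_N\big({-J\delta_N^\star\phi}\big) \\
          &= \eta_N + \Delta_N\phi + \mu^r_N\big(J\delta_N^\star\phi\big),
\end{aligned}
```

where the last line uses $\mu^r_N(-x)=\mu^r_N(x)$, valid for any quadratic map, and the identity $\Delta_N\phi=-2\Psi^r_N(\tau_N,J\delta_N^\star\phi)$ coming from the computation of $DF_N|_0$.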
We introduce the space
$${\mathscr{H}}_N = \Delta_N \left ({\mathscr{K}}_N^\perp \right ),$$
where ${\mathscr{K}}_N$ is the almost kernel of $\Delta_N$ defined at~\eqref{eq:almostkernel}.
By Corollary~\ref{cor:greenprep}, we have a direct sum decomposition
$$
C^2(\mathcal{Q}_N(\Sigma))={\mathscr{K}}_N\oplus {\mathscr{H}}_N,
$$
for every $N$ sufficiently large.
We define the Green operator ${{G}}_N$ of $\Delta_N$ by
$$
{{G}}_N (\psi)= \left \{
\begin{array}{l}
0 \mbox { if $\psi \in {\mathscr{K}}_N $}\\
\phi \in {\mathscr{K}}_N^\perp \mbox{ with the property that
$\Delta_N\phi =\psi$ if $\psi\in {\mathscr{H}}_N$}
\end{array}
\right .
$$
The Green operator is bounded independently of $N$, which is a crucial
property for the application of the fixed point principle:
\begin{prop}
\label{prop:green}
There exists a constant $c_8>0$ such that for every $N$ sufficiently
large and every $\phi\in C^2(\mathcal{Q}_N(\Sigma))$, we have
$$
c_8 \|\phi\|_{{\mathcal{C}}^{0,\alpha}_w} \geq \|{{G}}_N(\phi)\|_{{\mathcal{C}}^{2,\alpha}_w}.
$$
\end{prop}
\begin{proof}
For $N_0$ sufficiently large and every $N\geq N_0$, we may use the decomposition
$\phi=\bar\phi + \phi_\Delta$, with $\bar\phi\in{\mathscr{K}}_N$ and $\phi_\Delta\in{\mathscr{H}}_N$, and the fact that
\begin{equation}
\label{eq:propgreen}
c_6\|\phi\|_{{\mathcal{C}}^{0,\alpha}_w}\geq \|\bar \phi\|_{{\mathcal{C}}^{0,\alpha}_w} +
\|\phi_\Delta \|_{{\mathcal{C}}^{0,\alpha}_w},
\end{equation}
thanks to Corollary~\ref{cor:greenprep}.
By definition $\phi_\Delta = \Delta_N\psi$ for some $\psi\in
{\mathscr{K}}_N^\perp$, and ${{G}}_N(\phi)=\psi$. By
Theorem~\ref{theo:specgap} and~\eqref{eq:propgreen}, we deduce that
$$
c_6\|\phi\|_{{\mathcal{C}}^{0,\alpha}_w} \geq
\|\phi_\Delta\|_{{\mathcal{C}}^{0,\alpha}_w}\geq c_3\| \psi\|_{{\mathcal{C}}^{2,\alpha}_w}.
$$
This proves the proposition.
\end{proof}
Notice that by definition, ${{G}}_N$ takes values in
${\mathscr{K}}_N^\perp$ and has kernel ${\mathscr{K}}_N$. If $\phi \in {\mathscr{K}}_N^\perp$, we have
${{G}}_N\circ\Delta_N \phi=\phi$. Therefore
$$
{{G}}_N\circ F_N(\phi) = \phi + {{G}}_N \left (\eta_N
+\mu^r_N(J\delta_N^\star \phi) \right).
$$
For $\phi\in {\mathscr{K}}_N^\perp$, the condition
$$F_N(\phi)\in
{\mathscr{K}}_N
$$
is equivalent to
$$
\phi = T_N(\phi)
$$
where
$$
T_N:{\mathscr{K}}_N^\perp\longrightarrow {\mathscr{K}}_N^\perp
$$
is the map defined by
$$
T_N(\phi) =- {{G}}_N (\eta_N +\mu^r_N(J\delta_N^\star \phi)).
$$
We merely need to apply the fixed point principle to the map $T_N$.
\subsection{Contracting map}
Notice that
\begin{align*}
T_N(\phi)-T_N(\phi')= &{{G}}_N\circ \mu_N^r(J\delta_N^\star \phi') -
{{G}}_N\circ \mu_N^r(J\delta_N^\star
\phi) \\
= & \frac 12{{G}}_N\circ \Psi_N^r
(J\delta_N^\star \phi'+ J\delta_N^\star \phi, J\delta_N^\star \phi'- J\delta_N^\star
\phi)
\end{align*}
hence
\begin{equation}
\label{eq:contractmap}
T_N(\phi)-T_N(\phi')=
\frac 12{{G}}_N\circ \Psi_N^r
\left (J\delta_N^\star (\phi'+\phi), J\delta_N^\star (\phi'- \phi)\right ).
\end{equation}
\begin{prop}
\label{prop:quadest}
There exists a constant $c_7>0$ such that for every $\phi,\phi'\in
C^2(\mathcal{Q}_N(\Sigma))$, we have
$$
\|\Psi_N^r(J\delta_N^\star\phi,J\delta_N^\star \phi')\|_{{\mathcal{C}}_w^{0,\alpha}}\leq c_7
\|\phi\|_{{\mathcal{C}}_w^{2,\alpha}} \|\phi'\|_{{\mathcal{C}}_w^{2,\alpha}}
$$
\end{prop}
\begin{proof}
Recall that, for $\tau\in{\mathscr{M}}_N$, $\mu^r_N(\tau)\in C^2(\mathcal{Q}_N(\Sigma))$ is given by the formula
$$
\ip{\mu^r_N(\tau), \mathbf{f}} = \frac 12\omega({\mathscr{U}}_\tau(\mathbf{f}),{\mathscr{V}}_\tau(\mathbf{f})).
$$
We deduce that
$$
\ip{\Psi^r_N(\tau,\tau'), \mathbf{f}} = \frac 14\Big [
\omega({\mathscr{U}}_\tau(\mathbf{f}),{\mathscr{V}}_{\tau'}(\mathbf{f}))
+ \omega({\mathscr{U}}_{\tau'}(\mathbf{f}),{\mathscr{V}}_\tau(\mathbf{f}))
\Big ],
$$
and it follows that for some universal constant $c'_7>0$, we have
\begin{equation}
\label{eq:trickholderpsi}
\|\Psi^r_N(\tau,\tau')\|_{{\mathcal{C}}_w^{0,\alpha}} \leq c'_7\Big [ \|{\mathscr{U}}_\tau
\|_{{\mathcal{C}}_w^{0,\alpha}}\|{\mathscr{V}}_{\tau'} \|_{{\mathcal{C}}_w^{0,\alpha}}
+ \|{\mathscr{V}}_\tau
\|_{{\mathcal{C}}_w^{0,\alpha}}\|{\mathscr{U}}_{\tau'} \|_{{\mathcal{C}}_w^{0,\alpha}}\Big ].
\end{equation}
\begin{lemma}
\label{lemma:trickholderpsi}
There exists a universal constant $c''_7>0$ such that for every discrete
function $\phi$ and $\tau=J\delta_N^\star\phi$, we have
$$
\max\left (\|{\mathscr{U}}_\tau\|_{{\mathcal{C}}_w^{0,\alpha}} , \|{\mathscr{V}}_\tau\|_{{\mathcal{C}}_w^{0,\alpha}}\right ) \leq
c''_7 \|\phi\|_{{\mathcal{C}}_w^{2,\alpha}}.
$$
\end{lemma}
\begin{proof}
We carry out the proof in the case of ${\mathscr{U}}_\tau$, as the proof for
${\mathscr{V}}_\tau$ is almost identical.
Using the index notations, we have
$$
\ip{{\mathscr{U}}_\tau,\mathbf{f}_{kl}} = \frac 1N\Big (\ip{\delta_N^\star\phi,\mathbf{v}_{k+1,l+1}}
- \ip{\delta_N^\star\phi,\mathbf{v}_{k,l}} \Big ).
$$
Using the expression of $\delta_N^\star$, we obtain
\begin{align*}
\ip{{\mathscr{U}}_\tau,\mathbf{f}_{kl}} =& \frac 1{N^2}\Big (
(\phi_{k+1,l}D^u_{k+1,l} -\phi_{k,l-1}D^u_{k,l-1}) -
(\phi_{k,l+1}D^u_{k,l+1} -\phi_{k-1,l}D^u_{k-1,l}) \\
& + (\phi_{k+1,l+1}D^v_{k+1,l+1} - 2\phi_{kl}D^v_{kl} + \phi_{k-1,l-1}D^v_{k-1,l-1})
\Big ).
\end{align*}
The first line in the above computation is related to the second order
finite difference $\frac{\partial^2}{\partial u\partial v}$ of $\phi$ whereas the
second line is related to the finite difference $\frac{\partial^2}{\partial
u^2}$ of $\phi$. The fact that the renormalized diagonals converge
smoothly allows us to control the ${\mathcal{C}}_w^{0,\alpha}$-norms of these terms
using the ${\mathcal{C}}_w^{2,\alpha}$-norm of $\phi$.
\end{proof}
Inequality~\eqref{eq:trickholderpsi} together with
Lemma~\ref{lemma:trickholderpsi} completes the proof of the proposition.
\end{proof}
\begin{cor}
\label{cor:contract}
For all $\epsilon>0$ there exists $N_0\geq 1$ and $\delta >0$ such
that for all $N\geq N_0$ and all $\phi, \phi' \in {\mathscr{K}}_N^\perp$ such that
$\|\phi\|_{{\mathcal{C}}_w^{2,\alpha}}\leq \delta$ and
$\|\phi'\|_{{\mathcal{C}}_w^{2,\alpha}}\leq \delta$, we have
$$
\|T_N(\phi)-T_N(\phi')\|_{{\mathcal{C}}_w^{2,\alpha}}\leq \epsilon \|\phi-\phi'\|_{{\mathcal{C}}_w^{2,\alpha}}.
$$
\end{cor}
\begin{proof}
By~\eqref{eq:contractmap}, Proposition~\ref{prop:green} and Proposition~\ref{prop:quadest}, we have
\begin{align*}
\| T_N(\phi)-T_N(\phi')\|_{{\mathcal{C}}_w^{2,\alpha}} = &
\frac 12\|{{G}}_N\Psi_N^r
\left (J\delta_N^\star (\phi'+\phi), J\delta_N^\star (\phi'- \phi)\right
)\|_{{\mathcal{C}}_w^{2,\alpha}} \\
\leq & \frac{c_7c_8}2 \|\phi+\phi'\|_{{\mathcal{C}}_w^{2,\alpha}}\|\phi-\phi'\|_{{\mathcal{C}}_w^{2,\alpha}}.
\end{align*}
In conclusion we may choose $\delta=\frac\epsilon{c_7c_8}$, which proves the corollary.
\end{proof}
\subsection{Fixed point principle}
The idea, as usual, is to check whether the sequence $T_N^k(0)$
converges. If so, the limit must be a fixed point of $T_N$.
We have the following classical proposition:
\begin{prop}
Let $(\EE,\|\cdot\|)$ be a finite dimensional (or Banach) normed vector space and
$T:\EE\to\EE$ a map such that
\begin{enumerate}
\item There exists $\delta >0$ such that the restriction of $T$ to
the closed ball $\bar B_\delta$ of $\EE$, centered at $0$ with
radius $\delta$, is $\frac
12$-contracting, i.e.
$$
\forall x,y\in \EE, \|x\|\leq \delta \mbox{ and } \|y\|\leq\delta
\Rightarrow \|T(x)-T(y)\|\leq \frac 12\|x-y\|.
$$
\item $\|T(0)\|\leq \frac \delta 2$
\end{enumerate}
Then the sequence $(t_k)$ defined by $t_0=0$ and $t_{k+1}=T(t_k)$
converges to an element $t_\infty\in\EE$ with $\|t_\infty\|\leq
$\delta$. Furthermore, $t_\infty$ is a fixed point for $T$. Such a fixed
point is unique in the ball $\bar B_\delta$. In addition, we have
$\|t_\infty\|\leq 2\|T(0)\|$.
\end{prop}
\begin{proof}
The uniqueness of fixed points is a trivial consequence of
the contracting property of $T$ in the ball $\bar B_\delta = \{x\in
\EE : \|x\|\leq \delta\}$.
For the convergence, we show first by induction that $t_k$ remains in $\bar
B_\delta$ for all $k$: this is the case for $t_0=0$ and $t_1$ by
assumption. Assume now that $t_0,\cdots,t_{k-1}\in \bar
B_\delta$.
Then
$$
\|t_k-t_{k-1}\| =\|T(t_{k-1}) -T(t_{k-2})\|\leq \frac 12\|t_{k-1}-t_{k-2}\|
$$
and by induction
$$
\|t_k-t_{k-1}\| \leq \frac 1{2^{p}} \|t_{k-p}-t_{k-p-1}\|.
$$
In particular
$$
\|t_k-t_{k-1}\| \leq \frac 1{2^{k-1}} \|t_{1}-t_{0}\|= \frac 1{2^{k-1}}\|T(0)\|.
$$
In turn we have
$$
t_k= t_k-t_0=(t_{k}- t_{k-1})+(t_{k-1}-t_{k-2})+\cdots +(t_1-t_0)
$$
and by the triangle inequality,
$$
\|t_k\| \leq \|T(0)\|\sum_{j=0}^{k-1}\frac 1{2^j} =
\|T(0)\|\frac{1-\frac 1{2^k}}{1-\frac 12} \leq 2\|T(0)\|\leq \delta,
$$
so that $t_k\in \bar B_\delta$. This completes the induction and shows
that $t_k$ remains in $\bar B_\delta$.
Finally, we have to prove that $t_k$ converges. This is
clear since
$$
t_{k+p}-t_k = (t_{k+p} - t_{k+p-1}) + (t_{k+p-1}- t_{k+p-2})+\cdots + (t_{k+1}-t_k)
$$
and by the triangle inequality
$$
\|t_{k+p}-t_k\| \leq \|t_{k+1}-t_k\|\sum_{j=0}^\infty \frac 1{2^j}
\leq 2\|t_{k+1}-t_k\| \leq \frac 2{2^{k-1}}\|T(0)\|
$$
which shows that $t_k$ is Cauchy, hence convergent in the closed ball
$\bar B_\delta$. The fact that the limit of $t_k$ is a fixed point of
$T$ follows by passing to the limit in $t_{k+1}=T(t_k)$, using the continuity of $T$ provided by the contracting property.
\end{proof}
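The scheme of the proof is readily checked numerically. The sketch below (a hypothetical toy example, not the operator $T_N$ of the paper) runs the iteration $t_{k+1}=T(t_k)$ for a $\frac 12$-contracting affine map and verifies both the fixed point property and the a priori bound $\|t_\infty\|\leq 2\|T(0)\|$:

```python
import numpy as np

def picard(T, dim, tol=1e-12, max_iter=10_000):
    """Iterate t_{k+1} = T(t_k) from t_0 = 0 until the step norm drops below tol."""
    t = np.zeros(dim)
    for _ in range(max_iter):
        t_next = T(t)
        if np.linalg.norm(t_next - t) < tol:
            return t_next
        t = t_next
    raise RuntimeError("no convergence")

# Toy 1/2-contracting affine map T(x) = A x + b (an assumption, playing the
# role of T_N restricted to a small ball); here T(0) = b.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A *= 0.5 / np.linalg.norm(A, 2)   # rescale to operator norm 1/2
b = 0.1 * rng.standard_normal(4)
T = lambda x: A @ x + b

t_inf = picard(T, 4)
# Fixed point property and the a priori bound ||t_inf|| <= 2 ||T(0)||.
assert np.linalg.norm(T(t_inf) - t_inf) < 1e-10
assert np.linalg.norm(t_inf) <= 2 * np.linalg.norm(b) + 1e-9
```

The geometric decay of the steps observed here is exactly the telescoping argument of the proof.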
We obtain the following result:
\begin{theo}
\label{theo:fpt}
There exists a positive integer $N_0$ and a real number $\delta>0$
such that for all $N\geq N_0$ there exists a unique $\phi_N\in
{\mathscr{K}}_N^\perp$ that satisfies
$$
\|\phi_N\|_{{\mathcal{C}}_w^{2,\alpha}}\leq
\delta \mbox{ and } F_N(\phi_N)\in{\mathscr{K}}_N.
$$
Furthermore the sequence satisfies $\|\phi_N\|_{{\mathcal{C}}_w^{2,\alpha}}={\mathcal{O}}(N^{-1})$.
\end{theo}
\begin{proof}
We apply Corollary~\ref{cor:contract} with $\epsilon=\frac 12$: there exist $N_0$ and $\delta>0$ such that $T_N$ is $\frac 12$-contracting on the ball of radius $\delta$ of ${\mathscr{K}}_N^\perp$ for every $N\geq N_0$. By Proposition~\ref{prop:green},
$$
\|T_N(0)\|_{{\mathcal{C}}^{2,\alpha}_w} = \|{{G}}_N(\eta_N)\|_{{\mathcal{C}}^{2,\alpha}_w}\leq c_8 \|\eta_N\|_{{\mathcal{C}}^{0,\alpha}_w}.
$$
By the error estimate $\|\eta_N\|_{{\mathcal{C}}^{0,\alpha}_w}={\mathcal{O}}(N^{-1})$ established earlier for the sample meshes, we deduce $\|T_N(0)\|_{{\mathcal{C}}^{2,\alpha}_w}\leq \frac\delta 2$ for every $N$ sufficiently large. The fixed point principle applies to $T_N$ and provides a unique fixed point $\phi_N\in{\mathscr{K}}_N^\perp$ with $\|\phi_N\|_{{\mathcal{C}}^{2,\alpha}_w}\leq\delta$, which is equivalent to $F_N(\phi_N)\in{\mathscr{K}}_N$. In addition $\|\phi_N\|_{{\mathcal{C}}^{2,\alpha}_w}\leq 2\|T_N(0)\|_{{\mathcal{C}}^{2,\alpha}_w}={\mathcal{O}}(N^{-1})$.
\end{proof}
\begin{prop}
\label{prop:isoquad}
Let $\ell:\Sigma \to {\mathbb{R}}^{2n}$ be an isotropic immersion and
$p:{\mathbb{R}}^2\to\Sigma$ a conformal cover introduced before, such that
the pair $(p,\ell)$ is nondegenerate.
Then, for every $N\geq N_0$, the meshes
$$
\rho_N=\tau_N-J\delta_N^\star\phi_N \in {\mathscr{M}}_N,
$$
where $\phi_N$ is given by Theorem~\ref{theo:fpt}, are isotropic.
\end{prop}
\begin{proof}
By definition $\mu_N^r(\rho_N)\in{\mathscr{K}}_N$. By nondegeneracy,
${\mathscr{K}}_N={\mathbb{R}}{\mathbf{1}}_N$, so that $\mu_N^r(\rho_N) = \lambda{\mathbf{1}}_N$ for some
constant $\lambda$. We deduce that
$$
\ipp{\mu_N^r(\rho_N),{\mathbf{1}}_N} = \lambda\ipp{{\mathbf{1}}_N,{\mathbf{1}}_N}.
$$
This quantity does not vanish unless $\lambda =0$. But
$\ipp{\mu_N^r(\rho_N),{\mathbf{1}}_N}$ is the total symplectic area of the mesh
$\rho_N$, which has to vanish by the Stokes theorem, since the symplectic
form of ${\mathbb{R}}^{2n}$ is exact. In conclusion $\lambda=0$ so that~$\mu_N^r(\rho_N)=0$.
\end{proof}
\subsection{Proof of Theorem~\ref{theo:mainquad}}
We merely need to gather the previous technical results, so that our
main result follows as a corollary.
\begin{proof}[Proof of Theorem~\ref{theo:mainquad}]
Let $\ell:\Sigma\to{\mathbb{R}}^{2n}$ be a smooth isotropic immersion. By
Proposition~\ref{prop:avoid}, we may always assume that the
conformal cover $p:{\mathbb{R}}^2\to\Sigma$ is chosen in such a way that the
pair $(p,\ell)$ is nondegenerate.
By Proposition~\ref{prop:isoquad}, the quadrangular meshes $\rho_N$
provided by Theorem~\ref{theo:fpt}, for $N$ sufficiently large,
are isotropic. The estimate
$\|\phi_N\|_{{\mathcal{C}}^{2,\alpha}_w}={\mathcal{O}}(N^{-1})$ implies that
$$
\sup_{\mathbf{v}\in{\mathfrak{C}}_0(\mathcal{Q}_N(\Sigma))}\|\rho_N(\mathbf{v})- \tau_N(\mathbf{v})\|
= {\mathcal{O}}(N^{-1}).
$$
It follows that
$$
\sup_{\mathbf{v}\in{\mathfrak{C}}_0(\mathcal{Q}_N(\Sigma))}\|\rho_N(\mathbf{v})- \ell(\mathbf{v})\|
= {\mathcal{O}}(N^{-1}),
$$
which proves the theorem.
\end{proof}
\subsection{The degenerate case}
If $(p,\ell)$ is a degenerate pair, Theorem~\ref{theo:fpt} still provides a family of quadrangular meshes $\rho_N$ with the property that $\mu^r_N(\rho_N)\in{\mathscr{K}}_N$. However
$\rho_N$ may not be an isotropic mesh since ${\mathscr{K}}_N$ may not reduce to constants.
This difficulty can be taken care of by working ${\mathbb{Z}}_2$-equivariantly.
Given a degenerate pair $(p_S,\ell_S)$,
we construct modified quadrangulations $\mathcal{Q}_N(S)$ as in \S\ref{sec:deg}.
Using the notation introduced at \S\ref{sec:deg}, we consider the lifted pair $(p,\ell)$ given by $p=p_S\circ\Phi^S$ and $\ell=\ell_S\circ\Phi^S$, where $\Phi^S:\Sigma\to S$ is a double cover introduced at~\eqref{eq:piS}.
The pair $(p,\ell)$ is degenerate as well.
Using Theorem~\ref{theo:fpt}, we find a corresponding family of quadrangular meshes $\rho_N$. All these constructions are $G_N$-equivariant. In particular $\mu^r_N(\rho_N) \in {\mathscr{K}}_N$ is $G_N$-invariant. We have
$$
\mu^r_N(\rho_N) = a_N{\mathbf{1}}_N +b_N\zeta_N,
$$
where $\mu^r_N(\rho_N)$ and ${\mathbf{1}}_N$ are $G_N$-invariant. However $\zeta_N$ is $G_N$-anti-invariant (cf. proof of Theorem~\ref{theo:specgap2}), which implies that $b_N=0$. We conclude that $a_N=0$ as in the proof of Proposition~\ref{prop:isoquad}.
In conclusion $\mu^r_N(\rho_N)=0$, so that $\rho_N$ descends to the $G_N$-quotient as an isotropic quadrangular mesh $\rho_N^S\in{\mathscr{M}}_N(S)$, and we have proved the following result:
\begin{prop}
\label{prop:degcase}
Let $(p_S,\ell_S)$ be any pair, where $\ell_S:S\to{\mathbb{R}}^{2n}$ is an isotropic immersion and $p_S:{\mathbb{R}}^2\to S$ an associated conformal cover.
Let $\tau_N^S\in {\mathscr{M}}_N(S)$ be the family of samples of $\ell_S$. Then, there exists a family of isotropic quadrangular meshes $\rho_N^S\in{\mathscr{M}}_N(S)$ such that
$$
\max_{\mathbf{v}}\|\rho^S_N(\mathbf{v})-\tau_N^S(\mathbf{v})\| ={\mathcal{O}}(N^{-1}).
$$
\end{prop}
\begin{rmk}
The approach presented in Proposition~\ref{prop:degcase} provides a
uniform treatment of our perturbation problem,
whether or not the pair $(p,\ell)$ is degenerate.
The main flaw of this technique, relying on ${\mathbb{Z}}_2$-equivariant
constructions, is that the moduli spaces ${\mathscr{M}}_N(S)$ do not admit a
shear action as defined in~\S\ref{sec:shear} (this is due to the
connectedness of the checkers graph of $\mathcal{Q}_N(S)$).
Unfortunately, the shear
action is used in a crucial way at~\S\ref{sec:quadtri} to obtain
generic quadrangular meshes that allow us to construct piecewise
linear immersions as in Theorem~\ref{theo:maindiscr}.
\end{rmk}
\section{From quadrangulations to triangulations}
\label{sec:quadtri}
The previous section was devoted to the construction of isotropic
meshes associated to quadrangulations, sufficiently close to a given smooth isotropic immersion
$\ell:\Sigma \to {\mathbb{R}}^{2n}$. In this section, we explain how to define a
nearby isotropic piecewise linear map as an approximation of $\ell$.
The idea is to pass from an isotropic quadrangulation to an isotropic triangulation.
\subsection{From quadrilaterals to pyramids}
\label{sec:pyramid}
The goal of this section is to explain how to pass from an isotropic quadrilateral to an isotropic pyramid, by adding
one apex to the quadrilateral.
We start by studying a single isotropic quadrilateral
$(A_0,A_1,A_2,A_3)$, where $A_i$ are points in ${\mathbb{R}}^{2n}$.
We shall use the notations
\begin{equation}
\label{eq:diag}
D_0=\overrightarrow {A_0A_2} \mbox{ and } D_1=\overrightarrow {A_1A_3}
\end{equation}
for the two diagonals of the quadrilateral. Recall that the
quadrilateral is isotropic if, and only if
$$
\omega(D_0,D_1)=0.
$$
\begin{rmk}
If the diagonals of an isotropic quadrilateral are linearly
independent vectors of ${\mathbb{R}}^{2n}$, then
$$L={\mathbb{R}} D_0\oplus {\mathbb{R}}
D_1$$
is an isotropic plane of ${\mathbb{R}}^{2n}$.
\end{rmk}
\begin{dfn}
\label{dfn:pyramid}
A \emph{pyramid} is given by five points $(P,A_0,A_1,A_2,A_3)$ of ${\mathbb{R}}^{2n}$: the quadrilateral
$(A_0,\cdots, A_3)$, called the \emph{base} of the pyramid, and the \emph{apex} $P\in {\mathbb{R}}^{2n}$.
If the four triangles given by
$(PA_iA_{i+1})$, where~$i$ is understood as an index modulo~$4$, are contained in
isotropic planes of~${\mathbb{R}}^{2n}$, we say that the pyramid is an
\emph{isotropic pyramid} (cf. Figure~\ref{figure:pyramid}).
\end{dfn}
\begin{figure}[H]
\begin{pspicture}[showgrid=false](-2,-1)(2,1)
\psline(-1,-1)(0,-1)(.5,0)(-0.5,0)(-1,-1)
\psline(.5,1)(-1,-1)(0,-1)(.5,0)(-0.5,0)
\psline(.5,1)(-1,-1)
\psline(.5,1)(0,-1)
\psline(.5,1)(.5,0)
\psline(.5,1)(-0.5,0)
\rput(-1.5,-1){$A_0$}
\rput(0.5,-1){$A_1$}
\rput(1.0,0){$A_2$}
\rput(-1.0,0){$A_3$}
\rput(0.1,1.1){$P$}
\psset{fillstyle=solid,fillcolor=black}
\pscircle(-1,-1){.1}
\pscircle(0.5,0){.1}
\pscircle(-0.5,0){.1}
\pscircle(0,-1){.1}
\pscircle(0.5,1){.1}
\end{pspicture}
\caption{Pyramid with apex $P$ and base $(A_0,A_1,A_2,A_3)$}
\label{figure:pyramid}
\end{figure}
The following lemma shows a first relation between isotropic quadrilaterals and isotropic pyramids:
\begin{lemma}
The base of an isotropic pyramid is an isotropic quadrilateral.
\end{lemma}
\begin{proof}
The result is obtained as a trivial consequence of the Stokes
theorem, or by elementary algebraic manipulations.
\end{proof}
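The elementary manipulation alluded to in the proof can be written out explicitly; with the diagonals $D_0=\overrightarrow{A_0A_2}$ and $D_1=\overrightarrow{A_1A_3}$, and the relations $\omega(\overrightarrow{PA_i},\overrightarrow{PA_{i+1}})=0$ expressing that the four triangles span isotropic planes, we have

```latex
\begin{aligned}
\omega(D_0,D_1)
 &= \omega\big(\overrightarrow{PA_2}-\overrightarrow{PA_0},\,
               \overrightarrow{PA_3}-\overrightarrow{PA_1}\big)\\
 &= \omega(\overrightarrow{PA_2},\overrightarrow{PA_3})
  - \omega(\overrightarrow{PA_2},\overrightarrow{PA_1})
  - \omega(\overrightarrow{PA_0},\overrightarrow{PA_3})
  + \omega(\overrightarrow{PA_0},\overrightarrow{PA_1}) = 0,
\end{aligned}
```

since every term involves two consecutive rays of the pyramid.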
Conversely, we have the following result:
\begin{lemma}
\label{lemma:pyramid}
Let $Q=(A_0,\cdots,A_3)$ be an isotropic quadrilateral of ${\mathbb{R}}^{2n}$
with linearly independent diagonals. We denote by $W'_Q$ the
symplectic orthogonal of the vector space spanned by the sides of the
quadrilateral $Q$. Let $W_Q$ be the set of points $P\in {\mathbb{R}}^{2n}$
which are the apexes of isotropic pyramids with base
given by the quadrilateral $Q$. Then $W_Q$
is an affine subspace of ${\mathbb{R}}^{2n}$ with underlying
vector space $W'_Q$. Its dimension is $2n-2$ if $Q$ is
flat and $2n-3$ otherwise.
\end{lemma}
\begin{proof}
We are looking for a solution of
the linear system of four equations
$$
\omega(\overrightarrow{PA_i},\overrightarrow{A_iA_{i+1}})= 0,
$$
where $0\leq i \leq 3$ and indices are understood modulo $4$.
Put
\begin{equation}
\label{eq:gp}
X=\overrightarrow{GP}
\end{equation}
where $G$ is by convention the barycenter of the quadrilateral.
The system can be expressed as
$$
\omega(X,\overrightarrow{A_iA_{i+1}})= \omega(\overrightarrow{GA_i},\overrightarrow{A_iA_{i+1}}).
$$
The LHS corresponds to a linear map with kernel~$W'_Q$.
If the quadrilateral is flat, it is contained in an isotropic affine
plane parallel to $L={\mathbb{R}} D_0\oplus {\mathbb{R}} D_1$. Any point $P$ in the
plane of the quadrilateral is the apex of an isotropic
pyramid. Furthermore,
the space of solutions is an affine space of codimension $2$.
If the quadrilateral is not flat, then $\dim W'_Q=2n-3$ and the LHS of the linear system has rank 3.
The condition that the quadrilateral is isotropic is
precisely the compatibility condition, which ensures that the RHS of
the equations is in the image of the linear map. We conclude that the
system of equations admits a $(2n-3)$-dimensional affine space of
solutions.
\end{proof}
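The rank and compatibility count of this proof can be checked numerically. The sketch below (hypothetical data in ${\mathbb{R}}^6$, i.e. $n=3$) builds a non-flat isotropic quadrilateral, assembles the linear system for $X=\overrightarrow{GP}$ and verifies that it has rank $3$ and is compatible, so that the apexes form an affine space of dimension $2n-3=3$:

```python
import numpy as np

n = 3
# Standard symplectic form on R^(2n): omega(X, Y) = X^T Omega Y.
Omega = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
omega = lambda X, Y: X @ Omega @ Y

# A non-flat isotropic quadrilateral (hypothetical data): the diagonals are
# e1, e2, which span an isotropic plane, and the first side leaves that plane.
e = np.eye(2 * n)
A0 = np.zeros(2 * n)
A1 = 0.3 * e[0] + 0.1 * e[1] + 0.2 * e[2]    # side A0A1 not in span(e1, e2)
A2 = A0 + e[0]                                # diagonal D0 = A0A2 = e1
A3 = A1 + e[1]                                # diagonal D1 = A1A3 = e2
Q = [A0, A1, A2, A3]
assert abs(omega(A2 - A0, A3 - A1)) < 1e-14   # the quadrilateral is isotropic

G = sum(Q) / 4
sides = [Q[(i + 1) % 4] - Q[i] for i in range(4)]
# Row i encodes the linear form X -> omega(X, A_i A_{i+1}).
M = np.array([Omega @ s for s in sides])
rhs = np.array([omega(Q[i] - G, sides[i]) for i in range(4)])

# Non-flat case: rank 3; the isotropy of Q makes the system compatible.
rank = np.linalg.matrix_rank(M)
X, *_ = np.linalg.lstsq(M, rhs, rcond=None)
assert rank == 3 and np.allclose(M @ X, rhs)

# The point P = G + X is then the apex of an isotropic pyramid over Q.
P = G + X
assert all(abs(omega(P - Q[i], sides[i])) < 1e-10 for i in range(4))
```

The rank drop from $4$ to $3$ reflects the single relation among the sides, namely that they sum to zero.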
Lemma~\ref{lemma:pyramid} is an excellent tool for passing from
isotropic meshes associated to quadrangulations to isotropic
meshes associated to triangulations and, in turn, to piecewise
linear isotropic maps. One issue that has to be dealt with is how
$C^0$-estimates are preserved, and also whether the piecewise linear maps induced by this construction are still immersions. Indeed, Lemma~\ref{lemma:pyramid} does not
provide any information about the distance from $W_Q$ to the
quadrilateral.
\subsubsection{Optimal apex}
\label{sec:optiapex}
There exist large families of isotropic pyramids, as shown by Lemma~\ref{lemma:pyramid}.
In this section we introduce some particular solutions of the corresponding linear system, called \emph{optimal pyramids}, with their \emph{optimal apexes}.
We use the notations introduced in the proof of
Lemma~\ref{lemma:pyramid}. Again, we consider an isotropic quadrilateral
$Q=(A_0,\cdots,A_3)$. We are assuming that $Q$ has linearly independent diagonals $D_0, D_1$.
Hence the diagonals span an isotropic plane $L={\mathbb{R}} D_0\oplus {\mathbb{R}} D_1$. We may consider its
complexification
\begin{equation}
\label{eq:cxquad}
L^\CC = L\oplus JL,
\end{equation}
and the corresponding orthogonal complex (and symplectic) splitting
$$
{\mathbb{R}}} \def\LL{{\mathbb{L}}} \def\TT{{\mathbb{T}}} \def\EE{{\mathbb{E}}} \def\FF{{\mathbb{F}}^{2n}=L^\CC \oplus M.
$$
Notice that the real dimension of $L^\CC$ is $4$.
We are looking for a point $P\in{\mathbb{R}}^{2n}$ solution of the linear system
\begin{equation}
\label{eq:sys1}
\omega(\overrightarrow{GP},\overrightarrow {A_iA_{i+1}})= \gamma_i,
\quad 0\leq i \leq 3,
\end{equation}
where
\begin{equation}
\gamma_i=\omega(\overrightarrow{GA_i},\overrightarrow {GA_{i+1}}).
\end{equation}
According to Lemma~\ref{lemma:pyramid}, the affine space of
solutions $W_Q$ has dimension $2n-2$ or $2n-3$ in ${\mathbb{R}}^{2n}$,
depending on the flatness of the quadrilateral. We may reduce to particular solutions by adding the constraint
\begin{equation}
\label{eq:sys2}
\overrightarrow{PG}\in L^\CC .
\end{equation}
We use the notation $X=\overrightarrow{GP}$. A quadrilateral $(A_0,\cdots,A_3)$ is determined
by specifying its barycenter $G$, the side vector
$V=\overrightarrow{A_0A_1}$ and the diagonals $D_0$ and $D_1$.
We first compute the terms $\gamma_i$ of the RHS in terms of these quantities.
By definition
$$
\begin{array}{llll}
4 \overrightarrow{A_0G} &= \overrightarrow{A_0A_1}+
\overrightarrow{A_0A_2}+ \overrightarrow{A_0A_3} &= & D_0 + D_1 +2V \\
4 \overrightarrow{A_1G} &= \overrightarrow{A_1A_0}+
\overrightarrow{A_1A_2}+ \overrightarrow{A_1A_3} &= & D_0 +D_1 -2V \\
4 \overrightarrow{A_2G} &= \overrightarrow{A_2A_0}+
\overrightarrow{A_2A_1}+ \overrightarrow{A_2A_3} &= &-3 D_0+D_1+2V\\
4 \overrightarrow{A_3G} &= \overrightarrow{A_3A_0}+
\overrightarrow{A_3A_1}+ \overrightarrow{A_3A_2} & = & D_0-3D_1 -2V
\end{array}
$$
Hence
\begin{equation}
\label{eq:gammai}
\begin{array}{lll}
16\gamma_0 &= \omega(D_0+D_1+2V,D_0+D_1-2V) &=4\omega(V,D_0+D_1)\\
16\gamma_1 &= \omega(D_0+D_1-2V,-3D_0+D_1+2V) &=4\omega(V,D_0-D_1)\\
16\gamma_2 & = \omega(-3D_0+D_1+2V,D_0-3D_1-2V) &=-4\omega(V,D_0+D_1)\\
16\gamma_3 & = \omega(D_0-3D_1-2V,D_0+D_1+2V) &=-4\omega(V,D_0-D_1)
\end{array}
\end{equation}
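These identities are elementary but error-prone; a quick numerical sanity check (hypothetical data, with $G$ at the origin) reconstructs the vertices from $G$, $V$, $D_0$, $D_1$ via the barycenter relations above and compares the $\gamma_i$ with the closed forms of~\eqref{eq:gammai}:

```python
import numpy as np

# Standard symplectic form on R^4: omega(X, Y) = X^T Omega Y.
Omega = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.eye(2), np.zeros((2, 2))]])
omega = lambda X, Y: X @ Omega @ Y

# Isotropic pair of diagonals and a random side vector (hypothetical data).
D0, D1 = np.eye(4)[0], np.eye(4)[1]           # omega(D0, D1) = 0
rng = np.random.default_rng(1)
V = rng.standard_normal(4)
G = np.zeros(4)

# Vertices recovered from the barycenter relations 4 A_iG = ... above.
A = [G - (D0 + D1 + 2 * V) / 4,
     G - (D0 + D1 - 2 * V) / 4,
     G + (3 * D0 - D1 - 2 * V) / 4,
     G + (-D0 + 3 * D1 + 2 * V) / 4]
assert np.allclose(A[2] - A[0], D0) and np.allclose(A[3] - A[1], D1)
assert np.allclose(A[1] - A[0], V)

gamma = [omega(A[i] - G, A[(i + 1) % 4] - G) for i in range(4)]
expected = [omega(V, D0 + D1) / 4, omega(V, D0 - D1) / 4,
            -omega(V, D0 + D1) / 4, -omega(V, D0 - D1) / 4]
assert np.allclose(gamma, expected)
```

Note that the isotropy $\omega(D_0,D_1)=0$ is essential here: the discarded terms in the expansions above are all multiples of $\omega(D_0,D_1)$.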
Let $D'_0$ and $D'_1$ be the basis of $L$ defined by the orthogonality
conditions
$$
\forall i,j\in\{0,1\},\quad g(D_i,D'_j)= \delta_{ij}
$$
and put
\begin{equation}
\label{eq:Bi}
B_i=JD'_i, \quad i\in\{0,1\}
\end{equation}
which form a basis of $JL$. Notice that by definition
$$
\omega(D_i,B_j)=g(D_i,D'_j)=\delta_{ij}.
$$
We may express the vectors $X$ and $V$ using the basis $(D_0,D_1,B_0,B_1)$ of $L^\CC$~as
\begin{align}
\label{eq:decX}
X &=a_0D_0 +a_1D_1 + b_0B_0+b_1B_1 \\
\label{eq:decV}
V &=\alpha_0 D_0 +\alpha_1D_1 + \beta_0B_0+\beta_1B_1 +V_M,
\end{align}
where $\alpha_i,\beta_i,a_i,b_i\in{\mathbb{R}}$ and $V_M\in M$.
By~\eqref{eq:gammai}, \eqref{eq:decX} and \eqref{eq:decV}, we have
\begin{align*}
4\gamma_0 &= \beta_0+\beta_1 \\
4 \gamma_1 & = \beta_0-\beta_1 \\
4 \gamma_2 &= - \beta_0 -\beta_1 \\
4 \gamma_3 &= -\beta_0+\beta_1
\end{align*}
The linear system~\eqref{eq:sys1} with constraint~\eqref{eq:sys2}
is equivalent to (after adding up equations)
\begin{equation}
\label{eq:sys3}
\left \{
\begin{array}{l}
\omega(X,V) = \gamma_0 \\
\omega(X,D_0)=\gamma_0+\gamma_1\\
\omega(X,D_1) = \gamma_1+\gamma_2
\end{array}
\right .
\end{equation}
where we have removed the last redundant equation.
Finally, a solution of~\eqref{eq:sys3} is given by
\begin{equation}
\label{eq:sys4}
\left \{
\begin{array}{rl}
a_0 \beta_0 + a_1 \beta_1 - b_0\alpha_0 - b_1\alpha_1 &= \frac
14(\beta_0+\beta_1) \\
b_0 &=-\frac 12 \beta_0\\
b_1 &= \frac 12 \beta_1
\end{array}
\right .
\end{equation}
i.e. the solutions $X$ are given by
\begin{equation}
\label{eq:sys6}
X = a_0D_0+a_1D_1 - \frac {\beta_0}2B_0 +\frac{\beta_1}2B_1
\end{equation}
where $a_0$ and $a_1$ satisfy the affine equation
\begin{equation}
\label{eq:sys5}
\beta_0 a_0 +\beta_1 a_1 = \frac 14(\beta_0+\beta_1) + \frac {\beta_1\alpha_1-\beta_0\alpha_0}2.
\end{equation}
In conclusion, any solution $(a_0,a_1)$ of the affine equation
\begin{equation}
\label{eq:sys7}
\beta_0 a_0 +\beta_1 a_1 = \xi(V)
\end{equation}
where
$$
\xi(V):=\frac {\beta_0(1-2\alpha_0)+\beta_1(1+2\alpha_1)}4
$$
provides a solution to our linear system.
If the orthogonal projection of $V$ onto $JL$ does not vanish, we have
$(\beta_0,\beta_1)\neq (0,0)$ and the above equation defines a
line, which, in turn, defines a one-dimensional affine space of solutions
$X\in L^\CC$.
We summarize our computations in the following lemma:
\begin{lemma}
Assume that $Q=(A_0,\cdots,A_3)$ is an isotropic quadrilateral with
linearly independent diagonals in ${\mathbb{R}}^{2n}$. Assume that the
orthogonal projection of $Q$ in $L^\CC$ is not a flat quadrilateral.
Then the set of points $P\in G+L^\CC$ which are the apexes of isotropic
pyramids over $Q$ forms a
$1$-dimensional affine space.
\end{lemma}
Under the assumptions of the lemma, we may consider a particular solution given by
\begin{equation}
\label{eq:optimal}
X= \xi(V) \frac{\beta_0}{\beta_0^2 + \beta_1^2} D_0 + \xi(V)
\frac{\beta_1}{\beta_0^2 + \beta_1^2}D_1 - \frac{\beta_0}2B_0 +
\frac{\beta_1}2B_1.
\end{equation}
The above solution corresponds to the apex $P$, which is the closest point to the barycenter $G$, with the
property that the corresponding pyramid is isotropic and $\overrightarrow {GP}\in L^\CC$.
This leads us to the following definition:
\begin{dfn}
Let $(A_0,\cdots,A_3)$ be an isotropic quadrilateral of ${\mathbb{R}}^{2n}$ with linearly independent diagonals
and $G$ be its barycenter. The closest point $P$ to $G$ in $G+L^\CC$
such that $(P,A_0,\cdots,A_3)$ is an isotropic pyramid is called \emph{the optimal apex}, and the
corresponding pyramid an \emph{optimal isotropic pyramid}.
\end{dfn}
\begin{rmk}
If the orthogonal projection of the quadrilateral in $L^\CC$ is flat, then the optimal apex is just the barycenter $G$ of the quadrilateral. If it is not flat, the optimal apex is the point $P$ defined by $\overrightarrow{GP}=X$, where $X$ is given by Formula~\eqref{eq:optimal}.
\end{rmk}
Optimal pyramids enjoy nice properties. We first point out that they are almost always nondegenerate, in the sense of the following lemma:
\begin{lemma}
\label{lemma:isotroppyr}
Let $(A_0,\cdots,A_3)$ be an isotropic quadrilateral such that
its orthogonal projection on $L^\CC$ is not flat. Using the above notations,
let $V'$ be the orthogonal projection of $V$ on $L^\CC$ and $X$ be the
optimal solution. Then $D_0,
D_1, V',X$ is a basis of $L^\CC$, unless $\beta_0=0$ or
$\beta_1=0$.
If $\beta_0\beta_1\neq 0$, the rays $\overrightarrow{PA_i}$, $0\leq i\leq 3$,
of the optimal isotropic pyramid
are linearly independent.
\end{lemma}
\begin{proof}
Easy manipulations on vectors show that the vector space spanned by $D_0,D_1,V', X$ is also spanned by $D_0, D_1$ and the vectors
$$
\beta_0 B_0 +\beta_1B_1 \mbox{ and } -\beta_0 B_0 +\beta_1B_1.
$$
The two above vectors belong to $JL$ and they are linearly independent if, and only if
$$
\beta_0\beta_1\neq 0,
$$
which proves the lemma, as the second statement is an immediate consequence of the first.
\end{proof}
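The ``easy manipulations'' can be spelled out. Assuming that the decomposition~\eqref{eq:decV} of the projection of $V$ reads $V'=\alpha_0D_0+\alpha_1D_1+\beta_0B_0+\beta_1B_1$ (with the coefficients $\alpha_i$, $\beta_i$ used above), Formula~\eqref{eq:optimal} gives

```latex
V' \equiv \beta_0 B_0 + \beta_1 B_1,
\qquad
2X \equiv -\beta_0 B_0 + \beta_1 B_1
\pmod{\operatorname{span}(D_0,D_1)},
```

so that $\operatorname{span}(D_0,D_1,V',X)=\operatorname{span}(D_0,D_1,\beta_0B_0+\beta_1B_1,-\beta_0B_0+\beta_1B_1)$; in the basis $(B_0,B_1)$ of $JL$, the determinant of the last two vectors is $2\beta_0\beta_1$.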
\subsection{${\mathcal{C}}^0$-estimates for optimal pyramids}
\begin{dfn}
A quadrilateral of ${\mathbb{R}}^{2n}$ with orthonormal diagonals $(D_0,D_1)$ is called an
\emph{orthonormogonal quadrilateral}. If the diagonals
satisfy
$$\forall i,j\in \{0,1\} \quad \Big | g(D_i,D_j)-\delta_{ij}\Big | \leq\epsilon
$$
for some $\epsilon >0$, we say that they are
\emph{$\epsilon$-orthonormal}. Under this assumption, we say that \emph{the quadrilateral is $\epsilon$-orthonormogonal}.
\end{dfn}
By continuity, we have the following result:
\begin{lemma}
\label{lemma:continuityc0}
For every pair $D_0$, $D_1\in{\mathbb{R}}^{2n}$ of linearly independent vectors, we define
$D'_0$, $D_1'\in span(D_0,D_1)$ by the orthogonality relations
$$
g(D_i,D'_j) =\delta_{ij}, \forall i,j\in\{0,1\}.
$$
Then $D'_0,D'_1$ is a basis of $span(D_0,D_1)$. Furthermore,
for every $\epsilon'>0$ there exists $\epsilon_0>0$
such that for every $0<\epsilon<\epsilon_0$ and every
$\epsilon$-orthonormal family $(D_0,D_1)$, the family
$(D'_0,D_1')$ is $\epsilon'$-orthonormal.
\end{lemma}
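The dual pair $(D'_0,D'_1)$ of Lemma~\ref{lemma:continuityc0} is a standard inverse-Gram computation, which the following illustrative sketch makes concrete for the Euclidean inner product on ${\mathbb{R}}^2$ (the function name is ours, not part of the construction):

```python
def dual_pair(D0, D1):
    """Dual basis (D0', D1') of span(D0, D1): g(D_i, D_j') = delta_ij.

    Here g is the standard inner product; the coefficients of D_j' in the
    basis (D0, D1) form the j-th column of the inverse Gram matrix.
    """
    g = lambda u, v: sum(a * b for a, b in zip(u, v))
    G = [[g(D0, D0), g(D0, D1)], [g(D1, D0), g(D1, D1)]]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]  # nonzero iff D0, D1 independent
    inv = [[G[1][1] / det, -G[0][1] / det],
           [-G[1][0] / det, G[0][0] / det]]
    Dp0 = tuple(inv[0][0] * a + inv[1][0] * b for a, b in zip(D0, D1))
    Dp1 = tuple(inv[0][1] * a + inv[1][1] * b for a, b in zip(D0, D1))
    return Dp0, Dp1

# An epsilon-orthonormal pair (epsilon = 0.2 here) and its dual.
D0, D1 = (1.0, 0.1), (0.1, 1.0)
Dp0, Dp1 = dual_pair(D0, D1)
```

For small $\epsilon$ the dual pair stays close to $(D_0,D_1)$, illustrating the continuity statement of the lemma.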
\begin{rmk}
\label{rmk:ong}
We shall assume from now on that the quadrilateral is
$\epsilon$-orthonormogonal, with $\epsilon>0$ sufficiently small, so
that $D_0,D_1$ are linearly independent,
$\|D_i\|\leq 2$ and $\|D'_i\|\leq 2$.
\end{rmk}
\begin{prop}
\label{prop:boundedpyramid}
There exist $C>0$ and $\epsilon >0$ such that
for every $\epsilon$-orthonormogonal isotropic quadrilateral of
diameter $d$, the diameter $d'$ of the corresponding optimal isotropic pyramid satisfies
$$
d' \leq C(d+1).
$$
\end{prop}
Loosely stated, the above proposition says that, for every isotropic
quadrilateral which is almost orthonormogonal, the diameter of the
optimal isotropic pyramid is commensurate with the diameter of the
quadrilateral.
\begin{proof}
If the projection of the quadrilateral in $L^\CC$ is flat, then the optimal apex coincides with the barycenter of the quadrilateral and the proposition is obvious. Thus, we will assume in the rest of the proof that the projection of the quadrilateral is not flat.
As $\epsilon\to 0$, the basis $D_0,D_1,B_0,B_1$ becomes almost orthonormal.
In particular, there exists $\epsilon>0$ sufficiently small such that under the assumptions of the proposition, we have
$$
\max(|\alpha_0|,|\alpha_1|,|\beta_0|,|\beta_1|)\leq 2\|V\|.
$$
Then Formula~\eqref{eq:optimal} for the optimal solution $X=a_0D_0+a_1D_1+b_0B_0+b_1B_1$
shows that all the coefficients
$a_i$ and $b_i$ are controlled by $\|V\|+1$ (up to multiplication by a universal constant). Now,
$$
\|X\|\leq |a_0|\|D_0\| + |a_1|\|D_1\| +|b_0|\|B_0\| + |b_1|\|B_1\|.
$$
According to Remark~\ref{rmk:ong}, if $\epsilon$ is sufficiently small,
we have
$$
\|D_i\| \mbox{ and } \|D'_j\| \leq 2.
$$
Hence $\|B_j\| \leq 2$ and it follows from the triangle inequality that
$$
\|X\|\leq 2(|a_0|+ |a_1| +|b_0|+ |b_1|).
$$
This shows that the distance between the optimal apex and the center of gravity of the quadrilateral is controlled by $\|V\|+1$, up to multiplication by a universal constant. The diameter of the quadrilateral controls $\|V\|$, hence the diameter of the
quadrilateral controls $\|X\|$ and the proposition follows.
\end{proof}
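The chain of estimates in the proof can be summarized as follows. This is only a sketch: it uses that $\|\overrightarrow{GA_i}\|\leq d$ for the barycenter $G$, and the assumption, stated in the proof, that $\|V\|$ is controlled by $d$ up to a universal constant. The diameter $d'$ is realized either by two base vertices (distance at most $d$) or by a pair $(P,A_i)$, and for the latter

```latex
\|\overrightarrow{PA_i}\|
  \;\le\; \|X\| + \|\overrightarrow{GA_i}\|
  \;\le\; C_0\bigl(\|V\|+1\bigr) + d
  \;\le\; C(d+1),
```

where $C_0$ and $C$ are universal constants.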
\subsection{Many quadrilaterals and pyramids}
The faces of an isotropic quadrangular mesh $\tau\in{\mathscr{M}}_N$ can be
seen as a collection of isotropic quadrilaterals of ${\mathbb{R}}^{2n}$.
In this section we explain how to define particular triangulations
$\mathcal{T}_N(\Sigma)$ as a refinement of the quadrangulations
$\mathcal{Q}_N(\Sigma)$. Then we explain how to deduce an isotropic
triangular mesh
$\tau'\in{\mathscr{M}}'_N=C^0(\mathcal{T}_N(\Sigma))\otimes {\mathbb{R}}^{2n}$ from $\tau$.
\subsubsection{Triangulations obtained by refinement}
We define the triangulation $\mathcal{T}_N({\mathbb{R}}^2)$ by adding to each face $\mathbf{f}$ of $\mathcal{Q}_N({\mathbb{R}}^2)$
its barycenter $\mathbf{z}_\mathbf{f}\in{\mathbb{R}}^2$. The barycenter $\mathbf{z}_\mathbf{f}$ is joined to the
vertices of the face $\mathbf{f}$ by straight line
segments, and the face $\mathbf{f}$ is replaced by the four triangles which
appear as in the picture below. This
operation is better understood by drawing a local picture of the
corresponding $CW$-complexes of ${\mathbb{R}}^2$:
\begin{figure}[H]
\begin{pspicture}[showgrid=false](-1,-1.5)(1,1.5)
\psset{linecolor=blue }
\psline (-1,1) (1,1)
\psline (-1,0) (1,0)
\psline (-1,-1) (1,-1)
\psline (-1,1) (-1,-1)
\psline (0,1) (0,-1)
\psline (1,1) (1,-1)
\color{red}
\rput (-1,1){$\bullet$}
\rput (-1,0){$\bullet$}
\rput (-1,-1){$\bullet$}
\rput (0,1){$\bullet$}
\rput (0,0){$\bullet$}
\rput (0,-1){$\bullet$}
\rput (1,1){$\bullet$}
\rput (1,0){$\bullet$}
\rput (1,-1){$\bullet$}
\color{black}
\rput (0,1.3){$\mathcal{Q}_N({\mathbb{R}}^2)$}
\end{pspicture}
\hspace{2cm}
\begin{pspicture}[showgrid=false](-1,-1.5)(1,1.5)
\psset{linecolor=blue }
\psline (-1,1) (1,1)
\psline (-1,0) (1,0)
\psline (-1,-1) (1,-1)
\psline (-1,1) (-1,-1)
\psline (0,1) (0,-1)
\psline (1,1) (1,-1)
\psline (-1,1) (1,-1)
\psline (0,1) (1,0)
\psline (-1,0) (0,-1)
\psline (-1,-1) (1,1)
\psline (-1,0) (0,1)
\psline (0,-1) (1,0)
\color{red}
\rput (-1,1){$\bullet$}
\rput (-1,0){$\bullet$}
\rput (-1,-1){$\bullet$}
\rput (0,1){$\bullet$}
\rput (0,0){$\bullet$}
\rput (0,-1){$\bullet$}
\rput (1,1){$\bullet$}
\rput (1,0){$\bullet$}
\rput (1,-1){$\bullet$}
\rput (-.5,.5){$\bullet$}
\rput (.5,.5){$\bullet$}
\rput (-.5,-.5){$\bullet$}
\rput (.5,-.5){$\bullet$}
\color{black}
\rput (0,1.3){$\mathcal{T}_N({\mathbb{R}}^2)$}
\end{pspicture}
\caption{Triangular refinement of a quadrangulation}
\end{figure}
As explained in~\S\ref{sec:quadconv} in the case of quadrangulations, the
triangulations $\mathcal{T}_N({\mathbb{R}}^2)$ descend to $\Sigma$ via
the covering map
$p_N:{\mathbb{R}}^2\to \Sigma$. The resulting triangulation of $\Sigma$ is
denoted $\mathcal{T}_N(\Sigma)$. We define a moduli space of meshes associated to such triangulations
$$
{\mathscr{M}}'_N=C^0(\mathcal{T}_N(\Sigma))\otimes {\mathbb{R}}^{2n}.
$$
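The combinatorics of the refinement on a single face can be sketched as follows. This is purely illustrative (the function name is ours): each square face acquires its barycenter as a new vertex and is split into four triangles.

```python
def refine_face(corners):
    """Split a quadrilateral face into four triangles through its barycenter.

    `corners` lists the four vertices in cyclic order; each returned triangle
    is (barycenter, corner_i, corner_{i+1}), mirroring the refinement of
    Q_N(R^2) into T_N(R^2).
    """
    n = len(corners)
    dim = len(corners[0])
    # Barycenter z_f of the face: the average of its four vertices.
    z = tuple(sum(c[k] for c in corners) / n for k in range(dim))
    triangles = [(z, corners[i], corners[(i + 1) % n]) for i in range(n)]
    return z, triangles

# One face of the unit-grid quadrangulation of R^2.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
z, tris = refine_face(square)
print(z)          # (0.5, 0.5): the added vertex z_f
print(len(tris))  # 4: the four triangular faces
```

In the isotropic setting below, the barycenter is replaced by the optimal apex, but the face combinatorics are exactly these.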
\subsubsection{Optimal triangulation of isotropic quadrangular mesh}
\label{sec:optitri}
Let $\tau\in{\mathscr{M}}_N$ be an isotropic quadrangular mesh.
We assume, in addition, that the quadrilateral
of ${\mathbb{R}}^{2n}$ associated via $\tau$ to each face
$\mathbf{f}$ of $\mathcal{Q}_N(\Sigma)$
has linearly independent diagonals.
To each such isotropic quadrilateral we
associate its optimal apex
$P_\mathbf{f}\in {\mathbb{R}}^{2n}$.
Then, we define a triangular mesh $\tau'\in {\mathscr{M}}'_N$ as follows:
\begin{itemize}
\item If $\mathbf{v}$ is a vertex of $\mathcal{Q}_N(\Sigma)$, we define
$\tau'(\mathbf{v})=\tau(\mathbf{v})$.
\item If $\mathbf{z}$ is a vertex of $\mathcal{T}_N(\Sigma)$ which is not a vertex of
$\mathcal{Q}_N(\Sigma)$, it is the barycenter of a face $\mathbf{f}$ of
$\mathcal{Q}_N(\Sigma)$ and we put $\tau'(\mathbf{z})=P_\mathbf{f}$, where $P_\mathbf{f}$ is the optimal
apex defined via $\tau$.
\end{itemize}
This leads us to the following definition:
\begin{dfn}
Let $\tau\in{\mathscr{M}}_N$ be an isotropic quadrangular mesh with linearly
independent diagonals. The triangular mesh $\tau'\in{\mathscr{M}}_N'$
defined above is called the optimal triangulation of the isotropic mesh $\tau$.
\end{dfn}
By construction, we have the following obvious
property:
\begin{prop}
Let $\tau\in{\mathscr{M}}_N$ be an isotropic quadrangular mesh with linearly
independent diagonals. The optimal triangulation $\tau'\in{\mathscr{M}}_N'$
of the quadrangular mesh $\tau$ defines a piecewise linear map
$\ell':\Sigma\to{\mathbb{R}}^{2n}$, which is isotropic.
\end{prop}
\subsection{Approximation by isotropic triangular mesh}
\label{sec:trimesh}
In Theorem~\ref{theo:fpt}, we construct a sequence of isotropic
quadrangular meshes $\rho_N\in{\mathscr{M}}_N$ out of a smooth isotropic
immersion $\ell:\Sigma\to{\mathbb{R}}^{2n}$. By construction,
$$
\rho_N=\tau_N- J\delta^\star_N\phi_N,
$$
where $\|\phi_N\|_{{\mathcal{C}}^{2,\alpha}_w} ={\mathcal{O}}(N^{-1})$.
By Proposition~\ref{prop:convell}, the renormalized diagonals of
$\tau_N$ converge towards the partial derivatives of $\ell$. Thus, the
same holds for $\rho_N$, i.e.
\begin{equation}
\label{eq:convdiag}
{\mathscr{U}}_{\rho_N}^\pm\longrightarrow \frac{\partial\ell}{\partial u} \mbox{ and }
{\mathscr{V}}_{\rho_N}^\pm\longrightarrow \frac{\partial\ell}{\partial v}.
\end{equation}
In particular the diagonals are linearly independent for every $N$
sufficiently large and we may define an optimal isotropic
triangulation $\rho'_N\in{\mathscr{M}}'_N$ associated to $\rho_N$ as in the previous
section. It turns out that the triangular meshes $\rho_N'$ are also
good ${\mathcal{C}}^0$ approximations of the map $\ell$ in the sense of the
following proposition:
\begin{prop}
\label{prop:convrhoprime}
There exist constants $C>0$ and $N_0>0$ such that for every
integer $N\geq N_0$ and every vertex $\mathbf{v}$ of $\mathcal{T}_N(\Sigma)$,
$$
\|\ell(\mathbf{v}) - \rho'_N(\mathbf{v})\|\leq \frac CN.
$$
\end{prop}
\begin{proof}
Since $\|\phi_N\|_{{\mathcal{C}}^{2,\alpha}_w}={\mathcal{O}}(N^{-1})$, we
deduce that $\|\phi_N\|_{{\mathcal{C}}^{1}_w}={\mathcal{O}}(N^{-1})$. It follows that
there exists a constant $C_1>0$, such that
$\|\rho_N(\mathbf{v})-\tau_N(\mathbf{v})\|= \|\delta_N^\star\phi_N(\mathbf{v})\|\leq C_1N^{-1}$ for every $N$
sufficiently large and every vertex $\mathbf{v}$ of $\mathcal{Q}_N(\Sigma)$.
For such a vertex, we have $\tau_N(\mathbf{v})=\ell(\mathbf{v})$ and
$\rho_N(\mathbf{v})=\rho'_N(\mathbf{v})$ so that
\begin{equation}
\label{eq:comparerho1}
\|\ell(\mathbf{v})-\rho'_N(\mathbf{v})\|\leq \frac {C_1}N.
\end{equation}
If $\mathbf{v}$ is a vertex of $\mathcal{T}_N(\Sigma)$ but not a vertex of
$\mathcal{Q}_N(\Sigma)$, it is associated to a face $\mathbf{f}$ of the
quadrangulation and $\rho'_N(\mathbf{v})$ is the optimal apex associated to
$\mathbf{f}$ and $\rho_N$, by definition of $\rho'_N$ (cf. \S\ref{sec:optitri}).
The renormalized diagonals
${\mathscr{U}}_{\rho_N}^\pm$ and ${\mathscr{V}}_{\rho_N}^\pm$ converge toward
$\frac{\partial\ell}{\partial u}$ and $\frac{\partial\ell}{\partial v}$ by~\eqref{eq:convdiag}.
The partial derivatives $\frac{\partial\ell}{\partial u}$ and
$\frac{\partial\ell}{\partial v}$ are orthogonal, with norm
$\sqrt{\theta}$. Therefore
\begin{equation}
\label{eq:vfon}
\frac 1{\sqrt\theta_N}{\mathscr{U}}_{\rho_N}^\pm, \quad \frac
1{\sqrt\theta_N}{\mathscr{V}}_{\rho_N}^\pm
\end{equation}
converge toward a pair of smooth orthonormal vector fields on $\Sigma$.
In particular, there exists $N_0$ such
that for all $N\geq N_0$, the vector fields~\eqref{eq:vfon} are
$\epsilon$-orthonormal, where $\epsilon >0$ is chosen according to
Proposition~\ref{prop:boundedpyramid}. Since $\theta$ is a positive smooth function
on a compact surface, it is bounded above and below by positive
constants. Since $\theta_N^\pm\to\theta$, it follows that $\theta_N$
is also uniformly bounded above and below by positive constants for
$N$ sufficiently large. Using
Proposition~\ref{prop:boundedpyramid} with the rescaled pyramid,
we deduce that the apex $\mathbf{v}$ is close to all the vertices $\mathbf{z}$ of
$\mathbf{f}$ in the sense that, for some constant $C_2>0$ independent of $N$,
$\mathbf{v}$ and $\mathbf{z}$, we have
\begin{equation}
\label{eq:comparerho}
\|\rho'_N(\mathbf{v})-\rho_N(\mathbf{z})\|\leq \frac {C_2}N.
\end{equation}
Since $\ell$ is smooth, there exists a constant $C_3>0$ such that
for every pair of points $w_1,w_2\in \Sigma$ contained in the same
face of $\mathcal{Q}_N(\Sigma)$, we have
\begin{equation}
\label{eq:varell}
\|\ell(w_1)-\ell(w_2)\|\leq\frac { C_3}N.
\end{equation}
In particular, for $\mathbf{v}$ and $\mathbf{z}$ as above,
$$
\|\ell(\mathbf{v})-\rho'_N(\mathbf{v})\|\leq
\|\ell(\mathbf{v})-\ell(\mathbf{z})\|+\|\ell(\mathbf{z})-\rho_N(\mathbf{z}) \| +\|\rho_N(\mathbf{z})-\rho'_N(\mathbf{v})\|.
$$
Since $\mathbf{z}$ and $\mathbf{v}$ belong to the same face,
$\|\ell(\mathbf{v})-\ell(\mathbf{z})\|\leq C_3N^{-1}$ by~\eqref{eq:varell}.
The second term satisfies $\|\ell(\mathbf{z})-\rho_N(\mathbf{z}) \| \leq C_1N^{-1}$
by~\eqref{eq:comparerho1} and the third term
$\|\rho_N(\mathbf{z})-\rho'_N(\mathbf{v})\|\leq C_2N^{-1}$
by~\eqref{eq:comparerho}. The proposition follows, with $C=C_1+C_2+C_3$.
\end{proof}
We deduce the following result, which proves the first part
of Theorem~\ref{theo:maindiscr}:
\begin{theo}
\label{theo:maindiscrweak}
The piecewise linear maps $\ell_N:\Sigma\to{\mathbb{R}}^{2n}$ associated to the triangular
meshes $\rho'_N$ are isotropic. Furthermore
$$
\|\ell-\ell_N\|_{{\mathcal{C}}^0} ={\mathcal{O}}(N^{-1}),
$$
where $\|\cdot\|_{{\mathcal{C}}^0}$ denotes the usual ${\mathcal{C}}^0$-norm for maps $\Sigma\to{\mathbb{R}}^{2n}$.
\end{theo}
\begin{proof}
The first part of the theorem is obvious. By definition of an
isotropic triangular mesh, the piecewise linear map $\ell_N$ associated
to $\rho'_N$ is isotropic.
The following lemma is an immediate consequence of the convergence
statement of
Proposition~\ref{prop:convrhoprime}. It says, roughly, that the
triangles of the mesh $\rho'_N$ have diameter of order ${\mathcal{O}}(N^{-1})$.
\begin{lemma}
\label{lemma:triest}
There exists a constant $C_4>0$ such that for every $N$ sufficiently
large and every pair of vertices $\mathbf{v}_1,\mathbf{v}_2$ of
$\mathcal{T}_N(\Sigma)$ which belong to the same face
$$
\|\rho'_N(\mathbf{v}_1)-\rho'_N(\mathbf{v}_2)\|\leq \frac{C_4}N.
$$
\end{lemma}
Lemma~\ref{lemma:triest} applied to the piecewise linear maps $\ell_N$
shows that
there exists a constant $C_5>0$ such that for every $N$ sufficiently large and $w_1,w_2\in \Sigma$ which belong
to the same triangular face of $\mathcal{T}_N(\Sigma)$, we have
\begin{equation}
\label{eq:varellprimeN}
\|\ell_N(w_1)-\ell_N(w_2)\|\leq \frac{C_5}N.
\end{equation}
For $N$ sufficiently large, we may assume the
control~\eqref{eq:varell}. For $w\in\Sigma$ and $N$ sufficiently
large, we choose a vertex $\mathbf{v}$ of the face of $\mathcal{T}_N(\Sigma)$ which
contains $w$. Then
$$
\|\ell_N(w)-\ell(w)\|\leq \|\ell_N(w)-\ell_N(\mathbf{v})\|+ \|\ell_N(\mathbf{v})-\ell(\mathbf{v})\|
+\|\ell(\mathbf{v})-\ell(w)\|.
$$
The first term is bounded by~\eqref{eq:varellprimeN}, the second term
is bounded by Proposition~\ref{prop:convrhoprime} and the third is
bounded by \eqref{eq:varell}. This proves the theorem.
\end{proof}
\subsection{Piecewise linear immersions}
Recall that a piecewise linear map is an immersion if, and only if, it
is a locally injective map.
The piecewise linear isotropic approximations $\ell_N$ of a smooth
isotropic immersion $\ell:\Sigma\to{\mathbb{R}}^{2n}$ considered in \S\ref{sec:trimesh} are only close to $\ell$ in
${\mathcal{C}}^0$-norm by Theorem~\ref{theo:maindiscrweak}.
Since this estimate is rather weak, we cannot deduce from this fact that
$\ell_N$ is an immersion for $N$ sufficiently large.
However there are many free parameters in our construction:
\begin{itemize}
\item The distortion action on ${\mathscr{M}}_N$ preserves isotropic meshes.
\item The apex of each isotropic pyramid with fixed base lies in an
affine space of dimension at least $2n-3$.
\end{itemize}
These parameters can be used to construct piecewise linear isotropic
immersions, at least when the dimension of the target space is
sufficiently large; it turns out that $n\geq 3$ suffices.
\begin{rmk}
If $\ell$ is an immersion, showing that the piecewise linear isotropic surfaces $\ell_N$ which we construct converge to $\ell$ in ${\mathcal{C}}^{0,\alpha}$-norm would be enough to get piecewise linear isotropic immersions. Unfortunately, even though it follows from the proof of Theorem~\ref{theo:mainquad} in Section~\ref{sec:fpt} that the $\ell_N$ converge to $\ell$ in ${\mathcal{C}}^{k,\alpha}_w$-norms, we cannot get better control than ${\mathcal{C}}^0$ away from diagonal directions, due to the shear action.
\end{rmk}
\subsubsection{Perturbed meshes without flat faces}
We start by perturbing the isotropic quadrangular meshes
$\rho_N\in{\mathscr{M}}_N$
constructed in Theorem~\ref{theo:fpt}.
Our goal is to perturb $\rho_N$ by the shear action,
to make sure that the quadrilaterals
associated to the faces of the mesh satisfy the following proposition
and, in particular,
are not flat in ${\mathbb{R}}^{2n}$.
\begin{prop}
\label{prop:partimm}
For every $N$ sufficiently large, there exists $T_N \in{\mathbb{R}}^{2n}\times {\mathbb{R}}^{2n}$
such that for every $s>0$ small enough,
the quadrangular mesh
$$\rho_N^s=sT_N\cdot\rho_N$$
satisfies the
following properties:
\begin{enumerate}
\label{prop:noflatface}
\item \label{prop:item1} For each face of the quadrangular mesh $\rho^s_N$, the
orthogonal projection of
the corresponding quadrilateral
onto the complex space generated by its diagonals (cf. ~\eqref{eq:cxquad}) is not
flat.
\item \label{prop:itemvert} For every vertex $\mathbf{v}$ of $\mathcal{Q}_N(\Sigma)$, the four vectors
of ${\mathbb{R}}^{2n}$, associated via $\rho_N^s$ to the four edges with vertex $\mathbf{v}$,
span a $3$-dimensional
subspace of ${\mathbb{R}}^{2n}$. Furthermore, any triplet obtained as a
subset of the four above
vectors is a linearly independent family.
\item \label{prop:item2} The associated
triangular meshes $(\rho_N^s)'\in {\mathscr{M}}'_N$ have generic
pyramids. In other words,
for every vertex $\mathbf{v}$ of $\mathcal{T}_N(\Sigma)$ which is not a vertex
of $\mathcal{Q}_N(\Sigma)$, the
four vectors of ${\mathbb{R}}^{2n}$ associated to the four edges of the mesh $(\rho_N^s)'$
at $\mathbf{v}$ are linearly independent.
\end{enumerate}
\end{prop}
\begin{proof}
We use the notations introduced at the beginning of \S\ref{sec:quadtri}: for
a quadrilateral $Q=(A_0,\cdots,A_3)$ of ${\mathbb{R}}^{2n}$, we denote by $X$ the
vector defined by~\eqref{eq:gp}, by $D_0,
D_1$ the diagonals defined by~\eqref{eq:diag}, and by $B_0,B_1$ the
vectors~\eqref{eq:Bi} of $JL$ (cf. \eqref{eq:cxquad}).
The condition that the projection of the quadrilateral onto $L^\CC$ is
flat is equivalent to $\beta_0=\beta_1=0$, where the $\beta_i$ have been defined in~\eqref{eq:decV}.
Assume that the projection is flat. Then for
every $T_+ \in
{\mathbb{R}}^{2n}$ not contained in the hyperplanes $\beta_0=0$ or
$\beta_1=0$, the projection of the quadrilateral $Q_s=(A_0+sT_+,A_1,A_2+sT_+,A_3)$ is not
flat for every $s>0$. Furthermore the optimal pyramid with base $Q_s$ is
generic in the sense of Lemma~\ref{lemma:isotroppyr}.
Let $\rho_N$ be the isotropic quadrangular mesh considered in the
proposition. We choose $T_+\in{\mathbb{R}}^{2n}$ which satisfies the above
property for every quadrilateral associated to a face of the mesh
$\rho_N$ with flat projection onto the space of complexified
directions. This is possible, since we merely need to choose $T_+\in
{\mathbb{R}}^{2n}$ away from a finite collection of $2$-planes.
Then $\rho^s_N:=(sT_+,0)\cdot\rho_N $ satisfies the items
\eqref{prop:item1} and \eqref{prop:item2} of the proposition
provided $s>0$ is sufficiently small.
We just have to show that the condition \eqref{prop:itemvert} can be
satisfied for a suitable choice of deformations.
Given a vertex $\mathbf{v}$ of $\mathcal{Q}_N(\Sigma)$ we consider the four
diagonals
$D_{\mathbf{v},\mathbf{f}}^{\rho_N}$ for the four quadrangular faces $\mathbf{f}$
with vertex $\mathbf{v}$.
The renormalised diagonals of the mesh converge
toward the partial derivatives of $\ell$ at $\mathbf{v}$ (cf.~\eqref{eq:convdiag}) as
$N\to\infty$. Since $\ell$ is an immersion, this shows that the four
vectors $D_{\mathbf{v},\mathbf{f}}^{\rho_N}$ span a space of dimension $2$ or
$3$ for every $N$ sufficiently large. If this space is $3$-dimensional,
\eqref{prop:itemvert} is satisfied with $s=0$ and nothing needs to be
done. Assume that the space of diagonals is $2$-dimensional.
The four vertices connected by an edge to $\mathbf{v}$ define four points
of ${\mathbb{R}}^{2n}$ via $\rho_N$. By assumption, these points lie in an affine plane of
${\mathbb{R}}^{2n}$. If $\rho_N(\mathbf{v})$ does not belong to this plane,
then~\eqref{prop:itemvert} is satisfied. Otherwise, we require the additional condition that $T_+$
does not belong to the plane spanned by the diagonals.
We have to consider every vertex $\mathbf{v}$ as above and this
adds a finite number of conditions for choosing $T_+$. A finite
family of proper subspaces of a vector space never covers the entire
space. Thus it is possible to find the desired $T_+$.
This concludes the proof of the proposition with $T_N=(T_+,0)$.
\end{proof}
\begin{cor}
\label{cor:immae}
Given $N$ large enough, for every $s>0$
sufficiently small, the isotropic triangulation $(\rho_N^s)'$
defines a piecewise linear map $\ell_{N,s}:\Sigma\to{\mathbb{R}}^{2n}$ which
is an immersion at every point $w\in\Sigma$ which does not belong to
the $1$-skeleton of $\mathcal{Q}_N(\Sigma)$. In particular $\ell_{N,s}$
is an immersion at almost every point of $\Sigma$.
\end{cor}
\begin{proof}
By linearity, it is sufficient to check that $\ell_{N,s}$ is
an immersion at every vertex of $\mathcal{T}_N(\Sigma)$ which is not a vertex
of $\mathcal{Q}_N(\Sigma)$. But this is clear for $N$ large enough and $s>0$
sufficiently small, by Proposition~\ref{prop:noflatface}, item \eqref{prop:item2}.
\end{proof}
\begin{rmk}
The above corollary proves the second part of
Theorem~\ref{theo:maindiscr} concerning piecewise linear
isotropic immersions when $n= 2$. Indeed, the the $1$-skeleton of
$\mathcal{Q}_{N}(\Sigma)$ is a finite union of meridians of the torus $\Sigma$.
\end{rmk}
\subsubsection{Further perturbations by moving apexes}
We are going to apply further isotropic perturbations to the triangular meshes
$(\rho_N^s)'\in{\mathscr{M}}'_N$, so that the corresponding piecewise
linear map is also an immersion along the $1$-skeleton of
$\mathcal{Q}_N(\Sigma)$.
By definition, $(\rho_N^s)'$ is defined from the
quadrangular mesh $\rho_N^s$, by adding the apex of an optimal
isotropic pyramid for each face of $\mathcal{Q}_N(\Sigma)$.
The definition of an optimal pyramid is somewhat
arbitrary: for $N$ large enough and $s>0$ sufficiently small, every
face of $\rho_N^s$ satisfies Proposition~\ref{prop:noflatface},
item~\eqref{prop:item1}.
Hence, for each face of $\rho_N^s$, the affine space of apexes of
isotropic pyramids is $(2n-3)$-dimensional.
We deduce the following lemma:
\begin{lemma}
\label{lemma:moveapex}
For $N$ large enough and $s>0$ sufficiently small, there exists a
family of isotropic deformations of the isotropic triangular mesh
$(\rho^s_N)'$. This family is obtained by moving each vertex of
$\mathcal{T}_N(\Sigma)$ which does not belong to $\mathcal{Q}_N(\Sigma)$ within a
$(2n-3)$-dimensional affine space.
\end{lemma}
The key observation, which will make Lemma~\ref{lemma:moveapex} useful for
our purpose, is that $2n-3\geq 3$ for $n\geq 3$.
In particular, we deduce the following proposition:
\begin{prop}
\label{prop:imm}
Assume that $n\geq 3$ and $N$ is sufficiently large. Then, for every $s>0$ sufficiently small, there exist isotropic triangular meshes arbitrarily
close to $(\rho_N^s)'$ which define piecewise linear immersions $\Sigma\hookrightarrow{\mathbb{R}}^{2n}$.
\end{prop}
\begin{proof}
As in Corollary~\ref{cor:immae}, showing that a map is an immersion is a purely local
matter.
We draw a local picture of the triangular mesh $(\rho^s_N)'$, near
the image $O$ of a vertex
$\mathbf{v}$ of $\mathcal{Q}_N(\Sigma)$. In Figure~\ref{figure:localpert}, the
bullet labelled $O$ actually represents $\rho_N^s(\mathbf{v})\in
{\mathbb{R}}^{2n}$. Similarly, all the points $P_i$ and $ A_{ij}\in {\mathbb{R}}^{2n}$
of the picture are
images of the corresponding vertices of $\mathcal{T}_N(\Sigma)$ by the triangular
isotropic mesh $(\rho_N^s)'$.
Notice that the black and blue bullets are prescribed by the
quadrangular mesh $\rho_N^s$, whereas the red bullets are defined by its
triangular refinement $(\rho_N^s)'$. More specifically, the red bullets are the optimal apexes of
the corresponding optimal isotropic pyramids.
\begin{figure}[H]
\begin{pspicture}[showgrid=false](-2,-2)(2,2)
\psscalebox{1.5}{
\psdiamond[linecolor=lightgray,fillstyle=solid,fillcolor=lightgray](0,0)(1,1)
\psset{linecolor=blue }
\psline (-1,1) (1,1)
\psline (-1,0) (1,0)
\psline (-1,-1) (1,-1)
\psline (-1,1) (-1,-1)
\psline (0,1) (0,-1)
\psline (1,1) (1,-1)
\psline (-1,1) (1,-1)
\psline (0,1) (1,0)
\psline (-1,0) (0,-1)
\psline (-1,-1) (1,1)
\psline (-1,0) (0,1)
\psline (0,-1) (1,0)
\color{red}
\rput (-.5,.5){$\bullet$}
\rput (.5,.5){$\bullet$}
\rput (-.5,-.5){$\bullet$}
\rput (.5,-.5){$\bullet$}
\color{black}
\rput (-1,1){$\bullet$}
\rput (-1,0){$\bullet$}
\rput (-1,-1){$\bullet$}
\rput (0,1){$\bullet$}
\rput (0,-1){$\bullet$}
\rput (1,1){$\bullet$}
\rput (1,0){$\bullet$}
\rput (1,-1){$\bullet$}
\color{blue}
\rput (0,0){$\bullet$}
}
\psset{linecolor=black }
\rput (2,0){$P_0$}
\rput (-2,0){$P_2$}
\rput (0,2){$P_1$}
\rput (0,-2){$P_3$}
\color{red}
\rput(-.75,1.25){$A_{12}$}
\rput(+.75,1.25){$A_{01}$}
\rput(-.7,-1.2){$A_{23}$}
\rput(+.7,-1.2){$A_{30}$}
\color{blue}
\rput(.2,.5){$O$}
\end{pspicture}
\caption{Local perturbations of a triangular mesh}
\label{figure:localpert}
\end{figure}
We are now looking for a perturbation $(\rho_N^s)''$ of $(\rho_N^s)'$
by moving the points $A_{ij}$.
We denote by $\ell_{N,s}'$ (resp. $\ell_{N,s}''$) the piecewise
linear map associated to the triangular mesh $(\rho_N^s)'$
(resp. $(\rho_N^s)''$).
\begin{enumerate}
\item \label{item:immae} The property of being an immersion is stable under small
deformations. Thus, for a sufficiently small perturbation,
Corollary~\ref{cor:immae} holds for $\ell''_{N,s}$ as well. In
particular,
$\ell''_{N,s}$ is an immersion at every point $w\in\Sigma$ contained
in the interior of one of the four faces of $\mathcal{Q}_N(\Sigma)$ with vertex
$\mathbf{v}$ (the four smaller squares in the figure).
\item \label{item:perthyp} Suppose that we can choose a perturbation so that
$\ell''_{N,s}$
is an immersion at the vertex
$\mathbf{v}$ (corresponding to the point $O$). By linearity, this implies that $\ell''_{N,s}$
is an immersion at every interior point $w\in\Sigma$
of the union of the shaded faces of the triangulation $\mathcal{T}_N(\Sigma)$ (in gray on the picture).
\end{enumerate}
If we are able to show that there exists a perturbation which
satisfies condition \eqref{item:perthyp} above, we deduce,
together with property \eqref{item:immae}, that the piecewise
linear map $\ell''_{N,s}$ is an immersion at every interior point $w$
of the union of the four faces of $\mathcal{Q}_N(\Sigma)$ with vertex $\mathbf{v}$
(the big square in Figure~\ref{figure:localpert}).
In conclusion, it suffices to prove the following lemma:
\begin{lemma}
\label{lemma:immew}
If $(\rho_{N}^s)''$ is a triangular mesh sufficiently close to
$(\rho_{N}^s)'$, such that the corresponding piecewise linear map
$\ell''_{N,s}:\Sigma\to{\mathbb{R}}^{2n}$ is an immersion at every vertex of
$\mathcal{Q}_N(\Sigma)$, then $\ell''_{N,s}$ is an immersion at every point
of $\Sigma$.
\end{lemma}
We merely need to show that there exists an isotropic perturbation
$(\rho_{N}^s)''$ which satisfies the hypothesis of
Lemma~\ref{lemma:immew}, and the proof of the proposition will be
complete.
Consider the mesh $(\rho_N^s)'$ represented locally by
Figure~\ref{figure:localpert}. There are $2n-3$ degrees of freedom for
perturbing each red vertex $A_{ij}$, in such a way that the triangular
mesh remains isotropic.
We would like
to put them in general position, so that the piecewise linear map
is an
immersion at $O$.
First, notice that local injectivity is partially satisfied by
$(\rho_N^s)'$ for every $s>0$ sufficiently small. Indeed, by
Corollary~\ref{cor:immae}, two contiguous triangles of the mesh $(\rho_N^s)'$
in a common pyramid, for instance $(OP_0A_{01})$ and $(OA_{01}P_1)$, are contained
in distinct planes intersecting along a line of ${\mathbb{R}}^{2n}$, which in this particular
case is $(OA_{01})$.
Consider now the two triangles $(OP_0A_{01})$ and $(OA_{30}P_0)$ of ${\mathbb{R}}^{2n}$. There are two
possibilities:
\begin{enumerate}
\item The line $(OA_{30})$ is not contained in the plane of the triangle
$(OP_0A_{01})$; then the two triangles lie in distinct planes intersecting
along the line $(OP_0)$.
\item The line $(OA_{30})$ is
contained in the plane of the triangle $(OP_0A_{01})$. In this case,
the associated piecewise linear map is not locally injective at $O$.
\end{enumerate}
In the second situation, we can always find an arbitrarily small
perturbation of the point $A_{30}$ which brings us back to the first situation, such that the associated piecewise
linear map is still isotropic.
Indeed, as pointed out above, there is a family of isotropic
perturbations of the point $A_{30}$ of dimension $2n-3\geq 3$.
Such a family cannot be contained in the plane of $(OP_0A_{01})$
for obvious dimensional reasons. Thus, we may find the desired
arbitrarily small isotropic perturbations of $A_{30}$ such that we are
in the first situation.
We consider now the case where we have two non-contiguous triangles,
for instance $(OP_0A_{01})$ and
$(OP_1A_{12})$. We know that the three lines $(OP_0)$, $(OA_{01})$ and
$(OP_1)$ span a $3$-dimensional space by Corollary~\ref{cor:immae}.
By moving $A_{12}$ slightly within its $(2n-3)$-dimensional family
of isotropic perturbations, we can make sure that the intersection of
the planes containing the triangles $(OP_0A_{01})$ and $(OP_1A_{12})$ reduces to the point $O$.
There are other situations that we should handle as well.
For instance, we consider the triangles $(OP_0A_{01})$ and
$(OA_{12}P_2)$.
By Proposition~\ref{prop:noflatface}, item \eqref{prop:itemvert},
the lines $(OP_0)$ and
$(OP_2)$ are distinct. Up to a small isotropic perturbation obtained by moving
$A_{01}$ within its $(2n-3)$-dimensional family, we may assume that
$A_{01}$ does not belong to the plane $(OP_0P_2)$.
By moving $A_{12}$ similarly, we may assume that $A_{12}$ does not
belong to the plane that contains the triangle $(OP_0A_{01})$.
Eventually, the two planes that contain $(OP_0A_{01})$
and $(OA_{12}P_2)$, after perturbation, intersect only at the point $O$.
Other cases are dealt with similarly. Eventually we have proved that
there are arbitrarily small isotropic deformations of $(\rho_N^s)'$,
obtained by moving the points $A_{ij}$,
such that the eight triangles of the mesh around $O$ lie in distinct
planes. In particular, the corresponding piecewise linear map is
an immersion at the vertex $\mathbf{v}$.
By induction, we can apply further similar perturbations, so that the
isotropic piecewise linear map is an immersion at every vertex of
$\mathcal{Q}_N(\Sigma)$.
This proves the proposition.
\end{proof}
\subsection{Proof of Theorem~\ref{theo:maindiscr}}
Gathering our results provides a complete proof of one of our main
results:
\begin{proof}[Proof of Theorem~\ref{theo:maindiscr}]
The existence of ${\mathcal{C}}^0$-approximations of smooth isotropic
immersions $\ell:\Sigma\to{\mathbb{R}}^{2n}$ by piecewise linear isotropic
maps is proved in Theorem~\ref{theo:maindiscrweak}.
The statement on the existence of approximations by piecewise linear
isotropic immersions is proved in Proposition~\ref{prop:imm}.
The remaining case, for $n=2$, is a consequence of Corollary~\ref{cor:immae}.
\end{proof}
\section{Discrete moment map flow}
\label{sec:dmmf}
The moduli space ${\mathscr{M}}=\{ f:\Sigma\to M, f^*[\omega]=0\}$, where $\Sigma$ is a closed
surface endowed with an area form $\sigma$, was introduced in
\S\ref{sec:dream}. If $M$ is a K\"ahler
manifold, then ${\mathscr{M}}$ has an induced formal K\"ahler structure
$({\mathscr{M}},{\mathfrak{J}},{\mathfrak{g}},\Omega)$. The
group ${\mathcal{G}}=\mathrm{Ham}(\Sigma,\sigma)$ acts isometrically on ${\mathscr{M}}$.
The action is Hamiltonian, with moment map $\mu:{\mathscr{M}}\to
C^\infty_0(\Sigma)$, given by $\mu(f)=\frac{f^*\omega}{\sigma}$. In this
setting, a natural moment map flow is defined (cf. \S\ref{sec:mmf}) by
$$
\frac{df}{dt} = -\frac 12 \mathrm{grad} \|\mu\|^2.
$$
The properties of the above flow shall be studied
in a sequel to this
work~\cite{JRT}. For the time being, we merely provide a numerical
simulation of the above flow, implemented in the program
\emph{Discrete Moment Map Flow (DMMF)}, hosted on the
webpage:
\begin{center}
\href{http://www.math.sciences.univ-nantes.fr/~rollin/index.php?page=flow}{http://www.math.sciences.univ-nantes.fr/\textasciitilde{}rollin/}.
\end{center}
The idea is to approximate the flow, which is an evolution equation on
an infinite dimensional space of maps, by an analogous finite dimensional
flow.
The finite dimensional flow is expected to
converge in some sense to the infinite dimensional flow as
$N\to\infty$, at least in favorable situations, but this is part of a broader project
to be expanded later in~\cite{JRT}.
\subsection{Definition of the finite dimensional flow}
In the rest of this section, we focus on the case where $M={\mathbb{R}}^4$,
with its standard K\"ahler structure, and
$\Sigma$ is a surface diffeomorphic to a torus, endowed with a
covering map $p:{\mathbb{R}}^2\to\Sigma$ whose group of deck
transformations $\Gamma$ is a lattice of ${\mathbb{R}}^2$. This data allows us to define the
quadrangulations $\mathcal{Q}_N(\Sigma)$.
The space
of quadrangular meshes ${\mathscr{M}}_N$
is seen as a discrete analogue of the moduli space ${\mathscr{M}}$. The moment map $\mu$ has a discrete
version as well, given by
$$
\mu_N^r:{\mathscr{M}}_N\to C^2(\mathcal{Q}_N(\Sigma)).
$$
The space of discrete functions $C^2(\mathcal{Q}_N(\Sigma))$ is also
understood as a discrete analogue of $C^\infty(\Sigma)$. Recall that
this space of discrete functions is endowed with an inner product
$\ipp{\cdot,\cdot}$, which is an analogue of the $L^2$-inner product
induced by $\sigma$ (cf. \S\ref{sec:ip}) and denoted
$\ipp{\cdot,\cdot}$ as well. We denote by $\|\cdot\|$ the norm induced by the
inner product $\ipp{\cdot,\cdot}$. Then
\begin{align*}
D\|\mu_N^r \|^2|_\tau\cdot V & =2\ipp{D\mu_N^r|_\tau \cdot V,
\mu_N^r(\tau) }\\
&=- 2\ipp{D\mu_N^r|_\tau\circ J \cdot JV, \mu_N^r(\tau) }\\
&=2 \ipp{\delta_\tau (JV), \mu_N^r(\tau)}\\
&=2\ipp{JV,\delta_\tau^\star \mu_N^r(\tau)}
\end{align*}
hence
\begin{equation}
\label{eq:gradphi}
-\frac 12 D\|\mu_N^r \|^2|_\tau\cdot V =\ipp{V,J\delta_\tau^\star \mu_N^r(\tau)},
\end{equation}
where
$$
\delta_\tau = - D\mu_N^r|_\tau\circ J.
$$
Its adjoint $\delta_\tau^\star$ is defined by
$\ipp{\delta_\tau V,\phi}=\ipp{V,\delta_\tau^\star\phi}$.
For each map $u:{\mathscr{M}}_N\to C^2(\mathcal{Q}_N(\Sigma))$,
we may define a formal gradient vector field on the moduli space
$$\mathrm{grad}\, u :{\mathscr{M}}_N\longrightarrow
C^0(\mathcal{Q}_N(\Sigma))\otimes{\mathbb{R}}} \def\LL{{\mathbb{L}}} \def\TT{{\mathbb{T}}} \def\EE{{\mathbb{E}}} \def\FF{{\mathbb{F}}^4$$
by
$Du|_\tau\cdot V =\ipp {\mathrm{grad}\, u|_\tau , V }$.
Thus, by~\eqref{eq:gradphi},
$$
-\frac 12 \mathrm{grad} \|\mu_N^r\|^2|_\tau = J\delta_\tau^\star \mu_N^r(\tau),
$$
and we can define a downward gradient flow by
$$
\frac{d\tau}{dt} = - \frac 12 \mathrm{grad} \|\mu_N^r\|^2,
$$
which is equivalent to
\begin{equation}
\label{eq:discrf}
\boxed{\frac{d\tau}{dt} = J\delta_\tau^\star\mu_N^r(\tau). }
\end{equation}
\begin{dfn}
A solution $\tau_t:I\to{\mathscr{M}}_N$ of the ordinary differential
equation~\eqref{eq:discrf}, where $I$ is an open interval of ${\mathbb{R}}$, is called a solution of the discrete moment
map flow.
\end{dfn}
\begin{rmk}
The discrete moment map flow is an ordinary differential equation
with smooth coefficients on
the affine space ${\mathscr{M}}_N$. The solution exists for short time but
might blow up in finite time. The general behavior of the flow
will be addressed in a sequel to this work~\cite{JRT}.
\end{rmk}
The flow has the typical properties of ODEs with smooth coefficients:
\begin{prop}
\label{prop:convflow}
Assume that $\tau_t:[0,C)\to{\mathscr{M}}_N$ is a maximal solution of
the discrete moment map flow. If $\tau_t$ is bounded for $t\in [0,C)$, then
$C=+\infty$. If in addition $\tau_t$ converges to some
$\tau_\infty$, then
$\mu_N^r(\tau_\infty)\in\ker\delta_{\tau_\infty}^*$.
\end{prop}
\begin{rmk}
If the function $\|\mu^r_N\|^2$ on ${\mathscr{M}}_N$ were Morse, any bounded
flow $\tau_t$
would automatically converge toward a critical point of the function.
Although we are not trying to prove this fact, all our experiments with the DMMF program seem to indicate that the
flow is generically bounded and convergent. If $\ker\delta_{\tau_\infty}^\star$
is reduced to the constants, the conclusion of Proposition~\ref{prop:convflow} implies that
$\mu_N^r(\tau_\infty)$ is a constant discrete function and, by
Stokes' theorem, $\tau_\infty$ must be an
isotropic quadrangular mesh. Notice that the fact that the kernel of the
operator $\delta_\tau^\star$ is $1$-dimensional holds for generic
$\tau$ according to Proposition~\ref{prop:generickernel}. This is also
confirmed by all the experiments using the DMMF program.
\end{rmk}
\begin{proof}
If $C$ is finite, and $\frac {d\tau}{dt}$ is bounded, then $\tau_t$
must converge to some $\tau_C$ as $t\to C$.
This contradicts the fact that $C$ is
maximal.
Hence, if $C$ is finite, $\frac {d\tau}{dt}$ must be unbounded.
In particular $\tau_t$ cannot remain in a bounded set, as the RHS of
the evolution equation would be bounded.
In conclusion, if $\tau_t$ is bounded, we have $C=+\infty$.
If $\tau_t$ converges towards $\tau_\infty$, the limit must be a fixed
point of the flow and
$\delta^\star_{\tau_\infty}\mu_N^r(\tau_\infty)=0$.
\end{proof}
\begin{rmks}
The kernel of the operator $\delta^\star_\tau$ contains the constants. In the general setting the kernel may not be reduced to the constants, and Proposition~\ref{prop:convflow} does not allow us to conclude that limits of the discrete flow are isotropic quadrangular meshes. It seems reasonable, especially in view of the experimental results of \S\ref{sec:discflow}, to expect that the flow is always trapped in a compact set of ${\mathscr{M}}_N$. The study of these questions shall be carried out in a sequel to this work~\cite{JRT}.
\end{rmks}
\subsection{Implementation of the discrete flow}
\label{sec:discflow}
\subsubsection{Particular lattices}
Recall that the quadrangulations $\mathcal{Q}_N(\Sigma)$ are defined by identifying the
torus $\Sigma$ with a quotient, via the diffeomorphism
$\Phi:{\mathbb{R}}^2/\Gamma\to\Sigma$ induced by the covering map $p:{\mathbb{R}}^2\to\Sigma$.
We merely have to make a choice for the
lattice $\Gamma$, in order to define $\mathcal{Q}_N(\Sigma)$ and a
corresponding discrete moment map flow. This choice is arbitrary, and
a sufficiently sophisticated program could deal with any choice.
This is not the case of the DMMF program, however, which is based on
the choice of lattice
$$
\Gamma'' = {\mathbb{Z}} e_1\oplus {\mathbb{Z}} e_2,
$$
and the surface $\Sigma
={\mathbb{R}}^2/\Gamma''$ already introduced in~\S\ref{sec:examples}.
Then $\Gamma''\subset
\Lambda_N$ for every positive integer $N$. The quadrangulation $\mathcal{Q}_N({\mathbb{R}}^2)$
descends to the quadrangulation $\mathcal{Q}_N(\Sigma)$ of the quotient.
The quadrangulation has $N^2$ vertices, and a mesh in ${\mathscr{M}}_N$ can be
stored as an $N\times N$ array with entries in ${\mathbb{R}}^4$.
\subsubsection{The Euler method}
It is easy to provide numerical approximations of an ODE such as the
discrete moment map flow by the Euler method.
We consider discrete time values $t_i=i\Delta t$, where $i$ is an
integer and
$\Delta t>0$ is a small time step increment. Starting at
time $t_0=0$ with a mesh $\tau_0\in{\mathscr{M}}_N$, we compute $\tau_1$,
$\tau_2$, and so on, as follows: given $\tau_i \in {\mathscr{M}}_N$
at time $t=i\Delta t$ we compute
$$V_{i}=\delta_{\tau_i}^\star\mu_N^r(\tau_i)$$
and define
$$
\tau_{i+1}= \tau_i +\Delta t\cdot JV_i .
$$
The above computations are easy to carry out and
the operator $\delta_\tau^\star$ is explicitly given by Lemma~\ref{lemma:operators}.
Starting from any quadrangular mesh, we can compute the above flow very
quickly in real time on an ordinary machine, whenever $N$ is not too
big (for instance $N\leq 100$ on our laptop).
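In code, one step of this scheme can be sketched as follows. This is a toy illustration, not the DMMF program: the discrete moment map $\mu_N^r$ is replaced by a naive per-face signed symplectic area, the inner product by a plain sum of squares, and the vector $J\delta^\star_\tau\mu_N^r(\tau)$ (which equals $-\frac12\mathrm{grad}\|\mu_N^r\|^2$) by a finite-difference gradient rather than the explicit operator of Lemma~\ref{lemma:operators}.

```python
import numpy as np

def symplectic_density(tau):
    """Toy stand-in for the discrete moment map mu_N^r (an illustration,
    not the paper's definition): the signed symplectic area of each face
    of a doubly periodic N x N quadrangular mesh tau in R^4, with
    tau.shape == (N, N, 4) and omega = dx1 ^ dy1 + dx2 ^ dy2."""
    def omega(u, v):
        return (u[..., 0] * v[..., 1] - u[..., 1] * v[..., 0]
                + u[..., 2] * v[..., 3] - u[..., 3] * v[..., 2])
    a = tau
    b = np.roll(tau, -1, axis=0)   # neighbour in the first lattice direction
    d = np.roll(tau, -1, axis=1)   # neighbour in the second lattice direction
    c = np.roll(b, -1, axis=1)     # opposite corner of the face
    # signed area of the quad (a, b, c, d) = sum over its two triangles
    return 0.5 * (omega(b - a, c - a) + omega(c - a, d - a))

def energy(tau):
    """Crude analogue of ||mu_N^r(tau)||^2: plain sum of squares."""
    return float(np.sum(symplectic_density(tau) ** 2))

def euler_step(tau, dt, eps=1e-6):
    """One explicit Euler step of the downward gradient flow,
    tau <- tau - (dt/2) * grad energy(tau), with the gradient
    approximated by central finite differences."""
    tau = tau.copy()
    grad = np.zeros_like(tau)
    flat, g = tau.ravel(), grad.ravel()   # views into tau and grad
    for k in range(flat.size):
        saved = flat[k]
        flat[k] = saved + eps
        e_plus = energy(tau)
        flat[k] = saved - eps
        e_minus = energy(tau)
        flat[k] = saved
        g[k] = (e_plus - e_minus) / (2.0 * eps)
    return tau - 0.5 * dt * grad
```

For small $\Delta t$ each step decreases the energy, mirroring the behavior of the downward gradient flow.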
\subsubsection{Visualization}
A choice has to be made for the visualization of each mesh
$\tau\in{\mathscr{M}}_N$ on a computer screen. The basic idea is to choose a
projection of ${\mathbb{R}}^4$ onto a $3$-dimensional manifold and represent a
mesh as a surface in a $3$-dimensional space.
We explain now the choice made in the DMMF program, which may not be
the best for certain situations:
we perform a radial projection of the vertices of $\tau$ onto the unit
sphere ${\mathbb{S}}^3$ of ${\mathbb{R}}^4$, centered at the origin. This projection is
followed by a stereographic projection of the sphere minus a point
onto one of its tangent spaces, identified with
${\mathbb{R}}^3$. Once the positions of the projections of the vertices of
$\tau$ in ${\mathbb{R}}^3$ are computed, we can draw the quadrilaterals
associated to the
faces in ${\mathbb{R}}^3$. A library like OpenGL allows us to represent a
quadrangular mesh of ${\mathbb{R}}^3$ in perspective.
We fill the faces with a range of colors which depends on the symplectic
density of each face (i.e. the value of $\mu_N^r(\tau)$ on this face).
In addition, motions of the mouse are used to precompose these projections
with Euclidean rotations of ${\mathbb{R}}^4$. This technique allows the user to look at surfaces from
various angles using the mouse.
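The two projections can be sketched as follows; the formulas are our reconstruction of the description above, not the program's actual code, and the choice of pole and tangent space is illustrative:

```python
import numpy as np

def project_r4_to_r3(points, pole=None):
    """Sketch of the visualization map: radially project points of
    R^4 \\ {0} onto the unit sphere S^3, then stereographically project
    the sphere minus `pole` onto the tangent space at -pole, identified
    with R^3 (assumed formulas, not the DMMF program's exact code)."""
    points = np.asarray(points, dtype=float)
    if pole is None:
        pole = np.array([0.0, 0.0, 0.0, 1.0])  # illustrative choice e_4
    # radial projection onto S^3
    p = points / np.linalg.norm(points, axis=-1, keepdims=True)
    t = p @ pole                      # component along the pole direction
    ortho = p - t[..., None] * pole   # component orthogonal to the pole
    # stereographic projection to the plane tangent at -pole; for the
    # default pole e_4 the orthogonal part has last coordinate 0, so we
    # simply drop it
    return 2.0 * ortho[..., :3] / (1.0 - t)[..., None]
```

The point $-\mathrm{pole}$ maps to the origin, and points close to the pole are sent far away, as expected of a stereographic projection.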
\subsubsection{The DMMF code}
We found that the \emph{Processing} language,
a Java dialect, was extremely efficient for coding the DMMF program.
The source code and more information on the technical aspects of the
DMMF program are available on the homepage:
\begin{center}
\href{http://www.math.sciences.univ-nantes.fr/~rollin/index.php?page=flow}{http://www.math.sciences.univ-nantes.fr/\textasciitilde{}rollin/}.
\end{center}
The program starts the flow by sampling various examples of parametrized
tori in ${\mathbb{R}}^4$. From an experimental point of view, our numerous
observations seem to
indicate that the flow always converges, that the convergence is
fast, and that the limits are
isotropic. Figure~\ref{figure:dmmf} shows an example of (static) output of the
DMMF program. This quadrangular mesh has diameter of order $1$ and
symplectic density of order $10^{-8}$. The reader is encouraged to
experiment directly with the more interactive and dynamic aspects of the program.
\begin{figure}[H]
\includegraphics[width=400bp]{torus-trefoil.eps}
\caption{Isotropic quadrangular mesh}
\label{figure:dmmf}
\end{figure}
\vspace{10pt}
\bibliographystyle{abbrv}
\section{Introduction}\label{Sec:Introduction}
General Relativity (GR) is widely accepted to be the correct description of gravity at Solar System scales. In this regime, not only do its predictions show remarkable agreement with astrophysical data, but precise measurements of phenomena such as light deflection around the Sun, perihelion shift of Mercury, and others, rule out many modifications to GR. Nonetheless, GR exhibits weaknesses both at very high and very low energy regimes.
At high energies, unavoidable singularities arise during gravitational collapse and the so-called renormalization problem limits the analysis of quantum states. At low energies, in particular on cosmological scales, GR relies on the presence of an unknown component in order to explain the observed accelerated expansion of the Universe. The previous limitations suggest that GR may need modifications for both extreme energy regimes. Furthermore, modifications in the two regimes may be related, as high energy corrections to GR might leak down to cosmological scales, showing up as low energy corrections. In this paper we explore possible connections between these two regimes. In particular, we will show how recent local constraints on the speed of gravity waves limit the black hole solutions of modified gravity theories.
In the past decades a large variety of gravity theories have been proposed \cite{Clifton:2011jh}. They have been extensively studied and constrained with cosmological data from the CMB and large scale structure. However, strong constraints have recently been imposed by the detection of the gravitational wave signal GW170817 from a neutron star binary merger by LIGO and VIRGO \cite{PhysRevLett.119.161101}, and its optical counterpart (the gamma ray burst GRB 170817A) \cite{2041-8205-848-2-L12,2041-8205-848-2-L13,2041-8205-848-2-L14,2041-8205-848-2-L15,2017Sci...358.1556C}. The delay of the optical signal was 1.7 seconds, which places a stringent constraint on the propagation speed of gravity waves $c_T$. Indeed, it was found that $|c_T^2-1|<1\times 10^{-15}$ (in units where the speed of light is unity). As a result, a large class of modified gravity theories is now highly disfavored, as argued in \cite{Lombriser:2015sxa,Lombriser:2016yzn,Baker:2017hug,Creminelli:2017sry,Sakstein:2017xjx,Ezquiaga:2017ekz}.
On the other hand, there have also been efforts to test gravity theories from observations of black holes \cite{Berti:2015itd, Cardoso:2016ryw, Johannsen:2015mdd}. GR predicts that black holes are solely characterized by a few ``charges" -- their mass $m$, angular momentum $a$ and electric charge $Q$ -- through what is known as the no-hair theorem. In extensions to GR additional degrees of freedom are usually introduced, which may add a new kind of charge to the spacetime solution. In this case, the metric carries additional information besides the mass, angular momentum and electric charge, and we say we are in the presence of a hairy black hole. These extra degrees of freedom will be carriers of fifth forces which may be detected from star orbits around the black holes \cite{Will:2007pp, Psaltis:2015uza, Hees:2017aal} or from the overall structure of accreting gas near the event horizon \cite{Psaltis:2010ca, Loeb:2013lfa, Johannsen:2016vqy}. These two examples correspond to non-dynamical tests which probe the stationary spacetime of a black hole. We note however, that even in the case where theories can avoid hair and have the same stationary black hole solutions as GR, new signatures may arise in dynamical situations; for example, when black holes are formed in a binary merger, the ringdown signal may carry a new set of modes that can be traced back to the new degrees of freedom \cite{Barausse:2008xv,Tattersall:2017erk, Berti:2018vdi}. In general, dynamical tests allow us to distinguish models that have the same stationary spacetime.
In this paper we consider gravity models that have $c_T=1$ on a cosmological background -- where the extra degrees of freedom can play a significant role on cosmological scales -- and, as a first approach, expose the relation between the presence of hairy static black hole solutions and the speed of gravity waves. Before we proceed, it is important to clarify what type of ``hair" we will be considering here. We will consider hair to be any modification to GR that can be measured with non-dynamical tests. In particular, hair will be a permanent charge in a static, spherically symmetric, and asymptotically flat spacetime. When the metric profile is characterized by a new global charge (different to mass, spin or electric charge) we will say we have ``primary" hair, and if the profile has modifications that depend on the same charges as in GR, we say we have ``secondary" hair \cite{Herdeiro:2015waa, Bambi:2017}. This distinction is important to understand the number of independent parameters that fully determine the black hole solution, but both have physical consequences as they induce a geometry different to GR.
We mention that in some cases it may be possible to construct ``stealth" black holes, where the geometry of spacetime is unchanged from the corresponding GR solution (i.e.~no new charge), but the black hole is dressed with a non-trivial additional field profile. For example, some stealth black hole solutions in scalar-tensor and vector-tensor theories can be found in \cite{Babichev:2017guv,Heisenberg:2017hwb}. In this situation, non-dynamical tests will not be able to distinguish these models from GR and, therefore, for the purpose of this paper, these examples will not be considered as hairy solutions.
With a clear definition of hair, we explore static, asymptotically flat black hole solutions on different modified gravity theories that are cosmologically relevant. We find that all scalar-tensor theories considered here have no hair at all or hair with negligible effects (although exceptions can be found in theories with no cosmological effects). Other models such as vector-tensor or bimetric theories more easily lead to hairy black holes regardless of their cosmological solution. We find examples with primary and secondary hair. Our study allows us to identify the theories which may lead to observable signatures on {\it both} cosmological and astrophysical scales, and can be used to build a roadmap for a coordinated study with future large scale structure surveys and gravitational wave observations.
No-hair theorems for GR and modified gravity (e.g.~\cite{Sotiriou:2015pka}) have been constructed under quite strict conditions, e.g.~the spacetime must be asymptotically flat and hair must be permanent. It is straightforward to break these conditions in a reasonable way \cite{Cardoso:2016ryw} and thus obtain hairy solutions, even in GR. For instance, changing the boundary conditions may lead to the Schwarzschild-de Sitter solution, which has a metric such that $g_{00}=1-2m/r-\Lambda r^2/3$, i.e.~an extra term proportional to $\Lambda$ which might be considered hair. Alternatively, in scalar-tensor theories, a time-dependent boundary condition for the scalar field can anchor hair on the surface of the black hole \cite{Jacobson:1999vr}. In the same way, more complex extra fields can be arranged to form hair. A notable example arises with a complex scalar field \cite{Herdeiro:2015waa} or with coupled dilaton-Maxwell systems \cite{PhysRevD.97.064032}. More recently, it has been shown that it is possible to construct solutions in which massive scalar fields hover around black holes for an extended period of time \cite{Cardoso:2011xi}, leading to long-lived but not permanent hair. Given that we live in a cosmological spacetime with an abundance of fields, all of these examples show that hair can easily be present in black holes under reasonable assumptions.
Nevertheless, all the mechanisms that have been proposed so far lead to very mild hair which, arguably, may be unobservable. For example, ``De Sitter" hair is remarkably weak compared to the usual Newtonian potential and any cosmological boundary condition that might lead to scalar hair will be highly suppressed. So if one can show that a theory must satisfy the no-hair theorem, it is extremely likely that any attempts at breaking it solely through changing the boundary condition will lead to effects which are too weak to be detected as a fifth force (although they might emerge in a stronger gravitational regime, like a black hole merger). This means that no-hair theorems are a useful guide to undertake a rough census of where to look in the panorama of gravitational theories.
This paper is organized as follows. In Section \ref{Sec:TensorSpeed} we discuss the speed of gravitational waves in the context of modified gravity. In Section \ref{Sec:ST} we focus on scalar-tensor theories with $c_T=1$ and discuss the presence of hairy static black hole solutions. In Sections \ref{Sec:Other} we discuss mainly vector-tensor theories as well as other gravitational models with $c_T=1$ that do evidence hair, such as bimetric theories. Finally, in Section \ref{Sec:conclusions} we summarise our results and discuss their consequences.
Throughout this paper we will use natural units in which $G_N=c=1$.
\section{The speed of gravitational waves.}\label{Sec:TensorSpeed}
GR is a single metric theory for a massless spin-2 particle, and hence it propagates two physical degrees of freedom corresponding to two polarizations. On any given background spacetime, GR predicts that both polarizations propagate locally along null geodesics, and thus gravitational waves travel at the same speed as that of electromagnetic waves. This feature is particular of GR where Lorentz invariance is locally recovered, and hence all massless waves are expected to propagate at the same speed. However, such a feature can easily change in a theory of gravity where additional degrees of freedom are coupled to the metric in a non-trivial way. These additional fields can take special configurations in different backgrounds, and define a preferred direction that will spontaneously break local Lorentz invariance. In this case, there will be an effective medium for propagation of gravity waves, and their speed will change. Furthermore, depending on the configurations of the additional degrees of freedom, the speed of gravity waves could be anisotropic and even polarization dependent \cite{Tso:2016mvv}.
The speed of gravity waves can be used to discriminate and test various modified gravity theories. This has been a topic of special interest in cosmology where a number of models have been proposed. In this case, the metric background is given by:
\begin{equation}
d\bar{s}^2=\bar{g}_{\mu\nu}dx^\mu dx^\nu=-dt^2+a(t)^2d\vec{x}^2,
\end{equation}
where $a(t)$ is the scale factor describing the expansion of the universe. Gravitational waves are described by small perturbations of the metric and thus we can write the total metric as:
\begin{equation}
g_{\mu\nu}=\bar{g}_{\mu\nu}+h_{\mu\nu},
\end{equation}
where $h_{\mu\nu}$ describes the amplitude of the waves and carries the information of all the metric polarizations. In this background, additional gravitational fields such as scalars or vectors will generically have a time-dependent solution which, even in a local frame, will define a preferred direction of time. It has been shown that for single-metric gravity models propagating a massless spin-2 particle, the action for gravity waves can generically be written as:
\begin{equation}\label{ActionTensors}
S=\frac{1}{2}\int d^3x dt \, M_*^2(t)\left[ \dot{h}_A^2 - c_T^2(t)( \vec{\nabla} h_A)^2\right],
\end{equation}
where we have expanded $h_{\mu\nu}$ in two polarization components $h_A$ with $A=+,\times$\footnote{Bimetric gravity theories will propagate additional tensor modes that will generically be coupled to $h$, and hence the action for gravity waves will be different to that in eq.~(\ref{ActionTensors}). Nevertheless, a similar analysis can be done to find $c_T$ (or some dispersion relation) in FRW.}. Here, $M_*$ is an effective Planck mass and $c_T$ is the propagation speed of gravity waves. Both of these quantities may in general depend on time, and thus in this case the speed will always be isotropic and polarization independent. It is usual that $c_T$ depends on the background solution of the additional gravitational degrees of freedom.
Let us consider one particular example of a shift-symmetric quartic Horndeski gravity theory \cite{Horndeski:1974wa,Deffayet:2009wt} given by:
\begin{align}
S=&\frac{1}{2}\int d^4x \, \sqrt{-g}\left[ G(X) R \nonumber \right. \\
&\left. + G_{,X}(X) \left( (\Box \phi)^2-\nabla_\mu\nabla_\nu\phi \nabla^\mu\nabla^\nu\phi \right) \right],
\end{align}
where $\phi$ is an additional gravitational scalar field, and $G$ is an arbitrary function of the kinetic term $X=-\frac{1}{2}\nabla_\mu \phi \nabla^\mu\phi$, with $G_{,X}$ its derivative with respect to $X$. On a cosmological background, we find $c_T^2=1/(1-\frac{2XG_{,X}}{G})$. Writing $G\simeq G_0+XG_{,X}$, one sees that if $XG_{,X}\ll G_0$ then $c_T\simeq1$. This can occur if $XG_{,X}$ is small, but also if $G_0$ is large, i.e.~if the contribution of the scalar field to the overall cosmological dynamics is negligible. In this paper we will consider the case when the contribution to the background dynamics is {\it not} negligible, i.e.~the extra degree of freedom has a relevant impact on cosmological scales.
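Both limits can be checked with a quick numerical sanity test; the helper below assumes only the formula for $c_T^2$ quoted above, and the choice $G(X)=G_0+\alpha X$ is purely illustrative, not a model from the literature:

```python
def cT_squared(G, G_X, X):
    """Squared propagation speed of gravity waves for the quartic
    shift-symmetric Horndeski example: c_T^2 = 1 / (1 - 2 X G_X / G)."""
    return 1.0 / (1.0 - 2.0 * X * G_X(X) / G(X))

def make_G(G0, alpha):
    """Illustrative family G(X) = G0 + alpha * X (our assumption)."""
    return (lambda X: G0 + alpha * X), (lambda X: alpha)

# c_T^2 -> 1 when the coupling alpha*X is small ...
G, G_X = make_G(G0=1.0, alpha=1e-6)
small_coupling = cT_squared(G, G_X, X=0.1)

# ... or when G0 dominates, i.e. the scalar is cosmologically irrelevant.
G, G_X = make_G(G0=1e6, alpha=1.0)
large_G0 = cT_squared(G, G_X, X=0.1)
```

When neither condition holds (e.g.\ $G_0$, $\alpha$, $X$ all of order one), $c_T^2$ deviates from unity by an order-one amount, which is the regime excluded by GW170817.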
There are, of course, cases in which the additional degrees of freedom do not affect the propagation speed of gravity waves. A particular, well-studied, example is Jordan-Brans-Dicke theory \cite{Jordan:1959eg,Brans:1961sx} given by:
\begin{equation} \label{JBDAction}
S=\frac{1}{2}\int d^4x \sqrt{-g}\left[ \phi R-\frac{\omega}{\phi} (\nabla_\mu \phi \nabla^\mu\phi) \right],
\end{equation}
where $\omega$ is an arbitrary constant. In this case $c_T=1$.
\section{Scalar-Tensor Theories}\label{Sec:ST}
Scalar-tensor theories have been extensively studied in both the strong gravity and cosmological regimes. Much effort in recent years has been put into researching general theories of a scalar field non-minimally coupled to a metric; from Horndeski gravity \cite{1974IJTP...10..363H}, to Beyond Horndeski \cite{Gleyzes:2014dya,Zumalacarregui:2013pma}, and Degenerate Higher Order Scalar Tensor (DHOST) theories \cite{Langlois:2017mdk}. Furthermore, scalar-tensor theories are ubiquitous in that they appear as some limit of other theories of gravity, such as the decoupling limit of massive gravity \cite{deRham:2011by}. The well-posedness and hyperbolicity of scalar-tensor theories have been studied in \cite{Papallo:2017qvl,Papallo:2017ddx}.
We will focus on Horndeski and Beyond Horndeski theories in this section (as well as Chern-Simons \cite{Alexander:2009tp} and Einstein-Scalar-Gauss-Bonnet gravity \cite{Sotiriou:2014pfa}) and show the solutions of static black holes when $c_T=1$. In this paper, we will ignore DHOST theories due to the relative infancy of research into their black hole solutions. The cosmological consequences of the detection of GW/GRB170817 for DHOST theories have, however, been investigated in \cite{Langlois:2017dyl, Crisostomi:2017lbg}.
\subsection{Horndeski}\label{sechorn}
The most general action for scalar-tensor gravity with 2$^{nd}$ order-derivative equations of motion is given by the Horndeski action \cite{Horndeski:1974wa}:
\begin{align}
S=\int d^4x\sqrt{-g}\sum_{n=2}^5L_n,
\end{align}
where the Horndeski Lagrangians are given by:
\begin{align}
L_2&=G_2(\phi,X)\\
L_3&=-G_3(\phi,X)\Box \phi\\
L_4&=G_4(\phi,X)R+G_{4,X}(\phi,X)((\Box\phi)^2-\phi^{\mu\nu}\phi_{\mu\nu} )\\
L_5&=G_5(\phi,X)G_{\mu\nu}\phi^{\mu\nu}-\frac{1}{6}G_{5,X}(\phi,X)((\Box\phi)^3 \nonumber\\
& -3\Box\phi\, \phi^{\mu\nu}\phi_{\mu\nu} +2 \phi_{\mu\nu}\phi^{\mu\sigma}\phi^{\nu}_{\sigma}),
\end{align}
where $\phi$ is the scalar field with kinetic term $X=-\phi_\mu\phi^\mu/2$, $\phi_\mu=\nabla_\mu\phi$, $\phi_{\mu\nu}=\nabla_\nu\nabla_\mu\phi$, and $G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}R\,g_{\mu\nu}$ is the Einstein tensor. The $G_i$ denote arbitrary functions of $\phi$ and $X$, with derivatives $G_{i,X}$ with respect to $X$.
For theories where the scalar field plays some role on cosmological scales, the constraint $c_T=1$ imposes $G_{4,X}=0$ and requires $G_5$ to be constant (in which case $L_5$ vanishes by the Bianchi identity). Therefore, the resulting constrained Horndeski action is given by:
\begin{align}
S=\int d^4x \sqrt{-g}\left[G_4(\phi)R+G_2(\phi,X)-G_3(\phi,X)\Box\phi\right].\label{Sreduced}
\end{align}
In this case, since we consider only \textit{cosmologically relevant} theories, we expect the cosmological energy density fraction of the scalar field to be $\Omega_\phi\sim O(1)$, where $\Omega_\phi$ is given for the above action by \cite{Bellini:2014fua}:
\begin{align}
\Omega_\phi=\frac{-G_2+2X\left(G_{2,X}-G_{3,\phi}\right)+6\dot{\phi}H_0\left(XG_{3,X}-G_{4,\phi}\right)}{6G_4H_0^2},
\end{align}
where $H_0$ is the value of the Hubble parameter today, $G_{i,\phi}$ denote derivatives of the functions $G_i$ with respect to $\phi$, and overdots denote derivatives with respect to time.
We now proceed to analyse the black hole solutions arising from the action in equation (\ref{Sreduced}). Even though there is no no-hair theorem for generic $G_i$ functions, there are a number of theorems for restricted cases. We mention three distinct families of models.
First, through a conformal transformation, the action in equation (\ref{Sreduced}) can be re-expressed in the form of GR plus a minimally coupled scalar field (with modified $G_2$ and $G_3$ \cite{Bettoni:2013diz}). The Lagrangian would then resemble a Kinetic Gravity Braiding model \cite{Deffayet:2010qz}. For $G_2=\omega(\phi)X,\, G_3=0$, the reduced Horndeski action takes the form of a generalised Brans-Dicke theory \cite{PhysRev.124.925}, for which a no-hair theorem exists \cite{Sotiriou:2015pka}.
Second, a no-hair theorem for K-essence (i.e.~$G_3=0$, $G_4=1$) is given in \cite{2014PhRvD..89h4056G}, provided that $G_{2,X}$ and $\phi G_{2,\phi}$ are of opposite and definite signs.
Third, we mention that for shift-symmetric theories, which are invariant under $\phi \rightarrow \phi+$constant and hence have $G_{i,\phi}=0$ in eq.~(\ref{Sreduced}), the action takes the form of a minimally coupled scalar field with potentially unusual kinetic terms arising from $G_2(X)$ and $G_3(X)$. For such shift-symmetric theories, no-hair theorems exist for static black holes \cite{Hui:2012qt,Sotiriou:2015pka}. The assumptions underlying these no-hair theorems for shift-symmetric theories are:
(i) spacetime is spherically symmetric and static, and the scalar field shares the same symmetries; (ii) spacetime is asymptotically flat, $\frac{d\phi}{dr}\rightarrow 0$ when $r\rightarrow \infty$, and the norm of the Noether current associated to the shift symmetry is regular on the horizon; (iii) there is a canonical kinetic term $X$ in the action and the $G_{i,X}$ functions contain only positive or zero powers of $X$.
We note that this no-hair theorem is valid for all shift-symmetric Horndeski actions that satisfy the above conditions, even those that do not satisfy the constraint $c_T=1$. Focusing on a spherically symmetric and static spacetime, we can still have hairy black hole solutions by breaking assumptions (ii) or (iii) of this no-hair theorem. It is indeed expected that realistic situations of dark-energy models will break assumption (ii) as the scalar field is responsible for the late-time accelerated expansion of the universe or, more generally, the scalar field can lead to large-scale effects and thus $\frac{d\phi}{dr}$ does not necessarily vanish when $r\rightarrow \infty$. Examples like this can easily be realized by adding a time-dependent boundary condition to the scalar field \cite{Jacobson:1999vr} associated to the cosmological expansion. However, such a scalar hair would be highly suppressed and negligible in the vicinity of a black hole.
We explore further cases that violate assumption (iii). A number of Lagrangians that break this assumption are discussed in \cite{Babichev:2017guv} but we will focus on the only two examples which still obey the constraint $c_T=1$ for Horndeski gravity. In addition, we will discuss a class of theories that even though they explicitly depend on $\phi$, are related to shift-symmetric theories via conformal transformations and therefore they also satisfy the no-hair theorem previously mentioned.
\subsubsection{Quadratic term}
The first potentially hair-inducing term posited in \cite{Babichev:2017guv} is the addition of a square-root quadratic term to the canonical kinetic term, $G_2(X)=X+2\beta\sqrt{-X},\, G_4=\frac{1}{2}M_P^2$, where $\beta$ is an arbitrary constant. As we can see, in this case $G_{2,X}$ does contain negative powers of $X$ and thus hairy black holes may appear.
First, we require the scalar field to be cosmologically relevant, and thus
\begin{align}\label{QuadraticOmega}
\Omega_\phi=\frac{-X}{3M_P^2H_0^2}\sim O(1).
\end{align}
Now, assuming a spherically symmetric and static ansatz for the metric and scalar field:
\begin{align}
ds^2=-h(r)dt^2+\frac{1}{f(r)}dr^2+r^2d\Omega^2,\; \phi=\phi(r)\label{ansatz}
\end{align}
and requiring that the radial component of the Noether current, $J^r=\frac{1}{\sqrt{-g}}\frac{\delta S}{\delta\left(\partial_r \phi\right)}$, vanishes (to ensure a regular current on the horizon, as required in assumption (ii) above), we find:
\begin{align}
-X&=\frac{1}{2}f(r)\left(\frac{d\phi}{dr}\right)^2=\beta^2.
\end{align}
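This condition can be seen directly from the structure of the current: for this model (with $G_4$ constant), the shift-symmetry current is $J^\mu=G_{2,X}\nabla^\mu\phi$, so a non-trivial profile ($d\phi/dr\neq0$) requires
\begin{align}
G_{2,X}=1-\frac{\beta}{\sqrt{-X}}=0,
\end{align}
which is solved precisely by $\sqrt{-X}=\beta$.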
We can then solve the field equations (provided in \cite{Babichev:2017guv}) for the metric function $f(r)$ to find:
\begin{align}
f(r)=h(r)=1-\frac{2m}{r}+\frac{\beta^2}{3M_P^2}r^2.
\end{align}
Thus, we find a stealth Schwarzschild-AdS black hole of mass $m$ with an effective negative cosmological constant $\Lambda_{\text{eff}}=-{\beta^2}/{M_P^2}$ (assuming real $\beta$, and hence $\beta^2>0$). If we required asymptotic flatness then this model would have the same solution as GR, and there would be no hair. Relaxing that assumption, we note that cosmologically relevant scalar fields satisfy eq.~(\ref{QuadraticOmega}), and we then expect $\Lambda_{\text{eff}}\sim H_0^2$. Therefore, hair would be negligible near the black hole.
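To make this estimate explicit, combining eq.~(\ref{QuadraticOmega}) with the on-shell condition $-X=\beta^2$ gives
\begin{align}
\Omega_\phi=\frac{\beta^2}{3M_P^2H_0^2}\sim O(1)\quad\Longrightarrow\quad \left|\Lambda_{\text{eff}}\right|=\frac{\beta^2}{M_P^2}\sim 3H_0^2,
\end{align}
so the hairy correction to the geometry only becomes appreciable on Hubble scales.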
\subsubsection{Cubic term}
A second possibility analysed in \cite{Babichev:2017guv} is the introduction of a logarithmic cubic term with $G_2=X,\,G_3=\alpha M_P\log(-X),\, G_4=M_P^2/2$, where $\alpha$ is an arbitrary dimensionless constant. Again, we see that in this example, $G_{3,X}$ has negative powers of $X$. If the scalar field is to have cosmological relevance, we need:
\begin{align}
\Omega_\phi=\frac{X+6\dot{\phi}H_0\alpha M_P}{3M_P^2H_0^2}\sim O(1).
\end{align}
Again, using the ansatz given by eq.~(\ref{ansatz}) we find the following expression for ${d\phi}/{dr}$ by requiring $J^r=0$:
\begin{align}
\frac{d\phi}{dr}=&\;-\alpha M_P\left(\frac{1}{h(r)}\frac{dh(r)}{dr}+\frac{4}{r}\right).
\end{align}
To proceed, we assume that $h(r)=f(r)$. Making use of the field equations calculated in \cite{Heisenberg:2017hwb} (translating from a vector-tensor theory to a shift-symmetric scalar-tensor theory such that $A_\mu\rightarrow\partial_\mu\phi$, i.e.~$A_0=X_0=0,\, A_1={d\phi}/{dr}$), we find the following hairy solution:
\begin{align}
ds^2=&-\left(1-\frac{2m}{r}+\frac{c}{r^{4+\frac{1}{\alpha^2}}}\right)dt^2+\left(1-\frac{2m}{r}+\frac{c}{r^{4+\frac{1}{\alpha^2}}}\right)^{-1}dr^2\nonumber\\
&+\frac{r^2}{1+4\alpha^2}d\Omega^2,
\end{align}
where we have rescaled $r$ and $t$ by constant pre-factors to obtain a Schwarzschild-like solution.
If we imposed asymptotic flatness, we would find that the solution cannot have hair, as the metric line element does not approach Minkowski when $r\rightarrow \infty$ due to the factor of ${1}/({1+4\alpha^2})$ in the angular part. Thus, we cannot construct a static, spherically symmetric, asymptotically flat solution with scalar hair in this theory (under the assumption that $h(r)=f(r)$). Furthermore, the fact that $G_3$ generically diverges for $X=0$ suggests that Minkowski space is not a solution of this theory, and therefore this model does not seem to be viable.
\subsubsection{Conformally shift-symmetric theories}
We now proceed to discuss models that depend explicitly on $\phi$, and hence break the assumptions of the shift-symmetric no-hair theorem. While such models could generically lead to hairy black holes, here we analyse a special class that is conformally related to shift-symmetric theories, and thus avoids scalar hair.
In the prototypical scalar-tensor theory of gravity, Brans-Dicke theory, it is well known that the theory can be recast from the `Jordan frame' (in which a non-minimal coupling between the scalar and curvature exists) into that of GR with a minimally coupled scalar field through the use of a conformal transformation \cite{Bettoni:2013diz,Sotiriou:2015pka}. The trade-off is that, in this so-called `Einstein frame', where the non-minimal coupling between the scalar field and curvature has been eliminated, any additional matter fields no longer couple solely to the metric but also to the gravitational scalar field. However, if we work in vacuum then the Jordan and Einstein frames are entirely physically equivalent. We can use the same analysis to show that theories in the Jordan frame of the type:
\begin{align}\label{SConformalShift}
S_J=\frac{M_P^2}{2}\int d^4x\;\sqrt{-g}\;&\left[\phi R +\phi^2F_2(X/\phi)-\phi F_3(X/\phi)\Box\phi\right.\nonumber\\
&\left.-V(\phi)\right]
\end{align}
can be transformed from the Jordan frame into the Einstein frame through the conformal transformation $\tilde{g}_{\mu\nu}=\phi g_{\mu\nu}$. Here, $F_i$ are arbitrary functions of $X/\phi$ and $V$ is a potential for the scalar field. Eq.~(\ref{SConformalShift}) leads to the following action in the Einstein frame:
\begin{align}
S_E=\frac{M_P^2}{2}\int d^4x\;\sqrt{-\tilde{g}}\;&\left[ \tilde{R} +F_2(\tilde{X})+2\tilde{X}F_3(\tilde{X})-F_3(\tilde{X})\tilde{\Box}\phi\right.\nonumber\\
&\left.-\phi^{-2}V(\phi)\right].
\end{align}
In the case of vanishing potential $V=0$, the Einstein frame action $S_E$ is clearly shift symmetric in the scalar field $\phi$. Thus, via the no-hair theorem in \cite{Hui:2012qt}, static black hole solutions for $\tilde{g}_{\mu\nu}$ should be the same as in GR with no scalar hair. As a consequence, solutions for $g_{\mu\nu}$ from $S_J$ will not have hair either.
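For reference, the frame mapping used here follows from elementary properties of the conformal rescaling: with $\tilde{g}_{\mu\nu}=\phi g_{\mu\nu}$ one has
\begin{align}
\tilde{g}^{\mu\nu}=\phi^{-1}g^{\mu\nu},\qquad \sqrt{-\tilde{g}}=\phi^{2}\sqrt{-g},\qquad \tilde{X}=-\frac{1}{2}\tilde{g}^{\mu\nu}\phi_\mu\phi_\nu=\frac{X}{\phi},
\end{align}
which is why the Jordan-frame functions in eq.~(\ref{SConformalShift}) must depend on the combination $X/\phi$ for the Einstein-frame action to be shift symmetric (up to the potential term).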
For cosmologically relevant models, $\phi$ will have a fractional energy density given by:
\begin{widetext}
\begin{align}
\Omega_\phi=\frac{V-\phi^2 F_2 +2X\left(\phi^2 F_{2,X}-F_3-\phi F_{2,\phi}\right)+3\dot{\phi} H_0\left(2\phi X F_{3,X}-M_P^2\right)}{3M_P^2H_0^2\phi}\sim O(1).
\end{align}
\end{widetext}
Generalizing to the case with $V\neq0$, it is known that if $F_3=0$ we then have a K-essence model and static black hole solutions will not have hair provided that $V_{,\phi\phi}>0$ and $F_{2,X}>0$ (these conditions can be interpreted as constraining the scalar field to be stable and to satisfy the null energy condition \cite{2014PhRvD..89h4056G,Sotiriou:2015pka}).
For non-zero $V$ and $F_3$, the no-hair condition can be shown to be (through integrating $(\phi^{-2}V)_{,\phi}\,\mathcal{E}_\phi$ from the horizon to spatial infinity, with $\mathcal{E}_\phi$ being the equation of motion for the scalar field \textit{in the Einstein frame} \cite{Maselli:2015yva}):
\begin{widetext}
\begin{align}
&(F_2+2\tilde{X}F_3)_{,\tilde{X}}+f(r)\frac{d\phi}{dr} F_{3,\tilde{X}}\left(\frac{1}{2h(r)}\frac{dh(r)}{dr}-\frac{2}{r}\right)\geq0\;\text{and}\;(\phi^{-2}V)_{,\phi\phi}\geq 0\nonumber\\
\text{or}\; & (F_2+2\tilde{X}F_3)_{,\tilde{X}}+f(r)\frac{d\phi}{dr} F_{3,\tilde{X}}\left(\frac{1}{2h(r)}\frac{dh(r)}{dr}-\frac{2}{r}\right)\leq0\;\text{and}\;(\phi^{-2}V)_{,\phi\phi}\leq 0,
\end{align}
\end{widetext}
where $h(r),\,f(r)$ are the metric functions in the spherically symmetric ansatz given by eq.~(\ref{ansatz}).
\subsection{Beyond Horndeski}
Horndeski gravity can be extended by the addition of terms which lead to higher order derivative equations of motion, but without an extra propagating degree of freedom \cite{Gleyzes:2014dya,Zumalacarregui:2013pma}. The beyond Horndeski terms are given by
\begin{align}
L_4^{BH}&=F_4(\phi,X)\epsilon^{\mu\nu\rho}_{\sigma}\epsilon^{\mu'\nu'\rho'\sigma}\phi_\mu\phi_{\mu'}\phi_{\nu\nu'}\phi_{\rho\rho'}\\
L_5^{BH}&=F_5(\phi,X)\epsilon^{\mu\nu\rho\sigma}\epsilon^{\mu'\nu'\rho'\sigma'}\phi_\mu\phi_{\mu'}\phi_{\nu\nu'}\phi_{\rho\rho'}\phi_{\sigma\sigma'}
\end{align}
The condition $c_T=1$ generalizes to:
\begin{equation}
F_5=0, \quad G_{5,X}=0, \quad G_{4,X}-G_{5,\phi}=2XF_4.
\end{equation}
Note that by setting $F_4=0$ in the above equation (i.e. recovering Horndeski theory), the condition for $c_T=1$ appears to be $G_{4,X}=G_{5,\phi}$ rather than $G_{4,X}=G_{5,\phi}=0$ as stated in section \ref{sechorn}. This is not inconsistent, as Horndeski theories with $G_{4,X}=G_{5,\phi}$ will indeed result in $c_T=1$ \cite{Baker:2017hug,Creminelli:2017sry,Sakstein:2017xjx,Ezquiaga:2017ekz}. As discussed in \cite{Baker:2017hug}, however, in the Horndeski case we require both $G_{4,X}$ and $G_{5,\phi}$ to vanish \textit{independently} rather than relying on any finely tuned cancellation between the two terms. On the other hand, for Beyond Horndeski theories we can make use of the presence of the additional free function $F_4$ to cancel the contributions of $G_{4,X}$ and $G_{5,\phi}$ in $c_T$, thus preserving a richer landscape of viable theories with $c_T=1$.
Similarly to the Horndeski case, we first require the scalar energy density parameter to be cosmologically relevant, i.e.~$\Omega_\phi\sim O(1)$, with $\Omega_\phi$ given by:
\begin{widetext}
\begin{align}
\Omega_\phi=\frac{-G_2+2X\left(G_{2,X}-G_{3,\phi}\right)+6H_0\dot{\phi}\left(XG_{3,X}-G_{4,\phi}-2XG_{4,\phi X}\right)+24H_0^2X^2\left(F_4+G_{4,XX}\right)-48H_0^2 X^2\left(2F_4+XF_{4,X}\right)}{6H_0^2\left(G_4-2XG_{4,X}+XG_{5,\phi}\right)}
\end{align}
\end{widetext}
In \cite{Lehebel:2017fag}, it is shown that shift-symmetric Horndeski \textit{and} Beyond Horndeski theories have no hair for a regular, asymptotically flat spacetime, provided there is a canonical kinetic term $X$ in the action and the $G_{i,X}$ and $F_{i,X}$ contain only positive powers of $X$. In what follows, we focus again on models that break the last assumption. We investigate two terms given in \cite{Babichev:2017guv} that respect the constraint $c_T=1$.
\subsubsection{Square-root quartic models}
We first consider including a $\sqrt{-X}$ term in $G_4$ (with the choice of $F_4$ corresponding to the above conditions that lead to $c_T=1$):
\begin{align}
G_2=X,\;G_4=\frac{1}{2}M_P^2+\gamma\sqrt{-X},\;F_4=\frac{\gamma}{4(-X)^{\frac{3}{2}}},
\end{align}
where $\gamma$ is an arbitrary constant.
For this choice of $G_i, F_i$, the condition for the scalar field to be cosmologically relevant is given by:
\begin{align}
\Omega_\phi=\frac{X-6H_0^2\gamma\sqrt{-X}}{3H_0^2M_P^2}\sim O(1).
\end{align}
Assuming a spherically symmetric ansatz for the metric as in eq.~(\ref{ansatz}), we find two branches of solutions for $X$:
\begin{eqnarray}
X(r)=&\;0 \quad \Rightarrow\quad \frac{d\phi}{dr}=0,
\end{eqnarray}
or
\begin{eqnarray}
X(r)=& -\left(\frac{4\gamma^2+M_P^2r^2}{3\gamma r^2}\right)^2
\Rightarrow \frac{d\phi}{dr}= \sqrt{\frac{2}{f(r)}}\frac{4\gamma^2+M_P^2r^2}{3\gamma r^2},
\end{eqnarray}
respectively. We see that the first branch results in a solution with a constant scalar field, i.e.~no scalar hair, and hence regular GR black holes. We therefore look for solutions for the metric functions $f(r)$ and $h(r)$ in the second branch of solutions for $X(r)$. We find the following for $f(r)$:
\begin{align}
f(r)=\frac{64 \gamma ^6+9 \gamma ^2 c_1 r^3-M_P^6 r^6+45 \gamma ^2 M_P^4 r^4+144 \gamma ^4 M_P^2
r^2}{9 \left(\gamma M_P^2 r^2-8 \gamma ^3\right)^2}.
\end{align}
For large $r$, $f(r)$ grows as $r^2$, so this solution is clearly not asymptotically flat. Thus we have not been able to construct an asymptotically flat, spherically symmetric black hole solution with scalar hair for this model.
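The large-$r$ behaviour can be read off directly from the solution above: dividing numerator by denominator for $M_P r\gg\gamma$ gives
\begin{align}
f(r)=-\frac{M_P^2}{9\gamma^2}\,r^2+\frac{29}{9}+O\!\left(\frac{1}{r}\right),
\end{align}
confirming the quadratic growth that spoils asymptotic flatness.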
\subsubsection{Purely quartic models}
Purely quartic models are proposed in \cite{Babichev:2017guv} (i.e.~only $G_4$ and $F_4$ non-zero and with no canonical kinetic term for the scalar field). One such model that obeys the $c_T=1$ constraint is given by:
\begin{align}
G_4=&\frac{1}{2}M_P^2+\sum_{n\geq2}2a_n\frac{(X-X_0)^{n+1}}{(n+1)(n+2)}\left[(n+1)X+X_0\right]\\
F_4=&\sum_{n\geq2}a_n(X-X_0)^n,
\end{align}
where $X=X_0$ is the constant value of the background scalar field kinetic term around the black hole:
\begin{align}
\frac{d\phi}{dr}=\pm \sqrt{-\frac{2X_0}{f(r)}},
\end{align}
thus leading to a non-trivial scalar field profile for $X_0\neq0$.
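For the spherically symmetric ansatz of eq.~(\ref{ansatz}), this profile indeed fixes the kinetic term to its background value:
\begin{align}
X=-\frac{1}{2}g^{\mu\nu}\phi_\mu\phi_\nu=-\frac{1}{2}f(r)\left(\frac{d\phi}{dr}\right)^2=-\frac{1}{2}f(r)\left(-\frac{2X_0}{f(r)}\right)=X_0.
\end{align}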
The cosmological density parameter for the scalar field in this case is:
\begin{widetext}
\begin{align}
\Omega_\phi=\frac{-8X^2\sum_{n\geq2}a_n\left(X-X_0\right)^n}{M_P^2+4\sum_{n\geq2}\frac{a_n\left(X-X_0\right)^n}{n+2}\left[(3-2n)X^2-\frac{1}{n+1}X_0\left(nX+X_0\right)\right]}
\end{align}
\end{widetext}
and we expect it to be of order 1.
The black hole solution of this model is a stealth black hole, where the spacetime geometry is given by the appropriate GR solution, but the scalar field takes a non-trivial profile (as shown above). For a stealth Schwarzschild black hole, the scalar field is thus given by \cite{Babichev:2017guv}:
\begin{align}\label{STstealth}
\phi(r)=\sqrt{-2X_0}\left[\sqrt{r^2-2mr}+m\log\left(r-m+\sqrt{r^2-2mr}\right)\right].
\end{align}
The above profile for the scalar field is regular everywhere outside the horizon of the black hole, with $\phi \sim r$ as $r\rightarrow \infty$.
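One can verify this directly: differentiating eq.~(\ref{STstealth}) gives
\begin{align}
\frac{d\phi}{dr}=\sqrt{-2X_0}\left[\frac{r-m}{\sqrt{r^2-2mr}}+\frac{m}{\sqrt{r^2-2mr}}\right]=\sqrt{\frac{-2X_0}{1-2m/r}},
\end{align}
which is precisely $\sqrt{-2X_0/f(r)}$ for the Schwarzschild metric function $f(r)=1-2m/r$.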
Since the spacetime geometry is the same as that of GR, we do not expect to be able to distinguish this model through non-dynamical tests, such as analyses of the orbits of stars or electromagnetic imaging of the accretion flow around the black hole. Nevertheless, we do expect to see a difference in dynamical situations. In particular, it has been suggested that while scalar-tensor theories with $\phi=$constant may not lead to any new signature during the inspiral of two black holes, the no-hair theorem can be pierced if the scalar field has some dynamics \cite{Healy:2011ef}. In the case of stealth black holes, the scalar field has a non-trivial initial profile, eq.~(\ref{STstealth}), which may trigger a subsequent dynamical evolution; this could source dipole gravitational wave radiation, which would in turn change the phase evolution of the emitted GW waveform compared to that of GR.
\subsection{Einstein-Scalar-Gauss-Bonnet}
In four dimensions, the Gauss-Bonnet (GB) term is a topological invariant, and as such the addition of the GB term to the usual Einstein-Hilbert action of GR does not affect the equations of motion. If, however, the GB term is non-minimally coupled to a dynamical scalar field in the action, the dynamics are significantly altered. Consider the action of Einstein-Scalar-Gauss-Bonnet (ESGB) gravity \cite{Sotiriou:2014pfa}:
\begin{align}
S_{GB}=\int d^4x\sqrt{-g}&\left[\frac{M_P^2}{2}R-\frac{1}{2}g^{\mu\nu}\phi_\mu\phi_\nu-V(\phi)\right.\nonumber\\
&\left.-\frac{1}{2}\xi(\phi)R^2_{GB}\right],\label{SGB}
\end{align}
where we have introduced a scalar field $\phi$ with potential $V(\phi)$, which for simplicity we will neglect from now on, and a coupling function $\xi(\phi)$. In addition, we have defined $R_{GB}^2=R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}-4R_{\mu\nu}R^{\mu\nu}+R^2$ as the Gauss-Bonnet term.
It is well known that models given by eq.~(\ref{SGB}) can produce scalar hair on black hole backgrounds in both the static \cite{Sotiriou:2014pfa,2017CQGra..34f4001B,Antoniou:2017hxj} and slowly rotating \cite{Pani:2009wy,Maselli:2014fca,Maselli:2015tta,Kleihaus:2014lba,Kleihaus:2011tg,Ayzenberg:2014aka} regimes. However, on a cosmological background they lead to a modified speed of gravity waves. Indeed, we find that
\begin{align}
c_T=1+4\frac{\left(H\dot{\phi}-\ddot{\phi}\right)\xi^{\prime}-\dot{\phi}^2\xi^{\prime\prime} }{M_P^2-4H\dot{\phi}\xi^{\prime}}, \label{CTGB}
\end{align}
where a prime denotes a derivative with respect to $\phi$ (see also \cite{Gong:2017kim}). Equivalently, eq.~(\ref{SGB}) can be recast into the form of Horndeski gravity with the following choice of functions $G_i$:
\begin{align}
G_2=&X-V+4\xi^{\prime\prime\prime\prime}X^2\left(\log{X}-3\right)\nonumber\\
G_3=&2\xi^{\prime\prime\prime}X\left(3\log{X}-7\right)\nonumber\\
G_4=&\frac{M_P^2}{2}+2\xi^{\prime\prime}X\left(\log{X}-2\right)\nonumber\\
G_5=&2\xi^\prime\log{X}.
\end{align}
In this form, it is clear that ESGB gravity does not conform to the constraints of \cite{Baker:2017hug,Creminelli:2017sry,Sakstein:2017xjx,Ezquiaga:2017ekz} that $G_{4,X}=0=G_5$ to ensure that $c_T=1$. Furthermore, it is well known that a coupling to the GB term is a loophole in the no-hair theorems for shift-symmetric Horndeski theories \cite{Sotiriou:2014pfa,Sotiriou:2015pka,Maselli:2015yva}.
Assuming that the scalar field is cosmologically relevant, we find that $\Omega_\phi\sim O(1)$ where
\begin{widetext}
\begin{align}
\Omega_\phi=\frac{V+X+4\left(\xi^{\prime\prime\prime\prime}-\xi^{\prime\prime\prime}\right)X^2\left(3-\log{X}\right)+24H_0X\left(H_0\xi^{\prime\prime}+3\dot{\phi}\xi^{\prime\prime\prime}\right)}{3H_0^2\left(M_P^2+4\xi^{\prime\prime}X\log{X}-4\xi^{\prime}H_0\dot{\phi}\right)}.
\end{align}
\end{widetext}
We then impose $c_T=1$ for a generic background evolution. Therefore, from eq.~(\ref{CTGB}) we get $\xi^{\prime}=\xi^{\prime\prime}=0$, in which case the GB term decouples from the scalar field $\phi$ and we obtain GR with a minimally coupled scalar field plus a GB term. In this case, as mentioned above, the addition of the GB term to the usual Einstein-Hilbert term represents nothing more than the addition of a total divergence that leaves the equations of motion unaffected. Thus the constraint on $c_T$ rules out the possibility of ESGB gravity having any cosmological relevance. As discussed above, the scalar field $\phi$ could avoid modifying $c_T$ at an observable level only if it is assumed to be incredibly sub-dominant on cosmological scales, i.e.~if $\Omega_\phi\ll 1$.
As a counter-example to the above discussion of the cosmological impact of ESGB gravity, the following theory is studied in \cite{Granda:2018tzi}:
\begin{align}
S=\int d^4x\sqrt{-g}&\left[\frac{M_P^2}{2}R-\frac{1}{2}g^{\mu\nu}\phi_\mu\phi_\nu-V(\phi)\right.\nonumber\\
&\left.+F_1(\phi)G^{\mu\nu}\phi_\mu\phi_\nu-\frac{1}{2}F_2(\phi)R^2_{GB}\right],
\end{align}
with string-inspired exponential forms for $F_1, F_2$, and $V$. It is shown in \cite{Granda:2018tzi} that this theory, including \textit{both} a scalar coupling to the GB term and a scalar derivative coupling to the Einstein tensor, can admit de Sitter-like and power-law cosmological solutions whilst maintaining $c_T=1$. This theory is clearly not shift-symmetric, so the no-hair theorem of \cite{Hui:2012qt} is not applicable, whilst a coupling to the GB term is known to produce black holes with scalar hair \cite{Sotiriou:2014pfa,2017CQGra..34f4001B,Antoniou:2017hxj}, or at least black holes that are unstable to spontaneous scalarization \cite{Doneva:2017bvd,Silva:2017uqg}. Thus we expect that, in general, black holes in this string-inspired theory can possess scalar hair and satisfy current constraints on the speed of gravity waves, although with no cosmological effects.
\subsection{Chern-Simons}
Chern-Simons (CS) gravity \cite{Alexander:2009tp} is characterised by the addition of the Pontryagin invariant, $^*RR=\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}R^\lambda_{\sigma\alpha\beta}R^\sigma_{\lambda\mu\nu}$ to the standard Einstein-Hilbert term of the action. The Pontryagin invariant can be coupled to either a dynamical or non-dynamical scalar field, leading to two different formulations of the theory. For concreteness, consider the dynamical formulation of CS gravity
\begin{align}
S_{CS}=\int d^4x \sqrt{-g}\left[\frac{M_P^2}{2}R-\frac{1}{2}g^{\mu\nu}\phi_\mu\phi_\nu+\alpha f(\phi)^*RR\right],
\end{align}
where $\alpha$ is a constant coupling parameter and $f(\phi)$ is an arbitrary function of the scalar field.
In \cite{Yagi:2016jml,2010PhRvD..82d1501G,2005PhRvD..71f3526A} it is shown that in CS gravity, gravitational waves propagate at the speed of light on conformally flat background spacetimes such as FRW. As such, \cite{Yagi:2016jml} postulates that it is not possible to constrain CS gravity purely through the propagation speed of gravitational waves. Regardless, static and spherically symmetric black holes in CS gravity do not have hair and admit the same solutions as in GR \cite{Molina:2010fb}.
\section{Other theories}\label{Sec:Other}
We now consider other theories which go beyond the broad span of scalar-tensor theories. As one might expect, the moment one considers fields with more ``structure'' (i.e.~more indices), there is a greater possibility of non-trivial couplings with the metric which, in turn, can lead to black hole hair.
\subsection{Einstein gravity with Maxwell, Yang-Mills and Skyrme fields}
The simplest theory with a vector field corresponds to an Einstein-Maxwell system. In this case, the black hole solution is Kerr-Newman which, besides mass and spin, is characterized by an electric charge. While this black hole solution is not considered hairy, we will start by mentioning this case (and its non-abelian extensions) to get a flavor of what to expect in the case of fields with more structure.
The Einstein-Maxwell theory is given by
\begin{eqnarray}
S=\int d^4x\sqrt{-g}\left[\frac{M^2_P}{2}R-\frac{1}{4}F_{\alpha\beta}F^{\alpha\beta} \right],
\end{eqnarray}
where $F_{\mu\nu}=\partial_\mu A_\nu - \partial_\nu A_\mu$ is the Maxwell tensor associated with a vector field $A^\alpha$. In this model, the FRW cosmological fractional energy density of the vector field is:
\begin{eqnarray}\label{EMOmega}
\Omega_A=\frac{F_{0\alpha}F^{\alpha}_{\phantom{\alpha}0}+\frac{1}{4}F_{\alpha\beta}F^{\alpha\beta}}{3M^2_{\rm Pl}H_0^2}\sim\frac{|\vec{B}|^2}{H_0^2M^2_{\rm Pl}},
\end{eqnarray}
where ${\vec B}$ is the associated magnetic field. In this isotropic background, one has $A^{\mu}=(A_0(t),{\vec 0})$ and gravity wave propagation will be direction independent with exactly $c_T=1$. Furthermore, in this case, the cosmological evolution is exactly the same as in GR, since the magnetic field vanishes ($\vec{B}=0$) in this homogeneous background and hence $\Omega_A=0$ from eq.~(\ref{EMOmega}).
If the vector field is cosmologically relevant, taking into account the fact that the electrical conductivity of the universe is large, we expect the presence of magnetic fields which will lead to anisotropies. On the one hand, in the case of stochastic magnetic fields, the metric may be locally anisotropic but too weak to affect local gravitational wave propagation.
On the other hand, in the case of a global magnetic component, the vector field will have a spatial dependence ${\vec A}\neq0$ and the universe will be anisotropic. Therefore, the propagation of gravitational waves will be direction dependent \cite{Hu:1978td}. Current constraints on global anisotropy (and homogeneous magnetic fields) from the cosmic microwave background are remarkably tight \cite{Bunn:1996ut,Barrow:1997mj,Ade:2015bva} and we will enforce strict isotropy in what follows (although it is conceivable that multiple measurements of $c_T$ in different directions might improve these constraints).
The black hole solutions for the Einstein-Maxwell system are Kerr-Newman, which are fully characterized by three charges -- mass $m$, spin $a$ and electric charge $Q$ -- and a non-trivial profile for the vector potential $A^\alpha$. In the spherically symmetric case (where $a=0$), one has the Reissner-Nordstr\"om solution given by:
\begin{eqnarray}
h(r)=1-\frac{2m}{r}+\frac{Q^2}{4\pi \epsilon_0 r^2} \ \mbox{and} \ \
A_0(r)=\frac{Q}{r},
\end{eqnarray}
where $\epsilon_0$ is the vacuum permittivity constant. Here, we have again used a metric of the form of eq.~(\ref{ansatz}) and $A^\mu=(A_0(r),A_1(r),0,0)$, with $A_1$ being an unphysical gauge mode.
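As a consistency check, note that with $h=f$ the metric factors cancel in $\sqrt{-g}\,F^{rt}$, so the only non-trivial Maxwell equation reduces to its flat-space form,
\begin{align}
\frac{d}{dr}\left(r^2\frac{dA_0}{dr}\right)=0\quad\Longrightarrow\quad A_0(r)=\frac{Q}{r},
\end{align}
up to an additive gauge constant, recovering the Coulomb potential quoted above.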
The above description can be extended to the case where the gauge field is non-abelian -- the Einstein-Yang-Mills system -- or the vector field has a stronger non-linear self coupling -- the Einstein-Skyrme system; in these cases we are considering genuine hair. In both cases, new phenomena can come into play. While, on the whole, the energy density of the fields can remain sub-dominant at cosmological scales, non-perturbative structures (topological and non-topological defects) can in principle make a non-trivial contribution to the overall energy density and to the global isotropy of space (although, generally, these effects are expected to be weak). Again, we can enforce $c_T=1$ yet still allow for non-abelian hair. Notable examples can be found in the Einstein-Yang-Mills case \cite{Bartnik:1988am,Volkov:1989fi}, which combines solitonic cores with long range forces; in the Einstein-Skyrme case there is a range of solutions combined with solitonic states \cite{Shiiki:2005pb,Gudnason:2016kuu,Adam:2016vzf}.
\subsection{Einstein-Aether}
Generalized Einstein-Aether is a gravity theory where the metric is coupled to a unit time-like vector field, dubbed the aether. This model provides a simple scenario for studying the effects of local Lorentz symmetry violation. In particular, the vector field defines a preferred frame in which boost symmetry is broken but rotational symmetry is preserved.
The action describing this model is given by:
\begin{eqnarray}
S=\int d^4x\sqrt{-g}\left[\frac{M^2_P}{2}R+{\cal F}(K)+\lambda(A^\mu A_\mu+1) \right],
\end{eqnarray}
where $\lambda$ is a Lagrange multiplier that forces the vector field to be unit time-like, and ${\cal F}(K)$ is an arbitrary function of the kinetic term $K=c_1\nabla_\mu A_\nu \nabla^\mu A^\nu+c_2(\nabla_\mu A^\mu)^2+c_3\nabla_\mu A_\nu \nabla^\nu A^\mu$, with the $c_i$ constants (an additional quartic term in $A^\mu$ contributing to $K$ is sometimes considered).
The cosmological consequences of Einstein-Aether have been extensively explored in \cite{Zlosnik:2006zu,Zuntz:2010jp,Audren:2014hza}. The case where the aether field plays the role of either dark matter or dark energy was explored in \cite{Zuntz:2010jp}, where existing cosmological constraints ruled it out as an alternative to a cosmological constant. For the standard Einstein-Aether case (with ${\cal F}(K)\sim K$), current Solar System constraints \cite{Jacobson:2008aj} combined with binary pulsar constraints \cite{Yagi:2013qpa,Yagi:2013ava} place $|c_1|,|c_3|\le 10^{-2}$ and $c_2\le 1$. Cosmological constraints allow $\Omega_{A}\sim 0.3$, so the aether field can still make a non-negligible contribution to the cosmological evolution \cite{Zuntz:2008zz}.
The propagation speed of gravitational waves on a cosmological background is such that $c_T^2=1/[1+(c_1+c_3){\cal F}_{,K}]$, and thus the constraint on $c_T$ implies $c_1=-c_3$. This means that $K$ is reduced to a canonical kinetic term (an ``$F^2$'' term) supplemented by a $(\nabla_\mu A^\mu)^2$ term.
For models which respect $c_T=1$, the condition that the aether field $A^\mu$ has cosmological relevance is thus given by:
\begin{align}
\Omega_A=\frac{-{\cal F}}{3M_P^2H_0^2(1-3c_2{\cal F}_{,K})}\sim O(1),
\end{align}
where ${\cal F}_{,K}$ denotes the derivative of ${\cal F}$ with respect to $K$.
Little has been done on black hole solutions for general ${\cal F}(K)$, so we will restrict ourselves to the standard Einstein-Aether case. In that case, black hole solutions with hair have been found that are regular, asymptotically flat and depend on only one free parameter \cite{Eling:2006ec,Eling:2006df,Konoplya:2006ar,Barausse:2011pu,Barausse:2013nwa}. For instance, in the case where $c_3=0$, $c_2$ must satisfy the condition:
\begin{equation}
c_2=-\frac{c_1^3}{2-4c_1+3c_1^2},
\end{equation}
where $c_1$ is the only free parameter of the black hole solution. We can use a spherically symmetric ansatz for the line element as in eq.~(\ref{ansatz}). The full solution must be found numerically, but an analytical perturbative solution can be given when $x=2m/r\ll 1$:
\begin{align}
h(r)&=1+x+\left(1+\frac{c_1}{8}\right)x^2+...\, ,\\
f(r)^{-1}&=1-x-\frac{c_1}{48}\,x^2+...
\end{align}
We note that this is an example of primary hair, where the spacetime geometry is different to that of GR and, furthermore, the solution depends on an additional independent free parameter $c_1$.
More general solutions satisfying $c_3=-c_1$ (and hence $c_T=1$) are expected to have the same behaviour \cite{Eling:2006ec}. This shows that BH solutions always have hair, regardless of additional cosmological constraints. While in static black holes deviations from GR are typically of a few percent (only exceeding $10\%$ for some region of the viable parameter space) \cite{Barausse:2013nwa}, generalizations to spinning black holes may offer better prospects for observing hair in these models.
\subsection{Generalised Proca}\label{secVT}
The generalised Proca theory \cite{Heisenberg:2014rta,Tasinato:2014eka,Allys:2015sht,Allys:2016jaq,Jimenez:2016isa} is given by
\begin{equation}
S=\int d^4x\; \sqrt{-g}\left\{ F+\sum_{i=2}^6\mathcal{L}_i[A_\alpha, g_{\mu\nu}]\right\},
\end{equation}
where $\mathcal{L}_i$ are gravitational vector-tensor Lagrangians given by:
\begin{widetext}
\begin{align}\label{VT}
&\mathcal{L}_2=G_2(X,F,Y),\nonumber\\
&\mathcal{L}_3=G_3(X)\nabla_{\mu}A^\mu,\nonumber\\
&\mathcal{L}_4=G_4(X)R+G_{4X}\left[ (\nabla_{\mu}A^\mu)^2 - \nabla_{\mu}A^\nu\nabla_{\nu}A^\mu \right],\nonumber\\
&\mathcal{L}_5=G_5(X)G_{\mu\nu}\nabla^{\mu}A^\mu-\frac{1}{6}G_{5X}\left[ (\nabla_{\mu}A^\mu)^3-3(\nabla_{\alpha}A^\alpha)(\nabla_{\mu}A^\nu\nabla_{\nu}A^\mu) \right.\nonumber \\
& \left. +2 \nabla_{\mu}A^\alpha \nabla_{\nu}A^\mu \nabla_{\alpha}A^\nu \right] - g_5(X)\tilde{F}^{\alpha\mu}\tilde{F}_{\beta\mu}\nabla_{\alpha}A^\beta,\nonumber\\
&\mathcal{L}_6=G_6(X)L^{\mu\nu\alpha\beta}\nabla_\mu A_\alpha \nabla_\alpha A_\beta +\frac{1}{2}G_{6X}\tilde{F}^{\alpha\beta}\tilde{F}^{\mu\nu}\nabla_{\alpha}A_\mu \nabla_{\beta}A_\nu,
\end{align}
\end{widetext}
which are written in terms of 6 free functions $G_2$, $G_3$, $G_4$, $G_5$, $g_5$, and $G_6$. We can define the following tensors:
\begin{align}
& F_{\mu\nu}=\nabla_\mu A_\nu -\nabla_\nu A_\mu ,\\
& \tilde{F}^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}F_{\alpha\beta},\\
& L^{\mu\nu\alpha\beta}=\frac{1}{4}\epsilon^{\mu\nu\rho\sigma} \epsilon^{\alpha\beta\gamma\delta}R_{\rho\sigma\gamma\delta},
\end{align}
where $R_{\rho\sigma\gamma\delta}$ is the Riemann tensor and $\epsilon^{\mu\nu\alpha\beta}$ is the Levi-Civita antisymmetric tensor. The free functions $G_i$ and $g_5$ depend on the following scalar quantities built from the previously defined tensors:
\begin{align}
& X = -\frac{1}{2}A_\mu A^\mu,\\
& F =- \frac{1}{4}F^{\mu\nu}F_{\mu\nu},\\
& Y = A^\mu A^\nu F_\mu{}^\alpha F_{\nu\alpha}.
\end{align}
The conditions to have $c_T=1$ are $G_{4,X}=G_{5,X}=0$ (with the term proportional to $G_{\mu\nu}\nabla^\mu A^\nu$ vanishing due to the Bianchi identity). In this case, the condition for cosmological relevance of the vector field $A^\mu$ is given by \cite{DeFelice:2016yws}:
\begin{align}
\Omega_A=\frac{-G_2+G_{2,X}A_0^2+3G_{3,X}H_0A_0^3}{6G_4H_0^2}\sim O(1),
\end{align}
where $A^\mu=\left(A_0(t),\vec{0}\right)$.
Most of the theories that satisfy $c_T=1$ are of the form of GR with a minimally coupled vector field possessing `exotic' kinetic terms, in which case hairy black holes are to be expected. It can be easily shown that spherically symmetric BHs can indeed have hair. For instance, \cite{Heisenberg:2017hwb} shows the solution when $G_3\not=0,\,G_4={M_P^2}/{2}$, in which case:
\begin{align}
f(r)=\left(1-\frac{m}{r}\right)^2,\; A_0=\sqrt{2}M_{P}\left(1-\frac{m}{r}\right),\;A_1=0,
\end{align}
where we have again assumed a metric ansatz given by eq.~(\ref{ansatz}) (with $h(r)=f(r)$), and that $A^\mu=\left(A_0(r),A_1(r),0,0\right)$. This resembles an extremal Reissner-Nordstrom black hole with a `charge' that depends on mass. This is an example of secondary hair, where the spacetime metric is different to that of GR, but both theories depend on the same number of independent free parameters.
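As an illustrative numerical check (not part of the original analysis), one can verify that the metric function above is exactly of Reissner-Nordstrom form once the ``charge'' is locked to the mass, $Q=m$, i.e.\ the extremal case:

```python
# Hedged sketch: the hairy Proca metric function f(r) = (1 - m/r)^2
# coincides with the Reissner-Nordstrom form 1 - 2m/r + Q^2/r^2
# when the charge is tied to the mass, Q = m (extremality).
m = 1.0

def f_proca(r):
    return (1.0 - m / r) ** 2

def f_rn(r, Q):
    return 1.0 - 2.0 * m / r + Q ** 2 / r ** 2

for r in (2.0, 5.0, 50.0):
    assert abs(f_proca(r) - f_rn(r, Q=m)) < 1e-12
```

The check is pure algebra, $(1-m/r)^2 = 1-2m/r+m^2/r^2$, but it makes the identification of the secondary hair with an extremal charge explicit.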
In contrast, the Lagrangian $\mathcal{L}_6$ couples the vector field non-minimally to curvature through $L^{\mu\nu\alpha\beta}$; it corresponds to an intrinsic vector mode and, as such, does not contribute to the background equations of motion for a homogeneous and isotropic cosmological background (where $A^\mu=\left(A_0(t),\vec{0}\right)$) \cite{Nakamura:2017dnf}. In this case, with non-minimal coupling to curvature, one solution is that of a stealth Schwarzschild black hole with a non-trivial profile for the background vector field \cite{Heisenberg:2017hwb}:
\begin{align}
f(r)=1-\frac{2m}{r},\;A_0=\text{const},\;A_1=\frac{\sqrt{A_0^2-2X_0f(r)}}{f(r)},
\end{align}
where $X=X_0=\text{const}$.
As shown in \cite{Tattersall:2017erk}, the QNMs of this stealth Schwarzschild black hole are unchanged from the usual GR spectrum, since $F_{\mu\nu}=0$ for the above vector profile.
\subsection{Scalar-Vector-Tensor}
We now consider theories in which there are two additional fields to the metric: a scalar and a vector. A specific model was analysed in \cite{Heisenberg:2018vti}, where a shift-symmetric scalar field $\phi$ \textit{and} a $U(1)$ gauge invariant vector field $A^\mu$ are coupled \cite{Heisenberg:2018acv}. Specifically, the action of interest is given by:
\begin{align}
S=\int &d^4x\,\sqrt{-g}\left[\frac{M_P^2}{2}R+X+F+\beta_3 \tilde{F}^\mu_{\;\rho} \tilde{F}^{\nu\rho}\phi_\mu \phi_\nu \right.\nonumber\\
&\left.+ \beta_4 X^{n-1} \left(XL^{\mu\nu\alpha\beta}F_{\mu\nu}F_{\alpha\beta}+\frac{n}{2}\tilde{F}^{\mu\nu}\tilde{F}^{\alpha\beta}\phi_{\mu\alpha}\phi_{\nu\beta}\right)\right],\label{SVT}
\end{align}
where $\beta_3$ and $\beta_4$ are arbitrary constants, $X=-\phi_\mu \phi^\mu/2$ is the scalar kinetic term as in scalar-tensor theories, and with all other terms being defined as in Section \ref{secVT}.
Given that the vector field strength $F_{\mu\nu}$ vanishes on isotropic cosmological backgrounds (with $A^\mu=(A_0(t),\vec{0}))$, the action given by eq.~(\ref{SVT}) reduces to that of GR with a minimally coupled, massless scalar field in cosmological settings. The speed of gravitational waves $c_T$ will, therefore, be equal to unity in these theories, thus satisfying the constraint determined by GRB 170817A and GW170817.
Black hole solutions in this model were studied in \cite{Heisenberg:2018vti}, where asymptotically flat, static and spherically symmetric black holes with hair are found for $\beta_4=0$ and for $\beta_4\neq0$ in the cases of $n=0$ or $1$. In all of the cases studied, modified Reissner-Nordstrom-like solutions with global charges $m$ and $Q$ are found, with the black hole further endowed with a secondary scalar hair sourced by the vector charge $Q$.
\subsection{Bigravity}
We now consider bimetric theories. The only non-linear Lorentz invariant ghost-free model is given by the deRham-Gabadadze-Tolley (dRGT) \cite{deRham:2010kj, deRham:2010ik, Hassan:2011zd} massive (bi-)gravity action:
\begin{align}
S&=\; \frac{M_g^2}{2}\int d^4x\; \sqrt{-g}R_g +\frac{M_f^2}{2}\int d^4x\; \sqrt{-f}R_f \nonumber \\
&- m^2M_{g}^2\int d^4x\; \sqrt{-g}\sum_{n=0}^4\beta_n e_n\left(\sqrt{g^{-1}f}\right),
\end{align}
where we have two dynamical metrics $g_{\mu\nu}$ and $f_{\mu\nu}$ with their associated Ricci scalars $R_g$ and $R_f$, and constant mass scales $M_g$ and $M_f$, respectively. Here, $\beta_n$ are free dimensionless coefficients, while $m$ is an arbitrary constant mass scale. The interactions between the two metrics are defined in terms of the functions $e_n (\mathbb{X})$, which correspond to the elementary symmetric polynomials of the matrix $\mathbb{X}=\sqrt{g^{-1}f}$.
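The elementary symmetric polynomials $e_n(\mathbb{X})$ appearing in the interaction potential can be computed directly from the eigenvalues $\lambda_i$ of $\mathbb{X}$. The following hedged sketch (not from the papers cited) illustrates this: for $\mathbb{X}$ equal to the identity in four dimensions the $e_n$ reduce to the binomial coefficients $(1,4,6,4,1)$.

```python
from itertools import combinations
from math import prod

# Hedged sketch: e_n of a matrix X = elementary symmetric polynomial of
# its eigenvalues lambda_i.  For X = identity (eigenvalues all 1) in
# four dimensions, e_n equals the binomial coefficient C(4, n).
def elementary_symmetric(eigs, n):
    if n == 0:
        return 1.0
    return sum(prod(c) for c in combinations(eigs, n))

eigs = [1.0, 1.0, 1.0, 1.0]
assert [elementary_symmetric(eigs, n) for n in range(5)] == [1.0, 4.0, 6.0, 4.0, 1.0]
```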
This bigravity action generally propagates one massive and one massless graviton, with the field $g_{\mu\nu}$ being a combination of \textit{both} modes. There is one special case where $M_f/M_g\rightarrow \infty$, and only the massive graviton propagates (while the metric $f_{\mu\nu}$ becomes a frozen reference metric). As discussed in \cite{Baker:2017hug}, constraints on the speed of gravity waves lead to bounds on the graviton mass of the order $m \lesssim 10^{-22}$eV, which are weak compared to Solar System fifth force constraints of order $m \lesssim 10^{-30}$eV. As long as $m\sim H_0$, we expect the massive graviton to have some cosmological relevance.
Regarding black hole solutions, massive gravity (with one dynamical metric) has some static solutions, although they have been found to be problematic as they can describe infinitely strongly coupled regimes or have singular horizons. One well-behaved solution was proposed recently in \cite{Rosen:2017dvn}, for a time-dependent black hole.
On the other hand, massive bigravity has a much richer phenomenology with a number of possible stationary solutions (static, rotating, and with or without charge) (see a review in \cite{Babichev:2015xha}). Focusing on asymptotically flat solutions, it is possible to have Schwarzschild or Kerr solutions for both metrics \cite{Babichev:2014tfa}. However, these solutions are generically unstable, although they can still fit data as long as $m \lesssim 5\times 10^{-23}$eV \cite{Brito:2013wya}. Hairy static solutions can also be found for some parameter space \cite{Brito:2013xaa}, using the ansatz in eq.~(\ref{ansatz}) for the metric $g_{\mu\nu}$ and the following ansatz for $f_{\mu\nu}$:
\begin{equation}
ds^2_f=-P(r)dt^2+\frac{1}{B(r)}dr^2 +U(r)^2d\Omega^2.
\end{equation}
Here, there are five independent functions $\{h, f, P, B, U\}$ to be determined by the equations of motion. However, due to the presence of a Bianchi constraint, there are only three independent functions $\{ f, B, U\}$ satisfying first-order differential equations. The complete solution must be found numerically, but an expansion can be made for $r\rightarrow \infty$ \cite{Brito:2013xaa}:
\begin{align}
f(r)^{1/2}&=1-\frac{c_1}{2r}+\frac{c_2(1+r\mu)}{2r}e^{-r\mu},\nonumber \\
Y(r)&= 1-\frac{c_1}{2r}-\frac{c_2(1+r\mu)}{2r}e^{-r\mu}, \nonumber \\
U(r)& = r+ \frac{c_2(1+r\mu+r^2\mu^2)}{r^2\mu^2}e^{-r\mu},
\end{align}
where $c_i$ are integration constants, $\mu=m\sqrt{1+M_g^2/M_f^2}$, and $Y$ is a proxy function for $B$ given by $Y=U'/B^{1/2}$. While the constant $c_1$ may be identified with the mass of the black hole, $c_2$ is a new charge that adds a Yukawa-type suppression to the metric due to the massive graviton.
Here we have mentioned some possible black hole solutions for bimetric theories, but we note that since these solutions are not unique, it is not clear what the physical spacetime and the outcome of gravitational collapse will be. Future simulations of non-linear gravitational collapse should allow us to find the physical solution.
\section{Conclusion}\label{Sec:conclusions}
Modifications to general relativity may affect the evolution of the universe and lead to cosmologically observable effects. The range of possible modifications has been drastically reduced with the discovery of GW170817 and the resulting constraint on the speed of gravitational waves. We have looked at the reduced space of theories to see which of them will lead to distinctive signatures around black holes, specifically black hole hair. By looking for observable signatures of that hair and combining them with constraints from current and future cosmological surveys, it should be possible to further narrow down the span of allowed modifications to general relativity and, if the data points that way, single out new physics.
We have focused on scalar-tensor theories. Of all theories, these are the most thoroughly understood and, furthermore, emerge as low energy limits of other, more intricate theories. Not only is there a reasonably general classification of scalar-tensor theories, but there is also a comprehensive body of work on black holes and black hole hair arising in them. As was shown in \cite{Baker:2017hug,Creminelli:2017sry,Sakstein:2017xjx,Ezquiaga:2017ekz}, the discovery of GW170817 places severe constraints on these models. We have found that, generally, and in the cases where they have been studied more carefully, these theories do not have hair for static, spherically symmetric, and asymptotically flat black holes. Specific examples that were constructed to have hair (as suggested in \cite{Babichev:2017guv}) lose that hair in the case where they contribute cosmologically and satisfy $c_T=1$. We found that the case where Einstein-Scalar-Gauss-Bonnet gravity is cosmologically relevant is ruled out by the GW170817 constraint, while Chern-Simons gravity is left unconstrained (and, furthermore, known to have hair in the slowly rotating regime \cite{Yunes:2009hc,Konno:2009kg,Yagi:2012ya,Konno:2014qua,Stein:2014xba,McNees:2015srl}). We also looked at other theories, primarily involving vectors, and found that in that case it is possible for them to satisfy the various cosmological conditions and still have black holes with hair, both spherically symmetric and rotating. We also discussed bimetric theories, which allow for hairy and non-hairy asymptotically flat black holes; understanding which solution describes physical setups requires further work on gravitational collapse.
Our analysis is limited in scope in the sense that we have not considered the most general actions allowed. For example, we have not considered combinations of the Beyond Horndeski models studied in \cite{Babichev:2017guv}. We have done this for two reasons. The first reason is that these models were constructed as proofs of concept without any strong underlying physical motivation -- questions of analyticity arise in the limit where $X\rightarrow 0$. The second reason is that the equations of motion become vastly more complicated, with multiple non-analytic leading terms, which means it is difficult to obtain solutions which can be easily interpreted and classified. Lacking more general results (such as the Galileon no-hair theorem of \cite{Hui:2012qt}), it is always plausible that theories which satisfy the constraints we impose and lead to hair exist.
Nevertheless, our analysis is useful for determining how to move forward with the theories we looked at. Given their cosmological relevance, we take for granted that they will be thoroughly tested when the next generation of cosmological data is made available. What we can now do is determine how to combine these cosmological tests with non-dynamical tests in the strong-field regime. In the cases where the black holes do have hair, one would look for evidence of a fifth force for example in the accreting material or nearby objects.
For theories where black holes have no hair the situation is more complex. In that case, the only observations that might lead to data which allow us to constrain the extra fields are dynamical tests which include inspirals of binary black holes, extreme mass-ratio inspirals (EMRI) or ringdown of single black holes. In GR, due to the strong equivalence principle, the orbital motion and the gravitational wave signal of binary inspirals or EMRI only depend on the masses and spins of the objects involved. On the contrary, in modified gravity theories this principle is violated, and the additional degrees of freedom will determine the effective gravitational coupling constant which will depend on the new field or its derivatives at the location of the relevant object (e.g.~see \cite{Will:2004xi, Mirshekari:2013vb, Horbatsch:2011ye, Healy:2011ef} for binary inspirals in scalar-tensor theories and a general analysis in \cite{Yunes:2009ke,Berti:2005qd}, as well as \cite{Yunes:2011aa, Sopuerta:2009iy} for EMRI in scalar-tensor theories). In the case of ringdown, one would expect that the violent event would have excited any putative extra degree of freedom (such as a scalar or a vector). And while the end state might be a Kerr-Newman solution, the perturbations around this background (i.e.~the quasi-normal modes) should contain information about the extra degrees of freedom \cite{Tattersall:2017erk}. It would be interesting further work to study the specific quasi-normal-mode signatures of the theories investigated here that do not exhibit hairy black hole solutions (yet still abide by the $c_T$ constraint).
Finally, given that we have found that most scalar-tensor theories abide by the no-hair theorem and present trivial constant profiles around black holes, it would be interesting to explore black hole solutions in the presence of screening. It has been argued that models satisfying Solar System constraints (i.e.~which hide the presence of fifth forces in the weak-field limit) do so through screening. The main screening mechanisms which have been advocated are the Vainshtein \cite{Vainshtein:1972sx}, chameleon \cite{Khoury:2003rn} and symmetron \cite{Hinterbichler:2010es} mechanisms, all of which suppress the fifth force compared to the Newtonian force, depending on the local environment. The current approach is to assume that the self gravity of compact objects is sufficiently substantial that it decouples the scalar charge from the mass -- in the limit of a black hole, the scalar charge is set to zero \cite{Hui:2012jb}. However, little has been done to construct screened black hole solutions by, for example, violating the condition of asymptotic flatness. Such an analysis might give us insight on the presence of hair in more realistic setups. A particularly interesting scenario might arise in the case of a binary neutron star merger, such as GW170817. There, screened compact objects (neutron stars) end up forming a black hole; if indeed the black hole has no hair (and no screening), one might expect that the process of shedding the scalar field could lead to an observable effect. Alternatively, if the black hole adopts the screening mechanism, it would be useful to understand what the final, stable solution is and how it jibes with the no-hair theorem.
We have entered a new era in gravitational physics in which multiple regimes can be tested with high precision. While multi-messenger gravitational wave physics has grown to prominence, we believe it should now also include other, significantly different, arenas; from the cosmological, through the galactic all the way down to astrophysical and compact objects, a wide range of observations can be brought together to construct a highly precise understanding of gravity.
\section*{Acknowledgments}
\vspace{-0.2in}
\noindent We are grateful to E.~Bellini, V.~Cardoso, and N.~Sennett for useful discussions. OJT was supported by the Science and Technology Facilities Council (STFC) Project Reference 1804725. OJT also thanks the Max Planck Institute for Gravitational Physics (Albert Einstein Institute) for hosting him whilst part of this work was completed, funded by an STFC LTA grant. PGF acknowledges support from STFC, the Beecroft Trust and the European Research Council. ML is supported at the University of Chicago by the Kavli Institute for Cosmological Physics through an endowment from the Kavli Foundation and its founder Fred Kavli.
\section{Introduction}
Apart from the major advances of Augustin-Louis Cauchy
(cf.~\cite[Subsection 2.3]{Cooke84}) and Karl
Weierstra\ss~(cf.~\cite[Subsection 2.4]{Cooke84}), Sonya
Kovalevskaya's approach (cf.~\cite[Subsection 2.5]{Cooke84}) may be considered
one of the major {\it nineteenth-century} landmarks in
the study of analytic representations for the solutions of partial differential equations (PDEs).
Its usefulness in the field of hypercomplex variables is undisputed, largely due to the pioneering work of Franciscus Sommen
\cite{Sommen81}. An immediate consequence of such a construction is that variants of Kovalevskaya's results -- namely the
ones associated with PDEs in ``normal form''
(cf.~\cite[p.~32]{Cooke84}) -- may be used successively as a tool
to generate null solutions for Dirac-type operators, the so-called
monogenic functions. We refer to \cite[pp.~265-268]{DSS92} for the
generalized theory in the context of hypercomplex variables, and to
\cite[Chapter III]{DSS92} for a wide class of examples involving spherical
monogenics and other generating classes of special functions. For a fractional calculus counterpart of Kovalevskaya's result involving Gegenbauer polynomials we refer to \cite{Vieira17}.
The discrete counterpart of the Cauchy-Kovalevskaya approach was already investigated by De Ridder et al.~(cf.~\cite{RSS10}) and by Constales \& De Ridder (cf.~\cite{CR14}), using slightly different approaches.
In the first approach (2010), the authors mimic the continuous counterpart by means of Taylor series expansions in interplay with the so-called \textit{skew-Weyl relations}, formerly introduced in the paper \cite{RSKS10}. In the second approach (2014), the authors provided an alternative construction to \cite{RSS10}, using solely operational identities associated with the Chebyshev polynomials of first and second kind (cf.~\cite[p.~170]{MasonChebyshev93}):
\begin{eqnarray}
\label{ChebyshevPolynomials}T_k(\lambda)=\cos(k\cos^{-1}(\lambda)) & \mbox{resp.}& \displaystyle U_{k-1}(\lambda)=\dfrac{\sin(k\cos^{-1}(\lambda))}{\sqrt{1-\lambda^2}}.
\end{eqnarray}
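As a hedged numerical aside (not part of the original papers), the trigonometric definitions in (\ref{ChebyshevPolynomials}) can be checked against the familiar polynomial forms, e.g.\ $T_3(\lambda)=4\lambda^3-3\lambda$ and $U_2(\lambda)=4\lambda^2-1$:

```python
import math

# Hedged sketch: the trigonometric definitions of the Chebyshev
# polynomials reproduce the explicit polynomial forms on (-1, 1).
def T(k, x):
    return math.cos(k * math.acos(x))

def U(km1, x):  # U_{k-1}, so k = km1 + 1
    k = km1 + 1
    return math.sin(k * math.acos(x)) / math.sqrt(1.0 - x * x)

for x in (-0.9, -0.3, 0.2, 0.7):
    assert abs(T(3, x) - (4 * x**3 - 3 * x)) < 1e-12
    assert abs(U(2, x) - (4 * x**2 - 1)) < 1e-12
```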
In the present paper we reformulate the approaches considered in both papers. The construction stated in section \ref{CKsection} makes use of Roman's umbral calculus approach \cite{Roman84} and of the discrete Fourier theory considered by G\"urlebeck and Spr\"o\ss ig in their book \cite{GuerlebeckSproessig97},
following up the author's recent contribution \cite{FaustinoRelativistic18}.
In section \ref{PlaneWaveSection} we obtain an explicit \textit{space-time} integral representation for the Cauchy-Kovalevskaya extension on the momentum space, based on Cauchy principal value representations for (\ref{ChebyshevPolynomials}) and on a Laplace transform identity associated with the \textit{generalized Mittag-Leffler function} (cf.~\cite[p.~21]{SamkoEtAl93})
\begin{eqnarray} \label{MittagLeffler}E_{\alpha,\beta}(z)=\sum_{k=0}^{\infty}\frac{z^k}{\Gamma(\beta+\alpha k)}, &\mbox{for}& \Re(\alpha)>0~\&~\Re(\beta)>0.
\end{eqnarray}
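As an illustrative check (an assumption-free consequence of the series definition, not taken from \cite{SamkoEtAl93}), the classical special cases $E_{1,1}(z)=e^z$ and $E_{2,1}(z)=\cosh(\sqrt{z})$ can be recovered from truncated partial sums of (\ref{MittagLeffler}):

```python
import math

# Hedged sketch: partial sums of the generalized Mittag-Leffler series
# recover two classical special cases, E_{1,1}(z) = exp(z) and
# E_{2,1}(z) = cosh(sqrt(z)) (since Gamma(2k+1) = (2k)!).
def mittag_leffler(alpha, beta, z, terms=60):
    return sum(z**k / math.gamma(beta + alpha * k) for k in range(terms))

z = 1.7
assert abs(mittag_leffler(1, 1, z) - math.exp(z)) < 1e-10
assert abs(mittag_leffler(2, 1, z) - math.cosh(math.sqrt(z))) < 1e-10
```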
With this construction we provide a further step toward a wider scenario for the theory of discrete hypercomplex variables in interplay with fractional integro-differential type operators (cf.~\cite[Chapter 5]{SamkoEtAl93}), with the hope that this may contribute to further developments in the field towards discrete analogues of the \textit{Littlewood-Paley-Stein theory} (cf.~\cite{SteinW2000,MSteinW02,Pierce09}).
\section{Problem setup}
The approach proposed throughout this paper is centered around the solution of a first order difference-difference Cauchy problem
on the space-time lattice $$h{\mathbb Z}^n \times \tau {\mathbb Z}_{\geq 0}:=\left\{(x,t)\in {\mathbb R}^n \times [0,\infty)~:~\frac{x}{h}\in {\mathbb Z}^n~~\&~~\frac{t}{\tau}\in {\mathbb Z} \right\}$$ involving the finite difference discretization $D_h$ for the Dirac operator over the Clifford algebra $C \kern -0.1em \ell_{n,n}$, considered by the author in \cite{FaustinoKGordonDirac16,FaustinoMMAS17}.
To motivate our approach, let us first take a close look at the following second order Cauchy problem on $h{\mathbb Z}^n \times \tau {\mathbb Z}_{\geq 0}$:
\begin{eqnarray}
\label{KleinGordonh} \left\{\begin{array}{lll}
\dfrac{\Psi(x,t+\tau)+\Psi(x,t-\tau)-2\Psi(x,t)}{\tau^2}= \Delta_h \Psi(x,t) & \mbox{for} & (x,t)\in
h{\mathbb Z}^n \times \tau {\mathbb Z}_{\geq 0}
\\ \ \\
\Psi(x,0)=\Phi_0(x) & \mbox{for} & x\in h{\mathbb Z}^n\\ \ \\
\left[L_t \Psi(x,t)\right]_{t=0}=\Phi_1(x) & \mbox{for} & x\in h{\mathbb Z}^n
\end{array}\right.
\end{eqnarray}
The above problem corresponds to a difference-difference discretization of the Cauchy problem of Klein-Gordon type in the \textit{massless limit} $m\rightarrow 0$ (cf.~\cite[section 4]{Rabin82}).
Here
\begin{eqnarray}
\label{discreteLaplacian}
\displaystyle \Delta_h \Psi(x,t)=\sum_{j=1}^n
\frac{\Psi(x+h{\bf e}_j,t)+\Psi(x-h{\bf e}_j,t)-2\Psi(x,t)}{h^2}
\end{eqnarray}
corresponds to the finite difference action of the \textit{discrete Laplacian} on the space-time lattice $h{\mathbb Z}^n \times \tau {\mathbb Z}_{\geq 0}$ (cf. \cite[subsection 1.2.]{FaustinoRelativistic18}), whereas $L_t$ is a finite difference operator satisfying the second-order constraint
$$
L_t(L_t\Psi(x,t))=\dfrac{\Psi(x,t+\tau)+\Psi(x,t-\tau)-2\Psi(x,t)}{\tau^2}.
$$
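As a hedged numerical aside (illustrative only), the discrete Laplacian (\ref{discreteLaplacian}) is exact on quadratics: applied to $\Psi(x)=x_1^2+\cdots+x_n^2$ it gives exactly $2n$, independently of the mesh width $h$, since central second differences have no truncation error on polynomials of degree two.

```python
# Hedged sketch: the discrete Laplacian Delta_h applied to the
# quadratic Psi(x) = x_1^2 + ... + x_n^2 equals 2n exactly,
# for any mesh width h.
def discrete_laplacian(psi, x, h):
    n = len(x)
    total = 0.0
    for j in range(n):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        total += (psi(xp) + psi(xm) - 2.0 * psi(x)) / h**2
    return total

psi = lambda x: sum(c * c for c in x)
val = discrete_laplacian(psi, [1.0, -2.0, 0.5], h=0.25)
assert abs(val - 6.0) < 1e-9  # 2n with n = 3
```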
For the case where $L_t$ is the finite difference operator defined as $$L_t\Psi(x,t)=\dfrac{\Psi\left(x,t+\frac{\tau}{2}\right)-\Psi\left(x,t-\frac{\tau}{2}\right)}{\tau},$$ it was shown by the author in his recent contribution (cf.~\cite[subsection 4.1]{FaustinoRelativistic18}) that the solution of the difference-difference evolution problem (\ref{KleinGordonh}) may be derived from the Cauchy principal value representations for the Chebyshev polynomials of first and second kind, $T_k(\lambda)$ resp. $U_{k-1}(\lambda)$
(cf.~ \cite[subsection 4.1,p.~173]{MasonChebyshev93}):
\begin{eqnarray}
\label{ChebyshevIntegrals}
\begin{array}{lll}
\displaystyle T_{k}(\lambda)&=&\displaystyle -\frac{1}{\pi}\int_{-1}^{1} \left(1-s^2\right)^{\frac{1}{2}}\frac{U_{k-1}(s)}{s-\lambda} ds\\ \ \\
\displaystyle U_{k-1}(\lambda)&=&\displaystyle\frac{1}{\pi}\int_{-1}^{1} \left(1-s^2\right)^{-\frac{1}{2}}\frac{T_{k}(s)}{s-\lambda} ds.
\end{array}
\end{eqnarray}
As studied in depth by the author in \cite{FaustinoMonomiality14}, the above construction arises naturally from an abstract framework involving manipulations of \textit{shift-invariant operators} that admit formal series expansions in terms of the time derivative $\partial_t$:
$$ L_t=\sum_{k=1}^\infty b_k \dfrac{\left(\partial_t\right)^k}{k!},~~~\mbox{with}~~~b_k=[(L_t)^k t^k]_{t=0}.$$
This construction is similar to the one of \cite{CR14} (cf.~\cite[Remark 4.1]{FaustinoRelativistic18}), except that the operator $-iL_t$ ($i=\sqrt{-1}$) -- appearing quite often in Dirac's equation (cf.~\cite[section 5]{Rabin82}) -- is replaced by finite difference operators of the form
\begin{eqnarray}
\label{FiniteDiffCR14}\nabla_\tau \Psi(x,t)={\bf e}_- \frac{\Psi(x,t)-\Psi(x,t-\tau)}{\tau}-{\bf e}_+ \frac{\Psi(x,t+\tau)-\Psi(x,t)}{\tau},
\end{eqnarray}
for a given pair of Witt basis elements in the time direction, ${\bf e}_+$ resp. ${\bf e}_-$, satisfying the set of constraints
\begin{eqnarray*}
\left({\bf e}_+\right)^2=\left({\bf e}_-\right)^2=0 & \& & {\bf e}_+ {\bf e}_-+{\bf e}_-{\bf e}_+=1.
\end{eqnarray*}
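The Witt basis relations above can be realized concretely; as a hedged illustration (a standard $2\times 2$ matrix model, not taken from the papers under discussion), the nilpotent matrices below satisfy $({\bf e}_+)^2=({\bf e}_-)^2=0$ and ${\bf e}_+{\bf e}_-+{\bf e}_-{\bf e}_+=1$:

```python
# Hedged sketch: a 2x2 matrix realization of the Witt basis relations.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

e_plus   = [[0, 1], [0, 0]]
e_minus  = [[0, 0], [1, 0]]
zero     = [[0, 0], [0, 0]]
identity = [[1, 0], [0, 1]]

assert matmul(e_plus, e_plus) == zero          # (e_+)^2 = 0
assert matmul(e_minus, e_minus) == zero        # (e_-)^2 = 0
assert matadd(matmul(e_plus, e_minus),
              matmul(e_minus, e_plus)) == identity  # anticommutator = 1
```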
The Cauchy problem that we propose to study here is of the form
\begin{eqnarray}
\label{DiracCK} \left\{\begin{array}{lll}
{\bf e}_0 \dfrac{\Psi(x,t+\tau)-\Psi(x,t-\tau)}{2\tau}+{\bf e}_{2n+1}\dfrac{2\Psi(x,t)-\Psi(x,t+\tau)-\Psi(x,t-\tau)}{2\tau} \\ \ \\
=- D_h \Psi(x,t),~~~~\mbox{for}~~~(x,t)\in h{\mathbb Z}^n \times \tau {\mathbb Z}_{\geq 0} \\ \ \\
\Psi(x,0)=\Phi_0(x),~~\mbox{for}~~~x\in h{\mathbb Z}^n
\end{array}\right..
\end{eqnarray}
Here and elsewhere
\begin{eqnarray}
\label{DiracEqh}
\begin{array}{lll}
D_h \Psi(x,t)&=&\displaystyle \sum_{j=1}^n{\bf e}_j\frac{\Psi(x+h {\bf e}_j,t)-\Psi(x-h{\bf e}_j,t)}{2h}+ \\
&+& \displaystyle \sum_{j=1}^n{\bf e}_{n+j}\frac{2\Psi(x,t)-\Psi(x+h {\bf e}_j,t)-\Psi(x-h {\bf e}_j,t)}{2h}
\end{array}
\end{eqnarray}
stands for the finite difference discretization for the Dirac operator, whereas ${\bf e}_0$ and ${\bf e}_{2n+1}$ are two Clifford generators satisfying
\begin{eqnarray*}
({\bf e}_0)^2=-1 & \& & ({\bf e}_{2n+1})^2=+1.
\end{eqnarray*}
The resulting Cauchy problem formulation over the space-time lattice $h{\mathbb Z}^n \times \tau {\mathbb Z}_{\geq 0}$ mixes first order approximations for the space-time derivatives, $\partial_{t}$ and $\partial_{x_j}$ respectively, in the ${\bf e}_j-$direction ($j=0,1,\ldots,n$), plus $n+1$ second order perturbation terms satisfying the asymptotic conditions
\begin{eqnarray*}
\dfrac{2\Psi(x,t)-\Psi(x,t+\tau)-\Psi(x,t-\tau)}{2\tau} &=&-\frac{\tau}{2}\partial_{t}^2\Psi(x,t)+O(\tau^3)\\ \ \\
\frac{2\Psi(x,t)-\Psi(x+h {\bf e}_j,t)-\Psi(x-h {\bf e}_j,t)}{2h}&=&-\frac{h}{2}\partial_{x_j}^2\Psi(x,t)+O(h^3).
\end{eqnarray*}
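As a hedged numerical aside (illustrative only), the first asymptotic condition can be checked on a smooth test function such as $\Psi(t)=\sin(t)$, for which $-\frac{\tau}{2}\partial_t^2\Psi(t)=\frac{\tau}{2}\sin(t)$ and the remainder is $O(\tau^3)$:

```python
import math

# Hedged sketch: for Psi(t) = sin(t), the perturbation term
# (2 Psi(t) - Psi(t+tau) - Psi(t-tau)) / (2 tau) approaches
# -(tau/2) Psi''(t) = (tau/2) sin(t), with an O(tau^3) remainder.
def perturbation(psi, t, tau):
    return (2 * psi(t) - psi(t + tau) - psi(t - tau)) / (2 * tau)

t, tau = 0.8, 1e-3
lhs = perturbation(math.sin, t, tau)
rhs = (tau / 2) * math.sin(t)
assert abs(lhs - rhs) < 10 * tau**3
```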
Throughout this paper we shall assume that ${\bf e}_0$ and ${\bf e}_{2n+1}$ together with ${\bf e}_1,{\bf e}_2,\ldots,{\bf e}_n$, ${\bf e}_{n+1},{\bf e}_{n+2}\,\ldots,{\bf e}_{2n}$ are the generators of the Clifford algebra $C \kern -0.1em \ell_{n+1,n+1}$, satisfying
\begin{eqnarray}
\label{CliffordBasis}
\begin{array}{lll}
{\bf e}_j {\bf e}_k+ {\bf e}_k {\bf e}_j=-2\delta_{jk}, & 0\leq j,k\leq n \\
{\bf e}_{j} {\bf e}_{n+k}+ {\bf e}_{n+k} {\bf e}_{j}=0, & 0\leq j\leq n ~~\&~~ 1\leq k\leq n+1\\
{\bf e}_{n+j} {\bf e}_{n+k}+ {\bf e}_{n+k} {\bf e}_{n+j}=2\delta_{jk}, & 1\leq j,k\leq
n+1.
\end{array}
\end{eqnarray}
Under the canonical isomorphism $C \kern -0.1em \ell_{n+1,n+1}\cong \mbox{End}(C \kern -0.1em \ell_{0,n+1})$ (see e.g. \cite[Chapter 4]{VazRoldao16} for further details), the equivalence between our formulation and the formulation considered by Constales and De Ridder in \cite{CR14} is rather obvious (cf.~\cite[subsection 2.3]{FaustinoKGordonDirac16}).
In particular, under the canonical identifications
\begin{eqnarray*}
{\bf e}_0 \leftrightarrow {\bf e}_--{\bf e}_+& \& & {\bf e}_{2n+1}\leftrightarrow {\bf e}_-+{\bf e}_+
\end{eqnarray*}
we find that the finite difference operators
$${\bf e}_0 \dfrac{\Psi(x,t+\tau)-\Psi(x,t-\tau)}{2\tau}+{\bf e}_{2n+1}\dfrac{2\Psi(x,t)-\Psi(x,t+\tau)-\Psi(x,t-\tau)}{2\tau}$$
and (\ref{FiniteDiffCR14}) are indeed equivalent.
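The signatures required of ${\bf e}_0$ and ${\bf e}_{2n+1}$ under this identification follow directly from the Witt relations. As a hedged illustration (a $2\times 2$ matrix model assumed here only for checking, not taken from the cited papers), $({\bf e}_--{\bf e}_+)^2=-1$ and $({\bf e}_-+{\bf e}_+)^2=+1$:

```python
# Hedged sketch: under e_0 <-> e_- - e_+ and e_{2n+1} <-> e_- + e_+,
# the Witt relations force (e_0)^2 = -1 and (e_{2n+1})^2 = +1,
# checked here in a 2x2 matrix model of the Witt basis.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

e_plus, e_minus = [[0, 1], [0, 0]], [[0, 0], [1, 0]]
e0    = [[e_minus[i][j] - e_plus[i][j] for j in range(2)] for i in range(2)]
e2np1 = [[e_minus[i][j] + e_plus[i][j] for j in range(2)] for i in range(2)]

assert matmul(e0, e0) == [[-1, 0], [0, -1]]     # (e_0)^2 = -1
assert matmul(e2np1, e2np1) == [[1, 0], [0, 1]]  # (e_{2n+1})^2 = +1
```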
\section{The discrete Cauchy-Kovalevskaya approach explained}\label{CKsection}
\subsection{Starting with a formal power series expansion}\label{EGFsection}
The formal construction of the discrete Cauchy-Kovalevskaya extension proceeds as follows.
Let us first take a close look at the equation
$$ {\bf e}_0 \dfrac{\Psi(x,t+\tau)-\Psi(x,t-\tau)}{2\tau}+{\bf e}_{2n+1}\dfrac{2\Psi(x,t)-\Psi(x,t+\tau)-\Psi(x,t-\tau)}{2\tau}
=- D_h \Psi(x,t).$$
We notice here that the factorization property $\left(D_h\right)^2=-\Delta_h$ (cf.~\cite[Proposition 2.1]{FaustinoKGordonDirac16}) together with the identities $({\bf e}_0)^2=-1$ and $({\bf e}_{2n+1})^2=+1$ lead to
\begin{eqnarray}
\label{discreteHarmonic}
\begin{array}{lll}
-\Delta_h \Psi(x,t)&=&-D_h\left(-D_h\Psi(x,t)\right)\\ \ \\
&=&\dfrac{2\Psi(x,t)-\Psi(x,t+\tau)-\Psi(x,t-\tau)}{\tau^2}.
\end{array}
\end{eqnarray}
From the combination of the above two identities, we end up with
$$ {\bf e}_0 \dfrac{\Psi(x,t+\tau)-\Psi(x,t-\tau)}{2\tau}-{\bf e}_{2n+1}\frac{\tau}{2}\Delta_h \Psi(x,t)
=- D_h \Psi(x,t).
$$
Moreover, from the set of identities $\left({\bf e}_0\right)^2=-1$ and ${\bf e}_0{\bf e}_{2n+1}=-{\bf e}_{2n+1}{\bf e}_0$ involving the Clifford generators ${\bf e}_0$ and ${\bf e}_{2n+1}$ of $C \kern -0.1em \ell_{n+1,n+1}$, there holds
\begin{eqnarray}
\label{DiracEqhtau}
\Psi(x,t+\tau)-\Psi(x,t-\tau)=\left(2\tau{\bf e}_0 D_h+\tau^2{\bf e}_{2n+1}{\bf e}_0\Delta_h\right) \Psi(x,t).
\end{eqnarray}
Here we recall that the left-hand side of (\ref{DiracEqhtau}) admits the formal series expansion (cf.~\cite[Example 2.3.]{FaustinoMonomiality14})
$$
\Psi(x,t+\tau)-\Psi(x,t-\tau)=2\sinh(\tau\partial_t)\Psi(x,t).
$$
Similarly to \cite[subsection 3.2.]{FaustinoRelativistic18}, one can conclude from the identity (\ref{DiracEqhtau}) that the solution of the Cauchy problem (\ref{DiracCK}) may be represented through the {\it Exponential Generating Function} (EGF, for short) type expansion
\begin{eqnarray}
\label{CKseriesExpansion}
\Psi(x,t)=\sum_{k=0}^{\infty} \frac{G_k(t;-\tau,2\tau)}{k!}\left(2\tau{\bf e}_0 D_h+\tau^2{\bf e}_{2n+1}{\bf e}_0\Delta_h\right)^k \Phi_0(x).
\end{eqnarray}
In the above formula, $G_k(t;-\tau,2\tau)$ ($k \in \mathbb{N}_0$) denote the Gould polynomials -- the Sheffer sequence associated to the \textit{delta operator} $2\sinh(\tau \partial_t)$ (cf.~\cite[subsection 1.4.]{Roman84}).
We recall here that (\ref{CKseriesExpansion}) slightly differs from the one proposed in \cite[pp.~1469 \& 1470]{RSS10}. The main idea here, which removes the need to impose skew-Weyl constraints (cf.~\cite[section 3]{RSKS10}), was to link the solution of the discrete Cauchy problem (\ref{DiracCK}) with the solutions of the {discrete harmonic type equation} (\ref{discreteHarmonic}), leading to a first-order time-evolution equation (\ref{DiracEqh}) encoded by the {\it delta operator} $2\sinh(\tau \partial_t)$.
\subsection{Operational Identities}\label{OperationalSection}
In the previous subsection we used the EGF endowed by the Sheffer sequence associated to the \textit{delta operator} $2\sinh(\tau\partial_t)$ to obtain a formal series representation for the solution of the Cauchy problem (\ref{DiracEqh}).
Now we will see how the factorization of the finite difference operator $2\tau{\bf e}_0 D_h+\tau^2{\bf e}_{2n+1}{\bf e}_0\Delta_h$, together with rather simple umbral calculus techniques, leads to some interesting operational identities.
We start by observing that the set of relations (\ref{CliffordBasis}) associated to the Clifford generators of $C \kern -0.1em \ell_{n+1,n+1}$ leads to the anti-commuting relations
\begin{eqnarray*}
{\bf e}_0 D_h+D_h {\bf e}_0=0 \\
{\bf e}_0 D_h ({\bf e}_{2n+1}{\bf e}_0 \Delta_h)+({\bf e}_{2n+1}{\bf e}_0 \Delta_h){\bf e}_0 D_h=0,
\end{eqnarray*}
and hence, to the factorization property
\begin{eqnarray*}
\label{factorizationDhtau}\left(2\tau{\bf e}_0 D_h+\tau^2{\bf e}_{2n+1}{\bf e}_0\Delta_h\right)^2=-4\tau^2\Delta_h+ \tau ^4 \Delta_h^2.
\end{eqnarray*}
Thereby, the iterated powers $\left(2\tau{\bf e}_0 D_h+\tau^2{\bf e}_{2n+1}{\bf e}_0\Delta_h\right)^{k}$ appearing on the right-hand side of the infinite sum (\ref{CKseriesExpansion}) may be split into even ($k=2m$) and odd ($k=2m+1$) powers as follows:
\begin{eqnarray}
\label{powersDhtauk}
\small{\begin{array}{lll}
\left(2\tau{\bf e}_0 D_h+\tau^2{\bf e}_{2n+1}{\bf e}_0\Delta_h\right)^{2m}&=&\left(-4\tau^2\Delta_h+ \tau ^4 \Delta_h^2\right)^m
\\
\left(2\tau{\bf e}_0 D_h+\tau^2{\bf e}_{2n+1}{\bf e}_0\Delta_h\right)^{2m+1}&=&\left(-4\tau^2\Delta_h+ \tau ^4 \Delta_h^2\right)^m\left(2\tau{\bf e}_0 D_h+\tau^2{\bf e}_{2n+1}{\bf e}_0\Delta_h\right).
\end{array}}
\end{eqnarray}
By substituting (\ref{powersDhtauk}) into (\ref{CKseriesExpansion}) we recognize, after some umbral calculus manipulations involving the inverse $L^{-1}(s)=\frac{1}{\tau}\sinh^{-1}\left(\frac{s}{2}\right)$ of the function $L(s)=2\sinh(\tau s)$ (cf.~\cite[Appendix A]{FaustinoRelativistic18}), that
\begin{eqnarray}
\label{CKseriesExpansionUmbral}
\begin{array}{lll}
\Psi(x,t)&=&
\displaystyle\sum_{m=0}^{\infty} \frac{2^{2m}G_{2m}(t;-\tau,2\tau)}{(2m)!}\left(-\tau^2\Delta_h+ \frac{\tau^4}{4} \Delta_h^2\right)^m\Phi_0(x)+\\ \ \\ &+&\displaystyle \sum_{m=0}^{\infty} \frac{2^{2m+1}G_{2m+1}(t;-\tau,2\tau)}{(2m+1)!}\left(-\tau^2\Delta_h+ \frac{\tau^4}{4} \Delta_h^2\right)^m \times \\ \ \\ &\times & \left[\tau{\bf e}_0 D_h\Phi_0(x)+\dfrac{\tau^2}{2}{\bf e}_{2n+1}{\bf e}_0\Delta_h\Phi_0(x)\right] \\ \ \\
&=&\cosh\left(\dfrac{t}{\tau}\sinh^{-1}\left(\sqrt{-\tau^2\Delta_h+ \dfrac{\tau^4}{4} \Delta_h^2}\right)\right)\Phi_0(x)+\\ \ \\
&+&\dfrac{\sinh\left(\dfrac{t}{\tau}\sinh^{-1}\left(\sqrt{-\tau^2\Delta_h+ \dfrac{\tau^4}{4} \Delta_h^2}\right)\right)}{\sqrt{-\tau^2\Delta_h+ \dfrac{\tau^4}{4} \Delta_h^2}}\times \\ \ \\ &\times &\left[\tau{\bf e}_0 D_h\Phi_0(x)+\dfrac{\tau^2}{2}{\bf e}_{2n+1}{\bf e}_0\Delta_h\Phi_0(x)\right].
\end{array}
\end{eqnarray}
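A scalar sanity check of (\ref{CKseriesExpansionUmbral}) is instructive: replacing the operator $2\tau{\bf e}_0 D_h+\tau^2{\bf e}_{2n+1}{\bf e}_0\Delta_h$ by a real number $\mu$ (so that the square roots above play the role of $\mu/2$), the closed form collapses to $\psi(t)=\cosh\left(\frac{t}{\tau}\sinh^{-1}\frac{\mu}{2}\right)+\sinh\left(\frac{t}{\tau}\sinh^{-1}\frac{\mu}{2}\right)$, which must satisfy the scalar analogue $\psi(t+\tau)-\psi(t-\tau)=\mu\,\psi(t)$ of (\ref{DiracEqhtau}). A short Python sketch (illustrative only) confirms this:

```python
import math

def psi(t, tau, mu, psi0=1.0):
    """Scalar stand-in for the closed form (CKseriesExpansionUmbral):
    the operator 2*tau*e0*Dh + tau^2*e_{2n+1}*e0*Delta_h is replaced by
    the number mu, so sqrt(-tau^2 Delta_h + tau^4/4 Delta_h^2) ~ mu/2."""
    theta = math.asinh(mu / 2.0)
    return (math.cosh(t * theta / tau) + math.sinh(t * theta / tau)) * psi0

tau = 0.1
for mu in (0.3, 1.0, 2.5):
    for t in (0.0, 0.2, 0.5):
        lhs = psi(t + tau, tau, mu) - psi(t - tau, tau, mu)
        rhs = mu * psi(t, tau, mu)
        # scalar analogue of Psi(x,t+tau) - Psi(x,t-tau) = A Psi(x,t)
        assert math.isclose(lhs, rhs, rel_tol=1e-12, abs_tol=1e-12)
```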
\subsection{The discrete Fourier analysis approach}
In analogy to \cite[section 3.]{FaustinoRelativistic18}, the solution of the time-evolution problem may be computed with the aid
of the discrete Fourier transform (cf.~\cite[subsection 5.2.1]{GuerlebeckSproessig97})
\begin{eqnarray}
\label{discreteFh}
(\mathcal{F}_{h} \Psi)(\xi,t)&=&\left\{\begin{array}{lll}
\displaystyle \frac{h^n}{\left(2\pi\right)^{\frac{n}{2}}}\displaystyle
\sum_{x\in h{\mathbb Z}^n}\Psi(x,t)e^{i x \cdot \xi} & \mbox{for} & \xi\in Q_h
\\ \ \\
0 & \mbox{for} & \xi\in {\mathbb R}^n \setminus \left(-\frac{\pi}{h},\frac{\pi}{h} \right]^n,
\end{array}\right.
\end{eqnarray}
where $Q_h=\left(-\frac{\pi}{h},\frac{\pi}{h} \right]^n$ stands for the \textit{Brillouin representation} of the $n-$torus ${\mathbb R}^n/\frac{2\pi}{h}{\mathbb Z}^n$ (cf.~\cite[p.~324]{Rabin82}).
Here we recall that the map
$\mathcal{F}_h:\ell_2(h{\mathbb Z}^n)\rightarrow L_2(Q_h)$ is an isometry so that
\begin{eqnarray}
\label{FourierTransform} (\mathcal{F}^{-1}_h
\Phi)(x,t)=\frac{1}{\left(2\pi\right)^{\frac{n}{2}}}\int_{Q_h}
\Phi(\xi,t)e^{-ix\cdot \xi }d\xi.
\end{eqnarray}
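As a sanity check of the normalizations in (\ref{discreteFh}) and (\ref{FourierTransform}), one may verify numerically, for a finitely supported signal on $h{\mathbb Z}$ (case $n=1$), that $\mathcal{F}^{-1}_h\mathcal{F}_h=\operatorname{id}$. The following Python sketch (illustrative only) does so with a trapezoid rule over $Q_h$:

```python
import cmath, math

h = 0.5                                  # lattice spacing
signal = {-h: 0.25, 0.0: 1.0, h: -0.5}   # finitely supported data on hZ

def F_h(xi):
    """Forward transform (discreteFh) for n = 1."""
    return h / math.sqrt(2 * math.pi) * sum(
        v * cmath.exp(1j * x * xi) for x, v in signal.items())

def F_h_inv(x, N=4096):
    """Inverse transform (FourierTransform): trapezoid rule over
    Q_h = (-pi/h, pi/h]; exact here since the integrand is a trig
    polynomial of low degree and the rule is applied over a full period."""
    a, b = -math.pi / h, math.pi / h
    vals = [F_h(a + (b - a) * k / N) * cmath.exp(-1j * x * (a + (b - a) * k / N))
            for k in range(N + 1)]
    integral = (b - a) / N * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return integral / math.sqrt(2 * math.pi)

for x, v in signal.items():
    assert abs(F_h_inv(x) - v) < 1e-10
```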
Using the fact that the (Clifford) constants
\begin{eqnarray}
\label{FourierMultipliers}
\begin{array}{lll}
d_h(\xi)^2&=&\displaystyle \sum_{j=1}^{n}\frac{4}{h^2}\sin^2\left(\frac{h\xi_j}{2}\right) \\ \ \\
\textbf{z}_{h}(\xi)
&=&\displaystyle \sum_{j=1}^n -i{\bf e}_j\dfrac{\sin(h\xi_j)}{h}+\sum_{j=1}^n {\bf e}_{n+j}\dfrac{1-\cos(h\xi_j)}{h}
\end{array}
\end{eqnarray}
are the Fourier multipliers of $\mathcal{F}_h\circ (-\Delta_h)\circ\mathcal{F}_h^{-1}$ resp. $\mathcal{F}_h\circ D_h\circ \mathcal{F}_h^{-1}$, we get that the EGF operational identity (\ref{CKseriesExpansionUmbral}) on the momentum space $Q_h \times \tau {\mathbb Z}_{\geq 0}$ reads as
\begin{eqnarray}
\label{CKseriesFourier}
\begin{array}{lll}
\mathcal{F}_h\Psi(\xi,t)&=&\cos\left(\dfrac{t}{\tau}\sin^{-1}\left(\sqrt{\tau^2d_h(\xi)^2+ \dfrac{\tau^4}{4} d_h(\xi)^4}\right)\right)\mathcal{F}_h\Phi_0(\xi)+\\ \ \\
&+&\dfrac{\sin\left(\dfrac{t}{\tau}\sin^{-1}\left(\sqrt{\tau^2d_h(\xi)^2+ \dfrac{\tau^4}{4} d_h(\xi)^4}\right)\right)}{\sqrt{\tau^2d_h(\xi)^2+ \dfrac{\tau^4}{4} d_h(\xi)^4}}\times \\ &\times&\left[\tau{\bf e}_0 {\bf z}_h(\xi)-\dfrac{\tau^2}{2}{\bf e}_{2n+1}{\bf e}_0d_h(\xi)^2\right]\mathcal{F}_h\Phi_0(\xi).
\end{array}
\end{eqnarray}
On the other hand, with the aid of the fundamental trigonometric identity
\begin{eqnarray*}
\sin^{-1}(s)=\cos^{-1}\left(\sqrt{1-s^2}\right)&\mbox{for}&0\leq s\leq 1,
\end{eqnarray*}
we realize that the condition $\displaystyle d_h(\xi)^2 \leq \frac{2 }{\tau^2}(\sqrt{2}-1)$, which follows from the constraint
\begin{eqnarray*}
0\leq \tau^2 d_h(\xi)^2+\dfrac{\tau^4}{4} d_h(\xi)^4 \leq 1
\end{eqnarray*}
allows us to represent (\ref{CKseriesFourier}) in terms of the Chebyshev polynomials of first and second kind, $T_k(\lambda)$ resp. $U_{k-1}(\lambda)$, defined via (\ref{ChebyshevPolynomials}).
Namely, one has
\begin{eqnarray}
\label{CKseriesFourierChebyshev}
\begin{array}{lll}
\mathcal{F}_h\Psi(\xi,t)&=&T_{\frac{t}{\tau}}\left(\sqrt{1-\tau^2d_h(\xi)^2- \dfrac{\tau^4}{4} d_h(\xi)^4}\right)\mathcal{F}_h\Phi_0(\xi)~+\\ \ \\
&+&
U_{\frac{t}{\tau}-1}\left(\sqrt{1-\tau^2d_h(\xi)^2- \dfrac{\tau^4}{4} d_h(\xi)^4}\right)\times \\ \ \\
&\times & \left[\tau{\bf e}_0 {\bf z}_h(\xi)-\dfrac{\tau^2}{2}{\bf e}_{2n+1}{\bf e}_0d_h(\xi)^2\right]\mathcal{F}_h\Phi_0(\xi).
\end{array}
\end{eqnarray}
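Both the threshold $d_h(\xi)^2\leq \frac{2}{\tau^2}(\sqrt{2}-1)$ and the trigonometric identities behind the Chebyshev rewriting, namely $\cos\left(n\sin^{-1}\sqrt{X}\right)=T_n(\sqrt{1-X})$ and $\sin\left(n\sin^{-1}\sqrt{X}\right)/\sqrt{X}=U_{n-1}(\sqrt{1-X})$ for $0\leq X\leq 1$, are easy to confirm numerically (a Python sketch, illustrative only):

```python
import math

# (i) Boundary of 0 <= u + u^2/4 <= 1 with u = tau^2 d_h(xi)^2:
u_star = 2.0 * (math.sqrt(2.0) - 1.0)
assert math.isclose(u_star + u_star**2 / 4.0, 1.0, rel_tol=1e-12)

# (ii) Chebyshev polynomials via their three-term recurrence.
def chebyshev_T(n, x):
    a, b = 1.0, x            # T_0, T_1
    for _ in range(n):
        a, b = b, 2 * x * b - a
    return a

def chebyshev_U(n, x):
    a, b = 1.0, 2 * x        # U_0, U_1
    for _ in range(n):
        a, b = b, 2 * x * b - a
    return a

for X in (0.1, 0.4, 0.9):
    lam = math.sqrt(1.0 - X)
    theta = math.asin(math.sqrt(X))
    for n in (1, 2, 5):
        assert math.isclose(math.cos(n * theta), chebyshev_T(n, lam), abs_tol=1e-12)
        assert math.isclose(math.sin(n * theta) / math.sqrt(X),
                            chebyshev_U(n - 1, lam), abs_tol=1e-12)
```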
Finally, in view of the discrete Fourier inversion properties (cf.~\cite[p.~247]{GuerlebeckSproessig97}) we get
that the formal solution (\ref{CKseriesExpansion}) is given by the \textit{discrete convolution formula}
\begin{eqnarray}
\label{discreteConvolutionformula}
\Psi(x,t)&=&\sum_{y\in h{\mathbb Z}^n} h^n {\bf K}_\tau(x-y,t)\Phi_0(y),
\end{eqnarray}
involving the kernel function
\begin{eqnarray}
\label{ChebyshevKernel}
\begin{array}{lll}
{\bf K}_\tau(x,t)&=& \dfrac{1}{(2\pi)^{\frac{n}{2}}}\displaystyle \int_{Q_h}T_{\frac{t}{\tau}}\left(\sqrt{1-\tau^2d_h(\xi)^2- \dfrac{\tau^4}{4} d_h(\xi)^4}\right) e^{-i x\cdot \xi}d\xi~+\\ \ \\
&+& \dfrac{1}{(2\pi)^{\frac{n}{2}}}\displaystyle \int_{Q_h}U_{\frac{t}{\tau}-1}\left(\sqrt{1-\tau^2d_h(\xi)^2- \dfrac{\tau^4}{4} d_h(\xi)^4}\right) \times \\
&\times & \left[\tau{\bf e}_0 {\bf z}_h(\xi)-\dfrac{\tau^2}{2}{\bf e}_{2n+1}{\bf e}_0d_h(\xi)^2\right] e^{-i x\cdot \xi}d\xi.
\end{array}
\end{eqnarray}
\section{Integral Representation formulae}\label{PlaneWaveSection}
\subsection{A Fourier representation formula for $K_\tau(x,t)$ on the space-time}\label{ChebyshevSection}
We start this subsection by deriving a quite unexpected hypersingular integral representation for the kernel function (\ref{ChebyshevKernel}), based on the Cauchy principal value representations (\ref{ChebyshevIntegrals}) for the Chebyshev polynomials of the first and second kind.
It turns out that this does not require any substantially new ideas: we merely reformulate the construction considered in \cite[subsection 5.1]{FaustinoRelativistic18}.
In the sequel we will make use of the factorization identities (cf.~\cite[subsection 2.1.]{FaustinoRelativistic18})
\begin{center}
${\bf z}_h(\xi)^2=d_h(\xi)^2$ \& ${\bf z}_h(\xi)^4=d_h(\xi)^4$
\end{center}
involving the Fourier multipliers (\ref{FourierMultipliers}) of $\mathcal{F}_h\circ (-\Delta_h)\circ\mathcal{F}_h^{-1}$ resp. $\mathcal{F}_h\circ D_h\circ \mathcal{F}_h^{-1}$.
First, we observe that, under the change of variable $s=\cos(\omega \tau)$ with $0\leq \omega\leq \dfrac{\pi}{\tau}$,
straightforward integral simplifications involving parity arguments show that the Cauchy principal value representations (\ref{ChebyshevIntegrals}) for the Chebyshev polynomials (\ref{ChebyshevPolynomials}) may be rewritten as (cf.~\cite[p.~173]{MasonChebyshev93})
\begin{eqnarray}
\label{ChebyshevIntegralsTrigonometric}
\begin{array}{lll}
T_{\frac{t}{\tau}}(\lambda)=&\displaystyle- \dfrac{\tau}{\pi} \int_{0}^{\frac{\pi}{\tau}} \frac{\sin(\omega \tau)}{\cos(\omega \tau)-\lambda} ~\sin(\omega t)d\omega =&\displaystyle \dfrac{\tau}{2\pi} \int_{-\frac{\pi}{\tau}}^{\frac{\pi}{\tau}} \frac{-i\sin(\omega \tau)}{\cos(\omega \tau)-\lambda}~e^{-i\omega t} d\omega \\ \ \\
U_{\frac{t}{\tau}-1}(\lambda)=&\displaystyle \dfrac{\tau}{\pi} \int_{0}^{\frac{\pi}{\tau}} \frac{1}{\cos(\omega \tau)-\lambda}~\cos(\omega t) d\omega =&\displaystyle \dfrac{\tau}{2\pi} \int_{-\frac{\pi}{\tau}}^{\frac{\pi}{\tau}} \frac{1}{\cos(\omega \tau)-\lambda} e^{-i\omega t}d\omega.
\end{array}
\end{eqnarray}
Hence, by inserting the integral identities (\ref{ChebyshevIntegralsTrigonometric}) into (\ref{ChebyshevKernel}), it follows through the substitution $\lambda=\sqrt{1-\tau^2{\bf z}_h(\xi)^2- \frac{\tau^4}{4} {\bf z}_h(\xi)^4}$ that
\begin{eqnarray*}
\label{KtauChebyshev}
{\bf K}_\tau(x,t)= \nonumber \\
= \frac{\tau}{(2\pi)^{\frac{n}{2}+1}}\int_{Q_h} \int_{-\frac{\pi}{\tau}}^{\frac{\pi}{\tau}}\frac{-i\sin(\omega \tau)+\tau{\bf e}_0 {\bf z}_h(\xi)-\frac{\tau^2}{2}{\bf e}_{2n+1}{\bf e}_0{\bf z}_h(\xi)^2}{\cos(\omega\tau)-\sqrt{1-\tau^2{\bf z}_h(\xi)^2- \frac{\tau^4}{4} {\bf z}_h(\xi)^4}}~e^{-i(\omega t+x
\cdot \xi)}d\omega d\xi.
\end{eqnarray*}
The above formula provides an integral representation of (\ref{ChebyshevKernel}) over $Q_h \times \left(-\frac{\pi}{\tau},\frac{\pi}{\tau}\right]$. This representation, which holds for values of $d_h(\xi)^2$ satisfying the condition $$\displaystyle d_h(\xi)^2 \leq \frac{2 }{\tau^2}(\sqrt{2}-1),$$ is nothing else than a \textit{space-time Fourier inversion type formula} encoded by the
solution of (\ref{DiracCK}) on the momentum space $Q_h \times \left(-\frac{\pi}{\tau},\frac{\pi}{\tau}\right]$.
\subsection{Towards a fractional integro-differential type representation}
We end this note by providing an alternative way to describe (\ref{ChebyshevKernel}) by means of a fractional integral representation of Bessel type (cf.~\cite[part 27 of Chapter 5]{SamkoEtAl93}) for the term
\begin{eqnarray}
\label{IntegrandChebyshev}
\frac{-i\sin(\omega \tau)+\tau{\bf e}_0 {\bf z}_h(\xi)-\frac{\tau^2}{2}{\bf e}_{2n+1}{\bf e}_0{\bf z}_h(\xi)^2}{\cos(\omega\tau)-\sqrt{1-\tau^2{\bf z}_h(\xi)^2- \frac{\tau^4}{4} {\bf z}_h(\xi)^4}}.
\end{eqnarray}
To do so, we make use of the Laplace transform identity (cf.~\cite[p.~21]{SamkoEtAl93})
\begin{eqnarray}
\label{LaplaceIdentityMittagLeffler}
\int_{0}^\infty e^{-p\lambda^2} p^{\beta-1}E_{\alpha,\beta}\left(s p^{\alpha}\right)dp= \dfrac{\lambda^{-2\beta}}{1-s \lambda^{-2\alpha}},~~\mbox{for}~~ \Re(\lambda^2)>|s|^{\frac{1}{\alpha}}~\&~\Re(\beta)>0
\end{eqnarray}
involving the \textit{generalized Mittag-Leffler function} $E_{\alpha,\beta}$ defined via equation (\ref{MittagLeffler}), to obtain a regularization for the term (\ref{IntegrandChebyshev}).
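For $\alpha=\beta=\frac{1}{2}$ the generalized Mittag-Leffler function admits the closed form $E_{\frac{1}{2},\frac{1}{2}}(z)=\frac{1}{\sqrt{\pi}}+z\,e^{z^2}\operatorname{erfc}(-z)$, which makes the Laplace identity easy to test numerically. The sketch below (Python, illustrative only) uses the decaying kernel $e^{-p\lambda^2}$, as the convergence condition $\Re(\lambda^2)>|s|^{1/\alpha}$ indicates, and the substitution $p=q^2$ to remove the $p^{-1/2}$ singularity:

```python
import math

def mittag_leffler_half_half(z):
    """E_{1/2,1/2}(z) = 1/sqrt(pi) + z * exp(z^2) * erfc(-z)."""
    return 1.0 / math.sqrt(math.pi) + z * math.exp(z * z) * math.erfc(-z)

def laplace_side(lam, s, Q=12.0, N=200000):
    """int_0^inf exp(-p lam^2) p^{-1/2} E_{1/2,1/2}(s sqrt(p)) dp,
    computed after the substitution p = q^2 (removing the p^{-1/2}
    singularity) by a plain trapezoid rule on [0, Q]."""
    dq = Q / N
    total = 0.0
    for k in range(N + 1):
        q = k * dq
        w = 0.5 if k in (0, N) else 1.0
        total += w * math.exp(-(lam * q) ** 2) * mittag_leffler_half_half(s * q)
    return 2.0 * total * dq

lam, s = 1.5, 0.5                       # Re(lam^2) = 2.25 > |s|^2 = 0.25
closed_form = lam ** -1 / (1.0 - s / lam)   # = 1/(lam - s)
assert abs(laplace_side(lam, s) - closed_form) < 1e-5
```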
We recall here that for the substitutions
\begin{eqnarray*}
s=\cos(\omega\tau),~~~~
\lambda=\sqrt{1-\tau^2{\bf z}_h(\xi)^2- \frac{\tau^4}{4} {\bf z}_h(\xi)^4} & \& & \alpha=\beta=\frac{1}{2}
\end{eqnarray*}
in (\ref{LaplaceIdentityMittagLeffler}), it readily follows that (\ref{IntegrandChebyshev}) admits the following integral representation
\begin{eqnarray*}
\frac{-i\sin(\omega \tau)+\tau{\bf e}_0 {\bf z}_h(\xi)-\frac{\tau^2}{2}{\bf e}_{2n+1}{\bf e}_0{\bf z}_h(\xi)^2}{\cos(\omega\tau)-\sqrt{1-\tau^2{\bf z}_h(\xi)^2- \frac{\tau^4}{4} {\bf z}_h(\xi)^4}}= \\ \ \\
=-\int_{0}^\infty \left(-i\sin(\omega \tau)+\tau{\bf e}_0 {\bf z}_h(\xi)-\frac{\tau^2}{2}{\bf e}_{2n+1}{\bf e}_0{\bf z}_h(\xi)^2\right) e^{p\left({\tau^2{\bf z}_h(\xi)^2+ \frac{\tau^4}{4} {\bf z}_h(\xi)^4}\right)}~\times \\ \ \\ \times \dfrac{E_{\frac{1}{2},\frac{1}{2}}\left(~\cos(\omega \tau) \sqrt{p}~\right)}{\sqrt{p}}~e^{-p}dp.
\end{eqnarray*}
Hence, after interchanging the order of integration in ${\bf K}_\tau(x,t)$, there holds the curious integral representation formula:
\begin{eqnarray*}
{\bf K}_\tau(x,t)=\int_{0}^{\infty} ~\left[ \frac{1}{(2\pi)^{\frac{n}{2}}} \int_{Q_h} e^{p\left({\tau^2{\bf z}_h(\xi)^2+ \frac{\tau^4}{4} {\bf z}_h(\xi)^4}\right)} {\mathcal{F}_h{\bf H}_\tau({\bf z}_h(\xi),t;p)}{}e^{-ix\cdot \xi}d\xi\right]~e^{-p}dp. \\
\end{eqnarray*}
Here, $\mathcal{F}_h{\bf H}_\tau({\bf z}_h(\xi),t;p)$ stands for the auxiliary function
\begin{eqnarray}
\label{AuxiliarFunction}
\begin{array}{lll}
\mathcal{F}_h{\bf H}_\tau({\bf z}_h(\xi),t;p)&=&\displaystyle -\frac{\tau}{{2\pi}} \int_{-\frac{\pi}{\tau}}^{\frac{\pi}{\tau}} \left(-i\sin(\omega \tau)+\tau{\bf e}_0 {\bf z}_h(\xi)-\frac{\tau^2}{2}{\bf e}_{2n+1}{\bf e}_0{\bf z}_h(\xi)^2\right)\times \\ \ \\ &\times& \dfrac{E_{\frac{1}{2},\frac{1}{2}}\left(~\cos(\omega \tau) \sqrt{p}~\right)}{\sqrt{p}} ~e^{-i\omega t} d\omega.
\end{array}
\end{eqnarray}
Furthermore, after some straightforward manipulations involving the inversion of the discrete Fourier transform $\mathcal{F}_h$ (cf.~\cite[p.~247]{GuerlebeckSproessig97}) we conclude that the \textit{discrete convolution formula} (\ref{discreteConvolutionformula}) is equivalent to the subordination formula
$$
\Psi(x,t)=
\int_{0}^{\infty} {e^{p\left(-\tau^2\Delta_h+ \frac{\tau^4}{4} \Delta_h^2\right)}}{\bf H}_\tau(D_h,t;p)\left[\Phi_0(x)\right]e^{-p}dp,
$$
involving the \textit{integro-difference operator} ${\bf H}_\tau(D_h,t;p)$ defined through equation (\ref{AuxiliarFunction}), and the semigroup $\left\{e^{p\left(-\tau^2\Delta_h+ \frac{\tau^4}{4} \Delta_h^2\right)}\right\}_{p\geq 0}$ associated with the Cauchy problem of differential-difference type
\begin{eqnarray}
\label{diffHeatType} \left\{\begin{array}{lll}
\partial_p \Psi (x,p)= -\tau^2\Delta_h\Psi (x,p)+ \frac{\tau^4}{4} \Delta_h^2 \Psi (x,p) &, (x,p
)\in
h{\mathbb Z}^n \times [0,\infty)
\\ \ \\
\Psi (x,0)=\Phi (x) &, x\in h
{\mathbb Z}^n
\end{array}\right.. \\ \nonumber
\end{eqnarray}
The description considered above looks tailor-made for operational purposes, since it combines some well-known facts about Chebyshev polynomials with a fine integral representation involving the \textit{generalized Mittag-Leffler function} (\ref{MittagLeffler}), leading to a quite unexpected link between the Cauchy problem (\ref{DiracCK}), underlying the discrete Cauchy-Kovalevskaya extension, and the Cauchy problem (\ref{diffHeatType}) of heat type.
Perhaps a similar exploitation of this construction may be carried out for Gegenbauer polynomials, or for more general families of ultraspherical polynomials by means of Jacobi expansions (cf.~\cite{Vieira17}). For now we leave this question open for the interested reader.
Last but not least,
after completion of this work we learnt from \cite{SteinW2000,MSteinW02,Pierce09} that the incorporation of tools from fractional integral calculus in the discrete setting is by now well established in the harmonic analysis community.
In the Clifford analysis community, however, this is surely a new research topic that deserves to be developed.
\section{Old Introduction}
The abstract interpretation~\cite{Cousot:1977:AIU:512950.512973}
community has developed a wide variety of techniques to support static
analysis of numeric program properties (``numeric static analysis'').
Such analyses can detect buffer overflows~\cite{wagner:ndss00}, analyze a program's resource
usage~\cite{Gulwani:2010:RP:1806596.1806630,Gulwani:2009:CRP:1542476.1542518},
detect side
channels~\cite{Brumley:2003:RTA:1251353.1251354,Bortz:2007:EPI:1242572.1242656},
and discover vectors for denial of service
attacks~\cite{hashtable-attack,hash-dos}.
We aim to apply fully automatic, numeric static analysis to real-world software written
in Java. The basic approach is clear: we need to combine some numeric
domain with a model of the heap, and we need to account for method
calls (i.e., make the analysis
interprocedural). Fu~\cite{fu2014modularly} and Ferrara et
al.~\cite{ferrara2014generic,ferrara2015automatic} show how to combine
a heap analysis (such as a points-to
analysis~\cite{Landi:1992:SAA:143095.143137} or shape
analysis~\cite{Sagiv:1999:PSA:292540.292552}) with a numeric domain
(such as intervals or polyhedra~\cite{Cousot:1977:AIU:512950.512973}).
But they stop short of exploring the needed ingredients for a scalable
tool. For example, neither considers method
calls: Fu ignores the effect of calls, making unsound
assumptions about heap-allocated objects, while Ferrara et al. focus on
inferring challenging invariants in small, non method-calling
programs. Clousot~\cite{logozzoclousot} checks numeric assertions on
heap-manipulating, interprocedural programs, but to do so must make
strong assumptions about (non)aliasing of method arguments, and
expects programmers to specify method pre- and post-conditions.
(Section~\ref{sec:related} considers related work in
detail.) Beyond the handling of method calls, there are a multitude of
other design choices in scaling abstract interpretation to large Java
programs. But these have not been evaluated systematically in the
literature, so we were left to wonder: how do various
choices trade precision and performance?
In this paper, we develop a family of combined numeric and heap
analyses for Java. We use multiple linear regression to determine how
design choices affect performance, precision, and the tradeoff between
the two, considering a total of \mwh{216?} variations.
As far as we are aware, our basic analysis is one of the few
combined numeric and heap analyses that is fully automated and scales
to large Java programs. We do not know of any prior work that has considered so many
analysis variations simultaneously.
Our analyses use a combination of numeric domains, a
points-to analysis for the heap, and heap-based strong updates. In more
detail, the analyses employ an ahead-of-time points-to
analysis~\cite{Smaragdakis:2015:PA:2802194.2802195} to name locations
in the heap. The analysis can then associate standard numeric domains
(such as intervals) with both local and heap-allocated objects. To
improve precision, we allow strong
updates~\cite{Chase:1990:APS:93542.93585} on heap-allocated data by
separately tracking numeric values named by \emph{access
paths}~\cite{De:2012:SFP:2367163.2367203,Wei:2014:SPA:2945642.2945644}. For example, if \code{a1} and \code{a2} may alias
according to the points-to analysis, then after a write \code{a2.f =
42} our analysis strongly updates the value for access path
\code{a2.f} to be \code{42}. Thus, if \code{a2.f} is subsequently
accessed before either \code{a1} or \code{a2} is written, the analysis
still knows the precise value of
\code{a2.f}. \mwh{new stuff: inlining, special handling of access paths}
(Section~\ref{sec:analysis} describes our basic combined
numeric and heap analysis in detail.)
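To make the interplay between may-alias facts and access-path strong updates concrete, here is a toy Python model (the names and structure are ours, purely hypothetical, and far simpler than the actual analysis):

```python
# Toy model of heap analysis with access-path strong updates
# (illustration only; not the paper's WALA-based implementation).

# Ahead-of-time points-to facts: variable -> set of abstract objects.
points_to = {"a1": {"objA"}, "a2": {"objA", "objB"}}  # a1, a2 may alias

class Analysis:
    def __init__(self):
        self.heap = {}    # (abstract object, field) -> interval (weak info)
        self.paths = {}   # access path "var.field" -> interval (strong info)

    def write(self, var, field, interval):
        # Weak update on every abstract object var may point to.
        for obj in points_to[var]:
            old = self.heap.get((obj, field))
            self.heap[(obj, field)] = interval if old is None else (
                min(old[0], interval[0]), max(old[1], interval[1]))
        # Invalidate access paths that may alias the written location ...
        for path in list(self.paths):
            v, f = path.split(".")
            if f == field and points_to[v] & points_to[var]:
                del self.paths[path]
        # ... then strongly update the path actually written.
        self.paths[f"{var}.{field}"] = interval

    def read(self, var, field):
        # Prefer the strong access-path fact; fall back to weak heap info.
        if f"{var}.{field}" in self.paths:
            return self.paths[f"{var}.{field}"]
        intervals = [self.heap[(o, field)] for o in points_to[var]
                     if (o, field) in self.heap]
        return (min(i[0] for i in intervals), max(i[1] for i in intervals))

a = Analysis()
a.write("a1", "f", (0, 10))
a.write("a2", "f", (42, 42))         # strong update: a2.f is exactly 42 ...
assert a.read("a2", "f") == (42, 42)
assert a.read("a1", "f") == (0, 42)  # ... while a1.f keeps only weak info
```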
\mwh{Lots of what follows will need to be updated.}
We construct a family of
interprocedural analyses by varying five
options. The first option is the analysis direction, which is either
\emph{top-down} (TD) or \emph{bottom-up} (BU), i.e., it either
proceeds from the \code{main} method forward through the program, or
it starts at leaf methods and works its way backward,
respectively. TD analysis is simple and precise, but it may analyze the
same method multiple times in different contexts. In contrast, a
BU analysis is expensive for each individual method, but that
method will only be analyzed once and its summary used at any call
sites, which may be a better tradeoff. As mentioned earlier, the most
related prior work ignores interprocedural
analysis entirely, so our overall (numeric+heap+methods) analysis is novel. The TD
analysis is straightforward, but the BU analysis requires more careful
handling of heap-allocated objects, constituting an interesting
contribution in its own right. \jeff{Describe idea of previous.}
(Section~\ref{app:interproc} describes the top-down and bottom-up
approaches.)
The other four options we vary are more standard. We vary the \emph{numeric
domain} (interval, polyhedral, or a combination of them via
``packing'' \cite{cousot2005astree}, considering both block-level
and method-level packs); \emph{context-sensitivity} of the points-to analysis
(context-insensitive, 1-CFA~\cite{shivers91}, or type-sensitivity
\cite{smaragdakis2011pick}); the \emph{object representation} (class-based,
allocation-site based, or allocation-site based except for strings);
and \emph{widening} (done after either one, three, or 10
iterations)\pxm{The widening variations ended up not mattering,
perhaps this could be removed from experiments?}.
(Section \ref{sec:custom} describes these analysis variants.) Our
implementation uses
WALA~\cite{wala} and
PPL~\cite{Bagnara:2008:PPL:1385689.1385711} (Section \ref{sec:impl}).
Finally, we apply all 216 analysis variants to five\pxm{all Java
programs from the ...?} Java programs
from the DaCapo benchmark suite~\cite{DaCapo:paper}. To measure
precision, we used the results of the analysis for two clients:
checking array indexes are in bounds and estimating values of
numerically-typed fields. The latter is particularly useful as a
comparison point for analyses that ignore the heap entirely; doing so
soundly would mean all fields could take any value.
We measured the analysis running time, and
the precision of each client in terms of the number of
array accesses provably in bounds and the count of unconstrained field reads.
We define the tradeoff of the two as the precision divided by the
performance, i.e., the number of checks discharged per minute of
analysis time. We then performed multiple linear regression to
determine how analysis options affected performance, precision, and
the tradeoff.
We observed several interesting results. First, as expected,
polyhedra are far more precise than intervals, but can be orders of
magnitude more expensive. Fortunately, using method-level packing
loses little in terms of precision and gains much in
performance. Second, BU analysis is generally faster, but requires
using a (partially) relational domain to be competitive with TD in
terms of precision. This makes sense: BU summaries of methods are most
useful when they can relate inputs to outputs and/or side effects on
the heap. Third, context-sensitive points-to information does not
benefit our analysis. This is because standard numeric abstract
interpretation is already flow- and context-sensitive,
i.e., it treats each call to a method distinctly from other calls.
The loss of context-sensitive aliasing information when analyzing a
called method seems not to harm precision. Finally, an allocation
site-based object representation but with special
handling of strings provided the best tradeoff generally, but not
as significantly in the BU case. Section~\ref{sec:eval}
provides details and additional observations.
\section{Old Evaluation}
\subsection{RQ1: Performance}
\label{acimpact}
\begin{table}[t!]
\small
\centering
\begin{tabular}{|c|c|r|r|r|}
\hline
{\bf Option} & {\bf Setting} & \parbox{0.3in}{\centering \bf Est. (min)} & \multicolumn{1}{c|}{\bf CI} & {\bf $p$-value} \\
\hline \hline
\multirow{2}{*}{\opt{AO}} & \opt{TD} & - & - & - \\
\hhline{~----}
& \cellcolor{green}\opt{BU} & \cellcolor{green}-25.4 & \cellcolor{green}[-39.0, -11.8] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{4}{*}{\opt{DP}} & \opt{INT} & - & - & - \\
\hhline{~----}
& \cellcolor{green}\opt{POL} & \cellcolor{green}127.1 & \cellcolor{green}[113.5, 140.7] & \cellcolor{green} $<$0.001 \\
\hhline{~----}
& \opt{HY1} & 0.1 & [-13.5, 13.7] & 0.987\\
\hhline{~----}
& \opt{HY2} & 13.5 & [-0.1, 27.1] & 0.051\\
\hline
\multirow{3}{*}{\opt{CS}} & \opt{CI} & - & - & - \\
\hhline{~----}
& \opt{1CFA} & 11.6 & [-0.19, 23.4] & 0.053 \\
\hhline{~----}
& \opt{1TYP} & 3.7 & [-8.1, 15.5] & 0.539 \\
\hline
\multirow{3}{*}{\opt{OR}} & \opt{CLAS} & - & - & - \\
\hhline{~----}
& \opt{SMUS} & -7.8 & [-23.4, 7.7] & 0.324 \\
\hhline{~----}
& \opt{ALLO} & -9.2 & [-24.7, 6.4] & 0.248 \\
\hline
\multirow{3}{*}{\opt{WI}} & \opt{W1} & - & - & - \\
\hhline{~----}
& \opt{W3} & 0.2 & [-5.7, 6.1] & 0.946 \\
\hhline{~----}
&\cellcolor{green}\opt{W10} & \cellcolor{green}15.3 & \cellcolor{green}[9.4, 21.2] & \cellcolor{green}$<$0.001 \\
\hline
\hline
\multirow{4}{*}{\opt{AO:DP}} & \opt{TD:INT} & - & - & - \\
\hhline{~----}
& \cellcolor{green}\opt{BU:POL} & \cellcolor{green}-73.8 & \cellcolor{green}[-87.3, -60.2] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \opt{BU:HY1} & -0.2 & [-13.5, 13.7] & 0.977 \\
\hhline{~----}
& \opt{BU:HY2} & -1.0 & [-14.6, 12.6] & 0.882 \\
\hline
\multirow{3}{*}{\opt{AO:CS}} & \opt{TD:CI} & - & - & - \\
\hhline{~----}
& \opt{BU:1CFA} & -0.3 & [-12.1, 11.5] & 0.960 \\
\hhline{~----}
& \cellcolor{green}\opt{BU:1TYP} & \cellcolor{green}32.1 & \cellcolor{green}[20.4, 43.9] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{3}{*}{\opt{AO:OR}} & \opt{TD:CLAS} & - & - & - \\
\hhline{~----}
& \cellcolor{green}\opt{BU:SMUS} & \cellcolor{green}23.9 & \cellcolor{green}[12.1, 35.7] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \cellcolor{green}\opt{BU:ALLO} & \cellcolor{green}82.0 & \cellcolor{green}[70.2, 93.7] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{7}{*}{\opt{DP:OR}} & \opt{INT:CLAS} & - & - & - \\
\hhline{~----}
& \opt{POL:SMUS} & -8.2 & [-24.9, 8.4] & 0.331 \\
\hhline{~----}
& \cellcolor{green}\opt{POL:ALLO} & \cellcolor{green}-18.5 & \cellcolor{green}[-35.2, -1.9] & \cellcolor{green}0.029 \\
\hhline{~----}
& \opt{HY1:SMUS} & 0.4 & [-16.3, 17.0] & 0.966 \\
\hhline{~----}
& \opt{HY1:ALLO} & 1.9 & [-14.7, 18.6] & 0.822 \\
\hhline{~----}
& \opt{HY2:SMUS} & 14.5 & [-2.1, 31.2] & 0.086 \\
\hhline{~----}
& \opt{HY2:ALLO} & 9.6 & [-7.0, 26.3] & 0.258 \\
\hline
\multirow{5}{*}{\opt{CS:OR}} & \opt{CI:CLAS} & - & - & - \\
\hhline{~----}
& \opt{1CFA:SMUS} & -6.2 & [-20.6, 8.2] & 0.398 \\
\hhline{~----}
& \opt{1TYP:SMUS} & 0.4 & [-14.0, 14.9] & 0.952 \\
\hhline{~----}
& \cellcolor{green}\opt{1CFA:ALLO} & \cellcolor{green}-24.9 & \cellcolor{green}[-39.4, -10.5] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \cellcolor{green}\opt{1TYP:ALLO} & \cellcolor{green}28.8 & \cellcolor{green}[14.4, 43.2] & \cellcolor{green}$<$0.001 \\
\hline
\end{tabular}
\caption{\textmd{Performance over all benchmark programs. Lower is better in Est. column. R$^2$ of 0.65.}}
\vspace{1pt}
\hrule
\vspace{-12pt}
\label{table:performance}
\end{table}
Table \ref{table:performance} presents a summary of a regression
that shows the effect of analysis configuration options on
performance. We measure performance as the time to run the core
analysis and then use the results to implement the checks for \afmt{Array}{}
and \afmt{Field}{}.
In the top part of the figure, the first column shows the
independent variables and the second column shows a setting. One of
the settings, identified by dashes in the remaining columns, is the
baseline in the regression. (The baseline configuration in
Table~\ref{table:benchmarks} is the combination of these baseline settings). For the other
settings, the third column shows the estimated effect of that setting
with all other settings (including the choice of program) held fixed.
For example,
the second row of the table shows that bottom-up analysis decreases
overall analysis time by 25.4 minutes compared to top-down
analysis, using the baseline settings of the other options.
\jeff{Is that on average across programs?}
The fourth column shows
the 95\% confidence interval around the
estimate, and the last column shows the $p$-value. As is standard, we
consider $p$-values less than 0.05 (5\%) significant; such rows are
highlighted in green in the table. \jeff{What happens if the analysis
didn't complete? We mention this in the next two sections but not here.}
The bottom part of the table is similar, but it shows the additional effects of
two-way combinations of options compared to the baseline effects of
each option. For example, the \opt{BU:POL} line shows that bottom-up,
polyhedral analysis is 27.9 minutes slower compared to the effect of
top-down, interval analysis (\opt{TD:INT}). We compute this effect by adding the
individual effects of \opt{BU} (-25.4) and \opt{POL} (127.1) as
well as the \opt{BU:POL} interaction (-73.8). Note that
not all interactions are shown (e.g., \opt{AO:WI} is not in the table). Any
interactions not included were found, during model generation, to be
unnecessary to explain differences from the baseline, i.e., they have
no meaningful effect.
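The composition rule can be made concrete with a small Python snippet that recomputes the combined estimates quoted in the text from the coefficients in Table~\ref{table:performance} (a sketch of the bookkeeping only, not of the regression itself):

```python
# Individual and interaction estimates (minutes) from the performance
# regression; baseline settings (TD, INT, CLAS, ...) contribute 0.
effects = {
    "BU": -25.4, "POL": 127.1,       # individual estimates
    "BU:POL": -73.8,                 # interaction estimate
    "ALLO": -9.2, "BU:ALLO": 82.0,
}

def combined(*settings):
    """Total estimated change vs. the baseline configuration, in minutes:
    the sum of the individual estimates plus any interaction terms."""
    total = sum(effects.get(s, 0.0) for s in settings)
    total += sum(effects.get(f"{a}:{b}", 0.0)
                 for a in settings for b in settings)
    return total

# BU:POL is 27.9 minutes slower than the TD:INT baseline:
assert round(combined("BU", "POL"), 1) == 27.9
# BU with allocation-site objects: -25.4 + 82.0 = 56.6 min slower than TD
assert round(effects["BU"] + effects["BU:ALLO"], 1) == 56.6
```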
Note that although we included the program as an independent variable
in the regression, we do not discuss the effect of particular programs
here. As mentioned above, including the program as a variable allows
us to factor out program-specific influence and thus focus on the general
effect of the analysis options. \jeff{This sounds wishy-washy. Can we
be more precise? Some readers will wonder why we don't normalize, so
we should explain exactly what we're measuring by including the
programs as a variable.}
We observe several interesting results from Table \ref{table:performance}.
\emph{Using polyhedra incurs significant slowdown though not as much
with bottom-up analysis.} Overall, polyhedral analysis (\opt{POL})
is two hours slower than using the interval domain, which is a large
effect with high significance. Indeed, 137 out of 205 analyses that
timed out used the polyhedral domain. However, when we combine bottom-up
analysis with polyhedra (\opt{BU:POL}), the slowdown relative to \opt{TD:INT}
is only 27.9 minutes (as calculated above). This trend matches our timeout data:
68\% (i.e., 93 out of 137) of the analyses that
timed out using polyhedra were also top-down.
\emph{Bottom-up analysis is faster than top-down analysis in general,
but is more sensitive to other options.} We see that bottom-up
analysis (\opt{BU}) has an overall benefit of $25.4$ minutes by itself,
compared to top-down analysis. However, when combined with
allocation-based object representation, bottom-up analysis is
$56.6$ minutes slower than a top-down
analysis (i.e., the individual effect of \opt{BU}, $-25.4$,
plus the \opt{BU:ALLO} interaction, $82.0$). We believe this is because
our bottom-up analysis introduces
``initial paths'' for each heap object access
(Section \ref{ana:bu}). These additional
abstract objects in the numeric domain slow down the
analysis relative to a top-down approach. This is especially
pronounced for string-manipulating functions---\opt{BU:SMUS} offers
considerably less slowdown. \opt{BU:1TYP} also incurs a significant
additional slowdown.
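To make the effect arithmetic above concrete, the following illustrative Python sketch (not part of the analysis tooling; the helper name is ours, and the effect values are copied from the discussion above) sums main effects with the applicable interaction terms:

```python
# Illustrative sketch: estimating a configuration's running time relative
# to the TD:INT baseline by summing regression main effects and any
# interaction terms among the chosen settings (values in minutes).

def relative_effect(effects, interactions, settings):
    """Sum main effects for the chosen settings plus the interactions
    whose members are all present among those settings."""
    total = sum(effects.get(s, 0.0) for s in settings)
    for pair, value in interactions.items():
        if all(s in settings for s in pair):
            total += value
    return total

effects = {"BU": -25.4, "POL": 127.1, "ALLO": 0.0}
interactions = {("BU", "POL"): -73.8, ("BU", "ALLO"): 82.0}

# BU alone is 25.4 minutes faster than top-down...
print(relative_effect(effects, interactions, ["BU"]))                    # -25.4
# ...but BU with allocation-based objects is 56.6 minutes slower.
print(round(relative_effect(effects, interactions, ["BU", "ALLO"]), 1))  # 56.6
```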
\emph{Context-sensitivity interacts with object-representation, and
more aggressive widening improves performance.} We see that
combining allocation-site object-representation with 1-CFA
(\opt{1CFA:ALLO}) and combining it with type-sensitive analysis
(\opt{1TYP:ALLO}) have approximately equal and opposite
effects. In other words, type-sensitive analysis is more efficient
with class-based object representation and 1-CFA analysis is more
efficient with allocation-based representation. This may be related to
the design of type-sensitive analysis, which uses the class names of
the receiver objects as calling contexts. We also see that widening
more aggressively
(after one or three iterations) is faster than widening after 10
iterations (\opt{W10}).
\subsection{RQ2: Precision}
\label{sec:precision}
\begin{table}[t]
\small
\centering
\begin{tabular}{|c|c|r|r|r|}
\hline
{\bf Option} & {\bf Setting} & \textbf{Est. (\#)} & \multicolumn{1}{c|}{\bf CI} & {\bf $p$-value} \\
\hline \hline
\multirow{2}{*}{\opt{AO}} & \opt{TD} & -& - & - \\
\hhline{~----}
& \cellcolor{green}\opt{BU} & \cellcolor{green}-38.8 & \cellcolor{green}[-44.1, -33.5] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{4}{*}{\opt{DP}} & \opt{INT} & - & - & - \\
\hhline{~----}
& \cellcolor{green}\opt{POL} & \cellcolor{green}30.0 & \cellcolor{green}[23.4, 36.6] & \cellcolor{green} $<$0.001 \\
\hhline{~----}
& \opt{HY1} & 4.3 & [-0.4, 9.1] & 0.071\\
\hhline{~----}
& \cellcolor{green}\opt{HY2} & \cellcolor{green}26.2 & \cellcolor{green}[21.3, 31.1] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{3}{*}{\opt{OR}} & \opt{CLAS} & - & - & - \\
\hhline{~----}
& \cellcolor{green}\opt{SMUS} & \cellcolor{green}7.2 & \cellcolor{green}[2.7, 11.7] & \cellcolor{green}0.002 \\
\hhline{~----}
& \cellcolor{green}\opt{ALLO} & \cellcolor{green}11.1 & \cellcolor{green}[6.6, 15.7] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{2}{*}{\opt{CL}} & \opt{FD} & -& - & - \\
\hhline{~----}
& \cellcolor{green}\opt{AR} & \cellcolor{green}-51.7 & \cellcolor{green}[-57.0, -46.3] & \cellcolor{green}$<$0.001 \\
\hline
\hline
\multirow{4}{*}{\opt{AO:DP}} & \opt{TD:INT} & - & - & - \\
\hhline{~----}
& \cellcolor{green}\opt{BU:POL} & \cellcolor{green}40.3 & \cellcolor{green}[33.2, 47.4] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \opt{BU:HY1} & -1.8 & [-7.4, 3.7] & 0.518 \\
\hhline{~----}
& \cellcolor{green}\opt{BU:HY2} & \cellcolor{green}41.2 & \cellcolor{green}[35.5, 47.0] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{3}{*}{\opt{AO:OR}} & \opt{TD:CLAS} & - & - & - \\
\hhline{~----}
& \cellcolor{green}\opt{BU:SMUS} & \cellcolor{green}-12.3 & \cellcolor{green}[-17.4, -7.2] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \cellcolor{green}\opt{BU:ALLO} & \cellcolor{green}-13.1 & \cellcolor{green}[-18.5, -7.8] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{2}{*}{\opt{AO:CL}} & \opt{TD:FD} & -& - & - \\
\hhline{~----}
& \cellcolor{green}\opt{BU:AR} & \cellcolor{green}27.5 & \cellcolor{green}[23.2, 31.8] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{4}{*}{\opt{DP:CL}} & \opt{INT:FD} & - & - & - \\
\hhline{~----}
& \cellcolor{green}\opt{POL:AR} & \cellcolor{green}-26.3 & \cellcolor{green}[-33.0, -19.5] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \opt{HY1:AR} & -3.1 & [-8.7, 2.4] & 0.267 \\
\hhline{~----}
& \cellcolor{green}\opt{HY2:AR} & \cellcolor{green}-25.6 & \cellcolor{green}[-31.3, -19.9] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{3}{*}{\opt{OR:CL}} & \opt{CLAS:FD} & - & - & - \\
\hhline{~----}
& \cellcolor{green}\opt{SMUS:AR} & \cellcolor{green}144.3 & \cellcolor{green}[139.2, 149.3] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \cellcolor{green}\opt{ALLO:AR} & \cellcolor{green}147.7 & \cellcolor{green}[142.4, 153.0] & \cellcolor{green}$<$0.001 \\
\hline
\end{tabular}
\caption{\textmd{Precision for \afmt{Array}{} (\opt{AR}) and \afmt{Field}{} (\opt{FD}). Positive values
in Est. column indicate better precision. R$^2$ of 0.96.}}
\hrule
\vspace{-12pt}
\label{tab:precision}
\end{table}
Table~\ref{tab:precision} summarizes a regression showing how option
settings affect precision. As before, we include the benchmark program as an
independent variable, but we omit its effects from the table.
We also add the analysis client---option \opt{CL} with
settings \opt{FD} for \afmt{Field}{} and \opt{AR} for \afmt{Array}{}---as an
independent variable so we can analyze differences
between the analyses.
The dependent variable is the number of checks
discharged by the analysis. Thus, in absolute terms, the analyses
discharge 51.7 fewer checks for \afmt{Array}{} than for \afmt{Field}{} (line
\opt{AR} in the table). Statistically significant settings and
interactions are highlighted in green. If an analysis did not complete under a given
configuration, we omit that configuration from the
regression.
We see several interesting results
from Table \ref{tab:precision}.
\emph{More relational domains improve precision, especially for \afmt{Field}{}.}
Using polyhedra (\opt{POL}) or method-level packing (\opt{HY2})
improves precision by about 30 checks. However, the \opt{POL:AR}
and \opt{HY2:AR} interactions also
show that the more precise numeric domains are less important to the
precision of \afmt{Array}{}, likely because the precision of
\afmt{Array}{} is dominated by object representation, as discussed below.
\emph{The bottom-up analysis harms precision overall, but is boosted
by relational domains.} In general, bottom-up analysis (\opt{BU})
has a negative effect on precision (38.8 fewer checks)
compared to top-down. However, when combined with
\opt{POL} or \opt{HY2}, precision of \opt{BU} matches that of
\opt{TD}. For example, \opt{POL} adds 30.0 checks; \opt{BU}
subtracts 38.8, but the \opt{BU:POL} interaction adds back 40.3.
Interaction \opt{BU:HY2} compensates similarly. This
makes sense because \opt{BU} method summaries work best when they
relate a method's inputs to its effects and outputs, and such
relations cannot be captured by \opt{INT}.
\emph{Widening and context-sensitivity are not significant.}
Widening and context-sensitivity do not appear in the model, meaning
they do not affect precision significantly.
This makes sense because both the \opt{TD} and \opt{BU} analyses are
already somewhat context-sensitive when run with the \opt{CI} option.
For \opt{TD}, each call is interpreted in the context of the
current abstract state, and for \opt{BU}, the callee's summary is
instantiated in that state; only aliasing constraints are
context-insensitive.
For widening, the likely explanation is that the benchmarks lack loops
that run for a small, fixed number of iterations (fewer than 3, or 10),
which are the cases in which delayed widening avoids a precision loss.
\emph{More precise object representation improves precision,
especially for \opt{TD} \afmt{Array}{}.}
Strong \opt{SMUS:AR}
and \opt{ALLO:AR} interactions show that object representation is the
most dominant factor in the precision of \afmt{Array}{} (the added $144+$
checks easily overcome the 51.7 fewer checks discharged for \opt{AR}
relative to \opt{FD}).
With top-down they do even better (\opt{SMUS} and \opt{ALLO} add 7.2
and 11.1, resp.), while for \opt{BU} they do slightly worse. Array
checking using \opt{CLAS} does worse because it conflates all arrays
of the same type into one abstract object, thus imprecisely
approximating those arrays' lengths and causing some checks to spuriously
fail.
\subsection{RQ3: Tradeoffs}
\begin{table*}[t]
\small
\centering
\begin{tabular}{l l}
\begin{tabular}{|c|c|r|r|r|}
\hline
{\bf Option} & {\bf Setting} & \textbf{Est. (\#)} & \multicolumn{1}{c|}{\bf CI} & {\bf $p$-value} \\
\hline
\multirow{2}{*}{\opt{AO}} & \opt{TD} & -& - & - \\
\hhline{~----}
& \cellcolor{green}\opt{BU} & \cellcolor{green}28.3583 & \cellcolor{green}[23.3, 47.2] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{4}{*}{\opt{DP}} & \opt{INT} & - & - & - \\
\hhline{~----}
& \opt{HY1} & -2.1396 & [-9.7,5.4] & 0.579 \\
\hhline{~----}
& \cellcolor{green}\opt{HY2} & \cellcolor{green}-24.3592 & \cellcolor{green}[-31.9,-16.7] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \cellcolor{green}\opt{POL} & \cellcolor{green}-48.6091 & \cellcolor{green}[-56.1,-41.0] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{3}{*}{\opt{CS}} & \opt{CI} & - & - & - \\
\hhline{~----}
& \cellcolor{green}\opt{1CFA} & \cellcolor{green}-17.6066 & \cellcolor{green}[-24.5,-10.6] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \cellcolor{green}\opt{1TYP} & \cellcolor{green}-32.6407 & \cellcolor{green}[-39.5,-25.7] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{3}{*}{\opt{OR}} & \opt{CLAS} & - & - & - \\
\hhline{~----}
& \opt{ALLO} & 0.4072 & [-6.5,7.3] & 0.908 \\
\hhline{~----}
& \cellcolor{green}\opt{SMUS} & \cellcolor{green}12.9499 & \cellcolor{green}[6.0,19.8] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{3}{*}{\opt{WI}} & \opt{W1} & - & - & - \\
\hhline{~----}
& \opt{W3} & -2.9252 & [-9.4,3.6] & 0.382 \\
\hhline{~----}
& \cellcolor{green}\opt{W10} & \cellcolor{green}-33.8812 & \cellcolor{green}[-40.4,-27.3] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{2}{*}{\opt{CL}} & \opt{FD} & - & - & - \\
\hhline{~----}
& \cellcolor{green}\opt{AR} & \cellcolor{green}7.6265 & \cellcolor{green}[1.9,13.2] & \cellcolor{green}0.008 \\
\hline
\multirow{4}{*}{\opt{AO:DP}} & \opt{TD:INT} & - & - & - \\
\hhline{~----}
& \cellcolor{green}\opt{BU:HY1} & \cellcolor{green}5.4758 & \cellcolor{green}[0.4,10.5] & \cellcolor{green}0.033 \\
\hhline{~----}
& \cellcolor{green}\opt{BU:HY2} & \cellcolor{green}10.5236 & \cellcolor{green}[5.4,15.5] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \cellcolor{green}\opt{BU:POL} & \cellcolor{green}17.7632 & \cellcolor{green}[12.7,22.8] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{7}{*}{\opt{DP:CS}} & \opt{INT:CI} & - & - & - \\
\hhline{~----}
& \opt{HY1:1CFA} & 0.5547 & [-5.6, 6.7] & 0.860 \\
\hhline{~----}
& \cellcolor{green}\opt{HY2:1CFA} & \cellcolor{green}7.9726 & \cellcolor{green}[1.7, 14.1] & \cellcolor{green}0.011 \\
\hhline{~----}
& \opt{POL:1CFA} & 4.5400 & [-1.6, 10.7] & 0.150 \\
\hhline{~----}
& \opt{HY1:1TYP} & 2.3923 & [-3.7, 8.5] & 0.448 \\
\hhline{~----}
& \cellcolor{green}\opt{HY2:1TYP} & \cellcolor{green}14.7121 & \cellcolor{green}[8.5, 20.8] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \cellcolor{green}\opt{POL:1TYP} & \cellcolor{green}22.3021 & \cellcolor{green}[16.1, 28.4] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{7}{*}{\opt{DP:OR}} & \opt{INT:CLAS} & - & - & - \\
\hhline{~----}
& \cellcolor{green}\opt{HY1:ALLO} & \cellcolor{green}-7.593 & \cellcolor{green}[-13.7, -1.4] & \cellcolor{green}0.016 \\
\hhline{~----}
& \opt{HY2:ALLO} & -5.2126 & [-11.3, 0.9] & 0.098 \\
\hhline{~----}
& \opt{POL:ALLO} & -0.7067 & [-6.8, 5.4] & 0.822 \\
\hhline{~----}
& \opt{HY1:SMUS} & -4.4587 & [-10.6, 1.7] & 0.157 \\
\hhline{~----}
& \cellcolor{green}\opt{HY2:SMUS} & \cellcolor{green}-11.7440 & \cellcolor{green}[-17.9, -5.5] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \cellcolor{green}\opt{POL:SMUS} & \cellcolor{green}-7.4024 & \cellcolor{green}[-13.5, -1.2] & \cellcolor{green}0.019 \\
\hline
\multirow{3}{*}{\opt{AO:CS}} & \opt{TD:CS} & - & - & - \\
\hhline{~----}
& \cellcolor{green}\opt{BU:1CFA} & \cellcolor{green}-21.2563 & \cellcolor{green}[-25.6,-16.8] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \cellcolor{green}\opt{BU:1TYP} & \cellcolor{green}-25.5611 & \cellcolor{green}[-29.9,-21.1] & \cellcolor{green}$<$0.001 \\
\hline
\end{tabular}
\begin{tabular}{|c|c|r|r|r|}
\hline
{\bf Option} & {\bf Setting} & \textbf{Est. (\#)} & \multicolumn{1}{c|}{\bf CI} & {\bf $p$-value} \\
\hline
\multirow{3}{*}{\opt{AO:OR}} & \opt{TD:CLAS} & - & - & - \\
\hhline{~----}
& \cellcolor{green}\opt{BU:ALLO} & \cellcolor{green}-52.5418 & \cellcolor{green}[-56.9,-48.1] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \opt{BU:SMUS} & -3.2799 & [-7.6,1.0] & 0.141 \\
\hline
\multirow{7}{*}{\opt{DP:WI}} & \opt{INT:W1} & - & - & - \\
\hhline{~----}
& \opt{HY1:W3} & -0.2827 & [-6.5, 5.9] & 0.928 \\
\hhline{~----}
& \opt{HY2:W3} & 0.4882 & [-5.6, 6.6] & 0.877 \\
\hhline{~----}
& \opt{POL:W3} & 1.8668 & [-4.3, 8.0] & 0.554 \\
\hhline{~----}
& \opt{HY1:W10} & 1.9846 & [-4.2, 16.6] & 0.529 \\
\hhline{~----}
& \cellcolor{green}\opt{HY2:W10} & \cellcolor{green}10.4442 & \cellcolor{green}[4.2, 16.6] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \cellcolor{green}\opt{POL:W10} & \cellcolor{green}17.6876 & \cellcolor{green}[11.5, 23.8] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{4}{*}{\opt{DP:CL}} & \opt{INT:FD} & - & - & - \\
\hhline{~----}
& \opt{HY1:ARRAY} & -2.1297 & [-7.1,2.9] & 0.408 \\
\hhline{~----}
& \cellcolor{green}\opt{HY2:ARRAY} & \cellcolor{green}-12.3116 & \cellcolor{green}[-17.3,-7.2] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \cellcolor{green}\opt{POL:ARRAY} & \cellcolor{green}-13.6298 & \cellcolor{green}[-18.6,-8.5] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{5}{*}{\opt{CS:OR}} & \opt{CI:CLAS} & - & - & - \\
\hhline{~----}
& \cellcolor{green}\opt{1CFA:ALLO} & \cellcolor{green}27.5132 & \cellcolor{green}[22.1,32.8] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \cellcolor{green}\opt{1TYP:ALLO} & \cellcolor{green}17.1402 & \cellcolor{green}[11.7,22.4] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \opt{1CFA:SMUS} & -5.2482 & [-10.6,0.1] & 0.054 \\
\hhline{~----}
& \cellcolor{green}\opt{1TYP:SMUS} & \cellcolor{green}-11.0730 & \cellcolor{green}[-16.4,-5.7] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{5}{*}{\opt{CS:WI}} & \opt{CI:W1} & - & - & - \\
\hhline{~----}
& \opt{1CFA:W3} & 0.9160 & [-4.4,6.2] & 0.737 \\
\hhline{~----}
& \opt{1TYP:W3} & 2.2364 & [-3.1,7.5] & 0.413 \\
\hhline{~----}
& \cellcolor{green}\opt{1CFA:W10} & \cellcolor{green}9.6493 & \cellcolor{green}[4.2,15.0] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \cellcolor{green}\opt{1TYP:W10} & \cellcolor{green}17.3549 & \cellcolor{green}[11.9,22.7] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{3}{*}{\opt{CS:CL}} & \opt{CI:FD} & - & - & - \\
\hhline{~----}
& \opt{1CFA:ARRAY} & -2.2854 & [-6.6,2.0] & 0.305 \\
\hhline{~----}
& \cellcolor{green}\opt{1TYP:ARRAY} & \cellcolor{green}-8.1502 & \cellcolor{green}[-12.5,-3.7] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{5}{*}{\opt{OR:WI}} & \opt{CLAS:W1} & - & - & - \\
\hhline{~----}
& \opt{ALLO:W3} & 1.0841 & [-4.2,6.4] & 0.691 \\
\hhline{~----}
& \opt{SMUS:W3} & -0.3002 & [-5.6,5.0] & 0.912 \\
\hhline{~----}
& \opt{ALLO:W10} & 4.3739 & [-0.9,9.7] & 0.109 \\
\hhline{~----}
& \cellcolor{green}\opt{SMUS:W10} & \cellcolor{green}-5.4235 & \cellcolor{green}[-10.7,-0.1] & \cellcolor{green}0.047 \\
\hline
\multirow{3}{*}{\opt{OR:CL}} & \opt{CLAS:FD} & - & - & - \\
\hhline{~----}
& \cellcolor{green}\opt{ALLO:ARRAY} & \cellcolor{green}16.2518 & \cellcolor{green}[11.8,20.6] & \cellcolor{green}$<$0.001 \\
\hhline{~----}
& \cellcolor{green}\opt{SMUS:ARRAY} & \cellcolor{green}25.2598 & \cellcolor{green}[20.8,29.6] & \cellcolor{green}$<$0.001 \\
\hline
\multirow{3}{*}{\opt{WI:CL}} & \opt{W1:FD} & - & - & - \\
\hhline{~----}
& \opt{W3:ARRAY} & -0.3795 & [-4.7,3.9] & 0.864 \\
\hhline{~----}
& \cellcolor{green}\opt{W10:ARRAY} & \cellcolor{green}-5.2744 & \cellcolor{green}[-9.6,-0.9] & \cellcolor{green}0.018 \\
\hline
\end{tabular}
\end{tabular}
\caption{\textmd{Tradeoff for \afmt{Array}{} and \afmt{Field}{}. Positive values
in Est. column indicate a better tradeoff. R$^2$ of 0.75.}}
\hrule
\vspace{-12pt}
\label{tab:tradeoff}
\end{table*}
Next, we examine how analysis settings affect the tradeoff between
precision and performance, i.e., whether it is worthwhile to run a
slower analysis given that it may discharge more checks. We measure
this tradeoff using \emph{checks-per-minute (cpm)}
as a metric. We compute cpm by dividing the number of checks discharged by the overall analysis time. For example, if an analysis discharged 20 checks and took
5 minutes, it would have a tradeoff of 4 cpm. Of
course, this metric is only a rough approximation of the tradeoff an
analysis user might consider, but we believe it still gives us a useful,
quantitative way to compare analysis configurations. Also note that if
a configuration could not finish its analysis of a particular program
within the time budget, its cpm is 0 for that program.
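As a minimal, purely illustrative sketch (the function name is our own), the metric and its timeout rule can be written as:

```python
# Checks-per-minute (cpm): checks discharged divided by analysis time,
# with a cpm of 0 for configurations that exceed the time budget.

def cpm(checks_discharged, minutes, timed_out=False):
    if timed_out or minutes <= 0:
        return 0.0
    return checks_discharged / minutes

print(cpm(20, 5))                    # 4.0 (the example from the text)
print(cpm(50, 30, timed_out=True))   # 0.0 (analysis hit the time budget)
```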
Table~\ref{tab:tradeoff} summarizes a model of how
option settings affect a configuration's cpm. As in
Section~\ref{sec:precision}, we include the program and the
analysis options as independent variables, and we omit the program from the table.
Overall we can see that the number of interactions in
this model is significantly larger than in the previous regressions,
suggesting that the relationship between analysis configuration and
cpm is quite complex. Nonetheless, we can observe several results which
generally confirm the tradeoffs uncovered in the prior discussion.
\emph{Polyhedral analysis has a much worse overall tradeoff than other
numeric domains, particularly for \opt{TD}.} The largest cpm
decrease relative to the interval domain
for any individual setting is 48.6, from the polyhedral domain
(\opt{POL}). This result is expected given the significant performance
issues of \opt{POL} reported in Table
\ref{table:performance}. \opt{HY1} is similar to \opt{INT} in terms of
the tradeoff. However, the results of \opt{HY2} are more subtle. Although
\opt{HY2} still decreases cpm by 24.4 relative to the interval domain,
it is statistically better than \opt{POL} (notice the confidence
intervals do not overlap).
Moreover, \opt{HY2}'s interactions with \opt{BU},
\opt{1CFA}, \opt{1TYP}, and \opt{W10} all show improvements to cpm.
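The non-overlap test applied to the confidence intervals above can be made explicit; a small illustrative sketch (interval endpoints copied from Table~\ref{tab:tradeoff}):

```python
# Two closed intervals overlap iff each one starts before the other ends.
def intervals_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

hy2_ci = (-31.9, -16.7)  # 95% CI for HY2's cpm effect
pol_ci = (-56.1, -41.0)  # 95% CI for POL's cpm effect

# Disjoint intervals: HY2's tradeoff is statistically better than POL's.
print(intervals_overlap(hy2_ci, pol_ci))  # False
```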
\emph{Bottom-up analysis generally has higher cpm than top-down
variants.} Bottom-up analysis (\opt{BU}) adds 28 cpm over the
baseline, and loses little when moving to relational domains:
\opt{HY2} subtracts 24.4 but \opt{BU:HY2} adds 10.5, while \opt{POL}
subtracts 48.6 but \opt{BU:POL} adds 17.8. The \opt{BU:HY1} combination does
better still. These trends suggest that the efficiency of bottom-up
analysis outweighs its precision loss in terms of cpm. However, the
\opt{BU:ALLO} interaction shows bottom-up, allocation-based analysis
produces 24.1 fewer cpm (i.e., the individual effect of \opt{BU},
28.4, plus the interaction, -52.5) than top-down, allocation-site
based analysis. This result is also reflected in Table
\ref{table:performance}, because the performance of bottom-up,
allocation-based analysis is poor.
\emph{\opt{SMUS} has the best tradeoff among object representation
settings.} \opt{SMUS}, whose precision is similar to \opt{ALLO} in
Table~\ref{tab:precision}, and whose performance is close to
\opt{CLAS} in Table~\ref{table:performance}, outperforms both
\opt{CLAS} and \opt{ALLO} by about 13 cpm.
\emph{Other, more precise settings benefit from 1-CFA and
type-sensitive analysis.} In general, context sensitivity has
a negative impact on cpm compared to context-insensitive analysis,
especially for type-sensitive analysis. The decrease in cpm is 32.6
for type-based analysis (\opt{1TYP}) and 17.6 for
\opt{1CFA}. However, we note that context-sensitive analysis
benefits from interactions with various other, more precise
settings, including allocation-based representation
(\opt{\{1TYP,1CFA\}:ALLO}) and delayed widening
(\opt{\{1TYP,1CFA\}:W10}).
\begin{table*}[t!]
\small
\centering
\begin{tabular}{|c|r|r||r|r|r|r|r|r|r|r|r|r|r|r|}
\hline
& \# & \# & \multicolumn{3}{c|}{\bf Best Performance} & \multicolumn{3}{c|}{\bf Best Precision} & \multicolumn{3}{c|}{\bf Best Precision (all term.)} & \multicolumn{3}{c|}{\bf Best Tradeoff}\\
\textbf{Name} & {\footnotesize \afmt{Array}{}} & {\footnotesize \afmt{Field}{}} & \multicolumn{3}{c|}{{\footnotesize\opt{BU-INT-CI-CLAS-W1}}} & \multicolumn{3}{c|}{{\footnotesize (per benchmark)}} & \multicolumn{3}{c|}{{\footnotesize\opt{TD-HY2-1CFA-ALLO-W3}}} & \multicolumn{3}{c|}{{\footnotesize\opt{BU-HY1-CI-SMUS-W3}}}\\
\hline \hline
& & & Time & \afmt{Array}{} & \afmt{Field}{} & Time & \afmt{Array}{} & \afmt{Field}{} & Time & \afmt{Array}{} & \afmt{Field}{} & Time & \afmt{Array}{} & \afmt{Field}{}\\
\hline
eclipse & 367 & 280 & 1.14 & 35\% & 43\% & 14.64$^1$ & 82\% & 70\% & 3.13 & 78\% & 64\% & 1.04 & 71\% & 43\% \\
\hline
fop & 552 & 537 & 3.70 & 34\% & 52\% & 15.93$^2$ & 80\% & 75\% & 15.93 & 80\% & 75\% & 8.14 & 64\% & 53\% \\
\hline
hsqldb & 364 & 301 & 4.24 & 39\% & 42\% & 118.88$^3$ & 79\% & 70\% & 118.88 & 79\% & 70\% & 5.72 & 75\% & 42\% \\
\hline
lusearch & 372 & 302 & 0.94 & 35\% & 44\% & 17.04$^4$ & 81\% & 72\% & 3.27 & 77\% & 63\% & 1.18 & 74\% & 44\% \\
\hline
xalan & 371 & 299 & 0.85 & 39\% & 44\% & 13.92$^5$ & 80\% & 70\% & 3.61 & 78\% & 64\% & 1.12 & 74\% & 44\% \\
\hline
\end{tabular}
{\scriptsize $~^1$~\opt{TD-POL-1CFA-ALLO-W10} \hspace{.1cm} $~^2$~\opt{TD-HY2-1CFA-ALLO-W3} \hspace{.1cm} $~^3$~\opt{TD-HY2-1CFA-ALLO-W3} \hspace{.1cm} $~^4$~\opt{BU-POL-1CFA-ALLO-W10} \hspace{.1cm} $~^5$~\opt{TD-POL-1CFA-ALLO-W10}}
\caption{Performance and precision for a range of analysis configurations.}
\vspace{-12pt}
\label{tab:rq4}
\end{table*}
\emph{\opt{W10} negatively affects the tradeoff.} Overall, \opt{W10}
decreases cpm by 33.9 relative to \opt{W1}, the second largest decrease for any
individual option setting after \opt{POL}. Because \opt{W10} is slow
in Table~\ref{table:performance} and insignificant for precision in
Table~\ref{tab:precision}, this result is expected.
\emph{\afmt{Array}{} and \afmt{Field}{} have different tradeoffs.} There are
several significant interactions involving the analysis client,
notably \opt{ALLO:ARRAY} and \opt{SMUS:ARRAY}. Similarly to the
discussion in Section \ref{sec:precision}, this indicates that the impact
of analysis settings differs between the two analysis clients.
\subsection{RQ4: Overall quality}
\label{pimpact}
Finally, we examine how well ``best'' analysis configurations
work for \afmt{Array}{} and \afmt{Field}{}, in absolute terms. We chose
configurations by examining the regressions in
Tables~\ref{table:performance}, \ref{tab:precision},
and~\ref{tab:tradeoff} and picking the best option settings
for each (details below).
Table~\ref{tab:rq4} shows the results. The first three columns list
each benchmark and its total number of array accesses and numerically
typed field reads. Then, for each best analysis (by each measure), the
remaining column groups show that analysis's running time and the
percentage of array accesses and numerically typed field reads whose
checks were discharged.
We chose the best analyses as follows. For performance, we examined
Table~\ref{table:performance} and found \opt{BU} is the statistically
best analysis order. Any choice of domain except \opt{POL} is
equivalent; we chose \opt{INT} because it is the simplest. Similarly,
we chose \opt{CI} and \opt{CLAS} because they are the simplest of a
statistically equivalent set of choices. And we chose \opt{W1}
because it is statistically better than \opt{W10} and simpler.
For Table~\ref{tab:precision}, recall the regression omitted analysis
configurations that timed out---hence the best configuration according
to that regression might not, and in fact does not, complete on all
the benchmarks. Thus, we show two results for precision. The first
group of precision results is based on the most precise
configuration for each benchmark individually. The next group picks
one overall configuration for which all benchmarks complete, and which
is close to the best configuration in Table~\ref{tab:precision}. For
example, for domain, instead of \opt{POL}, which times out on some
programs, we use \opt{HY2}, which is a close second in the table. Note
that the precision table did not specify a setting for context-sensitivity
or widening (they were not needed to fit the data); we
chose \opt{1CFA} and \opt{W3} because those are the most precise
variants for which all programs complete.
Finally, we picked the best tradeoff from Table~\ref{tab:tradeoff}. In
that table, \opt{BU}, \opt{CI}, and \opt{SMUS} are statistically the
best. We chose \opt{HY1} as it does best when combined with \opt{BU}
and \opt{W3} is faster with the same precision as \opt{W10}.
Looking over the results, we make several observations. First, the
best performing configuration is far faster than the most precise
configurations. However, the running time is not much better than the
best tradeoff configuration. Moreover, for \afmt{Array}{}, the best
tradeoff configuration is almost as precise as the best precision
configuration, showing that for \afmt{Array}{} it is possible to achieve
both good precision and good performance. The results are less encouraging for
\afmt{Field}{}, where precision for the best tradeoff drops considerably
compared to the most precise analyses. This is because \opt{HY1} is
far less precise than \opt{HY2} or \opt{POL}. Indeed, if we used
\opt{HY2} instead, we would have far better results, but \opt{fop}'s
analysis no longer terminates.\footnote{In particular: \opt{eclipse}
2.18m, 75\%, 60\%; \opt{hsqldb} 8.99m, 76\%, 63\%; \opt{lusearch}
1.88m, 75\%, 65\%; \opt{xalan} 2.04m, 75\%, 62\%.}
Second, we see that the best terminating configuration for each
program individually always utilizes \opt{ALLO}, \opt{1CFA}, and
\opt{HY2} (or \opt{POL}); in all cases but \opt{lusearch} it uses
\opt{TD} because \opt{BU} is more expensive and may not terminate.
Third, we see that the best precision configuration that terminates on
all programs has precision close to the individual best configurations
(but more so for \afmt{Array}{} than for \afmt{Field}{}). This suggests that if a user is
most interested in high precision, it may not be necessary to
customize the analysis for a particular subject program.
Finally, bottom-up analysis provides a good tradeoff for \afmt{Array}{},
and context-insensitive points-to analysis is sufficient. As mentioned
earlier, we believe this is due to the nature of the client: array
bounds checking likely performs a lot of local reasoning, and the
analysis provides a sufficient amount of ``pseudo'' context
sensitivity, as discussed in Section~\ref{sec:precision}.
\section{Introduction}
Static analysis of numeric program properties has a broad range of
useful applications. Such analyses can potentially detect array bounds
errors~\cite{wagner:ndss00}, analyze a program's resource
usage~\cite{Gulwani:2010:RP:1806596.1806630,Gulwani:2009:CRP:1542476.1542518}, detect side
channels~\cite{Brumley:2003:RTA:1251353.1251354,Bortz:2007:EPI:1242572.1242656}, and discover vectors for denial of
service attacks~\cite{hashtable-attack,hash-dos}.
One of the major approaches to numeric static analysis is abstract
interpretation~\cite{Cousot:1977:AIU:512950.512973}, in which program
statements are evaluated over an abstract domain until a fixed point is
reached. Indeed, the first paper on abstract
interpretation~\cite{Cousot:1977:AIU:512950.512973} used numeric
intervals as one example abstract domain, and many subsequent
researchers have explored abstract interpretation-based numeric static
analysis \cite{fu2014modularly,ferrara2014generic,ferrara2015automatic,logozzoclousot,Calcagno:2011:CSA:2049697.2049700,henry2012pagai}.
Despite this long history, applying abstract interpretation to
real-world Java programs remains a challenge. Such programs are large,
have many interacting methods, and make heavy use of heap-allocated
objects. In considering how to build an analysis that aims to be sound but
also precise, prior work has explored some of these challenges, but not
all of them together. For example, several works have considered the
impact of the choice of numeric domain (e.g., intervals vs. convex
polyhedra) in trading off precision for performance but not considered
other tradeoffs~\cite{ferrara2015automatic,mardziel13belieflong}. Other works have considered how to
integrate a numeric domain with analysis of the heap, but unsoundly
model method calls~\cite{fu2014modularly} and/or focus on very precise
properties that do not scale beyond small
programs~\cite{ferrara2014generic,ferrara2015automatic}. Some
scalability can be recovered by using programmer-specified pre- and
post-conditions~\cite{logozzoclousot}. In all of these cases, there is
a lack of consideration of the broader design space in which many
implementation choices interact. (Section~\ref{sec:related} considers
prior work in detail.)
In this paper, we describe and then systematically explore a large
design space of fully automated, abstract interpretation-based numeric
static analyses for Java. Each analysis is identified by a choice of
five configurable options---the numeric domain, the heap abstraction,
the object representation, the interprocedural analysis order, and the
level of context sensitivity. In total, we study 162 analysis
configurations to assess both how individual configuration options
perform overall and how different options interact. To our
knowledge, our basic analysis is one of the few fully
automated numeric static analyses for Java, and we do not know of any
prior work that has studied such a large static analysis design space.
We selected analysis configuration options that are well-known in the
static analysis literature and that are key choices in designing a
Java static analysis. For the numeric domain, we considered both
intervals~\cite{CousotCousot76-1} and convex
polyhedra~\cite{Cousot:1978:ADL:512760.512770}, as these are popular
and bookend the precision/performance spectrum. (See
Section~\ref{sec:analysis}.)
Modeling the flow of data through the heap requires handling pointers
and aliasing. We consider three different choices of
\emph{heap abstraction}: using \emph{summary
objects}~\cite{fu2014modularly,gopan2004numeric}, which are
\emph{weakly updated}, to summarize multiple heap locations; \emph{access
paths}~\cite{De:2012:SFP:2367163.2367203,Wei:2014:SPA:2945642.2945644},
which are \emph{strongly updated}; and a combination of the two.
To implement these
abstractions, we use an ahead-of-time, global \emph{points-to
analysis}~\cite{Ryder:2003:DPR:1765931.1765945}, which maps static/local variables and
heap-allocated fields to abstract objects.
We explore three variants of \emph{abstract object
representation}: the standard
\emph{allocation-site abstraction} (the most precise) in which each syntactic
\code{new} in the program represents an abstract object;
\emph{class-based abstraction} (the least precise) in which each class represents all
instances of that class; and a \emph{smushed string
abstraction} (intermediate precision) which is the same as allocation-site abstraction except
strings are modeled using a class-based
abstraction~\cite{Bravenboer:2009:SDS:1640089.1640108}.
(See Section~\ref{sec:heap}.)
We compare three choices in the \emph{interprocedural
analysis order} we use to model method calls: \emph{top-down analysis}, which starts with
\code{main} and analyzes callees as they are encountered;
\emph{bottom-up analysis}, which starts at the leaves of the call tree
and instantiates method summaries at call sites; and a hybrid analysis that
is bottom-up for library methods and top-down for
application code. In general, top-down analysis explores
fewer methods, but it may analyze callees multiple times. Bottom-up
analysis explores each method once but needs to create summaries,
which can be expensive.
Finally, we compare three kinds of
\emph{context-sensitivity} in the points-to analysis:
\emph{context-insensitive} analysis, \emph{1-CFA
analysis}~\cite{shivers91} in which one level of calling context is
used to discriminate pointers, and \emph{type-sensitive
analysis}~\cite{smaragdakis2011pick} in which the type of the
receiver is the context. (See Section~\ref{sec:methods}.)
We implemented our analysis using WALA~\cite{wala} for its
intermediate representation and points-to analyses and either
APRON~\cite{apron,Jeannet:2009:ALN:1575060.1575116} or ELINA~\cite{elina,DBLP:conf/popl/SinghPV17} for the interval and
polyhedral numeric domains, respectively. We then applied all
162 analysis configurations to the DaCapo benchmark
suite~\cite{DaCapo:paper}, using the numeric analysis to try to prove
array accesses are within bounds. We measured the analyses' performance
and the number of array bounds checks they discharged.
We analyzed
our results by using a multiple linear regression over analysis
features and outcomes, and by performing data visualizations.
We studied three research questions. First, we examined how
analysis configuration affects performance. We found that using
summary objects causes significant slowdowns, e.g., the vast majority
of the analysis runs that timed out used summary objects. We also
found that polyhedral analysis incurs a significant slowdown, but only
half as much as summary objects. Surprisingly, bottom-up analysis
provided little performance advantage generally, though it did provide
some benefit for particular object representations.
Finally, context-insensitive
analysis is faster than context-sensitive analysis, as might be
expected, but the difference is not great when combined with more
approximate (class-based and smushed string) abstract object
representations.
Second, we examined how analysis configuration affects precision. We
found that using access paths is critical to precision. We also
found that the bottom-up analysis has worse precision than top-down
analysis, especially when using summary objects, and that using a
more precise abstract object representation improves precision. But
other traditional ways of improving precision do so
only slightly (the polyhedral domain) or not significantly
(context-sensitivity).
Finally, we looked at the precision/performance tradeoff for all
programs. We found that using access paths is always a good idea, both
for precision and performance, and top-down analysis works
better than bottom-up. While summary objects, originally proposed by
Fu~\cite{fu2014modularly}, do help precision for
some programs, the benefits are often marginal when considered as a
percentage of all checks, so they tend not to outweigh their large performance
disadvantage. Lastly, we found that the precision
gains for more precise object representations and polyhedra are
modest, and performance costs can be magnified by other analysis
features.
In summary, our empirical study provides a large,
comprehensive evaluation of the effects of important numeric
static analysis design choices on performance, precision, and their
tradeoff; it is the first of its kind. We plan to release our code and
data to support further research and evaluation.
\vspace{-6pt}
\section{Numeric Static Analysis}
\label{sec:analysis}
\begin{table}[t]
\small
\centering
\begin{tabular}{ |c|c@{~~~}p{2.4in}|}
\hline
{\bf Config. Option} & \multicolumn{1}{c}{\textbf{Setting}} & \multicolumn{1}{c|}{\textbf{Description}} \\
\hline
\multirow{2}{*}{\parbox{1in}{\centering Numeric domain (\opt{ND})}} & \opt{INT} & Intervals \\
& \opt{POL} & Polyhedra \\
\hline
\multirow{3}{*}{\parbox{1in}{\centering Heap abstraction (\opt{HA})}}
& \opt{SO} & Only summary objects \\
& \opt{AP} & Only access paths \\
& \opt{AP+SO} & Both access paths and summary objects\\
\hline
\multirow{3}{*}{\parbox{1in}{\centering Abstract object representation (\opt{OR})}}
& \opt{ALLO} & Alloc-site abstraction \\
& \opt{CLAS} & Class-based abstraction \\
& \opt{SMUS} & Alloc-site except Strings \\
\hline
\multirow{3}{*}{\parbox{1in}{\centering Inter-procedural analysis order (\opt{AO})}} &
\opt{TD} & Top-down \\
& \opt{BU} & Bottom-up \\
& \opt{TD+BU} & Hybrid top-down and bottom-up \\
\hline
\multirow{3}{*}{\parbox{1in}{\centering Context sensitivity (\opt{CS})}} & \opt{CI} & Context-insensitive \\
& \opt{1CFA} & 1-CFA \\
& \opt{1TYP} & Type-sensitive \\
\hline
\end{tabular}
\caption{\textmd{Analysis configuration options, and their possible settings.}}
\vspace{-15pt}
\label{table:ac}
\end{table}
A \emph{numeric static analysis} is one that tracks numeric properties of
memory locations, e.g., that $x \leq 5$ or $y > z$. A natural starting
point for a numeric static analysis for Java programs is numeric
abstract interpretation over program variables within a single
procedure/method~\cite{Cousot:1977:AIU:512950.512973}.
A standard abstract interpretation expresses numeric properties using
a \emph{numeric abstract domain}, of which the most common are
\emph{intervals} (also known as boxes) and \emph{convex polyhedra}.
Intervals~\cite{CousotCousot76-1} define abstract states using
inequalities of the form $p~\mathit{relop}~n$ where $p$ is a
variable, $n$ is a constant integer, and $\mathit{relop}$ is a relational
operator such as $\leq$. A variable such as $p$ is sometimes called a
\emph{dimension}, as it describes one axis of a numeric space.
Convex polyhedra~\cite{Cousot:1978:ADL:512760.512770} define abstract
states using linear relationships between variables and constants,
e.g., of the form $3p_1 - p_2 \leq 5$.
Intervals are less precise but more efficient than polyhedra.
Operations on intervals have time complexity linear in the number of
dimensions whereas the time complexity for polyhedra operations is
exponential in the number of dimensions.\footnote{Further, the time
complexity of join is $O(d \cdot c^{2^{d+1}})$ where $ c $ is the
number of constraints, and $ d $ is the number of
dimensions~\cite{elina}.}
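As a concrete illustration, the pointwise interval join below is a minimal Python sketch, not the APRON/ELINA implementation the analyses actually use; a state maps each variable (``dimension'') to a pair of bounds, variables absent from a state are treated as unconstrained, and the join visits each dimension once, hence the linear cost noted above.

```python
NEG_INF, POS_INF = float("-inf"), float("inf")

def interval_join(s1, s2):
    """Pointwise hull of two interval states: one min/max per
    dimension, so the cost is linear in the number of dimensions."""
    out = {}
    for v in set(s1) | set(s2):
        lo1, hi1 = s1.get(v, (NEG_INF, POS_INF))  # absent => unconstrained
        lo2, hi2 = s2.get(v, (NEG_INF, POS_INF))
        out[v] = (min(lo1, lo2), max(hi1, hi2))
    return out
```

Note that joining a constrained variable with a state that lacks it yields an unconstrained result, which is the conservative choice.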
Numeric abstract interpretations, including our own analyses, are usually flow-sensitive, i.e.,
each program point has an associated abstract state characterizing
properties that hold at that point. Variable assignments are
\emph{strong updates}, meaning information about the variable is
replaced by information from the right-hand side of the
assignment. At merge points (e.g., after the
completion of a conditional), the abstract states of the possible
prior states are \emph{joined} to yield properties that hold regardless
of the branch taken. Loop bodies are reanalyzed until their
constituent statements' abstract states reach a fixed point. Reaching
a fixed point is accelerated by applying the numeric domain's standard \emph{widening}
operator~\cite{bagnara2003precise} in place of join after a fixed
number of iterations.
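The widening step described above can be sketched as follows. This is a standard interval widening, shown for illustration rather than taken from the paper's implementation: any bound that fails to stabilize between iterations jumps to infinity, which guarantees the loop analysis terminates.

```python
NEG_INF, POS_INF = float("-inf"), float("inf")

def widen(old, new):
    """Interval widening: keep a bound only if it is stable from
    `old` to `new`; otherwise push it to +/- infinity."""
    out = {}
    for v in set(old) | set(new):
        lo1, hi1 = old.get(v, (NEG_INF, POS_INF))
        lo2, hi2 = new.get(v, (NEG_INF, POS_INF))
        lo = lo1 if lo1 <= lo2 else NEG_INF   # lower bound dropped? widen
        hi = hi1 if hi1 >= hi2 else POS_INF   # upper bound grew? widen
        out[v] = (lo, hi)
    return out
```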
Scaling a basic numeric abstract interpreter to full Java requires making
many design choices. Table~\ref{table:ac} summarizes the key choices
we study in this paper. Each configuration option has a range of
settings that potentially offer different precision/performance tradeoffs.
Different options may interact with each other to affect the tradeoff.
In total, we study five options with two or three settings each. We
have already discussed the first option, the numeric domain (\opt{ND}), for which
we consider intervals (\opt{INT}) and polyhedra (\opt{POL}). The next two options
consider the heap, and are discussed in the next section, and the last two
options consider method calls, and are discussed in Section~\ref{sec:methods}.
For space reasons, the main presentation focuses on the high-level
design and tradeoffs. Detailed algorithms are given formally in
Appendices~\ref{app:analysis} and~\ref{app:interproc} for the heap and
interprocedural analysis, respectively.
\vspace{-6pt}
\section{The Heap}
\label{sec:heap}
The numeric analysis described so far is sufficient only for analyzing
code with local, numeric variables.
To analyze numeric properties of heap-manipulating programs, we must
also consider heap locations $x.f$,
where $x$ is a reference to a heap-allocated object, and $f$ is a
numeric field.\footnote{In our implementation, statements such as $z = x.f.g$ are
decomposed so that paths are at most length one, e.g.,
$w = x.f; z = w.g$.} To do so requires developing a \emph{heap
abstraction} (\opt{HA}) that accounts for
aliasing. In particular, when variables $x$ and $y$ may point to
the same heap object, an assignment to $x.f$ could affect
$y.f$. Moreover, the referent of a pointer may be uncertain, e.g.,
the true branch of a
conditional could assign location $o_1$ to $x$, while the false branch
could assign $o_2$ to $x$. This uncertainty must be reflected in
subsequent reads of $x.f$.
We use a \emph{points-to analysis} to reason about aliasing. A
points-to analysis computes a mapping $\mathit{Pt}$ from variables $x$ and
access paths $x.f$ to (one or more) \emph{abstract
objects}~\cite{Ryder:2003:DPR:1765931.1765945}. If $\mathit{Pt}$
maps two variables/paths $p_1$ and $p_2$ to a common abstract object
$o$ then $p_1$ and $p_2$ \emph{may alias}. We also use points-to
analysis to determine the call graph, i.e., to determine what method
may be called by an expression $x.m(\ldots)$ (discussed in Section~\ref{sec:methods}).
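The may-alias test induced by $\mathit{Pt}$ reduces to a set intersection; the following minimal sketch (with hypothetical names) makes this concrete.

```python
def may_alias(pt, p1, p2):
    """p1 and p2 may alias iff their points-to sets share at least
    one abstract object."""
    return bool(pt.get(p1, set()) & pt.get(p2, set()))
```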
\vspace{-6pt}
\subsection{Summary objects (\opt{SO})}
\label{sec:sum-obj}
The first heap abstraction we study is based on
Fu~\cite{fu2014modularly}: use a \emph{summary object} (\opt{SO}) to
abstract information about multiple heap locations as a single
abstract state ``variable''~\cite{gopan2004numeric}.
As an example, suppose that $\mathit{Pt}(x) = \{ o \}$ and we
encounter the assignment $x.f \sassign 5$. Then in this approach, we
add a variable $o\_f$ to the abstract state, modeling the field $f$
of object $o$, and we add the constraint
$o\_f = 5$. Subsequent assignments to such summary
objects must be \emph{weak updates}, to respect the \emph{may alias}
semantics of the points-to analysis. For example, suppose $y.f$
may alias $x.f$, i.e., $o \in \mathit{Pt}(x) \cap
\mathit{Pt}(y)$. Then after a later assignment $y.f \sassign 7$ the
analysis would weakly update $o\_f$ with 7, producing constraints
$5 \leq o\_f \leq 7$ in the abstract state. These
constraints conservatively model that either $o\_f = 5$ or $o\_f = 7$,
since the assignment to $y.f$ may or may not affect $x.f$.
In general, weak updates are more expensive than strong updates, and
reading a summary object is more expensive than reading a variable. A
strong update to $x$ is implemented by \emph{forgetting} $x$ in the
abstract state,\footnote{Doing so has the effect of ``connecting''
constraints that are transitive via $x$. For example, given
$y \leq x \leq 5$, forgetting $x$ would yield constraint
$y \leq 5$.} and then re-adding it to be equal to the
assigned value. Note that $x$ cannot appear in the assigned value
because programs are converted into static single assignment form (Section \ref{sec:impl}).
A weak update---which is not directly supported in
the numeric domain libraries we use---is implemented by copying the
abstract state, strongly updating $x$ in the copy, and then joining
the two abstract states.
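Encoding a weak update as ``copy, strongly update, join'' can be sketched for a single interval variable as follows. This is illustrative only; the actual analyses perform the copy and join on entire abstract states via the numeric domain library.

```python
def weak_update(state, var, lo, hi):
    """Weak update of one interval variable: the result is the join
    of the old value and the newly assigned interval, conservatively
    keeping both possibilities (cf. 5 <= o_f <= 7 in the text)."""
    old_lo, old_hi = state.get(var, (float("-inf"), float("inf")))
    out = dict(state)
    out[var] = (min(old_lo, lo), max(old_hi, hi))
    return out
```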
Reading from a summary object
requires ``expanding'' the abstract state with a copy $o'\_f$ of the
summary object and its constraints, creating a constraint on $o'\_f$,
and then forgetting $o'\_f$. Doing this ensures that operations on a
variable into which a summary object is read do not affect prior
reads. A normal read just references the read variable.
Fu~\cite{fu2014modularly} argues that this basic approach is
better than ignoring heap locations entirely by measuring how often field
reads are constrained, whereas they would always be unconstrained in a
heap-unaware analysis. However, it is unclear whether the approach is sufficiently precise
for applications such as array-bounds check elimination. Using
the polyhedra numeric domain should help. For example, a \code{Buffer}
class might store an array in one field and a conservative bound on an
array's length in another. The polyhedral domain will permit relating the
latter to the former while the interval domain will not. But the slowdown
due to the many added summary objects may be prohibitive.
\vspace{-9pt}
\subsection{Access paths (\opt{AP})}
\vspace{-3pt}
An alternative heap abstraction we study is to treat \emph{access paths}
(\opt{AP}) as if they are normal variables, while still accounting for possible
aliasing~\cite{De:2012:SFP:2367163.2367203,Wei:2014:SPA:2945642.2945644}. In particular, a path $x.f$ is
modeled as a variable $x\_f$, and an assignment $x.f \sassign n$
strongly updates $x\_f$ to be $n$. At the same time, if there exists
another path $y.f$ and $x$ and $y$ may alias, then we must weakly
update $y\_f$ as possibly containing~$n$.
In general, determining which paths must be weakly updated
depends on the abstract object representation and context-sensitivity
of the points-to analysis.
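The access-path assignment rule might be sketched as follows, assuming interval states keyed by flattened names such as \code{x\_f}; this is an illustration with hypothetical names, not the paper's implementation.

```python
def assign_path(state, pt, x, fld, lo, hi):
    """x.fld := [lo, hi]: strong update to x_fld, plus a weak update
    (join) to y_fld for every y whose points-to set intersects x's."""
    suffix = "_" + fld
    out = dict(state)
    out[x + suffix] = (lo, hi)                   # strong update
    for var in state:
        if not var.endswith(suffix) or var == x + suffix:
            continue
        y = var[: -len(suffix)]
        if pt.get(x, set()) & pt.get(y, set()):  # may alias?
            olo, ohi = state[var]
            out[var] = (min(olo, lo), max(ohi, hi))  # weak update
    return out
```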
Two key benefits of \opt{AP} over \opt{SO} are that (1) \opt{AP} supports strong
updates to paths $x.f$, which are more precise and less expensive
than weak updates, and (2) \opt{AP} may require
fewer variables to be tracked, since, in our design, access paths are
mostly local to a method whereas points-to sets are computed across
the entire program. On the other
hand, \opt{SO} can do better at summarizing invariants about heap locations
pointed to by other heap locations, i.e., not necessarily via an
access path. Especially when performing an interprocedural analysis,
such information can add useful precision.
\vspace{-6pt}
\subsubsection{Combined (\opt{AP+SO})}
A natural third choice is to combine \opt{AP} and \opt{SO}.
Doing so sums both the costs and benefits of the two approaches.
An assignment $x.f \sassign n$ strongly updates $x\_f$ and weakly
updates $o\_f$ for each $o$ in $\mathit{Pt}(x)$ and each $y\_f$ where
$\mathit{Pt}(x) \cap \mathit{Pt}(y) \neq \emptyset$.
Reading from $x.f$ when it has not been previously assigned to is just
a normal read, after first strongly updating $x\_f$ to be the join of
the summary read of $o\_f$ for each $o \in \mathit{Pt}(x)$.
\vspace{-6pt}
\subsection{Abstract object representation (\opt{OR})}
\vspace{-3pt}
Another key precision/performance tradeoff is the \emph{abstract
object representation} (\opt{OR}) used by the points-to
analysis. In particular, when $\mathit{Pt}(x) = \{ o_1, ..., o_n \}$, where do the
names $o_1, ..., o_n$ come from? The answer impacts the naming of
summary objects, the granularity of alias checks for assignments to access paths, and
the precision of the call-graph, which requires aliasing information
to determine which methods are targeted by a dynamic dispatch
$x.m(...)$.
As shown in the third row of Table~\ref{table:ac}, we explore three
representations for abstract objects. The first choice names
abstract objects according to their \emph{allocation site} (\opt{ALLO})---all objects
allocated at the same program point have the same name. This is precise but potentially
expensive, since there are many possible allocation sites, and each
path $x.f$ could be mapped to many abstract objects. We also consider
representing abstract objects using \emph{class names} (\opt{CLAS}), where all objects of the
same class share the same abstract name, and a hybrid \emph{smushed
string} (\opt{SMUS}) approach, where every \code{String} object has the same abstract
name but objects of other types have allocation-site
names~\cite{Bravenboer:2009:SDS:1640089.1640108}. The class
name approach is the least precise but potentially more efficient
since there are fewer names to consider. The smushed string analysis
is somewhere in between. The question is whether the reduction in
names helps performance enough, without overly compromising
precision.
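The three naming schemes can be sketched as a single selector; this is illustrative (the actual representation lives inside the points-to analysis), and the site and class names below are hypothetical.

```python
def abstract_name(alloc_site, cls, repr_kind):
    """Abstract object name under the three representations:
    CLAS merges all instances of a class, SMUS merges only Strings,
    and ALLO keeps one name per allocation site."""
    if repr_kind == "CLAS":
        return cls
    if repr_kind == "SMUS" and cls == "java.lang.String":
        return cls
    return alloc_site  # ALLO, and SMUS for non-String classes
```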
\vspace{-6pt}
\section{Method Calls}
\vspace{-3pt}
\label{sec:methods}
So far we have considered the first three options of
Table~\ref{table:ac}, which handle integer variables and the heap.
This section considers the last two options---interprocedural
analysis order (\opt{AO}) and context sensitivity (\opt{CS}).
\vspace{-6pt}
\subsection{Interprocedural analysis order (\opt{AO})}
\vspace{-3pt}
\label{subsec:interproc}
We implement three styles of interprocedural analysis: top-down (\opt{TD}),
bottom-up (\opt{BU}), and their combination (\opt{TD+BU}). The \opt{TD} analysis starts at
the program entry point and, as it encounters method calls, analyzes the
body of the callee (memoizing duplicate calls). The \opt{BU} analysis starts at the leaves of
the call graph and analyzes each method in isolation, producing a
summary of its behavior \cite{Whaley:1999:CPE:320384.320400,Gulwani:2007:CPS:1762174.1762199}.
(We discuss call graph construction in
the next subsection.) This summary is then instantiated at each
method call. The hybrid analysis works top-down for application code
but bottom-up for any code from the Java standard library.
\vspace{-9pt}
\subsubsection{Top-down (\opt{TD}).}
Assuming the analyzer knows the method being
called, a simple approach to top-down analysis would be to transfer
the caller's state to the beginning of the callee, analyze the callee in
that state, and then transfer the state at the end of the callee back
to the caller. Unfortunately, this approach is
prohibitively expensive because the abstract state would accumulate
all local variables and access paths across all
methods along the call-chain.
We avoid this blowup by analyzing a call to method $m$ while
considering only relevant local variables and heap
abstractions. Ignoring the heap for the moment, the basic
approach is as follows. First, we make a copy $C_m$ of the caller's
abstract state $C$. In $C_m$, we set variables for $m$'s formal
numeric arguments to the actual arguments
and then forget (as defined in Section~\ref{sec:sum-obj}) the caller's local variables. Thus $C_m$ will only
contain the portion of $C$ relevant to $m$. We analyze $m$'s body,
starting in $C_m$, to yield the final state $C'_m$. Lastly, we
merge $C$ and $C'_m$, strongly update the variable that receives the returned
result, and forget the callee's local variables---thus avoiding
adding the callee's locals to the caller's state.
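A minimal sketch of this call-handling discipline over interval states follows; the helper names are hypothetical, and heap handling is omitted (only the actuals flow into the callee, and only the return value flows back).

```python
def analyze_call(caller_state, analyze_body, formals, actuals, recv):
    """TD call sketch: bind formals to actuals in a fresh entry state
    (caller locals are forgotten), analyze the body, then strongly
    update the receiving variable with the returned interval."""
    entry = {f: caller_state.get(a, (float("-inf"), float("inf")))
             for f, a in zip(formals, actuals)}
    exit_state = analyze_body(entry)          # yields C'_m
    merged = dict(caller_state)               # caller locals survive
    merged[recv] = exit_state.get("ret", (float("-inf"), float("inf")))
    return merged
```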
Now consider the heap. If we are using summary objects, when we copy
$C$ to $C_m$ we do not forget those objects that might be used by $m$
(according to the points-to analysis). As $m$ is analyzed, the
summary objects will be weakly updated, ultimately yielding state
$C'_m$ at $m$'s return. To merge $C'_m$ with $C$, we first forget the summary
objects in $C$ not forgotten in $C_m$ and then concatenate $C'_m$ with
$C$. The result is that updated summary objects from $C'_m$ replace those
that were in the original $C$.
If we are using access paths, then at the call we forget access paths
in $C$ because assignments in $m$'s code might invalidate them. But if we
have an access path $x.f$ in the caller and we pass $x$ to $m$, then
we retain $x.f$ in the callee but rename it to use $m$'s parameter's
name. For example, $x.f$ becomes $y.f$ if $m$'s parameter is $y$. If $y$ is never
assigned to in $m$, we can map $y.f$ back to $x.f$ (in the caller)
once $m$ returns.\footnote{Assignments to $y.f$ in the callee are
fine; only assignments to $y$ are problematic.} All other access
paths in $C_m$ are forgotten prior to concatenating with the caller's state.
Note that the above reasoning is only for numeric values. We take no
particular steps for pointer values as the points-to analysis already
tracks those across all methods.
\vspace{-12pt}
\subsubsection{Bottom-up (\opt{BU}).}
In the \opt{BU} analysis, we analyze a method $m$'s body to produce a
\emph{method summary} and then instantiate the summary at calls to
$m$. Ignoring the heap, producing a method summary for $m$ is
straightforward: start analyzing $m$ in a state $C_m$ in which its
(numeric) parameters are unconstrained variables. When $m$ returns,
forget all variables in the final state except the parameters and
return value, yielding a state $C'_m$ that is the method
summary. Then, when $m$ is called, we concatenate $C'_m$ with the current
abstract state; add constraints between the parameters and their actual
arguments; strongly update the variable receiving the result with the
summary's returned value; and then forget those variables.
When using the polyhedral numeric domain, $C'_m$ can express
relationships between input and output parameters, e.g., \code{ret
$\leq$ z} or \code{ret = x+y}. For the interval domain, which is
non-relational, summaries are more limited, e.g., they can express
\code{ret $\leq 100$} but not \code{ret $\leq$ x}. As such, we expect
bottom-up analysis to be far more useful with the polyhedral domain
than the interval domain.
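Instantiating an interval-domain summary might look as follows; this is a sketch under the non-relational limitation just described, where we model the parameter/actual constraints as an interval meet and \code{recv} (a hypothetical name) receives the summarized return value.

```python
def instantiate_summary(caller_state, summary, actuals, formals, recv):
    """BU call sketch for intervals: meet each actual with its
    parameter's summary interval, then strongly update the receiver
    with the summary's (non-relational) return interval."""
    out = dict(caller_state)
    for f, a in zip(formals, actuals):
        slo, shi = summary.get(f, (float("-inf"), float("inf")))
        alo, ahi = out.get(a, (float("-inf"), float("inf")))
        out[a] = (max(slo, alo), min(shi, ahi))   # interval meet
    out[recv] = summary.get("ret", (float("-inf"), float("inf")))
    return out
```

With intervals the summary can record \code{ret $\leq$ 100} (as below) but has no way to encode a relation such as \code{ret $\leq$ x}.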
\vspace{-9pt}
\paragraph*{Summary objects.} Now consider the heap. Recall that when
using summary objects in the \opt{TD} analysis, reading a path $x.f$ into
$z$ ``expands'' each summary object $o\_f$ when $o \in Pt(x)$ and
strongly updates $z$ with the join of these expanded objects, before
forgetting them. This expansion makes a copy of each summary object's
constraints so that later use of $z$ does not incorrectly impact the
summary. However, when analyzing a method bottom-up, we may not yet
know all of a summary object's constraints. For example, if $x$ is
passed into the current method, we will not (yet) know if $o\_f$ is
assigned to a particular numeric range in the caller.
We solve this problem by allocating a fresh, unconstrained
\emph{placeholder object} at each read of $x.f$ and including it in the
initialization of the assigned-to variable~$z$. The placeholder is
also retained in $m$'s method summary. Then at a call to $m$, we
instantiate each placeholder with the constraints in the caller
involving the placeholder's summary location. We also create a fresh
placeholder in the caller and weakly update it to the placeholder in
the callee; doing so allows for further constraints to be added from
calls further up the call chain.
\vspace{-9pt}
\paragraph*{Access paths.}
If we are using access paths, we treat them just as in \opt{TD}---each $x.f$
is allocated a special variable that is strongly updated when
possible, according to the points-to analysis. These are not kept in
method summaries. When also using summary objects, at the first read
to $x.f$ we initialize it from the summary objects derived from $x$'s
points-to set, following the above expansion procedure. Otherwise
$x.f$ will be unconstrained.
\vspace{-9pt}
\subsubsection{Hybrid (\opt{TD+BU}).} In addition to \opt{TD} or
\opt{BU} analysis (only), we implemented a hybrid strategy that
performs \opt{TD} analysis for the application, but \opt{BU} analysis
for code from the Java standard library. Library methods are analyzed
first, bottom-up. Application method calls are analyzed top-down. When
an application method calls a library method, it applies the \opt{BU}
method call approach. \opt{TD+BU} could potentially be better than
\opt{TD} because library methods, which are likely called many times,
only need to be analyzed once. \opt{TD+BU} could similarly be better
than \opt{BU} because application methods, which are likely not called
as many times as library methods, can use the lower-overhead \opt{TD}
analysis.
Now, consider the interaction between the heap abstraction
and the analysis order. The use of access paths (only) does not greatly affect the
normal \opt{TD}/\opt{BU} tradeoff: \opt{TD} may yield greater precision by adding
constraints from the caller when analyzing the callee, while \opt{BU}'s
lower precision comes with the benefit of analyzing method bodies less
often. Use of summary objects complicates this tradeoff. In the \opt{TD}
analysis, the use of summary objects adds a relatively stable overhead
to all methods, since they are included in every method's abstract
state. For the \opt{BU} analysis, methods further down in the call chain
will see fewer summary objects used, and method bodies may end up
being analyzed less often than in the \opt{TD} case. On the other hand,
placeholder objects add more dimensions overall (one per read) and
more work at call sites (to instantiate them). But, instantiating a
summary may be cheaper than reanalyzing the method.
\vspace{-9pt}
\subsection{Context sensitivity (\opt{CS})}
\vspace{-3pt}
\label{sec:cs}
The last design choice we considered was context sensitivity. A
\emph{context-insensitive} (\opt{CI}) analysis conflates information from
different call sites of the same method. For example, two calls to
method $m$ in which the first passes $x_1, y_1$ and the second passes
$x_2,y_2$ will be conflated such that within $m$ we will only know
that either $x_1$ or $x_2$ is the first parameter, and either $y_1$ or
$y_2$ is the second; we will miss the correlation between parameters.
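The conflation can be made concrete with a small sketch over an interval domain (variable names mirror the text; the two call sites and their argument values are hypothetical):

```python
def join(a, b):
    # Interval join: the smallest interval containing both inputs.
    return (min(a[0], b[0]), max(a[1], b[1]))

# Call 1 passes (x1, y1) = (0, 0); call 2 passes (x2, y2) = (10, 10).
site1 = {"x": (0, 0), "y": (0, 0)}
site2 = {"x": (10, 10), "y": (10, 10)}

# Context-insensitive: a single abstract state for m, joined over all callers.
ci_state = {v: join(site1[v], site2[v]) for v in site1}
assert ci_state == {"x": (0, 10), "y": (0, 10)}
# The correlation (x == y at every call) is lost: the joined state also
# admits, e.g., x = 0 with y = 10.
```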
A context-sensitive analysis provides some distinction among different
call sites. A \emph{1-CFA analysis}~\cite{shivers91} (\opt{1CFA}) distinguishes
based on one level of calling context: two calls originating
from different program points will be distinguished, but two calls
from the same point in a method that is itself called from two
different points will not. A \emph{type-sensitive analysis}~\cite{smaragdakis2011pick}
(\opt{1TYP}) uses the type of the receiver as the context.
Context sensitivity in the points-to analysis affects alias checks,
e.g., when determining whether an assignment to $x.f$ might affect
$y.f$. It also affects the abstract object representation and call
graph construction. Due to the latter, context sensitivity also
affects our interprocedural numeric analysis. In a context-sensitive
analysis, a single method is essentially treated as a family of
methods indexed by a calling context. In particular, our analysis
keeps track of the current context as a \emph{frame}, and when
considering a call to method \code{x.m()}, the target methods to which
\code{m}
may refer differ depending on the frame. This provides more
precision than a context-insensitive (i.e., frame-less) approach, but
the analysis may consider the same method code many times, which adds
expense. This is true both for \opt{TD}
and \opt{BU}, but is perhaps more detrimental to the latter since it
reduces potential method summary reuse.
On the other hand, more precise analysis may reduce
unnecessary work by pruning infeasible call graph edges. For example,
when a call might dynamically dispatch to several different methods,
the analysis must consider them all, joining their abstract states. A
more precise analysis may consider fewer target methods.
\vspace{-9pt}
\section{Implementation}
\vspace{-3pt}
\label{sec:impl}
We have implemented an analysis for Java with all of the options described in
the previous two sections.
Our implementation is based on the intermediate representation in the
T. J. Watson Libraries for Analysis (WALA) version 1.3.10~\cite{wala},
which converts a Java bytecode program into static single assignment
(SSA) form \cite{Cytron:1991:ECS:115372.115320}, which is then
analyzed. We use the APRON~\cite{apron,Jeannet:2009:ALN:1575060.1575116}
implementation of intervals (trunk revision 1096, published 2016/05/31),
and ELINA~\cite{elina,DBLP:conf/popl/SinghPV17} (snapshot as of
October 4, 2017) for convex polyhedra. Our current
implementation supports all non-floating point numeric
Java values and comprises 14K lines of Scala code.
Next we discuss a few additional implementation details.
\vspace{-9pt}
\paragraph{Preallocating dimensions.}
In both APRON and ELINA, it is very expensive to perform join operations
that combine abstract states with different variables. Thus, rather than add
dimensions as they arise during
abstract interpretation, we instead \emph{preallocate} all necessary
dimensions---including for local variables, access paths, and summary objects, when enabled---at
the start of a method body. This ensures the abstract states
have the same dimensions at each join point. We found that, even
though this approach makes some states larger than they need to be,
the overall performance savings is still substantial.
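A minimal sketch of the idea, with a dictionary of intervals standing in for an APRON/ELINA abstract state (the dimension names are hypothetical):

```python
TOP = (float("-inf"), float("inf"))

def join(a, b):
    # Joins require identical dimension sets; preallocation guarantees this.
    assert a.keys() == b.keys()
    return {v: (min(a[v][0], b[v][0]), max(a[v][1], b[v][1])) for v in a}

dims = ["i", "n", "tmp"]           # preallocated: all locals/paths/objects
def fresh_state():
    # Some dimensions may go unused, making states larger than needed,
    # but every state has the same shape at every join point.
    return {v: TOP for v in dims}

s1, s2 = fresh_state(), fresh_state()
s1["i"] = (0, 5)                   # branch 1 constrains i
s2["i"] = (3, 9)                   # branch 2 constrains i differently
merged = join(s1, s2)              # no dimension reconciliation needed
assert merged["i"] == (0, 9)
```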
\vspace{-9pt}
\paragraph{Arrays.}
Our analysis encodes an array as an object with two fields,
\code{contents}, which represents the contents
of the array, and \code{len}, representing the array's
length. Each read/write from \code{a[i]} is modeled as a
weak read/write of \code{contents} (because all array elements are
represented with the same field), with an added check that
\code{i} is between $0$ and \code{len}. We treat \code{String}s as a
special kind of array.
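The encoding can be sketched as follows, with intervals standing in for the numeric domain (illustrative only; \code{contents} and \code{len} are the field names from the text):

```python
def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

arr = {"contents": (0, 0), "len": (10, 10)}  # e.g., new int[10], zero-filled

def weak_write(obj, value_itv):
    # Weak update: old and new values are joined, since one abstract
    # field stands for every concrete element of the array.
    obj["contents"] = join(obj["contents"], value_itv)

def in_bounds(index_itv, obj):
    # Conservatively require 0 <= i for the whole index interval, and
    # i strictly less than the smallest possible length.
    lo, hi = index_itv
    return lo >= 0 and hi < obj["len"][0]

weak_write(arr, (5, 5))             # a[i] = 5 for some i
assert arr["contents"] == (0, 5)    # subsequent reads see [0, 5]
assert in_bounds((0, 9), arr)       # i in [0, 9] is provably safe
assert not in_bounds((0, 10), arr)  # i could equal len: not provable
```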
\vspace{-9pt}
\paragraph{Widening.}
As is standard in abstract interpretation, our implementation performs
widening to ensure termination when analyzing loops. In a pilot study,
we compared widening after anywhere from one to ten iterations. We found
little added precision from widening after more than three
iterations when proving array indexes in bounds (our target
application, discussed next), so our implementation widens after three iterations.
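For intuition, standard interval widening on a counting loop might look like this sketch (illustrative; not our implementation's exact iteration strategy):

```python
INF = float("inf")

def widen(old, new):
    # Any bound still growing is pushed to infinity so iteration stops.
    lo = old[0] if new[0] >= old[0] else -INF
    hi = old[1] if new[1] <= old[1] else INF
    return (lo, hi)

# Analyzing `for (i = 0; ...; i++)`: each pass grows i's interval by 1.
itv = (0, 0)
for k in range(5):
    grown = (itv[0], itv[1] + 1)   # body effect: i = i + 1
    if k < 3:
        # Plain join for the first three iterations (keeps precision).
        itv = (min(itv[0], grown[0]), max(itv[1], grown[1]))
    else:
        itv = widen(itv, grown)    # widen afterwards to force a fixpoint
assert itv == (0, INF)             # stable: loop analysis terminates
```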
\vspace{-9pt}
\paragraph{Limitations.}
Our implementation is sound with a few exceptions.
In particular, it ignores calls to native methods and uses of
reflection. It is also unsound in its handling of recursive method
calls: if the return value of a recursive method is numeric,
it is regarded as unconstrained, and potential side effects of recursive calls
are not modeled.
\vspace{-9pt}
\section{Evaluation}
\vspace{-3pt}
\label{sec:eval}
In this section, we present an empirical study of our family of
analyses, focusing on the following research questions:
\smallskip
\noindent \textbf{RQ1: Performance.} How does the configuration
affect analysis running time?
\noindent \textbf{RQ2: Precision.} How does the
configuration affect analysis precision?
\noindent \textbf{RQ3: Tradeoffs.} How does the configuration
affect precision and performance?
\smallskip
To answer these questions, we chose an important analysis client,
array index out-of-bounds analysis, and ran it
on the DaCapo benchmark suite~\cite{DaCapo:paper}. We vary
each of the analysis features listed in
Table~\ref{table:ac}, yielding 162 total configurations. To understand
the impact of analysis features, we used multiple linear regression and logistic regression to
model precision and performance (the dependent variables) in terms of
analysis features and across programs (the independent variables). We
also studied per-program data directly.
Overall, we found that using access paths is a significant boon to
precision but costs little in performance, while using summary objects
is the reverse, to the point that use of summary objects is a
significant source of timeouts. Polyhedra add precision compared to
intervals, and impose some performance cost, though only half as much
as summary objects. Interestingly, when enabling both summary objects and
polyhedra together would result in a timeout, choosing summary objects tends
to provide better precision than choosing polyhedra. Finally, bottom-up
analysis harms precision compared to top-down analysis, especially
when only summary objects are enabled, but yields little gain in
performance.
\vspace{-9pt}
\subsection{Experimental setup}
\vspace{-3pt}
We evaluated our analyses by using them to perform array index
out-of-bounds analysis. More specifically, for each benchmark program, we
counted how many array access instructions (\code{x[i]=y},
\code{y=x[i]}, etc.) an analysis configuration could verify were in
bounds (i.e., \code{0<=i} and \code{i<x.length}), and measured the time taken
to perform the analysis.
\begin{table}[t!]
\centering
\begin{tabular}{|l|r|r|r|r|r|r|r|r|} \hline
& & \# & \multicolumn{3}{c|}{Best Performance} & \multicolumn{3}{c|}{Best Precision} \\
Prog & Size & Checks & Time(min) & \# Checks & Percent & Time(min) & \# Checks & Percent\\ \hline \hline
& & & \multicolumn{3}{c|}{\opt{BU-AP-CI-CLAS-INT}} & \multicolumn{3}{c|}{\opt{TD-AP+SO-1TYP-CLAS-INT}} \\
\opt{antlr} & 55734 &1526 & 0.6 & 1176 & 77.1\% & 18.5 & 1306 & 85.6\% \\ \hline
& & & \multicolumn{3}{c|}{\opt{BU-AP-CI-CLAS-INT}} & \multicolumn{3}{c|}{\opt{TD-AP-1TYP-SMUS-POL}} \\
\opt{bloat} & 150197 & 4621 & 4.0 & 2538 & 54.9\% & 17.2 & 2795 & 60.5\%\\ \hline
& & & \multicolumn{3}{c|}{\opt{BU-AP-CI-CLAS-INT}} & \multicolumn{3}{c|}{\opt{TD-AP-1TYP-SMUS-INT}} \\
\opt{chart} & 167621 & 7965 & 3.3 & 5593 & 70.2\% & 7.7 & 5654 & 71.0\% \\ \hline
& & &\multicolumn{3}{c|}{\opt{BU-AP-CI-ALLO-INT}} & \multicolumn{3}{c|}{\opt{TD-AP+SO-1TYP-SMUS-POL}} \\
\opt{eclipse} & 18938 & 1043 & 0.2 & 896 & 85.9\% & 3.3 & 977 & 93.7\% \\ \hline
& & & \multicolumn{3}{c|}{\opt{BU-AP-CI-CLAS-INT}} & \multicolumn{3}{c|}{\opt{TD-AP+SO-1CFA-SMUS-INT}} \\
\opt{fop} & 33243 & 1337 & 0.4 & 998 & 74.6\% & 2.6 & 1137 & 85.0\% \\ \hline
& & & \multicolumn{3}{c|}{\opt{BU-AP-CI-SMUS-INT}} & \multicolumn{3}{c|}{\opt{TD-AP+SO-CI-SMUS-INT}} \\
\opt{hsqldb} & 19497 & 1020 & 0.3 & 911 &89.3\% & 1.4 & 975 & 95.6\% \\ \hline
& & & \multicolumn{3}{c|}{\opt{BU-AP-CI-SMUS-INT}} & \multicolumn{3}{c|}{\opt{TD-AP-1CFA-CLAS-POL}} \\
\opt{jython} & 127661 & 4232 & 1.3 & 2667 & 63.0\% & 33.6 & 2919 & 69.0\% \\ \hline
& & & \multicolumn{3}{c|}{\opt{BU-AP-CI-SMUS-INT}} & \multicolumn{3}{c|}{\opt{TD-AP+SO-1TYP-ALLO-INT}} \\
\opt{luindex} & 69027 & 2764 & 1.8 & 1682 & 60.9\% & 46.8 & 2015 & 72.9\%\\ \hline
& & & \multicolumn{3}{c|}{\opt{BU-AP-CI-CLAS-INT}} & \multicolumn{3}{c|}{\opt{TD-AP+SO-1CFA-ALLO-POL}} \\
\opt{lusearch} & 20242 & 1062 & 0.2 & 912 & 85.9\% & 54.2 & 979 & 92.2\% \\ \hline
& & & \multicolumn{3}{c|}{\opt{BU-AP-CI-CLAS-INT}} & \multicolumn{3}{c|}{\opt{TD-AP+SO-CI-CLAS-INT}} \\
\opt{pmd} &116422 & 4402 & 1.7 & 3153 & 71.6\% & 49.5 & 3301 & 75.0\%\\ \hline
& & & \multicolumn{3}{c|}{\opt{BU-AP-CI-CLAS-INT}} & \multicolumn{3}{c|}{\opt{TD-AP+SO-1CFA-SMUS-POL}} \\
\opt{xalan} & 20315 & 1043 & 0.2 & 912 & 87.4\% & 3.8 & 981 & 94.1\% \\ \hline
\end{tabular}
\caption{\textmd{Benchmarks and overall results.}}
\vspace{-24pt}
\label{tab:benchoverall}
\end{table}
\vspace{-9pt}
\paragraph*{Benchmarks.}
We analyzed all eleven programs from the DaCapo benchmark
suite~\cite{DaCapo:paper} version 2006-10-MR2. The first three columns of
Table~\ref{tab:benchoverall} list the programs' names,
their size (number of IR instructions), and the number of array bounds checks they
contain. The rest of the table indicates the
fastest and most precise analysis configuration for each
program; we discuss these results in Section~\ref{sec:rq3}.
We ran each benchmark three times under each of the 162 analysis
configurations. The experiments were performed on two servers, each with
a single 2.4~GHz Intel Xeon E5-2609 processor (four logical cores) and
128GB of memory, running Ubuntu 16.04 (LTS). On each server, we ran
three analysis configurations in parallel, binding each process to a
designated core.
Since many analysis configurations are time-intensive, we set a
limit of 1~hour for running a benchmark under a
particular configuration. All performance results reported are the median of the
three runs. We also use the median precision result, though note the
analyses are deterministic, so the precision does not
vary except in the case of timeouts. Thus, we treat an analysis as
having completed as long as at least two of the three runs
completed, and as having timed out otherwise.
Among the 1782 median results (11 benchmarks, 162
configurations), 667 of them (37\%) timed out. The percentage
of the configurations that timed out analyzing a program ranged
from 0\% (\opt{xalan}) to 90\% (\opt{chart}).
\vspace{-9pt}
\paragraph*{Statistical Analysis.}
To answer RQ1 and RQ2, we constructed a model for each question using
multiple linear regression. Roughly put, we attempt to produce a model
of performance (RQ1) and precision (RQ2)---the
\emph{dependent variables}---in terms of a linear combination of
analysis configuration options (i.e., one choice from each of the five categories given in
Table~\ref{table:ac}) and the benchmark program (i.e., one
of the eleven subjects from DaCapo)---the \emph{independent variables}.
We include the programs themselves as independent variables, which
allows us to roughly factor out program-specific sources of
performance or precision gain/loss (which might include size,
complexity, etc.); this is standard in this sort of regression~\cite{seltman}.
Our models also consider all two-way interactions among analysis
options. In our scenario, a significant interaction between two option
settings suggests that the combination of them has a different impact
on the analysis precision and/or performance compared to their
independent impact.
To obtain a model that best fits the data, we performed variable
selection via the Akaike Information Criterion (AIC)
\cite{burnham2011aic}, a standard measure of model quality. AIC
drops insignificant independent variables to better estimate the
impact of analysis options. The R$^2$ values for the models are good,
with the lowest of any model being 0.71.
After performing the regression, we examine the results to discover
potential trends. Then we draw plots to examine how those trends
manifest in the different programs. This lets us study the whole
distribution, including outliers and any non-linear behavior, in a way
that would be difficult if we just looked at the regression model. At
the same time, if we only looked at plots it would be hard to see
general trends because there is so much data.
\vspace{-9pt}
\paragraph*{Threats to Validity.}
There are several potential threats to the validity of our
study. First, the benchmark programs may not be representative
of programs that analysis users are interested in.
That said, the
programs were drawn from a well-studied benchmark suite, so they
should provide useful insights.
Second, the insights drawn from the results of the array index
out-of-bounds analysis may not reflect the trends of other analysis
clients. We note that array bounds checking is a standard, widely
used analysis.
Third, we examined a design space of 162 analysis configurations, but
there are other design choices we did not explore.
Thus, there may be other independent variables
that have important effects. In addition, there may be limitations
specific to our implementation, e.g., due to precisely how WALA
implements points-to analysis.
Even so, we relied on time-tested implementations as much as
possible, and arrived at our choices of
analysis features by studying the literature and conversing with experts. Thus,
we believe our study has value even if further variables are
worth studying.
Fourth, for our experiments we ran each analysis configuration three
times, and thus performance variation may not be fully accounted
for. While more trials would add greater statistical assurance, each
trial takes about a week to run on our benchmark machines, and we
observed no variation in precision across the trials. We did observe variations
in performance, but they were small and did not affect the broader trends.
In more detail, we computed the variation in running time among the three runs of a
configuration as {\tt (max-min)/median}.
The average variation across all configurations is
only 4.2\%. The maximum total time difference (\texttt{max-min}) is 32 minutes, an outlier
from \opt{eclipse}. All the other time differences are within 4
minutes.
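Concretely, the {\tt (max-min)/median} metric can be computed as follows (the run times here are made up for illustration):

```python
def variation(times):
    # The text's metric over three runs of one configuration:
    # (max - min) / median.
    times = sorted(times)
    assert len(times) == 3
    return (times[2] - times[0]) / times[1]

runs = [10.0, 10.2, 10.5]  # hypothetical running times, in minutes
assert abs(variation(runs) - 0.5 / 10.2) < 1e-9   # ~4.9% variation
```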
\vspace{-9pt}
\subsection{RQ1: Performance}
\vspace{-3pt}
\label{perf}
\begin{table}[t!]
\centering
\begin{scriptsize}
\begin{tabular}{|c|c|r|r|r|}
\hline
{\bf Option} & {\bf Setting} & \textbf{Est. (min)} & \multicolumn{1}{c|}{\bf CI} & {\bf $p$-value} \\
\hline \hline
\multirow{3}{*}{\opt{AO}} &
\opt{TD} & - & - & -\\
\hhline{~----}
& \opt{BU} & -1.98 & [-6.3, 1.76] & 0.336\\
\hhline{~----}
& \opt{TD+BU} & 1.97 & [-1.78, 6.87] & 0.364\\
\hline
\multirow{3}{*}{\opt{HA}} &
\opt{AP+SO} & - & - & -\\
\hhline{~----}
& \cellcolor{green} \opt{AP} & \cellcolor{green}-37.6 &\cellcolor{green} [-42.36, -32.84] &\cellcolor{green} $<$0.001\\
\hhline{~----}
& \opt{SO} & 0.15 & [-4.60, 4.91] & 0.949\\
\hline
\multirow{3}{*}{\opt{CS}} &
\opt{1TYP} & - & - & -\\
\hhline{~----}
& \cellcolor{green} \opt{CI} & \cellcolor{green}-7.09 &\cellcolor{green} [-10.89, -3.28] &\cellcolor{green} $<$0.001\\
\hhline{~----}
& \opt{1CFA} & 1.62 & [-2.19, 5.42] & 0.405\\
\hline
\multirow{3}{*}{\opt{OR}} &
\opt{ALLO} & - & - & -\\
\hhline{~----}
& \cellcolor{green} \opt{CLAS} & \cellcolor{green}-11.00 &\cellcolor{green} [-15.44, -6.56] &\cellcolor{green} $<$0.001\\
\hhline{~----}
& \cellcolor{green} \opt{SMUS} & \cellcolor{green}-7.15 &\cellcolor{green} [-11.59, -2.70] &\cellcolor{green} 0.002\\
\hline
\multirow{2}{*}{\opt{ND}} &
\opt{POL} & - & - & -\\
\hhline{~----}
& \cellcolor{green} \opt{INT} & \cellcolor{green}-16.51 &\cellcolor{green} [-19.56, -13.46] &\cellcolor{green} $<$0.001\\
\hline \hline
\multirow{5}{*}{\opt{AO:HA}} &
\opt{TD:AP+SO} & - & - & -\\
\hhline{~----}
& \cellcolor{green} \opt{BU:AP} & \cellcolor{green}-5.31 &\cellcolor{green} [-9.35, -1.27] &\cellcolor{green} 0.01\\
\hhline{~----}
& \opt{TD+BU:AP} & -3.13 & [-7.38, 1.12] & 0.15\\
\hhline{~----}
& \opt{BU:SO} & 0.11 & [-3.92, 4.15] & 0.956\\
\hhline{~----}
& \opt{TD+BU:SO} & -0.08 & [-4.33, 4.17] & 0.97\\
\hline
\multirow{5}{*}{\opt{AO:OR}} &
\opt{TD:ALLO} & - & - & -\\
\hhline{~----}
& \cellcolor{green} \opt{BU:CLAS} & \cellcolor{green}-8.87 &\cellcolor{green} [-12.91, -4.83] &\cellcolor{green} $<$0.001\\
\hhline{~----}
& \cellcolor{green} \opt{BU:SMUS} & \cellcolor{green}-4.23 &\cellcolor{green} [-8.27, -0.19] &\cellcolor{green} 0.04\\
\hhline{~----}
& \opt{TD+BU:CLAS} & -4.07 & [-8.32, 0.19] & 0.06\\
\hhline{~----}
& \opt{TD+BU:SMUS} & -2.52 & [-6.77, 1.74] & 0.247\\
\hline
\multirow{3}{*}{\opt{AO:ND}} &
\opt{TD:POL} & - & - & -\\
\hhline{~----}
& \cellcolor{green} \opt{BU:INT} & \cellcolor{green}8.04 &\cellcolor{green} [4.73, 11.33] &\cellcolor{green} $<$0.001\\
\hhline{~----}
& \opt{TD+BU:INT} & 2.35 & [-1.12, 5.82] & 0.185\\
\hline
\multirow{5}{*}{\opt{HA:CS}} &
\opt{AP+SO:1TYP} & - & - & -\\
\hhline{~----}
& \cellcolor{green} \opt{AP:1CFA} & \cellcolor{green}7.01 &\cellcolor{green} [2.83, 11.17] &\cellcolor{green} $<$0.001\\
\hhline{~----}
& \opt{AP:CI} & 3.38 & [-0.79, 7.54] & 0.112\\
\hhline{~----}
& \opt{SO:CI} & -0.20 & [-4.37, 3.96] & 0.924\\
\hhline{~----}
& \opt{SO:1CFA} & -0.21 & [-4.37, 3.95] & 0.921\\
\hline
\multirow{5}{*}{\opt{HA:OR}} &
\opt{AP+SO:ALLO} & - & - & -\\
\hhline{~----}
& \cellcolor{green} \opt{AP:CLAS} & \cellcolor{green}9.55 &\cellcolor{green} [5.37, 13.71] &\cellcolor{green} $<$0.001\\
\hhline{~----}
& \cellcolor{green} \opt{AP:SMUS} & \cellcolor{green}6.25 &\cellcolor{green} [2.08, 10.42] &\cellcolor{green} $<$0.001\\
\hhline{~----}
& \opt{SO:SMUS} & 0.07 & [-4.09, 4.24] & 0.973\\
\hhline{~----}
& \opt{SO:CLAS} & -0.43 & [-4.59, 3.73] & 0.839\\
\hline
\multirow{3}{*}{\opt{HA:ND}} &
\opt{AP+SO:POL} & - & - & -\\
\hhline{~----}
& \cellcolor{green} \opt{AP:INT} & \cellcolor{green}6.94 &\cellcolor{green} [3.53, 10.34] &\cellcolor{green} $<$0.001\\
\hhline{~----}
& \opt{SO:INT} & 0.08 & [-3.32, 3.48] & 0.964\\
\hline
\multirow{5}{*}{\opt{CS:OR}} &
\opt{1TYP:ALLO} & - & - & -\\
\hhline{~----}
& \cellcolor{green} \opt{CI:CLAS} & \cellcolor{green}4.76 &\cellcolor{green} [0.59, 8.93] &\cellcolor{green} 0.025\\
\hhline{~----}
& \opt{CI:SMUS} & 4.02 & [-0.15, 8.18] & 0.05\\
\hhline{~----}
& \opt{1CFA:CLAS} & -3.09 & [-7.25, 1.08] & 0.147\\
\hhline{~----}
& \opt{1CFA:SMUS} & -0.52 & [-4.68, 3.64] & 0.807\\
\hline
\end{tabular}
\end{scriptsize}
~\\
\caption{\textbf{Model of run-time performance} in terms of
analysis configuration options (Table~\ref{table:ac}),
including two-way interactions. Independent variables
for individual programs not shown. $R^2$ of 0.72.}
\vspace{-18pt}
\label{tab:performance}
\end{table}
\begin{table}[t!]
\centering
\begin{scriptsize}
\begin{tabular}{|c|c|r|r|r|r|}
\hline
{\bf Option} & {\bf Setting} & \textbf{Coef.} & \multicolumn{1}{c|}{\bf CI} & \textbf{Exp(coef.)} & {\bf $p$-value} \\
\hline \hline
\multirow{3}{*}{\opt{AO}} &
\opt{TD} & - & - & -& -\\
\hhline{~-----}
&\cellcolor{green} \opt{BU} &\cellcolor{green} -1.47 &\cellcolor{green} [-2.04, -0.92] &\cellcolor{green} 0.23 & \cellcolor{green} $<$0.001\\
\hhline{~-----}
& \opt{TD+BU} & 0.09 & [-0.46, 0.65] & 1.09 & 0.73\\
\hline
\multirow{3}{*}{\opt{HA}} &
\opt{AP+SO} & - & - & -& -\\
\hhline{~-----}
& \cellcolor{green} \opt{AP} & \cellcolor{green}-10.6 &\cellcolor{green} [-12.29, -9.05] &\cellcolor{green} 2.49E-5&\cellcolor{green} $<$0.001\\
\hhline{~-----}
& \opt{SO} & 0.03 & [-0.46, 0.53] & 1.03 & 0.899\\
\hline
\multirow{3}{*}{\opt{CS}} &
\opt{1TYP} & - & - & -& -\\
\hhline{~-----}
& \cellcolor{green} \opt{CI} & \cellcolor{green}-0.89 &\cellcolor{green} [-1.46, -0.34] & \cellcolor{green}0.41 &\cellcolor{green} 0.002\\
\hhline{~-----}
&\cellcolor{green} \opt{1CFA} &\cellcolor{green} 0.94 &\cellcolor{green} [0.39, 1.49] &\cellcolor{green} 2.56 &\cellcolor{green} 0.001\\
\hline
\multirow{3}{*}{\opt{OR}} &
\opt{ALLO} & - & - & -& -\\
\hhline{~-----}
& \cellcolor{green} \opt{CLAS} & \cellcolor{green}-3.84 &\cellcolor{green} [-4.59, -3.15] &\cellcolor{green} 0.02 &\cellcolor{green} $<$0.001\\
\hhline{~-----}
& \cellcolor{green} \opt{SMUS} & \cellcolor{green}-1.78 &\cellcolor{green} [-2.36, -1.23] &\cellcolor{green} 0.17 &\cellcolor{green} $<$0.001\\
\hline
\multirow{2}{*}{\opt{ND}} &
\opt{POL} & - & - & -& -\\
\hhline{~-----}
& \cellcolor{green} \opt{INT} & \cellcolor{green}-3.73 &\cellcolor{green} [-4.40, -3.13] &\cellcolor{green} 0.02 &\cellcolor{green} $<$0.001\\
\hline
\end{tabular}
\end{scriptsize}
~\\
\caption{\textbf{Model of timeout} in terms of
analysis configuration options (Table~\ref{table:ac}). Independent variables
for individual programs not shown. $R^2$ of 0.77.}
\vspace{-18pt}
\label{tab:timeout}
\end{table}
Table \ref{tab:performance} summarizes our regression model for
performance. We measure performance as the time to run the core
analysis and perform array index out-of-bounds checking. If a
configuration timed out while analyzing a program, we set its running
time as one hour, the time limit (characterizing a lower bound on the
configuration's performance impact). Another option would
have been to leave the configuration out of the regression, but
doing so would underrepresent the important negative contribution to
performance.
In the top part of the table, the first column shows the independent
variables and the second column shows a setting. One of the settings,
identified by dashes in the remaining columns, is the baseline in the
regression. We use the following settings as baselines: \opt{TD},
\opt{AP+SO}, \opt{1TYP}, \opt{ALLO}, and \opt{POL}.
We chose the baseline according to what we expected to be the most
precise settings. For the other settings, the third column
shows the estimated effect of that setting with all other settings
(including the choice of program, each an independent variable) held
fixed. For example, the fifth row of the table shows that
\opt{AP} (only) decreases
overall analysis time by 37.6 minutes compared to \opt{AP+SO} (and the
other baseline settings). The fourth column shows the 95\% confidence interval around
the estimate, and the last column shows the $p$-value. As is standard,
we consider $p$-values less than 0.05 (5\%) significant; such rows are
highlighted green.
The bottom part of the table shows the additional effects of two-way
combinations of options compared to the baseline effects of each
option. For example, the \opt{BU:CLAS} row shows a coefficient of
-8.87. We add this to the individual effects of \opt{BU} (-1.98) and
\opt{CLAS} (-11.0) to compute that \opt{BU:CLAS} is 21.9 minutes faster (since the number is
negative) than the baseline pair of \opt{TD:ALLO}. Not all
interactions are shown, e.g., \opt{AO:CS} is not in the table. Any
interactions not included were deemed not to have meaningful effect
and thus were dropped by the model generation process~\cite{burnham2011aic}.
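The arithmetic for reading an interaction row can be spelled out directly (coefficients taken from the performance model's \opt{BU}, \opt{CLAS}, and \opt{BU:CLAS} rows):

```python
# Combined effect of a setting pair = sum of each option's individual
# estimate plus the interaction coefficient.
effect_BU = -1.98            # AO = BU, vs. baseline TD
effect_CLAS = -11.00         # OR = CLAS, vs. baseline ALLO
interaction_BU_CLAS = -8.87  # additional effect of the combination
total = effect_BU + effect_CLAS + interaction_BU_CLAS
assert round(total, 2) == -21.85   # ~21.9 minutes faster than TD:ALLO
```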
Setting the running time of a timed-out configuration as one hour in
Table \ref{tab:performance} may under-report a configuration's
(negative) performance impact. For a more complete view, we follow the suggestion of
Arcuri and Briand \cite{Arcuri:2011:PGU:1985793.1985795},
and construct a model of success/failure using logistic regression.
We consider ``if a configuration timed out'' as the categorical dependent variable,
and the analysis configuration options and the benchmark programs as independent variables.
Table \ref{tab:timeout} summarizes our logistic regression model for timeout.
The coefficients in the third column
represent the change in log likelihood associated
with each configuration setting, compared to the baseline setting.
Negative coefficients indicate lower likelihood
of timeout. The
exponential of the coefficient, Exp(coef) in the fifth column, indicates roughly
how strongly that configuration setting being turned on affects the likelihood
relative to the baseline setting. For example, the third row of the table shows that
\opt{BU} is more than four times less likely to time out than \opt{TD}, a significant
factor in the model.
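The odds-ratio reading of a coefficient can be reproduced directly (the \opt{BU} coefficient comes from the timeout model):

```python
import math

# exp(coefficient) gives the odds ratio of timing out relative to the
# baseline setting (here, BU vs. baseline TD).
coef_BU = -1.47
odds_ratio = math.exp(coef_BU)
assert abs(odds_ratio - 0.23) < 0.005   # matches the Exp(coef.) column
assert 4 < 1 / odds_ratio < 5           # BU: roughly 4-5x lower odds of timeout
```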
\smallskip
Table \ref{tab:performance} and \ref{tab:timeout} present several interesting performance
trends.
{\it Summary objects incur a significant slowdown.} Use of summary
objects results in a very large slowdown, with high significance. We
can see this in the \opt{AP} row in Table \ref{tab:performance}. It indicates that using
\emph{only} \opt{AP} results in an average 37.6-minute speedup
compared to the baseline \opt{AP+SO} (while \opt{SO} alone showed no
significant difference from the baseline).
We observed a similar trend in Table \ref{tab:timeout}; use of summary objects
has the largest effect, with high significance, on the likelihood of timeout.
Indeed, 624 out of the 667 analyses that timed out had
summary objects enabled (i.e., \opt{SO} or \opt{AP+SO}). We
investigated further and found that the slowdown from summary objects is
mostly due to the significantly larger number of dimensions included in
the abstract state. For example, analyzing \opt{jython} with
\opt{AP-TD-CI-ALLO-INT} has, on average, 11 numeric variables when
analyzing a method, and the whole analysis finished in 15
minutes. Switching \opt{AP} to \opt{SO} resulted in, on average, 1473
variables per analyzed method and the analysis ultimately timed out.
{\it The polyhedral domain is slow, but not as slow as summary objects.}
Choosing \opt{INT} over baseline \opt{POL} nets a speedup of 16.51
minutes. This is the second-largest performance effect with high
significance, though it is half as large as the effect of
\opt{SO}. Moreover, per Table \ref{tab:timeout}, turning on \opt{POL} is more likely to result
in timeout; 409 out of 667 analyses that timed out
used \opt{POL}.
{\it Heavyweight \opt{CS} and \opt{OR} settings hurt performance,
particularly when using summary objects.} For \opt{CS} settings,
\opt{CI} is faster than baseline \opt{1TYP} by 7.1 minutes, while
there is not a statistically significant difference with
\opt{1CFA}. For the \opt{OR} settings, we see that the more
lightweight representations \opt{CLAS} and \opt{SMUS} are faster than
baseline \opt{ALLO} by 11.00 and 7.15 minutes, respectively, when
using baseline \opt{AP+SO}. This makes sense because these
representations have a direct effect on reducing the number of summary
objects. Indeed, when summary objects are disabled, the performance
benefit disappears: \opt{AP:CLAS} and \opt{AP:SMUS} add
back 9.55 and 6.25 minutes, respectively.
{\it Bottom-up analysis provides no substantial performance advantage.}
Table \ref{tab:timeout} indicates that a \opt{BU} analysis
is less likely to time out than a \opt{TD} analysis.
However, the performance model in Table \ref{tab:performance}
does not show a performance advantage of bottom-up analysis:
neither \opt{BU} nor \opt{TD+BU} provide a
statistically significant impact on running time over baseline
\opt{TD}. Capping the running time of timed-out configurations at one hour
in the performance model may fail to
capture the full performance penalty of top-down analysis.
This observation underpins the utility of constructing a success/failure
analysis to complement the performance model.
In any case, we might have expected bottom-up analysis to provide a real performance
advantage (Section \ref{subsec:interproc}), but that is not what we
have observed.
\begin{table}[t!]
\centering
\begin{scriptsize}
\begin{tabular}{|c|c|r|r|r|}
\hline
{\bf Option} & {\bf Setting} & \textbf{Est. (\#)} & \multicolumn{1}{c|}{\bf CI} & {\bf $p$-value} \\
\hline \hline
\multirow{3}{*}{\opt{AO}} &
\opt{TD} & - & - & -\\
\hhline{~----}
& \cellcolor{green} \opt{TD+BU} & \cellcolor{green}-134.22 &\cellcolor{green} [-184.93, -83.50] &\cellcolor{green} $<$0.001\\
\hhline{~----}
& \cellcolor{green} \opt{BU} & \cellcolor{green}-129.98 &\cellcolor{green} [-180.24, -79.73] &\cellcolor{green} $<$0.001\\
\hline
\multirow{3}{*}{\opt{HA}} &
\opt{AP+SO} & - & - & -\\
\hhline{~----}
& \cellcolor{green} \opt{SO} & \cellcolor{green}-94.46 &\cellcolor{green} [-166.79, -22.13] &\cellcolor{green} 0.011\\
\hhline{~----}
& \opt{AP} & -5.24 & [-66.47, 55.99] & 0.866\\
\hline
\multirow{3}{*}{\opt{OR}} &
\opt{ALLO} & - & - & -\\
\hhline{~----}
& \cellcolor{green} \opt{CLAS} & \cellcolor{green}-90.15 &\cellcolor{green} [-138.80, -41.5] &\cellcolor{green} $<$0.001\\
\hhline{~----}
& \opt{SMUS} & 35.47 & [-14.72, 85.67] & 0.166\\
\hline
\multirow{2}{*}{\opt{ND}} &
\opt{POL} & - & - & -\\
\hhline{~----}
& \opt{INT} & 5.11 & [-28.77, 38.99] & 0.767\\
\hline \hline
\multirow{5}{*}{\opt{AO:HA}} &
\opt{TD:AP+SO} & - & - & -\\
\hhline{~----}
& \cellcolor{green} \opt{BU:SO} & \cellcolor{green}-686.79 &\cellcolor{green} [-741.82, -631.76] &\cellcolor{green} $<$0.001\\
\hhline{~----}
& \cellcolor{green} \opt{TD+BU:SO} & \cellcolor{green}-630.99 &\cellcolor{green} [-687.41, -574.56] &\cellcolor{green} $<$0.001\\
\hhline{~----}
& \cellcolor{green} \opt{TD+BU:AP} & \cellcolor{green}63.59 &\cellcolor{green} [14.71, 112.47] &\cellcolor{green} 0.011\\
\hhline{~----}
& \cellcolor{green} \opt{BU:AP} & \cellcolor{green}58.92 &\cellcolor{green} [11.75, 106.1] &\cellcolor{green} 0.014\\
\hline
\multirow{5}{*}{\opt{AO:OR}} &
\opt{TD:ALLO} & - & - & -\\
\hhline{~----}
& \cellcolor{green} \opt{TD+BU:CLAS} & \cellcolor{green}156.31 &\cellcolor{green} [107.78, 204.83] &\cellcolor{green} $<$0.001\\
\hhline{~----}
& \cellcolor{green} \opt{BU:CLAS} & \cellcolor{green}141.46 &\cellcolor{green} [94.13, 188.80] &\cellcolor{green} $<$0.001\\
\hhline{~----}
& \opt{BU:SMUS} & -29.16 & [-77.69, 19.37] & 0.238\\
\hhline{~----}
& \opt{TD+BU:SMUS} & -29.25 & [-79.23, 20.72] & 0.251\\
\hline
\multirow{5}{*}{\opt{HA:OR}} &
\opt{AP+SO:ALLO} & - & - & -\\
\hhline{~----}
& \cellcolor{green} \opt{SO:CLAS} & \cellcolor{green}-351.01 &\cellcolor{green} [-408.35, -293.67] &\cellcolor{green} $<$0.001\\
\hhline{~----}
&\cellcolor{green} \opt{SO:SMUS} &\cellcolor{green} -72.23 &\cellcolor{green} [-131.99, -12.47] &\cellcolor{green} 0.017\\
\hhline{~----}
& \opt{AP:SMUS} & -16.88 & [-67.20, 33.44] & 0.51\\
\hhline{~----}
& \opt{AP:CLAS} & -8.81 & [-57.84, 40.20] & 0.724\\
\hline
\multirow{3}{*}{\opt{HA:ND}} &
\opt{AP+SO:POL} & - & - & -\\
\hhline{~----}
& \cellcolor{green} \opt{AP:INT} & \cellcolor{green}-58.87 &\cellcolor{green} [-99.39, -18.35] &\cellcolor{green} 0.004\\
\hhline{~----}
& \cellcolor{green} \opt{SO:INT} & \cellcolor{green}-61.96 &\cellcolor{green} [-109.08, -14.84] &\cellcolor{green} 0.01\\
\hline
\end{tabular}
\end{scriptsize}
\caption{\textbf{Model of precision}, measured as \# of array indexes
proved in bounds, in terms of
analysis configuration options (Table~\ref{table:ac}),
including two-way interactions. Independent variables
for individual programs not shown. $R^2$ of 0.98.}
\vspace{-18pt}
\label{tab:precision}
\end{table}
\vspace{-6pt}
\subsection{RQ2: Precision}
\label{sec:precision}
Table \ref{tab:precision} summarizes our regression model for
precision, using the same format as Table~\ref{tab:performance}. We
measure precision as the number of array indexes proven to be in
bounds. As recommended by Arcuri and
Briand~\cite{Arcuri:2011:PGU:1985793.1985795}, we omit from the
regression those configurations that timed out.\footnote{The
alternative of setting precision to be 0 would misrepresent the
general power of a configuration, particularly when combined with
runs that did not time out. Fewer runs might reduce statistical
power, however, which is captured in the model.} We see several
interesting trends.
{\it Access paths are critical to precision.} Removing access paths
from the configuration, by switching from \opt{AP+SO} to \opt{SO}, yields
significantly lower precision. We see this in the \opt{SO} (only)
row in the table, and in all of its interactions (i.e., \opt{SO:$opt$} and
\opt{$opt$:SO} rows). In contrast,
\opt{AP} on its own is not statistically worse than \opt{AP+SO},
indicating that summary objects often add little precision.
This is unfortunate, given their high performance cost.
{\it Bottom-up analysis harms precision overall, especially for
\opt{SO} (only).} \opt{BU} has a strongly negative effect on
precision: 129.98 fewer checks compared to \opt{TD}. Coupled with \opt{SO} it fares even worse:
\opt{BU:SO} nets 686.79 fewer checks, and \opt{TD+BU:SO} nets 630.99
fewer. For example, for \opt{xalan} the most precise configuration,
which uses \opt{TD} and \opt{AP+SO}, discharges 981 checks, while all
configurations that instead use \opt{BU} and \opt{SO} on \opt{xalan}
discharge close to zero checks. The same basic trend holds for just
about every program.
{\it The relational domain only slightly improves precision.} The row
for \opt{INT} is not statistically different from the baseline
\opt{POL}. This is a bit of a surprise, since by itself \opt{POL}
is strictly more precise than \opt{INT}.
In fact, it does improve precision empirically
when coupled with either \opt{AP} or \opt{SO}---the interactions
\opt{AP:INT} and \opt{SO:INT} reduce the number of discharged checks. This sets
up an interesting performance tradeoff that we explore in Section~\ref{sec:rq3}: using
\opt{AP+SO} with \opt{INT} vs. using \opt{AP} with \opt{POL}.
{\it More precise abstract object representation improves
precision, but context sensitivity does not.}
The table shows \opt{CLAS} discharges 90.15 fewer checks compared to
\opt{ALLO}. Examining the data in detail, we found this occurred
because \opt{CLAS} conflates all arrays of the same type as one abstract
object, thus imprecisely approximating those arrays' lengths, in turn causing
some checks to fail.
Also notice that context sensitivity (\opt{CS}) does not appear in the
model, meaning it does not significantly increase or decrease the
precision of array bounds checking. This is interesting, because
context-sensitivity is known to reduce points-to set
size~\cite{lhotak2008evaluating,smaragdakis2011pick} (thus yielding
more precise alias checks and dispatch targets). However, for our application
this improvement has minimal impact.
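The role that array-length precision plays in discharging a check can be seen in a minimal interval-domain sketch (ours, purely for intuition; the evaluated domains are far richer):

```python
# Minimal interval abstract domain (illustrative only): enough to
# discharge a bounds check a[i] when the index interval provably sits
# inside [0, length) for every possible array length.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi          # inclusive bounds

    def add(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

def in_bounds(index, length):
    """True iff every concrete index is valid for every concrete length."""
    return index.lo >= 0 and index.hi < length.lo

# for (i = 0; i < 10; i++) a[i] ...  with a.length known to be exactly 10:
assert in_bounds(Interval(0, 9), Interval(10, 10))        # check discharged

# If the length is imprecise (e.g., several arrays of one type conflated
# into one abstract object), the same check can no longer be discharged:
assert not in_bounds(Interval(0, 9), Interval(0, 10))
```

Under \opt{CLAS}, many distinct arrays share one abstract object, so their abstract length degrades toward the second case above and otherwise-provable checks are missed.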
\vspace{-6pt}
\subsection{RQ3: Tradeoffs}
\label{sec:rq3}
Finally, we examine how analysis settings affect the tradeoff between
precision and performance.
To begin our discussion, recall Table~\ref{tab:benchoverall}
(page~\pageref{tab:benchoverall}),
which shows the fastest configuration and the most precise
configuration for each benchmark. Further, the table shows the
configurations' running time, number of checks discharged, and percentage
of checks discharged.
We see several interesting patterns in this table, though note the table
shows just two data points and not the full distribution. First, the
configurations in each column are remarkably consistent. The fastest
configurations are all of the form \opt{BU-AP-CI-*-INT}, only varying
in the abstract object representation. The most precise configurations
are more variable, but all include \opt{TD} and some form of
\opt{AP}. The rest of the options differ somewhat, with different
forms of precision benefiting different benchmarks. Finally, notice
that, overall, the fastest configurations are much faster than the
most precise configurations---often by an order of magnitude---but
they are not that much less precise---typically by 5--10 percentage
points.
To delve further into the tradeoff, we examine, for each program, the
overall performance and precision distribution for the analysis
configurations, focusing on particular options (\opt{HA}, \opt{AO},
etc.). As settings of option \opt{HA} have come up prominently in
our discussion so far, we start with it and then move through the
other options. Figure~\ref{fig:tradeoff} gives per-benchmark scatter plots of
this data. Each plotted point corresponds to one configuration, with
its performance on the $x$-axis and number of discharged array bounds
checks on the $y$-axis. We regard a configuration that times
out as discharging no checks, so it is
plotted at $(60, 0)$. The shape of a point indicates the \opt{HA} setting
of the corresponding configuration: black circle for
\opt{AP}, red triangle for \opt{AP+SO}, and blue cross for \opt{SO}.
As a general trend, we see that \emph{access paths improve precision
and do little to harm performance; they should always be enabled.}
More specifically, configurations using \opt{AP} and \opt{AP+SO} (when
they do not time out) are always toward the top of the graph, meaning
good precision. Moreover, the performance profile of \opt{SO} and
\opt{AP+SO} is quite similar, as evidenced by related clusters in the
graphs differing in the $y$-axis but not the $x$-axis. In only one case
did \opt{AP+SO} time out when \opt{SO} alone did
not.\footnote{In particular, for \opt{eclipse},
configuration \opt{TD+BU-SO-1CFA-ALLO-POL} finished at 59 minutes,
while \opt{TD+BU-AP+SO-1CFA-ALLO-POL} timed out.}
On the flip side, \emph{summary objects are a significant performance
bottleneck for a small boost in precision.} On
the graphs, we can see that the black \opt{AP} circles are often among the most
precise, while \opt{AP+SO} tends to be the best ($8/11$ cases in
Table~\ref{tab:benchoverall}). But \opt{AP} configurations are much faster. For
example, for \opt{bloat}, \opt{chart}, and \opt{jython}, only \opt{AP}
configurations complete before the timeout, and for \opt{pmd}, all but
four of the configurations that completed use \opt{AP}.
\begin{figure}[p]
\centering
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff/antlr}
\caption{antlr}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff/bloat}
\caption{bloat}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff/chart}
\caption{chart}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff/eclipse}
\caption{eclipse}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff/fop}
\caption{fop}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff/hsqldb}
\caption{hsqldb}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff/jython}
\caption{jython}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff/luindex}
\caption{luindex}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff/lusearch}
\caption{lusearch}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff/pmd}
\caption{pmd}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff/xalan}
\caption{xalan}
\end{subfigure}
\caption{Tradeoffs: \opt{AP} vs. \opt{SO} vs. \opt{AP+SO}.}
\vspace{-6pt}
\label{fig:tradeoff}
\end{figure}
\begin{figure}[p]
\centering
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-ao/antlr}
\caption{antlr}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-ao/bloat}
\caption{bloat}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-ao/chart}
\caption{chart}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-ao/eclipse}
\caption{eclipse}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-ao/fop}
\caption{fop}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-ao/hsqldb}
\caption{hsqldb}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-ao/jython}
\caption{jython}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-ao/luindex}
\caption{luindex}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-ao/lusearch}
\caption{lusearch}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-ao/pmd}
\caption{pmd}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-ao/xalan}
\caption{xalan}
\end{subfigure}
\caption{Tradeoffs: \opt{TD} vs. \opt{BU} vs. \opt{TD+BU}.}
\vspace{-6pt}
\label{fig:tradeoff-ao}
\end{figure}
\textit{Top-down analysis is preferred: Bottom-up is less precise and
does little to improve performance.} Figure~\ref{fig:tradeoff-ao}
shows a scatter plot of the precision/performance behavior of
all configurations, distinguishing those with \opt{BU} (black
circles), \opt{TD} (red triangles), and \opt{TD+BU} (blue
crosses). Here the trend is not as stark as with \opt{HA},
but we can see that the mass of \opt{TD} points is towards the
upper-left of the plots, except for some timeouts, while \opt{BU} and
\opt{TD+BU} have more configurations at the bottom, with low
precision. By comparing the same $(x, y)$ coordinate on a graph in this
figure with the corresponding graph in the previous one, we can see
options interacting. Observe that the cluster of black circles at the
lower left for \opt{antlr} in Figure~\ref{fig:tradeoff-ao}(a)
corresponds to \opt{SO}-only configurations in
Figure~\ref{fig:tradeoff}(a), thus illustrating the strong negative
interaction on precision of \opt{BU:SO} we discussed in the previous
subsection. The figures (and Table~\ref{tab:benchoverall}) also show
that the best-performing configurations involve bottom-up analysis,
but usually the benefit is inconsistent and very small. And
\opt{TD+BU} does not seem to balance the precision/performance
tradeoff particularly well.
\textit{Precise object representation often helps with precision at a
modest cost to performance.} Figure~\ref{fig:tradeoff-obj} shows a
representative sample of scatter plots illustrating the tradeoff
between \opt{ALLO}, \opt{CLAS}, and \opt{SMUS}. In general, we see
that the highest points tend to be \opt{ALLO}, and these are more to
the right of \opt{CLAS} and \opt{SMUS}. On the other hand, the
precision gains of \opt{ALLO} tend to be modest, and (examining
individual runs) they usually occur in combination with \opt{AP+SO}. However,
summary objects and \opt{ALLO} together greatly increase the risk of
timeouts and low performance. For example, for \opt{eclipse} the row
of circles across the bottom are all \opt{SO}-only.
\begin{figure}[t]
\centering
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-obj/eclipse}
\caption{eclipse}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-obj/lusearch}
\caption{lusearch}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-obj/pmd}
\caption{pmd}
\end{subfigure}
\caption{Tradeoffs: \opt{ALLO} vs. \opt{SMUS} vs. \opt{CLAS}.}
\vspace{-6pt}
\label{fig:tradeoff-obj}
\end{figure}
\begin{figure}[p]
\centering
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-nd/antlr}
\caption{antlr}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-nd/bloat}
\caption{bloat}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-nd/chart}
\caption{chart}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-nd/eclipse}
\caption{eclipse}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-nd/fop}
\caption{fop}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-nd/hsqldb}
\caption{hsqldb}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-nd/jython}
\caption{jython}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-nd/luindex}
\caption{luindex}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-nd/lusearch}
\caption{lusearch}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-nd/pmd}
\caption{pmd}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/tradeoff-nd/xalan}
\caption{xalan}
\end{subfigure}
\caption{Tradeoffs: \opt{INT} vs. \opt{POL}.}
\vspace{-6pt}
\label{fig:tradeoff-nd}
\end{figure}
\textit{The precision gains of \opt{POL} are more modest than the gains
due to using \opt{AP+SO} (over \opt{AP}).}
Figure~\ref{fig:tradeoff-nd} shows scatter plots comparing
\opt{INT} and \opt{POL}. We investigated several groupings in more
detail and found an interesting interaction between the numeric domain
and the heap abstraction:
\opt{POL} is often better than \opt{INT} for \opt{AP}
(only). For example, the points in the upper left of \opt{bloat} use
\opt{AP}, and \opt{POL} is slightly better than \opt{INT}. The same
phenomenon occurs in \opt{luindex} in the cluster of triangles and
circles to the upper left. But \opt{INT} does better further up and to the right in
\opt{luindex}. This is because these configurations use \opt{AP+SO},
which times out when \opt{POL} is enabled. A similar phenomenon
occurs for the two points in the upper right of \opt{pmd}, and the
most precise points for \opt{hsqldb}. Indeed, when a configuration
with \opt{AP+SO-INT} terminates, it will be more precise than those
with \opt{AP-POL}, but is likely slower.
We manually inspected the cases where \opt{AP+SO-INT} is more precise
than \opt{AP-POL}, and found that this is mostly because of the
limitation that access paths are dropped at method calls.
\opt{AP+SO} rarely
terminates when coupled with \opt{POL} because of the very large
number of dimensions added by summary objects.
\vspace{-6pt}
\section{Related Work}
\label{sec:related}
Our numeric analysis is novel in its focus on fully automatically
identifying numeric invariants in real (heap-manipulating,
method-calling) Java programs, while aiming to be sound. We know of no
prior work that carefully studies precision and performance tradeoffs
in this setting. Prior work tends either to be much more imprecise
and/or intentionally unsound, but more scalable, or to be more precise, but
unable to scale to programs as large as those in the DaCapo benchmark suite.
\vspace{-6pt}
\paragraph*{Numeric vs. heap analysis.}
Many abstract interpretation-based analyses focus on numeric
properties or heap properties, but not both. For example, Calcagno et
al.~\cite{Calcagno:2011:CSA:2049697.2049700} use separation logic to create a
compositional, bottom-up heap analysis. Their client analysis for Java
checks for NULL pointers~\cite{fbinfer}, but not out-of-bounds array
indexes. Conversely, the PAGAI analyzer
\cite{henry2012pagai} for LLVM explores abstract interpretation
algorithms for precise invariants of numeric variables, but ignores
the heap (soundly treating heap locations as $\top$).
\vspace{-6pt}
\paragraph*{Numeric analysis in heap-manipulating programs.}
Fu~\cite{fu2014modularly} first proposed the basic summary object heap
abstraction we explore in this paper. The approach uses a points-to
analysis~\cite{Ryder:2003:DPR:1765931.1765945} as the basis of
generating abstract names for summary objects that are weakly
updated~\cite{gopan2004numeric}. The approach does not support strong
updates to heap objects and ignores procedure calls, making unsound
assumptions about effects of calls to or from the procedure being
analyzed. Fu's evaluation on DaCapo only considered how often the
analysis yields a non-$\top$ field, while ours considers how often the
analysis can prove that an array index is in bounds, which is a more
direct measure of utility. Our experiments strongly suggest that when
modeled soundly and at scale, summary objects add enormous performance
overhead while doing much less to assist precision when compared to
strongly updatable access paths
alone~\cite{De:2012:SFP:2367163.2367203,Wei:2014:SPA:2945642.2945644}.
Some prior work focuses on inferring precise invariants about
heap-allocated objects, e.g., relating the presence of an object in a collection
to the value of one of the object's fields.
Ferrara et al.~\cite{ferrara2014generic,ferrara2015automatic} also
propose a composed analysis for numeric properties of heap
manipulating programs. Their approach is amenable to
both points-to and shape analyses (e.g., TVLA~\cite{lev2000tvla}),
supporting strong updates for the latter.
\textsc{Deskcheck} \cite{mccloskey2010statically} and Chang and
Rival~\cite{Chang:2008:RIS:1328438.1328469,chang2013} also aim to
combine shape analysis and numeric analysis, in both cases requiring
the analyst to specify predicates about the data structures of
interest.
Magill~\cite{magill2010} automatically converts heap-manipulating programs
into integer programs such that proving a numeric property of the latter
implies a numeric shape property (e.g., a list's length) of the
former.
The systems just described support more precise invariants than our approach, but are less
general or scalable: they tend to focus on much smaller programs, they do
not support important language features (e.g., Ferrara's approach lacks
procedures, \textsc{Deskcheck} lacks loops), and may require manual
annotation.
Clousot \cite{logozzoclousot} also aims to check numeric invariants on
real programs that use the heap. Methods are analyzed in isolation
but require programmer-specified pre/post conditions and object
invariants.
In contrast, our interprocedural analysis is fully
automated, requiring no annotations. Clousot's heap analysis makes local, optimistic (and
unsound) assumptions about aliasing,\footnote{Interestingly, Clousot's
assumptions often, but not always, lead to sound
results~\cite{christakis2015experimental}.}
while our approach aims to be
sound by using a global points-to analysis.
\vspace{-6pt}
\paragraph*{Measuring analysis parameter tradeoffs.}
We are not aware of work exploring performance/precision
tradeoffs of features in realistic abstract interpreters.
Oftentimes, papers leave out important algorithmic details. The
initial \textsc{Astr\'{e}e}
paper~\cite{Blanchet:2003:SAL:781131.781153} contains a wealth of ideas, but
does not evaluate them systematically, instead reporting anecdotal
observations about their particular analysis targets. More often, papers
focus on one element of an analysis to evaluate, e.g.,
Logozzo~\cite{logozzo2008pentagons} examines precision and performance
tradeoffs useful for certain kinds of numeric analyses, and
Ferrara~\cite{ferrara2015automatic} evaluates his technique using both
intervals and octagons as the numeric domain. Regarding the latter,
our paper shows that interactions with the heap abstraction can have a
strong impact on the numeric domain precision/performance tradeoff.
Prior work by Smaragdakis et al.~\cite{smaragdakis2011pick}
investigates the performance/precision tradeoffs of various
implementation decisions in points-to analysis. \textsc{Paddle}
\cite{lhotak2008evaluating} evaluates tradeoffs among different
abstractions of heap allocation sites in a points-to analysis, but
specifically only evaluates the heap analysis and not other analyses
that use it.
\vspace{-6pt}
\section{Conclusion and Future Work}
We presented a family of static numeric analyses for Java. These
analyses implement a novel combination of techniques to handle method
calls, heap-allocated objects, and numeric properties. We ran the 162
resulting analysis configurations on the DaCapo benchmark suite, and
measured performance and precision in proving array indexes in bounds.
Using a combination of multiple linear regression and data
visualization, we found several trends. Among others, we discovered
that strongly updatable access paths are always a good idea, adding
significant precision at very little performance cost. We also found
that top-down analysis tended to improve precision at little
cost, compared to bottom-up analysis. On the other hand, while summary
objects did add precision when combined with access paths, they also
added significant performance overhead, often resulting in
timeouts. The polyhedral numeric domain improved precision, but would
time out when using a richer heap abstraction; intervals and a richer
heap would work better.
The results of our study suggest several directions for future work. For example, for
many programs, a much more expensive analysis often added little
precision; a cheap pre-analysis that predicts when the extra cost
pays off would be worthwhile. Another direction is to investigate a
more sparse representation of summary objects that retains their
modest precision benefits, but avoids the overall blowup.
We also plan to consider other analysis configuration options.
Our current implementation uses an ahead-of-time points-to
analysis to model the heap; an alternative solution is to analyze the heap along
with the numeric analysis~\cite{Pioli99combininginterprocedural}. Concerning abstract
object representation and context sensitivity, there are other potentially interesting
choices, e.g., recency abstraction~\cite{Balakrishnan:2006:RHS:2090874.2090894} and
object sensitivity~\cite{Milanova:2005:POS:1044834.1044835}. Other interesting dimensions
to consider are field
sensitivity~\cite{Hind:2001:PAH:379605.379665} and widening, notably
\emph{widening with thresholds}.
Finally, we plan to explore other effective ways to design
hybrid top-down and bottom-up analysis~\cite{zhang2014hybrid},
and investigate sparse inter-procedural analysis for better performance~\cite{Oh:2012:DIS:2254064.2254092}.
\vspace{-6pt}
\subsubsection{Acknowledgments.} We thank Gagandeep Singh for his
help in debugging ELINA. We thank Arlen Cox, Xavier Rival,
and the anonymous reviewers for their detailed feedback and
comments. This research was supported in part by DARPA under contracts
FA8750-15-2-0104 and FA8750-16-C-0022.
\bibliographystyle{splncs03}
\section{Introduction}
Many attractive applications of
modern machine-learning techniques
involve training models using highly sensitive data.
For example,
models trained on
people's personal messages
or detailed medical information
can offer
invaluable insights
into real-world language usage
or the diagnoses and treatment of human diseases~\citep{mcmahan2017learning,liu2017detecting}.
A key challenge in such applications
is to prevent
models from revealing inappropriate details of the sensitive data---a
non-trivial task,
since models
are known to implicitly memorize such details during training
and also to inadvertently reveal them during inference~\citep{zhang2016understanding,shokri2016membership}.
Recently,
two promising, new model-training approaches
have offered the hope
that practical, high-utility machine learning
may be compatible with
strong privacy-protection guarantees for sensitive training data~\citep{pate-dpsgd}.
This paper revisits
one of these approaches,
\emph{Private Aggregation of Teacher Ensembles}, or PATE~\citep{papernot2016semi},
and develops techniques that
improve its scalability and practical applicability.
PATE
has the advantage of
being able to
learn from the aggregated consensus
of separate ``teacher'' models trained on disjoint data,
in a manner that both provides intuitive privacy guarantees
and is
agnostic to the underlying machine-learning techniques
(cf.\ the approach of
differentially-private stochastic gradient descent \citep{abadi2016deep}).
In the PATE approach,
multiple teachers are trained on disjoint sensitive data
(e.g., different users' data),
and the teachers' aggregate consensus answers are used in a black-box fashion
to supervise the training of
a ``student'' model.
By publishing only the student model (keeping the teachers private)
and
by adding carefully-calibrated Laplacian noise
to the aggregate answers used to train the student,
the original PATE work
showed how to establish
rigorous $(\varepsilon,\delta)$
differential-privacy guarantees~\citep{papernot2016semi}---a gold standard of privacy~\citep{dwork2006calibrating}.
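A simplified sketch of this Laplacian noisy-max aggregation (the mechanism later referred to as LNMax; function and parameter names are ours):

```python
import numpy as np

def lnmax_aggregate(votes, num_classes, gamma, rng):
    """LNMax sketch: add Laplacian noise of inverse scale `gamma` to each
    class's vote count and release the noisy plurality label.
    `votes` holds each teacher's predicted class for one student query."""
    counts = np.bincount(votes, minlength=num_classes).astype(float)
    noisy = counts + rng.laplace(0.0, 1.0 / gamma, size=num_classes)
    return int(np.argmax(noisy))

# 100 teachers with a strong consensus on class 2:
teacher_votes = np.array([2] * 80 + [5] * 15 + [7] * 5)
rng = np.random.default_rng(0)
label = lnmax_aggregate(teacher_votes, num_classes=10, gamma=0.05, rng=rng)
# With high probability `label` is 2.
```

A smaller $\gamma$ means more noise and stronger privacy, but a higher chance that the released label differs from the teachers' true plurality.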
However, to date, PATE has been applied to
only simple tasks, like MNIST,
without any realistic, larger-scale evaluation.
The techniques presented in this paper
allow PATE to be applied on a larger scale
to build more accurate models,
in a manner that
improves
both on PATE's intuitive privacy-protection due to the teachers' independent consensus
as well as its differential-privacy guarantees.
As shown in our experiments,
the result is a gain in privacy, utility, and practicality---an
uncommon joint improvement.
\begin{figure}[t]
\centering
\begin{minipage}[b]{.33\textwidth}
\centering
\includegraphics[width=\textwidth]{utility_queries_answered.pdf}
\end{minipage}
\begin{minipage}[b]{.315\textwidth}
\centering
\includegraphics[width=\textwidth]{lnmax_vs_gnmax.pdf}
\end{minipage}
\hspace*{.005\textwidth}
\begin{minipage}[b]{.33\textwidth}
\centering
\includegraphics[width=\textwidth]{noisy_thresholding_check_perf.pdf}
\end{minipage}
\caption{
Our contributions are techniques (Confident-GNMax)
that improve on the original PATE (LNMax)
on all measures.
\textit{Left:} Accuracy is higher throughout training,
despite greatly improved privacy (more in \autoref{table:results_summary}).
\textit{Middle:} The $\varepsilon$ differential-privacy bound on privacy cost
is quartered, at least (more in \autoref{fig:threshold-check}).
\textit{Right:} Intuitive privacy is also improved,
since students are trained on answers with a much stronger consensus among the teachers
(more in \autoref{fig:threshold-check}).
These are results
for a character-recognition task,
using the most favorable LNMax parameters
for a fair comparison.
}
\label{fig:aggreg-lap-vs-gauss}
\end{figure}
The primary technical contributions of this paper
are new mechanisms for aggregating
teachers' answers
that are more selective and add less noise.
On all measures,
our techniques improve on the original PATE mechanism
when evaluated on the same tasks using the same datasets,
as described in \autoref{sec:expt-eval}.
Furthermore,
we evaluate both variants of PATE on a
new, large-scale character recognition task
with 150 output classes, inspired by MNIST.
The results show that PATE can be successfully applied
even to uncurated datasets---with
significant class imbalance
as well as erroneous class labels---and
that our new aggregation mechanisms
improve both privacy and model accuracy.
To be more selective,
our new mechanisms
leverage some pleasant synergies between privacy and utility
in PATE aggregation.
For example,
when teachers disagree,
and there is no real consensus,
the privacy cost is much higher;
however,
since such disagreement
also suggests that the teachers may not give a correct answer,
the answer may simply be omitted.
Similarly,
teachers may avoid giving an answer
when the student is already confidently predicting the right answer.
Additionally, we ensure that these selection steps are themselves done
in a private manner.
To add less noise,
our new PATE aggregation mechanisms
sample Gaussian noise,
since the tails of that distribution diminish far more rapidly
than those of the Laplacian noise
used in the original PATE work.
This reduction greatly increases the chance
that the noisy aggregation of teachers' votes
results in the correct consensus answer,
which is especially important when PATE is scaled
to learning tasks with large numbers of output classes.
However,
changing the sampled noise
requires redoing the entire PATE privacy analysis from scratch
(see \autoref{sec:gaussian-pate} and details in \autoref{ap:privacy-analysis}).
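Putting the selectivity and the Gaussian noise together, a Confident-GNMax-style aggregator can be sketched as follows (our simplified illustration of the noisy-threshold variant only; the paper's calibrated noise scales, threshold, and accompanying privacy accounting are omitted):

```python
import numpy as np

def confident_gnmax(votes, num_classes, threshold, sigma1, sigma2, rng):
    """Confident-GNMax-style sketch: first privately test whether the
    plurality vote is strong enough (noisy threshold check, std `sigma1`);
    only then release the Gaussian-noised plurality label (std `sigma2`).
    Returns None when the query is not answered."""
    counts = np.bincount(votes, minlength=num_classes).astype(float)
    if counts.max() + rng.normal(0.0, sigma1) < threshold:
        return None                      # weak consensus: omit the answer
    noisy = counts + rng.normal(0.0, sigma2, size=num_classes)
    return int(np.argmax(noisy))

rng = np.random.default_rng(0)
strong = np.array([3] * 90 + [1] * 10)   # near-unanimous vote for class 3
weak = np.arange(10).repeat(10)          # 10 votes for each of 10 classes

a = confident_gnmax(strong, 10, threshold=70, sigma1=10.0, sigma2=10.0, rng=rng)
b = confident_gnmax(weak, 10, threshold=70, sigma1=10.0, sigma2=10.0, rng=rng)
# With high probability, `a` is 3 and `b` is None.
```

Crucially, the threshold check is itself noisy, so the decision of whether to answer is also made in a differentially private manner.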
Finally,
of independent interest
are the details of
our evaluation
extending that of the original PATE work.
In particular,
we find
that the virtual adversarial training (VAT) technique of~\citet{miyato2017virtual}
is a good
basis for semi-supervised learning on tasks with many classes,
outperforming
the improved GANs of \citet{salimans2016improved}
used in the original PATE work.
Furthermore,
we explain how to tune
the PATE approach
to achieve very strong privacy ($\varepsilon \approx 1.0$)
along with high utility,
for our
real-world character recognition learning task.
This paper is structured as follows:
\autoref{sec:related-work} discusses related work;
\autoref{sec:pate-overview} gives a background on PATE and an overview of our
work;
\autoref{sec:gaussian-pate} describes our improved aggregation mechanisms;
\autoref{sec:expt-eval} details our experimental evaluation;
\autoref{sec:conclude} offers conclusions;
and proofs are deferred to the Appendices.
\section{Related Work}
\label{sec:related-work}
Differential privacy is by now the gold standard of privacy. It offers a rigorous framework whose threat model
makes few assumptions about the adversary's capabilities, allowing differentially
private algorithms to cope effectively with strong adversaries.
This is not the case for all privacy definitions, as demonstrated by
successful attacks against anonymization techniques~\citep{aggarwal2005k,narayanan2008robust,bindschaedler2017plausible}.
The first learning algorithms adapted to provide differential privacy
with respect to their training data were often linear and convex~\citep{pathak2010multiparty,chaudhuri2011differentially,song2013stochastic,bassily2014differentially,hamm2016learning}.
More recently, successful developments in deep learning called for
differentially private stochastic gradient descent algorithms~\citep{abadi2016deep},
some of which have been tailored to learn in
federated~\citep{mcmahan2017learning} settings.
Differentially private selection mechanisms like GNMax (\autoref{sec:max-of-gaussian}) are commonly used in
hypothesis testing, frequent itemset mining, and as building blocks of more
complicated private mechanisms. The most commonly used differentially private
selection mechanisms are the exponential mechanism~\citep{mcsherry2007mechanism}
and LNMax~\citep{bhaskar2010discovering}. Recent works offer lower bounds on the sample
complexity of such problems~\citep{steinke2017tight,bafna-ullman}.
The Confident and Interactive Aggregator proposed in our work
(\autoref{ssec:confident-aggregator} and \autoref{ssec:interactive-protocol}
resp.) use the intuition
that selecting samples under certain constraints could result in better training
than using samples uniformly at random.
In Machine Learning Theory, active learning~\citep{cohn1994improving} has been shown to allow learning from fewer labeled examples than the passive case (see e.g. \citet{hanneke2014theory}). Similarly, in model stealing \citep{tramer2016stealing}, a goal is to learn a model from limited access to a teacher network.
There is previous work in differential privacy literature~\citep{hardt2010multiplicative, roth2010interactive} where the
mechanism first {\em decides} whether or not to answer a query, and then privately answers the
queries it chooses to answer using a traditional noise-addition mechanism. In these cases,
the sparse vector technique~\citep[Chapter 3.6]{dwork2014algorithmic} helps bound the privacy cost in terms of
the number of answered queries. This is in contrast to our work where a constant \emph{fraction} of
queries get answered and the sparse vector technique does not seem to help
reduce the privacy cost. Closer to our work, \citet{bun2017make} consider a
setting where the answer to a query of interest is often either very large or
very small. They show that a sparse vector-like analysis applies in this case,
where one pays only for queries that are in the middle.
\makeatletter{}\section{Background and Overview}
\label{sec:pate-overview}
We introduce essential components of our approach towards a generic and
flexible framework for machine
learning with provable privacy guarantees for training data.
\subsection{The PATE Framework}
\label{sec:pate-background}
Here, we provide an overview of the PATE framework.
To protect the privacy of training data during learning, PATE transfers knowledge
from an ensemble of teacher models trained on partitions
of the data to a student model.
Privacy guarantees may be understood intuitively and
expressed rigorously in terms of differential privacy.
Illustrated in \autoref{fig:approach-overview},
the PATE framework
consists of three key parts: (1) an ensemble of $n$ teacher
models, (2) an aggregation mechanism and (3) a student model.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{approach-overview}
\caption{Overview of the approach: (1) an ensemble of teachers
is trained on disjoint subsets of the sensitive data, (2) a
student model is trained on public data labeled using the ensemble.}
\label{fig:approach-overview}
\end{figure}
\noindent
\textbf{Teacher models:}
Each teacher is a model trained
independently on a subset of the data whose privacy one wishes
to protect.
The data is partitioned to ensure no
pair of teachers will have trained on overlapping data.
Any learning technique
suitable for the data can be used for any teacher.
Training each teacher on a \emph{partition} of the
sensitive data produces $n$ different models solving the same task. At
inference, teachers independently predict labels.
\noindent
\textbf{Aggregation mechanism:}
When there is a strong consensus among teachers,
the label they almost all agree on does not depend on the model
learned by any given teacher.
Hence, this collective decision is intuitively private with respect to any
given training point---because such a point could have been included
only in one of the teachers' training set.
To provide rigorous guarantees of differential privacy, the aggregation mechanism of the original PATE
framework counts votes assigned to each class, adds carefully calibrated
Laplacian noise to the resulting vote histogram, and outputs the class with
the most noisy votes as the ensemble's prediction. This mechanism is referred to as
the max-of-Laplacian mechanism, or LNMax, going forward.
For a sample $x$ and classes $1, \ldots, m$, let $f_j(x) \in \kset{m}$ denote
the $j$-th teacher model's prediction and $n_i(x)$ denote the vote count for the
$i$-th class (i.e., $n_i(x) \ensuremath{\triangleq} |\{j\colon f_j(x) = i\}|$). The output of the mechanism is
$\mathcal{A}(x) \ensuremath{\triangleq} \argmax_i \left(n_i(x) + \Lap{1/\gamma}\right)$. Through a
rigorous analysis of this mechanism, the PATE framework provides a
differentially private API: the privacy cost of each aggregated prediction
made by the teacher ensemble is known.
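As a minimal sketch, the LNMax aggregation can be written in a few lines of NumPy; the teacher predictions and the noise scale below are illustrative toy values, not the original implementation:

```python
import numpy as np

def lnmax_aggregate(predictions, num_classes, gamma, rng):
    """LNMax: argmax over per-class vote counts perturbed by Laplace noise.

    predictions: teacher labels f_j(x) for a single sample x.
    gamma: inverse noise scale; each count receives Lap(1/gamma) noise.
    """
    votes = np.bincount(predictions, minlength=num_classes)          # n_i(x)
    noisy = votes + rng.laplace(scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(noisy))

# Toy example: 100 teachers with a strong consensus on class 3.
rng = np.random.default_rng(0)
preds = np.array([3] * 90 + [1] * 10)
label = lnmax_aggregate(preds, num_classes=10, gamma=1.0, rng=rng)
```

With such a strong consensus, the noise is very unlikely to flip the plurality, which is exactly the regime in which the data-dependent analysis discussed later gives small privacy costs.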
\noindent
\textbf{Student model:} PATE's final step involves the training of a student
model by knowledge transfer from the teacher ensemble using access to
public---but \emph{unlabeled}---data. To limit the privacy cost of labeling
them, queries are only made to the aggregation mechanism for a subset of
public data to train the student in a semi-supervised way using a fixed number
of queries. The authors note that each additional ensemble prediction
increases the total privacy cost, so the approach cannot support unbounded queries.
Bounding the number of queries fixes the privacy cost and also diminishes the
value of attacks that analyze model parameters to recover training
data~\citep{zhang2016understanding}. The student only sees public data and
privacy-preserving labels.
\subsection{Differential Privacy}
Differential privacy~\citep{dwork2006calibrating}
requires that
the sensitivity of the distribution of an algorithm's output to small
perturbations of its input be limited. The following variant of the definition
captures this intuition formally:
\begin{definition}
A randomized mechanism $\mathcal{M}$ with domain $\mathcal{D}$ and range
$\mathcal{R}$ satisfies $(\ensuremath{\varepsilon},\delta)$-differential privacy if for
any two adjacent inputs $D, D'\in \mathcal{D}$ and for any subset of
outputs $S\subseteq\mathcal{R}$ it holds that:
\begin{equation}
\label{eq:dp}
\mathbf{Pr}[\mathcal{M}(D)\in S]\leq e^{\ensuremath{\varepsilon}} \cdot \mathbf{Pr}[\mathcal{M}(D')\in S]+\delta.
\end{equation}
\end{definition}
For our application of differential privacy to ML,
adjacent inputs are defined as two datasets that only differ by one training
example and the randomized mechanism $\mathcal{M}$ would be the model training
algorithm.
The privacy parameters have the following natural interpretation: $\ensuremath{\varepsilon}$ is an upper bound on the loss of privacy, and $\delta$ is the probability with which this guarantee may not hold. Composition theorems~\citep{dwork2014algorithmic} allow us to keep track of the privacy cost when we run a sequence of mechanisms.
\subsection{R\'enyi\xspace Differential Privacy}
\label{sec:rdp}
\cite{papernot2016semi} note that the natural approach to bounding PATE's privacy
loss---by bounding the privacy cost of each label queried and
using strong composition~\citep{dwork2010boosting} to derive the total
cost---yields loose privacy guarantees. Instead, their approach
uses \emph{data-dependent} privacy analysis. This
takes advantage of the fact that when the consensus among the teachers is very
strong, the plurality outcome has overwhelming likelihood leading to a very
small privacy cost whenever the consensus occurs. To capture this effect quantitatively,
\cite{papernot2016semi} rely on the \emph{moments accountant}, introduced by~\cite{abadi2016deep}
and building on previous work~\citep{bun2016concentrated, dwork2016concentrated}.
In this section, we recall the language of
R\'enyi\xspace Differential Privacy or RDP \citep{mironov2017renyi}. RDP generalizes
pure differential privacy ($\delta = 0$) and is closely related to the moments
accountant. We choose to use RDP as a more natural analysis framework when dealing with
our mechanisms that use Gaussian noise. Defined below, the RDP of a mechanism
is stated in terms of the R\'enyi\xspace divergence. \medskip
\begin{definition}[R\'enyi\xspace Divergence]
The R\'enyi\xspace divergence of order $\lambda$ between two distributions $P$ and~$Q$
is defined as:
\[\Div{\lambda}{P}{Q}
\ensuremath{\triangleq} \frac{1}{\lambda - 1} \log \expt{x\sim Q}{\power{P(x)/Q(x)}\lambda}
= \frac{1}{\lambda - 1} \log \expt{x\sim P}{\power{P(x)/Q(x)}{\lambda-1}}.
\]
\end{definition}\medskip
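As a sanity check on this definition (an illustrative aside, not part of the paper's analysis), the R\'enyi\xspace divergence between two equal-variance Gaussians $\mathcal{N}(\mu, \sigma^2)$ and $\mathcal{N}(\mu', \sigma^2)$ has the closed form $\lambda(\mu-\mu')^2/(2\sigma^2)$, which numerical integration of the definition recovers:

```python
import math

def renyi_divergence_gaussians(mu_p, mu_q, sigma, lam, lo=-30.0, hi=30.0, step=1e-3):
    """Evaluate D_lam(P || Q) numerically for P = N(mu_p, sigma^2), Q = N(mu_q, sigma^2)."""
    def pdf(x, mu):
        return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    acc = 0.0
    x = lo
    while x < hi:                       # left Riemann sum of E_{x~Q}[(P(x)/Q(x))^lam]
        q = pdf(x, mu_q)
        if q > 0.0:
            acc += q * (pdf(x, mu_p) / q) ** lam * step
        x += step
    return math.log(acc) / (lam - 1)

lam, sigma = 2.0, 1.0
numeric = renyi_divergence_gaussians(0.0, 1.0, sigma, lam)
closed = lam * (0.0 - 1.0) ** 2 / (2 * sigma ** 2)   # = 1.0 in this instance
```

The linear dependence of this divergence on $\lambda$ is what makes Gaussian noise particularly convenient to analyze in the RDP framework.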
\begin{definition}[R\'enyi\xspace Differential Privacy (RDP)]
A randomized mechanism $\mathcal{M}$ is said to guarantee $(\ensuremath{\lambda},\ensuremath{\varepsilon})$-RDP with $\ensuremath{\lambda} \geq 1$ if for any neighboring datasets $D$ and $D'$,
\begin{align*}
\Div{\lambda}{\mathcal{M}(D)}{\mathcal{M}(D')}
= \frac{1}{\lambda - 1} \log \expt{x\sim \mathcal{M}(D)}{\power{\frac{\mathbf{Pr}\left[\mathcal{M}(D)=x
\right]\hfill}{\mathbf{Pr}\left[\mathcal{M}(D')=x\right]}}{\lambda-1}}
\leq \ensuremath{\varepsilon}.
\end{align*}
\end{definition}
RDP generalizes pure differential privacy in the sense that $\ensuremath{\varepsilon}$-differential privacy is equivalent to $(\infty, \ensuremath{\varepsilon})$-RDP. \cite{mironov2017renyi} proves the following key facts
that allow easy composition of RDP guarantees and their conversion to $(\ensuremath{\varepsilon}, \delta)$-differential privacy bounds.
\begin{theorem}[Composition]\label{thm:ma_composition}
If a mechanism $\mathcal{M}$ consists of a sequence of adaptive mechanisms $\mathcal{M}_1, \dots, \mathcal{M}_k$ such that for any $i\in[k]$, $\mathcal{M}_i$ guarantees $(\lambda, \ensuremath{\varepsilon}_i)$-RDP, then $\mathcal{M}$ guarantees $(\lambda, \sum_{i=1}^k \ensuremath{\varepsilon}_i)$-RDP.
\end{theorem}\smallskip
\begin{theorem}[From RDP to DP]\label{thm:ma_convert}
If a mechanism $\mathcal{M}$ guarantees $(\lambda, \ensuremath{\varepsilon})$-RDP, then
$\mathcal{M}$ guarantees $(\ensuremath{\varepsilon} + \frac{\log 1/\delta}{\lambda - 1}, \delta)$-differential privacy
for any $\delta \in (0, 1)$.
\end{theorem}
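For intuition on how \autoref{thm:ma_composition} and \autoref{thm:ma_convert} combine in practice, suppose each query satisfies $(\lambda, \lambda/\sigma^2)$-RDP, as holds for the Gaussian-noise aggregator analyzed in \autoref{sec:max-of-gaussian}. The sketch below (illustrative parameter values; this is not the paper's data-dependent accounting) composes this guarantee over a sequence of queries and minimizes the resulting DP bound over integer orders:

```python
import math

def compose_and_convert(num_queries, sigma, delta, max_order=512):
    """Compose per-query (lam, lam / sigma^2)-RDP over num_queries queries,
    then convert to (eps, delta)-DP, minimizing over integer orders lam >= 2."""
    best_eps = float("inf")
    for lam in range(2, max_order + 1):
        rdp_eps = num_queries * lam / sigma ** 2              # composition theorem
        dp_eps = rdp_eps + math.log(1.0 / delta) / (lam - 1)  # RDP -> DP conversion
        best_eps = min(best_eps, dp_eps)
    return best_eps

# 1000 queries with Gaussian noise of scale sigma = 100, at delta = 1e-8.
eps = compose_and_convert(num_queries=1000, sigma=100.0, delta=1e-8)
```

The optimal order balances the composed RDP cost (growing in $\lambda$) against the conversion term $\log(1/\delta)/(\lambda-1)$ (shrinking in $\lambda$).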
While both $(\ensuremath{\varepsilon},\delta)$-differential privacy and RDP are relaxations of
pure $\ensuremath{\varepsilon}$-differential privacy,
the two main advantages of RDP are as follows. First, it composes nicely;
second, it captures the privacy guarantee of Gaussian noise in a much cleaner
manner compared to $(\ensuremath{\varepsilon},\delta)$-differential privacy. This lets us do a
careful privacy analysis of the GNMax mechanism as stated in
\autoref{theorem:higher-to-lower}. While the analysis of
\cite{papernot2016semi} leverages the first aspect of such frameworks with the
Laplace noise (LNMax mechanism), our analysis of the GNMax mechanism relies on both.
\subsection{PATE Aggregation Mechanisms}
\label{ssec:aggregation}
The aggregation step is a crucial component of PATE. It
enables knowledge transfer from the teachers
to the student while enforcing privacy. We improve the LNMax mechanism
used by~\cite{papernot2016semi} which adds Laplace noise to
teacher votes and outputs the class with the highest votes.
First, we
add Gaussian noise with an accompanying privacy analysis in the RDP framework.
This modification effectively reduces the noise needed to achieve the same privacy cost per student query.
Second, the aggregation mechanism is now \emph{selective}:
teacher votes are analyzed to decide which student queries are \emph{worth} answering. This
takes into account both the privacy cost of each query and its payout in
improving the student's utility. Surprisingly, our analysis shows that these two
metrics are not at odds and in fact align with each other: the privacy cost is
the smallest when teachers agree, and when teachers agree, the label is more
likely to be correct thus being more useful to the student.
Third, we propose and study an \emph{interactive} mechanism that takes into
account not only teacher votes on a queried example but possible student
predictions on that query. Now, queries worth answering are those where the
teachers agree on a class but the student is not confident in its prediction
on that class. This third modification aligns the two metrics discussed above even further:
queries where the student already agrees with the consensus of teachers are
not worth expending our privacy budget on, but queries
where the student is
less confident are useful and answered at a small privacy cost.
\subsection{Data-dependent Privacy in PATE}
A direct privacy analysis of the aggregation mechanism, for reasonable values of
the noise parameter, allows answering only a few queries
before the privacy cost becomes prohibitive. The original PATE proposal used a
data-dependent analysis, exploiting the fact that when the teachers have
large agreement, the privacy cost is usually much smaller than the
data-independent bound would suggest.
In our work, we perform a data-dependent privacy analysis of the aggregation
mechanism with Gaussian noise. This change of noise distribution turns out to be technically much more
challenging than the Laplace noise case and we defer the details to
\autoref{ap:privacy-analysis}. This increased complexity of the analysis
however does not make the algorithm any more complicated and thus
allows us to improve the privacy-utility tradeoff.
\paragraph{Sanitizing the privacy cost via smooth sensitivity analysis.} An
additional challenge with data-dependent privacy analyses arises from the fact
that the privacy cost itself is now a function of the private data. Further,
the data-dependent bound on the privacy cost has large global sensitivity (a
metric used in differential privacy to calibrate the noise injected) and is
therefore difficult to sanitize. To remedy this, we use the smooth sensitivity
framework proposed by \citet{nissim2007smooth}.
\autoref{ap:smooth-sensitivity} describes how we add noise to the computed
privacy cost using this framework to publish a sanitized version of the
privacy cost. \autoref{sec:computing-smooth-sensitivity} defines
smooth sensitivity and outlines algorithms \ref{alg:ls}--\ref{alg:ss} that
compute it. The rest of \autoref{ap:smooth-sensitivity} argues the
correctness of these algorithms. The final analysis shows that the incremental
cost of sanitizing our privacy estimates is modest---less than 50\% of the raw estimates---thus
enabling us to use precise data-dependent privacy analysis
while taking into account its privacy implications.
\makeatletter{}\section{Improved Aggregation Mechanisms for PATE}
\label{sec:gaussian-pate}
The privacy guarantees provided by PATE stem from the design and
analysis of the aggregation step.
Here, we detail our improvements to the
mechanism used by~\cite{papernot2016semi}. As outlined in \autoref{ssec:aggregation}, we first replace the Laplace
noise added to teacher votes with Gaussian noise, adapting the
data-dependent privacy analysis. Next, we describe the Confident and
Interactive Aggregators that select
queries worth answering in a privacy-preserving way: the privacy budget is shared between the query selection and answer computation. The aggregators use different heuristics to select queries: the former does not take into account student
predictions, while the latter does.
\subsection{The GNMax Aggregator and Its Privacy Guarantee}
\label{sec:max-of-gaussian}
This section uses the following notation. For a sample $x$ and classes $1$ to $m$,
let $f_j(x) \in \kset{m}$ denote the $j$-th teacher model's
prediction on $x$ and $n_i(x)$ denote the vote count for the $i$-th class (i.e., $n_i(x) =
|\{j\colon f_j(x) = i\}|$). We define a Gaussian NoisyMax (GNMax) aggregation mechanism as:
\[
\GM{\sigma}(x) \ensuremath{\triangleq} \argmax_i \left \{ n_i(x) + \mathcal{N}(0,\sigma^2)\right \},
\]
where $\mathcal{N}(0, \sigma^2)$ is the Gaussian distribution
with mean 0 and variance $\sigma^2$. The aggregator outputs the class with
noisy plurality after adding Gaussian noise to each vote count. In what
follows, \emph{plurality} refers more generally to the
highest number of teacher votes assigned among the classes.
The Gaussian distribution is more concentrated than the Laplace distribution
used by~\cite{papernot2016semi}. This concentration directly improves the aggregation's utility
when the number of classes~$m$ is large.
The GNMax mechanism satisfies $(\ensuremath{\lambda},\ensuremath{\lambda}/\sigma^2)$-RDP, which
holds for all inputs and all $\ensuremath{\lambda}\geq 1$ (precise statements and proofs of
claims in this section are deferred to \autoref{ap:privacy-analysis}). A straightforward application of
composition theorems leads to loose privacy bounds.
As an example, the standard advanced composition theorem applied to
experiments in the last two rows of \autoref{table:results_summary} would give us $\ensuremath{\varepsilon}=8.42$
and $\ensuremath{\varepsilon}=10.14$ resp.~at $\delta=10^{-8}$ for the Glyph dataset.
To refine these, we work out a careful \emph{data-dependent} analysis
that yields values of $\ensuremath{\varepsilon}$ smaller than~$1$ for the same~$\delta$.
The following theorem translates data-independent RDP guarantees for higher orders
into a data-dependent RDP guarantee for a smaller order $\ensuremath{\lambda}$. We use it in conjunction
with \autoref{prop:gaussian-q} to bound the privacy cost of each query to the GNMax algorithm
as a function of $\ensuremath{\tilde{q}}$, the probability that the most common answer will not be output by the mechanism.\medskip
\edef\thmtransfer{\the\value{theorem}}
\begin{theorem}[informal]\label{theorem:higher-to-lower} Let $\mathcal{M}$ be a randomized algorithm with $\left(\mu_1,
\ensuremath{\varepsilon}_1\right)$-RDP and $\left(\mu_2, \ensuremath{\varepsilon}_2\right)$-RDP guarantees
and suppose that given a dataset $D$, there exists \emph{a likely} outcome $i^*$ such that $\mathbf{Pr}\left[\mathcal{M}(D) \neq i^* \right] \leq \ensuremath{\tilde{q}}$. Then the \emph{data-dependent} R\'enyi\xspace differential privacy for $\mathcal{M}$ of order $\ensuremath{\lambda}\leq \mu_1, \mu_2$ at $D$ is bounded by a function of $\ensuremath{\tilde{q}}$, $\mu_1$, $\ensuremath{\varepsilon}_1$, $\mu_2$, $\ensuremath{\varepsilon}_2$, which approaches 0 as $\ensuremath{\tilde{q}}\rightarrow 0$.
\end{theorem}
The new bound improves on the data-independent privacy for $\ensuremath{\lambda}$ as long as the distribution of the algorithm's output \emph{on that input} has a strong peak (i.e., $\ensuremath{\tilde{q}}\ll 1$).
Values of $\ensuremath{\tilde{q}}$ close to $1$ could result in a looser bound. Therefore, in
practice we take the minimum between this bound and $\ensuremath{\lambda}/\sigma^2$ (the
data-independent one).
The theorem generalizes Theorem 3 from~\cite{papernot2016semi}, where it was shown for a mechanism satisfying $\ensuremath{\varepsilon}$-differential
privacy (i.e., $\mu_1=\mu_2=\infty$ and $\ensuremath{\varepsilon}_1=\ensuremath{\varepsilon}_2$).
The final step in our analysis uses the following lemma to bound the probability $\ensuremath{\tilde{q}}$ when
$i^*$ corresponds to the class with the true plurality of teacher votes.
\edef\lemmaqt{\the\value{theorem}}
\begin{proposition}\label{prop:gaussian-q}
For any $i^* \in \kset{m}$, we have $\mathbf{Pr}\left[\GM{\sigma}(D) \neq i^*\right] \leq
\frac{1}{2} \sum_{i\neq i^*} \mathrm{erfc}\left(\frac{n_{i^*}-n_i}{2\sigma}\right),$
where $\mathrm{erfc}$ is the complementary error function.
\end{proposition}
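This bound is directly computable from the vote histogram; a short sketch with illustrative vote counts and noise scale:

```python
import math

def plurality_failure_bound(votes, sigma):
    """Bound on Pr[GNMax output != i*] from the proposition:
    (1/2) * sum_{i != i*} erfc((n_{i*} - n_i) / (2 sigma))."""
    i_star = max(range(len(votes)), key=lambda i: votes[i])
    return 0.5 * sum(math.erfc((votes[i_star] - votes[i]) / (2.0 * sigma))
                     for i in range(len(votes)) if i != i_star)

# Strong consensus yields a tiny q-tilde; a near-tie does not.
q_strong = plurality_failure_bound([180, 10, 10], sigma=40.0)
q_weak = plurality_failure_bound([70, 65, 65], sigma=40.0)
```

The gap between the two cases illustrates why queries with strong consensus are cheap: a small $\ensuremath{\tilde{q}}$ feeds directly into the data-dependent bound of \autoref{theorem:higher-to-lower}.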
In \autoref{ap:privacy-analysis}, we detail how these results translate to
privacy bounds. In short, for each query to the GNMax aggregator,
given teacher votes $n_i$ and the class $i^*$ with maximal support, \autoref{prop:gaussian-q} gives us the value of $\ensuremath{\tilde{q}}$ to use in \autoref{theorem:higher-to-lower}. We optimize over $\mu_1$ and $\mu_2$ to
get a data-dependent RDP guarantee for any order $\ensuremath{\lambda}$. Finally, we use
composition properties of RDP to analyze a sequence of queries, and translate
the RDP bound back to an $(\ensuremath{\varepsilon},\delta)$-DP bound.
\paragraph{Expensive queries.} This data-dependent privacy analysis leads us
to the concept of an \emph{expensive} query in terms of its privacy
cost. When teacher votes largely
disagree, some $n_{i^*} - n_i$ values may be small leading to a large value for
$\ensuremath{\tilde{q}}$: i.e., the lack of consensus
amongst teachers indicates that the aggregator is likely to output a wrong
label.
Thus expensive queries from a privacy perspective are often bad for
training too. Conversely, queries with strong consensus
enable tight privacy bounds. This synergy motivates the aggregation mechanisms
discussed in the following sections: they evaluate the strength of the
consensus before answering a query.
\subsection{The Confident-GNMax Aggregator}
\label{ssec:confident-aggregator}
In this section, we
propose a refinement of the GNMax aggregator that
enables us to filter out queries for which teachers do not have a sufficiently
strong consensus. This filtering enables the teachers to avoid answering
expensive queries.
We also take care to perform this selection step itself in a private
manner.
The proposed \emph{Confident Aggregator} is described in
\autoref{alg:confident aggregator}.
To select queries with overwhelming consensus, the algorithm checks if
the plurality vote crosses a threshold $T$. To enforce privacy in this step,
the comparison is done after adding Gaussian noise with variance $\sigma_1^2$.
Then, for queries that pass this noisy threshold check, the aggregator
proceeds with the usual GNMax mechanism with a smaller variance $\sigma_2^2$.
For queries that do not pass the noisy threshold check, the aggregator
simply returns $\bot$ and the student discards this example in its training.
In practice, we often choose significantly higher values for $\sigma_1$ compared to
$\sigma_2$. This is because we pay the cost of the noisy threshold check \emph{always}, and
without the benefit of knowing that the consensus is strong. We pick $T$ so that queries where the plurality gets less than half
the votes (often very expensive) are unlikely to pass the threshold
after adding noise, but we still have a high enough yield amongst the queries with a
strong consensus. This tradeoff leads us to look for values of $T$ between
$0.6\times$ and $0.8\times$ the number of teachers.
The privacy cost of this aggregator is intuitive: we pay for the
threshold check
for every query, and for the GNMax step only for queries that pass
the check.
In the work of \cite{papernot2016semi}, the mechanism paid a privacy cost for every
query, expensive or otherwise. In comparison, the Confident Aggregator expends a much smaller
privacy cost to check against the threshold, and by answering a significantly
smaller fraction of expensive queries, it
expends a lower privacy cost overall.
\begin{algorithm}[t] \caption{\textbf{-- Confident-GNMax Aggregator:} given a query, consensus among teachers is first estimated in a privacy-preserving way to then only reveal confident teacher predictions.}
\label{alg:confident aggregator}
\begin{algorithmic}[1] \Require input $x$, threshold $T$, noise parameters $\sigma_1$ and $\sigma_2$
\If{$\max_j\{n_j(x)\} + \mathcal{N}(0, \sigma_1^2) \geq T$} \Comment{Privately
check for consensus}
\State\Return $\argmax_j \left\{ n_j(x) + \mathcal{N}(0, \sigma_2^2)\right\}$
\Comment{Run the usual max-of-Gaussian}
\Else
\State\Return $\bot$
\EndIf
\end{algorithmic}
\end{algorithm}
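A minimal sketch of \autoref{alg:confident aggregator} follows; the threshold and noise scales are toy values chosen only for illustration, far smaller than those used in the experiments:

```python
import numpy as np

def confident_gnmax(votes, T, sigma1, sigma2, rng):
    """Confident-GNMax: noisy threshold check, then GNMax on passing queries.

    votes: per-class teacher vote counts n_j(x) for one query.
    Returns a class index, or None (standing in for the algorithm's bot
    symbol) when the noisy plurality does not reach the threshold T.
    """
    votes = np.asarray(votes, dtype=float)
    if votes.max() + rng.normal(0.0, sigma1) >= T:     # private consensus check
        return int(np.argmax(votes + rng.normal(0.0, sigma2, size=votes.shape)))
    return None

# Toy example with 250 teachers; noise far below practical settings.
rng = np.random.default_rng(1)
answered = confident_gnmax([230, 10, 10], T=150, sigma1=10.0, sigma2=5.0, rng=rng)
skipped = confident_gnmax([90, 85, 75], T=150, sigma1=10.0, sigma2=5.0, rng=rng)
```

Only the first query, with overwhelming consensus, passes the noisy threshold and is answered; the near-tie is discarded without revealing the full vote histogram.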
\subsection{The Interactive-GNMax Aggregator}
\label{ssec:interactive-protocol}
While the Confident Aggregator excludes expensive queries, it
ignores the possibility that the student might receive labels that contribute
little to learning, and in turn to its utility. By incorporating the student's
current predictions for its public training data, we design an \emph{Interactive
Aggregator} that discards queries where the student already
confidently predicts the same label as the teachers.
Given a set of queries,
the Interactive Aggregator (\autoref{alg:interactive-protocol})
selects which ones to answer by
comparing student predictions to teacher votes for each class. Similar to Step 1 in
the Confident Aggregator, queries where the plurality of these noised differences
crosses a threshold are answered with GNMax.
This noisy threshold suffices to enforce privacy of the first step
because student predictions can be considered public information
(the student is trained in a differentially private manner).
For queries that fail this check, the mechanism reinforces the predicted
student label if the student is confident enough and does this without
looking at teacher votes again. This limited form of supervision comes at a small
privacy cost.
Moreover, the order of the checks ensures that a student falsely
confident in its predictions on a query is not accidentally reinforced if it
disagrees with the teacher consensus. The privacy accounting is
identical to that of the Confident Aggregator, except that it considers the difference between
teacher votes and student predictions instead of the teacher votes alone.
In practice, the Confident
Aggregator can be used to start training a student when it can make no
meaningful predictions and training can be finished off with the Interactive
Aggregator after the student gains some proficiency.
\begin{algorithm}[t]
\caption{\textbf{-- Interactive-GNMax Aggregator}: the protocol first compares student predictions to the teacher votes in a privacy-preserving way to then either (a) reinforce the student prediction for the given query or (b) provide the student with a new label predicted by the teachers.}
\label{alg:interactive-protocol}
\begin{algorithmic}[1]
\Require input $x$, confidence $\gamma$, threshold $T$, noise parameters
$\sigma_1$ and $\sigma_2$, total number of teachers $M$
\State{Ask the student to provide prediction scores $\textbf{p}(x)$}
\If{$\max_j \{n_j(x) - M p_j(x)\} + \mathcal{N}(0, \sigma_1^2) \geq T$} \Comment{Student does not agree with teachers}
\State\Return $\argmax_j \{n_j(x) + \mathcal{N}(0, \sigma_2^2)\}$ \Comment{Teachers provide new label}
\ElsIf{$\max\{p_{i}(x)\} > \gamma$} \Comment{Student agrees with teachers and is confident}
\State\Return $\arg\max_j p_j(x)$ \Comment{Reinforce student's prediction}
\Else
\State\Return $\bot$ \Comment{No output given for this label}
\EndIf
\end{algorithmic}
\end{algorithm}
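A corresponding sketch of \autoref{alg:interactive-protocol}, again with toy parameter values chosen only for illustration:

```python
import numpy as np

def interactive_gnmax(votes, student_probs, gamma, T, sigma1, sigma2, M, rng):
    """Interactive-GNMax: teachers answer only when they disagree with the
    student; a confident, agreeing student is reinforced at no extra cost.

    votes: teacher vote counts n_j(x); M: ensemble size;
    student_probs: the student's (public) prediction scores p(x).
    """
    votes = np.asarray(votes, dtype=float)
    p = np.asarray(student_probs, dtype=float)
    if (votes - M * p).max() + rng.normal(0.0, sigma1) >= T:   # disagreement check
        return int(np.argmax(votes + rng.normal(0.0, sigma2, size=votes.shape)))
    if p.max() > gamma:                                        # confident student
        return int(np.argmax(p))
    return None

rng = np.random.default_rng(2)
# Teachers back class 0 while the student predicts class 1: teachers relabel.
relabel = interactive_gnmax([200, 40, 10], [0.05, 0.90, 0.05], gamma=0.8,
                            T=120, sigma1=10.0, sigma2=5.0, M=250, rng=rng)
# Teachers and a confident student agree on class 0: prediction is reinforced.
reinforce = interactive_gnmax([200, 40, 10], [0.90, 0.05, 0.05], gamma=0.8,
                              T=120, sigma1=10.0, sigma2=5.0, M=250, rng=rng)
```

In the first call the disagreement term is large, so the teachers provide a fresh label; in the second the noisy check fails and the student's own confident prediction is returned without consulting the votes again.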
\makeatletter{}\section{Experimental Evaluation} \label{sec:expt-eval}
Our goal is first to show that the improved aggregators
introduced in \autoref{sec:gaussian-pate}
enable the application of PATE to uncurated data, thus
departing from previous results on tasks with balanced and well-separated
classes. We
experiment with the Glyph dataset described below to
address two aspects left open by~\cite{papernot2016semi}: (a) the
performance of PATE on a task with a larger number of
classes (the framework was only evaluated on datasets with at most 10 classes) and (b)
the privacy-utility tradeoffs offered by PATE on
data that is class imbalanced and partly mislabeled. In \autoref{ssec:laplace-vs-gaussian}, we evaluate the
improvements given by the GNMax aggregator
over its Laplace counterpart (LNMax) and demonstrate the necessity of the
Gaussian mechanism for uncurated tasks.
In \autoref{ssec:student-training}, we then evaluate the performance of PATE with both the Confident and Interactive
Aggregators on all datasets used to benchmark the original PATE framework,
in addition to Glyph.
With the right teacher and
student training, the two mechanisms from \autoref{sec:gaussian-pate}
achieve high accuracy
with very tight privacy bounds. Not answering queries
for which teacher consensus is too low (Confident-GNMax) or
for which the student's predictions already agree with teacher votes (Interactive-GNMax)
better aligns utility and privacy: queries are answered at a significantly reduced cost.
\subsection{Experimental Setup}
\label{sec:expt-setup}
\paragraph{MNIST, SVHN, and the UCI Adult databases.} We evaluate with two computer vision
tasks (MNIST and Street View House Numbers~\citep{netzer2011reading}) and
census data from the UCI Adult dataset~\citep{kohavi1996scaling}.
This enables a comparative analysis of the utility-privacy
tradeoff achieved with our Confident-GNMax aggregator and the LNMax originally used
in PATE. We replicate the experimental setup and results found
in~\cite{papernot2016semi} with code and teacher votes made
available online. The source code for the privacy analysis in this paper as
well as supporting data required to run this analysis is available on
Github.\footnote{\url{https://github.com/tensorflow/models/tree/master/research/differential_privacy}}
A detailed description of the experimental
setup can be found in~\cite{papernot2016semi};
we provide here only a brief overview.
For MNIST and SVHN, teachers are convolutional networks trained on partitions
of the training set. For UCI Adult, each teacher is a random forest.
The test set is split in two halves: the first is used as unlabeled
inputs to simulate the student's public data
and the second is used as a hold out to evaluate test performance.
The MNIST and SVHN students are convolutional networks trained using semi-supervised learning with GANs
\`a la~\cite{salimans2016improved}. The student for the Adult dataset is a fully supervised random forest.
\paragraph{Glyph.} This optical character recognition task has \emph{an order of
magnitude more classes} than all previous applications of PATE.
The Glyph dataset also possesses many characteristics shared by real-world
tasks: e.g., it is imbalanced and some inputs are mislabeled.
Each input is
a $28 \times 28$ grayscale image containing a single glyph
generated synthetically from a collection of over 500K computer fonts.\footnote{Glyph data
is not public but similar data is available publicly as part of the notMNIST dataset.}
Samples representative of the difficulties raised by the data are depicted
in \autoref{fig:glyph-samples}.
The task is to classify inputs as one of the $150$ Unicode
symbols used to generate them.
This set of 150 classes results from pre-processing
efforts. We discarded additional classes that had few samples; some classes
had at least 50 times fewer inputs than the most popular classes, and
these were almost exclusively incorrectly labeled inputs.
We also merged classes that were too
ambiguous for even a human to differentiate them.
Nevertheless, a manual inspection of samples grouped by classes---favorably to
the human observer---led to the conservative estimate that
some classes remain 5 times more frequent, and mislabeled inputs
represent at least $10\%$ of the data.
\begin{figure}[p]
\centering
\includegraphics[width=\textwidth]{glyph-examples.pdf}
\caption{\textbf{Some example inputs from the Glyph dataset along with the class they are labeled as.} Note the ambiguity (between the comma and apostrophe) and the mislabeled input.}
\label{fig:glyph-samples}
\end{figure}
To simulate the availability of private and public data
(see \autoref{sec:pate-background}), we split data
originally marked as the training set (about 65M points) into partitions given to the teachers.
Each teacher is
a ResNet~\citep{he2016deep} made of 32
leaky ReLU layers.
We train on batches of 100 inputs for 40K steps using SGD with momentum.
The learning rate, initially set to $0.1$, is decayed after 10K steps to
$0.01$ and again after 20K steps to $0.001$. These parameters
were found with a grid search.
We split holdout data in two subsets of 100K and 400K samples: the first
acts as public data to train the student and the second
as its testing data. The student architecture
is a convolutional network
learnt in a semi-supervised fashion with virtual adversarial training (VAT) from~\cite{miyato2017virtual}.
Using unlabeled data,
we show how VAT can regularize the student
by making predictions constant in
\emph{adversarial}\footnote{In this context, the adversarial component refers to
the phenomenon commonly referred to as adversarial examples~\citep{biggio2013evasion,szegedy2013intriguing}
and not to the adversarial training approach taken in GANs.} directions.
Indeed, we found that GANs did not yield as much utility
for Glyph as for MNIST or SVHN.
We train with Adam
for 400 epochs and a learning rate of $6\cdot 10^{-5}$.
\subsection{Comparing the LNMax and GNMax Mechanisms}
\label{ssec:laplace-vs-gaussian}
\autoref{sec:max-of-gaussian} introduces the GNMax mechanism and
the accompanying privacy analysis. With a Gaussian distribution, whose tail diminishes more rapidly
than the Laplace distribution, we expect better utility when using the new
mechanism (albeit with a more involved privacy analysis).
To study the tradeoff between privacy and accuracy with the two mechanisms,
we run experiments training several ensembles of $M$ teachers for $M \in
\{100, 500, 1000, 5000\}$ on the Glyph data. Recall that 65 million training inputs are partitioned and
distributed among the $M$ teachers, with each teacher receiving between 13K
and 650K inputs depending on $M$. The test data is used to query the
teacher ensemble and the resulting labels (after the LNMax and GNMax
mechanisms) are compared with the ground truth labels provided in the dataset. This
predictive performance of the teachers is essential to good student
training with accurate labels and is a useful proxy for utility.
For each mechanism, we compute $(\ensuremath{\varepsilon},
\delta)$-differential privacy guarantees. As is common in the literature, for
a dataset on the order of $10^8$ samples, we choose $\delta=10^{-8}$ and
denote the corresponding $\ensuremath{\varepsilon}$ as the privacy cost. The total $\ensuremath{\varepsilon}$ is calculated
on a subset of 4,000 queries, which is representative of the number of labels
needed by a student for accurate training (see
\autoref{ssec:student-training}). We visualize in
\autoref{fig:aggreg-lap-vs-gauss-detailed} the effect of the noise distribution (left) and the number of teachers (right) on the tradeoff between privacy costs and label accuracy.
\begin{figure}[p]
\centering
\begin{minipage}{.49\textwidth}
\centering
\includegraphics[width=\textwidth]{privacy_utility_tradeoff_comparison.pdf}
\end{minipage}
\hfill
\begin{minipage}{.49\textwidth}
\centering
\includegraphics[width=\textwidth]{privacy_utility_tradeoff.pdf}
\end{minipage}
\caption{\textbf{Tradeoff between utility and privacy for the LNMax and GNMax aggregators on Glyph:}
effect of the noise distribution (left) and size of the teacher ensemble (right). The LNMax aggregator uses a Laplace distribution and GNMax a Gaussian. Smaller values of the privacy cost~$\varepsilon$ (often obtained by increasing the noise scale $\sigma$---see \autoref{sec:gaussian-pate}) and higher accuracy are better.}
\label{fig:aggreg-lap-vs-gauss-detailed}
\end{figure}
\begin{table}[p]
\centering
{\small\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|l|c|c|c|c|}
\hline
& & \textbf{Queries} & \textbf{Privacy} & \multicolumn{2}{c|}{\textbf{Accuracy}} \\
\textbf{Dataset} & \textbf{Aggregator} & \textbf{answered} & \textbf{bound} $\boldsymbol{\varepsilon}$ & \textbf{Student}& \textbf{Baseline} \\ \hline \hline
\multirow{3}{*}{MNIST} & LNMax~\citep{papernot2016semi} & 100 & 2.04 & 98.0\% & \multirow{3}{*}{99.2\%} \\ \cline{2-5}
& LNMax~\citep{papernot2016semi} & 1,000 & 8.03 & 98.1\% & \\ \cline{2-5}
& Confident-GNMax {\scriptsize($T\texttt{=}200,\sigma_1\texttt{=}150,\sigma_2\texttt{=}40$)} & 286 & \textbf{1.97} & \textbf{98.5\%} & \\ \hline\hline
\multirow{3}{*}{SVHN} & LNMax~\citep{papernot2016semi} & 500 & 5.04 & 82.7\% & \multirow{3}{*}{92.8\%} \\ \cline{2-5}
& LNMax~\citep{papernot2016semi} & 1,000 & 8.19 & 90.7\% & \\ \cline{2-5}
& Confident-GNMax {\scriptsize($T\texttt{=}300,\sigma_1\texttt{=}200,\sigma_2\texttt{=}40$)} & 3,098 & \textbf{4.96} & \textbf{91.6\%} & \\ \hline\hline
\multirow{2}{*}{Adult} & LNMax~\citep{papernot2016semi} & 500 & 2.66 & 83.0\% & \multirow{2}{*}{85.0\%} \\ \cline{2-5}
& Confident-GNMax {\scriptsize($T\texttt{=}300,\sigma_1\texttt{=}200,\sigma_2\texttt{=}40$)} & 524 & \textbf{1.90} & \textbf{83.7\%} & \\ \hline\hline
\multirow{3}{*}{Glyph} & LNMax & 4,000 & 4.3 & 72.4\% & \multirow{3}{*}{82.2\%} \\ \cline{2-5}
& Confident-GNMax {\scriptsize($T\texttt{=}1000,\sigma_1\texttt{=}500,\sigma_2\texttt{=}100$)} & 10,762 & 2.03 & \textbf{75.5\%} & \\ \cline{2-5}
& Interactive-GNMax, two rounds
& 4,341 & \textbf{0.837} & 73.2\% & \\ \hline
\end{tabular}
}
\caption{\textbf{Utility and privacy of the students.} The Confident- and
Interactive-GNMax aggregators introduced in \autoref{sec:gaussian-pate} offer better tradeoffs between privacy (characterized by the value of the bound $\varepsilon$) and utility (the accuracy of the student compared to a non-private baseline) than the LNMax aggregator used by the original PATE proposal on all the datasets we evaluated. For MNIST, Adult, and SVHN, we use
the labels of ensembles of $250$ teachers published by~\cite{papernot2016semi} and set $\delta=10^{-5}$ to compute values of $\varepsilon$ (except for SVHN, where $\delta=10^{-6}$).
All Glyph results use an ensemble of 5000 teachers
and
$\varepsilon$ is computed for $\delta=10^{-8}$.
}
\label{table:results_summary}
\end{table}
\paragraph{Observations.}
On the left of \autoref{fig:aggreg-lap-vs-gauss-detailed}, we compare our GNMax aggregator to the LNMax aggregator
used by the original PATE proposal, on an ensemble of $1000$ teachers and
for varying noise scales $\sigma$. At fixed test accuracy, the GNMax
algorithm
consistently outperforms the LNMax mechanism
in terms of privacy cost.
To explain this improved performance, recall notation from
\autoref{sec:max-of-gaussian}. For both
mechanisms, the data dependent privacy cost scales linearly with $\ensuremath{\tilde{q}}$---the
likelihood of an answer other than the true plurality. The value of $\ensuremath{\tilde{q}}$ falls
off as $\exp(-x^2)$ for GNMax and $\exp(-x)$ for LNMax, where $x$ is the ratio
$(n_{i^*} - n_i) / \sigma$. Thus, when $n_{i^*} - n_i$ is (say) $4\sigma$,
LNMax would have $\ensuremath{\tilde{q}} \approx e^{-4} \approx 0.018$, whereas GNMax would have
$\ensuremath{\tilde{q}} \approx e^{-16} \approx 10^{-7}$, thereby leading to a much higher
likelihood of returning the true plurality. Moreover, this reduced $\ensuremath{\tilde{q}}$
translates to a smaller privacy cost for a given $\sigma$, leading to a better utility-privacy tradeoff.
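This tail comparison can be checked with a line of arithmetic (the concrete numbers below simply instantiate the $x = 4$ case discussed above):

```python
import math

# Tail bounds on q~, the likelihood of an answer other than the true
# plurality, for a vote gap of 4*sigma (so x = 4).
x = 4
q_lnmax = math.exp(-x)       # Laplace tail: exp(-x)
q_gnmax = math.exp(-x ** 2)  # Gaussian tail: exp(-x^2)

print(round(q_lnmax, 3))   # 0.018
print(f"{q_gnmax:.1e}")    # 1.1e-07
```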
As long as each teacher has sufficient data to learn a good-enough model,
increasing the number $M$ of teachers improves the tradeoff---as
illustrated on the right of \autoref{fig:aggreg-lap-vs-gauss-detailed} with GNMax. The larger ensembles lower the privacy
cost of answering queries by tolerating larger $\sigma$'s.
Combining the two observations made in this figure, for a fixed label accuracy,
we lower privacy costs by switching to the GNMax aggregator and
training a larger number $M$ of teachers.
\subsection{Student Training with the GNMax Aggregation Mechanisms}
\label{ssec:student-training}
As outlined in \autoref{sec:pate-overview}, we train a student on
public data labeled by the aggregation mechanisms. We
take advantage of PATE's flexibility and
apply the technique that performs best on each dataset: semi-supervised
learning with
Generative Adversarial
Networks~\citep{salimans2016improved} for MNIST and SVHN,
Virtual Adversarial
Training~\citep{miyato2017virtual} for Glyph, and fully-supervised random forests
for UCI Adult.
In addition to evaluating the total privacy
cost associated with training the student model, we compare its utility to a non-private
baseline obtained by training on the sensitive data (used to train teachers
in PATE): we use the baselines of $99.2\%$, $92.8\%$, and $85.0\%$ reported
by~\cite{papernot2016semi} respectively for MNIST, SVHN, and UCI Adult, and we measure a baseline of
$82.2\%$ for Glyph. We compute $(\ensuremath{\varepsilon}, \delta)$-privacy bounds and denote the
privacy cost as the $\ensuremath{\varepsilon}$ value at a value of $\delta$ set according to the
number of training samples.
\paragraph{Confident-GNMax Aggregator.} Given a pool of 500 to 12,000 samples to learn from (depending on the dataset), the
student submits queries to the teacher ensemble running the Confident-GNMax
aggregator from \autoref{ssec:confident-aggregator}. A grid search over a
range of plausible values for parameters $T$, $\sigma_1$ and $\sigma_2$
yielded the values reported in \autoref{table:results_summary},
illustrating the tradeoff between utility and privacy achieved.
We additionally measure the number of queries selected by the teachers to be
answered and compare student utility to a non-private baseline.
The Confident-GNMax aggregator outperforms LNMax for the
four datasets considered in the original PATE proposal: it
reduces the privacy cost $\varepsilon$, increases student accuracy, or
both simultaneously.
On the uncurated Glyph data, despite the imbalance of classes and
mislabeled data (as evidenced by the 82.2\% baseline), the Confident Aggregator
achieves 73.5\% accuracy with a privacy
cost of just $\ensuremath{\varepsilon} = 1.02$. Roughly 1,300 out of 12,000 queries
made are not answered, indicating that several expensive queries were
successfully avoided. This selectivity is analyzed in more detail in \autoref{ssec:noisy-threshold-expt}.
\paragraph{Interactive-GNMax Aggregator.} On Glyph, we evaluate the utility and privacy
of an interactive training routine
that proceeds in \emph{two rounds}. Round one runs student training with a
Confident Aggregator. A grid search targeting the best privacy for
roughly 3,400 answered queries (out of 6,000)---sufficient to bootstrap a
student---led us to
setting $(T\texttt{=}3500, \sigma_1\texttt{=}1500, \sigma_2\texttt{=}100)$ and a privacy cost of $\ensuremath{\varepsilon} \approx 0.59$.
In round two, this student was then trained with 10,000 more queries made with the Interactive-GNMax
Aggregator $(T\texttt{=}3500, \sigma_1\texttt{=}2000, \sigma_2\texttt{=}200)$.
We computed the resulting (total) privacy cost and utility at an
\emph{exemplar} data point through another grid search of plausible parameter
values. The result appears in the last row of \autoref{table:results_summary}.
With just over 10,422 answered queries in total at a
privacy cost of $\ensuremath{\varepsilon} = 0.84$, the trained student was able to achieve 73.2\%
accuracy.
Note that this student required
fewer answered queries than the Confident Aggregator.
The best overall cost of student training occurred when the privacy
costs for the first and second rounds of training were roughly the
same. (The total $\ensuremath{\varepsilon}$ is less than $0.59 \times 2 = 1.18$ due to better
composition---via Theorems~\ref{thm:ma_composition} and \ref{thm:ma_convert}.)
\paragraph{Comparison with Baseline.} Note that the Glyph student's accuracy remains seven
percentage
points below the non-private model's accuracy achieved by training
on the 65M sensitive inputs. We hypothesize that this is due
to the uncurated nature of the data considered. Indeed, the class imbalance
naturally requires more queries to return labels from the less represented classes.
For instance, a model trained on 200K queries is only 77\% accurate on test data.
In addition, the large fraction of mislabeled inputs are likely to
have a large privacy cost: these inputs are sensitive
because they are outliers of the distribution, which is reflected by the weak
consensus among teachers on these inputs.
\begin{figure}[t]
\centering
\begin{minipage}{.59\textwidth}
\centering
\includegraphics[width=\textwidth]{noisy_thresholding_check_perf_details_ulfar_edited.pdf}
\end{minipage}
\hfill
\begin{minipage}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{lnmax_vs_2xgnmax_large_ulfar_edited.pdf}
\end{minipage}
\caption{\textbf{Effects of the noisy threshold checking:} \emph{Left:} The number of queries
answered by LNMax, Confident-GNMax moderate $(T\texttt{=}3500,
\sigma_1\texttt{=}1500)$, and Confident-GNMax aggressive $(T\texttt{=}5000,
\sigma_1\texttt{=}1500)$. The black dots and the right axis (in log scale)
show the expected cost of answering a single query in each bin (via GNMax,
$\sigma_2{=}100$). \emph{Right:} Privacy cost of answering all (LNMax) vs
only inexpensive queries (GNMax) for a given number of answered queries. The
very dark area under the curve is the cost of selecting queries; the rest is the cost of answering them.
}
\label{fig:threshold-check}
\end{figure}
\subsection{Noisy Threshold Checks and Privacy Costs}
\label{ssec:noisy-threshold-expt}
Sections~\ref{sec:max-of-gaussian} and \ref{ssec:confident-aggregator} motivated the need for a noisy
threshold checking step before having the teachers answer queries: it
prevents most of the privacy budget from being consumed by a few queries that are
expensive and also likely to be answered incorrectly.
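The check itself is simple to state in code: the plurality count is compared, after Gaussian perturbation, against the threshold $T$, and only passing queries are answered with GNMax. The sketch below is a minimal version (vote counts are made up; the parameter values merely echo the ones used in our experiments):

```python
import random


def confident_gnmax(votes, T, sigma1, sigma2, rng):
    # Noisy threshold check: answer only if the (noisily perturbed)
    # plurality count clears T, i.e., teacher consensus is strong enough.
    if max(votes) + rng.gauss(0, sigma1) < T:
        return None  # query rejected; no label is released
    # Consensus is strong: answer with GNMax at the (smaller) scale sigma2.
    noisy = [v + rng.gauss(0, sigma2) for v in votes]
    return max(range(len(noisy)), key=noisy.__getitem__)


rng = random.Random(0)
strong = [4800, 150, 50]   # near-unanimous vote among 5,000 teachers
weak = [1800, 1700, 1500]  # poor consensus: an expensive query
answered = sum(confident_gnmax(strong, 3500, 1500, 100, rng) is not None
               for _ in range(200))
rejected = sum(confident_gnmax(weak, 3500, 1500, 100, rng) is None
               for _ in range(200))
print(answered, rejected)
```

Strong-consensus queries pass the check far more often than weak-consensus ones, which is precisely what keeps the expensive queries out of the privacy budget.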
In \autoref{fig:threshold-check}, we compare the privacy cost $\varepsilon$ of answering
all queries to only answering confident queries for a fixed number of queries.
We run additional experiments to support the evaluation
from \autoref{ssec:student-training}.
With the votes of 5{,}000 teachers on the Glyph dataset, we plot in \autoref{fig:threshold-check} the histogram of the plurality vote
counts ($n_{i^*}$ in the notation of \autoref{sec:max-of-gaussian}) across
25{,}000 student queries.
We
compare these values to the vote counts of queries that passed the noisy threshold
check for two sets of parameters $T$ and $\sigma_1$ in \autoref{alg:confident aggregator}.
Smaller plurality counts
imply weaker teacher agreement and consequently more expensive queries.
When $(T\texttt{=}3500, \sigma_1\texttt{=}1500)$ we capture
a significant fraction of queries where teachers have a strong
consensus (roughly $> 4000$ votes) while managing to filter out many queries with poor consensus. This
\emph{moderate check} ensures that although many queries with plurality votes between
2,500 and 3,500 are answered (i.e., only 50--70\% of teachers agree on a
label), the most expensive ones are likely discarded.
For $(T\texttt{=}5000, \sigma_1\texttt{=}1500)$, queries with poor consensus are completely
culled out.
This selectivity comes at the expense of discarding a noticeable
number of queries that had a strong consensus and would have incurred
little-to-no privacy cost.
Thus, this \emph{aggressive check} answers fewer queries,
but with very strong privacy
guarantees. We
reiterate that this threshold checking step
itself is done in a private manner. Empirically, in our Interactive
Aggregator experiments, we expend about a third to
a half of our privacy budget on this step, which still yields a very
small cost \emph{per query} across 6,000 queries.
\section{Conclusions}\label{sec:conclude}
The key insight motivating the addition of a noisy thresholding step
to the two aggregation mechanisms proposed in our work is that there is a form of synergy
between the privacy and accuracy of labels output by the aggregation:
\emph{labels that come at a small privacy cost also happen to be more
likely to be correct}. As a consequence, we are able to provide higher-quality
supervision to the student by choosing not to output labels when the
consensus among teachers is too low to provide an aggregated prediction
at a small cost in privacy. This observation was further confirmed in some
of our experiments: for a fixed number of labels, students trained
on private labels almost always performed better than students trained
on non-private labels.
Complementary to these aggregation mechanisms is the use of a Gaussian
(rather than Laplace) distribution to perturb teacher votes. In our experiments
with Glyph data, these changes proved essential to preserve the accuracy of the
aggregated labels---because of the large number of classes. The analysis
presented in \autoref{sec:gaussian-pate} details the delicate but
necessary adaptation of analogous results for the Laplace NoisyMax.
As was the case for the original PATE proposal, semi-supervised learning
was instrumental to ensure the student achieves strong utility
given a limited set of labels from the aggregation mechanism. However,
we found that virtual adversarial training outperforms the approach from~\cite{salimans2016improved}
in our experiments with Glyph data.
These results establish lower bounds on the performance that a student
can achieve when supervised with our aggregation mechanisms;
future work may continue to investigate
virtual adversarial training, semi-supervised generative adversarial networks
and other techniques for learning the student in these particular settings
with restricted supervision.
\section*{Acknowledgments}
We are grateful to Mart\'in Abadi, Vincent Vanhoucke, and Daniel Levy for
their useful inputs and discussions towards this paper.
\newpage
\section{Introduction}
\label{sec:intro}
Televised public debates have
become a focal point of election campaigns
\cite{mckinney2004political}.
A famous example is from the 1988 U.S. vice presidential debate.
After Dan Quayle compared himself to John F. Kennedy,
Lloyd Bentsen
dismissively replied, ``you are no Jack Kennedy''.
This moment received wide post-debate coverage, and even pervades later debates and popular parodies.\footnote{\url{https://en.wikipedia.org/wiki/Senator,_you're_no_Jack_Kennedy\#Legacy}.}
We refer to moments that are frequently quoted by media outlets
as {\em highlights\xspace}.
Media-selected highlights\xspace in post-debate coverage shape how the public interprets election debates, because these highlights\xspace may be the only debate content consumed by many voters \cite{Fridkin01012008,Hillygus:AmericanJournalOfPoliticalScience:2003,hwang2007applying}.
\begin{figure}[t]
\centering
\vspace{0.4cm}
{
\begin{tabular}{p{0.5\textwidth}}
\begin{tcolorbox}[width=0.45\textwidth,
boxsep=0pt,
left=5pt,
right=5pt,
top=3pt,
bottom=3pt,
arc=8pt,
boxrule=1.5pt,
%
colback=gray!20!white
]
{\bf SANDERS}: \positivesent{Do I consider myself part of the casino capitalist process by which so few have so much and so many have so little by which Wall Street's greed and recklessness wrecked this economy?} [...]\\
{\bf CLINTON}:
[...] I think what Senator Sanders is saying certainly makes sense in the terms of the inequality that we have. [...]\\
{\bf SANDERS}: [...]
\negativesent{So what we need to do is support small and medium-sized businesses, the backbone of our economy, but we have to make sure that every family in this country gets a fair shake} [...]
\end{tcolorbox}
\end{tabular}
}
\vspace{-0.4cm}
\caption{
Between the two bold sentences from Bernie Sanders from neighboring turns in the 2016 Democratic primary debates, the first sentence was quoted 23 times
in newspapers within a week after the debate, while the second one was
not quoted at all in our data.
Yet in our experiments,
3 out of 5 humans
thought the second one was quoted more.
}
\label{tab:example}
\end{figure}
However, most highlights\xspace that media select are not as exceptional as ``you are no Jack Kennedy'',
and it remains unclear what factors determine media selection.
Consider the example in \figref{tab:example}.
It was not obvious to
participants in our experiments
which of the two passages from Sanders was a highlight\xspace.
Even knowing that the first one was highlighted\xspace,
we can propose multiple plausible explanations for this choice of the media.
It could
be the catchiness of ``casino capitalist'', or the parallel structure of ``so few have so much'' and ``so many have so little''.
It may also relate to the conversational dynamics (e.g.,
Clinton's agreement about inequality) or non-textual factors such as the media's political leanings.
Some
qualitative
studies have
investigated the effect of language-related factors in how the media select highlights
\cite{atkinson1984our,clayman1995defining,hallin1992sound,gidengil2003talking}.
For instance, to explain the popularity of ``you are no Jack Kennedy'', \citet{clayman1995defining} suggests three important factors:
1) {\em narrative relevance} (how well a moment fits in a news story); 2) {\em conspicuousness} (how much a moment stands out in a debate); 3) {\em extractability} (how self-contained a moment is).
However, it is nontrivial to computationally characterize these qualitative factors, and their predictive power remains unknown.
What is also missing in the existing literature is an understanding of how consumers of news coverage, i.e., the public, interpret media outlets' selection of highlights.
Moreover, media selection of highlights holds promise for understanding media bias and polarization.
Existing studies have shown that
non-textual factors such as
the media's preferences and biases can affect the selection of highlights\xspace
\cite{groseclose2005measure,lin2011more,Baum:PoliticalCommunication:2008,Niculae:2015:QSP:2736277.2741688}.
In particular, \citet{Niculae:2015:QSP:2736277.2741688} demonstrate implicit structure in the media (e.g., international vs. domestic) by analyzing the patterns in quotes of President Barack Obama.
Since televised presidential debates have been happening for decades, analysis of debate coverage can shed light on the evolution of media preferences over time.
To quantitatively investigate these questions,
we collect American presidential debate transcripts,
including both general debates (the debates between general election
candidates after primary elections) and primary debates (the
earlier, within-party debates in primary elections), and post-debate coverage in newspapers
(details in \secref{sec:data}).
Our dataset spans more than three decades.
\para{The present work: the effect of wording on media choices (\secref{sec:textual}).}
The first thrust of this paper investigates the effect of wording on media choices and examines whether the public understands these choices.
To do that, we propose a binary classification framework,
where a well-quoted sentence (highlight) is paired with a
sentence that is not a highlight,
controlling for the speaker and the debate situation.
The task is to identify which one was quoted more.
Using this classification framework, we investigate how well humans and machine-learned classifiers can predict media choices and what the distinguishing textual factors are.
We find that media choices in the selection of highlights are not entirely obvious to humans.
As a proxy for the general public, we request
Mechanical Turk
workers
to perform the classification task and explain what factors they use in making predictions.
Although they are able to identify some textual signals
and outperform random chance (50\%),
they only achieve an average accuracy of 60\%.
Meanwhile, there seem to exist more signals in the wording that are
not salient to untrained humans.
With carefully-designed features
that
build on past qualitative studies
\cite{atkinson1984our,clayman1995defining,hallin1992sound,gidengil2003talking},
our classifiers achieve an accuracy of 66\%.
This result indicates that textual factors can predict
media choices to a greater extent than average human performance suggests.
In fact, the main performance gap
comes from primary debates.
One possible explanation for the gap in human performance between general debates and primary debates
is the amount of past exposure:
primary debates receive less media coverage than general debates and humans
may have weaker memories of primary debate highlights.
We also observe interesting similarities and differences when comparing
distinguishing factors that humans mentioned
with
those identified by data-driven methods.
For instance,
negativity is
considered important in both approaches.
However, the two approaches view conversational context differently.
Only 3\% of human responses mention that context matters,
while
our models suggest that it is a significant factor:
highlights\xspace tend to be more different from the speaker's previous utterance,
and are more likely to be picked up in later utterances.
\para{The present work: quoting patterns over time (\secref{sec:media_preferences}).}
The second thrust of this paper examines the media's own preferences and biases in selecting highlights\xspace over time.
Instead of viewing all media outlets as a uniform body, we take advantage of the longitudinal nature of our data and study whether the news media have
become more fragmented over time.
Using a bag-of-sentences approach,
we construct a bipartite graph over media outlets and the sentences they quoted.
Consistent with existing studies on polarization \cite{Baum:PoliticalCommunication:2008}, we observe a decreasing trend in bipartisan coverage in general elections, where a clear two-party structure exists.
When we investigate the similarity between media outlets without partisan assumptions,
we find an increasing trend in the tightness of local clustering, but do not observe that media outlets
are becoming less similar to each other over time on
average.
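For intuition, outlet similarity in such a bipartite graph can be measured, for instance, as the Jaccard overlap between the sets of sentences two outlets quoted. This is one simple choice, not necessarily the measure used in our analysis, and the outlet names and sentence ids below are made up for illustration:

```python
from itertools import combinations


def jaccard(a, b):
    # Overlap between two outlets' sets of quoted sentences.
    return len(a & b) / len(a | b) if a | b else 0.0


# The bipartite graph as an adjacency map: outlet -> quoted sentence ids
# (outlet names and ids are illustrative, not from the dataset).
quoted = {
    "outlet_a": {1, 2, 3},
    "outlet_b": {2, 3, 4},
    "outlet_c": {7, 8},
}
for x, y in combinations(sorted(quoted), 2):
    print(x, y, jaccard(quoted[x], quoted[y]))
# outlet_a and outlet_b share 2 of 4 quoted sentences (similarity 0.5);
# outlet_c quotes a disjoint set of sentences (similarity 0.0).
```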
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.23\textwidth}
\addFigure{\textwidth}{quote_stat/news_dist.pdf}
\caption{Cumulative fraction of quotes by media outlet.}
\label{fig:news_dist}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.23\textwidth}
\addFigure{\textwidth}{quote_stat/quoted_fraction_cmp.pdf}
\caption{Fraction of debate sentences being (partially) quoted.}
\label{fig:speech_quoted}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.23\textwidth}
\addFigure{\textwidth}{quote_stat/quote_fraction_article_cmp.pdf}
\caption{Fraction of texts that are quotes in news articles.}
\label{fig:article_quotes}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.23\textwidth}
\addFigure{\textwidth}{quote_pattern/quote_dist.pdf}
\caption{Fraction of quotes from each decile of turns.}
\label{fig:quote_dist}
\end{subfigure}
\caption{
In \figref{fig:news_dist}, media outlets are sorted by total number of quotes, and there is a heavy tail.
\figref{fig:speech_quoted}
%
%
shows that
%
an increasing fraction of sentences in the debates are (partially) quoted by the media over time.
%
%
\figref{fig:article_quotes} indicates that an increasing fraction of texts in news articles are direct quotes from the debates.
%
\figref{fig:quote_dist} shows that more quotes are from the beginning of a debate.
Throughout the paper, error bars represent standard error, dashed lines show the best linear fit and * in a legend indicates that
%
the linear coefficient is statistically significantly different from 0 with $p<0.05$.
%
%
}
\label{fig:fraction_cmp}
\end{figure*}
\section{An Overview of the Dataset}
\label{sec:data}
Our dataset consists of two parts: debate transcripts and
post-debate news coverage.
We extract transcripts from general debates since 1960
and primary debates since the 2000 presidential election from the \textit{American Presidency Project}.\footnote{\url{http://www.presidency.ucsb.edu/debates.php}.
In fact, presidential debates are a relatively recent phenomenon, despite their prominence nowadays. After the first general debate between John F. Kennedy and Richard Nixon in 1960, there were no general debates until 1976. For more historical details, refer to \url{http://www.cnn.com/2012/09/30/opinion/greene-debates/}.}
Throughout this work,
we define a {{\em turn}} as an uninterrupted utterance by a single speaker.
In order to collect
post-debate news coverage, we use LexisNexis Academic\footnote{\url{http://www.lexisnexis.com/hottopics/lnacademic/}.} to search all newspapers within seven days after each debate.
As LexisNexis indexes newspapers since 1980, we study media highlights of presidential debates from 1980 to 2016.
To achieve high recall, we use debate type (``presidential'', ``vice'', ``democratic'' and ``republican'') and the word ``debate'' as the query.
Although newspapers are a subset of news coverage,
they comprise a long-standing and often-studied segment of the media
and are highly amenable to replication studies.
We leave exploration of additional media sources to future work.
Inspired by existing studies \cite{leskovec2009meme,Niculae:2015:QSP:2736277.2741688,simmons2011memes,tan2016lost},
we define ``highlights''
based on quotes in news articles that directly come from the debates.
In this work, we differentiate quotes from quotations.
We refer to any text in news articles that is enclosed in quotation marks as a {\em quotation}, and {\em quotes} are the subset of quotations that can be matched to a turn at a debate.
We determine whether a quotation matches a turn in the presidential debate based on word overlap and fuzzy matching.
The extraction process
produces pairs of quotes and quoted sentences from the corresponding debate.
We will present the formal definition of highlights\xspace in \secref{sec:quotability}.
Our dataset and supplementary material are available at \url{https://chenhaot.com/papers/debate-quotes.html}.
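The matching step can be sketched with a simple word-overlap criterion. Our exact overlap and fuzzy-matching thresholds are not spelled out here, so the 0.8 cutoff and the helper functions below are illustrative assumptions:

```python
import re


def words(text):
    # Lowercased word tokens, ignoring punctuation.
    return re.findall(r"[a-z']+", text.lower())


def word_overlap(quotation, turn):
    # Fraction of the quotation's words that also appear in the turn.
    q_words = words(quotation)
    t_words = set(words(turn))
    return sum(w in t_words for w in q_words) / len(q_words)


def match_quote(quotation, turns, threshold=0.8):
    # Return the index of the best-matching turn, or None if no turn
    # contains enough of the quotation's words (threshold is illustrative).
    scores = [word_overlap(quotation, t) for t in turns]
    best = max(range(len(turns)), key=scores.__getitem__)
    return best if scores[best] >= threshold else None


turns = [
    "Senator, you are no Jack Kennedy.",
    "I think what Senator Sanders is saying certainly makes sense.",
]
print(match_quote("you are no Jack Kennedy", turns))    # 0
print(match_quote("read my lips no new taxes", turns))  # None
```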
\tableref{tab:basic_stats} shows overall
statistics of our dataset.
We next discuss basic properties of our dataset. In particular, we observe an increasing trend of news media quoting presidential debate moments.
\para{A diverse set of newspapers (\figref{fig:news_dist}).}
It is important to point out that there
are more newspapers over time in our dataset, partly because more media outlets began to quote presidential debates and partly because LexisNexis gradually improved
its collection
of newspapers.
Only four newspapers
quoted general debates in 1980; 334 newspapers
did in 2016.
There are around 700 unique newspapers in total and the top 10\% of the newspapers in quoting debates account for 72\% of all the quotes.
The \emph{New York Times} and \emph{Washington Post} are consistently the top newspapers with the most quotes since 1980.
We also have small newspapers (e.g., \emph{Rhode Island Lawyers Weekly}) and international newspapers (e.g., \emph{The Guardian}).
\begin{table}
\centering
\begin{tabular}{lrp{0.4in}p{0.4in}p{0.4in}}
\toprule
debate type & \#debates & avg. \#sent's & avg. \#tokens & avg. \#quotes\\
\midrule
general & 26 & 1064.9 & 16278.0 & 944.2\\
vice & 9 & 1018.2 & 15974.0 & 618.4\\
Democratic & 38 & 1070.6 & 16028.3 & 330.7\\
Republican & 59 & 1270.8 & 17781.1 & 369.1\\
\bottomrule
\end{tabular}
\caption{Dataset statistics. The last three columns show the average number of sentences, the average number of tokens, and the average number of quotes per debate, respectively.}
\label{tab:basic_stats}
\end{table}
\para{An increasing trend of quoting (\figref{fig:speech_quoted} and \figref{fig:article_quotes}).}
Since there are more media outlets over time,
it is expected that an increasing fraction of sentences in the debates are quoted in the news media.
In comparison, general debates are quoted much more than primary debates.
More surprisingly, we observe that the fraction of text in news articles
consisting of direct quotes is also growing over time.
This suggests that directly quoting the candidates is an increasingly common way to cover debates.
\para{More quotes are from the beginning of a debate (\figref{fig:quote_dist}).}
Later turns in a debate are less likely to be quoted in the media.
This decreasing likelihood
is robust across different types of debates and echoes findings on movie quotes \cite{Danescu-Niculescu-Mizil+Cheng+Kleinberg+Lee:12}.
\section{The Effect of Wording on Media Choices}
\label{sec:textual}
We study how textual factors are associated with media selection of highlights from presidential debates, for three reasons.
First,
media-selected highlights are the only debate content consumed by voters
who do not watch the debate and hence rely on post-debate coverage.
How well the public understands media selection of highlights is worth studying because the public uses this understanding to interpret news coverage.
Second,
debating candidates cannot control their popularity
or the news media's political preferences,
but they can always choose the wording when they seek to deliver a message.
Understanding the effects of wording can thus inform political communication.
Third, it is valuable to know the extent to which we are able to
predict media choices using {\em only} textual information---{\em although
of course textual information alone may not fully predict media choices}.
To study the effect of wording on media choices, we propose an experimental framework that controls for the speaker and the debate situation
and formulate a binary classification task (\secref{sec:quotability}).
We then study the public understanding of media choices by evaluating human performance on this task and analyzing free-form explanations in human surveys (\secref{sec:human_eval}).
We further build on existing theories and develop quantitative features for data-driven classifiers (\secref{sec:machine_features}), and examine their prediction performance in \secref{sec:performance}.
\subsection{Experimental Framework}
\label{sec:quotability}
To investigate
how textual factors are associated with
media selection of highlights\xspace from presidential debates,
we need to control for other confounding factors such as who the speaker is and what state the debate is in.
Inspired by ``natural experiments'' and previous studies about the effect of wording on message sharing, memorability, and persuasion \cite{Danescu-Niculescu-Mizil+Cheng+Kleinberg+Lee:12,Dinardo:Microeconometrics:2010,Tan:ProceedingsOfAcl:2014,Tan:2016:WAI:2872427.2883081},
we propose a classification task that asks humans and machines to decide which sentence was quoted more in the news media between two ``similar'' sentences.
\para{Binary classification framework.}
To formally define highlights\xspace,
we use
a sentence as the basic unit of analysis.
In our natural experiment framework, we find a matching ``negative'' sentence for each
media-selected highlight\xspace
and evaluate whether humans or machines can tell the highlighted\xspace one from the not-highlighted\xspace one in a pair.
Because
debating candidates
have varying popularity and the debate progresses
with different levels of importance (see \figref{fig:quote_dist}),
we match each
highlight\xspace with a not-highlighted\xspace sentence of {similar length} within
three turns by the {same speaker}.\footnote{There exist alternate ways to account for topic shift, e.g., \cite{Nguyen:2012:SHN:2390524.2390536}.}
We consider a sentence {\em highlighted\xspace} if it was among the most quoted
$t\%$ sentences from the corresponding debate.
We opted for this instead of absolute-count thresholding because
the number of quotes increases over time
(see \figref{fig:speech_quoted}).
We experiment with $t=1,2,\dots, 10$.
The results are robust across choices of $t$, and we thus report only the results for $t=10$ (except for overall accuracy). %
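Concretely, the thresholding rule above can be sketched in a few lines (a minimal sketch; the function name and data layout are ours, not the paper's actual pipeline):

```python
def top_quoted(quote_counts, t=10):
    """Mark a sentence as highlighted if it is among the top t% most
    quoted sentences of its debate.  quote_counts maps a sentence id
    to its number of quotes in that debate."""
    ranked = sorted(quote_counts, key=quote_counts.get, reverse=True)
    # keep the top t% of sentences, rounding to at least one
    k = max(1, round(len(ranked) * t / 100))
    return set(ranked[:k])
```

For example, with ten sentences and $t=10$, only the single most-quoted sentence in the debate counts as highlighted.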
Following the above definition, we extract $\sim$14K pairs of sentences from all the debates.
For a pair of sentences, we randomize the order and predict whether the first one was highlighted\xspace.
A random guess gives an accuracy of 50\%.
We randomly select 80\% of our data for training and hold out the other 20\% for testing.
To build machine learning classifiers in \secref{sec:performance},
we construct a vector representation of each pair by extracting features from each sentence and take the difference between them.
We use logistic regression with $\ell_2$-regularization.
This approach is equivalent to a linear framework for the ranking task within a pair \cite{Joachims:2002:OSE:775047.775067}.
We grid search
the best $\ell_2$ coefficient based on five-fold cross-validated accuracy on the training set over
$\bigl\{2^x\bigr\}$, where $x$ ranges over 20 values evenly spaced between $-8$ and $1$.
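A minimal sketch of the pairwise representation and the regularization grid (the feature vectors here are illustrative; the actual classifier is an off-the-shelf $\ell_2$-regularized logistic regression, which we do not reproduce):

```python
def pair_vector(feats_a, feats_b):
    """Represent a (sentence_a, sentence_b) pair by the difference of
    their feature vectors; the binary label indicates whether
    sentence_a was the highlighted one."""
    return [a - b for a, b in zip(feats_a, feats_b)]

def l2_grid(low=-8, high=1, n=20):
    """The search grid {2^x}, with x taking n evenly spaced values
    between low and high."""
    step = (high - low) / (n - 1)
    return [2 ** (low + i * step) for i in range(n)]
```

The difference representation is what makes the classification equivalent to within-pair ranking: a positive weight on a feature means that, all else equal, the sentence with the larger feature value is predicted as highlighted.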
\begin{table}[t]
\centering
%
\begin{tabular}{p{0.32\textwidth}r}
\toprule
category & \% humans \\
\midrule
circular (sound bite, newsworthy) & 30.0 \\
\midrule
provocative, sensational & 25.5 \\
surprising, funny & 17.0\\
issues, informative & 16.0 \\
controversial & 15.0 \\
memory, past exposure & 12.0 \\
\bottomrule
\end{tabular}
\caption{Top factors in human surveys and the percentage of humans that mentioned them.}
\label{tab:human_factors}
\end{table}
\subsection{Human Interpretation of Media Choices}
\label{sec:human_eval}
Using the above classification framework, we first
examine the public understanding of how the news media select highlights\xspace.
We recruit 200 U.S.-based
Mechanical Turk workers as a sample of
untrained humans (the public) to perform the prediction task on randomly sampled pairs from the held-out set.
In addition, we ask our participants to explain the important factors that they use to make predictions in free-form responses.
Specifically, we request each participant to label 25 pairs and finish
an exit survey to explain what factors they used to make predictions
as well as their experience of watching debates and their political ideology.
For a pair, we show the highlighted\xspace sentence, the not-highlighted\xspace sentence, and a few surrounding sentences\footnote{For both sentences in a pair, we include
up to
3 sentences before it and 3 sentences after it to provide some context for the participants to understand the current state of the debate.} in the order
they occurred in the debate and ask the participants to guess which one was quoted more in the news media.
To make sure that they understand the task, we prepare three training
pairs and require a comprehension quiz before they start.
We also provide a bonus for each correct guess
to incentivize participants to try their best.
Further details of the human experiments are in the appendix.
\para{Media choices are not obvious to the public.}
The average human accuracy is 60\% and Fleiss' $\kappa$ between human labels is 0.2, indicating slight agreement.\footnote{Surprisingly,
there is no clear relationship between an individual's
prediction performance and their self-reported level of experience or political
ideology.}
These observations suggest that media choices are not obvious to humans, at least based on textual content.
One plausible explanation is that the textual information is insufficient to explain media choices: media choices are influenced by external factors such as statements made outside presidential debates and public opinion shifts.
However, as we will show later, there seem to
exist signals in the wording that are not salient to untrained
humans.
Another more pessimistic explanation is that humans have a limited understanding of how the news media select highlights.
\para{Important factors in human surveys.} To examine the important factors from the perspective of humans,
we categorize free-form explanations in human surveys and present the top factors in \tableref{tab:human_factors}.
The most common factors cited are circular;
i.e., 30\% of the participants mentioned that they made decisions based
on which one is newsworthy or which one makes a good sound bite.
This suggests that it is nontrivial for humans to reason about media selection of highlights.
Among the next five most frequently mentioned categories, participants mentioned {\em sensational} (emotional, negative, shocking, etc.) and {\em surprising or funny}.
Most of these factors are difficult to operationalize computationally.
Interestingly, {\em memory or past exposure} was explicitly mentioned by 12\% of the participants,
indicating that humans may predict media choices even less accurately without unavoidable media exposure.
These top factors do not directly align with
existing qualitative studies.
For instance, \citet{clayman1995defining}, the most relevant work, points out three important factors:
1) {\em narrative relevance} (how well a moment fits in a news story); 2) {\em conspicuousness} (how much a moment stands out in a debate); 3) {\em extractability} (how self-contained a moment is).
It is unclear how to map the factors that our participants mentioned to these three.
Notably, only 3\% of the participants mentioned that context in which a sentence occurred matters,
while an equal number of people
brought up
appealing to liberal voters as a criterion (none discussed the other direction).
Extractability in \citet{clayman1995defining}, or ``can be taken out of context,'' was viewed as important by 4\% of the participants.
However, ``potential to be twisted'', a slightly different but more malicious interpretation, was explicitly mentioned by 6.5\% of the participants.
These observations indicate a negative attitude toward the news media,
or at least some skepticism about their role in American
politics.
\begin{table*}[t]
%
%
\begin{center}
\begin{tabulary}{\textwidth}{@{}p{0.1\textwidth}@{\hspace{2pt}} @{\hspace{2pt}} p{0.72\textwidth} @{\hspace{2pt}}p{0.12\textwidth} @{\hspace{2pt}}r@{}}
%
\toprule
Feature set & \multicolumn{1}{c}{Related theories/intuitions and brief description} & \multicolumn{2}{c}{Significance} \\
%
\midrule
\multirow{2}{\hsize}{Informative-ness}
& \multirow{2}{\hsize}{We use length as a proxy of informativeness.
%
Longer sentences are more likely to be
highlighted\xspace despite our control for length, as discussed in \secref{sec:quotability}.
%
This echoes findings in \citet{Tan:ProceedingsOfAcl:2014,Tan:2016:WAI:2872427.2883081}.}
& \multirow{2}{*}{length} & \multirow{2}{*}{$\uparrow\uparrow\uparrow\uparrow$}\\
& & & \\
\noalign{\smallskip}
%
\hline
\multirow{3}{*}{Emotions}
& \multirow{3}{\hsize}{We consider positive and negative words in \citet{pennebaker2001linguistic}.
Highlighted\xspace sentences use significantly more negative words, while there is no difference in positive words. This is consistent with negativity bias \cite{rozin2001negativity} and the negativity found in the news media \cite{geer2012news}.}
& posemo & \\
& & negemo & $\uparrow\uparrow\uparrow\uparrow$ \\
& & & \\
\noalign{\smallskip}
\hline
\multirow{2}{*}{Contrast}
& \multirow{2}{\hsize}{%
%
We use negations and negative conjunctions (e.g., \emph{not, but, although}) to capture contrast.
Our result echoes \citet{atkinson1984our}, which demonstrates the importance of contrast.}
& negation & $\uparrow\uparrow\uparrow\uparrow$\\
& & negative conj. & $\uparrow\uparrow\uparrow\uparrow$\\
\noalign{\smallskip}
\hline
\multirow{3}{\hsize}{Personal pronouns}
& \multirow{3}{\hsize}{In general, highlighted\xspace sentences use more personal pronouns except first person plural and third person plural.
One explanation for the contrast between \emph{I} and \emph{we} is that
media outlets prefer statements about candidates themselves to unifying statements using \emph{we}.%
%
}
& i, you, she, he & $\uparrow\uparrow\uparrow\uparrow$\\
& & they & \\
& & we & $\downarrow\downarrow\downarrow\downarrow$\\
\noalign{\smallskip}
\hline
\multirow{2}{\hsize}{Uncertainty/ subjectivity}
& \multirow{2}{\hsize}{Hedging is a common way to express uncertainty \cite{lakoff1975hedges} and we use a dictionary from \citet{tan+lee:16tad}.
%
In the debate context, hedges may also represent subjectivity.
}
& \multirow{2}{*}{hedges} & \multirow{2}{*}{$\uparrow\uparrow\uparrow\uparrow$}\\
& & & \\
\noalign{\smallskip}
\hline
\multirow{2}{\hsize}{Strong emphasis}
& \multirow{2}{\hsize}{Superlatives represent the extreme form of an adjective or an adverb and can be used to put emphasis on a statement.
Surprisingly, highlighted\xspace sentences do not use more superlatives.}
& \multirow{2}{*}{superlatives} & \\
& & & \\
\noalign{\smallskip}
\hline
\multirow{2}{*}{Generality}
& \multirow{2}{\hsize}{We count indefinite articles to measure generality. Our findings are consistent with \citet{Danescu-Niculescu-Mizil+Cheng+Kleinberg+Lee:12,Shahaf:2015:IJI:2783258.2783388,Tan:ProceedingsOfAcl:2014}.
%
}
& \multirow{2}{*}{indef. articles} & \multirow{2}{*}{$\uparrow\uparrow\uparrow$}\\
& & & \\
\noalign{\smallskip}
\hline
\multirow{4}{\hsize}{Language model}
& \multirow{4}{\hsize}{To capture surprise or conspicuousness,
we compute language model scores
%
%
based on NYT texts and
part-of-speech (POS) tags in the WSJ portion of Penn Treebank.
However, the only significant feature
%
is that highlighted\xspace sentences are more similar to NYT texts in unigram usage. This finding is consistent with
%
message sharing \cite{Tan:ProceedingsOfAcl:2014}
but is different from
%
memorable movie quotes \cite{Danescu-Niculescu-Mizil+Cheng+Kleinberg+Lee:12}.}
& unigram & $\uparrow\uparrow$ \\
& & bi-, trigram & \\
& & POS \{1, 2, 3\}-gram & \\
\noalign{\smallskip}
\hline
\multirow{3}{*}{Parallelism}
& \multirow{3}{\hsize}{Using parallel sentence structure is a rhetorical technique, e.g., the first sentence in \tableref{tab:example} and ``I've never wilted in my life, and I've never wavered in my life''.
%
We use the average longest common subsequence between sub-sentences to measure it \cite{songlearning}.}
& \multirow{3}{*}{parallelism} & \multirow{3}{*}{$\uparrow$}\\
& & & \\
& & & \\
\bottomrule
%
\end{tabulary}
\end{center}
%
%
\caption{Testing results of sentence-alone features. Upward arrows
indicate that highlighted\xspace sentences have larger scores in that
feature, while downward arrows suggest the other way around
($\uparrow\uparrow\uparrow\uparrow: p < 0.0001$,
$\uparrow\uparrow\uparrow: p < 0.001$, $\uparrow\uparrow: p
< 0.01$, $\uparrow: p < 0.05$, the same for downward arrows; $p$ refers to the $p$-value after the Bonferroni correction).}
\label{tab:sentence_features}
%
\end{table*}
\subsection{Quantitative Features}
\label{sec:machine_features}
Building on the above human intuitions and existing studies, we
develop two sets of features: sentence-alone features, and
conversation-flow features that attempt to capture conversational
dynamics.
In this section, we use training data to identify important features that distinguish highlights from non-highlights, and compare the signals from data-driven methods with the factors from humans' free-form explanations.
In addition to these two sets of features,
we will employ bag-of-words features as a strong baseline in \secref{sec:performance}, i.e., unigrams and bigrams that occur at least 5 times in the training set.
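A minimal sketch of the baseline vocabulary construction (whitespace tokenization stands in for whatever preprocessing the actual pipeline uses):

```python
from collections import Counter

def bow_vocab(train_sentences, min_count=5):
    """Unigrams and bigrams that occur at least min_count times in
    the training sentences."""
    counts = Counter()
    for s in train_sentences:
        toks = s.lower().split()
        counts.update(toks)                 # unigrams
        counts.update(zip(toks, toks[1:]))  # bigrams
    return {g for g, n in counts.items() if n >= min_count}
```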
\para{Sentence-alone features.}
We first examine features that do not rely on any contextual information in the debates and can be extracted from a sentence alone.
We evaluate whether highlighted\xspace sentences are significantly different from not-highlighted\xspace sentences in each feature.
Specifically, for each feature, we compute the feature values for both highlighted\xspace and not-highlighted\xspace sentences and
conduct one-sided paired $t$-tests with the Bonferroni correction \cite{bonferroni1936teoria}.
\tableref{tab:sentence_features} presents intuitions and theories for each feature set, including related work.
We
present the full computational details in the appendix.
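The correction step itself has a simple form: each raw $p$-value is multiplied by the number of simultaneous tests and capped at 1. A minimal sketch (the raw one-sided paired $t$-test $p$-values are assumed to be computed elsewhere, e.g., with a statistics library):

```python
def bonferroni(p_values):
    """Bonferroni correction: multiply each raw p-value by the number
    of simultaneous tests, capping the result at 1."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]
```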
By comparing results in \tableref{tab:sentence_features} with previously discussed factors from human surveys, we find that the top factors in human surveys also tend to be statistically significant signals from data-driven methods, such as length (informative), and negative emotions (sensational).
But this is not always the case, e.g., positive emotions and strong emphasis were not statistically significant signals.
Meanwhile, signals such as personal pronouns, hedges, and language model features arise from data-driven methods, but humans may not pay as much attention to them.
A complete comparison between computational features and human factors
would require operationalizing controversiality, sensationalism, humor, etc.;
we leave this to future work.
\para{Conversation-flow features.}
Although only a handful of participants in our human experiment mentioned that context matters, %
conversational dynamics in the debates may contribute to the selection of highlights \cite{Zhang:ProceedingsOfNaacl:2016}.
We propose a novel set of conversation-flow features and indeed observe intriguing conversational dynamics around the highlights.
In order to capture the local context of a sentence ($s$),
we compare the sentence with its neighboring turns.
We use a window $w$ and denote the content words in the next $w$ turns by the same speaker as $\operatorname{Words}^{\mathrm{post}}_{\mathrm{self}}(w)$ and the content words in the previous $w$ turns by the same speaker as $\operatorname{Words}^{\mathrm{prev}}_{\mathrm{self}}(w)$.
Similarly, we extract $\operatorname{Words}^{\mathrm{post}}_\mathrm{other}(w)$ and $\operatorname{Words}^{\mathrm{prev}}_\mathrm{other}(w)$ for other speakers.
We compute Jaccard similarity between
the sentence ($\operatorname{Words}_s$) and
its neighboring turns.
For instance,
$$
\operatorname{Jaccard}^{\mathrm{post}}_{\mathrm{other}}(5)=
\frac
{\lvert \operatorname{Words}_s \cap \operatorname{Words}^{\mathrm{post}}_{\mathrm{other}}(5) \rvert}
{\lvert \operatorname{Words}_s \cup \operatorname{Words}^{\mathrm{post}}_{\mathrm{other}}(5) \rvert}$$
\noindent measures the similarity between the
sentence and
the 5 turns by other speakers after the sentence.
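The Jaccard computation is the same for every window; a minimal sketch (tokenization and content-word filtering are simplified here):

```python
def jaccard(words_a, words_b):
    """Jaccard similarity between two bags of content words,
    treated as sets: |A & B| / |A | B|."""
    a, b = set(words_a), set(words_b)
    if not a and not b:
        return 0.0  # two empty windows: define similarity as 0
    return len(a & b) / len(a | b)
```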
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\addFigure{\textwidth}{quotability_feature/{jaccard_self_content_window_0.1}.pdf}
\caption{Similarity to one's own turns.}
\label{fig:simi_self}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\addFigure{\textwidth}{quotability_feature/{jaccard_other_content_window_0.1}.pdf}
\caption{Similarity to turns by other speakers.}
\label{fig:simi_other}
\end{subfigure}
\caption{%
\figref{fig:simi_self} and \figref{fig:simi_other} present conversation-flow features that are based on Jaccard similarity between a sentence and its neighboring turns (negative windows for previous turns, positive windows for later turns, error bars are tiny).
%
%
In \figref{fig:simi_self}, a kink exists around 0
%
in similarity to turns by the same speaker, while in \figref{fig:simi_other}, highlighted\xspace sentences are consistently more similar to turns by other speakers.
%
%
%
%
\label{fig:flow_features}}
\end{figure}
\begin{itemize}[leftmargin=*]
\item A kink exists in similarity to turns by the same speaker (\figref{fig:simi_self}).
%
Highlighted\xspace and not-highlighted\xspace sentences present the same level of similarity
to turns by the same speaker until
%
%
the last turn before the sentence.
An interesting kink shows up around the sentence:
highlighted\xspace sentences are less similar to the turn immediately before
%
but are more similar to turns after.
%
We take this as a sign that ``changepoints'' in a
monologue, where the speaker shifts in topic or style, are more likely
to be quoted.
%
%
\item Highlighted\xspace sentences are more similar to neighboring turns by other speakers (\figref{fig:simi_other}).
%
%
Regarding the overall trend,
both for highlighted\xspace and not-highlighted\xspace sentences,
the similarity is smaller
for the immediate neighboring turns ($w$ is 1 or -1) than when more turns are included.
%
%
%
%
This is because moderators often speak right before and
right after candidates, and moderators' word choices differ markedly from candidates' (owing to their different communicative goals).
%
\end{itemize}
\subsection{Prediction Performance}
\label{sec:performance}
Finally, we study to what extent media choices can be predicted only from textual factors by examining classification performance on the held-out set.
We also investigate the difference between general debates and primary
debates.
\begin{figure*}
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\addFigure{\textwidth}{performance/{heldout_subset_0.1}.pdf}
\caption{Accuracy of different features vs. humans}
%
\label{fig:heldout_10}
\end{subfigure}
%
\begin{subfigure}[t]{0.45\textwidth}
\centering
\addFigure{\textwidth}{performance/{all_0.1_debate_group_with_all}.pdf}
%
\caption{cross-group performance}
\label{fig:cross_debate_type}
\end{subfigure}
%
%
%
%
%
\caption{Classification accuracy.
%
In \figref{fig:heldout_10}, each point measures the accuracy of
%
a feature set (indicated by the color) on the subset of held-out data where the quote count of the highlighted\xspace sentence in a pair is among the top $x$\% quoted sentences in the debate.
In other words, smaller $x$ values correspond to ``easier'' pairs where the highlighted\xspace sentence is more prominent.
$y$ values for $x=10$ give the accuracies on the full held-out
data (all, 66.0\% vs. human, 60.1\%, $p= 0.0008$).
%
\figref{fig:cross_debate_type} illustrates the accuracy when we apply the classifier trained on a debate type to another debate type.
Different colors represent the debate type of the training data,
%
and the $x$-axis represents the debate type of the test data.
Note that ``human'' reports the accuracy of human predictions on
the corresponding debate type in the test data, and is not a
function of the training set.
%
%
}
\label{fig:performance}
\end{figure*}
\para{Overall prediction accuracy (\figref{fig:heldout_10}).}
Using only textual factors, our classifier achieves an accuracy of 66\% on the held-out set for $t=10$.
The accuracy of both machines and humans increases for ``easier'' pairs,
the ones in which highlighted\xspace sentences were quoted more frequently.
This trend confirms that meaningful signals exist in the wording.
The accuracy of machines (``all'') is always above that of humans, and the difference is statistically significant.
The bag-of-words (BOW) model already outperforms humans in this task.
In comparison with the features that we propose in
\secref{sec:machine_features} ({``all -- BOW''}\xspace),
although {``all -- BOW''}\xspace has far fewer features, it
yields a similar accuracy to BOW.
{``all -- BOW''}\xspace
works relatively well when highlighted\xspace sentences in a pair
were quoted more frequently and when there are fewer training instances, while BOW has an advantage
when the highlighted\xspace sentence is closer to the 10\% threshold (right side of \figref{fig:heldout_10}).
Combining all features (including BOW; ``all'') always leads to the best accuracy.
\emph{Note that the accuracies of machines and humans are not meant to be compared head-to-head, since machines rely on training data to identify the useful signals, while humans depend on their daily (potentially biased) media exposure.}
Instead, we view this accuracy gap as evidence suggesting that some signals in the wording are hard for humans to identify.
This also points to the potential to inform the public with the help of machines.
\para{Differences across debate types (\figref{fig:cross_debate_type}).}
As primary debates have more candidates and receive less coverage than general debates,
the news media may employ different criteria to select highlights\xspace.
To explore the differences,
we train classifiers on subsets of the training data from each debate type and test on each type of debate.
Differences indeed exist in how wording affects media selection:
the classifiers do not usually perform well when tested on other debate types, except from Republican primary debates to general debates.
In fact, using all training instances does not improve
over using the pairs only from the matching debate type, despite the
latter's use of fewer training instances.\footnote{A more comparable setup is to subsample training instances to match the size in a particular debate type.
With this setup, training only on the matching debate type
(known as ``in-domain'') always outperforms training on all debate types.
%
}
\para{Machines outperform humans only in primary debates.}
Human accuracy is much better in general debates than in primary debates.
In fact, the advantage of our classifiers (``all'') in \figref{fig:heldout_10} mainly comes from primary debates.
The reason may be that general debates receive more attention and coverage, so
humans are more likely to remember what was selected as highlights\xspace; indeed, ``memory, past exposure'' was an important factor in the surveys.
If this is the case, humans might have an even more limited understanding of media choices
without the influence of previous exposure.
We also observe that humans perform better in general debates after 2000 than in those before 2000.
\section{Media Preferences over Time}
\label{sec:media_preferences}
%
%
%
%
%
Beyond the effect of wording,
a media outlet's own preferences can potentially impact how highlights\xspace are
selected.
In fact, media polarization
has attracted significant interest from both researchers and the public
\cite{Baum:PoliticalCommunication:2008,iyengar2009red}.
We take advantage of the longitudinal nature of our dataset, and
evaluate the extent of media fragmentation over time.
Building on the intuition that outlets are similar if they quote the same sentences with similar sentiments, we employ two approaches to quantify the fragmentation level.
We first consider the existing two-party structure in the U.S. and evaluate %
whether the media quote both parties ``evenly''.
Second, we borrow concepts from the clustering literature and examine the overall similarity between media outlets beyond the partisan assumption.
\subsection{Bipartisan Coverage}
\begin{figure}[t]
\centering
\includegraphics[clip, trim=0.cm 0.2cm 0.0cm 0.2cm,width=0.47\textwidth]{figure/bipartite-graph.png}
\vspace{-.35cm}
\caption{Bipartite graph between media outlets and highlights
%
from presidential debates. The edges are sampled from presidential
debates in 2016.
%
}
\label{fig:bipartite_example}
\end{figure}
Because
U.S. presidential elections
typically involve two major parties,
we first take advantage of this two-party structure
and
evaluate fragmentation by how much ``Democratic-leaning'' media outlets quote Republican candidates and vice versa.
\para{Bipartite graph representation.}
A natural representation of quoting patterns is a bipartite graph between outlets and sentences, where an edge between media outlet $i$ and candidate sentence $j$ indicates that $i$ quotes $j$ (e.g., \figref{fig:bipartite_example}).
This graph can be represented using a media-sentence matrix $\mathbf{D} \in
\mathbb{R}^{M \times S}$, where each row represents a media outlet
($M$ is the number of
outlets) and each column represents
a sentence from a candidate ($S$ is the number of sentences).
To obtain $\mathbf{D}_{ij}$, we use three methods to account for both the frequency and the sentiment of a media outlet quoting a candidate utterance:
{\em a.~count} ($\mathbf{D}_{ij}$ is the number of times that sentence $j$ was quoted in outlet $i$);
{\em b.~positive context} ($\mathbf{D}_{ij}$ is the number of positive words in the 30 words around each quote of sentence $j$ in outlet $i$);
{\em c.~negative context} (similar to positive context but counting negative words).
We normalize each row so that the $\ell_2$-norm is 1 and remove media outlets that used fewer than 10 quotes in an election.
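A minimal sketch of building one variant of $\mathbf{D}$, the count matrix with $\ell_2$-normalized rows (function names and the event-list input format are our own):

```python
import math
from collections import Counter

def l2_normalize(row):
    """Scale a row of the media-sentence matrix D to unit l2-norm."""
    norm = math.sqrt(sum(v * v for v in row))
    return [v / norm for v in row] if norm > 0 else row

def count_matrix(quote_events, outlets, sentences):
    """D_ij = number of times outlet i quoted sentence j (the 'count'
    variant), with each row l2-normalized.  quote_events is a list of
    (outlet, sentence) pairs, one per observed quote."""
    counts = Counter(quote_events)
    return [l2_normalize([counts[(i, j)] for j in sentences])
            for i in outlets]
```

The positive- and negative-context variants differ only in what is counted for each event (positive or negative words in the surrounding 30 words rather than the quote itself).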
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\addFigure{\textwidth}{media_bias/cut_fractions.pdf}
\caption{The fraction of weights in the min-cut.}
\label{fig:cut_fractions}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\addFigure{\textwidth}{media_bias/media_bias_mean_dist.pdf}
\caption{Average pairwise similarity across all media outlets.}
\label{fig:bias_global_dist}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\addFigure{\textwidth}{media_bias/media_bias_dist.pdf}
\caption{Average similarity with 3 nearest neighbors.}
\label{fig:bias_local_dist}
\end{subfigure}
%
%
%
%
%
%
\caption{\figref{fig:cut_fractions}
estimates whether the outlets quote evenly across two parties and
%
shows a declining trend over time.
%
\figref{fig:bias_global_dist} gives the global average pairwise similarity and there are no clear trends,
while \figref{fig:bias_local_dist} shows that ``local similarity'' (average similarity with top nearest neighbors) increases over time.
%
There are not enough media outlets with at least 10 quotes to
compute meaningful results in the 80s, so we exclude those years.
In all figures, different colors represent different ways to represent media outlets with their quoting patterns.
%
%
%
%
%
%
\label{fig:bias}}
\end{figure*}
\para{Using min-cut to identify bipartisan coverage.}
We focus on news coverage of general debates, because during the
past three decades there has always been one presidential candidate
and one vice-presidential candidate from each of the Democratic and
Republican parties.\footnote{We ignore all independent candidates in this analysis.}
We thus build matrices for the bipartite graphs based on general debates in each presidential election.
To the extent that outlets ``lean''
one way or the other, we expect them to quote one party or the other
more (or to positively quote one party more, or to negatively quote one
party more).
In the extreme case, a subset of media outlets might quote only the Republican candidates and the rest only the Democratic candidates; in other words, there would exist no bipartisan coverage.
These intuitions align with the idea of using min-cut to identify bipartisan coverage.
If we apply the min-cut algorithm to separate sentences from Democratic
candidates and sentences from Republican candidates in the bipartite graph
\cite{stoer1997simple},
then the extreme case where no bipartisan coverage exists leads to a min-cut of 0.
Conversely, the more costly the min-cut, the more
entangled the two sides are.
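The analysis uses the min-cut algorithm of \citet{stoer1997simple}. Purely for illustration, the same cut weight can be computed via max-flow/min-cut duality after merging all Democratic-candidate sentences into a source $s$ and all Republican-candidate sentences into a sink $t$; a minimal Edmonds--Karp sketch (the graph encoding is our own assumption, not the paper's implementation):

```python
from collections import deque

def min_cut_value(cap, s, t):
    """Edmonds-Karp max-flow; by max-flow/min-cut duality the result
    equals the total weight of a minimum s-t cut.
    cap: nested dict {u: {v: capacity}} of directed edges (add both
    directions to model an undirected quote graph)."""
    # build residual capacities, including reverse edges at 0
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u, nbrs in cap.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    res.setdefault(s, {})
    res.setdefault(t, {})
    flow = 0
    while True:
        # BFS for an augmenting path from s to t
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow  # no augmenting path: flow equals min-cut weight
        # push the bottleneck capacity along the path
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][w] for u, w in path)
        for u, w in path:
            res[u][w] -= bottleneck
            res[w][u] += bottleneck
        flow += bottleneck
```

In the extreme no-bipartisan-coverage case, no outlet connects both sides, so no augmenting path exists and the cut weight is 0.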
\para{Decreasing bipartisan coverage (\figref{fig:cut_fractions}).}
We thus compute the fraction of weights in the min-cut; smaller values of this statistic
indicate that
media outlets quote largely from one of the two sides and little from
the other.\footnote{Note that media outlets that preferentially quote
Democratic candidates may not support the Democratic party, because
a quote can be presented in a negative light.
We thus also use sentiment information in the context of quotes to populate $\mathbf{D}$.
}
\figref{fig:cut_fractions} shows that
cross-cutting coverage in the min-cut is declining over time, under
all three definitions.
It is worth noting that the fraction of weights in the min-cut is not small (about 40\%, upper bounded by 50\%) despite the declining trend, which suggests that media outlets tend to at least cover both sides.
The fact that there is less cross-cutting coverage in sentiment (both positive and negative) than in counts indicates that although media outlets quote both sides, the sentiment differs.\footnote{In terms of the partition resulting from the min-cut, in most years, the majority of the media outlets are in the partition associated with Republican candidate sentences, which is at least consistent with a recent report in 2016 \cite{patterson2016news}.
This is especially true for big media outlets such as the \emph{New York Times} and \emph{Washington Post}. A notable exception is that the \emph{Washington Post} is in the Democratic partition for positive contexts in 2008.}
\subsection{Beyond Partisan Assumptions}
An alternative way to estimate fragmentation is to study how well media outlets cluster together without partisan assumptions.
We build matrices based on both primary debates and general debates in each presidential election.
We use each row $\mathbf{D}_i$ to represent media outlet $i$ and investigate the quality of clustering between media outlets.
Clustering purity is typically evaluated
at two levels: in a pure clustering,
inter-cluster distances
are large and intra-cluster distances are small \cite{rousseeuw1987silhouettes}.
When we use the Silhouette score (a measure of purity) to identify the
optimal number of clusters in a $K$-means clustering of media representations in $\mathbf{D}$,
the optimal number is close to the total number of media outlets, $M$, which suggests that there are many isolated small clusters.
We thus examine fragmentation at the above two levels by considering each media outlet as a singleton:
whether media outlets have become less similar
to each other overall (analogous to
inter-cluster distances); and
whether media outlets have become more similar to their nearest neighbors
(analogous to
intra-cluster distances).
We use cosine similarity to measure the similarity between a pair of media outlets.
\para{No clear trends in ``inter-cluster'' similarity (\figref{fig:bias_global_dist}).}
To evaluate whether the news media become less similar to each other, we calculate the global mean of all pairwise similarities.
A decreasing
global mean would indicate fragmentation at the
global level, but we do not observe consistent trends or any
statistically significant correlation with time (\figref{fig:bias_global_dist}).
The similarity in positive context and negative context is always
smaller than the similarity in usage frequency.
This again suggests that, although different media outlets may quote the same sentences, they present different opinions around the quotes.
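Concretely, the global statistic is the mean cosine similarity over all unordered pairs of outlet rows; a minimal sketch with hypothetical count vectors (not the paper's data):

```python
from math import sqrt

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    return num / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def global_mean_similarity(rows):
    """Mean cosine similarity over all unordered pairs of media rows D_i."""
    sims = [cosine(rows[i], rows[j])
            for i in range(len(rows)) for j in range(i + 1, len(rows))]
    return sum(sims) / len(sims)

# Hypothetical count vectors for three outlets over four debate sentences.
D = [[3, 1, 0, 2], [2, 2, 1, 1], [0, 1, 4, 0]]
print(round(global_mean_similarity(D), 3))  # 0.457
```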
\para{Increasing ``intra-cluster'' similarity (\figref{fig:bias_local_dist}).}
To capture ``local'' similarity that is analogous to intra-cluster distances,
we propose a statistic that
measures the average similarity between a media outlet and
its $K$ nearest neighbors.
We refer to this as {\em local similarity}.
A growing local similarity suggests increasing tightness at the local level.
We observe consistently increasing trends across all three definitions, and
this observation is robust to the choice of $K$.
One possible explanation is that the number of media outlets grows over time, making it more likely for an outlet to have a close nearest neighbor.
However, this hypothesis alone cannot explain our observations,
because it would also predict that ``inter-cluster'' similarity increases, which we do not observe.
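The local statistic can be sketched as follows, given a symmetric similarity matrix such as the pairwise cosine similarities between outlet rows (the matrix here is hypothetical):

```python
def local_similarity(sim, K):
    """Average over outlets of the mean similarity to their K nearest
    neighbours, given a symmetric similarity matrix `sim`."""
    n = len(sim)
    per_outlet = []
    for i in range(n):
        neighbours = sorted((sim[i][j] for j in range(n) if j != i),
                            reverse=True)[:K]
        per_outlet.append(sum(neighbours) / len(neighbours))
    return sum(per_outlet) / n

# Hypothetical similarities for four outlets forming two tight pairs.
S = [[1.0, 0.9, 0.2, 0.1],
     [0.9, 1.0, 0.1, 0.2],
     [0.2, 0.1, 1.0, 0.8],
     [0.1, 0.2, 0.8, 1.0]]
print(local_similarity(S, K=1))  # 0.85
```

In this toy configuration the local similarity (0.85) far exceeds the mean of all off-diagonal entries (about 0.38): locally tight neighborhoods coexist with low global similarity.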
\para{Discussion.}
Our observations are derived from a three-decade dataset and are consistent with past work on polarization and partisan selective exposure \cite{Baum:PoliticalCommunication:2008,Stroud:JournalOfCommunication:2010},
but the reasons behind the decreasing bipartisan coverage and the increasing %
local similarity
require further investigation.
Our results are certainly limited by the relatively short history of presidential debates.
It is also important to note that our study does not take into account the
influence among media outlets themselves
\cite{Golan:JournalismStudies:2007,boyle:2001,kim+barnett:1996}.
For instance, \citet{Golan:JournalismStudies:2007} shows a correlation between the morning \emph{New York Times} and three evening television news programs.
Further studies of how highlight selection diffuses among media outlets
can shed more light on our observations.
\section{Related Work}
\label{sec:related}
We have discussed the most relevant studies throughout the paper.
Here we discuss three additional strands of related work.
\para{The effect of post-debate coverage on public opinion.}
Studies have shown that media choices about coverage can
have serious consequences \cite{brubaker2009effect,Fridkin01012008,Hillygus:AmericanJournalOfPoliticalScience:2003,hwang2007applying,Tsfati01072003,patterson2016press,gross+etal:17}.
For instance, \citet{Fridkin01012008} show that in the 2004 U.S. election,
citizens who only read the news coverage
rated Kerry more negatively than those who watched the debate firsthand, because media outlets highlighted the moment when Kerry brought up Cheney's lesbian daughter, even though this moment drew little attention from the live audience.
\citet{boydstun2014real} develop a mobile app to collect real-time feedback for presidential debates.
\citet{patterson2016press} discuss the media's critical tendency and its partisan consequences in the U.S.
\para{Influences between the media and politicians.}
Although our work focuses on
media selection of highlights,
politicians often behave based on their beliefs about media preferences, which suggests complex dynamics between the media and politicians \cite{KarenCallaghan:PoliticalCommunication:2010,blumler1999third,grand2015emergence,clayman2002news,prat2011political}.
For instance, \citet{blumler1999third} discuss
politicians' increasing adaptation to different news values and formats
in the presence of media abundance.
Also relevant is research on
the influence of politicians on the media, including agenda-setting, rhetorical positioning, and framing \cite{mccombs1972agenda,Chong:JournalOfCommunication:2007,Entman:JournalOfCommunication:1993,sim:2013:emnlp,Wells:PoliticalCommunication:2016}.
\para{Power dynamics in debates and other types of coverage.}
Studies have shown that language use and topic control in debates can reflect influence between candidates and indicate power dynamics
\cite{Nguyen:2012:SHN:2390524.2390536,Nguyen:2014:MTC:2629710.2629739,prabhakaran2014staying}.
More recently,
social media have also become an important channel to monitor public
opinion on debates in real time \cite{broersma2012social,diakopoulos2010characterizing}
and potentially change news media coverage.
\section{Conclusion}
\label{sec:conclusion}
In this paper,
we conduct the first systematic study on
media selection of highlights\xspace from presidential debates, using a three-decade dataset.
We introduce a computational framework that controls for the speaker and the debate situation to study the effect of textual factors.
First, we find that
media choices are not obvious to
Mechanical Turk workers,
suggesting that the public may
have a limited understanding of how the news media choose highlights\xspace in news coverage.
Second, although machines
and humans achieve similar accuracy in general debates, machines
significantly outperform humans in predicting media-chosen highlights\xspace in primary debates.
Our findings indicate that there exist signals in the textual
information that untrained humans do not find salient.
In particular,
highlights are locally distinct from the speaker's previous turn, but are later echoed more by both the speaker and other participants.
We further demonstrate a declining trend of bipartisan coverage using macro quoting patterns and analyze the quality of clustering between media outlets without partisan assumptions.
The news media play an important role in connecting the public and politicians.
Our work indicates that the public may not understand what factors matter in media choices.
Quantitative studies in this domain can complement qualitative theories to improve the understanding of political and media communication for both scholars and the public.
\para{Acknowledgments.} We thank Yejin Choi, Aaron Jaech, Luke Zettlemoyer,
anonymous reviewers, and all members of Noah's ARK for helpful comments and discussions. This research was made possible by a University of Washington Innovation Award.
\section{Introduction}
\label{sec:into}
The existence of the tiny neutrino mass can be naturally explained by the seesaw mechanism \cite{Minkowski:1977sc, Yanagida:1980xy, Schechter:1980gr, Sawada:1979dis, GellMann:1980vs,Glashow:1979nm, Mohapatra:1979ia} which extends the Standard Model (SM) through Majorana type Right Handed Neutrinos (RHNs).
As a result the SM light neutrinos become Majorana particles. Alternatively, there is a simple model,
the neutrino Two Higgs Doublet Model ($\nu$THDM) \cite{Davidson:2009ha, Wang:2006jy}, which can generate Dirac mass terms for the
light neutrinos as well as for the other fermions in the SM. This model contains two Higgs doublets; one is the SM-like Higgs doublet and the other has a small VEV of $\mathcal{O}(1)$ eV
to explain the tiny neutrino mass. As a consequence, the neutrino Dirac Yukawa coupling can be of order one. It has been discussed in \cite{Davidson:2009ha} that a softly broken global $U(1)_X$ symmetry
can forbid the Majorana mass terms of the RHNs; a hidden $U(1)$ gauge symmetry can also be applied to realize the $\nu$THDM as in ref.~\cite{Nomura:2017wxf}. In this model all the SM fermions obtain Dirac mass terms via Yukawa interactions with the SM-like Higgs doublet $(\Phi_2)$ whereas only the neutrinos get
Dirac masses through the Yukawa coupling with the other Higgs doublet $(\Phi_1)$. Another scenario of the generation of Dirac neutrino mass through a dimension five operator has been studied in \cite{CentellesChulia:2018gwr}. The corresponding Yukawa interactions of the Lagrangian can be written as
\begin{eqnarray}
\mathcal{L}_{Y}=-\overline{Q}_L Y^u \widetilde{\Phi}_2 u_R -\overline{Q}_L Y^d \Phi_2 d_R
-\overline{L}_L Y^e \Phi_2 e_R -\overline{L}_L Y^\nu \widetilde{\Phi}_1 \nu_R +\rm{H. c.}
\label{Yuk1}
\end{eqnarray}
where $\widetilde{\Phi}_i = i \sigma_2 \Phi_i^* (i=1,2)$, $Q_L$ is the SM quark doublet, $L_L$ is the SM lepton doublet, $e_R$ is the right handed charged lepton, $u_R$ is the right handed up-quark, $d_R$ is the right handed down-quark and $\nu_R$ are the RHNs.
$\Phi_1$ and $\nu_R$ carry charge $3$ under the global $U(1)_X$ group. The global symmetry forbids the
Majorana mass terms of the RHNs. In the original model~\cite{Davidson:2009ha}, the global symmetry is softly broken by the mixed mass
term between $\Phi_1$ and $\Phi_2$ $(m_{12}^2 \Phi_1^\dagger \Phi_2)$, such that a small VEV is obtained by a seesaw-like formula
\begin{eqnarray}
v_1 =\frac{m_{12}^2 v_2}{M_A^2},
\end{eqnarray}
where $M_A$ is the pseudo-scalar mass in \cite{Davidson:2009ha}. If $M_A \sim 100$ GeV and $m_{12} \sim {\cal O}(100)$
keV, then $v_1$ comes out at $\mathcal{O}(1)$ eV. In ref.~\cite{Baek:2016wml}, the model is extended to
include a singlet scalar $S$ which breaks the $U(1)_X$ symmetry; the soft term $m_{12}^2$ is identified with
$\mu \langle S \rangle$, where $\mu$ is the coupling of the Higgs mixing term $\mu \Phi_1^\dagger \Phi_2 S + {\rm h.c.}$
It was also shown in~\cite{Baek:2016wml}
that an SM-singlet fermion charged under $U(1)_X$ can be a viable DM candidate.
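The seesaw-like estimate above can be checked numerically (illustrative benchmark values only; all quantities converted to eV):

```python
# Check of v1 = m12^2 v2 / M_A^2 for the quoted benchmark values.
GeV = 1.0e9            # 1 GeV in eV
m12 = 100.0e3          # soft-breaking mass ~ O(100) keV, in eV
M_A = 100.0 * GeV      # pseudo-scalar mass ~ 100 GeV
v2 = 246.0 * GeV       # electroweak VEV

v1 = m12**2 * v2 / M_A**2
print(v1)  # ~0.25 eV, i.e. O(1) eV as quoted
```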
In this paper we extend the model with a natural scalar Dark Matter (DM)
candidate $(X)$. In this model the global $U(1)_X$ symmetry is spontaneously broken down to a $Z_3$ symmetry by the VEV of a new singlet scalar $S$.
The remnant $Z_3$ symmetry makes the DM candidate stable.
The $Z_3$ symmetry could be broken by quantum gravity effects, in which case the DM would decay via effective interactions \cite{Mambrini:2015sia}; this can be avoided if $U(1)_X$ is a remnant of a local symmetry at a high energy scale,
and we assume the $Z_3$ symmetry is unbroken.
The CP-odd component of $S$
becomes a physical Goldstone boson. We study DM annihilation in this model and compare with current experimental sensitivities.
The paper is organized as follows. In Sec.~\ref{Model} we describe the model. In Sec.~\ref{DMP} we discuss the DM phenomenology and finally in Sec.~\ref{Conc} we conclude.
\section{The Model}
\label{Model}
We discuss an extension of the model in \cite{Davidson:2009ha} by a scalar field $(X)$. The scalar and RHN sectors of the particle content are listed in Tab.~\ref{tab1}.
\begin{table}[h]
\centering
\begin{tabular}{|c||c|c|c|c||c|}\hline\hline
&\multicolumn{4}{c||}{Scalar Fields} & \multicolumn{1}{c|}{New Fermion} \\\hline
& ~$\Phi_1$~ & ~$\Phi_2$~ & ~$S$ ~ & ~$X$~ & ~$\nu_R$ \\\hline
$SU(2)_L$ & $\bf{2}$ & $\bf{2}$ & $\bf{1}$ & $\bf{1}$ & $\bf{1}$ \\\hline
$U(1)_Y$ & $\frac12$ & $\frac12$ & $0$ & $0$ & $0$ \\\hline
$U(1)_X$ & $3$ & $0$ & $3$ & $1$ & $3$ \\\hline
\end{tabular}
\caption{Scalar fields and new fermion in our model.}
\label{tab1}
\end{table}
The gauge singlet Yukawa interaction between the lepton doublet $(L_L)$, the doublet scalars $(\Phi_1, \Phi_2)$ and the RHNs $(\nu_R)$ can be written as
\begin{eqnarray}
\mathcal{ L} &\supset & - Y_{ij}^e \bar L_{L_{i}} \Phi_2 e_{Rj} - Y^\nu_{ij} \bar L_{L_{i}} \tilde \Phi_1 \nu_{Rj} + \rm{H.c}.
\label{eq:Yukawa}
\end{eqnarray}
We assume that the Yukawa coupling constants $Y_{ij}^e$ and $Y_{ij}^\nu$ are real. The scalar potential is given by
\begin{eqnarray}
V(\Phi_1, \Phi_2, S) &= & - m_{11}^2 \Phi^\dagger_1 \Phi_1 - m_{22}^2 \Phi_2^\dagger \Phi_2 - m_{S}^2 S^\dagger S + M_X^2 X^\dagger X - (\mu \Phi^\dagger_1 \Phi_2 S + h.c.) \nonumber \\
&& + \lambda_1 (\Phi_1^\dagger \Phi_1)^2 + \lambda_2 (\Phi_2^\dagger \Phi_2)^2 + \lambda_3 (\Phi_1^\dagger \Phi_1)( \Phi_2^\dagger \Phi_2)
+ \lambda_4 (\Phi_1^\dagger \Phi_2)( \Phi_2^\dagger \Phi_1) \nonumber \\
&&+ \lambda_S (S^\dagger S)^2+ \lambda_{1S} \Phi_1^\dagger \Phi_1 S^\dagger S + \lambda_{2S} \Phi_2^\dagger \Phi_2
S^\dagger S +\lambda_X (X^\dagger X)^2 + \lambda_{1X} \Phi_1^\dagger \Phi_1 X^\dagger X \nonumber \\
&& + \lambda_{2X} \Phi_2^\dagger \Phi_2 X^\dagger X+ \lambda_{SX} S^\dagger S X^\dagger X
- (\lambda_{3X} S^\dagger X X X + \rm{H.c.}) .
\label{eq:potential}
\end{eqnarray}
The Dirac mass terms of the neutrinos are generated by the small VEV of $\Phi_1$. Following \cite{Davidson:2009ha, Wang:2006jy}, we assume that the VEV of $\Phi_1$ is much smaller than the electroweak scale.
The vacuum stability of a general scalar potential of this type has been analyzed in \cite{Kannike:2016fmd}.
A residual $Z_3$ symmetry remains when $U(1)_X$ is broken by the nonzero VEV of $S$.
Here $X$ is the only stable $Z_3$-charged (scalar) particle and can therefore serve as a DM candidate. The mass term $M_X^2$ of $X$ in Eq.~(\ref{eq:potential}) is positive definite, which prevents $X$ from acquiring a VEV; as a result the
$Z_3$ symmetry guarantees the stability of $X$ as a DM candidate. It has already been shown in \cite{Baek:2016wml} that the
CP-odd component of $S$ becomes a massless Goldstone boson. We then write the scalar fields as
\begin{eqnarray}
\Phi'_1 &=& \begin{pmatrix} \phi^+_1 \\ \frac{1}{\sqrt{2}} (v_1 + h_1 + i a_1) \end{pmatrix}, \quad
\Phi_2 = \begin{pmatrix} \phi^+_2 \\ \frac{1}{\sqrt{2}} (v_2 + h_2 + i a_2) \end{pmatrix}, \\
\quad X &=& X' e^{ i\frac{a_S}{2v_S}}, \quad \Phi_1 = \Phi'_1 e^{ i\frac{a_S}{v_S}}, ~~S = \frac{1}{\sqrt{2}} r_S e^{ i\frac{a_S}{v_S}},
\end{eqnarray}
where $r_S = \rho + v_S$. We assume $X$ does not develop a VEV, while the VEVs of $\Phi_1$, $\Phi_2$ and $S$ are obtained from the stationarity conditions $\partial V(v_1,v_2,v_S)/\partial v_i =0$, which yield
\begin{eqnarray}
-2 m_{11}^2 v_1 + 2 \lambda_1 v_1^3 + v_1 (\lambda_{1S} v_S^2 + \lambda_3 v_2^2 + \lambda_4 v_2^2) - \sqrt{2} \mu v_2 v_S &=&0, \nonumber \\
-2 m_{22}^2 v_2 + 2 \lambda_2 v_2^3 + v_2 (\lambda_{2S} v_S^2 + \lambda_3 v_1^2 + \lambda_4 v_1^2) - \sqrt{2} \mu v_1 v_S &=& 0, \nonumber \\
-2 m_{S}^2 v_S + 2 \lambda_S v_S^3 + v_S (\lambda_{1S} v_1^2 + \lambda_{2S} v_2^2 ) - \sqrt{2} \mu v_1 v_2 &=& 0.
\label{eqstn}
\end{eqnarray}
We then find that these conditions can be satisfied with $v_1 \simeq \mu \ll \{ v_2, v_S \}$, and the SM Higgs VEV is given
by $v \simeq v_2 \simeq 246$ GeV. From the first relation in Eq.~(\ref{eqstn}) we find that $v_1$ is proportional to, and of the same order as, $\mu$:
\begin{eqnarray}
v_1 \simeq \frac{\sqrt{2} \mu v_2 v_S}{\lambda_{1S} v_S^2 +(\lambda_3+\lambda_4) v_2^2 -2 m_{11}^2}.
\end{eqnarray}
The smallness of $v_1 (\sim \mu)$ is required for consistency with $v_2$ and $v_S$ at the electroweak scale. For a
neutrino mass scale $m_\nu \sim 0.1$ eV, the ratio $\mu/v_2$ should be as small as ${\mathcal O}(10^{-12})$ so that
$Y^\nu$ can be of ${\mathcal O}(1)$ (for comparison, $m_e/v_2 \sim {\mathcal O}(10^{-6})$). Hence $v_1$ is
much smaller than the other VEVs.
It is also interesting to note that $\mu=0$ enhances the symmetry of the Lagrangian, in the sense that we can then assign an
arbitrary $U(1)_X$ charge to $\Phi_1$; this ensures that the radiatively generated $\mu$-term is proportional to $\mu$ itself. Hence a small
value of $\mu$ is technically natural \cite{tHooft:1979rat,Baek:2014sda}.
Now we identify the mass spectra in the scalar sector.
{\tt Charged scalar:} We calculate the mass matrix in the basis $(\phi_{1}^{\pm}, \phi_{2}^{\pm})$, where $\phi_1^\pm$ is approximately the physical charged scalar while $\phi_2^\pm$ is approximately the NG boson absorbed by the $W^\pm$ boson.
In the following we write the physical charged scalar field as $H^\pm \simeq \phi^\pm_1$.
The charged scalar mass matrix can be written as
\begin{equation}
M^2_{H^\pm} = \begin{pmatrix} \frac{v_2 (\sqrt{2} \mu v_S - \lambda_4 v_1 v_2 )}{2v_1} & - \frac{1}{2} (\sqrt{2}\mu v_S - \lambda_4 v_1 v_2) \\
- \frac{1}{2} (\sqrt{2}\mu v_S - \lambda_4 v_1 v_2) & \frac{v_1 (\sqrt{2} \mu v_S - \lambda_4 v_1 v_2)}{2 v_2} \end{pmatrix}
\simeq \begin{pmatrix} \frac{v_2 (\sqrt{2} \mu v_S - \lambda_4 v_1 v_2 )}{2v_1} & 0 \\ 0 & 0 \end{pmatrix}.
\end{equation}
The charged Higgs mass can be written as
\begin{eqnarray}
m_{H^{\pm}}^2 \simeq \frac{v_2 (\sqrt{2} \mu v_S-\lambda_4 v_1 v_2)}{2 v_1}.
\end{eqnarray}
{\tt CP-even neutral scalar:}
In the CP-even sector all three components are physical, and the mass matrix in the basis $(h_1,h_2, \rho)$ is
\begin{align}
M^2_H &= \begin{pmatrix} 2 \lambda_1 v_1^2 + \frac{\mu v_2 v_S}{\sqrt{2} v_1} & (\lambda_3 + \lambda_4) v_1 v_2 - \frac{\mu v_S}{\sqrt{2}} & \lambda_{1S} v_1 v_S - \frac{\mu v_2}{\sqrt{2}} \\
(\lambda_3 + \lambda_4) v_1 v_2 - \frac{\mu v_S}{\sqrt{2}} & 2 \lambda_2 v_2^2 + \frac{\mu v_1 v_S}{\sqrt{2} v_2} & \lambda_{2S} v_2 v_S - \frac{\mu v_1}{\sqrt{2}} \\
\lambda_{1S} v_1 v_S - \frac{\mu v_2}{\sqrt{2}} & \lambda_{2S} v_2 v_S - \frac{\mu v_1}{\sqrt{2}} & 2 \lambda_S v_S^2 + \frac{\mu v_1 v_2}{\sqrt{2} v_S} \end{pmatrix} \nonumber \\
& \simeq \begin{pmatrix} \frac{ \mu v_2 v_S }{\sqrt{2} v_1} & 0 & 0 \\ 0 & 2 \lambda_2 v_2^2 & \lambda_{2S} v_2 v_S \\ 0 & \lambda_{2S} v_2 v_S & 2 \lambda_{S} v_S^2 \end{pmatrix}.
\end{align}
We find that all the mass eigenstates $H_i$ $(i=1,2,3)$ have masses at the electroweak scale; the mixing
between $h_1$ and the other components is negligibly small, while
$h_2$ and $\rho$ can mix sizably. The mass eigenvalues and the mixing angle of the $h_2$--$\rho$ system are given by
\begin{align}
& m_{H_2,H_3}^2 = \frac{1}{2} \left[ m_{22}^2 + m_{33}^2 \mp \sqrt{(m_{22}^2-m_{33}^2)^2 + 4 m_{23}^4} \right], \\
& \tan 2 \theta = \frac{-2 m_{23}^2}{m_{22}^2 - m_{33}^2}, \\
& m_{22}^2 = 2 \lambda_2 v_2^2, \quad m_{33}^2 = 2 \lambda_{S} v_S^2, \quad m_{23}^2 = \lambda_{2S} v_2 v_S.
\end{align}
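These expressions can be cross-checked numerically against the trace and determinant of the 2-by-2 block; the quartic couplings below are illustrative assumptions:

```python
from math import sqrt, atan2

# Illustrative inputs (GeV units); the lambda values are assumptions.
lam2, lamS, lam2S = 0.13, 0.1, 0.05
v2, vS = 246.0, 1000.0

m22sq = 2 * lam2 * v2**2
m33sq = 2 * lamS * vS**2
m23sq = lam2S * v2 * vS

disc = sqrt((m22sq - m33sq)**2 + 4 * m23sq**2)
mH2sq = 0.5 * (m22sq + m33sq - disc)   # lighter (SM-like) eigenvalue
mH3sq = 0.5 * (m22sq + m33sq + disc)   # heavier (singlet-like) eigenvalue
theta = 0.5 * atan2(-2 * m23sq, m22sq - m33sq)

# Eigenvalues must reproduce the trace and determinant of the 2x2 block.
assert abs((mH2sq + mH3sq) - (m22sq + m33sq)) < 1e-6
assert abs(mH2sq * mH3sq - (m22sq * m33sq - m23sq**2)) < 1e-3
print(sqrt(mH2sq), sqrt(mH3sq))  # masses in GeV, both at the EW scale
```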
Hence the mass eigenstates are obtained as
\begin{equation}
\begin{pmatrix} H_1 \\ H_2 \\ H_3 \end{pmatrix} \simeq \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos \theta & - \sin \theta \\ 0 & \sin \theta & \cos \theta \end{pmatrix} \begin{pmatrix} h_1 \\ h_2 \\ \rho
\end{pmatrix}.
\label{eq:eigenstates}
\end{equation}
Here $H_2$ is the SM-like Higgs, $h$, with $m_{H_{2}} \simeq m_h$.
The mixing angle $\theta$ between $H_2$ and $H_3$ is constrained to $\sin\theta\leq0.2$ by the LHC Higgs data \cite{Chpoi:2013wga, Cheung:2015dta,Cheung:2015cug}, based on numerical analyses of Higgs decays following \cite{Djouadi:1997yw, Djouadi:2006bz}.
{\tt CP-odd neutral scalar:} In the basis $(a_1, a_2, a_S)$ the pseudo-scalar mass matrix is
\begin{equation}
M^2_A = \frac{\mu}{\sqrt{2}} \begin{pmatrix} \frac{v_2 v_S}{v_1} & - v_S & - v_2 \\ -v_S & \frac{v_1 v_S}{v_2} & v_1 \\
-v_2 & v_1 & \frac{v_1 v_2}{v_S} \end{pmatrix}
\simeq \begin{pmatrix} \frac{\mu v_2 v_S}{\sqrt{2} v_1} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},
\end{equation}
using $S\simeq \frac{v_S+\rho+ i a_S}{\sqrt{2}}$. In the last step we used the approximation $v_1 (\sim \mu) \ll v_2, v_S$.
We find three mass eigenstates,
\begin{align}
A &= a_1 -\frac{v_1}{v_2} a_2 - \frac{v_1}{v_S} a_S, \nonumber \\
G^0 &= \frac{v_1}{v_2} a_1 +a_2, \nonumber \\
a &= \frac{v_1}{v_S} a_1 -\frac{v_1^2}{v_2 v_S} a_2 +\left(1+\frac{v_1^2}{v_2^2} \right) a_S,
\end{align}
up to normalization. They correspond to the massive pseudo-scalar, the massless Nambu--Goldstone (NG) mode which is absorbed
by the $Z$ boson, and a massless physical Goldstone boson associated with the $U(1)_X$ breaking, respectively.
Hence the mass of $A$ is given by
\begin{equation}
m_A^2 =\frac{\mu(v_1^2 v_2^2 + v_1^2 v_S^2 + v_2^2 v_S^2)}{\sqrt{2} v_1 v_2 v_S} \simeq
\frac{\mu v_2 v_S}{\sqrt{2} v_1},
\end{equation}
which is at the electroweak scale. It can be shown~\cite{Baek:2016wml}
that the Goldstone boson, $a$, is safe from the phenomenological
constraints such as $Z \to H_i a (i=1,2,3) $ decay, stellar cooling from the interaction $a \overline{e} \gamma_5 e$,
{\it etc.}, because it interacts with the SM particles only via highly-suppressed ($\sim v_1/v_{2,S}$) mixing with the
SM Higgs.
Note that, in our analysis below, we approximate pseudo-scalars as $A \simeq a_1$, $G^0 \simeq a_2$ and $a \simeq a_S$ since we assume $v_1 \ll v_2, v_S$ in realizing small neutrino mass.
Here we also discuss the decoupling of the physical Goldstone boson from the thermal bath, assuming it is thermalized via the Higgs portal interaction.
The interactions $\rho\, \partial_\mu a_S \partial^\mu a_S/v_S$, $\lambda_{2S} v_S v_2\, \rho h_2$ and the SM Yukawa interactions
generate the effective interaction between the Goldstone boson $a$ and the SM fermions
\begin{equation}
- \frac{\lambda_{2S} m_f}{2 m_{H_3}^2 m_{H_2}^2} \partial_\mu a \partial^\mu a \bar f f,
\end{equation}
where $m_f$ is the mass of the SM fermion $f$, and we used $a_S \simeq a$.
The temperature, $T_a$, at which $a$ decouples from the thermal bath is roughly estimated by~\cite{Weinberg:2013kea}
\begin{equation}
\frac{\text{collision rate}}{\text{expansion rate}} \simeq \frac{\lambda_{2S}^2 m_f^2 T_a^5 m_{PL}}{m_{H_2}^4 m_{H_3}^4} \sim 1,
\label{eq:decoup_a}
\end{equation}
where $m_{PL}$ denotes the Planck mass, and $m_f$ should be smaller than $T_a$ so that $f$ is in the thermal bath. The decoupling temperature is then given by
\begin{equation}
T_a \sim 2 \, {\rm GeV} \left( \frac{m_{H_3}}{100 \, {\rm GeV}} \right)^{\frac{4}{5}} \left( \frac{{\rm GeV}}{m_f} \right)^{\frac{2}{5}} \left( \frac{0.01}{\lambda_{2S}} \right)^{\frac{2}{5}}.
\end{equation}
Thus the Goldstone boson $a$ can decouple from the thermal bath well before muon decoupling and does not contribute to the effective
number of active neutrinos\footnote{If $m_{H_3} \approx 500$ MeV and $\lambda_{2S} \approx 0.005$, then $a$ can make sizable
contribution: $\Delta N_{\rm eff}=4/7$~\cite{Weinberg:2013kea}.}~\cite{Brust:2013xpv}.
Note that the Goldstone boson should still be in the thermal bath at the DM freeze-out temperature when the relic density of
DM, $X$, is to be explained by the process $X \bar{X} \to a a$, as in our analysis below.
Taking the minimum DM mass as $\sim 100$ GeV, the freeze-out temperature is $T_f = m_{\rm DM}/x_f \sim 4$ GeV with $x_f \sim 25$.
Therefore $T_f > T_a$ holds even for small $\lambda_{2S} (=0.01)$ as long as $m_{H_3}$ is not much heavier than the electroweak scale.
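The scaling relation for $T_a$ and its comparison with the freeze-out temperature can be evaluated directly (a rough sketch; all masses in GeV):

```python
def T_a(mH3, mf, lam2S):
    """Goldstone decoupling temperature from the scaling formula above."""
    return 2.0 * (mH3 / 100.0)**0.8 * (1.0 / mf)**0.4 * (0.01 / lam2S)**0.4

print(T_a(100.0, 1.0, 0.01))   # 2.0 GeV: the benchmark quoted in the text
print(T_a(2000.0, 1.0, 0.01))  # ~22 GeV: a much heavier H_3 decouples earlier

# Freeze-out of the lightest DM considered (m_DM ~ 100 GeV, x_f ~ 25):
T_f = 100.0 / 25.0             # ~4 GeV
assert T_f > T_a(100.0, 1.0, 0.01)  # Goldstone still thermal at freeze-out
```

The second evaluation illustrates the caveat in the text: for $m_{H_3}$ far above the electroweak scale, $T_a$ would exceed $T_f$.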
As the phenomenology of the Higgs sector has been discussed in \cite{Davidson:2009ha,Machado:2015sha, Bertuzzo:2015ada, Baek:2016wml},
we concentrate on the DM phenomenology in the following analysis.
\section{DM phenomenology}
\label{DMP}
In this section, we discuss DM physics of our model such as relic density, direct and indirect detections which are compared with experimental constraints.
Since the Higgs portal interaction is strongly constrained by DM direct
detection~\cite{Baek:2011aa,Baek:2014kna,Baek:2012se,Cline:2013gha}, we consider the case of small mixing
so that $h_1 \simeq H_1$, $h_2 \simeq H_2$ and $\rho \simeq H_3$; here $H_2$ is the SM-like Higgs in our DM analysis.
\begin{figure}[t]
\begin{center}
\includegraphics[bb=0 0 581 250, scale=0.23]{diagram1.pdf} \qquad \qquad
\includegraphics[bb=0 0 581 250, scale=0.23]{diagram2.pdf}
\includegraphics[bb=0 0 650 450, scale=0.23]{diagram3.pdf} \qquad \qquad \qquad
\includegraphics[bb=0 0 450 450, scale=0.23]{diagram4.pdf}
\end{center}
\caption{Diagrams (I)--(IV) correspond to the DM annihilation processes in scenarios I--IV, respectively.
\label{fig:diagram1} }
\end{figure}
\subsection*{Dark matter interaction}
First, the mass of the dark matter candidate $X$ is given by~\cite{Baek:2014kna}
\begin{align}
m_{X}^2 = M_X^2 + \frac{\lambda_{1X}}{2} v_1^2 + \frac{\lambda_{2X}}{2} v_2^2 + \frac{\lambda_{SX}}{2} v_S^2
\end{align}
where $X$ is a complex scalar field whose real and imaginary parts have the same mass, as required by the remnant $Z_3$ symmetry. The interactions relevant to DM physics are given by
\begin{align}
{\cal L} \ \supset \ & \frac{1}{v_S} \partial_\mu a (X \partial^\mu X^* - X^* \partial^\mu X) + \frac{1}{4 v_S^2} \partial_\mu a \partial^\mu a X^* X \nonumber \\
& + \frac{\lambda_{1X}}{2} \left(H^+ H^- + \frac{1}{2} H_1^2 + A^2 \right)X^* X + \frac{\lambda_{2X}}{4} (2 v_2 H_2 + H_2^2)X^* X \nonumber \\
& + \frac{\lambda_{SX}}{4} (2v_S H_3 + H_3^2)X^* X + \frac{\lambda_{3X}}{2} (v_S + H_3) (X X X + c.c.) \nonumber \\
& - \mu_{SS} H_3^3 + \frac{1}{v_S} H_3 \partial_\mu a \partial^\mu a - \mu_{1S} H_3 \left(H^+ H^- + \frac{1}{2} (H_1^2 + A^2) \right) - \frac{\mu_{2S}}{2} H_3 H_2^2,
\label{eq:intDM}
\end{align}
where we ignored terms proportional to the tiny VEV $v_1$, defined $\mu_{SS} \equiv m_{H_3}^2/(2v_S)$,
$\mu_{1S} \equiv \lambda_{1S} v_S$ and $\mu_{2S} \equiv \lambda_{2S} v_S$, and omitted the scalar mixing factors $\sin \theta$ ($\cos
\theta$) assuming $\cos \theta \simeq 1$ and $\sin \theta \ll 1$.
The free parameters relevant to DM physics are thus summarized as
\begin{equation}
\{ m_{X}, m_{H_1}, m_{H_3}, m_A, m_{H^\pm}, v_S, \lambda_{1X}, \lambda_{2X}, \lambda_{SX}, \lambda_{3X}, \mu_{1S}, \mu_{2S} \},
\end{equation}
where we choose $\mu_{1S,2S}$ as free parameters instead of $\lambda_{1S,2S}$, and we use $\mu_{SS} = m_{H_3}^2/(2v_S)$.
In our analysis, we focus on several specific scenarios for DM physics, making assumptions on the model parameters to
illustrate particular DM annihilation processes.
These scenarios are as follows:
\begin{itemize}
\item Scenario-I: 100 GeV $< v_S < 2000$ GeV, $\{ \lambda_{1X}, \lambda_{2X}, \lambda_{SX}, \lambda_{3X}, \mu_{1S}/v \} \ll 1$.
\item Scenario-II : $v_S \gg v$, $\{ \lambda_{SX}, \mu_{1S}/v \} \gg \{ \lambda_{1X}, \lambda_{2X}, \lambda_{3X} \} $.
\item Scenario-III: $v_S \gg v$, $\lambda_{1X} \gg \{ \lambda_{2X}, \lambda_{SX}, \lambda_{3X}, \mu_{1S}/v \} $.
\item Scenario-IV: $v_S \gg v$, $\lambda_{3X} \gg \{\lambda_{1X}, \lambda_{2X}, \lambda_{SX}, \mu_{1S}/v \} $.
\end{itemize}
Here we set $v\equiv v_2 \simeq 246\, {\rm GeV}$ since $v_1 \ll v_2$.
In scenario-I DM mainly annihilates into the $a_S a_S$ and $a_S H_3$ final states as shown in Fig.~\ref{fig:diagram1}-(I). In scenario-II DM
annihilates via the $H_3$ portal interaction as in Fig.~\ref{fig:diagram1}-(II).
In scenario-III DM annihilates into components of $\Phi_1$ through the contact interaction with coupling
$\lambda_{1X}$ as shown in Fig.~\ref{fig:diagram1}-(III).
Finally, scenario-IV represents the semi-annihilation process
$X X \to X H_3$ shown in Fig.~\ref{fig:diagram1}-(IV).
In our analysis, we assume $\lambda_{2S} \ll {\cal O}(1)$ so that DM annihilation via the SM Higgs portal interaction can be
neglected; this case is well known and strongly constrained by direct detection experiments.
\begin{figure}[t]
\begin{center}
\includegraphics[bb=-100 0 581 230, scale=1]{Vs_vs_MXR.pdf}
\end{center}
\caption{ Scatter plot for parameters on $m_{X}$-$v_S$ plane under the DM relic abundance bound in Scenario-I.}
\label{fig:DM1}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[bb=0 20 581 250, scale=0.94]{LambdaSX_vs_MXR.pdf}
\includegraphics[bb=-300 0 581 0, scale=0.81]{LambdaSX_vs_M1S.pdf}
\end{center}
\caption{ Scatter plot for parameters on $m_{X}$-$\lambda_{SX}$ and $\mu_{1S}$-$\lambda_{SX}$ planes in left and right panels under the DM relic abundance bound in Scenario-II.}
\label{fig:DM2}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[bb=0 20 581 250, scale=0.94]{Lambda1X_vs_MXR.pdf}
\includegraphics[bb=-230 0 581 0, scale=0.94]{Lambda12X_vs_MXR.pdf}
\end{center}
\caption{ Left: scatter plot of parameters on the $m_{X}$--$\lambda_{1X}$ plane under the DM relic abundance bound in Scenario-III. Right: the same on the $m_{X}$--$\lambda_{3X}$ plane in Scenario-IV.}
\label{fig:DM3}
\end{figure}
\subsection*{Relic density}
Here we estimate the thermal relic density of DM for each scenario given above.
The relic density is calculated numerically with {\tt micrOMEGAs 4.3.5}~\cite{Belanger:2014vza}, which solves the Boltzmann equation with the relevant interactions implemented.
In the numerical calculations we use randomly generated parameter sets within the following ranges.
For all scenarios we take
\begin{align}
&m_{X} \in [50,500] \ {\rm GeV}, \quad \mu_{2S} = 1 \ {\rm GeV}, \quad M_{H_1} = M_{A} = M_{H^\pm} \in [100, 1000] \ {\rm GeV}, \nonumber \\
& \lambda_{2X} \ll 1,
\end{align}
where the small value of $\lambda_{2X}$ suppresses the SM Higgs portal interactions and the small value of $\mu_{2S}$ suppresses the scalar mixing.
Then we set the parameter region for each scenario as follows:
\begin{align}
{\rm Scenario-I}: \ \ & \ v_S \in [100, 2000] \ {\rm GeV}, \quad \lambda_{SX, 1X, 3X} \in [10^{-8}, 10^{-4}], \nonumber \\
& \ \mu_{1S} \in [0.001, 0.1] \ {\rm GeV}, \quad M_{H_3} \in [10, 30] \ {\rm GeV}, \\
{\rm Scenario-II}: \ & \ v_S \in [3000, 10000] \ {\rm GeV}, \quad \lambda_{SX} \in [10^{-3}, 1], \quad \lambda_{1X, 3X} \in [10^{-8}, 10^{-4}], \nonumber \\
& \ \mu_{1S} \in [100, 1000] \ {\rm GeV}, \quad M_{H_3} \in [150, 2000] \ {\rm GeV}, \\
{\rm Scenario-III}: & \ v_S \in [3000, 10000] \ {\rm GeV}, \quad \lambda_{1X} \in [10^{-3}, 1], \quad \lambda_{SX, 3X} \in [10^{-8}, 10^{-4}], \nonumber \\
& \ \mu_{1S} \in [0.001, 0.1] \ {\rm GeV}, \quad M_{H_3} \in [150, 2000] \ {\rm GeV}, \\
{\rm Scenario-IV}: & \ v_S \in [3000, 10000] \ {\rm GeV}, \quad \lambda_{3X} \in [10^{-3}, 1], \quad \lambda_{SX, 1X} \in [10^{-8}, 10^{-4}], \nonumber \\
& \ \mu_{1S} \in [0.001, 0.1] \ {\rm GeV}, \quad M_{H_3} \in [50, m_{X}] \ {\rm GeV}.
\end{align}
Then we search for the parameter sets which can accommodate the observed relic density.
Here we apply the approximate range~\cite{Ade:2015xua}
\begin{equation}
0.11 \lesssim \Omega h^2 \lesssim 0.13.
\end{equation}
In Fig.~\ref{fig:DM1}, we show the parameter points on the $m_{X}$--$v_S$ plane which explain the observed relic density of DM in Scenario-I.
In this scenario, the relic density is mostly determined by the cross section of the $X X \to a_S a_S$ process, which depends on $m_{X}/v_S$ via the second term of the Lagrangian in Eq.~(\ref{eq:intDM}). Thus the preferred value of $v_S$ becomes larger as the DM mass increases, as seen in Fig.~\ref{fig:DM1}.
In the left and right panels of Fig.~\ref{fig:DM2}, we show the parameter points on the $m_{X}$--$\lambda_{SX}$ and $\mu_{1S}$--$\lambda_{SX}$ planes, respectively, satisfying the correct relic density in Scenario-II.
In this scenario, the region $m_{X} \lesssim 100$ GeV requires a relatively larger $\lambda_{SX}$ coupling, since the scalar boson modes $\{ H_3 H_3, H_1H_1, AA, H^\pm H^\mp \}$ are forbidden by our assumptions on the scalar masses. On the other hand, the region $m_{X} > 100$ GeV allows a wider range $0.01 \lesssim \lambda_{SX} \lesssim 1.0$, since DM can annihilate into the other scalar bosons when kinematically allowed.
In the left (right) panel of Fig.~\ref{fig:DM3}, we show the parameter region on the $m_{X}$--$\lambda_{1X}$ ($m_{X}$--$\lambda_{3X}$) plane satisfying the relic density in Scenario-III (IV).
In Scenario-III, the DM mass should be larger than $\sim 100$ GeV so that DM can annihilate into the scalar bosons from $\Phi_1$, and the required coupling is $0.2 \lesssim \lambda_{1X} \lesssim 1.0$ for $m_{X} \leq 500$ GeV.
In Scenario-IV, the required value of $\lambda_{3X}$ behaves similarly to $\lambda_{1X}$ in Scenario-III for $m_X > 100$ GeV, but is slightly larger,
because the semi-annihilation process requires a larger cross section than the pair-annihilation process.
\subsection*{Direct detection}
Here we briefly discuss the constraints from direct detection experiments by estimating the DM-nucleon scattering cross section in our model.
We focus on Scenario-II, in which the DM can have a sizable interaction with nucleons via $H_2$ and $H_3$ exchange, and investigate the upper limit on the mixing $\sin \theta$.
The relevant interaction Lagrangian with mixing effect is given by
\begin{equation}
\mathcal{L} \supset \frac{\lambda_{SX} v_S}{2} X^* X (c_\theta H_3 - s_\theta H_2) + \sum_{q} \frac{m_q}{v} \bar q q (s_\theta H_3 + c_\theta H_2),
\end{equation}
where $q$ denotes the SM quarks with mass $m_q$, and we have assumed $\mu_X \ll \lambda_{SX} v_S$ as in the relic density calculation.
We thus obtain the following effective Lagrangian for the DM-quark interaction by integrating out $H_2$ and $H_3$:
\begin{equation}
\mathcal{L}_{\rm eff} = \sum_q \frac{\lambda_{SX} v_S m_q s_\theta c_\theta}{2v} \left( \frac{1}{m_h^2} - \frac{1}{m_{H_3}^2} \right) X^*X \bar q q,
\end{equation}
where $m_{H_2} \simeq m_h = 125$ GeV is used. The effective interaction can be rewritten in terms of the nucleon $N$ instead of quarks as
\begin{equation}
\mathcal{L}_{\rm eff} = \frac{f_N \lambda_{SX} v_S m_N s_\theta c_\theta}{v} \left( \frac{1}{m_h^2} - \frac{1}{m_{H_3}^2} \right) X^*X \bar N N,
\end{equation}
where $m_N$ is the nucleon mass and $f_N$ is the effective coupling constant given by
\begin{equation}
f_N = \sum_q f_q^N = \sum_q \frac{m_q}{m_N} \langle N | \bar q q | N \rangle.
\end{equation}
The heavy-quark contributions are replaced by the gluon contribution such that
\begin{align}
\sum_{q=c,b,t} f_q^N = {1 \over m_N} \sum_{q=c,b,t} \langle N | \left(-{ \alpha_s\over 12 \pi} G^a_{\mu\nu}
G^{a\mu\nu}\right) N \rangle,
\label{eq:f_Q}
\end{align}
which is obtained by calculating the triangle diagram with heavy quarks inside the loop.
The trace of the stress-energy tensor, taking into account the scale anomaly, is
\begin{align}
\theta^\mu_\mu =m_N \bar{N} N = \sum_q m_q \bar{q} q - {7 \alpha_s \over 8 \pi} G^a_{\mu\nu} G^{a\mu\nu}.
\label{eq:stressE}
\end{align}
Combining Eqs.~(\ref{eq:f_Q}) and (\ref{eq:stressE}), we get
\begin{align}
\sum_{q=c,b,t} f_q^N = \frac{2}{9} \left( 1 - \sum_{q = u,d,s} f_q^N \right),
\end{align}
which leads to
\begin{align}
f_N = \frac29+\frac{7}{9}\sum_{q=u,d,s}f_{q}^N.
\end{align}
Finally we obtain the spin-independent $X$-$N$ scattering cross section as follows:
\begin{equation}
\sigma_{\rm SI}(X N \to X N) = \frac{1}{8 \pi} \frac{\mu_{NX}^2 f_N^2 m_N^2 \lambda_{SX}^2 v_S^2 s_\theta^2 c_\theta^2}{v^2 m_{X}^2}
\left( \frac{1}{m_h^2} - \frac{1}{m_{H_3}^2} \right)^2,
\end{equation}
where $\mu_{NX} = m_N m_{X}/(m_N + m_{X})$ is the reduced mass of the nucleon and DM.
Here we consider the DM-neutron scattering cross section for simplicity; the DM-proton case gives an almost identical result.
In this case, we adopt the effective coupling $f_n \simeq 0.287$ (with $f_u^n = 0.0110$, $f_d^n = 0.0273$, $f_s^n = 0.0447$) in estimating the cross section.
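As a consistency check (a sketch, not part of the original analysis), plugging the quoted light-quark values into the expression $f_N = 2/9 + (7/9)\sum_{q=u,d,s} f_q^N$ reproduces the adopted $f_n \simeq 0.287$:

```python
# Quick numerical check of f_N = 2/9 + (7/9) * sum_{q=u,d,s} f_q^N
# using the light-quark neutron values quoted in the text.
f_q = {"u": 0.0110, "d": 0.0273, "s": 0.0447}
f_n = 2.0 / 9.0 + 7.0 / 9.0 * sum(f_q.values())
print(round(f_n, 3))  # 0.287
```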
In Fig.~\ref{fig:DD}, we show the DM-nucleon scattering cross section as a function of $\sin \theta$, where we take $m_{X} = 300$ GeV, $m_{H_3}= 300$ GeV, $v_S = 5000$ GeV, and $\lambda_{SX}= 0.5~(0.01)$ for the red (blue) line as reference values. We find that part of the parameter region is constrained by direct detection when $\lambda_{SX}$ is relatively large and $\sin \theta > 0.01$.
More of the parameter region will be tested in future direct detection experiments.
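As a numerical sketch (not the authors' code), the cross-section formula above can be evaluated directly. The default nucleon mass, Higgs VEV and the GeV$^{-2}\to{\rm cm}^2$ conversion constant are standard inputs assumed here; the reference parameter values follow the text:

```python
import math

# 1 GeV^-2 in cm^2, from (hbar c)^2 -- standard conversion, assumed input.
GEV2_TO_CM2 = 3.894e-28

def sigma_SI(m_X, m_H3, v_S, lam_SX, sin_theta,
             m_N=0.939, m_h=125.0, v=246.0, f_N=0.287):
    """Spin-independent X-N cross section from the formula above, in cm^2.
    All masses and VEVs in GeV."""
    cos_theta = math.sqrt(1.0 - sin_theta ** 2)
    mu = m_N * m_X / (m_N + m_X)                 # reduced mass mu_NX
    prop = 1.0 / m_h ** 2 - 1.0 / m_H3 ** 2      # (1/m_h^2 - 1/m_H3^2)
    sigma_gev = (mu ** 2 * f_N ** 2 * m_N ** 2 * lam_SX ** 2 * v_S ** 2
                 * sin_theta ** 2 * cos_theta ** 2 * prop ** 2
                 / (8.0 * math.pi * v ** 2 * m_X ** 2))
    return sigma_gev * GEV2_TO_CM2

# Reference point of the text; the cross section scales as lam_SX^2 sin^2(2 theta)/4.
print(sigma_SI(300.0, 300.0, 5000.0, 0.5, 0.01))
```

Since the cross section scales as $\lambda_{SX}^2$, the red and blue curves of the figure differ by a fixed factor of $2500$ at every $\sin\theta$.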
\begin{figure}[t]
\begin{center}
\includegraphics[bb=-100 0 581 230, scale=0.95]{DD.pdf}
\end{center}
\caption{DM-nucleon scattering cross section as a function of $\sin \theta$ in Scenario-II, where we take $m_{X} = 300$ GeV, $m_{H_3}= 300$ GeV, $v_S = 5000$ GeV and $\lambda_{SX}=0.5~(0.01)$ for the red (blue) line as reference values. The current bounds from XENON1T~\cite{Aprile:2017iyp} and PandaX-II~\cite{Cui:2017nnn} are also shown.}
\label{fig:DD}
\end{figure}
The Higgs portal interaction can also be tested at collider experiments.
It can be probed via searches for invisible decay of the SM Higgs for $2 m_X < m_h$, while
the collider constraint is less significant than the direct detection constraints for $2m_X > m_h$~\cite{Khachatryan:2016whc, Hoferichter:2017olk,Aad:2015pla}.
Furthermore, DM can be produced via the heavier Higgs boson $H_3$ if $2 m_X < m_{H_3}$, and the possible signature is a mono-jet with missing transverse momentum from $pp \to H_3 j \to XX j$.
However, the production cross section is small when the mixing $\sin \theta$ is small, as we assumed in our analysis.
Such a process could be tested at the future LHC with a sufficiently large integrated luminosity, though a detailed analysis is beyond the scope of this paper.
\subsection*{Indirect detection}
Here we discuss the possibility of indirect detection in our model by estimating the thermally averaged cross section in the current Universe with {\tt micrOMEGAs 4.3.5}, using the parameter sets allowed by the relic density calculation. Since the $a_Sa_S$ final state is dominant in Scenario-I, we focus on the other scenarios in the following.
Fig.~\ref{fig:ID} shows the DM annihilation cross section in the current Universe as a function of $m_{X}$, where the left and right panels correspond to Scenario-II and Scenario-III/IV, respectively.
In Scenario-II, the cross section is mostly $\sim O(10^{-26})$ cm$^3/$s, while some points give smaller (larger) values corresponding to the region with $2 m_{X} \gtrsim (\lesssim)\, M_{H_3}$ as a consequence of the resonant effect.
The annihilation processes in this scenario provide SM final states via the decays of $H_3$ and $\{H_1, H^\pm, A\}$, where $H_3$ decay mainly gives $b \bar b$ via the mixing with the SM Higgs, and the scalar bosons from the second doublet give leptons. This cross section could be tested via $\gamma$-ray observations such as Fermi-LAT~\cite{Ackermann:2013yva}, as well as high-energy neutrino searches such as IceCube~\cite{Aartsen:2015zva, Aartsen:2017mau}, especially when the cross section is enhanced.
In Scenario-III, the cross section is mostly $\sim O(10^{-26})$ cm$^3/$s and the final states from DM annihilation include the components of $\Phi_1$, i.e. $\{H_1, H^\pm, A\}$. Thus the DM mainly annihilates into neutrinos via the decays of these scalar bosons, while a small number of charged leptons appears from $H^\pm$. Therefore the constraints from indirect detection are weaker in this scenario.
In Scenario-IV, the value of the cross section is relatively large due to the semi-annihilation nature of the scenario.
In this case, the final states from DM annihilation mostly give $b \bar b$ via the decays of $H_3$ in the final state. It could then be tested by $\gamma$-ray searches and neutrino observations as in Scenario-II.
\begin{figure}[t]
\begin{center}
\includegraphics[bb=0 20 581 280, scale=0.8]{ID1.pdf}
\includegraphics[bb=-280 0 581 20, scale=0.8]{ID2.pdf}
\end{center}
\caption{Left: the current DM annihilation cross section in Scenario-II as a function of $m_{X}$. Right: the same for Scenarios III and IV, represented by red and blue points, respectively.}
\label{fig:ID}
\end{figure}
\section{Conclusion}
\label{Conc}
We have considered a neutrino two-Higgs-doublet model ($\nu$THDM) in which small Dirac neutrino masses are explained
by the small VEV, $v_1 \sim {\cal O}(1)$ eV, of the Higgs doublet $\Phi_1$ associated with the neutrino Yukawa interaction. A global $U(1)_X$
symmetry is introduced to forbid the seesaw mechanism. The smallness of $v_1$, which is proportional to the soft $U(1)_X$-breaking
parameter $m_{12}^2$, is technically natural.
We extend the model by introducing a scalar dark matter candidate $X$ and a scalar $S$ that breaks the $U(1)_X$ symmetry
down to a discrete $Z_2$ symmetry. Both are charged under $U(1)_X$. The lighter state of $X$ is stable since it is the
lightest particle with odd $Z_2$ parity. The soft parameter $m_{12}^2$ is replaced by $\mu \langle S \rangle$.
The physical Goldstone boson, whose dominant component is the pseudoscalar part of $S$, is shown to be phenomenologically
viable due to the small ratio ($\sim {\cal O}(10^{-9})$) of $v_1$ to the electroweak-scale VEVs of the SM Higgs and
$S$.
We study four scenarios, classified by the dominant dark matter annihilation channels in the early Universe, to simplify the analysis of
the dark matter phenomenology. In Scenario I, the Goldstone modes are important. Scenario II is the $H_3$ portal. In Scenario III,
the dark matter makes use of the portal interaction with $\Phi_1$, which generates the Dirac neutrino masses.
In Scenario IV, the dominant interaction is $\lambda_{3X} S^\dagger X X X + {\rm h.c.}$, which induces the semi-annihilation process of our dark matter candidate.
In Scenario II, the dark matter scattering cross section with nucleons can be sizable and can be probed at next-generation
direct detection experiments. We calculated the indirect detection cross sections in Scenarios II, III, and IV, which can be
tested by observing cosmic $\gamma$-rays and/or neutrinos.
\section*{Acknowledgments}
\vspace{0.5cm}
This work is supported in part by National Research Foundation of Korea (NRF) Research Grant NRF-2015R1A2A1A05001869 (SB).
\providecommand{\href}[2]{#2}
\addcontentsline{toc}{section}{References}
\bibliographystyle{JHEP}
\section{Introduction}
Residual Networks (ResNets) which are feed-forward network models with skip connections have achieved great success on several vision benchmarks~\citep{He_2016_CVPR}.
Recently, researchers have studied the relation between ResNets and dynamical systems \citep{2016arXiv160403640L,E2017,haber2017learning,chang2018reversible,chang2018multi,pmlr-v80-lu18d}.
The forward Euler method, a first-order RK method, has been employed to explain ResNets with full pre-activation \citep{he2016identity} from the dynamical systems view \citep{haber2017learning,chang2018multi}. Nevertheless, there is no firm evidence that the residual block corresponds to the forward Euler method rather than some other RK method.
We regard the residual mapping as an approximation to the increment in a time-step. The accuracy of the approximation is determined by the structure of the convolutional network.
The wide residual network (WRN) \citep{Zagoruyko2016WRN} has been proposed to improve the capacity of the convolutional subnetwork. However, merely widening the subnetwork is not very efficient. The new explanation of pre-activation ResNets and their variants, which focuses on improving the residual mapping, is one of our contributions.
In addition, some improvements on network architecture based on ordinary differential equations (ODEs) are proposed \citep{chang2018reversible,pmlr-v80-lu18d,chen2018neural}. Under the assumption that pre-activation ResNet is forward Euler method, \citet{chang2018reversible,pmlr-v80-lu18d} use special linear multi-step methods (LM methods) with low order to construct
the network. \citet{chen2018neural} utilize a third-party package which offers numerical ODE methods to replace residual block.
To date, there is no efficient network architecture that generalizes systematically to high order. Nevertheless, a higher-order method can achieve a lower truncation error. Since a lower truncation error likely leads to higher accuracy,
it is necessary to study an efficient network architecture with a high order.
If the process of image classification is deemed a sequence of time-dependent dynamical systems, there should be a series of ODEs to describe these systems. RK methods are widely-used procedures to solve ODEs in numerical analysis \citep{butcher2008numerical}. They are also the building blocks of high-order LM methods. Consequently, these methods can be used to build network models for visual processing.
The neural network community has long been aware of numerical methods for dynamical systems. The Runge-Kutta Neural Network (RKNN) was proposed for the identification of unknown dynamical systems with high accuracy \citep{wang1998runge}, but it has not been used to model the visual system nor been extended to convolutional networks. Moreover, RKNNs adopt a specific RK method by specifying every coefficient of the method, so it is hard to apply high-order RK methods in RKNNs. In addition, the time-step size needs to be prespecified; hence, RKNNs cannot be used in tasks where the total time is unknown, such as image classification. In contrast, we learn all the coefficients and time-step sizes implicitly by training in order to avoid these difficulties. As a result, one of the major contributions of this paper is a novel and effective neural network architecture inspired by the RK methods.
In order to apply RK methods to the image classification problem, the following assumptions are made throughout the paper. Firstly, the image classification procedure is multi-period and there are transitions between adjacent periods. Secondly, each period is modeled by a time-dependent first-order dynamical system. Based on these assumptions, a novel network model called the RKNet is proposed.
In an RKNet, a period is composed of iterations of time-steps. A particular RK method is adopted throughout the time-steps in a period to approximate the increment in each step. The increment in each step is broken down to the increments in several stages according to the adopted RK method. Each stage is approximated by a convolutional subnetwork due to the versatility of neural networks on approximation.
Another contribution of this paper is a theoretical interpretation of DenseNets and CliqueNets from the dynamical systems view. The dense connections in DenseNets resemble the relationship among the increments of the stages in explicit RK methods (ERK methods). Similarly, the clique blocks in CliqueNets resemble the relationship among the increments of the stages in implicit RK methods (IRK methods). Under some conditions, DenseNets and CliqueNets can be formulated as approximating dynamical systems using multi-stage RK methods. We also propose a method to convert a DenseNet to an explicit RKNet (ERKNet) and a method to convert a CliqueNet to an implicit RKNet (IRKNet). Furthermore, DenseNets and CliqueNets have only one time-step in each period, whereas RKNets are more general and can have multiple time-steps in each period.
We evaluate the performance of RKNets on benchmark datasets including CIFAR-10, CIFAR-100~\citep{krizhevsky2009learning}, SVHN~\citep{37648} and ILSVRC2012 classification dataset~\citep{russakovsky2015imagenet}. Experimental results show that both ERKNets and IRKNets conform to the mathematical properties. Additionally, RKNets achieve higher accuracy than the state-of-the-art network models on CIFAR-10 and comparable accuracy on CIFAR-100, SVHN and ImageNet.
The rest of the paper is organized as follows. Related work is reviewed in Section~\ref{related_work}. The architecture of RKNets, the dynamical systems interpretation of DenseNets and CliqueNets, and the conversion from them to RKNets are described in Section~\ref{RKNets}. The performance of RKNets is evaluated in Section~\ref{experiments}. The conclusion and future work are described in Section~\ref{conclusion}.
\section{Related work}
\label{related_work}
ResNets have gained much attention over the past few years since they have obtained impressive performance on many challenging image tasks, such as ImageNet~\citep{russakovsky2015imagenet} and COCO object detection~\citep{lin2014microsoft}. ResNets are deep feed-forward networks with the shortcuts as identity mappings. ResNets with pre-activation can be regarded as an unfolded shallow RNN, which implements a discrete dynamical system~\citep{2016arXiv160403640L}. It provides a novel point of view for explaining pre-activation ResNets from dynamical systems view.
Recently, more work has emerged to connect dynamical systems with deep neural networks~\citep{E2017} or ResNets in particular~\citep{haber2017learning,chang2018reversible,chang2018multi,JMLR:v18:17-653,pmlr-v80-long18a,pmlr-v80-lu18d,wang2018deep,chen2018neural}. \citet{E2017} proposes to use continuous dynamical systems as a tool for machine learning. \citet{chang2018reversible} propose three reversible architectures based on ResNets and ODE systems.
\citet{chang2018multi} propose a novel method for accelerating ResNets training based on the interpretation of ResNets from dynamical systems view \citep{haber2017learning}. \citet{JMLR:v18:17-653} present a training algorithm which can be used in the context of ResNets. \citet{pmlr-v80-lu18d} propose a 2-step architecture based on ResNets. In addition, research combining dynamical system identification and RK methods with neural networks for scientific computing has emerged recently \citep{raissi2017physics1,raissi2017physics2,raissi2018deep}, introducing physics informed neural networks with automatic differentiation. \citet{chen2018neural} utilize a third-party package which offers some numerical methods to compute the numerical solution in each time-step.
DenseNets \citep{Huang_2017_CVPR} are the state-of-the-art network models after ResNets. The dense connection is the main difference from the previous models. There are direct connections from a layer to all subsequent layers in a dense block in order to allow better information and gradient flow. There is no interpretation of DenseNets from dynamical systems view yet.
CliqueNets \citep{Yang_2018_CVPR} are the state-of-the-art network models based on DenseNets. They adopt the alternately updated clique blocks to incorporate both forward
and backward connections between any two layers in the same block. However, there is no interpretation of CliqueNets from dynamical systems view yet.
Given that the process of image classification is regarded as a sequence of time-dependent dynamical systems, there should be a set of ODEs that describes these systems. Consequently, mathematical tools could be employed to construct network models. RK methods are commonly used to solve ODEs in numerical analysis~\citep{butcher2008numerical}.
Higher order RK methods can achieve lower truncation error. Moreover, these methods are usually the building blocks of high-order LM methods. Therefore, RK methods are ideal tools to construct network models from dynamical systems view.
RK methods have been adopted to construct neural networks, known as RKNNs, for the identification of unknown dynamical systems described by ODEs~\citep{wang1998runge}. In that paper, neural networks are classified into two categories: (1) a network that directly learns the state trajectory of a dynamical system is called a direct-mapping neural network (\textbf{DMNN}); (2) a network that learns the rate of change of the system states is called an \textbf{RKNN}. Hence, AlexNet~\citep{krizhevsky2012imagenet}, VGGNet~\citep{simonyan2015very}, GoogLeNet~\citep{Szegedy_2015_CVPR} and ResNet~\citep{He_2016_CVPR} all belong to DMNNs. Specifically, the original ResNet~\citep{He_2016_CVPR} is a DMNN because of the ReLU layer after the addition operation. As a result, the ResNet building block learns the state trajectory directly, not the rate of change of the system states. On the contrary, a ResNet with pre-activation~\citep{he2016identity} is an RKNN.
RKNNs are proposed to eliminate several drawbacks of DMNNs, such as the difficulty in obtaining high accuracy for the multi-step prediction of state trajectories. It has been shown theoretically and experimentally that the RKNN has higher prediction accuracy and better generalization capability than the conventional DMNN~\citep{wang1998runge}.
Therefore, it is reasonable to believe that RK methods can be adopted to design effective network architectures for image classification problems. Additionally, the RK methods might improve the performance of image classification since the convolutional subnetworks are able to approximate the rate of change of the dynamical system states more precisely.
\section{RKNets}
\label{RKNets}
RK methods are introduced in Section~\ref{subsec:RKmethod}. We describe the overall structure of RKNets in Section~\ref{subsec:rk_rk}. The structure of the subnetwork for the increment in each time-step is elaborated in Section~\ref{subsec:erk_irk}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{architecture}
\end{center}
\caption{Architecture of a 3-period RKNet. ${\bm{y}}^{(d)}$ denotes the system state of period $d$. ${\bm{y}}_0^{(d)}$ is the initial state of period $d$. ${\bm{y}}_r^{(d)}$ is the final state after $r$ time-steps in period $d$. $r$ is the total number of time-steps in a period. It can vary in different periods. Period 1 and time-step 1 in it are unfolded as an example. System state changes throughout a period. The final state of a step is estimated as the initial state of this step adding an increment. This operation originates from RK methods. To approximate the increment is the key point in RKNet. The dotted lines are for multiscale feature strategy.}
\label{fig:arch}
\end{figure}
\subsection{Runge-Kutta methods}
\label{subsec:RKmethod}
An initial value problem for a time-dependent first-order dynamical system can be described by the following ODE \citep{butcher2008numerical}:
\begin{equation}
\frac{d{\bm{y}}} {d t} =f\left( t,\ {\bm{y}}(t)\right),\qquad {\bm{y}}\left( t_{0}\right) = {\bm{y}}_{0},\label{eq:ode}
\end{equation}
where ${\bm{y}}$ is a vector representing the system state. The dimension of ${\bm{y}}$ should be equal to the dimension of the dynamical system. The ODE represents the rate of change of the system states. The rate of change is a function of time and the current system state. RK methods utilize the rate of change calculated from the ODE to approximate the increment in each time-step, and then obtain the predicted final state at the end of each step. RK methods are numerical methods originating from the Euler method. There are two types of RK methods: explicit and implicit. Both of them are employed in the RKNet. The family of RK methods is given by the following equations \citep{endre2003an}:
\begin{equation}
{\bm{y}}_{n+1} = {\bm{y}}_{n} + h\sum^{s}_{i=1}b_{i} {\bm{z}}_{i},\qquad t_{n+1}=t_{n}+h,\label{eq:add}
\end{equation}
where
\begin{equation}
{\bm{z}}_{i} = f\left( t_{n}+c_{i}h,\ {\bm{y}}_{n}+h\sum^{s}_{j=1}a_{ij} {\bm{z}}_{j}\right),\qquad 1\leq{i}\leq{s}.\label{eq:general}
\end{equation}
In \eqref{eq:add}, ${\bm{y}}_{n+1}$ is an approximation of the solution to \eqref{eq:ode} at time $t_{n+1}$, i.e. ${\bm{y}}(t_{n+1})$;
${\bm{y}}_0$ is the input initial value;
$h\sum^{s}_{i=1}b_{i}{\bm{z}}_{i}$ is the increment of system state ${\bm{y}}$ from $t_n$ to $t_{n+1}$; $\sum^{s}_{i=1}b_{i}{\bm{z}}_{i}$ is the estimated slope, which is the weighted average of the slopes ${\bm{z}}_i$ computed in different stages. The positive integer $s$ is the number of ${\bm{z}}_i$, i.e. the number of \textbf{stages} of the RK method. Equation \eqref{eq:general} is the general formula for ${\bm{z}}_i$. $h$ is the time-step size, which can be adaptive across time-steps but must be fixed across stages within a time-step. The learned time-step size is shown to be adaptive in Appendix~\ref{sec:adaptive}.
In numerical analysis, $s$, $a_{ij}$, $b_i$ and $c_i$ in \eqref{eq:add} and \eqref{eq:general} need to be prespecified for a particular RK method. These coefficients are displayed in a Butcher tableau. The ERK methods are those methods with $a_{ij} = 0$ when $1\leq{i}\leq{j}\leq{s}$. All the RK methods other than ERK methods are IRK methods. The algebraic relationships of the coefficients
have to meet the order conditions to reach the highest possible order. Different RK methods have different truncation errors which are denoted by the order: an order $p$ indicates that the local truncation error is $O(h^{p+1})$.
If an $s$-stage ERK method has order $p$, then $s\geq{p}$; if $p\geq 5$, then $s>p$~\citep{butcher2008numerical}. Furthermore, an $s$-stage IRK method can have order $p = 2s$ when its coefficients are chosen under certain conditions~\citep{butcher2008numerical}.
Therefore, more stages may achieve a higher order, i.e. a lower truncation error. The Euler method is a one-stage, first-order RK method with $b_1=1$ and $c_1=0$. In other words, high-order RK methods can be expected to achieve lower truncation errors than the Euler method. Thus, the goal of our proposed RKNets is to improve the classification accuracy by taking advantage of high-order RK methods.
It is necessary to specify $h$ in order to control the approximation error in conventional numerical analysis. A varying time-step size can adapt to regions with different rates of change; the truncation error is lower when $h$ is smaller.
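As an illustration of the family defined by \eqref{eq:add} and \eqref{eq:general} (a generic numerical sketch, independent of the network architecture), an explicit RK step with a prescribed Butcher tableau can be written as:

```python
import math

def erk_step(f, t, y, h, A, b, c):
    """One explicit RK step: z_i = f(t + c_i h, y + h * sum_j A[i][j] z_j),
    then y_{n+1} = y + h * sum_i b_i z_i (A strictly lower triangular)."""
    z = []
    for i in range(len(b)):
        yi = y + h * sum(A[i][j] * z[j] for j in range(i))
        z.append(f(t + c[i] * h, yi))
    return y + h * sum(bi * zi for bi, zi in zip(b, z))

# Classical 4-stage, 4th-order tableau (RK4) as an example of coefficients
# that an RKNet would instead learn implicitly by training.
A = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
b = [1 / 6, 1 / 3, 1 / 3, 1 / 6]
c = [0, 0.5, 0.5, 1]

# Integrate dy/dt = y from y(0)=1 to t=1; the result approximates e.
y = 1.0
for n in range(10):
    y = erk_step(lambda t, u: u, n * 0.1, y, 0.1, A, b, c)
print(abs(y - math.e))  # small truncation error, as expected for order 4
```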
\subsection{From RK methods to RKNets}\label{subsec:rk_rk}
There are three components of an RKNet: the preprocessor, the multi-periods and the postprocessor. The preprocessor manipulates the raw images and passes the results to the first period. The postprocessor deals with the output from the last period, or from all the periods when the multiscale feature strategy \citep{Yang_2018_CVPR} is adopted. It then passes the result to the classifier to make a decision. The periods between those two components are divided by the transition layers. These periods can be modeled by time-dependent dynamical systems. Each period of an RKNet is divided into $r$ \textbf{time-steps}, as shown in Figure~\ref{fig:arch}. RK methods approximate the final state of every time-step using the rate of change of the system state. Some guiding principles for applying RK methods to RKNets are listed as follows.
Firstly, dimensionality reduction is often carried out to simplify the system identification problem when the dimension of the real dynamical system is too high. The dimension of ${\bm{y}}$ in each period of an RKNet is predefined as the product of the feature-map size and the number of channels at the beginning of the period. The dimensions of ${\bm{y}}$ in the same period of different RKNets can differ due to different degrees of dimensionality reduction. Nevertheless, the dimension of ${\bm{y}}$ is constant within a period.
Secondly, given that there is no explicit ODE for image classification, a convolutional subnetwork is employed to approximate the increment in each time-step. The number of neurons in each hidden layer can be more than the dimension of ${\bm{y}}$.
Thirdly, the number of stages $s$ in each period is predefined in an RKNet, but the other coefficients, $a_{ij}$, $b_i$ and $c_i$ in \eqref{eq:add} and \eqref{eq:general}, are learned by training. Due to the order conditions \citep{butcher2008numerical}, the relationship among the coefficients is more important than the specific value of any individual coefficient. Hence, the coefficients are learned implicitly rather than as explicit parameters. The optimal relationship among the coefficients, with the highest possible order, is obtained after training.
Lastly, the number of time-steps $r$ in each period is predefined in RKNet, but the step size $h$ is learned by training. $n$ in \eqref{eq:add} and \eqref{eq:general} is limited to the range [0, $r$). The learned $h$ is thus considered adaptive. In theory, the adaptive time-step size can achieve higher accuracy.
A variety of RK methods can be adopted in the different periods of RKNets, but the same RK method is used for all time-steps within one period in an RKNet. The network models are named after the specific method in each period, such as RKNet-3$\times$2\_4$\times$1\_2$\times$5\_1$\times$1. The suffix in the name of an RKNet is composed of several $s\times{r}$ terms; each stands for the method in corresponding period.
The number of such terms equals the total number of periods. $s$ or $r$ can vary in different periods. For example, RKNet-3$\times$2\_4$\times$1\_2$\times$5\_1$\times$1 has four periods: period one has 2 time-steps, each with 3 stages; period two has 1 time-step with 4 stages; period three has 5 time-steps, each with 2 stages; period four has 1 time-step with 1 stage. We use this notation throughout this paper. In addition, ERKNets adopt only ERK methods and IRKNets adopt only IRK methods.
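As a small illustration of this naming convention (a hypothetical helper, not part of the original work, with ASCII `x` standing in for $\times$):

```python
# Parse an RKNet name suffix into (stages s, time-steps r) pairs per period.
# ASCII "x" stands in for the multiplication sign used in the text.
def parse_rknet_name(name):
    suffix = name.split("-", 1)[1]
    return [tuple(int(v) for v in term.split("x")) for term in suffix.split("_")]

print(parse_rknet_name("RKNet-3x2_4x1_2x5_1x1"))
# [(3, 2), (4, 1), (2, 5), (1, 1)]
```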
Given an RKNet model, $s$ and $r$ can be modified to construct more variants with the same dimensions in the corresponding periods. In other words, $s$ and $r$ control the depth of the network, while dimensionality reduction controls its width. More stages, more time-steps and larger dimensions usually lead to higher classification accuracy. However, the complexity of an ODE increases with its dimension. As a result, the convolutional subnetwork which approximates the increment in a time-step needs to be more complex for larger dimensions. Hence, the accuracy is also associated with how well the dimension and the convolutional subnetwork match. An unmatched high-dimensional network model may have lower accuracy. Additionally, the training method might affect the classification accuracy too.
\subsection{ERKNets and IRKNets}\label{subsec:erk_irk}
In this section, we introduce the architecture of RKNets. As shown in \eqref{eq:add}, the sum of $hb_i{\bm{z}}_i$ represents the increment in a time-step. It is crucial to approximate this increment in RKNet.
For the purpose of constructing an RKNet, it is necessary to hide the time-step size and the coefficients in RK methods.
$hb_i{\bm{z}}_i$ can be described as follows according to \eqref{eq:general}:
\begin{equation}
\begin{split}
hb_{i}{\bm{z}}_{i}&=hb_{i}f\left( t_{n}+c_{i}h,\ {\bm{y}}_{n}+h\sum^{s}_{j=1}a_{ij}{\bm{z}}_{j}\right)\\
&= g_{i}\left(\ {\bm{y}}_{n}+h\sum^{s}_{j=1}a_{ij}{\bm{z}}_{j}\right)\\
&= F_{i}\left( {\bm{y}}_{n},\ ha_{i1} {\bm{z}}_1,\ \ldots,\ ha_{is}{\bm{z}}_{s}\right).
\label{eq:hbizi}
\end{split}
\end{equation}
The above transformation first changes the explicit dependence on time in \eqref{eq:general} to an implicit one. Since the time parameter $t_n + c_i h$ differs across stages, it can be absorbed into $g_i(\cdot)$, which implicitly depends on time for stage $i$. Afterward, the summation in the input of $g_i(\cdot)$ is split into separate terms, and $F_i(\cdot)$ denotes the function of these terms for each stage. We verify by experiment that $F_i(\cdot)$ can equal $g_i(\cdot)$ after training, even though $F_i(\cdot)$ is more expressive than $g_i(\cdot)$. Additionally, $F_i(\cdot)$ is more memory-efficient than $g_i(\cdot)$ because it saves the storage for the summation input to $g_i(\cdot)$.
\subsubsection{Connect ERKNets with DenseNets}\label{subsubsec:conn_dense}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{time-step_E}
\end{center}
\caption{Architecture of one time-step in an ERKNet using an $s$-stage ERK method. ${\bm{y}}_n$ is the approximation of ${\bm{y}}(t_n)$. A dense block grows at a growth rate of $k$, and every $m$ growths form a convolutional subnetwork generating one $hb_i{\bm{z}}_i$. Here, $h$ is the time-step size, $b_i$ is a coefficient of the ERK method, and ${\bm{z}}_i$ is the slope of each stage in the ERK method. The total number of growths in a dense block is $ms$, in order to generate $hb_i{\bm{z}}_i$ for $i=1,\ \ldots,\ s$. An explicit summation layer is added after the dense block to complete a time-step.}
\label{fig:time-step_E}
\end{figure}
In order to construct ERKNets, $hb_i{\bm{z}}_i$ can be described by the equation below, according to \eqref{eq:hbizi}.
\begin{equation}
\begin{split}
hb_i{\bm{z}}_i&= e_{i}\left( {\bm{y}}_{n},\ ha_{i1} {\bm{z}}_1,\ \ldots,\ ha_{i(i-1)}{\bm{z}}_{i-1}\right)\\
&=E_{i}\left( {\bm{y}}_{n},\ hb_1 {\bm{z}}_1,\ \ldots,\ hb_{i-1}{\bm{z}}_{i-1}\right).\label{eq:hbizi_E}
\end{split}
\end{equation}
The above transformation first eliminates $ha_{ij}{\bm{z}}_j$ (${i}\leq{j}$) from $F_i(\cdot)$ in \eqref{eq:hbizi} since $a_{ij} = 0$ when $1\leq{i}\leq{j}\leq{s}$ for ERK methods (See \ref{subsec:RKmethod}). As a result, $hb_{i}{\bm{z}}_i$ is denoted by a function of $y_n$ and $ha_{ij}{\bm{z}}_j$ for $j=1,\ \ldots,\ i-1$. It is written as $e_i(\cdot)$. After that, adjusting the coefficients of each parameter from $a_{ij}$ to $b_j$ yields another function $E_i(\cdot)$. It is a function of $y_n$ and $hb_{j}{\bm{z}}_j$ for $j=1,\ \ldots,\ i-1$.
If a convolutional subnetwork is adopted to model $E_i(\cdot)$ in \eqref{eq:hbizi_E}, the most similar existing network structure is the dense connectivity of DenseNets. Specifically, each growth in a dense block concatenates all preceding layers as the input of a convolutional subnetwork, just as $hb_{i}{\bm{z}}_i$ takes ${\bm{y}}_n$ and all increments of the preceding stages as the input of $E_i(\cdot)$. To adopt dense blocks in ERKNets, the dense blocks must conform to the following rules.
\textbf{Rule 1} The number of channels of ${\bm{y}}_n$ has the form $mk$, where $m$ and $k$ are positive integers and $k$ is known as the growth rate in the DenseNet literature. The dimension of ${\bm{y}}_n$ is the product of the feature-map size and $mk$.
\textbf{Rule 2}
Every $m$ successive growths construct a convolutional subnetwork for $E_i(\cdot)$. Each subnetwork outputs $mk$ channels, which are regarded as a group to match the number of channels of ${\bm{y}}_n$. Each convolutional subnetwork concatenates ${\bm{y}}_n$ and all preceding groups as its input. The $i$th group, generated by the $i$th subnetwork, corresponds to $hb_i{\bm{z}}_i$.
\textbf{Rule 3}
The total number of growths is $ms$, where $s$ is the number of stages of the RK method. Consequently, $s$ groups representing $hb_i{\bm{z}}_i$ for $i=1,\ \ldots,\ s$ are generated successively in a dense block by the $s$ convolutional subnetworks modeling $E_i(\cdot)$ for $i=1,\ \ldots,\ s$.
After a restricted dense block conforming to the above rules, ${\bm{y}}_n$ and the groups $h b_i {\bm{z}}_i$ for $i=1, \ldots, s$ are added to obtain ${\bm{y}}_{n+1}$ according to \eqref{eq:add}. Figure~\ref{fig:time-step_E} illustrates one time-step of an ERKNet.
In DenseNets, every dense block together with part of the subsequent computation can be regarded as a period using an $s$-stage ERK method with $r=1$ time-step. The transition layers and the postprocessor contain the summation operation in \eqref{eq:add}. This gives an interpretation of DenseNets from the dynamical systems view.
\subsubsection{Connect IRKNets with CliqueNets}\label{subsubsec:conn_clique}
For IRK methods, $hb_i{\bm{z}}_i$ can be described by the equations below, according to \eqref{eq:hbizi}:
\begin{equation}
\begin{split}
hb_i{\bm{z}}_i&=H_{i}\left( {\bm{y}}_{n},\ hb_1 {\bm{z}}_1,\ \ldots,\ hb_{s}{\bm{z}}_{s}\right) \\
&= G_{i}\left( hb_1 {\bm{z}}_1,\ \ldots,\ hb_{i-1} {\bm{z}}_{i-1},\ hb_{i+1} {\bm{z}}_{i+1},\ \ldots,\ hb_{s}{\bm{z}}_{s}\right) \\
&= I_{i}\left( hb_1 {\bm{z}}_1,\ \ldots,\ hb_{i-1} {\bm{z}}_{i-1},\ {\bm{v}}_{i+1},\ \ldots,\ {\bm{v}}_{s}\right)
\label{eq:hbizi_I}
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
{\bm{v}}_j &= V_j(hb_j{\bm{z}}_j) \\
&= J_{j}\left( {\bm{y}}_{n},\ {\bm{v}}_1, \ \ldots,\ {\bm{v}}_{j-1} \right).
\label{eq:hbizi_J}
\end{split}
\end{equation}
The above transformation first rescales the coefficient of each input of $F_i(\cdot)$ in \eqref{eq:hbizi} from $a_{ij}$ to $b_j$, yielding another function $H_i(\cdot)$. As a result, every $hb_i{\bm{z}}_i$ is a function of ${\bm{y}}_n$. Thus, $hb_i{\bm{z}}_i$ can be denoted by a function of $hb_j{\bm{z}}_j$ for $j=1,\ \ldots,\ s,\ j\not=i$, written as $G_i(\cdot)$.
Inspired by Newton's method, which is used to implement IRK methods \citep{butcher2008numerical}, each $hb_i{\bm{z}}_i$ is first initialized using all available information and then updated alternately. Let ${\bm{v}}_j$ be the initial value of $hb_j{\bm{z}}_j$; the relationship between them is denoted by the function $V_j(\cdot)$. Therefore, $hb_i{\bm{z}}_i$ can be denoted by a function of $hb_j{\bm{z}}_j$ for $j=1,\ \ldots,\ i-1$ and ${\bm{v}}_j$ for $j=i+1,\ \ldots,\ s$. This function, written as $I_i(\cdot)$, is the update function of $hb_i{\bm{z}}_i$.
Since every $hb_j{\bm{z}}_j$ is a function of ${\bm{y}}_n$, every ${\bm{v}}_j$ is also a function of ${\bm{y}}_n$. Thus, ${\bm{v}}_j$ can be denoted by a function of ${\bm{y}}_n$ and ${\bm{v}}_q$ for $q=1,\ \ldots,\ j-1$. This function, written as $J_j(\cdot)$, is the initialization function of $hb_j{\bm{z}}_j$.
In Newton's method, the update process is a sequence of iterations until convergence; in other words, ${\bm{v}}_j$ is updated many times to approach $hb_j{\bm{z}}_j$. During these updates, $G_i(\cdot)$ with biased input serves as the update function since $I_i(\cdot)$ is unknown. If a convolutional subnetwork is used to model each $I_i(\cdot)$, these functions can be learned through training. As a result, each ${\bm{v}}_j$ needs to be updated only once, which reduces the computational cost remarkably.
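The initialize-then-update scheme can be made concrete numerically. The sketch below is our own toy example (a linear ODE and the 2-stage Gauss-Legendre IRK tableau; these choices, like all variable names, are illustrative and not the paper's learned $I_i$ and $J_j$): both stage slopes are initialized with a cheap explicit guess, playing the role of $J_j(\cdot)$, and then each is updated once using the current values of the others, playing the role of $I_i(\cdot)$. A single sweep already shrinks the residual of the implicit stage equations.

```python
import numpy as np

# toy linear ODE dy/dt = A y (harmonic oscillator)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
f = lambda y: A @ y

h = 0.1
y = np.array([1.0, 0.0])

# 2-stage Gauss-Legendre IRK coefficients (an s=2 implicit method)
s3 = np.sqrt(3.0)
a = np.array([[0.25, 0.25 - s3 / 6], [0.25 + s3 / 6, 0.25]])

def residual(z):
    # stage equations: z_i = f(y + h * sum_j a_ij z_j)
    return np.array([z[i] - f(y + h * (a[i, 0] * z[0] + a[i, 1] * z[1]))
                     for i in range(2)])

# initialization (the role of J_j): a cheap explicit guess
z = np.array([f(y), f(y)])
r0 = np.linalg.norm(residual(z))

# one alternate update per stage (the role of I_i),
# each using the current values of the other stage
for i in range(2):
    z[i] = f(y + h * (a[i, 0] * z[0] + a[i, 1] * z[1]))
r1 = np.linalg.norm(residual(z))

assert r1 < r0  # a single sweep already tightens the stage equations
```

In an IRKNet the hand-written update rule is replaced by a trained subnetwork, so one sweep per stage suffices by construction rather than by iteration count.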
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{time-step_I}
\end{center}
\caption{Architecture of one time-step in an IRKNet using a 3-stage IRK method. ${\bm{y}}_n$ is the approximation of ${\bm{y}}(t_n)$. A dense block, which is Stage-I of a clique block, grows $k$ channels at a time to generate the initial value ${\bm{v}}_i$ of each $hb_i{\bm{z}}_i$. Here, $h$ is the time-step size, $b_i$ is a coefficient of the IRK method, and ${\bm{z}}_i$ is the slope of stage $i$. In Stage-II of the clique block, a convolutional subnetwork concatenates the current values of $hb_j{\bm{z}}_j$ for $j=1,\ \ldots,\ 3,\ j\not=i$ to update each $hb_i{\bm{z}}_i$ alternately. An explicit summation layer is added after the clique block to complete a time-step.}
\label{fig:time-step_I}
\end{figure}
If convolutional subnetworks are adopted to model $J_j(\cdot)$ in \eqref{eq:hbizi_J} and $I_i(\cdot)$ in \eqref{eq:hbizi_I}, the most similar network structure is the clique block in CliqueNets. Specifically, a clique block is composed of Stage-I and Stage-II in the CliqueNet literature. Stage-I, which initializes all layers in a clique block, is regarded as a sequence of $J_j(\cdot)$; Stage-II, which updates all layers alternately, corresponds to the functions $I_i(\cdot)$. In the CliqueNet literature, all layers in a clique block except the top layer to be updated are concatenated as the bottom layer, i.e., the input of a convolutional subnetwork for updating, just as $I_i(\cdot)$ takes $hb_j{\bm{z}}_j$ for $j=1,\ \ldots,\ i-1$ and ${\bm{v}}_j$ for $j=i+1,\ \ldots,\ s$ as input. To adopt clique blocks in IRKNets, the clique blocks must conform to the following rules.
\textbf{Rule 1} The number of channels of ${\bm{y}}_n$ is $k$, which is also the growth rate in Stage-I since Stage-I is a dense block. The dimension of ${\bm{y}}_n$ is the product of the feature-map size and $k$.
\textbf{Rule 2}
Every growth in Stage-I constructs a convolutional subnetwork. Each subnetwork outputs $k$ channels, which are regarded as a group to match the number of channels of ${\bm{y}}_n$. Each convolutional subnetwork concatenates ${\bm{y}}_n$ and all preceding groups as its input. The $i$th group, generated by the $i$th subnetwork, is ${\bm{v}}_i$.
\textbf{Rule 3}
The total number of growths in Stage-I is $s$, the number of stages of the RK method. Consequently, $s$ groups representing ${\bm{v}}_i$ for $i=1,\ \ldots,\ s$ are generated successively by $s$ convolutional subnetworks in Stage-I. Note that $s$ must be larger than 1 for the alternate updating in Stage-II.
After a restricted clique block conforming to the above rules, ${\bm{y}}_n$ and the groups $h b_i {\bm{z}}_i$ for $i=1, \ldots, s$ are added to obtain ${\bm{y}}_{n+1}$ according to \eqref{eq:add}. Figure~\ref{fig:time-step_I} illustrates one time-step of an IRKNet using a 3-stage IRK method as an example.
In CliqueNets, every clique block together with part of the subsequent computation can be regarded as a period using an $s$-stage IRK method with $r=1$ time-step. The transition layers and the postprocessor contain the summation operation in \eqref{eq:add}. This gives an interpretation of CliqueNets from the dynamical systems view.
\section{Experiments}
\label{experiments}
To verify the theoretical properties of RK methods and evaluate the performance of RKNets on image classification, experiments are conducted using the proposed network architectures. The experimental setup is described in Appendix~\ref{setup}. Some extra techniques, including attentional transition, bottleneck and multiscale feature strategy, can be adopted in RKNets following CliqueNets. They are introduced in Appendix~\ref{extra}.
\begin{table}[t]
\caption{Test errors of ERKNets and IRKNets, evaluated on CIFAR-10 without data augmentation. The growth rate $k$ is 36 in every period of the RKNets. The number of successive growths in each stage, $m$, is 1. The multiscale feature strategy is used.
All the models are run with batch size 64.}
\label{tb:ERKvsIRK}
\begin{center}
\begin{tabularx}{\textwidth}{lXXXllXXX}
\multicolumn{1}{c}{\bf ERKNet} & \multicolumn{1}{X}{\bf FLOPs (G)} & \multicolumn{1}{X}{\bf Params (M)} & \multicolumn{1}{X}{\bf Error (\%)} & \multicolumn{1}{c}{\bf IRKNet} & \multicolumn{1}{X}{\bf FLOPs (G)} & \multicolumn{1}{X}{\bf Params (M)} & \multicolumn{1}{X}{\bf Error (\%)} \\
\hline \\
-6$\times$1\_6$\times$1\_6$\times$1 & 0.66 & 0.74 & 7.08 & -3$\times$1\_3$\times$1\_3$\times$1 & 0.38 & 0.32 & 7.18 \\
-7$\times$1\_6$\times$1\_6$\times$1 & 0.83 & 0.83 & 7.02 & -4$\times$1\_3$\times$1\_3$\times$1 & 0.62 & 0.40 & 6.89 \\
-7$\times$1\_7$\times$1\_6$\times$1 & 0.87 & 0.91 & 6.67 & -4$\times$1\_4$\times$1\_3$\times$1 & 0.68 & 0.49 & 6.63 \\
-7$\times$1\_7$\times$1\_7$\times$1 & 0.88 & 0.99 & 6.61 & -4$\times$1\_4$\times$1\_4$\times$1 & 0.69 & 0.57 & 6.50 \\
\end{tabularx}
\end{center}
\end{table}
\begin{table}
\caption{Test errors evaluated on CIFAR and SVHN. $k$ is the growth rate. The multiscale feature strategy is used in RKNets. A and B represent attentional transition and bottleneck, respectively. The bottleneck layers which output $k$ channels to the following layers are used in IRKNets. C10 and C100 stand for CIFAR-10 and CIFAR-100, respectively. ``+'' indicates standard data augmentation. When data augmentation is not used, dropout layers are added. The values with * are provided by \citet{Huang_2017_CVPR}. The values with $\dagger$ are provided by \citet{Kuen_2017_ICCV}. The values with $\star$ are computed by us. FLOPs and Params are calculated on CIFAR-10 or SVHN. RKNets are run with batch size 32 on CIFAR but with batch size 64 on SVHN. Results that outperform all competing methods are {\bf bold} and the overall best result is \textcolor{blue}{\bf blue}.}
\label{tb:cifar}
\begin{center}
\begin{tabularx}{\textwidth}{lXXXXXXX}
\multicolumn{1}{c}{\bf Model} & \multicolumn{1}{X}{\bf FLOPs (G)} & \multicolumn{1}{X}{\bf Params (M)} & \multicolumn{1}{X}{\bf C10 (\%)} & \multicolumn{1}{X}{\bf C10+ (\%)} & \multicolumn{1}{X}{\bf C100 (\%)} & \multicolumn{1}{X}{\bf C100+ (\%)} & \multicolumn{1}{X}{\bf SVHN (\%)} \\
\hline \\
pre-act ResNet \citep{he2016identity} & - & 10.2 & 10.56* & 4.62 & 33.47* & 22.71 & - \\
\hline \\
WRN \citep{Zagoruyko2016WRN} & 3.10$\dagger$ & 11.0 & - & 4.27 & - & 20.43 & 1.54 \\
& 10.49$\dagger$ & 36.5 & - & 4.00 & - & 19.25 & - \\
\hline \\
DenseNet \citep{Huang_2017_CVPR} & 14.53$\star$ & 27.2 & 5.83 & 3.74 & 23.42 & 19.25 & 1.59 \\
& 10.83$\star$ & 15.3 & 5.19 & 3.62 & \textcolor{blue}{\bf 19.64} & 17.60 & 1.74 \\
& 18.59$\star$ & 25.6 & - & 3.46 & - & 17.18 & - \\
\hline \\
Hamiltonian \citep{chang2018reversible} & - & 1.68 & - & 5.98 & - & 26.11 & - \\
\hline \\
LM-architecture \citep{pmlr-v80-lu18d} & - & 1.7 & - & 5.27 & - & 22.9 & - \\
& - & 68.8 & - & - & - & \textcolor{blue}{\bf 16.79} & - \\
\hline \\
CliqueNet \citep{Yang_2018_CVPR} & 9.45 & 10.14 & 5.06 & - & 23.14 & - & \textcolor{blue}{\bf 1.51} \\
& 10.56$\star$ & 10.48$\star$ & 5.06 & - & 21.83 & - & 1.64 \\
\hline \\
IRKNet-5$\times$1\_5$\times$1\_5$\times$1-AB ($k$=80) & 2.17 & 1.40 & 5.27 & 4.23 & 24.35 & 20.85 & 1.74 \\
IRKNet-5$\times$1\_5$\times$1\_5$\times$1-A ($k$=80) & 5.44 & 4.37 & - & - & - & - & 1.63 \\
IRKNet-5$\times$1\_5$\times$1\_5$\times$1-AB ($k$=150) & 7.62 & 4.87 & {\bf 4.60} & 3.60 & 21.39 & 19.42 & 1.64 \\
IRKNet-6$\times$1\_6$\times$1\_6$\times$1-A ($k$=80) & 7.92 & 6.28 & - & - & - & - & 1.52 \\
IRKNet-5$\times$1\_5$\times$1\_5$\times$1-AB ($k$=180) & 10.98 & 6.99 & \textcolor{blue}{\bf 4.56} & - & 20.88 & 18.61 & - \\
IRKNet-5$\times$1\_5$\times$1\_5$\times$1-AB ($k$=200) & 13.55 & 8.63 & - & 3.54 & 20.67 & 18.11 & - \\
IRKNet-5$\times$1\_5$\times$1\_5$\times$1-AB ($k$=240) & 19.51 & 12.41 & - & \textcolor{blue}{\bf 3.40} & 20.58 & - & - \\
\end{tabularx}
\end{center}
\end{table}
\begin{table}
\caption{Classification errors on ImageNet validation set with a single-crop (224$\times$224). The growth rate $k$ is 32 and $mk$ is the initial number of channels in each period in RKNets. $m_n$ stands for $m$ in the $n$th period. For each RKNet in this table, $m_0$ is 2, $m_1$ is 4 and $m_2$ is 8.
B represents bottleneck. The bottleneck layers which output $4k$ channels to the following layers are used in ERKNets.
}
\label{tb:imagenet}
\begin{center}
\begin{tabularx}{\textwidth}{llXXXX}
\multicolumn{1}{c}{\bf Model} & \multicolumn{1}{c}{$\bm m_3$} & \multicolumn{1}{X}{\bf FLOPs (G)} & \multicolumn{1}{X}{\bf Params (M)} & \multicolumn{1}{X}{\bf Top1 (\%)} & \multicolumn{1}{X}{\bf Top5 (\%)} \\
\hline \\
ERKNet-3$\times$1\_3$\times$1\_3$\times$1\_1$\times$1-B & 16 & 5.20 & 6.95 & 25.47 & 7.81 \\
ERKNet-3$\times$1\_3$\times$1\_4$\times$1\_2$\times$1-B & 20 & 6.35 & 14.49 & 24.12 & 7.17 \\
ERKNet-3$\times$1\_3$\times$1\_6$\times$1\_2$\times$1-B & 28 & 8.50 & 25.51 & 23.14 & 6.66 \\
\end{tabularx}
\end{center}
\end{table}
According to the theoretical results, an RK method with more stages usually has a higher order and a lower truncation error. Therefore, as the number of stages increases, a more precise approximation of the system states in every period leads to more accurate classification.
Table~\ref{tb:ERKvsIRK} shows the number of FLOPs, the number of parameters, and the classification error on CIFAR-10 for RKNets with a varying number of stages in each period. The empirical results are consistent with the theoretical properties.
IRKNets are evaluated on CIFAR-10, CIFAR-100 and SVHN while ERKNets are evaluated on ImageNet to compare with the state-of-the-art network models.
The test errors of IRKNets on CIFAR-10, CIFAR-100 and SVHN are shown in Table~\ref{tb:cifar}. The top-1 and top-5 errors on ImageNet validation set with a single-crop (224$\times$224) are shown in Table~\ref{tb:imagenet}. Figure~\ref{fig:plot} shows
the single-crop top-1 validation errors of DenseNets, CliqueNets and RKNets as a function of the number of parameters (left) and
FLOPs (right). According to the experimental results, RKNets are more efficient than the state-of-the-art models on CIFAR-10 and on par with them on CIFAR-100, SVHN, and ImageNet.
\begin{figure}[t]
\caption{Comparison of the DenseNets, CliqueNets and RKNets. The top-1 error rates (single-crop testing) on the ImageNet validation dataset are shown as a function of learned parameters (left) and FLOPs during test-time (right). RKNets compared here are the models shown in Table~\ref{tb:imagenet}.}
\centering
\begin{minipage}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
xlabel={\#Parameters (M)},
ylabel={Validation error},
grid=major,
legend entries={DenseNets, CliqueNets, RKNets},
]
\addplot+[sharp plot]
coordinates
{
(7.98, 25.02)
(14.15, 23.80)
(20.01, 22.58)
(28.68, 22.33)
};
\node [anchor=north east, font=\tiny, color=blue] at (7.98, 25.02) {-121};
\node [anchor=north east, font=\tiny, color=blue] at (14.15, 23.80) {-169};
\node [anchor=north east, font=\tiny, color=blue] at (20.01, 22.58) {-201};
\node [anchor=north east, font=\tiny, color=blue] at (28.68, 22.33) {-161 (k=48)};
\addplot+[sharp plot]
coordinates
{
(5.70, 27.52)
(7.96, 26.21)
(10.00, 25.85)
(11.01, 24.82)
(13.17, 24.98)
(14.38, 24.01)
};
\node [anchor=south west, font=\tiny, color=red] at (5.70, 27.52) {S0*};
\node [anchor=south west, font=\tiny, color=red] at (7.96, 26.21) {S1*};
\node [anchor=south west, font=\tiny, color=red] at (10.00, 25.85) {S2*};
\node [anchor=south west, font=\tiny, color=red] at (11.01, 24.82) {S2};
\node [anchor=south west, font=\tiny, color=red] at (13.17, 24.98) {S3*};
\node [anchor=north west, font=\tiny, color=red] at (14.38, 24.01) {S3};
\addplot+[sharp plot]
coordinates
{
(6.95, 25.47)
(14.49, 24.12)
(25.51, 23.14)
};
\node [anchor=south west, font=\tiny, color=brown] at (6.95, 25.47) {ERKNet-3$\times$1\_3$\times$1\_3$\times$1\_1$\times$1-B};
\node [anchor=south west, font=\tiny, color=brown] at (14.49, 24.12) {ERKNet-3$\times$1\_3$\times$1\_4$\times$1\_2$\times$1-B};
\node [anchor=south, font=\tiny, color=brown] at (25.51, 23.14) {ERKNet-3$\times$1\_3$\times$1\_6$\times$1\_2$\times$1-B ----------};
\end{axis}
\end{tikzpicture}
\end{minipage}
\begin{minipage}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
xlabel={\#FLOPs (G)},
ylabel={Validation error},
grid=major,
legend entries={DenseNets, CliqueNets, RKNets},
]
\addplot+[sharp plot]
coordinates
{
(5.66,25.02)
(6.71,23.80)
(8.57,22.58)
(15.44,22.33)
};
\node [anchor=east, font=\tiny, color=blue] at (5.66,25.02) {-121};
\node [anchor=north east, font=\tiny, color=blue] at (6.71,23.80) {-169};
\node [anchor=north east, font=\tiny, color=blue] at (8.57,22.58) {-201};
\node [anchor=north east, font=\tiny, color=blue] at (15.44,22.33) {-161 (k=48)};
\addplot+[sharp plot]
coordinates
{
(4.33,27.52)
(5.65,26.21)
(5.69,25.85)
(5.69,24.82)
(7.74,24.98)
(7.74,24.01)
};
\node [anchor=south west, font=\tiny, color=red] at (4.33,27.52) {S0*};
\node [anchor=south west, font=\tiny, color=red] at (5.65,26.21) {S1*};
\node [anchor=south west, font=\tiny, color=red] at (5.69,25.85) {S2*};
\node [anchor=south west, font=\tiny, color=red] at (5.69,24.82) {S2};
\node [anchor=south west, font=\tiny, color=red] at (7.74,24.98) {S3*};
\node [anchor=north west, font=\tiny, color=red] at (7.74,24.01) {S3};
\addplot+[sharp plot]
coordinates
{
(5.20,25.47)
(6.35,24.12)
(8.50,23.14)
};
\node [anchor=south west, font=\tiny, color=brown] at (5.20,25.47) {ERKNet-3$\times$1\_3$\times$1\_3$\times$1\_1$\times$1-B};
\node [anchor=west, font=\tiny, color=brown] at (6.35,24.12) {ERKNet-3$\times$1\_3$\times$1\_4$\times$1\_2$\times$1-B};
\node [anchor=south west, font=\tiny, color=brown] at (8.50,23.14) {ERKNet-3$\times$1\_3$\times$1\_6$\times$1\_2$\times$1-B};
\end{axis}
\end{tikzpicture}
\end{minipage}
\label{fig:plot}
\end{figure}
\section{Conclusion}
\label{conclusion}
We propose to employ a type of numerical ODE methods, the RK methods, to construct convolutional neural networks for image classification tasks. The proposed network architecture can systematically generalize to high order. At the same time, we give a theoretical interpretation of the DenseNet and CliqueNet via the dynamical systems view. The model constructed using the RK methods is referred to as the RKNet, which can be converted from a DenseNet or CliqueNet by enforcing theoretical constraints.
The experimental results validate the theoretical properties of RK methods and support the dynamical systems interpretation. Moreover, they demonstrate that RKNets surpass the state-of-the-art models on CIFAR-10 and are on par with them on CIFAR-100, SVHN, and ImageNet.
With the help of the dynamical systems view and various numerical ODE methods including RK methods, more general neural networks can be constructed. Many aspects of RKNets and the dynamical systems view still require further investigation. We hope this work inspires future research directions.
\section{Introduction}
Nonnegative matrix factorization (NMF) \cite{Lee:99} extracts the latent factors in a low dimensional subspace. The popularity of NMF is due to its ability to learn \textit{parts}-based representation by the use of nonnegative constraints. Numerous successes have been found in document clustering \cite{xu2004document,lu2017nonconvex}, computer vision \cite{Lee:99}, signal processing \cite{gao2016minimum,lu2017nonconvex}, etc.
Suppose a collection of $N$ samples with $M$ nonnegative measurements is denoted in matrix form $X\in{\mbox{$\mathbf{R}$}} _+^{M\times N}$, where each column is a sample. The purpose of NMF is to approximate $X$ by a product of two nonnegative matrices $B\in{\mbox{$\mathbf{R}$}}_+^{M\times K}$ and $C\in{\mbox{$\mathbf{R}$}}_+^{K\times N}$ with a desired low dimension $K$, where $K\ll \min \{M,N\}$. The columns of matrix $B$ can be considered as a \textit{basis} in the low dimension subspace, while the columns of matrix $C$ are the \textit{coordinates}. NMF can be formulated as an optimization problem in \eqref{opt:nmf}:
\begin{subequations}\label{opt:nmf}
\begin{eqnarray}
\underset{B,C}{\text{minimize}} &&f(B,C)=\frac{1}{2}\norm{X-BC}_F^2\\
\text{subject to} && B,C\geq0,
\end{eqnarray}
\end{subequations}
where \quo{$\geq0$} means element-wise nonnegative, and $\norm{\cdot}_F$ is the Frobenius norm. The problem \eqref{opt:nmf} is nonconvex with respect to variables $B$ and $C$. Finding the global minimum is NP-hard \cite{vavasis2009complexity}. Thus, a practical algorithm usually converges to a local minimum.
Many algorithms have been proposed to solve NMF such as \textit{multiplicative updates} (MU) \cite{lee2001algorithms}, \textit{hierarchical alternating least square} (HALS) \cite{cichocki2007hierarchical,li2009fastnmf}, \textit{alternating direction multiplier method} (ADMM) \cite{zhang2010alternating}, and \textit{alternating nonnegative least square} (ANLS) \cite{kim2011fast}. Amongst those algorithms, ANLS has the largest reduction of objective value per iteration since it exactly solves \textit{nonnegative least square} (NNLS) subproblems using a \textit{block principal pivoting} (BPP) method \cite{kim2011fast}. Unfortunately, the computation of each iteration is costly. The algorithm HALS, on the other hand, solves subproblems inexactly with cheaper computation and has achieved faster convergence in terms of time \cite{kim2011fast,gillis2012accelerated}. Instead of iteratively solving the subproblems, ADMM obtains closed-form solutions by using auxiliary variables. A drawback of ADMM is that it is sensitive to the choice of the tuning parameters, even to the point where poor parameter selection can lead to algorithm divergence \cite{sun2014alternating}.
Most of the proposed algorithms are intended for centralized implementation, assuming that the whole data matrix can be loaded into the RAM of a single computer node. In the era of massive data sets, however, this assumption is often not satisfied, since the number of samples is too large to be stored in a single node. As a result, there is a growing need to develop algorithms for distributed systems. Thus, in this paper, we assume the number of samples is so large that the data matrix is collected and stored distributively. Such applications can be found in e-commerce ({\it e.g.}, Amazon), digital content streaming ({\it e.g.}, Netflix) \cite{koren2009matrix} and technology ({\it e.g.}, Facebook, Google) \cite{tan2016faster}, where companies have hundreds of millions of users.
Many distributed algorithms have been published recently. Distributed MU \cite{liu2010distributed,yin2014scalable} was the first distributed algorithm proposed to solve NMF. However, MU suffers from slow and ill-behaved convergence in some cases \cite{lin2007convergence}. \citeauthor{kannan2016high} \cite{kannan2016high} proposed high performance ANLS (HPC-ANLS) using a 2D-grid partition of the data matrix such that each node only stores a submatrix. Nevertheless, six communication steps per iteration are required to obtain the intermediate variables needed to solve the subproblems, so the communication overhead is significant. Moreover, the computation is costly since they use the ANLS framework. The most recent work is distributed HALS (D-HALS) \cite{zdunek2017distributed}. However, it assumes the factors $B$ and $C$ can be stored in the shared memory of the computer nodes, which may not be the case if $N$ is large. \citeauthor{boyd2011distributed} \cite{boyd2011distributed} suggested that ADMM has the potential to solve NMF distributively, and \citeauthor{du2014maxios} \cite{du2014maxios} demonstrated this idea in an algorithm called Maxios. Similar to HPC-ANLS, the communication overhead is expensive, since every latent factor or auxiliary variable has to be gathered and broadcasted over all computational nodes, resulting in eight communication steps per iteration. In addition, Maxios only works for sparse matrices since it assumes the whole data matrix is stored in every computer node.
In this paper, we propose a distributed algorithm based on block coordinate descent framework. The main contributions of this paper are listed below.
\begin{itemize}
\item We propose a novel distributed algorithm, called \textit{distributed incremental block coordinate descent} (DID). By splitting the columns of the data matrix, DID is capable of updating the coordinate matrix $C$ in parallel. Leveraging the most recent residual matrix, the basis matrix $B$ is updated distributively and incrementally. Thus, only one communication step is needed in each iteration.
\item A scalable and easy implementation of DID is derived using \textit{Message Passing Interface} (MPI). Our implementation does not require a master processor to synchronize.
\item Experimental results showcase the correctness, efficiency, and scalability of our novel method.
\end{itemize}
The paper is organized as follows. In Section 2, the previous works are briefly reviewed. Section 3 introduces a distributed ADMM for comparison purpose. The novel algorithm DID is detailed in Section 4. In Section 5, the algorithms are evaluated and compared. Finally, the conclusions are drawn in Section 6.
\section{Previous Works}
In this section we briefly introduce three standard algorithms to solve NMF problem \eqref{opt:nmf}, {\it i.e.}, ANLS, HALS, and ADMM, and discuss the parallelism of their distributed versions.
\paragraph{Notations.}Given a nonnegative matrix $X\in{\mbox{$\mathbf{R}$}}_+^{M\times N}$ with $M$ rows and $N$ columns, we use $x_i^r\in{\mbox{$\mathbf{R}$}}_+^{1\times N}$ to denote its $i$-th row, $x_j\in{\mbox{$\mathbf{R}$}}_+^{M\times 1}$ to denote the $j$-th column, and $x_{ij}\in{\mbox{$\mathbf{R}$}}_+$ to denote the entry in the $i$-th row and $j$-th column. In addition, we use $x_i^{rT}\in{\mbox{$\mathbf{R}$}}_+^{N\times 1}$ and $x_j^T\in{\mbox{$\mathbf{R}$}}_+^{1\times M}$ to denote the transpose of $i$-th row and $j$-th column, respectively.
\subsection{ANLS}
The optimization problem \eqref{opt:nmf} is biconvex, {\it i.e.}, if either factor is fixed, updating the other reduces to a \textit{nonnegative least square} (NNLS) problem. Thus, ANLS \cite{kim2011fast} minimizes the NNLS subproblems with respect to $B$ and $C$ alternately. The procedure is given by
\begin{subequations}\label{opt:ANLS}
\begin{eqnarray}
&C:=\mathop{\rm argmin}_{C\geq 0}\norm{X-BC}_F^2\\
&B:=\mathop{\rm argmin}_{B\geq 0}\norm{X-BC}_F^2.
\end{eqnarray}
\end{subequations}
The optimal solution of an NNLS subproblem can be obtained using the BPP method.
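The alternating structure of \eqref{opt:ANLS} can be sketched compactly in NumPy. Note that the sketch below substitutes a few projected-gradient steps for the exact NNLS solve; this is our own simplification for illustration, not the BPP method, and the problem sizes, seed, and step counts are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, K = 15, 20, 3
X = rng.random((M, N))
B = rng.random((M, K))
C = rng.random((K, N))

def obj(B, C):
    return 0.5 * np.linalg.norm(X - B @ C) ** 2

def nnls_pg(A, Y, Z, steps=50):
    # approximately solve min_{Z >= 0} ||Y - A Z||_F^2 by projected gradient
    # (a stand-in for the exact BPP solver used by ANLS)
    L = np.linalg.norm(A.T @ A, 2) + 1e-12  # Lipschitz constant of the gradient
    for _ in range(steps):
        Z = np.maximum(Z - (A.T @ (A @ Z - Y)) / L, 0)
    return Z

f0 = obj(B, C)
for _ in range(5):
    C = nnls_pg(B, X, C)          # C-subproblem: min_{C >= 0} ||X - BC||_F^2
    B = nnls_pg(C.T, X.T, B.T).T  # B-subproblem, solved via transposition
f1 = obj(B, C)
assert f1 < f0
```

Replacing `nnls_pg` with an exact NNLS solver recovers ANLS proper, which explains its large per-iteration reduction of the objective at a higher per-iteration cost.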
A naive distributed ANLS parallelizes the $C$-minimization step in a column-by-column manner and the $B$-minimization step in a row-by-row manner. HPC-ANLS \cite{kannan2016high} divides the matrix $X$ into 2D-grid blocks, the matrix $B$ into $P_r$ row blocks, and the matrix $C$ into $P_c$ column blocks, so that the memory requirement of each node is $\mathcal{O}(\frac{MN}{P_rP_c}+ \frac{MK}{P_r}+\frac{NK}{P_c})$, where $P_r$ is the number of row processors and $P_c$ is the number of column processors such that $P=P_cP_r$ is the total number of processors. To actually perform the updates, the intermediate variables $CC^T$, $XC^T$, $B^TB$, and $B^TX$ are computed and broadcasted using six communication steps in total. Each of them has a cost of $\log P\cdot(\alpha + \beta\cdot NK)$, where $\alpha$ is the \textit{latency} and $\beta$ is the \textit{inverse bandwidth} in a \textit{distributed memory network} model \cite{chan2007collective}. The analysis is summarized in Table \ref{tab:an}.
\subsection{HALS}
Since the optimal solution to the subproblem is not required when updating one factor, a comparable method, called HALS, which achieves an approximate solution is proposed by \cite{cichocki2007hierarchical}. The algorithm HALS successively updates each column of $B$ and row of $C$ with an optimal solution in a closed form.
The objective function in \eqref{opt:nmf} can be expressed with respect to the $k$-th column of $B$ and $k$-th row of $C$ as follows
\begin{equation*}
\resizebox{0.95\hsize}{!}{$
\Big\|X-BC\Big\|_F^2=\Big\| X-\sum_{i=1}^K b_ic_i^r \Big\|_F^2=\Big\|X-\sum_{i\neq k}b_ic_i^r-b_kc_k^r\Big\|_F^2
$}.
\end{equation*}
Let $A\triangleq X-\sum_{i\neq k}b_ic_i^r$ and fix all the variables except $b_{k}$ or $c_k^r$. We have subproblems in $b_k$ and $c_k^r$
\begin{subequations}
\begin{align}
&\underset{b_{k}\geq 0}{\min} \; \norm{A-b_kc_k^r}_F^2, \\
&\underset{c_k^r\geq 0}{\min} \; \norm{A-b_kc_k^r}_F^2
\end{align}
\end{subequations}
By setting the derivative with respect to $b_k$ or $c_k^r$ to zero and projecting the result to the nonnegative region, the optimal solution of $b_k$ and $c_k^r$ can be easily written in a closed form
\begin{subequations}
\begin{align}
b_{k}&:=\left[(c_k^rc_k^{rT})^{-1}(Ac_k^{rT})\right]_+\\
c_{k}^r&:=\left[(b_k^Tb_k)^{-1}(A^Tb_k)\right]_+
\end{align}
\end{subequations}
where $\left[z\right]_+$ denotes $\max\{0, z\}$ applied element-wise. Therefore, there are $K$ inner-loop iterations to update every pair of $b_k$ and $c_k^r$. With a cheaper computational cost per iteration, HALS has been confirmed to converge faster in terms of time.
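For reference, the HALS updates above can be sketched in a few lines of NumPy. This is an illustrative toy run on random data; the matrix sizes, the seed, the iteration count, and the small $10^{-12}$ guard against division by zero are our own choices. Each inner loop forms $A$ for the $k$-th pair and applies the two projected closed-form updates, so the objective decreases monotonically.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 20, 30, 4
X = rng.random((M, N))
B = rng.random((M, K))
C = rng.random((K, N))

def obj(B, C):
    return 0.5 * np.linalg.norm(X - B @ C) ** 2

f0 = obj(B, C)
for _ in range(10):  # a few outer iterations
    for k in range(K):
        # A = X - sum_{i != k} b_i c_i^r, formed by adding back the k-th term
        A = X - B @ C + np.outer(B[:, k], C[k, :])
        # closed-form updates, projected onto the nonnegative orthant
        B[:, k] = np.maximum(A @ C[k, :] / (C[k, :] @ C[k, :] + 1e-12), 0)
        C[k, :] = np.maximum(A.T @ B[:, k] / (B[:, k] @ B[:, k] + 1e-12), 0)
f1 = obj(B, C)
assert f1 < f0  # each rank-one update is exact, so the objective never increases
```

In an efficient implementation $A$ is maintained incrementally rather than recomputed, but the update formulas are exactly those given above.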
\citeauthor{zdunek2017distributed} in \citeyear{zdunek2017distributed} proposed a distributed version of HALS, called DHALS. They also divide the data matrix $X$ into 2D-grid blocks. Compared with HPC-ANLS, the resulting algorithm DHALS requires only two communication steps. However, it assumes matrices $B$ and $C$ can be loaded into the shared memory of a single node. Therefore, DHALS is not applicable in our scenario, where $N$ is so large that even the latent factors are stored distributively. See the detailed analysis in Table~\ref{tab:an}.
\subsection{ADMM}
The algorithm ADMM \cite{zhang2010alternating} solves the NMF problem by alternately optimizing the Lagrangian function with respect to different variables. Specifically, the NMF \eqref{opt:nmf} is reformulated as
\begin{subequations}\label{opt:admm}
\begin{eqnarray}
\underset{B,C,W,H}{\text{minimize}} &&\frac{1}{2}\norm{X-WH}_F^2\\
\text{subject to} && B=W, C=H\\
&& B,C\geq0,
\end{eqnarray}
\end{subequations}
where $W\in{\mbox{$\mathbf{R}$}}^{M\times K}$ and $H\in{\mbox{$\mathbf{R}$}}^{K\times N}$ are \textit{auxiliary variables} without \textit{nonnegative} constraints. The \textit{augmented Lagrangian function} is given by
\begin{multline}
\mathcal{L}(B,C,W,H;\Phi,\Psi)_\rho=\frac{1}{2}\norm{X-WH}_F^2+\pair{\Phi}{B-W}\\+\pair{\Psi}{C-H}+\frac{\rho}{2}\norm{B-W}_F^2 + \frac{\rho}{2}\norm{C-H}_F^2
\end{multline}
where $\Phi\in{\mbox{$\mathbf{R}$}}^{M\times K}$ and $\Psi\in{\mbox{$\mathbf{R}$}}^{K\times N}$ are \textit{Lagrangian multipliers}, $\pair{\cdot}{\cdot}$ is the matrix inner product, and $\rho>0$ is the penalty parameter for the equality constraints. By minimizing $\mathcal{L}$ with respect to $W$, $H$, $B$, and $C$ one at a time while fixing the rest, and then applying gradient ascent steps to $\Phi$ and $\Psi$, we obtain the update rules as follows
\begin{subequations}
\begin{align}
W &:= (XH^T+\Phi+\rho B)(HH^T+\rho I_K)^{-1}\\
H &:= (W^TW+\rho I_K)^{-1}(W^TX+\Psi+\rho C)\\
B &:= \left[W-\Phi/\rho\right]_+\\
C &:= \left[H-\Psi/\rho\right]_+\\
\Phi&:= \Phi+\rho(B-W)\\
\Psi&:= \Psi+\rho(C-H)
\end{align}
\end{subequations}
where $I_K\in{\mbox{$\mathbf{R}$}}^{K\times K}$ is the identity matrix. The auxiliary variables $W$ and $H$ facilitate the minimization steps for $B$ and $C$. When $\rho$ is small, however, the update rules for $W$ and $H$ result in unstable convergence \cite{sun2014alternating}; when $\rho$ is large, ADMM suffers from slow convergence. Hence, the selection of $\rho$ is significant in practice.
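The six update rules can be transcribed directly into NumPy. The sketch below is a toy illustration with our own choices of problem size, $\rho=1$, seed, and iteration count; on a small exactly rank-$K$ nonnegative matrix, the iteration typically drives the fit error well below its initial value while the projections keep $B$ and $C$ nonnegative.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K, rho = 10, 12, 3, 1.0
X = rng.random((M, K)) @ rng.random((K, N))  # exactly rank-K nonnegative data

B = rng.random((M, K))
C = rng.random((K, N))
W, H = B.copy(), C.copy()                    # auxiliary variables
Phi = np.zeros((M, K))                       # Lagrangian multipliers
Psi = np.zeros((K, N))
I = np.eye(K)

def obj(B, C):
    return 0.5 * np.linalg.norm(X - B @ C) ** 2

f0 = obj(B, C)
for _ in range(200):
    # unconstrained least-squares steps for the auxiliary variables
    W = (X @ H.T + Phi + rho * B) @ np.linalg.inv(H @ H.T + rho * I)
    H = np.linalg.inv(W.T @ W + rho * I) @ (W.T @ X + Psi + rho * C)
    # projections onto the nonnegative orthant
    B = np.maximum(W - Phi / rho, 0)
    C = np.maximum(H - Psi / rho, 0)
    # dual (multiplier) updates
    Phi += rho * (B - W)
    Psi += rho * (C - H)
f1 = obj(B, C)
assert f1 < f0
```

Rerunning the sketch with a much larger or much smaller $\rho$ illustrates the sensitivity to the penalty parameter discussed above.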
Analogous to HPC-ANLS, the update of $W$ and $B$ can be parallelized in a column-by-column manner, and the update of $H$ and $C$ in a row-by-row manner. Thus, Maxios \cite{du2014maxios} divides the matrices $W$ and $B$ into column blocks, and the matrices $H$ and $C$ into row blocks. However, the communication overhead is expensive since each factor update depends on the others: once a factor is updated, it has to be broadcast to all other computational nodes. As a consequence, Maxios theoretically requires eight communication steps per iteration and only works for sparse matrices. Table \ref{tab:an} summarizes the analysis.
\begin{table*}[th]
\resizebox{\textwidth}{!}{
\begin{tabular}{ |c|r|r|r|r| }
\hline
Algorithm & Runtime& Memory per processor & Communication time& Communication volume\\ \hline
HPC-ANLS & BPP & $\bigo{MN/(P_cP_r)+MK/P_r+NK/P_c}$ & $3(\alpha+\beta NK)\log P_r+3(\alpha+\beta MK)\log P_c$&$\bigo{MKP_c+NKP_r}$\\ \hline
DHALS & $\bigo{MNK(1/P_c+1/P_r)}$ & $\bigo{MN/(P_cP_r)+MK+NK}$ & $(\alpha+\beta NK)\log P_r+(\alpha+\beta MK)\log P_c$&$\bigo{MKP_c+NKP_r}$\\ \hline
Maxios & $\bigo{K^3+MNK/P}$ & $\bigo{MN}$ & $4(2\alpha + \beta (N+M)K)\log P$& $\bigo{(M+N)KP}$\\ \hline
DADMM & BPP & $\bigo{MN/P+MK}$ & $(\alpha + \beta MK)\log P$& $\bigo{MKP}$\\ \hline
DBCD & $\bigo{MNK/P}$ & $\bigo{MN/P+MK}$ & $K(\alpha + \beta MK)\log P$& $\bigo{MKP}$\\ \hline
DID & $\bigo{MNK/P}$ & $\bigo{MN/P+MK}$ & $(\alpha + \beta MK)\log P$ & $\bigo{MKP}$\\ \hline
\end{tabular}
}
\caption{Analysis of distributed algorithms per iteration on runtime, memory storage, and communication time and volume.}\label{tab:an}
\end{table*}
\section{Distributed ADMM}\label{sec:DADMM}
This section derives a \textit{distributed ADMM} (DADMM) for comparison purposes. DADMM is inspired by another centralized formulation in \cite{boyd2011distributed,hajinezhad2016nonnegative}, whose update rules can easily be carried out in parallel and which is stable even when $\rho$ is small.
As the objective function in \eqref{opt:nmf} is \textit{separable} in columns, we divide the matrices $X$ and $C$ into $P$ column blocks
\begin{equation}
\frac{1}{2}\norm{X-BC}_F^2=\sum_{i=1}^P \frac{1}{2}\norm{X_i-BC_i}_F^2,
\end{equation}
where $X_i\in {\mbox{$\mathbf{R}$}}_+^{M\times N_i}$ and $C_i\in {\mbox{$\mathbf{R}$}}_+^{K\times N_i}$ are column blocks of $X$ and $C$ such that $\sum_{i=1}^{P}N_i=N$.
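The column-wise separability can be verified directly (a small NumPy check on random data; the number of blocks $P$ and the block sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K, P = 6, 12, 3, 4
X = rng.random((M, N))
B = rng.random((M, K))
C = rng.random((K, N))

# Partition X and C into P column blocks (N_i columns each).
X_blocks = np.array_split(X, P, axis=1)
C_blocks = np.array_split(C, P, axis=1)

# The Frobenius objective decomposes over the column blocks.
block_obj = sum(0.5 * np.linalg.norm(Xi - B @ Ci)**2
                for Xi, Ci in zip(X_blocks, C_blocks))
full_obj = 0.5 * np.linalg.norm(X - B @ C)**2
```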
Using a set of auxiliary variables $Y_i\in{\mbox{$\mathbf{R}$}}^{M\times N_i}$, the NMF \eqref{opt:nmf} can be reformulated as
\begin{subequations}\label{opt:admm2}
\begin{eqnarray}
\underset{Y_i,B,C}{\text{minimize}} &&\sum_{i=1}^P\frac{1}{2}\norm{X_i-Y_i}_F^2\\
\text{subject to} && Y_i=BC_i,\quad\text{for } i=1,2,\cdots, P\\
&& B,C\geq0.
\end{eqnarray}
\end{subequations}
The associated augmented Lagrangian function is given by
\begin{multline}
\mathcal{L}_\rho(Y_i,B,C;\Lambda_i)=\sum_{i=1}^P\frac{1}{2}\norm{X_i-Y_i}_F^2\\+\sum_{i=1}^P\pair{\Lambda_i}{Y_i-BC_i}+\sum_{i=1}^P\frac{\rho}{2}\norm{Y_i-BC_i}_F^2,
\end{multline}
where $\Lambda_i\in{\mbox{$\mathbf{R}$}}^{M\times N_i}$ are the Lagrangian multipliers. The resulting ADMM is
\begin{subequations}
\begin{align}
Y_i&:=\mathop{\rm argmin}_{Y_i}\frac{1}{2}\norm{X_i-Y_i}_F^2+\frac{\rho}{2}\norm{\Lambda_i/\rho+Y_i-BC_i}_F^2\\
C_i&:=\mathop{\rm argmin}_{C_i\geq0}\norm{\Lambda_i/\rho+Y_i-BC_i}_F^2\\
B&:=\mathop{\rm argmin}_{B\geq 0}\norm{\Lambda/\rho + Y-BC}_F^2\label{up:B}\\
\Lambda_i&:=\Lambda_i+\rho(Y_i-BC_i)
\end{align}
\end{subequations}
where $\Lambda\triangleq [\Lambda_1\;\Lambda_2\cdots \Lambda_P]$ and $Y\triangleq [Y_1\;Y_2\cdots Y_P]$. Clearly, the $Y_i$ update has a closed-form solution, obtained by taking the derivative and setting it to zero, {\it i.e.},
\begin{align}
Y_i:=\frac{1}{1+\rho}(X_i -\Lambda_i + \rho BC_i)
\end{align}
Moreover, the updates for $Y_i$, $C_i$, and $\Lambda_i$ can be carried out in \textit{parallel}. Meanwhile, $B$ seemingly requires a central processor, since the step \eqref{up:B} involves the whole matrices $Y$, $C$, and $\Lambda$. If we use the solver BPP, however, we do not need to gather those matrices, because BPP does not require $Y$, $C$, and $\Lambda$ explicitly. Instead, it requires two intermediate variables $W\triangleq CC^T$ and $H\triangleq (\Lambda/\rho+Y)C^T$, which can be computed as follows:
\begin{subequations}
\begin{align}
W&\triangleq CC^T=\sum_{i=1}^P C_iC_i^T,\\
H&\triangleq (\Lambda/\rho+Y)C^T = \sum_{i=1}^P (\Lambda_i/\rho+Y_i)C_i^T.
\end{align}
\end{subequations}
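These block-wise sums can be checked with a small NumPy simulation (a sketch on random data; in the actual implementation the local partial sums would be combined with \textit{Allreduce} rather than a Python `sum`):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, K, P, rho = 6, 12, 3, 4, 1.0
C = rng.random((K, N))
Y = rng.random((M, N))
Lam = rng.random((M, N))   # Lagrangian multipliers, same shape as Y

C_blocks = np.array_split(C, P, axis=1)
Y_blocks = np.array_split(Y, P, axis=1)
L_blocks = np.array_split(Lam, P, axis=1)

# Each node forms its local partial sums; Allreduce adds them up.
W = sum(Ci @ Ci.T for Ci in C_blocks)
H = sum((Li / rho + Yi) @ Ci.T
        for Li, Yi, Ci in zip(L_blocks, Y_blocks, C_blocks))
```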
Clearly, these intermediate variables can be calculated distributively. Let $U_i = \Lambda_i/\rho$, which is called the \textit{scaled dual variable}. Using the scaled dual variables, we can express DADMM in a more efficient and compact way. A simple MPI implementation of DADMM on each computational node is summarized in Algorithm \ref{alg:DADMM}.
\begin{algorithm}
\KwIn{$X_i$, $C_i$, $B$}
\textbf{Initialize} $P$ processors, along with $Y_i$, $B$, $C_i$, $X_i$\\
\Repeat{stopping criteria satisfied}{
\nl $U_i:=U_i+(Y_i-BC_i)$\\
\nl $Y_i:=\frac{1}{1+\rho}(X_i-\rho U_i+\rho BC_i)$\\
\nl $C_i:=\mathop{\rm argmin}_{C_i\geq0}\norm{U_i+Y_i-BC_i}_2^2$\\
\nl $(W,H):=Allreduce(C_iC_i^T, (U_i+Y_i)C_i^T)$\label{DADMM:comm}\\
\nl $B:=\text{BPP}(W,H)$
}
\caption{DADMM for each computational node}\label{alg:DADMM}
\end{algorithm}
At line \ref{DADMM:comm} in Algorithm \ref{alg:DADMM}, a master processor would theoretically need to \textit{gather} $C_iC_i^T$ and $(U_i+Y_i)C_i^T$ from every local processor and then \textit{broadcast} the updated values of $CC^T$ and $(U+Y)C^T$ back, requiring $\bigo{MKP}$ storage on the master. Instead, we use the collective operation \textit{Allreduce} \cite{chan2007collective}, which eliminates the master processor and reduces the storage per processor to $\bigo{MK}$.
\section{Distributed Incremental Block Coordinate Descent}
The popularity of ADMM is due to its ability to carry out subproblems in parallel, as in DADMM (Algorithm \ref{alg:DADMM}). However, ADMM is computationally costly since it generally involves introducing auxiliary variables and updating dual variables. The cost is even higher because the subproblems must be solved to optimality to ensure convergence. In this section, we propose another distributed algorithm that adopts the block coordinate descent framework and computes approximate solutions at each iteration. Moreover, maintaining the current residual matrix facilitates the update of matrix $B$, so that the columns of $B$ can be updated incrementally.
\subsection{Distributed Block Coordinate Descent}
We first introduce a naive parallel and distributed algorithm inspired by HALS, called \textit{distributed block coordinate descent} (DBCD). Since the objective function in \eqref{opt:nmf} is separable, the matrix $X$ is partitioned by columns; each processor can then update its columns of $C$ in parallel and concurrently prepare the messages needed to update matrix $B$.
Analogous to DADMM, the objective function in \eqref{opt:nmf} can be expanded as follows
\begin{align*}
\resizebox{\hsize}{!}{$
\Big\|X-BC\Big\|_F^2 =\sum_{j=1}^{N} \Big\|x_j-Bc_j\Big\|^2=\sum_{j=1}^{N}\Big\|x_j-\sum_{k=1}^{K}b_kc_{kj}\Big\|^2
$}
\end{align*}
Under the coordinate descent framework, we consider one element at a time. To update $c_{ij}$, we fix the remaining variables as constants; the objective function then becomes
\begin{align}
\sum\nolimits_{j=1}^{N}\norm{x_j-\sum\nolimits_{k\neq i}b_kc_{kj}- b_ic_{ij}}^2.\label{obj:dbcd}
\end{align}
Taking the partial derivative of the objective function \eqref{obj:dbcd} with respect to $c_{ij}$ and setting it to zero, we have
\begin{align}
b_i^T\left(b_ic_{ij}- \left(x_j-\sum\nolimits_{k\neq i}b_kc_{kj}\right)\right) = 0.
\end{align}
The optimal solution of $c_{ij}$ can be easily derived in a closed form as follows
\begin{subequations}
\begin{align}
c_{ij} :=& \left[\frac{b_i^T(x_j-\sum_{k\neq i}b_kc_{kj})}{b_i^Tb_i}\right]_+\\
=&\left[\frac{b_i^T(x_j-Bc_j + b_ic_{ij})}{b_i^Tb_i}\right]_+\\
=&\left[c_{ij} + \frac{b_i^T(x_j-Bc_j)}{b_i^Tb_i}\right]_+\label{sol:c_ij}
\end{align}
\end{subequations}
Based on equation \eqref{sol:c_ij}, the entire $j$-th column of $C$ is required to update $c_{ij}$, so the entries of a column $c_j$ must be updated sequentially. However, different columns are independent, so the update can be executed in parallel over all $j$'s: the columns of matrix $C$ are updated independently, while each component within a column $c_j$ is optimized in sequence.
The complexity of updating each $c_{ij}$ is $\bigo{MK}$, so the entire complexity of updating matrix $C$ is $\bigo{MNK^2/P}$. This can be reduced by moving the computation of $x_j-Bc_j$ outside the loop, defining $e_j\triangleq x_j-Bc_j$. The improved update rule is
\begin{subequations}
\begin{align}
e_j &:= e_j + b_ic_{ij}\\
c_{ij} &:= \left[\frac{b_i^Te_j}{b_i^Tb_i}\right]_+\\
e_j &:= e_j - b_ic_{ij}
\end{align}\label{up:c}
\end{subequations}
By doing so, the complexity is reduced to $\bigo{MNK/P}$.
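The residual-maintained update \eqref{up:c} can be sketched as follows (a NumPy illustration for one column $c_j$ on random data; the loop invariant $e_j = x_j - Bc_j$ is what lowers the complexity):

```python
import numpy as np

rng = np.random.default_rng(3)
M, K = 6, 3
B = rng.random((M, K))
x_j = rng.random(M)
c_j = rng.random(K)

# Maintain the residual e_j = x_j - B c_j across coordinate updates.
e_j = x_j - B @ c_j
for i in range(K):
    b_i = B[:, i]
    e_j += b_i * c_j[i]                        # remove the i-th contribution
    c_j[i] = max(b_i @ e_j / (b_i @ b_i), 0.0) # projected closed-form step
    e_j -= b_i * c_j[i]                        # restore the residual
```

Each coordinate step now costs $\bigo{M}$ instead of $\bigo{MK}$, and the invariant holds again after the loop.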
An analogous derivation can be carried out to update the $i$-th column of matrix $B$, {\it i.e.}, $b_i$. Taking the partial derivative of the objective function \eqref{obj:dbcd} with respect to $b_i$ and setting it to zero, we obtain
\begin{align}
\sum_{j=1}^{N}\left(b_ic_{ij}-\left(x_j-\sum\nolimits_{k\neq i}b_kc_{kj}\right)\right)c_{ij}=0
\end{align}
Solving this linear equation gives us a closed form for the optimal solution of $b_i$
\begin{subequations}
\begin{align}
b_i:=&\left[\frac{\sum_{j=1}^{N}(x_j-Bc_j+b_ic_{ij})c_{ij}}{\sum_{j=1}^{N}c_{ij}^2}\right]_+\label{sol:b_i 2}\\
=&\left[b_i+\frac{\sum_{j=1}^{N}(x_j-Bc_j)c_{ij}}{\sum_{j=1}^{N}c_{ij}^2}\right]_+\label{sol:b_i 3}\\
=&\left[b_i + \frac{(X-BC)c_i^{rT}}{c_i^rc_i^{rT}}\right]_+\label{sol:b_i 4}
\end{align}
\end{subequations}
Unfortunately, $b_i$ cannot be updated in parallel directly, since equation \eqref{sol:b_i 4} involves the whole matrices $X$ and $C$. This is why sequential algorithms can easily be implemented in shared memory but cannot directly be applied in distributed memory. Thus, other works \cite{kannan2016high,zdunek2017distributed,du2014maxios} either use \textit{gather} operations to collect messages from local processors or assume the latent factors are small.
By analyzing equation \eqref{sol:b_i 2}, we discover the potential for parallelism. We define a vector $y_j$ and a scalar $z_j$ as follows
\begin{subequations}
\begin{align}
y_{j}&\triangleq (x_j-Bc_j+b_ic_{ij})c_{ij}=(e_j+b_ic_{ij})c_{ij}\\
z_j &\triangleq c_{ij}^2
\end{align}
\end{subequations}
The vector $y_j$ and scalar $z_j$ can be computed in parallel. After receiving the messages containing the $y_j$'s and $z_j$'s from the other processors, a master processor updates the column $b_i$ as a scaled summation of the $y_{j}$'s with the scalar $z \triangleq \sum_{j=1}^Nz_j$, that is,
\begin{align}
b_i := \left[y/z\right]_+
\end{align}
where $y \triangleq \sum\nolimits_{j=1}^{N}y_{j}$. Thus, the update for matrix $B$ can be executed in parallel, albeit indirectly. The complexity of updating $b_i$ is $\bigo{MN/P}$, as we retain the error vector $e_j$ and concurrently compute $y_j$ and $z_j$. The complexity of updating the entire matrix $B$ is $\bigo{MNK/P}$.
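The equivalence between the reduced message form $b_i := [y/z]_+$ and the closed form \eqref{sol:b_i 4} can be checked numerically (a sketch on random data; the per-column loop over $j$ plays the role of the per-processor computation, and the column index $i$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
M, N, K, i = 6, 12, 3, 1
X = rng.random((M, N))
B = rng.random((M, K))
C = rng.random((K, N))

E = X - B @ C  # residual matrix; column j is e_j
# Each processor computes its local y_j and z_j in parallel.
y = sum((E[:, j] + B[:, i] * C[i, j]) * C[i, j] for j in range(N))
z = sum(C[i, j]**2 for j in range(N))
b_new = np.maximum(y / z, 0.0)

# Equivalent closed form using the whole residual matrix.
c_i = C[i, :]
b_direct = np.maximum(B[:, i] + (E @ c_i) / (c_i @ c_i), 0.0)
```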
By partitioning the data matrix $X$ by columns, the update for matrix $C$ can be carried out in parallel. In addition, we identify the vectors $y_j$ and scalars $z_j$ needed to update matrix $B$, whose computation can be executed concurrently among computational nodes. An MPI implementation of this algorithm for each processor is summarized in Algorithm \ref{alg:DBCD}.
\begin{algorithm}
\KwIn{$x_j$, $c_j$, $B$}
\Repeat{stopping criteria satisfied}{
\tcp{Update $C$}
$e_j:= x_j-Bc_j$\\
\For {all $i\in\{1,2,\cdots,K\}$}{
Update $c_{ij}$ using equations \eqref{up:c}
}
\tcp{Update $B$}
\For{all $i\in\{1,2,\cdots,K\}$} {
$e_j=e_j + b_ic_{ij}$\\
$y_{j}=e_jc_{ij}$\\
$z_j = c_{ij}^2$\\
$(y,z)=Allreduce(y_{j}, z_{j})$\\
$b_i := \left[y/z\right]_+$\\
$e_j=e_j - b_ic_{ij}$
}
}
\caption{DBCD for each computational node}\label{alg:DBCD}
\end{algorithm}
\subsection{Incremental Update for $b_i$}
The complexity of algorithm DBCD is $\bigo{MNK/P}$ per iteration, which perfectly parallelizes a sequential block coordinate descent algorithm. However, the performance of DBCD can suffer from network delays. In principle, DBCD sends a total of $KP$ messages to a master processor per iteration, and even more if implemented using \textit{Allreduce}. Any delayed message can diminish performance. In contrast, the algorithm DID updates matrix $B$ incrementally using only a single message from each processor per iteration.
The bottleneck in updating matrix $B$ is the need to iteratively recompute $y_{j}$ and $z_j$ for each $b_i$: once $b_i$ is updated, the $y_{j}$ and $z_j$ must be recomputed because matrix $B$ has changed in equation \eqref{sol:b_i 3}. Nevertheless, we discovered that this change can be expressed through a few additional arithmetic operations, so we do not actually need to communicate for every $b_i$ update.
Suppose that after the $t$-th iteration, the $i$-th column of matrix $B$ is $b_i^t$, and we want to update it to $b_i^{t+1}$. Let $E=X-BC$ be the current residual matrix after the $t$-th iteration. From equation \eqref{sol:b_i 4}, we have
\begin{align}
b_i^{t+1} :=\left[b^t_i+\frac{Ec_i^{rT}}{c_i^rc_i^{rT}}\right]_+\label{eq:b_i^{t+1}}
\end{align}
Once we update $b_i^t$ to $b_i^{t+1}$, we need to update $b_i$ in matrix $B$ to obtain the new $E$ used to update the next column of $B$, {\it i.e.}, $b_{i+1}$. However, we do not need to recalculate $E$ from scratch. Instead, we can update its value by
\begin{align}
E := E + b_i^tc_i^r - b_i^{t+1}c_i^r
\end{align}
We define and compute a variable $\delta b_i$ as
\begin{align}
\delta b_i\triangleq b_i^{t+1}-b_i^t.
\end{align}
Using the vector $\delta b_i$, we have a compact form to update $E$
\begin{align}
E := E - \delta b_ic_i^r
\end{align}
The updated $E$ is substituted into the update rule of $b_{i+1}$ in equation \eqref{eq:b_i^{t+1}}, and using $b_{i+1}^t$ we obtain
\begin{subequations}
\begin{align}
b_{i+1}^{t+1} :=&\left[b^t_{i+1}+\frac{(E - \delta b_ic_i^r)c_{i+1}^{rT}}{c_{i+1}^rc_{i+1}^{rT}}\right]_+\\
=&\left[b^t_{i+1}+\frac{Ec_{i+1}^{rT}}{c_{i+1}^rc_{i+1}^{rT}} - \frac{c_i^rc_{i+1}^{rT}}{c_{i+1}^rc_{i+1}^{rT}} \delta b_i\right]_+\label{eq:b_i+1}
\end{align}
\end{subequations}
In equation \eqref{eq:b_i+1}, the first two terms are the same as the general update rule for matrix $B$ in DBCD, where $Ec_{i+1}^{rT}$ can be computed distributively on each computational node. The last term allows us to update the column $b_{i+1}$, still in closed form, without any additional communication step. Therefore, the update for matrix $B$ can be carried out incrementally, and the general update rule is given by
\begin{align}
b_{i}^{t+1} :=
&\left[b^t_{i}+\frac{Ec_i^{rT}}{c_{i}^rc_{i}^{rT}} - \frac{\sum_{k<i}(c_i^rc_k^{rT})\delta b_k}{c_{i}^rc_{i}^{rT}} \right]_+
\end{align}
Compared with the messages used in DBCD, {\it i.e.}, $(y_j, z_j)$, we additionally need the coefficients of the extra term, that is, $c_i^rc_k^{rT}$ for all $k<i$. Thus, a message communicated among processors contains two parts: the weighted current residual matrix $W_j$, and a lower triangular matrix $V_j$ maintaining the inner products of the rows of matrix $C$. The matrices $W_j$ and $V_j$ are defined as below
\begin{align}
W_j&\triangleq \begin{bmatrix}
\vert & \vert & \cdots & \vert\\
e_jc_{1j} & e_jc_{2j} & \cdots & e_jc_{Kj}\\
\vert & \vert & \cdots & \vert
\end{bmatrix}\label{W_j}
\\
V_j&\triangleq \begin{bmatrix}
c_{1j}^2 & 0 & 0 & \cdots & 0\\
c_{2j}c_{1j} & c_{2j}^2 & 0 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots & 0\\
c_{Kj}c_{1j} & c_{Kj}c_{2j} & c_{Kj}c_{3j} & \cdots &c_{Kj}^2
\end{bmatrix}\label{V_j}
\end{align}
Using variables $W_j$ and $V_j$, the update rule to columns of matrix $B$ becomes
\begin{align}
b_i := \left[b_i + w_i/v_{ii}-\sum_{k<i}(v_{ik}/v_{ii})\delta b_k\right]_+
\end{align}
where $w_i$ is the $i$-th column of matrix $W$, $v_{ik}$ is the entry in the $i$-th row and $k$-th column of matrix $V$, and the matrices $W$ and $V$ are the summations of the matrices $W_j$ and $V_j$, respectively, {\it i.e.}, $W \triangleq \sum\nolimits_{j=1}^{N}W_j$ and $V \triangleq \sum\nolimits_{j=1}^{N}V_j$.
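The following sketch (random data, single process) checks that this incremental update of $B$ reproduces the DBCD-style update that recomputes the residual after every column; here $W = EC^T$ and $V = CC^T$ stand in for the Allreduce-combined sums of the $W_j$'s and $V_j$'s:

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, K = 6, 12, 3
X = rng.random((M, N))
B = rng.random((M, K))
C = rng.random((K, N))

# One message per iteration: W's i-th column is E c_i^{rT}; V = C C^T.
E = X - B @ C
W = E @ C.T
V = C @ C.T

# DID: update the columns of B incrementally using only W, V, delta b.
B_did = B.copy()
delta = np.zeros((M, K))
for i in range(K):
    corr = sum(V[i, k] * delta[:, k] for k in range(i))
    b_new = np.maximum(B_did[:, i] + (W[:, i] - corr) / V[i, i], 0.0)
    delta[:, i] = b_new - B_did[:, i]
    B_did[:, i] = b_new

# Reference: recompute the residual after every column update (DBCD-style).
B_ref = B.copy()
for i in range(K):
    E_cur = X - B_ref @ C
    c_i = C[i, :]
    B_ref[:, i] = np.maximum(B_ref[:, i] + (E_cur @ c_i) / (c_i @ c_i), 0.0)
```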
Each processor stores a column of $X$, a column of $C$, and the matrix $B$, and all processors execute the same algorithm. An MPI implementation of this incremental algorithm for each computational node is summarized in Algorithm \ref{alg:DID}. Clearly, the total computation is unchanged and the message volume stays the same as DBCD, but the number of communication steps is reduced to one per iteration.
\begin{algorithm}
\KwIn{$x_j$, $c_j$, $B$}
\Repeat{stopping criteria satisfied}{
\tcp{Update $C$}
$e_j:= x_j-Bc_j$\\
\For {all $i\in\{1,2,\cdots,K\}$}{
Update $c_{ij}$ using equations \eqref{up:c}
}
Compute $W_j$ and $V_j$ from equations \eqref{W_j} and \eqref{V_j}.\\
$(W,V):=Allreduce(W_j, V_j)$\\
\tcp{Update $B$}
\For{all $i\in\{1,2,\cdots,K\}$} {
$b_i^{t+1} := \left[b_i^t + w_i/v_{ii}-\sum_{k<i}(v_{ik}/v_{ii})\delta b_k\right]_+$\\
$\delta b_i := b_{i}^{t+1}-b_i^t$
}
}
\caption{DID for each computational node}\label{alg:DID}
\end{algorithm}
\section{Experiments}
\begin{figure*}[ht]
\centering
\begin{minipage}{.3\linewidth}
\includegraphics[width=\textwidth]{100M-seq}
\caption*{(a) Sequential}
\end{minipage}%
\begin{minipage}{.3\linewidth}
\includegraphics[width=\textwidth]{100M-dist}
\caption*{(b) Distributed}
\end{minipage}
\begin{minipage}{.3\linewidth}
\includegraphics[width=\textwidth]{100M-bar}
\caption*{(c) Computation v.s. communication}
\end{minipage}
\caption{Convergence behaviors of different algorithms with respect to time consumption of communication and computation on the dataset with $N=10^8$ samples.}
\label{fig:conv}
\end{figure*}
\begin{table*}[ht]
\resizebox{\textwidth}{!}{
\centering
\begin{tabular}{|c|r|r|r|r|r|r|r|r||r|r|r|r|r|r|r|r|}
\hline
& \multicolumn{8}{|c||}{Number of iterations} & \multicolumn{8}{|c|}{Time (seconds)}\\ \hline
$N$& HALS& ANLS & ADMM & BCD &HPC-ANLS & DADMM & DBCD&\textbf{DID}& HALS& ANLS & ADMM & BCD & HPC-ANLS& DADMM&DBCD &\textbf{DID} \\ \hline
$10^5$&1281& \textbf{141} & 170 & 549 & \textbf{141} & 170& 549&549 & 16.88& 56.59 & 45.88 & 10.61 & 4.31 &3.46& 1.42& \textbf{1.17}\\ \hline
$10^6$&225& 238 & \textbf{115} & 396 & 238 & \textbf{115}&396& 396 & 36.86& 630.43 & 476.83 & 95.24 &50.47 & 37.06& 14.04& \textbf{8.61}\\ \hline
$10^7$&\textbf{596}& 1120 & 1191 & 654 & 1120 & 1191&654 &654 & 587.47& 29234.61 & 31798.51 & 909.76 & 2372.47& 2563.47& 126.01& \textbf{106.60} \\ \hline
$10^8$&339& 163 & \textbf{97} & 302 &163 & \textbf{97}&302& 302 & 3779.11& 43197.12 & 27590.16 & 8808.92 & 10172.55& 5742.37& 785.57&\textbf{610.09}\\ \hline \hline
MNIST&495 &\textbf{197} &199 &492 &\textbf{197} &\textbf{199} &492 &492& 705.32& 395.61& 610.65&942.68& \textbf{31.84} & 46.17&170.65 &133.50\\ \hline
20News&302& \textbf{169}& \textbf{169}& 231& \textbf{169}& \textbf{169}& 231& 231& 2550.02& 745.28& 714.61& 2681.49& \textbf{131.12}& 172.69& 651.52&559.70\\ \hline
UMist&677& 1001& 953& \textbf{622}& 1001& 953 & \textbf{622}& \textbf{622}& 314.72& 657.14& 836.76& 422.11& 492.72& 471.01& 92.49& \textbf{82.34}\\ \hline
YaleB&1001& 352& \textbf{224}& 765& 352& \textbf{224}& 765& 765& 223.58& 201.22& 149.35& 236.13& 50.69& 40.61& 44.08&\textbf{36.45}\\ \hline
\end{tabular}
}
\caption{Performance comparison for algorithms on synthetic and real datasets with $P=16$ number of computing nodes.}\label{tab:result}
\end{table*}
We conduct a series of numerical experiments to compare the proposed algorithm DID with HALS, ANLS, ADMM, BCD, DBCD, DADMM, and HPC-ANLS. The algorithm BCD is the sequential version of DBCD. Due to the poor convergence of ADMM and Maxios reported in \cite{zhang2010alternating,du2014maxios}, we derive DADMM in Section \ref{sec:DADMM} and set $\rho=1$ by default. Since we assume $M$ and $K$ are much smaller than $N$, HPC-ANLS only uses a column partition of the matrix $X$, {\it i.e.}, $P_c=P$ and $P_r=1$.
We use a cluster\footnote{http://www.hpc.iastate.edu/} that consists of 48 SuperMicro servers each with 16 cores, 64 GB of memory, GigE and QDR (40Gbit) InfiniBand interconnects. The algorithms are implemented in C code. The linear algebra operations use GNU Scientific Library (GSL) v2.4\footnote{http://www.gnu.org/software/gsl/} \cite{gough2009gnu}. The Message Passing Interface (MPI) implementation OpenMPI v2.1.0\footnote{https://www.open-mpi.org/} \cite{gabriel04:_open_mpi} is used for communication.
Note that we do not use multiple cores in each server. Instead, we use \textit{a single core per node}, so as to achieve consistent communication overhead between cores.
Synthetic datasets are generated with the number of samples $N=10^5,10^6,10^7$, and $10^8$. Due to the storage limits of the computer system we use, we set the dimension $M=5$ and the low rank $K=3$, and utilize $P=16$ computational nodes in the cluster. The random numbers in the synthetic datasets are generated by the Matlab command \texttt{rand} and are uniformly distributed in the interval $[0,1]$.
We also perform experimental comparisons on four real-world datasets. The MNIST dataset\footnote{http://yann.lecun.com/exdb/mnist/} of handwritten digits has 70,000 samples of 28x28 images. The 20News dataset\footnote{http://qwone.com/\texttildelow jason/20Newsgroups/} is a collection of 18,821 documents across 20 different newsgroups with totally 8,165 keywords. The UMist dataset\footnote{https://cs.nyu.edu/\texttildelow roweis/data.html} contains 575 images of 20 people with the size of 112x92. The YaleB dataset\footnote{http://www.cad.zju.edu.cn/home/dengcai/Data/FaceData.html} includes 2,414 images of 38 individuals with the size of 32x32. The MNIST and 20News datasets are sparse, while UMist and YaleB are dense.
The algorithms HALS, (D)BCD, and DID could fail if $\norm{b_i}$ or $\norm{c_i^r}$ is close to zero. This can happen if $B$ or $C$ is badly scaled, so that the entries of $E=X-BC$ are strictly negative. We avoid this issue by using well-scaled initial points for the synthetic datasets and the $K$-means method to generate the initial values for the real datasets. All the algorithms are provided with the same initial values.
When an iterative algorithm is executed in practice, a stopping criterion is required. In our experiments, the stopping criterion is met if the following condition is satisfied
\begin{align}
\norm{E^t}^2_F \leq \epsilon \norm{E^0}^2_F,
\end{align}
where $E^t$ is the residual matrix after the $t$-th iteration. Throughout the experiments, we set $\epsilon=10^{-6}$ by default. In addition, for the real datasets we combine the stopping criterion with \textit{a time limit} of $24$ hours and \textit{a maximum} of 1000 iterations. The experimental results are summarized in Table \ref{tab:result}.
\paragraph{Correctness}In principle, the algorithms HALS, (D)BCD, and DID have the same update rules for the latent factors $B$ and $C$. The difference is the update order. The algorithm DID has the exact same number of iterations as BCD and DBCD, which demonstrates the correctness of DID.
\paragraph{Efficiency}As presented in Table \ref{tab:result}, DID always converges faster than the other algorithms in terms of time. HALS and BCD usually take a similar number of iterations to reach the stopping criterion. ANLS and ADMM need far fewer iterations to converge, and thanks to its auxiliary variables, ADMM usually converges faster than ANLS. Figure \ref{fig:conv}(a) shows that, compared with HALS, BCD substantially reduces the objective value at the beginning but takes longer to finally converge. The same phenomenon can be observed in the comparison between ANLS and ADMM. In Figure \ref{fig:conv}(b), DID is faster than DBCD; the reason, shown in Figure \ref{fig:conv}(c), is that DID involves much less communication overhead than DBCD. Based on the results in Table \ref{tab:result}, DID is about 10--15\% faster than DBCD thanks to the incremental update of matrix $B$. (HPC-)ANLS works better on the MNIST and 20News datasets because these datasets are very sparse.
\paragraph{Scalability} As presented in Table \ref{tab:result}, the runtime of DID scales linearly as the number of samples increases, which is much better than the others. It usually achieves a speedup of at least a factor of $10$ over BCD using $16$ nodes. (D)ADMM is also linearly scalable and slightly better than (HPC-)ANLS, but due to its costly computation, (D)ADMM is not preferred for solving NMF problems.
\section{Conclusion}
In this paper, we proposed a novel distributed algorithm, DID, to solve NMF in a distributed memory architecture. Assuming the number of samples $N$ is huge, DID divides the matrices $X$ and $C$ into column blocks so that updating the matrix $C$ is perfectly distributed. Using the variables $\delta b_i$, the matrix $B$ can be updated distributively and incrementally, so that only a single communication step per iteration is required. The algorithm is implemented in C with OpenMPI. The numerical experiments demonstrated that DID converges faster than the other algorithms. As the updates only require basic matrix operations, DID achieves linear scalability, as observed in the experimental results. In future work, DID will be extended so that updating matrix $B$ is also carried out in parallel. Using the techniques introduced by \cite{hsieh2011fast} and \cite{gillis2012accelerated}, DID could potentially be accelerated further. How to better treat sparse datasets is also a potential research direction.
\clearpage
\bibliographystyle{aaai}
\section{Introduction}
A question of fundamental interest in the study of quantum gravity is: which low-energy gravitational effective field theories (EFTs) admit an ultraviolet (UV) completion? Said differently, what are the low-energy predictions of quantum gravity?
String theory provides a very large class of quantum gravities, with a wide variety of possible low-energy EFTs (see, e.g., \cite{Douglas:2004zg, Taylor:2015xtz}). But despite this enormous ``landscape'' of vacua, there is growing evidence that they all share identifiable common features, distinguishing them from the ``swampland''~\cite{Vafa:2005ui}: a large class of seemingly consistent gravitational EFTs with no quantum gravity UV completion. Thus, our original question becomes: how do we distinguish the swampland from the landscape?
There are several candidate criteria for fencing off parts of the swampland. In this paper, we focus on the ``Swampland Conjectures'' of~\cite{Vafa:2005ui,Ooguri:2006in} concerning the moduli space of a quantum gravity. We will apply ideas about effective field theory and the infrared emergence of weak coupling recently developed in~\cite{Heidenreich:2017sim} (see also~\cite{Harlow:2015lma}), where they were used to understand another candidate swampland criterion, the weak gravity conjecture (WGC)~\cite{ArkaniHamed:2006dz}. Other recent works on the Swampland Conjectures include~\cite{Baume:2016psm, Klaewer:2016kiy, Blumenhagen:2017cxt, Palti:2017elp, Hebecker:2017lxm}.
Controlled string compactifications typically have many ``moduli'': light scalars with gravitational-strength couplings. This is related to the fact that string theory has no continuous parameters, so any freely adjustable coupling must be controlled by the vacuum expectation value (vev) of a scalar field. In the simplest supersymmetric examples the moduli are exactly massless and parameterize a continuous moduli space of vacua. More realistic examples require a moduli potential with isolated minima---the vacua of the theory---but the only reliable way to generate vacua in the weak-coupling regime is through non-perturbative effects. Since these are exponentially small at weak coupling, the moduli masses are likewise exponentially suppressed relative to, e.g., the Kaluza-Klein (KK) scale or the string scale.
``Conjecture 0'' of~\cite{Ooguri:2006in} generalizes these observations to any quantum gravity:
\setcounter{conj}{-1}
\begin{conj}
Every continuous parameter in a quantum gravity is controlled by the vev of a scalar field.
\end{conj}
\noindent Thus, we expect moduli to be ubiquitous in the landscape, and it is natural to investigate the properties of the moduli space of a quantum gravity. In particular, we focus on two related conjectures from~\cite{Ooguri:2006in}:
\begin{conj}
The moduli space has infinite diameter (despite finite volume~\cite{Vafa:2005ui,Douglas:2005hq}).
\end{conj}
\begin{conj}
At large distances in moduli space, an infinite tower of resonances becomes light exponentially quickly with increasing distance.
\end{conj}
\noindent Here distances in moduli space are defined by geodesic distances with respect to the moduli space metric $g_{i j}(\phi)$:
\begin{equation}
\mathcal{L}_{\rm kin}^{(\mathrm{mod})} = \frac{1}{2} g_{i j}(\phi) \partial \phi^i \cdot \partial \phi^j.
\end{equation}
Implicit in these conjectures is the notion that parametrically distant points in moduli space correspond to weak-coupling limits.
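A standard illustration of how these notions fit together (schematic, with $\lambda$ an order-one constant depending on the dimension; not tied to a particular construction discussed here) is a Kaluza--Klein tower on a circle of radius $R$:

```latex
\mathcal{L}_{\rm kin} \propto \frac{(\partial R)^2}{R^2}
\quad\Longrightarrow\quad
\phi \propto \log R ,
\qquad
m_n^{\rm KK} = \frac{n}{R} \sim n\, e^{-\lambda \phi} .
```

The decompactification point $R \to \infty$ therefore lies at infinite geodesic distance, and the KK tower becomes light exponentially quickly in that distance, in accordance with Conjectures 1 and 2.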
In this paper, we explore the connection between weak coupling, a tower of light resonances, and large distances in moduli space. Central to our arguments is the assumption, developed in~\cite{Heidenreich:2017sim}, that weak coupling in quantum gravities is a long-distance phenomenon, with all physics becoming strongly coupled in the ultraviolet at a common ``quantum gravity scale.'' This occurs in many concrete quantum gravities, such as large volume compactifications of M-theory. While there are possible counterexamples as well (see~\cite{Heidenreich:2017sim}), a suitable generalization of this notion may address them, and it is worthwhile to understand the consequences regardless.
This assumption clarifies the connection between light resonances and weak coupling. For example, light charged resonances screen gauge forces, leading to weak gauge couplings in the infrared: this is essentially the mechanism of~\cite{Heidenreich:2017sim}.
We now explain the connection between light resonances and parametrically large distances in moduli space, applying the same assumption. Applying similar reasoning, we further consider the effect of moving a large but bounded distance in moduli space, with particular attention to the case of an axion with a transplanckian decay constant.\footnote{While this paper was in preparation (following a strategy sketched in the conclusions of \cite{Heidenreich:2017sim}), we became aware of independent work~\cite{PaltiValenzuela} with some overlapping results.}
\section{Light resonances and large distances}
Motivated by Conjecture 2, we consider a trajectory in moduli space approaching a singular point where an infinite tower of resonances becomes massless. This trajectory is parameterized by a scalar field $\phi$; we take the singular point to be $\phi = 0$ without loss of generality. We assume that the spectrum is dominated by an infinite tower of particles that become \emph{uniformly} light as $\phi\to0$, so
\begin{equation}
m_n \approx \mu_n \phi + {\cal O}(\phi^2)
\label{eq:tower}
\end{equation}
after an appropriate redefinition of $\phi$.
In making this ansatz, we are no longer free to assume that $\phi$ has a canonical kinetic term.
The ansatz~\eqref{eq:tower} is quite general. For instance, suppose there are multiple towers with masses trending to zero at different rates. Then, by moving far enough in the moduli space, we can focus on the tower that becomes light fastest and parameterize the modulus $\phi$ so that this tower becomes light in linear fashion. Thus, aside from being justified by specific examples, we believe that \eqref{eq:tower} captures the most general intended meaning of ``tower'' in Conjecture 2.
For concreteness we take the particles in the tower to be Dirac fermions, though similar results apply for other spins. The Lagrangian is then
\begin{equation}
{\cal L} = \frac{1}{2}K(\phi)(\partial \phi)^2 - V(\phi) + \sum_n \left[\overline{\psi}_n \left({\rm i} \slashed{\partial} - \mu_n \phi\right)\psi_n\right].
\label{eq:Lfermions}
\end{equation}
Fermion loops will correct the $\phi$ propagator as well as $S$-matrix elements for scattering of $\phi$ particles. At some energy scale $\Lambda$ these loop corrections become as large as the tree-level contribution and the EFT breaks down. We follow the approach of \cite{Heidenreich:2017sim} to ascertain the scale $\Lambda$, with the new ingredient that $\Lambda(\phi)$ depends on the value of the modulus.\footnote{We work in Einstein frame, holding the $D$-dimensional Newton's constant fixed as $\phi$ varies. Alternatively, one could hold the cutoff $\Lambda$ fixed and view the strength of gravity as varying across the moduli space, which may be more natural when $\phi \to 0$ corresponds to a decompactification limit.}
We now compute the loop correction to the $\phi$ propagator from the tower of fermions, perturbing around a fixed expectation value $\langle\phi\rangle = \phi_0$.\footnote{For general $V(\phi)$ and general $\phi_0$ there is a tadpole and the field will begin to roll. Implicit in the notion of a modulus is that the potential is relatively flat, hence there is a finite time scale over which our computation of scattering with fixed $\langle\phi\rangle = \phi_0$ makes sense. We comment further on the flatness of the potential below.} The canonically normalized fluctuation $\overline{\phi} := \sqrt{K(\phi_0)}(\phi - \phi_0)$ has propagator
\begin{equation}
\langle \widetilde{\overline{\phi}} (p) \widetilde{\overline{\phi}} (- p) \rangle \sim \frac{1}{p^2 - m_\phi^2 + {\rm i}
\varepsilon} \frac{1}{1 + \Pi (p^2)} .
\end{equation}
By adding appropriate counterterms, we can choose the
renormalization condition $\Pi (0) = 0$ (up to possible infrared divergences if there is a massless particle coupling to $\phi$).
Parametrically, the large-$p$ behavior of the contribution to the one-loop integral from fermion $n$ is
\begin{equation}
\left| \Pi_n (p^2 \gg m_n^2) \right| \sim \frac{\mu_n^2}{K(\phi_0)} p^{D - 4} \sim \frac{1}{K(\phi_0)} \biggl(\frac{\partial m_n}{\partial \phi}\biggr)^2 p^{D - 4}.
\end{equation}
To assess the strong coupling scale, we define a function $\lambda_\phi(p)$ that captures the parametric contribution to $\Pi(p^2)$ from the sum over {\em only} those particles with mass less than $p$. We have
\begin{equation}
\lambda_{\phi} (p) := \frac{p^{D - 4}}{K (\phi_0)} \sum_{n \, | \, m_n(\phi_0) < p}
\left( \frac{\partial m_n}{\partial \phi} \right)^2.
\label{eq:lambdadef}
\end{equation}
From this expression one can read off the strong coupling scale $\Lambda(\phi_0)$ as the value of $p$ where $\lambda_\phi(p) \sim 1$. The result clearly depends on the unknown function $K(\phi_0)$.
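As a concrete illustration of how \eqref{eq:lambdadef} determines the cutoff, the following numerical sketch bisects for the scale where $\lambda_\phi \sim 1$, assuming an evenly spaced toy tower $m_n = n\mu\phi_0$ in $D=4$ with hypothetical values of $\mu$ and $K$ (none of these numbers are drawn from the text):

```python
# Toy numerical sketch (assumed spectrum and parameters): locate the
# strong-coupling scale Lambda(phi0), i.e. the momentum where
# lambda_phi(p) ~ 1, for an evenly spaced tower m_n = n * mu * phi0 in D = 4.

def lambda_phi(p, phi0, mu=1.0, K=1.0e4, D=4):
    """Parametric loop-expansion parameter of eq. (lambdadef)."""
    n_max = int(p / (mu * phi0))               # states with m_n(phi0) < p
    # each fermion contributes (dm_n/dphi)^2 = (n * mu)^2
    return p ** (D - 4) / K * sum((n * mu) ** 2 for n in range(1, n_max + 1))

def strong_coupling_scale(phi0, mu=1.0, K=1.0e4, D=4, tol=1e-4):
    """Bisect for the smallest p with lambda_phi(p) >= 1."""
    lo, hi = mu * phi0, 1.0e3
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lambda_phi(mid, phi0, mu, K, D) >= 1.0:
            hi = mid
        else:
            lo = mid
    return hi
```

For this linear tower the resulting cutoff scales linearly with $\phi_0$, so the EFT breaks down at an ever-lower scale as the singular point is approached, exactly the dependence on the modulus noted above.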
\section{Strong coupling and gravity}
The loop corrections from the tower of light particles also affect the graviton propagator, leading to strong coupling \cite{Dvali:2007hz,Dvali:2007wp}. These loop corrections are parametrically controlled by
\begin{equation}
\lambda_{\rm grav}(p) := G_N \,p^{D-2} N(p),
\end{equation}
where $N(p) = \sum_{n \, | \, m_n(\phi_0) < p} 1$ is the number of weakly coupled particles with mass below $p$.
Consider the ``species bound'' scale $\Lambda_{\rm QG}(\phi_0)$ at which $\lambda_{\rm grav}(p) \sim 1$:
\begin{equation}
\Lambda^{D-2}_{\rm QG} = \frac{1}{G_N N(\Lambda_{\rm QG})}.
\label{eq:speciesbound}
\end{equation}
In terms of this scale we have
\begin{equation}
\lambda_\phi(\Lambda_{\rm QG}) = \frac{1}{\Lambda_{\rm QG}^2 K(\phi_0) G_N} \left<\mu^2\right>_{\Lambda_{\rm QG}},
\label{eq:lambdaphi}
\end{equation}
where
\begin{align}
\left<\mu^2\right>_\Lambda &:= \frac{1}{N(\Lambda)} \sum_{n\, |\, m_n(\phi_0) < \Lambda} \left(\frac{\partial m_n}{\partial \phi}\right)^2 \nonumber \\
&\approx \frac{1}{N(\Lambda) \phi^2} \sum_{n\, |\, m_n(\phi_0) < \Lambda} m_n^2 \sim \frac{\Lambda^2}{\phi^2}.
\end{align}
In the last line we have made a mild but crucial assumption that most of the particles in the tower lie near the cutoff, as in any tower with an increasing density of states $d N/d p$, or with a power-law (increasing or decreasing) density of states.
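The estimate in the last line can be checked numerically in a toy example, again assuming an evenly spaced tower $m_n = n\mu_0\phi$ with illustrative parameter values:

```python
# Numerical check (toy parameters, not from the text) of the estimate
# <mu^2>_Lambda ~ Lambda^2 / phi^2 for an evenly spaced tower
# m_n = n * mu0 * phi below the cutoff Lambda.

def mu_sq_avg(Lambda, phi, mu0=1.0):
    N = int(Lambda / (mu0 * phi))              # states with m_n < Lambda
    # average of (dm_n/dphi)^2 = (n * mu0)^2 over the N light states
    return sum((n * mu0) ** 2 for n in range(1, N + 1)) / N

phi, Lambda = 0.05, 10.0
ratio = mu_sq_avg(Lambda, phi) * phi ** 2 / Lambda ** 2
# ratio is O(1) (close to 1/3 for this tower), as most states sit near Lambda
```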
Given this assumption, if we demand that the modulus becomes strongly coupled at the ``quantum gravity scale'' $\Lambda_{\rm QG}$ in \eqref{eq:speciesbound}, we constrain the form of the kinetic term:
\begin{equation}
\lambda_\phi(\Lambda_{\rm QG}) \sim 1 \Rightarrow K(\phi_0) \sim \frac{1}{G_N \phi_0^2}.
\end{equation}
That is, for a wide variety of spectra for the tower of states that become light as $\phi \to 0$, the condition that the modulus becomes strongly coupled at the quantum gravity scale fixes the kinetic term of the modulus to be
\begin{equation}
{\cal L}_{\rm kin} \sim M_{\rm Pl}^{D-2} \frac{1}{\phi^2} (\partial \phi)^2.
\label{eq:finalkinetic}
\end{equation}
In particular, distances in field space grow logarithmically with the value of $\phi$. Equivalently, the particles in the tower become exponentially light in the field-space distance, precisely as required by Conjecture 2. Slightly different calculations give the same results for a tower of scalars;\footnote{There are some subtleties related to counterterm contributions in $D = 4$, which however do not change the result.} importantly, unlike mass corrections, these kinetic corrections do not cancel between scalars and fermions in supersymmetric theories.
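Explicitly, the kinetic term \eqref{eq:finalkinetic} gives a proper field-space distance
\begin{equation}
d(\phi_0, \phi_1) = \int_{\phi_0}^{\phi_1} \sqrt{K(\phi)}\, d\phi \sim M_{\rm Pl}^{(D-2)/2} \ln \frac{\phi_1}{\phi_0},
\end{equation}
so that, holding $\phi_1$ fixed, the tower masses $m_n \approx \mu_n \phi_0$ fall off as $e^{-d/M_{\rm Pl}^{(D-2)/2}}$ as $\phi_0 \to 0$.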
Under similar assumptions about the density of states within the tower, we obtain the energy-dependent statement $\sum_{n\, |\, m_n(\phi_0) < p} m_n^2 \sim N(p)\, p^2$ for any $p \gg p_0$, with $p_0$ some characteristic scale (e.g., the lowest mass threshold in the tower).
Applying \eqref{eq:finalkinetic}, we find
\begin{equation}
\lambda_\phi(p) \sim G_N N(p) p^{D-2} \sim \lambda_{\rm grav}(p), \quad p\gg p_0.
\end{equation}
This can be interpreted as a type of ``unification'' of the strengths of loop effects for gravity and for the modulus at energies above $p_0$, similar to the unification of gauge and gravity loops discussed in \cite{Heidenreich:2017sim}.
Note that \eqref{eq:finalkinetic} implies that the modulus has gravitational-strength couplings to the particles in the tower; this is a well-known characteristic of moduli, here a natural consequence of our assumptions.
\section{Transplanckian distances and EFT}
Conjecture 2 implies that a single EFT cannot describe parametrically large distances in moduli space, since a parametrically large number of massive particles inevitably become light, and we need UV information to determine their properties.
However, the conjecture places no restrictions on traversals that are larger than $M_{\rm Pl}$, but not parametrically large. For instance, there are examples in which the density of states is essentially constant along a trajectory of super-Planckian length.\footnote{Such a trajectory is not necessarily a geodesic in moduli space~\cite{Hebecker:2017lxm}.} This typically occurs when the modulus in question is an axion, consistent with Conjecture 2 because the compact field space prevents an arbitrarily large excursion. While axion excursions do not lead to a tower of particles becoming light, we still expect a tower of particles with masses that change as the axion expectation value varies. When the axion field completes a full circuit, the spectrum must return to where it started, perhaps with a nontrivial monodromy.
Motivated by axions, we reconsider the effect of a tower of particles with masses $m_n(\phi)$ that depend on a modulus $\phi$, dropping the assumption that the masses vanish as $\phi \rightarrow 0$. By the same analysis as above, if the modulus becomes strongly coupled at the quantum gravity scale $\Lambda_{\rm QG}$, i.e., if $\lambda_{\phi}(\Lambda_{\rm QG}) \sim 1$, then the modulus kinetic term is approximately
\begin{align}
K(\phi) \sim \Lambda_{\rm QG}(\phi)^{D-4} \sum_{n \, | \, m_n(\phi) < \Lambda_{\rm QG}(\phi)}
\left( \frac{\partial m_n}{\partial \phi} \right)^2.
\label{eq:axionK}
\end{align}
While in general both $K(\phi)$ and $\Lambda_{\rm QG}(\phi)$ will depend on the modulus, if $\phi$ is an axion then typically both are approximately constant---roughly speaking because the axion has an approximate shift symmetry. In this case, by averaging over all particles with mass below the cutoff, we obtain
\begin{align}
\Lambda_{\rm QG}^{D-4} N(\Lambda_{\rm QG}) \left< \left(\frac{\partial m_n}{\partial \phi}\right)^2 \right> \sim K.
\end{align}
We can eliminate $N(\Lambda_{\rm QG})$ from this expression using the species bound (\ref{eq:speciesbound}). Under a traversal $\phi \rightarrow \phi+ \Delta \phi$, we find\footnote{Here we assume that the spectrum is not rapidly oscillating as a function of $\phi$ in order to approximate $\frac{\partial m_n}{\partial \phi}\sim \frac{\Delta m_n}{\Delta \phi}$.}
\begin{equation} \label{eq:axiontraversal}
\langle (\Delta m_n)^2 \rangle \sim \frac{(\Delta \phi)^2 K}{M_{\rm Pl}^{D-2}} \Lambda_{\rm QG}^2.
\end{equation}
Since $\Delta \phi \sqrt{K}/M_{\rm Pl}^{(D-2)/2}$ is the length of the traversal in Planck units, we learn that as $\phi$ rolls a Planckian distance in field space, the typical massive mode shifts by an amount of order the quantum gravity scale $\Lambda_{\rm QG}$.
This is reminiscent of a phenomenon observed in \cite{Heidenreich:2015wga}: in certain models of large-field axion inflation (including some models of decay constant alignment \cite{kim:2004rp} and axion monodromy \cite{mcallister:2008hb,silverstein:2008sg}), modes that begin above the UV cutoff become very light as the inflaton rolls. However, \eqref{eq:axiontraversal} does not necessarily imply that modes above $\Lambda_{\rm QG}$ become very light under a super-Planckian traversal $\Delta \phi \sqrt{K} > M_{\rm Pl}^{(D-2)/2}$: a mode with mass of order $\Lambda_{\rm QG}$ generically acquires a different mass of order $\Lambda_{\rm QG}$. On the other hand, \eqref{eq:axiontraversal} does imply that generically an order-one fraction of the modes will pass through the cutoff during this traversal.
To illustrate these points, consider an axion $\theta$ arising from compactification of a $5$-dimensional $U(1)$ gauge theory with coupling constant $e_5$ on a circle of radius $R$. Consistent with recent ideas about the WGC~\cite{Heidenreich:2015nta,Heidenreich:2016aqi,Montero:2016tif,Andriolo:2018lvp} and the analysis of~\cite{Heidenreich:2017sim}, we assume the 5d theory has a tower of near-extremal charged particles, which leads to a 4d KK spectrum of the form
\begin{align}
m_{n_1,n_2}^2 \sim \left( n_1^2 e_1^2 + e_2^2 (n_2 - n_1 \theta)^2 \right) M_{\rm Pl}^2.
\label{eq:axionmass}
\end{align}
Here $e_1^2 = e_5^2/(2 \pi R)$ and $e_2^2 = 2/(R M_{\rm Pl})^2$ are the 4d gauge couplings and $\theta \cong \theta +1$ is the axion. Assuming a sufficiently large number of modes charged under each $U(1)$ lie below the cutoff, we can approximate the sums over $n_1$ and $n_2$ in (\ref{eq:axionK}) as integrals, giving
\begin{align}
(2 \pi f)^2 := K \sim (e_2/e_1)^2 M_{\rm Pl}^2,
\end{align}
in agreement with tree-level dimensional reduction, where we express the result in terms of the ``axion decay constant'' $f$.
Assuming a particular mode has mass $m_{n_1,n_2} \sim \Lambda_{\rm QG}$ when $\theta =0$, what is the lightest this mode can become under $\theta \rightarrow \theta + \Delta \theta$? The ideal situation occurs when $n_2 \approx n_1 \Delta \theta$, so that the second term in the mass formula (\ref{eq:axionmass}) becomes negligible after the shift. Setting $m_{n_1,n_2}^2 \sim e_2^2 n_2^2 M_{\rm Pl}^2 \sim \Lambda_{\rm QG}^2$ at $\theta=0$, we find a final mass of
\begin{align}
m_f^2 \sim e_1^2 n_1^2 M_{\rm Pl}^2 \sim \left( \frac{M_{\rm Pl}}{2 \pi f \Delta \theta}\right)^2 \Lambda_{\rm QG}^2 .
\end{align}
Thus, the mass of the lightest mode that begins above the cutoff $\Lambda_{\rm QG}$ is inversely proportional to the axion shift in Planck units. For a modest super-Planckian traversal $2 \pi f \Delta \theta \sim \mbox{10--100}\ M_{\rm Pl}$ (as required for large-field axion inflation), this mass is not very light.
To estimate the number of modes passing through the cutoff as $\theta \to \theta+ \Delta \theta$, note that for fixed $n_1$, $2n_1$ modes pass upwards/downwards through the cutoff over a full period, $\Delta \theta = 1$. Thus, the total number of modes passing through the cutoff is $\Delta N \sim \Delta \theta \sum_{n_1} 2 n_1 \sim \Delta\theta (\Lambda_{\rm QG}/e_1 M_{\rm Pl})^2$. Comparing with the total number of light modes, we obtain
\begin{equation}
\frac{\Delta N}{N} \sim \frac{2 \pi f \Delta \theta}{M_{\rm Pl}}.
\end{equation}
Thus, for a Planckian field traversal, an order-one fraction of the modes will pass through the cutoff, and for a larger field traversal almost all the modes will be recycled.
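The counting above can be reproduced numerically in this toy spectrum. The couplings and cutoff below are hypothetical illustrative values (in units $M_{\rm Pl}=1$), not drawn from any specific compactification:

```python
# Toy check of how many modes of the spectrum
#   m^2 = n1^2 e1^2 + e2^2 (n2 - n1*theta)^2
# cross the cutoff when the axion shifts by Delta_theta, compared with the
# parametric prediction Delta_N / N ~ 2*pi*f*Delta_theta / M_Pl ~ (e2/e1)*Delta_theta.
import math

e1, e2, cutoff = 0.1, 0.02, 0.5    # hypothetical values, so 2*pi*f ~ e2/e1 = 0.2

def modes_below_cutoff(theta):
    """All (n1, n2) with m(n1, n2; theta) < cutoff."""
    modes = set()
    for n1 in range(-int(cutoff / e1) - 1, int(cutoff / e1) + 2):
        r2 = cutoff ** 2 - (n1 * e1) ** 2
        if r2 <= 0:
            continue
        half = math.sqrt(r2) / e2          # half-width of the allowed n2 window
        lo = math.ceil(n1 * theta - half)
        hi = math.floor(n1 * theta + half)
        modes.update((n1, n2) for n2 in range(lo, hi + 1))
    return modes

dtheta = 1.0
start = modes_below_cutoff(0.0)
end = modes_below_cutoff(dtheta)
frac = len(start ^ end) / len(start)       # fraction that crossed the cutoff
predicted = (e2 / e1) * dtheta
```

For these values the counted fraction agrees with the parametric estimate up to an $O(1)$ factor, as expected.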
What does this mean for EFT and axion inflation? On the one hand, we could integrate out all the modes with mass $e_1 M_{\rm Pl}$ or above, obtaining an EFT with a lower cutoff but with no apparent drama as the axion traverses a super-Planckian distance. On the other hand, if we wish to compute the axion potential using EFT then we cannot take this approach, since in this example the axion potential is generated by the Casimir energy of charged particles in the 5d parent theory~\cite{Hosotani:1983xw,Cheng:2002iz,ArkaniHamed:2003wu}, whose masses start at $e_1 M_{\rm Pl}$. If we raise the cutoff to include some of these charged particles in the EFT then we once again face the twin issues of modes emerging from the cutoff and becoming light and a large fraction of the modes passing through the cutoff during the axion traversal.\footnote{In the language of 5d EFT, these issues are related to the difficulty of imposing a gauge-invariant cutoff on loops of charged particles.} Both issues suggest that an EFT incorporating these modes is not well controlled for a super-Planckian field excursion in the absence of additional UV input.
Thus, there are potential subtleties in using an EFT (even a string theory-derived EFT) to compute the potential of an axion over a very super-Planckian field range. Our arguments suggest that these subtleties extend beyond the extra-natural context explored in~\cite{Heidenreich:2015wga} and to other moduli besides axions. It remains to be seen whether these subtleties are of critical importance in candidate large field models.
\section{Modulus Potential}
Above we assumed a light modulus that can be described within EFT.
We should test this assumption: supersymmetric theories can have exactly massless moduli, but more generally, do loops tend to give moduli a large mass? If we sum up the loop corrections to a modulus from a tower of fermions, we find power-divergent diagrams. Cutting these off at $\Lambda_{\rm QG}$, we have
\begin{equation}
\delta m_{\rm mod}^2 \sim\!\!\!\! \sum_{n\, | \, m_n < \Lambda_{\rm QG}}\! \frac{m_n^2}{M_{\rm Pl}^{D-2}} \Lambda_{\rm QG}^{D-2} \sim \frac{N(\Lambda_{\rm QG}) \Lambda_{\rm QG}^D}{M_{\rm Pl}^{D-2}} \sim \Lambda_{\rm QG}^2.
\end{equation}
However, in the presence of supersymmetry, contributions from scalars and their fermionic partners approximately cancel to leave a remainder of order the average SUSY-breaking splitting within the tower of states:
\begin{equation}
\left.\delta m_{\rm mod}^2\right|_{\rm SUSY} \sim \left<m_{\rm boson}^2 - m_{\rm fermion}^2\right>.
\end{equation}
In the absence of supersymmetry, loop corrections will drive the modulus mass to the cutoff $\Lambda_{\rm QG}$. This must be reconciled with examples, like compactification of a nonsupersymmetric theory on a circle, in which we know that a radion modulus exists with a controlled (Casimir) potential. The resolution of this puzzle is that tuning away the cosmological constant in the parent theory in turn fine-tunes the modulus potential of the lower-dimensional theory; a c.c.~of the naive size in the parent theory becomes precisely a modulus potential with curvature $\Lambda_{\rm QG}^2$ in the daughter theory.
In supersymmetric theories, the naive expectation is that moduli masses are of order the gravitino mass. It is known that no-scale structure allows lighter moduli \cite{Cremmer:1983bf, Luty:1999cz, Balasubramanian:2005zx}. For consistency with the estimate above, this means that the tower of states with masses controlled by the no-scale modulus should {\em also} have small SUSY breaking. We can check this explicitly in theories where no-scale structure arises from the overall volume modulus of a compactification, which happens in 4d theories when reducing from 5d or 10d Type IIB supergravity \cite{Reece:2015qbf}. No-scale structure is manifest in the ``keinstein'' frame where the modulus chiral superfield $\bm{T}$ obtains a kinetic term entirely by mixing with the graviton, appearing as $\int d^4 \theta M_*^3 \bm{\Phi^\dagger \Phi} (\bm{T^\dagger} + \bm{T})$. Here $\bm{\Phi}$ is the conformal compensator chiral superfield and the linear coupling to $\bm{T}$ naturally produces $|F_\Phi / \Phi| \ll m_{3/2}$. One can show that in keinstein frame, a field $\bm{\chi}$ propagating in the bulk in the higher dimensional theory has KK modes in the 4d theory with K\"ahler potential $\int d^4 \theta \bm{\Phi^\dagger \Phi} (\bm{T^\dagger} + \bm{T}) \bm{\chi_n^\dagger \chi_n}$, giving rise to soft masses $\widetilde{m}_{\chi_n}^2 \sim |F_\Phi/\Phi| |F_T/T| \sim m_{3/2} |F_\Phi / \Phi| \ll m_{3/2}^2$. In other words, no-scale protection from large SUSY breaking naturally extends to the Kaluza-Klein modes of bulk fields. This shows that no-scale structure is consistent with our simple estimate of summing loop corrections from the KK tower.
\section{Conclusions}
The Sublattice WGC \cite{Heidenreich:2015nta, Heidenreich:2016aqi, Montero:2016tif} and the recently proposed Tower WGC \cite{Andriolo:2018lvp}---strengthened versions of the Weak Gravity Conjecture motivated by dimensional reduction and evidence from string theory---require an infinite tower of light particles, closely linking them to the Swampland Conjectures. In some cases, the connection is direct: the gauge coupling $g$ is related to the vev of a scalar modulus and the $g \to 0$ limit brings down a single tower of light charged particles, satisfying both conjectures.
In other cases the Swampland Conjectures strengthen the Sublattice WGC by demanding that a tower of particles can be accessed within EFT. For instance, in \cite{Heidenreich:2017sim} we observed that in some examples the WGC tower is predicted to lie above $\Lambda_{\rm QG}$. A concrete case is an approximately isotropic 4d compactification of Type IIB string theory with gauge fields on D7 branes. The gauge coupling $g \sim 1/(M_s R)^2$, so the WGC tower is at $g M_{\rm Pl}$; the string scale is lower, at $g^{3/2} M_{\rm Pl}$; but the Kaluza-Klein tower is lower still, at $g^2 M_{\rm Pl}$. The WGC tower is outside the low-energy EFT, but the KK tower is not. It accounts for the infinite distance in moduli space as $g \to 0$ and the generation of strongly coupled gravity via the species bound. This phenomenon, with multiple towers of particles becoming light at different rates as one moves to infinity in moduli space, can arise in a variety of examples.
We have argued that the assumption of a universal strong coupling scale for fields in quantum gravity can serve as a more fundamental replacement for some of the Swampland Conjectures. It is still important to put the most basic aspects of these conjectures on a firmer footing: can we prove rigorously that moduli exist and that large-moduli limits always send infinite towers of particles to zero mass? These basic assumptions have a similar flavor to the statement that quantum gravity theories have no global symmetries, and deserve more attention.
\begin{acknowledgments}
\vspace*{1cm}
{\bf Acknowledgments.} The research of BH was supported by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development, and by the Province of Ontario through the Ministry of Research, Innovation and Science. MR is supported in part by the DOE Grant {DE-SC}0013607 and the NASA ATP Grant NNX16AI12G. TR is supported by the Carl P. Feinberg Founders' Circle Membership and the NSF Grant PHY-1314311.
\end{acknowledgments}
\section{Introduction}
\label{sect:intro}
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST, also named the “Guo-Shou-Jing Telescope”) is a meridian reflecting Schmidt telescope of Wang-Su type (~\cite{cui2012}). LAMOST, as a Chinese national scientific research facility, is operated by the National Astronomical Observatories, Chinese Academy of Sciences (NAOC), and is located at Xinglong Station, which is a national facility open to the astronomical community. As the telescope with the highest spectral acquisition rate in the world, LAMOST has already successfully observed and processed more than 7.68 million spectra\footnote{http://dr4.lamost.org}, and is a powerful tool for wide-field and large-sample astronomy (~\cite{zhao2015}).
The main spectroscopic survey instruments of LAMOST are 16 low-resolution spectrographs (~\cite{hou2010}), each of which contains two scientific charge-coupled-device (CCD) cameras (English Electric Valve Company (EEV) 203-82, 4K $\times$ 4K pixels); thus, 32 CCDs are used for observation (~\cite{zou2006}). In 2009, the present authors developed CCD control software to coordinate procedures and to collect status information for the 32 CCDs (~\cite{deng2010}, ~\cite{deng2011}). A schematic diagram of the spectrographs and CCDs is shown in Figure 1.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\textwidth, angle=0]{ms20170246fig1.eps}
\caption{Spectrographs and CCDs in LAMOST}
\label{Fig1}
\end{figure}
The developed software has worked for 8 years, and satisfied its basic design goal. However, this software is excessively complex, maintenance is tedious, and full integration into the LAMOST observatory control system (OCS) is difficult. With the development of the LAMOST spectrum survey, in order to improve software stability and observation efficiency, automation and robustness have become new requirements. It is apparent that the existing CCD control software cannot meet these challenges; thus, a software upgrade is required.
Computer-controlled telescope technology began to appear in the 1980s, following the introduction of personal computers. The considerable advancements in modern information technology since the beginning of the 21st century have contributed to its rapid development, and the goals of automation and robustness have now been achieved for a large number of telescopes. The development history of computer-controlled telescope technology can be divided into four stages: Automated Scheduled Telescopes, Remotely Operated Telescopes, Robotic Autonomous Observatories, and Future Robotic Intelligent Observatories (~\cite{castro2010}). Thus, the present study targets development related to the third stage, Robotic Autonomous Observatories.
Following an in-depth study of telescope automation control technology and software (~\cite{cui2013}), we chose Remote Telescope System 2nd Version (RTS2), a representative third-generation control technology, for introduction to the LAMOST OCS. The first stage of this upgrade project involves implementation of RTS2 to drive the 32 CCDs, as one of the core OCS components. RTS2 is an open-source package for remote telescope control under the Linux operating system and is written in C++. It is designed to run in fully autonomous mode, which can be used to coordinate all autonomous scheduling, telescope pointing, data acquisition, instrument hardware monitoring, etc. (~\cite{peter2006}).
The autonomous capabilities and robustness of RTS2 benefit from its object-oriented design and reasonable class hierarchy. With regard to communication, the code is divided into three levels: the communication code, device-type specific library, and native drivers. The communication code is based on transmission control protocol/internet protocol (TCP/IP), using custom protocol and heartbeat technology, and can handle various network transport transactions and error recovery. The device-type library, which summarizes the common steps of the general telescope equipment, handles device commands (initiation of mount movement and exposure, etc.). The native drivers can drive various device types (CCDs, mounts, sensors, etc.). By inheriting the base device-type class and overriding parts of the virtual function interfaces, a software re-developer can quickly create a new dedicated device driver, which can then be customized for application to various tasks.
As a result of continuous improvement, RTS2 has evolved into a modular package supporting a variety of hardware devices and providing reliable general-purpose observatory control software. Thus, RTS2 is currently running on various small telescope setups and laboratory equipment control systems worldwide\footnote{http://rts2.org/wiki/obs:start} (~\cite{peter2012-1}, ~\cite{zhang2016}). However, because of its complexity, RTS2 has not been successfully applied to large-aperture telescopes to date.
The aim of this study is to upgrade the existing LAMOST camera-control software to adapt to the RTS2 framework, as the first stage of the ultimate project aim of introducing RTS2 to the LAMOST OCS. Here, the developed RTS2-based camera-control system is described in detail and implemented in the actual environment, with multiple test observations being conducted.
\section{Software Design Architecture and Implementation}
\label{sect:SDAI}
As the 32 CCDs are the most important acquisition components of the LAMOST spectral equipment, their control system must be stable, automated, and easy to maintain. To satisfy these requirements, we have introduced an RTS2-based software framework to the LAMOST CCD control system to upgrade the existing software. The CCD control system is shown in the context of the overall LAMOST software framework in Figure 2.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\textwidth, angle=0]{ms20170246fig2.eps}
\caption{CCD control system in the context of the overall LAMOST software framework (SSS: Survey Strategy System, which is responsible for generating observation plans and arranging observation sequences)}
\label{Fig2}
\end{figure}
An object-oriented design method is employed for RTS2, which can support many kinds of cameras. A method for application of RTS2 for customized devices also exists\footnote{http://rts2.org/wiki/doku.php?id=code:camera\_driver}, and this technology is very simple and convenient to use. In contrast, the existing LAMOST CCD control application is more complex. Note that the type of CCD camera employed by LAMOST is not supported by RTS2 directly. Further, no use case involving the control and coordination of such a large number of cameras by RTS2 has been reported to date. Therefore, it was necessary to design the system meticulously and to perform careful testing.
\subsection{Design Architecture}
In order to bridge RTS2 with the 32 EEV CCDs (real cameras) of LAMOST, the designed software is divided into three layers: the center controller layer (OCS, integrated with RTS2), the virtual device layer (named “CCD-Master”), and the real control layer (“CCD-Slave” and the Universal Camera, also named UCAM). UCAM is stand-alone driver software for scientific CCD cameras developed by Lick Observatory. Because of its good versatility and stability, LAMOST uses it as the driver for the EEV CCDs (~\cite{zou2006}, ~\cite{jia2010}). The software architecture is depicted in Figure 3.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\textwidth, angle=0]{ms20170246fig3.eps}
\caption{Overall software architecture (CentralD: Name resolver and observatory housekeeper; EXEC: “Executor” device scheduler)}
\label{Fig3}
\end{figure}
In the center control layer, a customized RTS2 camera component (named “LAMOST-CCD”) is created, which processes the related RTS2 operations. The master-slave paradigm is used in the virtual device layer and the real control layer. In the virtual device layer, a master, i.e., CCD-Master, is designed, which distributes commands to and collects status information from the slaves. It also maintains a connection with LAMOST-CCD, through which it receives commands and returns general status responses. In the real control layer, CCD-Slave is implemented as a slave, serving as a bridge connecting CCD-Master and the EEV CCD stand-alone driver software, i.e., UCAM.
This design scheme preserves the advantages of RTS2, e.g., target scheduling, control automation, and fault tolerance. We also attempted to retain the existing software interface. Separation of the command/status stream was used to achieve a simple design, accelerating the code development progress.
\subsection{Implementation of customized camera class in RTS2 (LAMOST-CCD)}
A new RTS2 camera class was created to control the LAMOST CCDs, named “LAMOST-CCD.” This is an RTS2 camera module and inherits from rts2camd::Camera. The inheritance diagram of our camera class in the RTS2 framework is shown in Figure 4.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\textwidth, angle=0]{ms20170246fig4.eps}
\caption{Inheritance diagram of customized camera module class in RTS2 framework}
\label{Fig4}
\end{figure}
The rts2camd::Camera class is a generic camera module class provided by RTS2, serving as an abstract class for all kinds of cameras. It summarizes the general control steps and statuses of astronomical cameras and reserves a large number of virtual-function interfaces (~\cite{wei2014}). In this project, LAMOST-CCD overrides a small number of these virtual functions, which transmit the special commands and statuses defined by the authors (for details of these custom commands, see Section 3).
The RTS2 framework implements connections between its modules using TCP sockets, and uses its own communication protocol. The core communication class is rts2core::Connection (~\cite{peter2008}). To satisfy the communication requirement of RTS2, we created a connection class named “DevConnectionLAMOSTCCD,” which inherits from rts2core::Connection. This class is used to manage the connection between LAMOST-CCD and CCD-Master. We override its interfaces using functions such as processLine() and setState(). The detailed unified modeling language (UML) diagram is shown in Figure 5.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\textwidth, angle=0]{ms20170246fig5.eps}
\caption{LAMOST-CCD class UML diagram}
\label{Fig5}
\end{figure}
Following the RTS2 rules, when LAMOST-CCD runs on a separate process, it attempts to register itself into the RTS2 CentralD (the name resolver and observatory housekeeper) on rts2core::Connection. Meanwhile, it creates a DevConnectionLAMOSTCCD object to connect to CCD-Master. When this connection has been established, LAMOST-CCD adds this DevConnectionLAMOSTCCD object to its connections list. Subsequently, LAMOST-CCD functions in the same manner as any other RTS2 device module.
When the RTS2 executor or clients send a command to LAMOST-CCD, the latter transmits a command to CCD-Master through the TCP connection managed by DevConnectionLAMOSTCCD. When CCD-Master responds with aggregate statuses, DevConnectionLAMOSTCCD translates these statuses into custom events (EVENT\_LAMOST\_EXPOSURE\_START, EVENT\_LAMOST\_READOUT\_END, etc.), and delivers these events to LAMOST-CCD. Finally, LAMOST-CCD, as a RTS2 device module, changes its own state and performs the appropriate post-processing.
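The translation step can be sketched as follows. This is an illustrative Python stand-in for the actual C++ implementation; only the event names appear in the text, while the status strings and class shape are hypothetical:

```python
# Illustrative sketch (not the production C++ code) of the status-to-event
# translation performed by DevConnectionLAMOSTCCD: aggregate status lines
# arriving from CCD-Master are mapped to named events and delivered to the
# device module, which updates its own state.
EVENT_LAMOST_EXPOSURE_START = "EVENT_LAMOST_EXPOSURE_START"
EVENT_LAMOST_READOUT_END = "EVENT_LAMOST_READOUT_END"

STATUS_TO_EVENT = {            # hypothetical status strings
    "EXPOSING": EVENT_LAMOST_EXPOSURE_START,
    "READOUT_DONE": EVENT_LAMOST_READOUT_END,
}

class LamostCCD:
    """Stand-in for the LAMOST-CCD device module."""
    def __init__(self):
        self.state = "IDLE"

    def post_event(self, event):
        # the real module would also perform RTS2 post-processing here
        if event == EVENT_LAMOST_EXPOSURE_START:
            self.state = "EXPOSING"
        elif event == EVENT_LAMOST_READOUT_END:
            self.state = "IDLE"

def process_line(device, line):
    """Translate one aggregate status line from CCD-Master into an event."""
    event = STATUS_TO_EVENT.get(line.strip())
    if event is not None:
        device.post_event(event)

ccd = LamostCCD()
process_line(ccd, "EXPOSING\n")
```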
As RTS2 provides a daemon-running framework, a connection-management framework, and an event-handling mechanism, we could implement the LAMOST-CCD class simply and rapidly.
\subsection{CCD-Master}
In order to map the 32 LAMOST CCDs to the RTS2 camera module, we added a virtual layer between them and implemented a master-slave program paradigm.
CCD-Master has two tasks. For the first task, CCD-Master (as a socket server) accepts connections and receives commands from LAMOST-CCD; it then processes these commands (translating them, setting special parameters, etc.) and finally connects to the 32 CCD-Slaves (as a socket client) and distributes the commands. For the second task, CCD-Master (as a socket server) accepts connections and receives status messages from the CCD-Slaves; it then determines whether all cameras are working properly and finally provides a total status report to LAMOST-CCD as one virtual camera. The internal structure of CCD-Master is shown in Figure 6. Because simple synchronous socket communication is used, CCD-Master implements two separate processes, one for commands and the other for statuses, to ensure that command and status transmissions do not block each other.
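The status aggregation that produces the virtual camera can be thought of as reporting the most severe state found among the 32 real cameras. The following sketch illustrates only this idea; the function name `aggregate_status`, the state names, and the severity ordering are our own illustrative assumptions, not part of the actual CCD-Master implementation:

```python
# Illustrative sketch (not the real CCD-Master code): combine the
# per-camera states into one virtual state for the RTS2 side.
# States are ordered by assumed severity; the virtual camera reports
# the most severe state found among all real cameras.
SEVERITY = {"IDLE": 0, "EXPOSING": 1, "READING": 2, "ERROR": 3}

def aggregate_status(camera_states):
    """Return the aggregate (most severe) state of all cameras,
    plus the list of cameras that differ from that state."""
    if not camera_states:
        return "ERROR", []        # no cameras reporting is itself an error
    worst = max(camera_states.values(), key=lambda s: SEVERITY[s])
    outliers = [cam for cam, s in camera_states.items() if s != worst]
    return worst, outliers
```

With this convention, if one of the 32 cameras enters an error state, LAMOST-CCD sees the whole virtual camera as erroneous, while the list of outliers identifies the faulty device.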
\begin{figure}[!htbp]
\centering
\includegraphics[width=\textwidth, angle=0]{ms20170246fig6.eps}
\caption{CCD-Master internal structure diagram (GUI: Graphical user interface)}
\label{Fig6}
\end{figure}
In an ideal scenario, CCD-Master should distribute commands to the 32 cameras exactly simultaneously, especially when the commands are ``start/stop exposure.'' However, because of the operating principles of the program, this cannot be achieved very precisely. Fortunately, the nocturnal observations conducted at LAMOST typically involve long-term exposures (600$-$1800 s), and the UCAM time sensitivity is 0.01 s. Therefore, a TCP-based multi-thread design for command distribution was adopted. This design is stable and low-cost, and satisfies the engineering requirements well; the time differences between the threads are negligible in this case. Note that this approach was examined experimentally in this study, and detailed results are presented in Section 4.1.
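The multi-thread distribution design can be sketched as follows. This is an illustrative outline only: `distribute_command` and the injectable `send_fn` are our own hypothetical names (in the real system the send function would wrap a TCP socket to one CCD-Slave), but the structure, one sender thread per camera joined before reporting back, matches the design described above:

```python
import threading

def distribute_command(command, endpoints, send_fn):
    """Send the same command to every endpoint concurrently,
    one thread per endpoint (sketch of the multi-thread design;
    send_fn(endpoint, command) -> bool stands in for a TCP send)."""
    results = {}
    lock = threading.Lock()

    def worker(ep):
        ok = send_fn(ep, command)
        with lock:                 # protect the shared result dict
            results[ep] = ok

    threads = [threading.Thread(target=worker, args=(ep,))
               for ep in endpoints]
    for t in threads:
        t.start()                  # all sends begin nearly simultaneously
    for t in threads:
        t.join()                   # wait until every camera has been reached
    return results
```

Because the threads are started back-to-back before any join, the spread between the first and last send is only the thread start-up jitter, which is what makes the time difference negligible relative to the UCAM time sensitivity.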
In the status process, a single thread is used, because the status-collection time accuracy is not critical. We added a storer to cache the status results of the 32 cameras. When RTS2 sends the ``Get Status'' command, the CCD-Master command \& status forwarder (in the command process) transfers this command to the storer. The storer then sends the status results of the 32 cameras back to the status processor, and the status processor generates a virtual status. Finally, the status sender obtains this virtual status and returns it to LAMOST-CCD (RTS2). The status reception performance of CCD-Master is reported in Section 4.2.
A graphical user interface (GUI) was implemented for CCD-Master, which provides observers with the option to obtain more detailed information on the real cameras. The GUI is shown in Section 4.3.
\subsection{CCD-Slave}
The internal structure of CCD-Slave is shown in Figure 7. This design retains the original interfaces of the existing software, which is very important when upgrading the existing software of a large telescope that has been running stably for a long time. CCD-Slave, located on the personal computer (PC) of each CCD controller, is a bridge between CCD-Master and UCAM.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\textwidth, angle=0]{ms20170246fig7.eps}
\caption{CCD-Slave internal structure diagram}
\label{Fig7}
\end{figure}
Like CCD-Master, CCD-Slave also has two separate processes, one of which manages commands while the other handles status messages. The command listener creates a socket and receives commands from CCD-Master, then translates each command into a special string (or binary codes) that can be understood by UCAM. When CCD-Master sends a ``Get Status'' command, the command-listener status flag changes to TRUE. The command listener then connects to the CCD-Master status endpoint and sends the status information, which is cached in the ``status buffer.''
The status collector process registers itself into the UCAM information channel and subscribes to certain status topics; then, it receives messages continuously. When a new status message is received, the status collector pushes it into the status buffer through the pipe, which is a bridge that connects the two processes. The pipe is created when CCD-Slave is initiated.
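The collector-to-buffer pipe described above can be sketched with a small example. This is illustrative only (the helper names `make_status_channel`, `push_status`, and `drain_status` are ours, and the real CCD-Slave is not written this way), but it shows the pattern: a one-way pipe created at start-up, with the collector side pushing each fresh UCAM message and the buffer side keeping only the latest status per camera:

```python
from multiprocessing import Pipe

def make_status_channel():
    """Create the one-way channel the status collector uses to push
    fresh UCAM status messages toward the status buffer (sketch)."""
    recv_end, send_end = Pipe(duplex=False)
    return recv_end, send_end

def push_status(send_end, camera_id, message):
    # Collector side: push each subscribed status message as it arrives.
    send_end.send((camera_id, message))

def drain_status(recv_end, buffer):
    # Buffer side: pull everything currently queued in the pipe,
    # keeping only the most recent message per camera.
    while recv_end.poll():
        camera_id, message = recv_end.recv()
        buffer[camera_id] = message
    return buffer
```

In the real system the two ends would live in the two CCD-Slave processes; here both ends are exercised in one process purely for illustration.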
\section{Internal Communication Protocol}
\label{sect:protocol}
RTS2 employs its own communication protocol (\cite{peter2008}), which is based on sending American Standard Code for Information Interchange (ASCII) strings over TCP/IP sockets. It is fast, simple, and robust. As a device module, LAMOST-CCD uses the existing RTS2 protocol to communicate with other RTS2 modules. To communicate with CCD-Master, LAMOST-CCD uses customized commands and statuses based on the RTS2 protocol. Table 1 lists the various custom-defined commands and status messages.
For convenience, we used a key-value pattern to transport and store the various parameters. This approach is very simple to implement and very flexible; using it, we could easily add and modify parameters throughout the entire code development process. For example, when LAMOST-CCD receives a ``set parameters'' command from an RTS2 internal module, it translates the command according to Table 1 and adds a ``key-value'' pair (e.g., $<$selected=all$>$). LAMOST-CCD then sends the translated command to CCD-Master. CCD-Master receives the command, parses it, obtains the value of ``selected,'' and forwards the command to all CCD-Slaves. Each CCD-Slave receives and parses the command and then drives its CCD camera.
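The key-value pattern can be illustrated with a short sketch. The exact wire format of the real system may differ; the helper names and the assumption that pairs are delimited as <key=value> are ours, chosen to match the example above and the command formats in Table 1:

```python
import re

# Illustrative helpers for the key-value pattern described in the text
# (hypothetical; the real protocol implementation may differ in detail).
PAIR_RE = re.compile(r"<([^=<>]+)=([^<>]*)>")

def build_command(verb, **params):
    """e.g. build_command('EX', selected='all') -> 'EX <selected=all>'"""
    pairs = "".join(" <%s=%s>" % (k, v) for k, v in params.items())
    return verb + pairs

def parse_command(line):
    """Split a command line into its verb and a dict of key-value pairs."""
    verb = line.split(None, 1)[0]
    params = dict(PAIR_RE.findall(line))
    return verb, params
```

Because parameters travel as self-describing pairs, a new parameter can be added at any layer (LAMOST-CCD, CCD-Master, or CCD-Slave) without changing the parsers of the other layers.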
\begin{table}
\begin{center}
\caption[]{ Custom-defined commands and status messages based on RTS2 protocol}\label{Tab:custom-protocol}
\begin{tabular}{clcl}
\hline\noalign{\smallskip}
Format & Explanation & Sender & Receiver \\
\hline\noalign{\smallskip}
GV $<$key$>$ & Get parameters & LAMOST-CCD & CCD-Master\\
XV $<$key$>$ $<$value$>$ & Set parameters & LAMOST-CCD & CCD-Master\\
EX & Exposure start & LAMOST-CCD & CCD-Master\\
SX & Exposure stop & LAMOST-CCD & CCD-Master\\
RD & Readout & LAMOST-CCD & CCD-Master\\
A $<$Message$>$ & Warning information & CCD-Master & LAMOST-CCD\\
+ $<$Message$>$ & Success information & CCD-Master & LAMOST-CCD\\
- $<$Message$>$ & Failure information & CCD-Master & LAMOST-CCD\\
$\cdots\cdots$ & Other control commands & LAMOST-CCD & CCD-Master\\
\noalign{\smallskip}\hline
\end{tabular}
\end{center}
\end{table}
\section{Software Deployment and Testing}
\label{sect:testing}
After the software development was completed, the software was deployed in the real camera-control environment of LAMOST (shown in Figure 1). The 32 LAMOST cameras (EEV CCDs) obtain light signals from 4000 optical fibers. Each camera has a controller PC with an Intel(R) Core(TM) i3-2100 3.10-GHz central processing unit (CPU) and 4 GB of memory. The CCD-Master server has a 2-way Intel Xeon 2.53-GHz processor (containing 16 logical CPU cores in total) and 12 GB of memory. The hardware configuration of the RTS2 LAMOST-CCD PC is identical to that of the camera-control PCs. All computers are connected to a gigabit switch, and the CentOS 6.8 operating system is installed on each computer.
We designed a series of experiments to test the performance of the new control software, to verify whether this software can satisfy the actual observation requirements of LAMOST. The results are reported in the following subsections.
\subsection{Command distribution performance}
We designed the first experiment to test the performance of the TCP-based command distribution, and to determine whether the new system satisfies the engineering requirements for controlling 32 real CCD cameras. Note that the performance of the existing User Datagram Protocol (UDP)-based system is also discussed in this subsection, for comparison with the upgraded system. We produced alphabetic character command data (8 B, 12 B, ..., 1 KB), and distributed them via both the existing software and the new system. To guarantee experimental accuracy, the simulated commands of every size were each tested for five loops, where each loop distributed the command 1 million times. The averaged results for the distribution speed and network bandwidth occupancy are shown in Figure 8 and Figure 9, for the existing software and new system, respectively.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\textwidth, angle=0]{ms20170246fig8.eps}
\caption{ Command distribution performance of existing software, based on transmission rate and bandwidth as functions of command size}
\label{Fig8}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[width=\textwidth, angle=0]{ms20170246fig9.eps}
\caption{Command distribution performance of new system, based on transmission rate and bandwidth as functions of command size}
\label{Fig9}
\end{figure}
In the existing UDP-based software, the command distribution time is constant at 1.94 ms$/$piece, and the network occupancy increases with the command packet size. In the new, TCP-based system, because multi-thread technology is employed, the network occupancy is always close to the maximum transmission speed (the gigabit switch limit is $10^9$ bps $/$ 8 $\approx$ 119.21 MBps), and the per-command delivery time increases with the command packet size. From a comparison of the two figures, it is apparent that the new system outperforms the existing system when the command packet size is less than 200 B, whereas for packets larger than 200 B the new system is slower than the existing software. However, even when the command size is 1 KB, the delivery time of the new system is only 8.94 ms$/$piece, far less than the 100 ms UCAM time sensitivity. Therefore, the command distribution speeds of both software systems satisfy the actual engineering requirements of LAMOST.
In real observation scenarios, our camera commands are usually smaller than 200 B; thus, the use of TCP is preferable to that of UDP. Further, with TCP there is no risk of packet loss or out-of-order packets. Moreover, many mature library implementations of TCP are available, which is expected to reduce the maintenance and redevelopment workload significantly. Considering all the above, we conclude that the new system can more effectively satisfy the actual engineering requirements of LAMOST.
\subsection{Status transmission performance}
In the second experiment, we tested the status-collection performance of the new system to determine whether it satisfies the engineering requirements for collecting and processing the status information of the 32 real CCD cameras. Status data packets (8 B, 12 B, ..., 1 KB) were created for each of the 32 cameras, and all these status packets were sent to our new system simultaneously. As previously, to guarantee experimental accuracy, the simulated status packets of each size were tested for five loops, and each loop involved one million transmissions. The averaged results for the transmission speed and network bandwidth occupancy are shown in Figure 10.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\textwidth, angle=0]{ms20170246fig10.eps}
\caption{Status collection performance of new system}
\label{Fig10}
\end{figure}
For simple development and maintenance, we designed a single thread and a single endpoint to receive and process all status packets. As shown in Figure 10, the per-packet status processing time increases with the status packet size, as in the command experiment. However, the network occupancy is not very high and exhibits a peak at a status packet size close to 48 B; the reason for this behavior is that the software is limited by input/output (IO) blocking and sender-judgment logic. Nevertheless, even when the status packet size is 1 KB, the status receiving and processing time is only 3.2 ms$/$piece. Because each camera generates status messages only approximately once per second, the status-processing requirements in real observation scenarios are not demanding; thus, the status processing performance of the developed system is adequate.
For details of the reception performance and packet loss rate of the existing system, please see ~\cite{dong2009}. As this information has already been published, it is not described again here.
\subsection{Automatic observation}
We wish to introduce RTS2 to the LAMOST OCS so as to implement its automatic telescope control capability and robustness, thereby overcoming the shortcomings of the existing camera-control software. Therefore, a third experiment was designed to test the automation and robustness of the new system.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth, angle=0]{ms20170246fig11.eps}
\caption{The schematic diagram of RTS2 automatic observation}
\label{Fig11}
\end{figure}
RTS2 provides an observation decision-maker named ``selector'' (SEL), which can automatically select targets from a database according to various strategies (\cite{peter2012-2}); the targets in the database can be inserted or updated using RTS2 scripts, e.g., rts2-newtarget and rts2-target. RTS2 also provides a device scheduler named ``executor'' (EXEC). These two core modules, along with the CentralD status synchronizer, guarantee the RTS2 automation capability. In addition, all RTS2 modules have a pluggable implementation: the modules run independently and are connected dynamically. This property provides RTS2 with fault tolerance and robustness; it is possible to remove and add devices during an observation without affecting the observation as a whole. Finally, RTS2 also provides a large number of dummy device modules for various types of astronomical equipment. The relationship between the RTS2 modules for automatic observation is shown in Figure 11. Therefore, once our camera-control software was realized, we could register the new system in the RTS2 environment and immediately begin simulated observations using the other dummy device modules and servers.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth, angle=0]{ms20170246fig12.eps}
\caption{Observation using new camera-control software with RTS2 environment}
\label{Fig12}
\end{figure}
We performed multiple observation tests continuously during a LAMOST maintenance session, using a calibration lamp instead of real observation targets. A three-loop observation was performed for each target, and a custom script was run for each observation to yield three exposures of 5 s each. Figure 12 shows the procedure of one of these tests. In the observation experiment, we created a target table in a PostgreSQL database and ran the RTS2 automatic mode. RTS2 SEL then selected a suitable target from the database and sent the target identifier to EXEC. EXEC then drove the telescope (T0 in Figure 12), which was an RTS2 dummy device module in this case. Our new camera-control system (C0 in Figure 12) worked with EXEC to complete the observation task.
Figure 13 is a screenshot of the GUI of the new system for a given observation. This GUI provides more detailed information on the statuses of the 32 EEV CCD cameras during operation than the existing system.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth, angle=0]{ms20170246fig13.eps}
\caption{GUI of new camera-control software during observation}
\label{Fig13}
\end{figure}
Figure 14 shows the results of a selected observation. Note that, after one observation, we obtain 32 target Flexible Image Transport System (FITS) files (Arc FITS files, to be specific).
\begin{figure}[!htbp]
\centering
\includegraphics[width=\textwidth, angle=0]{ms20170246fig14.eps}
\caption{Results provided by new system for one observation}
\label{Fig14}
\end{figure}
The results of the observation experiment indicate that the new RTS2-based camera-control software runs automatically and stably, and satisfies the LAMOST observation requirements.
\section{Discussion}
\label{sect:discussion}
\subsection{Virtual camera or 32 camera modules added to RTS2 framework}
To create the customized RTS2 device module for LAMOST, two design schemes were considered, as discussed below.
\subsubsection{First scheme}
According to this scheme, we would create 32 of our own camera modules, all of which would inherit from the RTS2::Camera class and overwrite certain interfaces (virtual functions); the number of customized devices would simply be increased. This process appears simple; however, it has many hidden disadvantages and risks. RTS2 was originally designed for small telescopes, and the intermodule communication mechanism uses a TCP-based fully connected topology (not a star topology). When new modules are registered in RTS2, the number of intermodule connections increases geometrically. An excessive number of connections not only reduces the stability of the RTS2 system, but also occupies additional bandwidth (many unnecessary messages would be transferred between the new camera modules).
\subsubsection{Second scheme}
According to this scheme, a virtual layer that bridges RTS2 and the 32 real cameras is added. We create one customized camera and register it in the RTS2 framework as if it were a single simple camera. We then add a virtual layer (a proxy) responsible for command distribution, coordination of the 32 real cameras, and status collection; this virtual layer simply provides an aggregate message to the RTS2 customized camera as feedback. By adding the virtual layer, we simplify the development process and reduce the coupling between RTS2 and the 32 real cameras. A further benefit is that we can reconstruct and upgrade the existing LAMOST CCD control software, reducing the workload and avoiding duplicated development.
Based on the above (and the previous sections), it is apparent that we chose the second scheme for implementation of the LAMOST camera-control system.
\subsection{Use of TCP instead of UDP}
In the existing camera-control software, a UDP broadcast mechanism is used, with the expectation that commands can arrive at each camera simultaneously. To improve the UDP reliability, much auxiliary code has been added; however, packet loss and out-of-order problems still occur (\cite{dong2009}). This problem is particularly serious in real observation scenarios, especially in cases involving large information throughput or a heavy network load. In addition, this auxiliary code renders the software obscure and increases the maintenance difficulty.
On the other hand, TCP provides reliable, ordered, and error-checked delivery of a stream of octets. This approach can overcome the shortcomings of UDP unreliability in the LAMOST high-speed local area network. From a technical perspective, TCP has lower efficiency than UDP. However, experiments have shown that careful TCP design can yield efficiency close to that for UDP. This is completely in line with the LAMOST engineering requirements (see Section 4.1 and Section 4.2). Further, use of TCP as the transport protocol can render the software both simple and stable. Moreover, use of TCP simplifies integration of the RTS2 framework into the new LAMOST camera-control system.
In summary, in the new RTS2-based LAMOST camera-control system, the UDP-based communication mechanism is replaced with a TCP transport protocol.
\subsection{Fault tolerance and exception handling mechanism}
As this is an engineering project, the fault tolerance and exception handling mechanism is very important: it ensures the stable operation of the entire software system and provides a human-computer interaction mechanism when necessary. The fault tolerance and exception handling mechanism of the new RTS2-based LAMOST camera-control system covers the following aspects.
Firstly, as a device module of the RTS2 framework, the new system inherits the fault tolerance of RTS2 directly. All RTS2 programs are designed to be fault tolerant, and the failure of one device does not affect the daemons executing other devices (\cite{peter2006}). It is possible to remove and add our virtual camera module during an observation without affecting the observation as a whole.
Secondly, in CCD-Master, the status processor allocates a timer to each CCD-Slave to track the key operation steps regularly. For example, during the readout process, each CCD-Slave transfers a readout-progress message (generated by UCAM) every second. The timers monitor these messages and generate a warning if a CCD-Slave does not update its message within the specified time. The warning is broadcast to the other RTS2 modules via the RTS2 message mechanism (the logStream() function), and it can also drive the GUI to pop up a message box to remind the observer.
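The per-slave timer logic can be sketched as a simple watchdog. This is an illustrative sketch only; the class name `ReadoutWatchdog` and its interface are our own hypothetical choices, not the actual CCD-Master code:

```python
# Illustrative watchdog sketch: each CCD-Slave is expected to refresh
# its readout-progress message at least once per `timeout` seconds;
# slaves whose message is overdue would trigger a warning.
class ReadoutWatchdog:
    def __init__(self, slaves, timeout=2.0):
        self.timeout = timeout
        self.last_update = {s: None for s in slaves}  # None = never seen

    def heartbeat(self, slave, now):
        """Record that `slave` sent a progress message at time `now`."""
        self.last_update[slave] = now

    def stale_slaves(self, now):
        """Return the slaves whose progress message is overdue."""
        return [s for s, t in self.last_update.items()
                if t is None or now - t > self.timeout]
```

With a 1 s message cadence from UCAM, a timeout of about two cadences tolerates a single missed message while still catching a stalled slave quickly.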
Thirdly, following the RTS2 connection paradigm, the connection between CCD-Slave and CCD-Master allows reconnection. When a single CCD camera encounters a serious hardware problem, an engineer can close the corresponding CCD-Slave and remove the connection. After the problem is solved, the CCD-Slave can be restarted and the connection is re-established automatically. This process does not affect the normal operation of the other CCDs.
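A reconnection loop of this kind typically retries the connection attempt a bounded number of times with a back-off between attempts. The following sketch is hypothetical (the function name and parameters are ours, and the real sleep calls are elided as comments so the sketch stays self-contained):

```python
def reconnect(connect_fn, max_attempts=5):
    """Try to re-establish the CCD-Slave -> CCD-Master connection,
    giving up after `max_attempts` tries (illustrative sketch).
    connect_fn() returns a connection object, or None on failure."""
    for attempt in range(max_attempts):
        conn = connect_fn()
        if conn is not None:
            return conn
        # In a real system: time.sleep(2 ** attempt)  # exponential back-off
    return None
```

Bounding the number of attempts keeps a permanently broken slave from retrying forever, while the automatic retries cover the common case of a brief outage.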
The above considerations of fault tolerance and exception handling measures provide the necessary guarantees for the stable operation of the new camera-control system.
\subsection{RTS2 extension to entire LAMOST OCS}
Virtualization of complex devices is a very innovative concept for upgrading the control software of a large-scale astronomical telescope. This approach can add the features of the new software framework while retaining the interfaces of the existing software with the minimum possible changes. Hence, the software design is simplified and bugs caused by adding too much new code are reduced, with development being accelerated.
In order to improve the LAMOST automatic observation capability, we introduced the RTS2 framework to the LAMOST camera-control software. The 32 CCD cameras of LAMOST were mapped into one RTS2 virtual camera module. The results of the validation experiments conducted in this study prove the automatic observation capability of the new software.
The LAMOST OCS is an extremely large and complex software system, of which the camera-control system is only one subsystem. To realize automatic observation capability for LAMOST, we must reconstruct the other OCS subsystems, i.e., the tracking controller, fiber-positioning controller, etc. The architecture diagram for extension of RTS2 to the entire LAMOST OCS is shown in Figure 15.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\textwidth, angle=0]{ms20170246fig15.eps}
\caption{Architecture diagram for extension of RTS2 to the entire LAMOST OCS}
\label{Fig15}
\end{figure}
Using the device virtualization approach, we will create additional virtual device modules to communicate with other existing OCS subsystems. Finally, by extending the RTS2 framework to the entire OCS, we will realize the ultimate goal of using RTS2 to control LAMOST.
Currently, modular design and layered implementation are the general trend in software development, and a large number of excellent new software frameworks are being used to implement this concept. To exploit the advanced features provided by these frameworks, compatibility issues between them and existing software must be addressed. The virtualization of existing devices or software is an important method for resolving this problem. We believe that this concept is the guiding principle for introducing the RTS2 framework to LAMOST and enhancing the automatic observation capability. In addition, this concept and its implementation will provide a reference for other large astronomical telescope software upgrades.
\section{Conclusions}
\label{sect:conclusion}
In this study, we designed and implemented a new camera-control system based on RTS2 for LAMOST. The results of a series of experiments and observation tests prove the automation and stability of the new system, and indicate that the new system satisfies the observation requirements of LAMOST. In developing the new system, we reconstructed the existing software and implemented the concept of a virtual layer. These technologies greatly simplified the software development. Further, the approach described in this study will also provide a reference for software upgrades of other large-aperture astronomical telescopes, with the aim of realizing automation.
The camera-control system is only one part of the LAMOST OCS. In the future, other existing software packages (the mount, guider, online data handler, etc.) will be reconstructed for adaptation to the RTS2 framework. By retaining the concept of device virtualization, we are certain this task will be accomplished.
\begin{acknowledgements}
This study is supported by the National Key Research and Development Program of China (Grant No. 2016YFE0100300), the Joint Research Fund in Astronomy (Grant Nos. U1531132, U1631129, and U1231205) under cooperative agreement between the National Natural Science Foundation of China (NSFC) and the Chinese Academy of Sciences (CAS), and the National Natural Science Foundation of China (Grant Nos. 11603044, 11703044, 11503042, 11403009, and 11463003). The Guo Shou Jing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. The authors also gratefully acknowledge the helpful comments and suggestions of the reviewers, which improved the paper.
\end{acknowledgements}
\section{INTRODUCTION}
\label{sec:intro}
Observations of core collapse supernovae (CCSNe) and their analysis indicate that a non-negligible fraction of their progenitors eject a substantial amount of mass tens of years to several days before explosion (e.g., \citealt{Foleyetal2007, Mauerhanetal2013, Ofeketal2013, SvirskiNakar2014, Moriya2015, Goranskijetal2016, Tartagliaetal2016, Arcavietal2017, Marguttietal2017, Nyholmetal2017, Reillyetal2017, Yaronetal2017, BoianGroh2018, Liuetal2018, Pastorelloetal2018}). Just before collapse, nuclear reactions in the core release a huge amount of energy. Most of it is carried away by neutrinos (e.g., \citealt{ZirakashviliPtuskin2016}), but some fraction of this energy might find its way to the stellar envelope.
Mechanisms that might carry energy from the violent nuclear burning to the envelope, like waves \citep{QuataertShiode2012, ShiodeQuataert2014} and magnetic activity \citep{SokerGilkis2017a}, are likely to cause mainly envelope expansion rather than mass ejection (e.g., \citealt{Soker2013, McleySoker2014, Fuller2017}). One way to utilize the expanding envelope for the ejection of mass is by a binary interaction. A stellar companion that was detached from the envelope of the primary star, i.e., the progenitor of the supernova, before the envelope was inflated starts to accrete mass from the inflated envelope. The mass flows onto the secondary star through an accretion disc, and this disc launches jets that remove mass from the inflated envelope (e.g., \citealt{KashiSoker2010a, Soker2013, McleySoker2014, DenieliSoker2018}).
A similar type of outburst might take place in the case that the primary star suffers a rapid expansion in late stages of evolution even when it is yet far from explosion. The most prominent example is the Great Eruption of the binary system Eta Carinae (on the Great Eruption itself see, e.g., \citealt{DavidsonHumphreys2012}). In earlier papers two of us suggested that accretion of mass from the primary star onto the secondary more compact star powered the Great Eruption (e.g., \citealt{KashiSoker2010a}).
Part of the accretion energy is channeled to light, by the accretion process itself, by the collision of the jets with the envelope, and/or from the collision of the freshly ejected envelope mass with previously ejected slower mass. The bright event might mimic a supernova explosion, and hence it is referred to as a supernova impostor. Supernova impostors overlap with major eruptions of luminous blue variables (LBVs), and both groups are part of the larger and heterogeneous group of intermediate luminosity optical transients (ILOTs; \citealt{KashiSoker2016}).
If the accreting companion is a compact object, the energy release will be larger, and the conditions in the accretion flow might be extreme in terms of density and accretion rate. Several studies consider the possible interaction between a neutron star (NS) or a black hole (BH) and the envelope (or even the core) of its larger companion, as mechanisms for gamma-ray bursts \citep{FryerWoosley1998,ZhangFryer2001}, supernova-like explosions \citep{BarkovKomissarov2011,Chevalier2012,SokerGilkis2018}, or both \citep{Thoneetal2011,Fryeretal2014}. Jets might be launched from accretion onto a white dwarf (WD) or a main sequence (MS) star \citep{Soker2004}, or, more pertinent for the current study, from an accretion disc forming around a NS \citep*{ArmitageLivio2000,Papishetal2015,SokerGilkis2018}. The case of a NS companion that by launching jets explodes and terminates the evolution of the primary giant star was termed by \cite{SokerGilkis2018} a common envelope jets supernova (CEJSN).
Two major uncertainties in the process whereby a compact star accretes mass inside the envelope of a giant star are the accretion rate and the formation of an accretion disc. Different studies, in particular hydrodynamic simulations of accretion onto a compact object inside a common envelope (e.g., \citealt*{RasioShapiro1991,Fryeretal1996,Lombardietal2006,RickerTaam2008,MacLeod2015,MacLeod2017}), have reached different conclusions on the accretion rate and on whether accretion discs are formed or not. There are two key processes that facilitate the formation of an accretion disc. Firstly, the jets themselves remove energy and high-entropy material from the vicinity of the accreting object \citep*{Shiberetal2016} and thereby reduce the pressure near the accreting object. \cite{Chamandyetal2018} show in their hydrodynamical simulations that this energy removal allows a high accretion rate. If this energy removal by the jets is not considered, then pressure builds up near the accreting object and the accretion rate is much lower (e.g., \citealt{RickerTaam2012, MacLeodRamirezRuiz2015b}). Secondly, it is very likely that an accretion disc is formed before the compact companion enters the envelope, and that it continues to exist inside the envelope \citep{Staffetal2016}. We return to discuss these two processes in the relevant sections.
We note also that jets (and disc winds) remove angular momentum from the accretion flow. \cite{MacLeodRamirezRuiz2015b} argue that the steep density gradient in the envelope imposes an angular momentum barrier to accretion onto a compact object in the envelope. Yet during Roche-lobe overflow mass flows onto the compact object from one side, and an accretion disc does form. In this case the envelope of the donor is synchronized with the orbital motion. Even if there is no synchronization, rotation of the envelope, which must exist to some degree, facilitates the formation of an accretion disc. \cite{Staffetal2016} include envelope rotation and find the formation of an accretion disc before the companion enters the envelope. Once an accretion disc exists, jets can remove angular momentum and maintain the disc. Even if the initial disc is small, friction within the disc leads to its expansion, and the material in the disc then collides with the accreted gas at increasing distances. \cite{Chamandyetal2018} do not include envelope rotation, but nonetheless find that an accretion disc can be formed inside the common envelope. They do remove mass and energy from the vicinity of the accreting body, as we expect jets to do. For the above reasons, in our study we assume that such accretion discs form and launch jets. We expect the formation of these discs to start while the companion is still outside the envelope of the giant, and this might be an important feature in a thoroughly self-consistent model.
In the present study we consider non-terminal eruptions that can be classified as supernova impostors, or more generally as ILOTs, powered by an accreting NS companion that launches jets while orbiting inside the envelope or while grazing the envelope. We term this a CEJSN impostor. In some of these cases, perhaps when the orbit is rather eccentric, the NS will exit the envelope after the eruption and the process can repeat itself. In section \ref{sec:scenarios} we discuss scenarios which can bring a NS into the envelope of a supergiant star, and initiate the accretion and outflow which powers an energetic outburst. In section \ref{sec:parameters} we derive scaled relations to show the outburst characteristics, focusing on the scenario of rapid expansion of a massive star due to an instability in its core. In section \ref{sec:mesamodels} we apply our derivations to models of supergiants which did not experience a rapid expansion. In section \ref{sec:sn2009ip} we discuss the application of our model to SN~2009ip and some other similar transient events. We summarize our main findings in section \ref{sec:summary}.
\section{SCENARIOS}
\label{sec:scenarios}
The general scenario we consider is the passage of a NS through the envelope of a larger star, such as a supergiant. The NS is likely to be in an eccentric orbit, following the formation of the NS with a natal kick, and it plunges deep into the envelope only near periastron passages. Accretion onto the NS through an accretion disc is followed by an energetic bipolar outflow we term `jets', powering a luminous transient, or outburst. The three following scenarios can bring a NS into the envelope of a massive supergiant star.
(i) The supergiant experiences a phase of rapid expansion near the end of its evolution (e.g., \citealt{QuataertShiode2012,McleySoker2014,SokerGilkis2017a}). This might be observed as a \textit{pre-explosion outburst}, occurring just before the CCSN explosion of the supergiant.
(ii) The companion star reaches the end of its evolution and becomes a NS through a CCSN explosion, receiving a natal kick \citep{NSKicks1,NSKicks2,NSKicks3,NSKicks4} which brings its orbit to interact with the envelope of the supergiant, causing a \textit{post-explosion outburst}. This is conceptually similar to the scenario discussed by \cite{MichaelyPerets2018}, where a CCSN precedes the merger event of two compact objects.
(iii) A dynamical perturbation due to a tertiary star (e.g., \citealt{PeretsKratter2012}) changes the orbit of the inner binary, causing the NS to enter the envelope of the larger star. In this case the ensuing outburst might be unrelated directly to a CCSN explosion.
In the latter two scenarios, the envelope structure of the supergiant star will be similar. In principle, the engulfing star can also be a MS star, but we will focus on an evolved supergiant in this study. The envelope structure for the first scenario in the list above is expected to be different, as the envelope has expanded significantly due to energy deposition following an instability in the core.
In all the scenarios listed above, if the NS is captured in the envelope, it can spiral-in all the way to the core and completely disrupt the star (e.g., \citealt{SokerGilkis2018}). In this case an energetic terminal explosion occurs that is termed a CEJSN. Otherwise, the outburst is related or unrelated to a CCSN depending on which scenario, from those listed above, has brought the NS into the envelope of the supergiant.
A point which is relevant for all considered scenarios is the existence of a NS companion to a massive star, which can theoretically be more massive than the progenitor of the NS. This is possible if mass transfer occurred earlier in the evolution, with the initially more massive star transferring some material onto its companion before collapsing into a NS. Also, it might be that the relation between the initial mass and the compact remnant is non-trivial and non-monotonic. We will not discuss this point further.
A key assumption in our study is that a NS that accretes mass at a high rate and with sufficient angular momentum to form an accretion disc launches jets. There are several simulations that show the formation of an accretion disc around the compact object (a NS or a BH) that is formed from the collapsing core of a massive star (e.g., \citealt{MacFadyenWoosley1999, SekiguchiShibata2011, Tayloretal2011, BattaLee2016, Gilkis2018}). The two-dimensional simulations of \cite{Nagatakietal2007} which include magneto-hydrodynamic effects show the formation of jets with an energy of about $2 \times 10^{49} ~\rm{erg}$, supporting the notion that a NS or a BH can launch jets when accreting mass from a disc at a high rate and with sufficient angular momentum. \cite{Nagatakietal2007} conclude that their results cannot represent gamma-ray burst jets (see also \citealt{Fujimotoetal2006}), but this has no significance for our study since we do not look for jets with high Lorentz factors.
While the collapse of a massive star is simulated with high resolution by focusing on the inner core region, the accretion onto a compact object orbiting inside the envelope of its companion is considerably more difficult to fully simulate due to its inherent multi-scale nature. Yet we can find some inspiration from qualitatively similar scenarios. \cite{Staffetal2016} show in their hydrodynamical simulation of a MS companion in an eccentric orbit around an asymptotic giant branch star that near periastron an accretion disc is formed. A NS is much smaller than a MS star and much smaller than the resolution in their simulations. This implies that (i) even gas accreted with much less specific angular momentum can form an accretion disc, and (ii) the disc is tightly bound to the NS. The process is such that the accretion disc is formed before the NS enters the envelope, or when the NS is near the surface, and the disc survives as the NS orbits inside the envelope.
Further support comes from observations. \cite{BlackmanLucchini2014} suggest that the large momenta in some bipolar planetary nebulae (PNe) require that the energetic jets that inflated the lobes were launched inside a common envelope. In PNe the companion is most likely a MS star, or maybe a WD. It is easier still to form an accretion disc around a NS.
\section{The characteristics of the interaction}
\label{sec:parameters}
The proposed scenario is based on the ability of a NS to accrete mass at very high rates thanks to cooling by neutrinos \citep{HouckChevalier1991, Chevalier1993, Chevalier2012}. Neutrino cooling is efficient when the mass accretion rate is $\dot M_{\rm acc} \ga 10^{-3} ~\rm{M_{\sun}} ~\rm{yr}^{-1}$ \citep{HouckChevalier1991}. Furthermore, if jets are launched, as we assume in the present study, then cooling by jets takes away energy from the accretion disc.
To estimate the power of the jets, we assume that cooling by neutrinos plays the same role as cooling by photons (radiation) in traditional geometrically-thin accretion discs. In young stellar objects (YSOs), for example, the geometrically-thin accretion disc implies a very efficient radiative cooling. Despite this efficient cooling the canonical assumption for jets in YSOs is that they carry $\approx 10$--$40 \%$ of the accreted mass (e.g., \citealt{Federrathetal2014} and references therein), and their terminal velocity is about the escape speed from the YSO. We expect that the accretion disc in our studied case will be turbulent and contain strong magnetic fields much as geometrically-thin discs around YSOs. These ingredients are generally considered to be required for jet launching from accretion discs. In what follows, therefore, we assume that the jets carry a fraction of $\epsilon_j\approx 0.1$ of the accreted mass, and that their terminal velocity is about the escape velocity from a NS, $v_j \simeq 10^5 ~\rm{km} ~\rm{s}^{-1}$.
In the proposed scenario, a NS orbits a massive supergiant star. At a certain point, the NS finds itself inside the envelope, as discussed in section \ref{sec:scenarios}. The velocity of the NS relative to the envelope, $v_{\rm rel}$, will be about the Keplerian velocity. The mass accretion rate is estimated as
\begin{equation}
\dot M_{\rm acc} \simeq \pi R^2_{\rm acc} \rho (r) v_{\rm rel},
\label{eq:dotMacc1}
\end{equation}
where $\rho(r)$ is the envelope density at the location of the NS, and the accretion radius is given according to the Bondi paradigm as
\begin{equation}
R_{\rm acc} = \frac {2 G M_{\rm NS}}{v^2_{\rm rel} + c^2_s},
\label{eq:Racc}
\end{equation}
where $c_s(r)$ is the sound speed in the envelope and $M_{\rm NS}$ is the mass of the NS. As the envelope might rotate, the relative velocity is somewhat smaller than the orbital velocity of the NS. For the purpose of the present study we neglect the sound speed in equation (\ref{eq:Racc}); this increases the accretion radius. For estimating the relative velocity between the NS and the envelope we take the orbital velocity to be that of a circular orbit and neglect the rotation of the envelope; doing so decreases the accretion radius. Namely, these two assumptions affect the accretion radius in opposite ways and approximately counterbalance each other, simplifying the calculation we perform. Substituting the orbital velocity for a massive star with mass $M_1 \gg M_{\rm NS}$, we derive a simple expression for the accretion rate. We scale the quantities with typical values, and derive
\begin{eqnarray}
\begin{aligned}
\dot M_{\rm acc} & \simeq 0.18
\left( \frac{M_{\rm NS}}{0.1 M_1} \right)^{2}
\left( \frac{r}{2 ~\rm{au}} \right)^{2} \\
& \times
\left( \frac{\rho(r)}{10^{-8} ~\rm{g} ~\rm{cm}^{-3}} \right)
\left( \frac{v_{\rm rel}}{100 ~\rm{km} ~\rm{s}^{-1}} \right)
~\rm{M_{\sun}} ~\rm{yr}^{-1} .
\label{eq:dotMacc2}
\end{aligned}
\end{eqnarray}
Over one orbit at this rate the NS accretes a mass of
\begin{eqnarray}
\begin{aligned}
& M_{\rm acc} ({\rm orbit}) \simeq 8 \pi^2
\left( \frac{M_{\rm NS}}{M_1} \right)^{2} r^3 \rho (r) \\
& = 0.11
\left( \frac{M_{\rm NS}}{0.1 M_1} \right)^{2}
\left( \frac{r}{2 ~\rm{au}} \right)^{3}
\left( \frac{\rho(r)}{10^{-8} ~\rm{g} ~\rm{cm}^{-3}} \right) ~\rm{M_{\sun}}.
\label{eq:MaccOrb}
\end{aligned}
\end{eqnarray}
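The numerical coefficients in equations (\ref{eq:dotMacc2}) and (\ref{eq:MaccOrb}) can be recovered directly from the fiducial values. The following is an illustrative sketch in Python (cgs units), assuming, as in the text, that $v_{\rm rel}^2 = G M_1 / r$ so that $R_{\rm acc} = 2 (M_{\rm NS}/M_1) r$:

```python
import math

# cgs constants
Msun = 1.989e33    # solar mass [g]
au   = 1.496e13    # astronomical unit [cm]
yr   = 3.156e7     # year [s]

# Fiducial values from equations (3) and (4)
q    = 0.1         # mass ratio M_NS / M_1
r    = 2.0 * au    # orbital radius
rho  = 1.0e-8      # envelope density [g cm^-3]
vrel = 1.0e7       # relative velocity, 100 km/s [cm s^-1]

# With v_rel^2 = G M_1 / r, the accretion radius is R_acc = 2 q r
R_acc = 2.0 * q * r

# Equation (\ref{eq:dotMacc2}): Mdot = pi R_acc^2 rho v_rel
mdot = math.pi * R_acc**2 * rho * vrel            # [g s^-1]
mdot_msun_yr = mdot * yr / Msun
print(f"Mdot_acc ~ {mdot_msun_yr:.2f} Msun/yr")   # 0.18

# Equation (\ref{eq:MaccOrb}): mass accreted over one orbit, P = 2 pi r / v_rel
M_orbit = mdot * (2.0 * math.pi * r / vrel) / Msun
print(f"M_acc(orbit) ~ {M_orbit:.2f} Msun")       # 0.11
```

Both printed values agree with the quoted coefficients to within rounding.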
We expect the envelope to have a shallow density profile when a pre-explosion star experiences a rapid expansion. For example, in the model that \cite{McleySoker2014} studied, the density profile after the expansion can be approximated as
\begin{equation}
\rho (r) \approx 3 \times 10^{-9} \left( \frac{r}{4 ~\rm{au}} \right)^{-\beta} ~\rm{g} ~\rm{cm}^{-3},
\label{eq:rho}
\end{equation}
with $\beta \approx 1$. For a density profile with $\beta =1$ and for a NS reaching $r=0.7 R_1$, where $R_1$ is the radius of the supergiant, the envelope mass outside the radius $r$ is $M_{\rm e,out}\simeq 6.5 \rho (r) r^3$. As the NS orbits in the outer envelope, the jets that are launched by the NS will not interact directly with the envelope mass along the primary star's polar directions \citep*{Shiberetal2017}. The jets interact with a fraction $\epsilon_e$ of the envelope mass
\begin{equation}
M_{\rm e,int} \simeq 5 \epsilon_e \rho (r) r^3.
\label{eq:Meint}
\end{equation}
Note that the density profile as given by equation (\ref{eq:rho}) is for an inflated envelope caused by a short disturbance in the stellar interior. The density profile in the undisturbed envelope is much steeper (see section \ref{sec:mesamodels}).
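The estimate $M_{\rm e,out} \simeq 6.5 \rho(r) r^3$ quoted above for $\beta = 1$ and $r = 0.7 R_1$ can be checked by integrating the shell mass $4 \pi r'^2 \rho(r')$ from $r$ to $R_1$. A minimal numerical sketch (working in units of $R_1 = 1$, so only the dimensionless coefficient matters):

```python
import math

# Shallow inflated-envelope profile, rho ~ r^-beta with beta = 1 (eq. 5)
beta = 1.0
R1 = 1.0          # normalize the stellar radius to 1
r0 = 0.7 * R1     # location of the NS

def rho(r, rho0=1.0):
    # normalized so that rho(r0) = rho0
    return rho0 * (r / r0) ** (-beta)

# M_out = integral_{r0}^{R1} 4 pi r'^2 rho(r') dr'  (trapezoidal rule)
N = 20000
xs = [r0 + (R1 - r0) * i / N for i in range(N + 1)]
fs = [4.0 * math.pi * x**2 * rho(x) for x in xs]
M_out = sum(0.5 * (fs[i] + fs[i + 1]) * (xs[i + 1] - xs[i]) for i in range(N))

coeff = M_out / (rho(r0) * r0**3)
print(f"M_e,out ~ {coeff:.2f} rho(r) r^3")   # ~6.5
```

The numerical coefficient is $\simeq 6.54$, consistent with the value of $6.5$ used in the text.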
We estimate the value of $\epsilon_e$ as follows. Since the NS moves through the envelope, the jets it launches do not punch a hole through the envelope and escape unimpeded. The jets are shocked and inflate large hot low-density bubbles, as seen in 3D hydrodynamical simulations of jets in common envelope evolution and of grazing envelope evolution (e.g., \citealt{Sokeretal2013, Shiberetal2017, LopezCamaraetal2018, ShiberSoker2018}). As the inflated bubbles make their way out of the envelope they interact with most of the envelope material in the region from the jets' origin (which changes its position) to the surface, including even some envelope gas near the equatorial plane. The condition for a jet to inflate a bubble rather than to readily escape is that its axis changes its location and/or direction on a timescale shorter than the time it takes for the jet to penetrate out from the envelope (e.g., \citealt{Soker2016} for review). This condition is met here owing to the orbital motion of the NS. As the NS makes about half an orbit inside the giant, and because the bubble interacts with some material inward to its orbit, we approximately take $\epsilon_e \approx 0.5$.
Let a fraction $\epsilon_j$ of the accreted mass be launched in the jets at a velocity of $v_j$. Using equations (\ref{eq:MaccOrb}) and (\ref{eq:Meint}), we find the ratio of the mass in the jets to that in the envelope it interacts with to be
\begin{eqnarray}
\begin{aligned}
\frac{M_j}{M_{\rm e,int}} & \simeq
\frac {8 \pi^2 \epsilon_j
\left( M_{\rm NS}/M_1 \right)^{2} } {5 \epsilon_e }\\
& \simeq 0.03
\left( \frac{M_{\rm NS}}{0.1 M_1} \right)^{2}
\left( \frac{\epsilon_j}{0.1} \right)
\left( \frac{\epsilon_e}{0.5} \right)^{-1} .
\label{eq:MjMeint}
\end{aligned}
\end{eqnarray}
Conservation of energy implies that the jets eject the envelope with a typical velocity of
\begin{eqnarray}
\begin{aligned}
v_{\rm e,ej} & \approx
\left( \frac{M_j}{M_{\rm e,int}} \right)^{1/2} v_j
\simeq 1.7 \times 10^4
\left( \frac{M_{\rm NS}}{0.1 M_1} \right)
\\
& \times
\left( \frac{\epsilon_j}{0.1} \right)^{1/2}
\left( \frac{\epsilon_e}{0.5} \right)^{-1/2}
\left( \frac{v_j}{10^5 ~\rm{km} ~\rm{s}^{-1}} \right) ~\rm{km} ~\rm{s}^{-1}.
\label{eq:vjet}
\end{aligned}
\end{eqnarray}
This relation holds as long as the NS does not begin a second orbit and interact again with the same envelope region as before, that is, for less than one orbit, as occurs for example in a periastron passage. In the case of a periastron passage the NS crosses a distance shorter than the full circumference, $\delta 2 \pi r_p$, with $\delta < 1$. The interaction lasts for a duration of
\begin{eqnarray}
\begin{aligned}
\tau_{\rm int} &\approx \frac{\delta 2 \pi r_p}{v_{\rm rel}}
= 2 \left( \frac{\delta}{0.3} \right) \\
& \times
\left( \frac{r_p}{2 ~\rm{au}} \right)
\left( \frac{v_{\rm rel}}{100 ~\rm{km} ~\rm{s}^{-1}} \right)^{-1}
{\rm months},
\label{eq:tauint}
\end{aligned}
\end{eqnarray}
where $r_p$ is the orbital separation at periastron. The jets remove envelope mass from the vicinity of the NS and reduce the accretion rate, or even stop the accretion entirely. Namely, the interaction operates in a negative feedback mechanism. Therefore, the interaction time can be shorter than the value given by equation (\ref{eq:tauint}), and the typical ejection velocity somewhat smaller than that given by equation (\ref{eq:vjet}).
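As a numerical cross-check of equations (\ref{eq:MjMeint})--(\ref{eq:tauint}), the quoted coefficients can be recovered directly from the fiducial values; a short illustrative Python sketch:

```python
import math

au    = 1.496e13          # astronomical unit [cm]
month = 30.0 * 86400.0    # [s]

eps_j = 0.1    # fraction of accreted mass launched in the jets
eps_e = 0.5    # fraction of the envelope mass the jets interact with
q     = 0.1    # mass ratio M_NS / M_1
v_j   = 1.0e5  # jet terminal velocity [km s^-1]

# Equation (\ref{eq:MjMeint}): jet-to-interacting-envelope mass ratio
ratio = 8.0 * math.pi**2 * eps_j * q**2 / (5.0 * eps_e)
print(f"M_j/M_e,int ~ {ratio:.3f}")            # ~0.03

# Equation (\ref{eq:vjet}): typical ejection velocity of the envelope
v_ej = math.sqrt(ratio) * v_j
print(f"v_e,ej ~ {v_ej:.2e} km/s")             # ~1.7e4 km/s

# Equation (\ref{eq:tauint}): duration of a periastron interaction
delta, r_p, v_rel = 0.3, 2.0 * au, 1.0e7       # v_rel = 100 km/s in cgs
tau = delta * 2.0 * math.pi * r_p / v_rel
print(f"tau_int ~ {tau / month:.1f} months")   # ~2
```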
The energy carried by the jets is
\begin{equation}
E\left(r\right) = \frac{1}{2} \epsilon_j \dot{M}_\mathrm{acc}\left(r\right) \tau_\mathrm{int} \left(r\right) v_j^2.
\label{eq:Emodels}
\end{equation}
Substituting equations (\ref{eq:MaccOrb}) and (\ref{eq:tauint}), we find for the total energy carried by the jets
\begin{eqnarray}
\begin{aligned}
E\left(r\right) &\simeq 3 \times 10^{50}
\left( \frac{\delta}{0.3} \right)
\left( \frac{\epsilon_j}{0.1} \right)
\\ & \times
\left( \frac{v_j}{10^5 ~\rm{km} ~\rm{s}^{-1}} \right) ^2
\left( \frac{M_{\rm NS}}{0.1 M_1} \right)^{2}
\\ & \times
\left( \frac{r_p}{2 ~\rm{au}} \right)^{3}
\left( \frac{\rho (r)}{10^{-8} ~\rm{g} ~\rm{cm}^{-3}} \right) ~\rm{erg}.
\label{eq:Eoutburst}
\end{aligned}
\end{eqnarray}
This energy is the major contribution to the outburst energy, e.g., much larger than the binding energy of the envelope mass that is removed, and hence we consider it to be about the ILOT energy, $E_{\rm ILOT} \simeq E\left(r\right)$.
In our rudimentary derivations in this section we neglect the negative feedback mechanism through which the jets interact with the ambient medium. As a result, we somewhat overestimate the interaction time and the outburst (ILOT) energy.
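Multiplying out the fiducial values confirms the $3 \times 10^{50} ~\rm{erg}$ coefficient in equation (\ref{eq:Eoutburst}); a minimal sketch:

```python
import math

au = 1.496e13      # astronomical unit [cm]

eps_j = 0.1        # jet mass fraction
delta = 0.3        # fraction of the circumference crossed at periastron
q     = 0.1        # mass ratio M_NS / M_1
r_p   = 2.0 * au   # periastron separation
rho   = 1.0e-8     # envelope density [g cm^-3]
v_rel = 1.0e7      # 100 km/s [cm s^-1]
v_j   = 1.0e10     # 10^5 km/s [cm s^-1]

# Mdot from equation (\ref{eq:dotMacc1}) with R_acc = 2 q r_p,
# and tau_int from equation (\ref{eq:tauint}):
mdot = math.pi * (2.0 * q * r_p)**2 * rho * v_rel
tau  = delta * 2.0 * math.pi * r_p / v_rel

# Equation (\ref{eq:Emodels}): total energy carried by the jets
E = 0.5 * eps_j * mdot * tau * v_j**2
print(f"E ~ {E:.1e} erg")   # ~3e50
```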
An important question is whether the estimated accretion rate (equation \ref{eq:dotMacc2}) is reasonable, and also how jets are launched from the disc. According to \cite{Chevalier1993}, the trapped radiation cannot prevent the accretion rate from reaching the high values required for neutrino cooling, and this is supported by the simulations of \cite{Fryeretal1996}. Furthermore, jets help the accretion by taking away high-entropy material and angular momentum. The recent hydrodynamical common envelope simulations of \cite{Chamandyetal2018} show that if jets (as claimed by \citealt{Shiberetal2016}) or another process remove energy from near the mass-accreting star, then accretion can proceed at very high, i.e., super-Eddington, rates. Another issue is the requirement that the accretion shock radius be smaller than the sonic radius of the Bondi accretion flow (see \citealt{BarkovKomissarov2011}), although we note that the shock radius can be smaller than analytic estimates, due to the removal of angular momentum and high-entropy gas by the jets. Finally, we assume that neutrino cooling does not take away all of the accretion energy, as material outflow must remove angular momentum from the accretion disc. We assume that, similar to other astrophysical objects accreting from a disc, the bipolar outflow will be at about the escape velocity and will carry $\approx 10 \%$ of the accreted mass, i.e., $\epsilon_j \approx 0.1$, as implied in our scaling of equation (\ref{eq:Eoutburst}). The full details of accretion with neutrino cooling and jet launching will have to be studied in demanding future simulations.
\section{Application to supergiant models}
\label{sec:mesamodels}
To further demonstrate our proposed scenarios, we use values in the envelopes of stellar models evolved with Modules for Experiments in Stellar Astrophysics (\texttt{MESA} version 10108; \citealt{Paxtonetal2011,Paxtonetal2013,Paxtonetal2015,Paxtonetal2018}). The models are non-rotating and have a metallicity of $Z=0.02$. Mixing processes include convection according to the Mixing-Length Theory \citep{BohmVitense1958} with $\alpha_\mathrm{MLT}=1.5$, semiconvective mixing \citep{Langer1983, Langer1991} with $\alpha_\mathrm{sc}=0.1$, and exponential convective overshooting is applied as in \cite{Herwig2000} above and below non-burning and hydrogen-burning regions (with the fraction of the pressure scale height for the decay scale of $f=0.016$). We evolve two masses, $M_\mathrm{ZAMS}=15 ~\rm{M_{\sun}}$ and $M_\mathrm{ZAMS}=40 ~\rm{M_{\sun}}$, up to the stage of core carbon depletion. Mass loss is according to \cite{Vink2001} for the MS phase, and according to \cite{deJager1988} during the evolved supergiant phase, and we apply a multiplicative factor $\eta$ to the mass loss at all times (see, e.g., \citealt{Smith2014,Renzo2017}, on mass loss in massive stars). We use $\eta=0.33$ and $\eta=1$, for a total of four models, which we present in Fig. \ref{fig:hr}.
\begin{figure}
\includegraphics[width=0.5\textwidth]{HR.eps}
\caption{Hertzsprung-Russell diagrams of the four supergiant star models we study. Hexagram symbols mark core hydrogen depletion, square symbols the depletion of helium in the core, and the depletion of carbon in the core is marked by diamond symbols.}
\label{fig:hr}
\end{figure}
In Fig. \ref{fig:profiles} we present the density profiles of the four models at the stage where carbon is depleted in the core. The two models with $M_\mathrm{ZAMS}=15 ~\rm{M_{\sun}}$ are red supergiants (RSG) at this stage, while the models with $M_\mathrm{ZAMS}=40 ~\rm{M_{\sun}}$ evolve into a yellow supergiant (YSG) for $\eta=0.33$, and a blue supergiant (BSG) for $\eta=1$. It can be seen in Fig. \ref{fig:profiles} that the BSG has a steeper density decline in the outer envelope compared to the other models.
\begin{figure}
\includegraphics[width=0.5\textwidth]{DensityProfile.eps}
\caption{Density profiles of the four models, at the stage of core carbon depletion. Black circles mark the transition from the core to the envelope, where the hydrogen fraction drops below $0.3$.}
\label{fig:profiles}
\end{figure}
To estimate the characteristics of an outburst powered by a NS interacting with the modeled envelopes, we proceed as follows. We start from equation (\ref{eq:Racc}) where we neglect the sound speed and substitute there and in equation (\ref{eq:dotMacc1}) ${v_\mathrm{rel}=\left(G m_r/r\right)^{1/2}}$ for the NS-envelope relative velocity, where $m_r$ is the mass of the supergiant inner to radius $r$. We derive the following equation for the accretion rate onto the NS
\begin{equation}
\dot{M}_\mathrm{acc}\left(r\right) = 4 \pi \left(\frac{M_\mathrm{NS}}{m_r}\right)^2 r^2 \rho \left(r\right) \left(\frac{G m_r}{r}\right)^{1/2}.
\label{eq:mdotmodels}
\end{equation}
Substituting the relative velocity in equation (\ref{eq:tauint}) and taking for the reduced interaction time there $\delta\left(r\right)=1-r/R_1$, we find
\begin{equation}
\tau_\mathrm{int} \left(r\right) = 2 \pi \left( 1-\frac{r}{R_1} \right)
\left(\frac{r^3}{G m_r}\right)^{1/2}.
\label{eq:tauintmodels}
\end{equation}
Taking for the mass in the jets $M_j=\epsilon_j \dot{M}_\mathrm{acc} \tau_\mathrm{int}$ and for the envelope mass that the jets interact with $M_\mathrm{e,int}=\epsilon_e \left( M_1 - m_r \right)$, and substituting in equation (\ref{eq:vjet}), yields the following expression for the typical velocity of the ejected envelope
\begin{equation}
v_\mathrm{e,ej} \left(r\right) \simeq \left[\frac{\epsilon_j \dot{M}_\mathrm{acc} \left(r\right) \tau_\mathrm{int}\left(r\right)}{\epsilon_e \left( M_1 - m_r \right)}\right]^{1/2} v_j.
\label{eq:veejmodels}
\end{equation}
Finally, for the outburst energy we use equation (\ref{eq:Emodels}).
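Equations (\ref{eq:mdotmodels})--(\ref{eq:veejmodels}), together with equation (\ref{eq:Emodels}), can be assembled into a short routine. The sketch below is illustrative only: the envelope values ($M_1$, $m_r$, $R_1$, $\rho$) are assumed round numbers loosely representative of an RSG model, not the actual \texttt{MESA} profiles.

```python
import math

# cgs constants
G, Msun, au, yr = 6.674e-8, 1.989e33, 1.496e13, 3.156e7

def outburst(M_NS, m_r, M1, R1, r, rho, eps_j=0.1, eps_e=0.5, v_j=1.0e10):
    """Outburst characteristics for a NS at radius r inside the envelope.

    All inputs in cgs. Returns (Mdot [Msun/yr], tau [s], v_ej [km/s], E [erg]).
    """
    v_rel = math.sqrt(G * m_r / r)
    # equation (\ref{eq:mdotmodels}): accretion rate
    mdot = 4.0 * math.pi * (M_NS / m_r)**2 * r**2 * rho * v_rel
    # equation (\ref{eq:tauintmodels}): interaction time
    tau = 2.0 * math.pi * (1.0 - r / R1) * math.sqrt(r**3 / (G * m_r))
    # equation (\ref{eq:veejmodels}): typical ejecta velocity
    v_ej = math.sqrt(eps_j * mdot * tau / (eps_e * (M1 - m_r))) * v_j
    # equation (\ref{eq:Emodels}): outburst energy
    E = 0.5 * eps_j * mdot * tau * v_j**2
    return mdot * yr / Msun, tau, v_ej / 1.0e5, E

# Assumed RSG-like values (illustrative, not taken from the MESA output):
mdot, tau, v_ej, E = outburst(M_NS=1.4 * Msun, m_r=11.0 * Msun,
                              M1=12.0 * Msun, R1=3.7 * au,
                              r=0.7 * 3.7 * au, rho=1.0e-8)
print(f"Mdot ~ {mdot:.2f} Msun/yr, tau ~ {tau / 86400:.0f} d, "
      f"v_ej ~ {v_ej:.0f} km/s, E ~ {E:.1e} erg")
```

With these assumed values the routine returns an accretion rate of $\sim 0.3 ~\rm{M_{\sun}} ~\rm{yr}^{-1}$, an interaction time of a few months, and an ejecta velocity within the range discussed below; the energy for such a deep plunge exceeds $10^{51} ~\rm{erg}$, illustrating the overestimate expected before the negative feedback is accounted for.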
We first apply equation (\ref{eq:mdotmodels}) to our stellar models, and present the results in Fig. \ref{fig:accretion}. We note that the passage of the NS through the envelope will not be in a circular trajectory, and therefore not at constant $r$. However, as we see in Fig. \ref{fig:accretion}, the accretion rate is not very sensitive to the depth within the envelope from which we take the values for equation (\ref{eq:mdotmodels}). We see that $\dot{M}_\mathrm{acc}\left(r\right)>10^{-3} ~\rm{M_{\sun}} ~\rm{yr}^{-1}$ for essentially all values of $r$ in all models, as required for efficient cooling by neutrinos.
\begin{figure}
\includegraphics[width=0.5\textwidth]{AccretionRate.eps}
\caption{The accretion rate calculated by equation (\ref{eq:mdotmodels}) for four different stellar models (solid lines), for a NS moving inside the envelope at an orbital separation of $0.5 R_1< r <0.95 R_1$. We also present by the dashed lines the accretion rates when the sound speed is considered in the expression for the accretion radius (see text).}
\label{fig:accretion}
\end{figure}
We also checked the effect of not neglecting the sound speed $c_s$ in the expression for the accretion radius (equation \ref{eq:Racc}) that we used in the derivation of equation (\ref{eq:mdotmodels}). As shown in Fig. \ref{fig:accretion}, this has a limited effect. Furthermore, the uncertainty in the relative velocity due to the rotation of the supergiant might bring the accretion rate back up to around the values calculated without taking $c_s$ in the derivation.
In Fig. \ref{fig:interactiontime} we show the duration of the interaction of the NS with the supergiant envelope, calculated using equation (\ref{eq:tauintmodels}). For the models and parameters we employ, the interaction times range from days to several months. For the RSG and YSG models the interaction time when the NS does not get deep into the envelope is about a month. The interaction can last for about half a year when the NS dives deep into the envelope. For the BSG model the interaction lasts only days to a few weeks, due to its smaller size. As mentioned in section \ref{sec:parameters}, the duration might be overestimated in all cases due to the feedback nature of the interaction.
\begin{figure}
\includegraphics[width=0.5\textwidth]{InteractionTime.eps}
\caption{The interaction time according to equation (\ref{eq:tauintmodels}) for our four different stellar models and for periastron orbital separation of $0.5 R_1 < r <0.95 R_1$.}
\label{fig:interactiontime}
\end{figure}
In Fig. \ref{fig:vejecta} we show the estimated velocity of the ejecta from the interaction, using equation (\ref{eq:veejmodels}), with $\epsilon_j=0.1$, $\epsilon_e=0.5$ and $v_j=10^5 ~\rm{km~s^{-1}}$. The range of velocities is between $4\,000 ~\rm{km~s^{-1}}$ and $16\,000 ~\rm{km~s^{-1}}$, differing between stellar models. The sensitivity to $r$ in each model is not large. Taking into account the sound speed $c_s$ in the expression for the accretion radius changes the velocities somewhat. We expect realistic values to be between those calculated with and without the inclusion of $c_s$.
\begin{figure}
\includegraphics[trim= 0.3cm 1.1cm 1.1cm 1.1cm,clip=true,width=1.0\columnwidth]{EjectaVelocity.eps}
\caption{The ejecta velocity calculated for four different stellar models, and using equation (\ref{eq:veejmodels}) with $\epsilon_j=0.1$, $\epsilon_e=0.5$ and $v_j=10^5 ~\rm{km~s^{-1}}$ (solid lines), as function of the orbital separation in the range of $0.5 R_1 < r <0.95 R_1$. The effect of taking into account also the sound speed is shown in the dashed lines.}
\label{fig:vejecta}
\end{figure}
In Fig. \ref{fig:energy} we show the outburst energy estimated using equation (\ref{eq:Emodels}), with $\epsilon_j=0.1$ and $v_j=10^5 ~\rm{km~s^{-1}}$. Similar to our estimation of the interaction duration (Fig. \ref{fig:interactiontime}), we somewhat overestimate the outburst energy due to the negative feedback mechanism through which the jets interact with the ambient gas \citep{Soker2016}. The very high values of $E>10^{51} ~\rm{erg}$ are therefore not realistic. Outburst energies of a few times $10^{50} ~\rm{erg}$, though, are reasonable.
\begin{figure}
\includegraphics[width=0.5\textwidth]{OutburstEnergy.eps}
\caption{The outburst energy according to equation (\ref{eq:Emodels}) with $\epsilon_j=0.1$ and $v_j=10^5 ~\rm{km~s^{-1}}$ (solid lines), for our four stellar models, and in the range $0.5 R_1 < r <0.95 R_1$. The effect of taking into account also the sound speed is shown in the dashed lines.}
\label{fig:energy}
\end{figure}
The results we show in this section are for the stage at which carbon is depleted in the core, just several years before the final collapse of the iron core. We also checked an earlier stage, that of core helium depletion, which is several thousands of years earlier for the $M_\mathrm{ZAMS}=40 ~\rm{M_{\sun}}$ models, and a few tens of thousands of years earlier for the $M_\mathrm{ZAMS}=15 ~\rm{M_{\sun}}$ models. We found very small quantitative differences in $\dot{M}_\mathrm{acc}$, $\tau_\mathrm{int}$, $v_\mathrm{e,ej}$ and $E$. Therefore, our results are not sensitive to the precise evolutionary stage of the supergiant.
The passage of the NS through the envelope of the giant will result in changes to the orbital parameters, with consequences also for subsequent such passages. For the purpose of demonstration, we take a typical accretion rate of ${ \dot{M}_\mathrm{acc} \simeq 0.3 ~\rm{M_{\sun}} ~\rm{yr}^{-1} }$ (see Fig. \ref{fig:accretion}) and an interaction time of $\tau_\mathrm{int} \simeq 0.5 ~\rm{yr}$ (see Fig. \ref{fig:interactiontime}), and consider a plunge to a depth of $r \simeq 3 ~\rm{au}$. The accreted mass is then ${ M_\mathrm{acc} \simeq 0.15 ~\rm{M_{\sun}} \simeq 0.1 M_{\rm NS} }$, similar to the scaled accretion mass in equation (\ref{eq:MaccOrb}). From angular momentum conservation, the accretion of such a mass (with zero angular momentum) reduces the semi-major axis of the orbit by about $20 \%$. Since we expect the envelope to rotate in the same direction as the orbital motion of the secondary star, the accreted gas has some angular momentum, which reduces the effect of the accretion on the orbit. Further offsetting the decrease of the semi-major axis, the jets remove a gas mass of about 30 times their own mass from the envelope (equation \ref{eq:MjMeint}), or about 3 times the accreted mass. Overall we estimate the orbit to shrink by about $10 \%$ in one passage.
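The $\approx 20\%$ estimate follows from conservation of the orbital angular momentum $L = M_{\rm NS} \sqrt{G M_1 a}$ while the NS mass grows; a minimal sketch of this scaling:

```python
# Accretion of mass dM carrying zero angular momentum, at fixed orbital
# angular momentum L: since L = M_NS * sqrt(G * M1 * a), the semi-major
# axis scales as a_new / a_old = (M_NS / (M_NS + dM))**2.
M_NS = 1.4   # NS mass [Msun]
dM   = 0.15  # accreted mass [Msun], ~0.1 M_NS as in the text

a_ratio = (M_NS / (M_NS + dM))**2
shrink = 1.0 - a_ratio
print(f"semi-major axis shrinks by ~{100 * shrink:.0f}%")   # ~18%, i.e. about 20%
```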
Another effect which might change the orbit of the NS is that of dynamical friction. We estimate the gravitational drag force (see, e.g., \citealt{Ostriker1999}) as ${ F_\mathrm{dyn} \approx 4 \pi G^2 M_\mathrm{NS}^2 \rho / v_\mathrm{rel}^2 }$, and then the relative velocity change ${ \delta v / v \approx (F_\mathrm{dyn} / M_\mathrm{NS}) \tau_\mathrm{int} \left( r \right) / v_\mathrm{rel} }$ is about a few percent for most of the parameter range considered, and up to $ \approx 20 \% $ for the deepest plunges. This is similar to the kinematic effect of the mass transfer.
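As an illustration of the drag estimate, the sketch below evaluates $\delta v / v$ for assumed round values of the local density, relative velocity, and interaction time (illustrative numbers, not taken from the stellar models):

```python
import math

G, Msun = 6.674e-8, 1.989e33   # cgs

M_NS  = 1.4 * Msun   # NS mass [g]
rho   = 1.0e-8       # local envelope density [g cm^-3], assumed
v_rel = 6.0e6        # relative velocity [cm s^-1], ~60 km/s, assumed
tau   = 1.2e7        # interaction time [s], a deep plunge, assumed

# Gravitational drag force (Ostriker 1999, order-of-magnitude form):
F_dyn = 4.0 * math.pi * G**2 * M_NS**2 * rho / v_rel**2

# Fractional velocity change over one passage:
dv_over_v = (F_dyn / M_NS) * tau / v_rel
print(f"dv/v ~ {dv_over_v:.2f}")   # ~0.09 for these values
```

For shallower passages (shorter $\tau$, lower $\rho$) the same expression gives a few per cent, consistent with the range quoted above.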
The effects described above cause a moderate decrease in the orbital separation, so that the NS can survive several such passages inside the envelope, with some sensitivity to uncertainties in the binary evolution. For example, the basic expectation is that the time between successive outbursts will decrease due to accretion. However, if we consider that the interaction might have been triggered by some instability that caused the inflation of the envelope, then the binding energy of the disturbed envelope might be very low. Both the jets launched by the NS inside the envelope, and the stellar wind after the NS exits the envelope following its eccentric orbit, might remove large amounts of mass such that the orbital period might even increase. This is similar to the case of the orbital separation increasing in the grazing envelope evolution \citep{Soker2017}.
\section{SN 2009\lowercase{ip}}
\label{sec:sn2009ip}
\subsection{Observational properties}
The supernova impostor SN~2009ip first erupted in 2009, and was soon discovered to be an impostor of LBV origin, rather than a supernova \citep{Maza2009,Berger2009}. The LBV, located in the spiral galaxy NGC~7259, showed a series of outbursts, the first in 2009 and the last in 2012 (e.g., \citealt{Drake2012,Mauerhanetal2013,Pastorello2013,Levesqueetal2014}). The outbursts showed an increase by $\approx 3$--$4$ magnitudes in the V~band in Sep. 2011 and Aug. 2012 (hereafter outburst 2012a), followed by an increase of $\approx 7$ magnitudes in Sep. 2012 (hereafter outburst 2012b). The peak bolometric luminosity of the 2012b outburst was initially estimated to be $L_{\rm p} = 8 \times 10^{42} ~\rm{erg} ~\rm{s}^{-1}$ \citep{Pastorello2013}.
Assuming the erupting star was a non-rotating LBV, \cite{Foley2011} suggested that the ZAMS mass of the erupting star was $M_1 \geq 60 ~\rm{M_{\sun}}$. Assuming a rotating LBV at $40\%$ of its critical velocity, \cite{Marguttietal2014} gave an estimate of $M_1 = 45$--$85 ~\rm{M_{\sun}}$. A later estimate based on multi-spectral observations of the outburst updated the value to $L_{\rm p} = 1.2 \times 10^{43} ~\rm{erg} ~\rm{s}^{-1}$ \citep{Marguttietal2014}. Consequently, the bolometric energy radiated during the outbursts was found to be $E_{{\rm rad,}a}=(1.5 \pm 0.4) \times 10^{48} ~\rm{erg}$ for the 2012a outburst, and $E_{{\rm rad,}b}=(3.2 \pm 0.3) \times 10^{49} ~\rm{erg}$ for the 2012b outburst \citep{Fraseretal2013,Marguttietal2014}. The total energy involved in each of the outbursts is a few times larger than this value and was estimated to be $E_{{\rm tot,}a}=(2 \pm 1) \times 10^{48} ~\rm{erg}$ and $E_{{\rm tot,}b}=(7.5 \pm 2.5) \times 10^{49} ~\rm{erg}$ \citep*{Kashietal2013, Marguttietal2014}.
\cite{Marguttietal2014} suggested that most of the energy radiated in the large 2012b peak came from the kinetic energy of the material ejected during the 2012a outburst. Calibrating the ejecta mass with $\approx 0.5 ~\rm{M_{\sun}}$, \cite{Marguttietal2014} found the total energy of the outbursts to be $E_{{\rm tot}}\approx 10^{50} ~\rm{erg}$. An important characteristic of SN~2009ip which is relevant to our present study is the high ejecta velocity of the 2011 eruption (up to $\approx 13\,000 ~\rm{km~s^{-1}}$; \citealt{Pastorello2013}) and of the 2012a event \citep{SmithMauerhan2012, Mauerhanetal2013}.
\cite{Mauerhanetal2014} observed SN~2009ip during the 2012a outburst and found that the spectrum showed broad P~Cyg lines. They found polarization that suggests substantial asphericity for the 2012a outflow. The degree of polarization increased during the 2012b event, from which \cite{Mauerhanetal2014} concluded a higher degree of asphericity than in 2012a. The asymmetry was later confirmed by observations of \cite{Reillyetal2017}.
\cite{Fraseretal2015} followed the decline of the light curve in 2013--2014, and found that its slope was considerably shallower than expected from nuclear decay slopes of CCSNe. From the spectroscopic and photometric evolution until 820 days after the initiation of the 2012a event, they found no evidence that a CCSN had occurred. \cite{Grahametal2014} and \cite{Grahametal2017} also presented observations of the late evolution of the light curve (the latter up to 1000 days post-eruption). They found that the light curve is still decreasing at a linear rate, an expected behavior for eruptions interacting with circumstellar material (CSM). They also compared late spectra of SN~2009ip to various SNe and SN impostors. They could not conclusively tell whether the interaction with the material is the result of an impostor or a real supernova, but found that it somewhat better matches a real supernova.
\subsection{Previously proposed models}
\cite{Ouyedetal2013} attributed the 2012a outburst to a standard CCSN, and the 2012b outburst to a dual-shock quark-nova. \cite{Mauerhanetal2013} proposed a second scenario, suggesting that the 2012a event was a terminal supernova explosion, and that the 2012b outburst was the result of a collision of fast supernova ejecta from the 2012a outburst with slower gas ejected earlier. The same scenario was also favored by \cite{Prieto2013}. \cite{Marguttietal2014} attributed the 2012b brightening to an explosive shock breakout coming from an interaction between the explosive ejection of the LBV envelope taking place $\approx 20$--$24$ days before the 2012b peak, and shells of material ejected during the 2012a eruption. The results of \cite{Marguttietal2014} disqualify the \cite{Mauerhanetal2013} scenario. The reason, as noted by \cite{Marguttietal2014}, is that the photosphere expansion velocity of $\approx 4500 ~\rm{km} ~\rm{s}^{-1}$ during the 2012b outburst implies that the gas that accelerated the photosphere originated long after the peak of the 2012a event. Namely, the gas was ejected long after the star had ceased to exist according to \cite{Mauerhanetal2013}.
Another scenario favored core instability of a single star that leads to the ejection of shells \citep{Pastorello2013}. A different scenario was suggested by \cite{SokerKashi2013}, who compared the 2012a and 2012b outbursts to the outburst of the ILOT V838~Mon. The latter ILOT is composed of three shell-ejection episodes as a result of a stellar merger event \citep{Tylenda2005}. The ejection of separate shells in the 2012a and 2012b outbursts supports the binary scenario proposed by \cite{SokerKashi2013}, who suggested that SN~2009ip was a massive binary system with an LBV of $M_1=60$--$100 ~\rm{M_{\sun}}$ and a MS companion of $M_2=0.2$--$0.5 M_1$ in an eccentric orbit.
\cite{Kashietal2013} proposed that the major 2012 outburst was powered by an extended and repeated interaction between the LBV and a more compact (MS or Wolf-Rayet star) companion in an eccentric orbit. During the first periastron passage, the companion accreted $2$--$5 ~\rm{M_{\sun}}$ from the LBV envelope. The accreted gas released gravitational energy which can account for the total 2012b outburst energy. Also, in the declining light curve of the 2012b outburst \cite{Kashietal2013} noticed two large peaks in which the extra radiation was similar to the 2009--2011 outbursts. \cite{Kashietal2013} interpreted the peaks as resulting from mass ejected during later periastron passages. In that case the inferred orbital period after the large mass accretion is $\approx 25$ days, suggesting that the companion survived the eruption.
In an additional scenario, \cite{Kashietal2013} considered a terminal binary merger event, but one which occurred only after the system had experienced a second periastron passage after the major one. As in the surviving companion scenario, the major interaction that powered the 2012b outburst was powered by mass accretion, which shortened the orbital period. However, in the merger scenario the orbit was shortened even more, and the second periastron passage occurred $\approx 20 ~\rm{days}$ after the first (major) periastron passage. After the second periastron passage the companion plunged too deep into the envelope to eject more gas.
\cite{Levesqueetal2014} found evidence for the existence of a thin disc around the central star, and suggested that a binary companion is also present. They favored a model in which the observed 2012b re-brightening is an illumination of the disc's inner rim by fast-moving ejecta produced by the underlying events of 2012a.
One of the challenges for the binary model is to account for gas moving at $v>10\,000 ~\rm{km~s^{-1}}$ as observed in the 2011 outburst \citep{Pastorello2013} and in the 2012a outburst \citep{Mauerhanetal2013}. \cite{TsebrenkoSoker2013} simulated part of the scenario of \cite{Kashietal2013}, in which jets launched by the accreting companion interact with the environment and account for the high velocity gas. Namely, they numerically studied the propagation of the jets through the extended envelope. They were able to reach the observed velocities but only with a small fraction of the gas, probably smaller than can account for the observations. They also commented that jets launched by a WR companion will be narrower and denser than those launched by a MS star, with a shorter flow time and a longer photon diffusion time, which would allow the acceleration of more mass to higher velocities.
\subsection{Applying our scenario for SN~2009ip}
We examine whether our proposed common envelope jets supernova impostor scenario can account for the observations of SN~2009ip. Namely, we examine the possibility that the 2011 outburst, and possibly the 2012 eruption, of SN~2009ip were supernova impostors driven by jets from a NS companion. The main advantage of jets from a NS companion is that they can account for the velocity of about $13\,000 ~\rm{km} ~\rm{s}^{-1}$ that \cite{Pastorello2013} found in the 2011 outbursts. Here we refer to the 2012a event of SN~2009ip as the supernova, and adopt the idea that the 2012b event is the result of an interaction with the CSM, ejected earlier.
We evolve a \texttt{MESA} model for an LBV starting from $M_{\rm ZAMS} = 110 ~\rm{M_{\sun}}$, and having a mass-loss rate according to the prescription of \cite{Kashietal2016}, that keeps the photosphere temperature at $20\,000 ~\rm{K}$, in accordance with the bi-stability jump. We evolve it until it reaches $M_{\rm LBV} \simeq 80 ~\rm{M_{\sun}}$. As LBVs are hot stars, their typical radius is smaller than that of RSGs. Therefore, the scenario proposed here requires the NS companion to be closer to the LBV than it would have been had the primary star been a RSG.
For our scenario we adopt the parameters developed by \cite{Kashietal2013}. As mentioned above, the mass of the LBV is $M_{\rm LBV} \simeq 80 ~\rm{M_{\sun}}$, the orbital period is taken to be $P \approx 32$ days (note that the number stated earlier, $\approx 25$ days, is the period at the end of the interaction rather than the period before/during the interaction) and the interaction time is $\tau_{\rm int} \approx 8$ days, therefore $\delta \simeq 0.25$ (see equation \ref{eq:tauint}). The eccentricity is taken to be $e \simeq 0.5$ so that the NS reaches a periastron distance of $\approx 0.4 ~\rm{au}$, that is inside the envelope of the LBV whose radius is $R_{\rm LBV} \simeq 0.55 ~\rm{au}$. The mass of the NS companion is $M_\mathrm{NS} = 1.33 ~\rm{M_{\sun}}$.
We use equation (\ref{eq:Eoutburst}) to calculate the total energy that the jets carry
\begin{equation}
\begin{split}
E_j &\simeq 1.7 \times 10^{48}
\left( \frac{\delta}{0.25} \right)
\left( \frac{\epsilon_j}{0.1} \right) \\
& \times
\left( \frac{v_j}{10^5 ~\rm{km} ~\rm{s}^{-1}} \right) ^2
\left( \frac{60 M_{\rm NS}}{ M_{\rm LBV}} \right)^{2}\\
& \times
\left( \frac{r}{0.4 ~\rm{au}} \right)^{3}
\left( \frac{\rho (r)}{3 \times 10^{-7} ~\rm{g} ~\rm{cm}^{-3}} \right) ~\rm{erg}.
\end{split}
\label{eq:Ej_SN2009ip}
\end{equation}
This energy is about the energy released in the 2012a event. We conclude that even with conservative parameters the accretion onto a NS from the LBV envelope can account for the observed energy in SN~2009ip and probably also other type~IIn supernovae.
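The scaling of equation (\ref{eq:Ej_SN2009ip}) can be evaluated directly. The following Python sketch (the function name and interface are ours; all fiducial parameter values are taken from the text) reproduces the estimate of $E_j \simeq 1.7 \times 10^{48} ~\rm{erg}$:

```python
# Sketch: evaluate the jet-energy scaling (eq. Ej_SN2009ip) with the
# fiducial SN 2009ip parameters quoted in the text. Each argument is
# scaled to its fiducial value, so the fiducial call returns ~1.7e48 erg.

def jet_energy(delta=0.25, eps_j=0.1, v_j=1e5, M_NS=1.33, M_LBV=80.0,
               r=0.4, rho=3e-7):
    """Jet energy in erg; v_j in km/s, masses in M_sun, r in au,
    rho(r) in g/cm^3."""
    return (1.7e48 * (delta / 0.25) * (eps_j / 0.1)
            * (v_j / 1e5) ** 2 * (60.0 * M_NS / M_LBV) ** 2
            * (r / 0.4) ** 3 * (rho / 3e-7))

E_j = jet_energy()
print(f"E_j ~ {E_j:.2e} erg")  # ~1.7e48 erg, about the 2012a energy
```

Note that with $M_{\rm NS} = 1.33 ~\rm{M_{\sun}}$ and $M_{\rm LBV} = 80 ~\rm{M_{\sun}}$, the mass-ratio factor $(60 M_{\rm NS}/M_{\rm LBV})^2$ is very close to unity, which is why the fiducial values yield essentially the quoted prefactor.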
The series of six peaks observed in 2011 (which probably should have had seven peaks, as one was evidently missing) had intervals of $\approx 40$ days. It is hard to tell what the duration of interaction was, as observations are not frequent enough, but $8$ days is a reasonable duration to assume (note that \citealt{TsebrenkoSoker2013} considered a shorter interaction of 6--12 hours for each peak, which would yield 0.25--0.5 for the value of $\delta$ we use here for the entire episode of seven peaks in 2011). Therefore the same scaling of equation (\ref{eq:Ej_SN2009ip}) can also apply to the series of eruptions in 2011, which also had the same (combined) energy as the 2012a event.
Mass removal will also affect the orbit. For the present consideration, the NS enters the envelope of an LBV star, and not an inflated envelope as the one studied in section \ref{sec:parameters}, and hence equation (\ref{eq:Meint}) does not apply. The density profile of the envelope is steep, with $\beta \ga 3$, and for the model we use $\rho(r) \simeq 3 \times 10^{-7} (r/0.4 ~\rm{au})^{-\beta}$ (compare to equation \ref{eq:rho}). This implies that the envelope mass outside a radius which equals the periastron distance of $0.4 ~\rm{au}$ is $\approx 2 M_\odot$. The NS crosses about a quarter of a circle, i.e., $\delta \simeq 0.25$, and hence the jets and the bubbles they inflate interact with $M_{\rm e,int} \approx 0.5 M_\odot$. Not all this mass is ejected, as the jets marginally have the required energy to eject such a mass. More reasonably, a mass of $\Delta M_{\rm ej} \approx 0.1$--$0.3 M_\odot$ is ejected at each periastron passage. After seven periastron passages the NS is expected to eject $\approx 1$--$2 M_\odot$. Mass removal at periastron passages increases the eccentricity, but as here the NS with its jets and the bubbles they inflate are expected to remove only $1$--$2 \%$ of the primary mass, the effect of mass loss on eccentricity is low.
In order to determine whether or not drag forces may cause the orbit to become circularized, we estimate the circularization timescale, given by \cite{VerbuntPhinney1995} as
\begin{equation}
\begin{split}
\frac{1}{\tau_c} \equiv -\frac{d\ln{e}}{dt}
&= 12.4 \left( \frac{T_{\rm eff}}{20\,000 ~\rm{K}} \right)^{4/3}
\left( \frac{M_{\rm env}}{~\rm{M_{\sun}}} \right)^{2/3} \\
&\times \frac{~\rm{M_{\sun}}}{M_{\rm LBV}} \frac{M_{\rm NS}}{M_{\rm LBV}}
\frac{M_{\rm LBV}+M_{\rm NS}}{M_{\rm LBV}}
\left( \frac{R_{\rm LBV}}{a} \right)^{8} \rm{yr^{-1}} , \\
\end{split}
\label{eq:tau_c}
\end{equation}
where $M_{\rm env} \simeq 0.9 M_{\rm LBV}$ is the mass of the LBV's envelope and $T_{\rm eff}$ is its effective temperature. We get ${\tau_c} \simeq 440 ~\rm{yr}$, and conclude that tidal circularization is insignificant on the timescale of the six or seven eruptions of SN~2009ip in 2011.
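The estimate ${\tau_c} \simeq 440 ~\rm{yr}$ can be checked by evaluating equation (\ref{eq:tau_c}) numerically. The sketch below uses the parameters adopted above; the semi-major axis $a = 0.8 ~\rm{au}$ is our inference from the quoted periastron distance $a(1-e) = 0.4 ~\rm{au}$ with $e = 0.5$:

```python
# Sketch: evaluate the Verbunt & Phinney circularization timescale
# (eq. tau_c) for the SN 2009ip parameters adopted in the text.

T_eff = 20000.0        # effective temperature [K]
M_LBV = 80.0           # LBV mass [M_sun]
M_NS  = 1.33           # NS mass [M_sun]
M_env = 0.9 * M_LBV    # envelope mass [M_sun]
R_LBV = 0.55           # LBV radius [au]
a     = 0.8            # semi-major axis [au], from r_peri = a(1-e)

inv_tau = (12.4 * (T_eff / 20000.0) ** (4.0 / 3.0)
           * M_env ** (2.0 / 3.0)
           * (1.0 / M_LBV) * (M_NS / M_LBV)
           * ((M_LBV + M_NS) / M_LBV)
           * (R_LBV / a) ** 8)          # 1/tau_c in yr^-1
tau_c = 1.0 / inv_tau
print(f"tau_c ~ {tau_c:.0f} yr")        # ~440 yr
```

The steep $(R_{\rm LBV}/a)^{8}$ dependence dominates the result, so the inferred timescale is sensitive mainly to the adopted orbital separation.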
When the NS is inside the envelope, the tidal interaction considered in equation (\ref{eq:tau_c}) is for the mass inner to the location of the NS. Since the core of an LBV is much denser than its envelope $M_{\rm LBV}$ can stay as is, but the envelope mass should be replaced by its fraction inner to the NS position, $M_{\rm env,in}$. Two other effects that contribute to the drag on the NS then become important, the mass accretion onto the NS and the local gravitational interaction within the envelope mass that is not accreted. Both of these effects are of the same order of magnitude (section \ref{sec:mesamodels}). The fractional change in the eccentricity depends on the exact amount of mass that is accreted at each time along the orbit. For accretion near periastron, the decrease in eccentricity is of the order of the accreted mass divided by the NS mass, $\Delta e_{\rm acc} \approx - 2 M_{\rm acc} / M_{\rm NS}$. For the typical values used in equation (\ref{eq:Ej_SN2009ip}), we find the accreted mass to be $M_{\rm acc} \simeq 2 \times 10^{-4} M_\odot$. Namely, the change of eccentricity per orbit is $\Delta e_{\rm acc} \approx - 2 \times 10^{-4}$. Taking twice as large an effect due to gravitational interaction with mass that is not accreted, we find that for an orbital period of $P \approx 32$~days the timescale to change the eccentricity is $\tau_{c-{\rm drag}} \equiv \vert (\dot e_{\rm drag})^{-1} \vert \approx 150 ~\rm{yr}$. This is similar to the tidal timescale inside the envelope for our parameters (equation \ref{eq:tau_c}). Adding all these effects together we find that the typical time for circularization is $\tau_{c-{\rm tot}} \approx 100 ~\rm{yr}$.
The drag and mass accretion act to decrease the eccentricity. Enhanced mass loss at periastron passages, on the other hand, increases the eccentricity. As discussed above, at each periastron passage a mass of $\Delta M_{\rm ej} \approx 0.1$--$0.3 M_\odot$ is ejected from the envelope. This increases the eccentricity by $\Delta e_{\rm ej} \approx \Delta M_{\rm ej} / M_{\rm LBV} \approx 0.003$. However, the mass ejection lasts for a longer time than accretion and the effect decreases as the NS moves away from periastron. Overall, for our parameters, the mass-loss rate increases the eccentricity by $\Delta e_{\rm ej} \approx 10^{-3}$ each periastron passage (assuming the eccentricity is small), which for an orbital period of $P \approx 32$~days gives a decircularization timescale of $\tau_{\rm dc} \approx 100 ~\rm{yr}$. As was suggested for other cases (e.g., by \citealt{KashiSoker2018}, for the binary system HD~44179), the enhanced mass-loss rate induced by jets more or less compensates for the effect of the forces acting to decrease the eccentricity near periastron passages.
We conclude that the process we propose here of a NS launching jets can account for the occurrence of pre-explosion outbursts in SN~2009ip and similar objects, including those with outflow velocities of $\ga 10^4 ~\rm{km~s^{-1}}$.
\section{SUMMARY AND DISCUSSION}
\label{sec:summary}
In many scenarios for the formation of binary NS systems that eventually merge, the system experiences an early common envelope phase of a NS inside the envelope of a giant (e.g., \citealt{Chruslinskaetal2018}). We set the goal to examine the possible observational signatures of this phase. When a full common envelope phase takes place and the NS spirals in all the way to the core, the outcome might be a terminal supernova-like event \citep{Chevalier2012}, that \cite{SokerGilkis2018} termed a common envelope jets supernova (CEJSN). \cite{SokerGilkis2018} suggested that the peculiar supernova iPTF14hls was a CEJSN event. In the present study we considered cases where the NS can enter the envelope and then exit, so the outburst might repeat itself.
Essentially, the process is like that of many other ILOTs where a companion star accretes mass through an accretion disc and launches jets. The radiation comes directly from the accretion process, or, more likely, from the interaction of the jets with the ambient gas. In most cases of this high-accretion-powered ILOT (HAPI) model the companion was taken to be a MS (or slightly evolved) star \citep{KashiSoker2016,SokerKashi2016}. The new addition described in the present paper is the consideration of a NS companion.
A NS companion introduces three essential differences from a MS companion: (i) When the accretion rate is above about $10^{-3} ~\rm{M_{\odot}}~\rm{yr^{-1}}$, neutrino cooling allows accretion much above the Eddington accretion rate \citep{HouckChevalier1991}. Cooling by jets carries away more energy from the accretion disc and further eases the accretion. This implies that the outburst can be very energetic, up to supernova energies. For that reason we term this event a CEJSN impostor. (ii) The high velocity jets imply that in some cases outflow velocities of the ejecta above about $10^4 ~\rm{km} ~\rm{s}^{-1}$ might be observed (equation \ref{eq:vjet}). (iii) The mass of the NS is generally smaller than that of the MS companion in the HAPI model of LBV ILOTs, like Eta Carinae. This implies that even if the NS, being on an eccentric orbit and having a high velocity when plunging into the envelope, is able to survive several orbits, it will eventually enter the envelope and undergo the inevitable full common envelope evolution. In this case the energy of the outburst will be larger, and the event will be a CEJSN.
Let us elaborate on the last point. In section \ref{sec:scenarios} we discussed several scenarios for the NS to enter the envelope. There are two possible cases for the primary giant star to find itself far from its terminal nuclear evolution. In the first case the NS was formed in a CCSN and the natal kick sent it into the envelope of the giant, and in the second case the system experienced a perturbation by a tertiary star. In both cases the orbit became eccentric. After one or more periastron passages the NS can either remove the envelope and end in a tight orbit around the core that will later form another NS, or it can spiral into the core and lead to a very energetic CEJSN. \cite{Papishetal2015} raised the possibility that strong \textit{r}-process nucleosynthesis (that forms the third peak of the \textit{r}-process) takes place in jets launched in such circumstances (see also \citealt{SokerGilkis2017b}). In cases where the primary giant star is about to explode, the CEJSN impostor will be followed by a CCSN.
In section \ref{sec:parameters} we derived scaled relations to show the typical expected properties for our CEJSN impostor scenario and their dependencies on different parameters, with the focus on stars which undergo rapid expansion near the end of their nuclear evolution. In section \ref{sec:mesamodels} we applied our scenario to envelopes of evolved supergiant star models, and found that the accretion rate is in the range where neutrino cooling is efficient ($\dot{M}_\mathrm{acc} > 10^{-3} ~\rm{M_{\odot}}~\rm{yr^{-1}}$), with only limited sensitivity to the depth in the envelope at which the NS passes. We found ejecta velocities between $4\,000 ~\rm{km~s^{-1}}$ and $16\,000 ~\rm{km~s^{-1}}$, interaction duration times from days to months, and output energies up to about $10^{51} ~\rm{erg}$.
In section \ref{sec:sn2009ip} we discussed SN~2009ip, which had several LBV outbursts (supernova impostors) before its terminal explosion. We raised the possibility that the ejecta velocities of $> 10^4 ~\rm{km} ~\rm{s}^{-1}$ in the 2011 and 2012a outbursts were driven by jets from a NS companion. Though it is not possible to conclusively tell whether the terminal explosion (either 2012a or 2012b) was a CCSN or a spiraling of the NS toward the core (i.e., a CEJSN), our analysis shows that the energy released in the 2012a event is of the order of what would be expected from the scenario.
We call for a serious consideration of peculiar supernovae and impostors as outcomes of CEJSNe (energetic and terminal) and CEJSN impostors (that might repeat). With the operation of jets launched by a more compact companion, here a NS, that accretes mass from a giant, we connect these types of outbursts to other ILOTs that are driven by accreting MS stars.
\section*{Acknowledgments}
We thank an anonymous referee for suggestions that substantially improved the presentation of our proposed scenario. This research was supported by the Asher Fund for Space Research at the Technion, and the Israel Science Foundation. AG gratefully acknowledges the support of the Blavatnik Family Foundation. AK thanks the support of the Authority for Research \& Development in Ariel University and the Rector of Ariel University.
\bibliographystyle{mnras}
\section{Introduction}
Sorites paradoxes are usually illustrated by our inability to specify how many grains of sand constitute a heap. One grain is not enough, nor two or three grains. But there is a point, even though not well marked, above which the collection of grains can be called a heap. The same question may be extended to the quantum limit: how many particles compose a many-body quantum system? Despite being a natural question, especially given the widespread theoretical and experimental interest in many-body quantum systems, the available related literature is surprisingly limited. While topics such as the onset of thermalization, the metal-insulator transition, and the scrambling of quantum information in interacting many-body quantum systems permeate studies in condensed matter physics, atomic and molecular physics, high energy physics, and quantum information theory, very little attention has been devoted to determining the minimum number of particles necessary to perform such studies.
Experiments with cold atoms, ion traps, and photon-based platforms are promising testbeds for addressing this point. In these experiments, the number of particles can be adjusted as desired~\cite{Zinner2016EPJWebConference}, which allows for studying how many-body effects emerge as the number of particles increases~\cite{Serwane2011,Wenz2013,Murmann2015A,Murmann2015B, Dung2017}. Rydberg polaritons, where interactions between photons are mediated by atomic Rydberg states, are also a favorable platform for the comparisons between few- and many-body physics~\cite{Jachymski2016PRL}.
By using a bottom-up approach and increasing one by one the number of ultracold atoms in a quasi-one-dimensional system, it was shown in Ref.~\cite{Wenz2013} that the Fermi sea is formed for $N\geq 4$, where $N$ is the number of atoms. Theoretical works have also detected many-body properties for $N=4$. In studies of thermalization in isolated systems, the Fermi-Dirac distribution was obtained for as few as 4 fermions~\cite{Schnack1996,Flambaum1997,Schnack2000,Izrailev2001}, and in a search for integrable systems composed of particles of unequal masses, chaotic spectrum was found for $N=4$~\cite{Harshman2017}. Another interesting experiment related to the main question of this work is the recent demonstration of Bose-Einstein condensation (BEC) with only 8 photons, which is probably the smallest existing BEC~\cite{WalkerARXIV}. A theoretical construction of few-body models to capture ground-state properties of many-body systems has also been proposed~\cite{Ran2017}.
In this work, we consider primarily a one-dimensional (1D) spin-1/2 model with short-range interactions, where the number $N$ of spin excitations is conserved. It can be mapped into a model of spinless fermions using the Jordan-Wigner transformation~\cite{Lieb1961} or of hard-core bosons using the Holstein-Primakoff transformation~\cite{Holstein1940}. Thus, an excitation in the spin model is equivalent to the presence of a particle in those other models. The corresponding Hamiltonians describe experiments with nuclear magnetic resonance platforms~\cite{Ramanathan2011}, cold atoms~\cite{Bloch2008}, and ion traps~\cite{Jurcevic2014,Richerme2014}. For our spin model to be as generic as possible, we ensure that no local symmetries are present, with the exception of the conservation of the total number of excitations. The case in which half of the chain is filled with excitations is taken as our reference for the many-body limit.
We study static and dynamic properties as $N$ increases from 1, with particular interest in identifying how many excitations are needed for the onset of quantum chaos. In isolated interacting many-body quantum systems, the source of chaos is interparticle interactions. Quantum chaos is a main mechanism for the viability of thermalization, it hinders the transition from a metal to an insulator, and it is related to the fast scrambling of quantum information and the linear growth of entanglement and information entropies.
We verify that, for $N \gsim 4$, the static properties of the spin model with different numbers of excitations become analogous to those of the half-filling case. The shape of the density of states (DOS) becomes close to Gaussian, as typical of many-body quantum systems with two-body couplings~\cite{French1970,Brody1981}, and signatures of quantum chaos, such as the Wigner-Dyson distribution of the spacings between neighboring levels, rigid spectrum~\cite{Guhr1998}, and chaotic eigenstates~\cite{ZelevinskyRep1996,Borgonovi2016} become evident.
Turning to the dynamics, the threshold between few-body and many-body becomes fuzzier. The behavior of the system depends on the time scale and, quite expectedly, on the initial separation between the excitations. For initial states where the excitations are very close to each other, the initial evolution after a quench is similar to what we find for chaotic many-body quantum systems. The Shannon (information) entropy, for instance, grows linearly in time~\cite{Santos2012PRL}. Later, as the excitations spread out, the entropy growth slows down. The behavior of the Shannon entropy becomes logarithmic, similarly to what is seen in disordered many-body systems approaching spatial localization~\cite{Kjall2014,Torres2017}. However, the pre-factors of the logarithms in our clean systems are larger than those in disordered models.
Various tools are available for the analysis of the quench dynamics of 1D systems at the two extreme limits, namely single particle and half filling. The case of $N=1$ is rather trivial, especially in clean models with short-range couplings, while generic properties can often be identified when dealing with chaotic many-body quantum systems~\cite{Flambaum2001b,Torres2017PTR,Torres2018}. We believe that further studies of the region between the two extremes should not only improve our understanding of open problems associated with quantum systems of many interacting particles, such as the quantum-classical correspondence~\cite{Akila2017} and the metal-insulator transition~\cite{Fleishman1980}, but may also reveal new~\cite{Shepelyansky1994} and counterintuitive features~\cite{Santos2003PRB}.
This paper is organized as follows. In Sec.~\ref{sec:model}, we present the Hamiltonian and describe its main features. In the following sections, we analyze standard quantities associated with the eigenvalues and the eigenstates of many-body quantum systems. In particular, in Sec.~\ref{sec:DOS}, we show how the DOS approaches a Gaussian distribution as $N$ increases. Sections~\ref{sec:WD} and~\ref{sec:PR} deal with signatures of quantum chaos. In Sec.~\ref{sec:WD}, we investigate how the correlations between the eigenvalues increase with $N$ leading to level repulsion and rigid spectrum. In Sec.~\ref{sec:PR}, we study the structure of the eigenstates and compare them to those of full random matrices. Section~\ref{sec:dynamics} concentrates on the dynamics, also employing a quantity of interest in studies of many-body quantum systems. We analyze the growth of the Shannon entropy in time. Section~\ref{sec:conclusion} summarizes our results.
\section{System Model} \label{sec:model}
We study 1D spin-1/2 models described by the following Hamiltonian,
\begin{eqnarray}
H &=& J\left[ d_1 S_1^z + d_L S_L^z+\epsilon\sum_{i=1}^LS_i^z\right. \nonumber \\
&+& \sum_{i=1}^{L-1} \left(S^x_iS^x_{i+1}+S^y_iS^y_{i+1}+\Delta S^z_iS^z_{i+1}\right) \nonumber \\
& +& \left. \lambda \sum_{i=1}^{L-2} \left(S^x_iS^x_{i+2}+S^y_iS^y_{i+2}+\Delta S^z_iS^z_{i+2}\right)\right].
\label{eq:HXXZ}
\end{eqnarray}
In the above, we set $\hbar=1$. $L$ is the total number of sites, $J$ is a reference energy scale which we set equal to $1$ and $S^{x,y,z}$ represent the spin $1/2$ operators. The Zeeman splittings of all sites are equal and given by $J\epsilon$, except for two impurities placed at the edges of the chain, which have an excess energy $Jd_{1,L}$. $\Delta$ is the anisotropy parameter and $\lambda$ is the ratio between nearest-neighbors (NN) and next-nearest-neighbors (NNN) couplings. Assuming that the Zeeman splittings were created with a large magnetic field applied to the whole chain and pointing down in the $z$-direction, we can refer to a spin pointing up in $z$ as an excitation.
In the presence of many excitations, the spin model above is a paradigmatic example of many-body quantum systems. When $\lambda=0$ and $\Delta \neq 0$, Hamiltonian (\ref{eq:HXXZ}) represents the XXZ model, which is integrable. We refer to this case as the NN model to distinguish it from the Hamiltonian with $\lambda \neq 0$, which we name NNN model. The latter is no longer integrable. For $\lambda \sim 1$ and many excitations, the NNN model is strongly chaotic, in the sense of showing level statistics equivalent to those of full random matrices. However, signatures of chaos may get concealed if eigenvalues from different symmetry subspaces are mixed~\cite{Santos2009JMP}. To prevent this from happening, we use parameters that avoid most symmetries. The Hamiltonian (\ref{eq:HXXZ}) conserves the total magnetization in the $z$-direction, ${\cal S}^z = \sum_{l=1}^L S_l^z$, so our studies focus on an individual ${\cal S}^z$ subspace. The isotropic point $\Delta=1$ is not considered to avoid conservation of total spin. Open boundary conditions are used to break translational invariance. The edge impurities, $d_1 \neq d_L \neq 0$, break parity symmetry and spin reversal symmetry for the case where ${\cal S}^z=0$. Indeed, when $d_1 \neq d_L \neq 0$, no conservation laws exist in this model, apart from the conservation of energy and total magnetization in the $z$-direction.
In the following, we denote by $E_{\alpha}$ and $|\psi_{\alpha}\rangle $ the eigenvalues and the corresponding eigenstates of the Hamiltonian. The dimension of a ${\cal S}^z$ subspace with $N$ excitations is given by $D = L!/[N!(L-N)!]$.
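For concreteness, the subspace dimensions $D$ can be tabulated with a few lines of Python (the chain length $L=16$ is chosen here only for illustration):

```python
from math import comb

# Sketch: dimension D = L!/(N!(L-N)!) of the S^z subspace with N
# excitations, illustrating how quickly the relevant Hilbert space
# grows as excitations are added to a chain of L = 16 sites.
L = 16
dims = {N: comb(L, N) for N in range(L // 2 + 1)}
print(dims)  # {0: 1, 1: 16, 2: 120, 3: 560, 4: 1820, ..., 8: 12870}
```

The rapid growth of $D$ with $N$ is what makes exact diagonalization feasible only for the modest system sizes used below.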
The next sections are dedicated to different figures of merit that characterize the proximity of the NNN model with $N$ excitations to a many-body quantum system. All of them point in the same direction: many-body properties appear for $N\gsim4$.
\section{Density of States}\label{sec:DOS}
We start our analysis of the spin model by investigating its eigenvalues and look first at the DOS, defined as
\begin{equation}
\rho(E) \equiv \sum_{\alpha} \delta (E - E_{\alpha} ) .
\label{eq:DOS}
\end{equation}
The shape of the DOS is not a signature of chaos, but contains information about how many particles are coupled simultaneously. In many-body systems with few-body couplings only, the DOS is known to have a Gaussian form~\cite{French1970,Brody1981}. The spin models described by Eq.~(\ref{eq:HXXZ}) have only two-body couplings. We therefore investigate how the DOS approaches the Gaussian limit, as excitations are added into the system.
As a warm-up, let us study the case $d_{1,L}=\Delta=\lambda=0$ with closed boundary conditions. This integrable Hamiltonian represents the well-known XX model~\cite{Lieb1961}, for which we are able to compute the DOS exactly. In the continuum limit $L\rightarrow\infty$, as shown in~\ref{app:rhoXX},
\begin{equation}
\rho^{(N)}_{\rm{XX}} (E) = \frac{1}{2 \pi}\int_{ -\infty }^{\infty} d\tau e^{iE\tau } {\cal J}_0^N(J\tau ) ,
\label{eq:XXDOS}
\end{equation}
where ${\cal J}_0$ is the Bessel function of the first kind. For $N=1$, this gives trivially
\begin{equation}
\rho^{(1)}_{XX}(E)=\frac{1}{\pi}\frac{1}{\sqrt{J^2-E^2}}.
\end{equation}
For $N>1$, the DOS of the XX model is plotted in Figs.~\ref{Fig:DOS} (a)-(d). From left to right, $N$ grows from 2 to 5. As can be seen, the peak in the middle of the spectrum becomes progressively smoother and the overall shape of the distribution becomes qualitatively similar to a Gaussian for $N\gsim4$. See also \ref{app:extra_data}.1 for the DOS at $N=6$ for the integrable XX model, the integrable NN model, and the chaotic NNN model. They all show clear Gaussian shapes.
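As a quick numerical check, the $N=1$ arcsine form can be recovered by sampling $E=J\cos\theta$ with $\theta$ uniform, which is the $L\to\infty$ limit of the allowed momenta (a minimal sketch with $J=1$, not part of the original analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
J = 1.0

# N=1 eigenvalues of the XX model: E = J*cos(theta), theta uniform in the L -> infinity limit
theta = rng.uniform(0.0, 2.0 * np.pi, size=500_000)
E = J * np.cos(theta)

# The analytic density rho^{(1)}(E) = 1/(pi*sqrt(J^2 - E^2)) predicts
# P(|E| < J/2) = (2/pi)*arcsin(1/2) = 1/3
frac = np.mean(np.abs(E) < J / 2)
print(frac)
```

The sampled fraction should match $\frac{2}{\pi}\arcsin(1/2)=1/3$ to within Monte Carlo error.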
\begin{figure}[htb]
\centering
\includegraphics*[scale=0.55]{Fig01new.eps}
\caption{Density of states for the XX model (a)-(d) and NNN model (e)-(h). From left to right: $N=2,3,4,5$. For the periodic XX model, we show analytic results in the limit $L\to \infty$. For the NNN model, we take open boundary conditions and system sizes $L=200, 50, 28, 21$ from left to right. We choose $\Delta=0.48$, $\lambda=1$, $d_1\simeq 0.05$ and $d_L \simeq 0.09$. The energies are rescaled so that for any $L$ and chosen parameters, the middle of the spectrum is at zero. Panel (i) gives the kurtosis of the DOS as a function of $N$ for the XX (circle) and the NNN (diamond) models. The solid line is the fitting curve for the XX model. The kurtosis approaches $3$ as the number of excitations increases.}
\label{Fig:DOS}
\end{figure}
The approach to the Gaussian shape can be explained in terms of the central limit theorem. As shown in Eq.~(\ref{eq:XX_E}) of~\ref{app:rhoXX}, the eigenvalues of the XX model for $N$ excitations are given by
$E_\alpha=J\sum_{i=1}^N\cos \left( \frac{2\pi k_i}{L} \right)$, where $k_1< k_2<\cdots< k_N$ and $k_i \in\left\{0,\pm1,\pm2,\cdots, \pm(L/2-1),L/2\right\}$. The distribution of the sums of these many cosines is analogous to that of a sum of independent random variables, which, according to the central limit theorem, tends to a normal distribution as the number of terms grows.
A more quantitative way to compare the DOS to a Gaussian distribution is to compute the kurtosis,
\begin{equation}
K(N)\equiv \frac{\left<(E-\left<E\right>)^4\right>}{\left<(E-\left<E\right>)^2\right>^2},
\end{equation}
where the averages $\langle f(E) \rangle $ are taken on the whole subspace, weighted by the DOS,
\begin{equation}
\left<f(E)\right>\equiv \int_{-NJ}^{NJ}dE \rho^{(N)}_{XX}(E) f(E).
\end{equation}
For a Gaussian distribution, $K_{G}=3$. In~\ref{app:rhoXX}, we show how to obtain the kurtosis for the XX model,
\begin{eqnarray}
K(N)&=&\pi\int_{-\infty}^{\infty}d\tau {\cal J}_0^N(J\tau)\tau^{-5}\left[4NJ\tau(N^2J^2\tau^2-6)\cos (NJ\tau)\right.\nonumber\\
&+&\left.(N^4J^4\tau^4-12 N^2J^2\tau^2+24)\sin (NJ\tau)\right]\label{eq:KXX}\\
&\times&\left(\int_{-\infty}^{\infty}d\tau {\cal J}_0^N(J\tau)\tau^{-3}\left[2NJ\tau\cos (NJ\tau)+(N^2J^2\tau^2-2)\sin (NJ\tau)\right]\right)^{-2}.\nonumber
\end{eqnarray}
The values of $K$ as a function of $N$ are plotted in Fig.~\ref{Fig:DOS} (i). A fit to these points gives
\begin{equation}
K(N)\sim3\left(1-\frac{1}{2N}\right).
\end{equation}
This tells us that the spectrum smoothly approaches the many-body limit as $N$ is increased, although there is no sharp threshold at which the limit is reached.
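Because the eigenvalues are sums of $N$ cosines, the fitted form $K(N)\approx 3\left(1-\frac{1}{2N}\right)$ can be checked by direct Monte Carlo sampling; for independent uniform phases (the $L\to\infty$ limit, ignoring the distinct-$k_i$ constraint) the excess kurtosis of the sum is $-3/(2N)$, reproducing the fit exactly. A minimal sketch with $J=1$:

```python
import numpy as np

rng = np.random.default_rng(1)
J = 1.0

def kurtosis_xx(N, samples=400_000):
    # E_alpha = J * sum_i cos(theta_i), theta_i ~ Uniform(0, 2*pi)  (L -> infinity)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(samples, N))
    E = J * np.cos(theta).sum(axis=1)
    E -= E.mean()
    return np.mean(E**4) / np.mean(E**2) ** 2

for N in (2, 3, 4, 5):
    print(N, kurtosis_xx(N), 3.0 * (1.0 - 1.0 / (2.0 * N)))  # Monte Carlo vs fit
```

The sampled kurtosis approaches the Gaussian value $3$ from below as $N$ grows.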
For the NNN model, no exact results are available, so we resort to numerical simulations. From Fig.~\ref{Fig:DOS} (e)-(h), one sees that, once again, the shape of the DOS starts looking qualitatively similar to a Gaussian for $N\gsim4$. In each plot, we have shifted the energies by a constant, such that $\left<E\right>=0$. Notice that we are numerically limited to relatively small system sizes, so the chain gets denser for $N\gsim4$. However, the approach to the Gaussian shape is caused by the increased number of excitations and not by an increasingly dense chain, as made clear by the analysis of the XX model, which is done here in the thermodynamic limit.
The values of the kurtosis for the NNN model with $\lambda=1$ are plotted in Fig.~\ref{Fig:DOS} (i). They are larger than those obtained for the XX model and they also approach the Gaussian limit for $N\gsim4$.
\section{Onset of Chaos} \label{sec:WD}
The mechanism of quantum chaos in many-body systems is interparticle interaction~\cite{Borgonovi2016}. The results for the DOS in the previous section indicate that $N\gsim4$ can already be considered many. Here and in the next section, we investigate whether this number of excitations is also associated with the onset of quantum chaos. The focus is now entirely on the nonintegrable NNN model.
One of the signatures of quantum chaos is the strong repulsion between neighboring energy levels. In a real symmetric matrix with entries drawn independently from a Gaussian ensemble, that is in a matrix from the Gaussian Orthogonal Ensemble (GOE)~\cite{Guhr1998,MehtaBook}, the spacing $s$ of neighboring unfolded~\cite{Guhr1998} eigenvalues follows the Wigner surmise,
\begin{equation}
P(s)= \frac{\pi s}{2} \exp \left(- \frac{\pi s^2}{4} \right).
\label{eq:goe}
\end{equation}
(For the exact Wigner-Dyson distribution, see Ref.~\cite{MehtaBook}.) An important feature of this distribution is that it vanishes linearly for $s\rightarrow0$, meaning that the probability of two eigenvalues crossing each other is suppressed. This behavior is due to the fact that the eigenvalues are strongly correlated. The spectra of realistic chaotic systems with real and symmetric Hamiltonian matrices also follow the distribution in Eq.~(\ref{eq:goe}).
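The Wigner surmise is the exact spacing distribution of $2\times 2$ GOE matrices, which offers a quick numerical check of Eq.~(\ref{eq:goe}) (a minimal sketch, not part of the original analysis):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# 2x2 GOE: diagonal entries ~ N(0,1), off-diagonal entries ~ N(0, 1/2)
a = rng.normal(0.0, 1.0, n)
d = rng.normal(0.0, 1.0, n)
b = rng.normal(0.0, np.sqrt(0.5), n)

# Eigenvalue spacing of [[a, b], [b, d]], normalized to unit mean
spacing = np.sqrt((a - d) ** 2 + 4.0 * b**2)
s = spacing / spacing.mean()

# Compare the empirical CDF at s=1 with the Wigner surmise:
# integral_0^1 (pi*s/2) exp(-pi*s^2/4) ds = 1 - exp(-pi/4) ~ 0.544
print(np.mean(s < 1.0))
```

The empirical cumulative probability at $s=1$ should match $1-e^{-\pi/4}$ to within sampling error.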
For sequences of uncorrelated eigenvalues, the distribution of $s$ is Poissonian, $P(s)=\exp(-s)$. The eigenvalues of integrable systems often follow this distribution, because they typically display an extensive set of local conserved quantities, which partition the Hilbert space in many uncorrelated sectors. We note, however, that in integrable systems with a large number of degenerate levels or with spectra of the ``picket-fence'' type, where the eigenvalues are approximately equally spaced, other distributions are found.
\begin{figure}[htb]
\centering
\includegraphics*[scale=0.55]{Fig02new.eps}
\caption{Level spacing distribution (a)-(d) and level number variance (e)-(h) for the NNN model. From left to right, the system sizes are $L=200, 50, 28, 21$, respectively, and the numbers of excitations are $N=2,3,4,5$. Parameters: $\Delta=0.48$, $\lambda=1$, $d_1 \simeq 0.05$ and $d_L \simeq 0.09$. Open boundary conditions are taken. In (a)-(d), the grey histogram represents the actual numerical data, which are compared with the Poissonian (blue line) and Wigner-Dyson (red line) distributions. In (e)-(h): numerical data are black dots. They are compared with the result for uncorrelated eigenvalues (blue straight line) and the GOE curve (red logarithmic curve). In (i): the parameter $\beta$ as a function of $N$. It converges to the Wigner-Dyson value $\beta=1$ for $N\gsim4$.}
\label{Fig:Chaos}
\end{figure}
In Fig.~\ref{Fig:Chaos} (a)-(d), we plot the level spacing distribution for the NNN model with $\lambda=1$ for systems with $N=2,3,4,5$ excitations. In each plot, we show for comparison the Poisson (blue line) and Wigner-Dyson (red line) distributions. For $N=2$, the distribution is intermediate between Poisson and Wigner-Dyson, with a visible dip at small $s$, signaling that some amount of level repulsion is already present in the system. As $N$ is increased, level repulsion becomes enhanced, and at $N=4$ the shape is very close to a Wigner-Dyson distribution.
A way to quantify the transition from Poisson to the Wigner-Dyson distribution is by employing a distribution that interpolates between the two, such as the Brody distribution~\cite{Brody1981},
\begin{equation}
P_\beta(s) \equiv (\beta +1) b s^{\beta} \exp \left( -b s^{\beta +1} \right), \hspace{0.2 cm}
b\equiv \left[\Gamma \left( \frac{\beta + 2}{\beta +1} \right)\right]^{\beta +1},
\label{eq:BrodyParameter}
\end{equation}
where $\Gamma$ is Euler's gamma function. The Poisson distribution corresponds to the case $\beta=0$, while the Wigner-Dyson distribution corresponds to $\beta=1$. To evaluate quantitatively the degree of level repulsion, we fit our numerical distribution with $P_\beta(s)$, using $\beta$ as a fitting parameter. The resulting values of $\beta$ are plotted as a function of $N$ in Fig.~\ref{Fig:Chaos}(i). As can be seen, already at $N=4$ we have $\beta\sim1$, meaning that a system with $4$ excitations can indeed be meaningfully described as chaotic.
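As a sanity check of this fitting procedure, one can draw spacings from the Wigner surmise itself and recover $\beta\approx 1$. The sketch below uses maximum likelihood rather than a histogram fit; this is an implementation choice for compactness, not necessarily the paper's exact method:

```python
import numpy as np
from math import gamma
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)

# Draw spacings from the Wigner surmise by inverting its CDF, F(s) = 1 - exp(-pi*s^2/4)
u = rng.uniform(1e-12, 1.0, size=20_000)
s = np.sqrt(-4.0 * np.log(1.0 - u) / np.pi)

def neg_log_likelihood(beta, s):
    # Brody distribution: P_beta(s) = (beta+1)*b*s^beta*exp(-b*s^(beta+1))
    b = gamma((beta + 2.0) / (beta + 1.0)) ** (beta + 1.0)
    return -np.sum(np.log((beta + 1.0) * b) + beta * np.log(s) - b * s ** (beta + 1.0))

res = minimize_scalar(neg_log_likelihood, bounds=(0.0, 1.5), args=(s,), method="bounded")
print(res.x)  # close to 1 (Wigner-Dyson)
```

Running the same estimator on Poissonian spacings ($s=-\ln(1-u)$) would instead return $\beta\approx 0$.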
Another manifestation of the correlation among energy levels is the rigidity of the spectrum, which can be evaluated with quantities such as the level number variance, which is obtained as follows. We partition the unfolded spectrum in energy intervals of length $\ell$ and compute the number of eigenvalues inside each interval. The variance of the distribution of these numbers is the level number variance $\Sigma^2(\ell)$. For full random matrices from the GOE, we have~\cite{Guhr1998}
\begin{equation}
\Sigma^2 (\ell) = \frac{2}{\pi^2} \left( \ln(2 \pi \ell) + \gamma_e + 1 -\frac{\pi^2}{8} \right) ,
\end{equation}
where $\gamma_e = 0.5772\ldots $ is Euler's constant. In contrast, for systems with an uncorrelated spectrum, one finds $ \Sigma^2 (\ell) =\ell$, while for the harmonic oscillator, one has $ \Sigma^2 (\ell) =0$, due to the complete rigidity of the spectrum. $P(s)$ and $\Sigma^2 (\ell) $ are complementary. The former characterizes the short-range fluctuations of the spectrum and the latter characterizes the long-range fluctuations.
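The uncorrelated-spectrum result $\Sigma^2(\ell)=\ell$ is easy to verify on synthetic data by applying the counting procedure described above (a minimal sketch with unit mean level density):

```python
import numpy as np

rng = np.random.default_rng(4)

# Uncorrelated "unfolded" spectrum: a Poisson process with unit mean density
levels = np.cumsum(rng.exponential(1.0, size=200_000))

def number_variance(levels, ell):
    # Count levels in non-overlapping windows of length ell, then take the variance
    windows = np.arange(0.0, levels[-1] - ell, ell)
    counts = np.searchsorted(levels, windows + ell) - np.searchsorted(levels, windows)
    return counts.var()

for ell in (1.0, 2.0, 5.0):
    print(ell, number_variance(levels, ell))  # ~ ell for uncorrelated levels
```

For a rigid (GOE-like) spectrum the same procedure would instead produce the slow logarithmic growth quoted above.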
In Fig.~\ref{Fig:Chaos} (e)-(h), we plot the function $\Sigma^2(\ell)$ for the NNN model for $N=2,3,4,5$ excitations. The data (black dots) are compared with the curve for the GOE (red line) and the result for uncorrelated eigenvalues (blue line). For $N=2$, the data are close to the curve for uncorrelated eigenvalues. $N=3$ looks intermediate. For $N\gsim4$, the data at different numbers of excitations are similar to what we obtain at half-filling (see~\ref{app:extra_data}.2 for $\Sigma^2(\ell)$ for $N=6$ and $N=L/2$), which shows that the rigidity of the spectrum for $N\gsim4$ is equivalent to that found in chaotic many-body quantum systems. Note that the data follow the GOE curve for small $\ell$ only, both for $N\gsim4$ and for $N=L/2$. This happens because the spectra of realistic chaotic many-body quantum systems are never as rigid as the spectra of full random matrices, where all degrees of freedom interact with each other.
Up to this point, we only considered $\lambda=1$. It is known that in the many-body limit, as $\lambda$ decreases from 1 toward 0, the degree of level repulsion between the eigenvalues decreases until disappearing completely at the integrable point ($\lambda =0$). In~\ref{app:extra_data}.3, we show that this transition, quantified by the Brody parameter $\beta$ vs $\lambda$, is comparable for $N=5,6$, and $N=L/2$, reinforcing our claim that the properties of the spectrum for $N\gsim4$ are already similar to the case at half filling.
\section{Eigenstates}
\label{sec:PR}
A complete characterization of a many-body quantum system, and especially determining whether it is chaotic or not, requires also the analysis of its eigenstates. Strongly correlated eigenvalues are directly linked with the onset of nearly ergodic eigenstates. In Sec.~\ref{sec:WD}, we showed that the spectral rigidity for $N\gsim4$ is equivalent to that for $N=L/2$. We now verify that this is reflected in the structure of the eigenstates as well.
In contrast with the eigenvalues, the study of the eigenstates requires a choice of basis. This choice depends on the physical problem one is interested in. For example, in studies of spatial localization, one employs the site-basis (also known as computational basis), where on each site the spin either points up or down in the $z$-direction. In the context of quantum chaos, one resorts to the mean-field basis, which corresponds to the eigenstates of the regular (integrable) part of the total Hamiltonian. The term (perturbation) that breaks integrability and brings the system into the chaotic domain, also couples the mean-field basis vectors. The level of complexity of the eigenstates of the Hamiltonian of a quantum many-body system depends on the strength of this term.
In the case of the NNN model, we write the eigenstates $|\psi_{\alpha} \rangle = \sum_n C_n^{\alpha} |\phi_n\rangle$ in terms of the eigenstates $|\phi_n\rangle$ of the NN model, whose eigenvalues are denoted by $\varepsilon_n$. The distribution
\begin{equation}
R^{(\alpha)}(E) = \sum_n |C_n^{\alpha}|^2 \delta(E - \varepsilon_n)
\end{equation}
characterizes the spreading of the eigenstate $|\psi_{\alpha} \rangle $ in the mean-field basis. In the many-body limit, the shape of $R^{(\alpha)}(E)$ for states away from the borders of the spectrum broadens from a nearly delta function, when $\lambda \sim 0$, to a Gaussian at strong chaos ($\lambda \sim 1$) \cite{Santos2012PRL}. In strongly chaotic eigenstates, the coefficients $C_n^{\alpha} $ are approximately random variables following the Gaussian envelope of the system energy shell. The Gaussian shape of $R(E)$ is a consequence of the Gaussian form of the DOS.
In realistic systems with few-body couplings, only the coefficients $C_n^{\alpha} $ within the energy shell are non-zero. This is in contrast with the eigenstates of full random matrices, where all $C_n^{\alpha} $ can be non-zero. The eigenstates of full random matrices are random vectors and therefore fully delocalized in the Hilbert space. One can measure the level of delocalization of the states with quantities such as the participation ratio, defined as
\begin{equation}
PR^{(\alpha)} \equiv \frac{1}{\sum_n |C_n^{\alpha}|^4}.
\end{equation}
For full random matrices from the GOE, $PR \simeq D/3$ for any eigenstate. In realistic many-body systems, strongly chaotic states also lead to $PR^{(\alpha)}\propto D$, but the pre-factor is smaller than 1/3.
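The GOE value $PR\simeq D/3$ follows from the eigenstate components being approximately independent Gaussian variables, and can be reproduced with random vectors (a minimal sketch, not part of the original analysis):

```python
import numpy as np

rng = np.random.default_rng(5)

def participation_ratio(c):
    # PR = 1 / sum_n |c_n|^4 for a normalized state
    return 1.0 / np.sum(np.abs(c) ** 4)

D = 2000
# GOE-like eigenstate: real Gaussian components, normalized
c = rng.normal(size=D)
c /= np.linalg.norm(c)
print(participation_ratio(c) / D)  # ~ 1/3: fully delocalized

# Fully localized state for comparison: PR = 1
e0 = np.zeros(D)
e0[0] = 1.0
print(participation_ratio(e0))
```

The two limiting cases, $PR\simeq D/3$ and $PR=1$, bracket the intermediate values found for realistic eigenstates.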
In Figs.~\ref{Fig:PR} (a)-(d), we plot the function $R^{(\alpha)}(E)$ for an eigenstate $\left|\psi_\alpha\right>$ near the middle of the spectrum of the NNN model with $\lambda=1$. For this perturbation strength, we know that the eigenstates in the many-body limit are highly delocalized~\cite{Santos2012PRL}. Our goal in Fig.~\ref{Fig:PR} is to analyze how the structure of the eigenstates depends on the number of excitations. For $N=2$, the distribution is sparse, indicating that the eigenstates are far from being fully extended in the energy shell. The level of delocalization increases with $N$. For $N\gsim4$ the distribution already resembles a Gaussian (red curves), indicating the proximity to the chaotic many-body limit.
\begin{figure}[htb]
\centering
\includegraphics*[scale=0.55]{Fig03new.eps}
\caption{Energy distribution $R^{(\alpha)}(E)$ of an eigenstate $\left|\psi_\alpha\right>$ in the middle of the spectrum of the NNN model (a)-(d) and scaling analysis of the participation ratio for eigenstates written in the mean-field-basis (e). Parameters: $\Delta=0.48$, $\lambda=1$, $d_1 \simeq 0.05$ and $d_L \simeq 0.09$. Open boundary conditions are taken. From (a) to (d), the numbers of excitations are $N=2,3,4,5$. The distributions are shifted, such that $\sum_n \left|C^\alpha_n\right|^2 \varepsilon_n =0$, and they are compared with Gaussians (red line) of variance $\sum_n \left|C^\alpha_n\right|^2 \varepsilon_n^2 $. In (e), the data are averaged over 10\% of the eigenstates in the middle of the spectrum. Each curve corresponds to a different $N$ (indicated). For $N\gsim4$, PR $\propto D$.}
\label{Fig:PR}
\end{figure}
To make a more quantitative analysis, in Fig.~\ref{Fig:PR} (e), we show the scaling of $PR$ as a function of the dimension $D$ for different $N$'s. For $N<4$, the scaling is sub-linear, implying that one cannot consider systems with such a small number of excitations as fully chaotic. Conversely, for $N\gsim4$, the curves fall closely on top of each other and give $PR\propto D$, indicating that the eigenstates are already very close to the maximal allowed level of spreading over the mean-field basis for the given perturbation strength.
\section{Dynamics} \label{sec:dynamics}
The characterization of the Hamiltonian developed in the previous sections convinces us that we do not need a large number of excitations to witness properties associated with many-body quantum systems. But are the dynamics of systems with $N\gsim4$ also comparable to those with $N\sim L/2$? This is a very pertinent question, given the enormous interest in the nonequilibrium dynamics of many-body quantum systems. Information about the dynamics is contained in the eigenvalues and eigenstates, but it depends also on the initial states. In addition, different features of the system may be captured and enhanced at different time scales~\cite{Torres2018,Tavora2016}.
In this section, we analyze the real time evolution of the NNN model with $\lambda=1$. In our simulations, we initialize the system in the following two site-basis states,
\begin{eqnarray}
&&|\Psi^{(N)}_1(0) \rangle = | \downarrow _1 \cdots \underbrace { \uparrow _j \uparrow _{j + 1} \cdots \uparrow _{j + N - 2} \uparrow _{j + N - 1} }_{N \,{\rm sites}, \,N\,{\rm up-spins} } \cdots \downarrow _L\rangle , \nonumber \\
&&|\Psi^{(N)}_2(0) \rangle = | \downarrow _1 \cdots \underbrace { \uparrow _j \downarrow _{j + 1} \cdots \uparrow _{j + 2N - 2} \downarrow _{j + 2N - 1} }_{2N \,{\rm sites},\,N \,{\rm up-spins} } \cdots \downarrow _L \rangle. \nonumber
\end{eqnarray}
In the half filling limit, $|\Psi^{(L/2)}_1(0) \rangle$ coincides with the domain wall state and $|\Psi^{(L/2)}_2(0) \rangle $ with the N\'eel state. These states are accessible experimentally~\cite{Trotzky2008,Schreiber2015} and have been extensively investigated in theoretical studies of quench dynamics in the ${\cal S}^z \!\! =\! 0$ subspace.
The quantity chosen for the analysis of the dynamics is the Shannon (information) entropy,
\begin{equation}
S_h(t)\equiv - \sum_j W_{j} (t) \ln W_{j} (t),
\label{Shan}
\end{equation}
where $W_{j} (t)= |\langle \phi_j | e^{ - iHt} | \Psi (0)\rangle |^2$. This quantity measures how the initial state spreads over the site-basis vectors $|\phi_j \rangle$ and is related to the entanglement entropy~\cite{Torres2016Entropy}. In many-body quantum systems with highly delocalized initial states, the Shannon entropy is known to increase linearly in time~\cite{Santos2012PRL,Torres2016Entropy}.
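As an illustration of the quantities involved, one can evolve a basis state under a toy Hamiltonian and watch $S_h(t)$ grow from zero toward the ergodic bound $\ln D$. The sketch below uses a GOE matrix in place of the spin Hamiltonian; this substitution is an assumption for compactness, not the model studied here:

```python
import numpy as np

rng = np.random.default_rng(6)
D = 64

# Toy "chaotic" Hamiltonian: a GOE-like real symmetric matrix
A = rng.normal(size=(D, D))
H = (A + A.T) / 2.0
evals, evecs = np.linalg.eigh(H)

psi0 = np.zeros(D)
psi0[0] = 1.0  # initial basis state

def shannon_entropy(t):
    # W_j(t) = |<j| exp(-iHt) |psi0>|^2 in the original basis
    psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.T @ psi0))
    W = np.abs(psi_t) ** 2
    W = W[W > 1e-15]  # drop numerically zero weights before taking the log
    return float(-np.sum(W * np.log(W)))

print(shannon_entropy(0.0))  # 0: all weight on one basis state
print(shannon_entropy(5.0))  # grown toward the bound
print(np.log(D))             # maximum possible value, ln D
```

For the spin model itself, one would replace `H` by the NNN Hamiltonian matrix in the relevant ${\cal S}^z$ subspace.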
\begin{figure}[htb]
\centering
\includegraphics*[scale=0.55]{Fig04new.eps}
\caption{Evolution of the Shannon entropy in the site-basis. NNN model with $\Delta=0.5$, $\lambda=1$, periodic boundary conditions, and no impurities. $|\Psi^{(N)}_1(0) \rangle $ (which becomes the domain wall state at half-filling) is considered in (a) and $|\Psi^{(N)}_2(0) \rangle$ (which becomes the N\'eel state at half-filling) is used in (b). From bottom to top, the lines correspond to $(L,N) = (1400,2)$, $(184,3)$, $(72,4)$, $(44,5)$, $(32, 6)$, $(28, 7)$, and $(22, 11)$ (half filling limit). A linear increase (red dashed line) is shown for comparison with the half filled case.}
\label{Fig:Sh}
\end{figure}
Figure~\ref{Fig:Sh} shows the evolution of $S_h(t)$ for $N=2,3,4,5,6,7$, and also for the half-filling limit $N=L/2$. The system sizes used lead to dimensions $D$ of the Hilbert space that are similar for all $N$'s, so that the saturation point of the dynamics for all curves are of the same order. Data for the initial state $|\Psi^{(N)}_1(0) \rangle$ are shown in Fig.~\ref{Fig:Sh} (a) and those for $|\Psi^{(N)}_2(0) \rangle$ in Fig.~\ref{Fig:Sh} (b).
As expected, for the state $|\Psi^{(N)}_1(0) \rangle$, $S_h(t)$ follows exactly the half-filling curve up to a time scale dependent on $N$. This happens because the short-time dynamics are restricted to the interface between the up and down spins at the edges of the domain. For the state $|\Psi^{(N)}_2(0) \rangle$, the evolution does not follow the half-filling curve at any time.
In the half-filled case, the two initial states evolve qualitatively in the same way. The Shannon entropy grows linearly, indicating that the site-basis vectors are populated exponentially fast in time, as is typical of chaotic systems. For the other $N$'s, the linear growth holds for a certain time interval, but then the evolution slows down and becomes logarithmic. The time interval of the linear behavior increases with $N$, but the crossover between the fast relaxation at short times and the slow dynamics at long times is always visible. This may be interpreted as a crossover from a short-time regime, in which the excitations interact with each other, to a later-time regime, in which the excitations are too diluted to experience strong interactions. Even though somewhat expected, this change in behavior was not anticipated by the analysis of the eigenvalues and eigenstates developed here. This suggests that with other quantities and a more refined analysis, we may be able to distinguish systems with $N=L/2$ from those with $N\neq L/2$ already at the static level. For $N\gsim4$, the source of these differences should, however, be associated with the filling of the chain~\cite{Vidmar2017} and not with a small {\em vs.} large number of excitations.
We note that a similar change in the dynamical behavior also occurs in many-body models with onsite disorder as they approach localization in space~\cite{Kjall2014,Torres2017}. However, the pre-factor of the logarithmic behavior in that case is smaller than 1 and related to the fractal dimension of the eigenstates~\cite{Torres2017}. The pre-factor in our clean model with few excitations is larger than 2 and it increases with $N$. In fact, at least for $N$ up to $4$, the pre-factor seems to be $\sim N$.
Overall, the results display interesting features that will be studied in greater detail in a future work. This includes the pre-factor of the logarithmic behavior and how it depends on the number of excitations, the initial states, and the bounds in the energy spectrum.
\section{Conclusion}
\label{sec:conclusion}
While many tools exist to study systems in the single-particle and in the many-body limit, the crossover between these two regimes is still poorly understood. In this work, we analyzed one of the aspects of this crossover, namely how signatures of quantum chaos emerge as the number of excitations increases. We showed that many-body properties associated with the eigenvalues and eigenstates manifest themselves already for as few as $N\gsim4$. This was done by analyzing different standard indicators of quantum chaos and finding that they all give consistent results. It is interesting that other many-body properties were also found for $N\sim 4$ in experimental~\cite{Wenz2013} and theoretical~\cite{Schnack1996,Flambaum1997,Schnack2000,Izrailev2001, Harshman2017} works.
From the point of view of the dynamics of the system, the behavior depends on the time scale and initial state. If one initially confines all excitations to a small region, the evolution at short times is dominated by the interactions and the behavior is analogous to the many-body case. At long times, the excitations spread out and the effects of the interactions fade away. The analysis of the crossover between the two different temporal regimes is within reach of existing experiments with cold atoms and ion traps, where the number of particles considered in the dynamics can be manipulated.
The fact that we can detect many-body properties for as few as 4 particles is of course of great relevance for experimental and theoretical studies of many-body quantum systems, as well as for the development of new numerical methods targeting these systems. It implies, for example, that an out-of-equilibrium isolated interacting quantum system with only $N\gsim4$ excitations should be able to reach thermal equilibrium. It also means that it may be as hard to localize an interacting system with about 4 particles as it is for $N=L/2$.
It is our hope that this work will motivate further research on how the properties of quantum systems change as the number of particles increases. It would be interesting to extend our studies to non-chaotic systems, such as exactly integrable models, where analytical results could be obtained, and many-body localized systems. For the latter, localization properties may change from few particles to the many-body limit. This analysis may shed light on the influence of finite size effects on the localization transition~\cite{Devakul2015,DeRoeck2016}.
\ack
This work was funded by the U.S. National Science Foundation (NSF) Grant No. DMR-1603418.
\section{Introduction}
\label{sec:introduction}
Visual object tracking is one of the most fundamental and challenging tasks in computer vision. Given the bounding box of an unknown target in the first frame, the objective is to localize the target in all subsequent frames of a video sequence. Visual object tracking finds numerous applications in surveillance, autonomous systems, and augmented reality, yet it remains very challenging. For one thing, with only a bounding box in the first frame, it is difficult to differentiate the unknown target from the cluttered background when the target moves, deforms, or changes appearance for various reasons. For another, most applications demand real-time tracking, and it is even harder to design a real-time high-performance tracker.
The key to designing a high-performance tracker is to find expressive features and corresponding classifiers that are simultaneously \emph{discriminative} and \emph{generalized}. Being discriminative allows the tracker to differentiate the true target from the cluttered or even deceptive background. Being generalized means that the tracker tolerates appearance changes of the tracked object, even when the object is not known a priori. Conventionally, both the discrimination and the generalization power need to be strengthened through an online training process, which collects target information while tracking. However, online updating is time-consuming, especially when a large number of parameters are involved. It is therefore crucial to balance the tracking performance and the run-time speed.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=\columnwidth]{final_fig/examples-crop.pdf}
\end{center}
\caption{Comparing the tracking results of SiamFC and our tracker. Thanks to the semantic features, our tracker successfully follows the target object in case of shooting angle change or scale change, when SiamFC fails. }
\vspace{-10pt}
\label{fig:examples}
\end{figure}
In recent years, deep convolutional neural networks (CNNs) have demonstrated their superior capabilities in various vision tasks. They have also significantly advanced the state-of-the-art of object tracking. Some trackers \cite{DeepSRDCF, HCF, HDT, CCOT, ECO} integrate deep features into conventional tracking approaches and benefit from the expressive power of CNN features. Others \cite{MDNET, TCNN, BranchOut, STCT} directly use CNNs as classifiers and take full advantage of end-to-end training. Most of these approaches adopt online training to boost the tracking performance. However, due to the high volume of CNN features and the complexity of deep neural networks, it is computationally expensive to perform online training. As a result, most online CNN-based trackers operate far below real-time speed.
Meanwhile, two real-time CNN-based trackers \cite{GOTURN,SiamFC} have emerged, which achieve high tracking speed by completely avoiding online training. While GOTURN \cite{GOTURN} treats object tracking as a box regression problem, SiamFC \cite{SiamFC} treats it as a similarity learning problem. SiamFC achieves a much better performance than GOTURN, owing to its fully convolutional network architecture, which allows SiamFC to make full use of the offline training data and makes it highly discriminative.
However, the generalization capability of SiamFC remains quite poor, and it encounters difficulties when the target has significant appearance changes, as shown in Fig.~\ref{fig:examples}. As a result, SiamFC still has a performance gap to the best online trackers.
In this paper, we aim to improve the generalization capability of SiamFC. It is widely understood that, in a deep CNN trained for an image classification task, features from deeper layers contain stronger semantic information and are more invariant to object appearance changes. These semantic features are an ideal complement to the appearance features trained in a similarity learning problem.
Inspired by this observation, we design SA-Siam, which is a twofold Siamese network comprised of a semantic branch and an appearance branch. Each branch is a Siamese network computing the similarity scores between the target image and a search image. In order to maintain the heterogeneity of the two branches, they are separately trained and not combined until the similarity score is obtained by each branch.
For the semantic branch, we further propose a channel attention mechanism to achieve a minimum degree of target adaptation. The motivation is that different objects activate different sets of feature channels. We shall give higher weights to channels that play more important roles in tracking specific targets. This is realized by computing channel-wise weights based on the channel responses at the target object and in the surrounding context. This simplest form of target adaptation improves the discrimination power of the tracker. Evaluations show that our tracker outperforms all other real-time trackers by a large margin on OTB-2013/50/100 benchmarks, and achieves state-of-the-art performance on VOT benchmarks.
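As a rough illustration of the idea (a simplified stand-in of our own, not the exact attention module of SA-Siam): per-channel weights can be derived by comparing each channel's response on the target region with its response on the surrounding context:

```python
import numpy as np

rng = np.random.default_rng(8)

def channel_weights(feat, target_mask, temperature=1.0):
    """Illustrative channel attention: weight each channel by how much
    stronger its peak response is on the target region than on the context.
    feat: (C, H, W) feature map; target_mask: (H, W) boolean mask.
    This is a hypothetical sketch, not the paper's exact design."""
    target = feat[:, target_mask].max(axis=1)    # (C,) peak response on target
    context = feat[:, ~target_mask].max(axis=1)  # (C,) peak response on context
    score = (target - context) / temperature
    return 1.0 / (1.0 + np.exp(-score))          # sigmoid -> per-channel weight in (0, 1)

# Toy usage: 256 channels on a 6x6 spatial grid, target in the center
feat = rng.random((256, 6, 6))
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 2:4] = True
w = channel_weights(feat, mask)
print(w.shape)
```

Channels that respond more strongly on the target than on the context receive weights above 0.5 and thus contribute more to the similarity score.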
The rest of the paper is organized as follows. We first introduce related work in Section \ref{sec:related}. Our approach is described in Section \ref{sec:approach}. The experimental results are presented in Section \ref{sec:experiment}. Finally, Section \ref{sec:conclusion} concludes the paper.
\begin{figure*}
\begin{center}
\includegraphics[width=0.95\textwidth]{final_fig/architecture-crop.pdf}
\end{center}
\caption{The architecture of the proposed twofold SA-Siam network. A-Net indicates the appearance network. The network and data structures connected with dotted lines are exactly the same as SiamFC \cite{SiamFC}. S-Net indicates the semantic network. Features from the last two convolution layers are extracted. The channel attention module determines the weight for each feature channel based on both target and context information. The appearance branch and the semantic branch are separately trained and not combined until testing time. }
\vspace{-10pt}
\label{fig:framework}
\end{figure*}
\section{Related Work} \label{sec:related}
\subsection{Siamese Network Based Trackers}
Visual object tracking can be modeled as a similarity learning problem. By comparing the target image patch with the candidate patches in a search region, we can track the object to the location where the highest similarity score is obtained. A notable advantage of this method is that it needs little or no online training. Thus, real-time tracking can be easily achieved.
Similarity learning with deep CNNs is typically addressed using Siamese architectures \cite{SiamRep}. The pioneering work for object tracking is the fully convolutional Siamese network (SiamFC) \cite{SiamFC}. The advantage of a fully-convolutional network is that, instead of a candidate patch of the same size of the target patch, one can provide as input to the network a much larger search image and it will compute the similarity at all translated sub-windows on a dense grid in a single evaluation \cite{SiamFC}. This advantage is also reflected at training time, where every sub-window effectively represents a useful sample with little extra cost.
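The dense similarity evaluation amounts to cross-correlating the template feature map with the search-region feature map. A toy single-channel sketch with identity "features" (purely illustrative; real trackers correlate learned multi-channel CNN features):

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(7)

# Toy "feature maps": an 8x8 template embedded in a 24x24 search region
template = rng.normal(size=(8, 8))
search = rng.normal(size=(24, 24)) * 0.1
r0, c0 = 9, 4                              # ground-truth target location
search[r0:r0 + 8, c0:c0 + 8] += template

# Score map: similarity at every translated sub-window in a single pass
score = correlate2d(search, template, mode="valid")
peak = np.unravel_index(np.argmax(score), score.shape)
print(peak)  # the peak recovers the target location (r0, c0)
```

The single `correlate2d` call evaluates all translated sub-windows at once, which is exactly what makes the fully-convolutional formulation efficient.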
There is a large number of follow-up works \cite{RFL,CFNET,EAST,SINT,DSiam} of SiamFC. EAST \cite{EAST} attempts to speed up the tracker by early stopping the feature extractor if low-level features are sufficient to track the target. CFNet \cite{CFNET} introduces correlation filters for low-level CNN features to speed up tracking without an accuracy drop.
SINT \cite{SINT} incorporates optical flow information and achieves better performance. However, since computing optical flow is computationally expensive, SINT only operates at 4 frames per second (fps). DSiam \cite{DSiam} updates the embedding of the tracked target online, achieving significantly better performance without much loss of speed.
SA-Siam inherits its network architecture from SiamFC. We intend to improve SiamFC through an innovative use of heterogeneous features.
\subsection{Ensemble Trackers}
Our proposed SA-Siam is composed of two separately trained branches focusing on different types of CNN features. It shares some insights and design principles with ensemble trackers.
HDT \cite{HDT} is a typical ensemble tracker. It constructs trackers using correlation filters (CFs) with CNN features from each layer, and then uses an adaptive Hedge method to combine these CNN trackers. TCNN \cite{TCNN} maintains multiple CNNs in a tree structure to learn ensemble models and estimate target states. STCT \cite{STCT} is a sequential training method for CNNs that effectively transfers pre-trained deep features to online applications. An ensemble CNN-based classifier is trained to reduce the correlation across models. BranchOut \cite{BranchOut} employs a CNN for target representation which has common convolutional layers but multiple branches of fully connected layers.
It allows each branch to have a different number of layers so as to maintain variable abstraction levels of target appearance. PTAV \cite{PTAV} keeps two classifiers, one acting as the tracker and the other as the verifier. The combination of an efficient tracker, which runs on every frame, and a powerful verifier, which only runs when necessary, strikes a good balance between speed and accuracy.
A common insight of these ensemble trackers is that a strong tracker can be built from different layers of CNN features, provided the correlation across models is weak. In the SA-Siam design, the appearance branch and the semantic branch use features at very different abstraction levels, and they are not jointly trained, which keeps them from becoming homogeneous.
\subsection{Adaptive Feature Selection}
Different features have different impacts on different tracked targets. Using all available features for a single object tracking task is neither efficient nor effective. In SCT \cite{SCT} and ACFN \cite{ACFN}, the authors build an attention network to select the best module among several feature extractors for the tracked target. HART \cite{HART} and RATM \cite{RATM} use an RNN with an attention model to separate the \emph{where} and \emph{what} processing pathways and actively suppress irrelevant visual features. Recently, SENet \cite{SENET} demonstrated the effectiveness of channel-wise attention on image recognition tasks.
In our SA-Siam network, we perform channel-wise attention based on the channel activations. It can be viewed as a form of target adaptation, which potentially improves tracking performance.
\section{Our Approach} \label{sec:approach}
We propose a twofold fully-convolutional siamese network for real-time object tracking. The fundamental idea behind this design is that the appearance features trained in a similarity learning problem and the semantic features trained in an image classification problem complement each other, and therefore should be jointly considered for robust visual tracking.
\subsection{SA-Siam Network Architecture}
The network architecture of the proposed SA-Siam network is depicted in Fig.~\ref{fig:framework}. The input of the network is a pair of image patches cropped from the first (target) frame of the video sequence and the current frame for tracking. We use the notations $z$, $z^s$ and $X$ to denote the target image, the target with surrounding context and the search region, respectively. Both $z^s$ and $X$ have a size of $W_s \times H_s \times 3$. The exact target $z$ has a size of $W_t \times H_t \times 3$ ($W_t < W_s$ and $H_t < H_s$) and is located at the center of $z^s$. $X$ can be viewed as a collection of candidate image patches $x$ in the search region which have the same dimension as $z$.
SA-Siam is composed of the appearance branch (indicated by blue blocks in the figure) and the semantic branch (indicated by orange blocks).
The output of each branch is a response map indicating the similarity between target $z$ and candidate patch $x$ within the search region $X$. The two branches are separately trained and not combined until testing time.
\textbf{The appearance branch:} The appearance branch takes $(z,X)$ as input. It clones the SiamFC network \cite{SiamFC}. The convolutional network used to extract appearance features is called A-Net, and the features extracted are denoted by $f_a(\cdot)$. The response map from the appearance branch can be written as:
\begin{equation}
h_a(z, X) = corr(f_a(z), f_a(X)),
\end{equation}
where $corr(\cdot)$ is the correlation operation. All the parameters in the A-Net are trained from scratch in a similarity learning problem. In particular, with abundant pairs $(z_i, X_i)$ from training videos and corresponding ground-truth response map $Y_i$ of the search region, A-Net is optimized by minimizing the logistic loss function $L(\cdot)$ as follows:
\begin{equation}
\arg \min_{\theta_a} \frac{1}{N} \sum_{i=1}^N{\left\lbrace L \left( h_a(z_i, X_i{;}\; \theta_a), Y_i \right) \right\rbrace},
\end{equation}
where $\theta_a$ denotes the parameters in A-Net and $N$ is the number of training samples.
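To make the correlation operation concrete, here is a minimal NumPy sketch (our illustrative implementation, not the authors' code; the toy dimensions follow the SiamFC setting described in the implementation details) of computing the appearance response map:

```python
import numpy as np

def corr(kernel, search):
    """Cross-correlate target features (kh, kw, c) over search
    features (sh, sw, c), producing a (sh-kh+1, sw-kw+1) response map."""
    kh, kw, c = kernel.shape
    sh, sw, _ = search.shape
    out = np.empty((sh - kh + 1, sw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(kernel * search[i:i + kh, j:j + kw, :])
    return out

# Toy dimensions: 6x6x256 target features, 22x22x256 search features
f_z = np.random.rand(6, 6, 256)
f_X = np.random.rand(22, 22, 256)
h_a = corr(f_z, f_X)
print(h_a.shape)  # (17, 17), matching the 17x17 response map
```

In practice this dense sliding-window evaluation is what the fully convolutional architecture computes in a single forward pass.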
\textbf{The semantic branch:} The semantic branch takes $(z^s, X)$ as input. We directly use a CNN pretrained on the image classification task as S-Net and fix all its parameters during training and testing. We let S-Net output features from the last two convolutional layers ($conv4$ and $conv5$ in our implementation), since they provide different levels of abstraction. Low-level features are not extracted.
Features from different convolutional layers have different spatial resolution. For simplicity of notation, we denote the concatenated multilevel features by $f_s(\cdot)$. In order to make the semantic features suitable for the correlation operation, we insert a fusion module, implemented by $1 \times 1$ ConvNet, after feature extraction. The fusion is performed within features of the same layer. The feature vector for search region $X$ after fusion can be written as $g(f_s(X))$.
The target processing is slightly different. S-Net takes $z^s$ as the target input. $z^s$ has $z$ in its center and contains the context information of the target. Since S-Net is fully convolutional, we can easily obtain $f_s(z)$ from $f_s(z^s)$ by a simple crop operation. The attention module takes $f_s(z^s)$ as input and outputs the channel weights $\xi$. The details of the attention module are given in the next subsection. The features are multiplied by the channel weights before they are fused by the $1 \times 1$ ConvNet. As such, the response map from the semantic branch can be written as:
\begin{equation}
h_s(z^s, X) = corr\left(
g\left(\xi \cdot f_s(z)\right),
g\left(f_s(X)\right)
\right),
\end{equation}
where $\xi$ has one element per channel of $f_s(z)$ and $\cdot$ denotes channel-wise multiplication.
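The channel weighting and the $1 \times 1$ fusion can be sketched as follows (a hedged illustration with random stand-in values; `w_fuse` is a hypothetical fusion matrix, not the trained parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# conv5 target features cropped from f_s(z^s): 6x6x256 (dims from the paper)
f_z = rng.random((6, 6, 256))
xi = rng.uniform(0.5, 1.5, 256)   # per-channel attention weights

weighted = xi * f_z               # broadcast over the channel axis

# 1x1 ConvNet fusion: a per-position linear map from 256 to 128 channels
w_fuse = rng.standard_normal((256, 128))
fused = weighted @ w_fuse
print(fused.shape)  # (6, 6, 128)
```

A $1 \times 1$ convolution is exactly this per-position linear map over channels, which is why the spatial resolution is unchanged.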
In the semantic branch, we only train the fusion module and the channel attention module. With training pairs $(z_i^s, X_i)$ and ground-truth response map $Y_i$, the semantic branch is optimized by minimizing the following logistic loss function $L(\cdot)$:
\begin{equation}
\arg \min_{\theta_s} \frac{1}{N} \sum_{i=1}^N{\left\lbrace L \left( h_s(z_i^s, X_i{;}\; \theta_s), Y_i \right) \right\rbrace},
\end{equation}
where $\theta_s$ denotes the trainable parameters and $N$ is the number of training samples.
During testing time, the overall heat map is computed as the weighted average of the heat maps from the two branches:
\begin{equation}
h(z^s,X) = \lambda h_a(z,X) + (1-\lambda)h_s(z^s,X),
\end{equation}
where $\lambda$ is the weighting parameter to balance the importance of the two branches. In practice, $\lambda$ can be estimated from a validation set. The position of the largest value in $h(z^s, X)$ suggests the center location of the tracked target. Similar to SiamFC \cite{SiamFC}, we use multi-scale inputs to deal with scale changes. We find that using three scales strikes a good balance between performance and speed.
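The testing-time fusion amounts to a weighted average followed by an argmax, sketched below (the $17 \times 17$ response maps are random placeholders, and $\lambda = 0.3$ is the value the paper later reports from validation):

```python
import numpy as np

lam = 0.3                                   # weight estimated on a validation set
rng = np.random.default_rng(1)
h_a = rng.random((17, 17))                  # appearance response map (placeholder)
h_s = rng.random((17, 17))                  # semantic response map (placeholder)

h = lam * h_a + (1 - lam) * h_s             # combined heat map
row, col = np.unravel_index(np.argmax(h), h.shape)
print(row, col)   # grid position suggesting the target center
```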
\subsection{Channel Attention in Semantic Branch}
High-level semantic features are robust to appearance changes of objects, and therefore make a tracker more generalized but less discriminative. In order to enhance the discriminative power of the semantic branch, we design a channel attention module.
Intuitively, different channels play different roles in tracking different targets. Some channels may be extremely important for tracking certain targets while being dispensable for tracking others. If we could adapt the channel importance to the tracked target, we would achieve a minimum form of target adaptation. For this purpose, not only the target itself but also its surrounding context matters. Therefore, the input to our proposed attention module is the feature map of $z^s$ instead of $z$.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=\columnwidth]{final_fig/attention.pdf}
\end{center}
\caption{Channel-wise attention generates the weighting coefficient $\xi_i$ for channel $i$ through max pooling and a multilayer perceptron (MLP).}
\vspace{-10pt}
\label{fig:attention}
\end{figure}
The attention module is realized by channel-wise operations. Fig.~\ref{fig:attention} shows the attention process for the $i^{th}$ channel. Taking a $conv5$ feature map as an example, its spatial dimension is $22 \times 22$ in our implementation. We divide the feature map into $3 \times 3$ grids, where the center grid of size $6 \times 6$ corresponds to the tracking target $z$. Max pooling is performed within each grid, and a two-layer multilayer perceptron (MLP) then produces a coefficient for the channel. Finally, a Sigmoid function with bias generates the final weight $\xi_i$. The MLP module shares weights across channels extracted from the same convolutional layer.
Note that attention is only added to the target processing. All the activations in channel $i$ for the target patch are multiplied by $\xi_i$.
Therefore, this module is passed only once for the first frame of a tracking sequence. The computational overhead is negligible.
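A minimal sketch of the attention computation for one layer follows. The paper does not specify how the $22 \times 22$ map is split into a $3 \times 3$ grid with a $6 \times 6$ center; the $8/6/8$ split used here is our assumption, and the MLP weights are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_weight(channel, w1, b1, w2, b2):
    """Weight for one 22x22 channel: max pool over a 3x3 grid whose
    6x6 center covers the target (8/6/8 split is an assumption),
    then a two-layer MLP and a Sigmoid biased by 0.5."""
    edges = [0, 8, 14, 22]
    pooled = np.array([channel[edges[r]:edges[r+1], edges[c]:edges[c+1]].max()
                       for r in range(3) for c in range(3)])   # 9-dim vector
    hidden = np.maximum(0.0, pooled @ w1 + b1)                 # ReLU, 9 units
    return sigmoid(hidden @ w2 + b2) + 0.5                     # xi_i in (0.5, 1.5)

# MLP weights are shared across all channels of the same layer
w1, b1 = rng.standard_normal((9, 9)), np.zeros(9)
w2, b2 = rng.standard_normal(9), 0.0

feat = rng.random((22, 22, 256))               # conv5 features of z^s (placeholder)
xi = np.array([attention_weight(feat[:, :, i], w1, b1, w2, b2)
               for i in range(feat.shape[2])])
print(xi.shape)  # (256,)
```

The biased Sigmoid keeps every weight strictly above $0.5$, so no channel is ever fully suppressed.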
\subsection{Discussions of Design Choices}
\textbf{We separately train the two branches.} We made this choice based on the following observations. For some training samples, tracking with semantic clues may be much easier than with appearance clues, and for some others, it could just be the opposite. Let us take the former case as an example. If the two branches are jointly trained, the overall loss could be small when the semantic branch has a discriminative heat map and the appearance branch has a non-informative heat map. Then, these training samples do not play their part in optimizing the appearance branch. As a result, both branches are less powerful when they are jointly trained than separately trained.
\textbf{We do not fine-tune S-Net.} A common practice in transfer learning is to fine-tune the pre-trained network in the new problem. We choose not to do so because fine-tuning S-Net using the same procedure as we train A-Net will make the two branches homogeneous. Although fine-tuning S-Net may improve the tracking performance of the semantic branch alone, the overall performance could be unfavorably impacted. We have carried out experiments to verify this design choice, although we do not include them in this paper due to space limitation.
\textbf{We keep A-Net as it is in SiamFC.} Using multilevel features and adding channel attention significantly improve the semantic branch, but we do not apply them to the appearance branch. This is because appearance features from different convolutional layers do not differ significantly in expressiveness. We cannot apply the same attention module to the appearance branch because high-level semantic features are very sparse while appearance features are quite dense: a simple max pooling operation generates a descriptive summary for semantic features but not for appearance features. A much more complicated attention module would therefore be needed for A-Net, and the gain may not be worth the cost.
\section{Experiments} \label{sec:experiment}
\subsection{Implementation Details}
\textbf{Network structure:} Both A-Net and S-Net use an AlexNet\cite{ALEXNET}-like network as the base network. A-Net has exactly the same structure as the SiamFC network \cite{SiamFC}. S-Net is loaded from an AlexNet pretrained on ImageNet\cite{ILSVRC}. A small modification is made to the stride so that the last-layer output of S-Net has the same dimension as that of A-Net.
In the attention module, the pooled features of each channel are stacked into a 9-dimensional vector. The MLP that follows has one hidden layer of nine neurons with ReLU non-linearity. The MLP output passes through a Sigmoid function with bias $0.5$, which ensures that no channel is suppressed to zero.
\textbf{Data Dimensions:} In our implementation, the target image patch $z$ has a dimension of $127 \times 127 \times 3$, and both $z^s$ and $X$ have a dimension of $255 \times 255 \times 3$. The output features of A-Net for $z$ and $X$ have dimensions of $6 \times 6 \times 256$ and $22 \times 22 \times 256$, respectively. The $conv4$ and $conv5$ features from S-Net have dimensions of $24 \times 24 \times 384$ and $22 \times 22 \times 256$, respectively, for both $z^s$ and $X$. The $1 \times 1$ ConvNet for each of these two sets of features outputs 128 channels (which add up to 256), with the spatial resolution unchanged. The response maps all have a dimension of $17 \times 17$.
\textbf{Training:} Our approach is trained offline on the ILSVRC-2015~\cite{ILSVRC} video dataset and we only use color images. Among a total of more than 4,000 sequences, there are around 1.3 million frames and about 2 million tracked objects with ground truth bounding boxes. For each tracklet, we randomly pick a pair of images and crop $z^s$ from one image with $z$ in the center and crop $X$ from the other image with the ground truth target in the center.
Both branches are trained for $30$ epochs, with a learning rate of $0.01$ for the first $25$ epochs and $0.001$ for the last five.
We implement our model in the TensorFlow\cite{Tensorflow} 1.2.1 framework. Our experiments are performed on a PC with a Xeon E5 2.4GHz CPU and a GeForce GTX Titan X GPU. The average testing speed of SA-Siam is 50 fps.
\textbf{Hyperparameters:}
The two branches are combined with a weight $\lambda$. This hyperparameter is estimated on a small validation set, TC128\cite{TC128}, excluding sequences that appear in the OTB benchmarks. We perform a grid search from $0.1$ to $0.9$ with step $0.2$. Evaluations suggest that the best performance is achieved at $\lambda=0.3$, and we use this value for all test sequences. During evaluation and testing, three scales are searched to handle scale variation.
\begin{table*}
\begin{center}
\small{
\begin{tabular}{cccc|cc|cc|cc}
\hline
\multicolumn{4}{c}{} & \multicolumn{2}{|c}{OTB-2013} & \multicolumn{2}{|c}{OTB-50} & \multicolumn{2}{|c}{OTB-100}\\
App. & Sem. & ML & Att. & AUC & Prec. & AUC & Prec. & AUC & Prec. \\
\hline
\checkmark & && & 0.599 & 0.811 & 0.520 & 0.718 & 0.585 & 0.790\\
&\checkmark && & 0.607 & 0.821 & 0.517 & 0.715 & 0.588 & 0.791\\
\checkmark&\checkmark && & 0.649 & 0.862 & 0.583 & 0.792 & 0.637 & 0.843\\
\checkmark&\checkmark & \checkmark & & 0.656 & 0.865 & 0.581 & 0.778 & 0.641 & 0.841\\
\checkmark&\checkmark & &\checkmark & 0.650 & 0.861 & 0.591 & 0.803 & 0.642 & 0.849\\
\checkmark&\checkmark & \checkmark &\checkmark& 0.676 & 0.894 & 0.610 & 0.823 & 0.656 & 0.864\\
\hline
\end{tabular}}
\end{center}
\caption{Ablation study of SA-Siam on OTB benchmarks. App. and Sem. denote the appearance model and the semantic model. ML means using multilevel features and Att. denotes the attention module.}
\vspace{-10pt}
\label{tab:res-emp}
\end{table*}
\subsection{Datasets and Evaluation Metrics}
\textbf{OTB:} The object tracking benchmarks (OTB)\cite{OTB13, OTB15} consist of three datasets, namely OTB-2013, OTB-50 and OTB-100, containing 51, 50 and 100 real-world tracking targets, respectively. Each sequence is annotated with a subset of eleven interference attributes.
The two standard evaluation metrics on OTB are success rate and precision. For each frame, we compute the IoU (intersection over union) between the tracked and ground-truth bounding boxes, as well as the distance between their center locations. A success plot is obtained by evaluating the success rate at different IoU thresholds; conventionally, the area under curve (AUC) of the success plot is reported. The precision plot is obtained in a similar way, but usually the representative precision at a threshold of 20 pixels is reported. We use the standard OTB toolkit to obtain all numbers.
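These metrics can be sketched compactly (our illustrative implementation, not the OTB toolkit; boxes are given as $(x, y, w, h)$):

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (x, y, w, h)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def center_dist(a, b):
    """Euclidean distance between box centers, in pixels."""
    return np.hypot((a[0] + a[2] / 2) - (b[0] + b[2] / 2),
                    (a[1] + a[3] / 2) - (b[1] + b[3] / 2))

def auc_success(ious, thresholds=np.linspace(0, 1, 21)):
    """Mean success rate over IoU thresholds (the reported AUC score)."""
    ious = np.asarray(ious)
    return np.mean([(ious > t).mean() for t in thresholds])

def precision_at_20(dists):
    """Fraction of frames whose center error is within 20 pixels."""
    return (np.asarray(dists) <= 20).mean()

gt, pred = (50, 50, 40, 40), (55, 52, 40, 40)
print(round(iou(gt, pred), 3), round(center_dist(gt, pred), 1))  # 0.711 5.4
```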
\textbf{VOT:} The visual object tracking (VOT) benchmark has many versions, and we use the most recent ones: VOT2015\cite{VOT2015}, VOT2016\cite{VOT2016} and VOT2017\cite{VOT2017}. VOT2015 and VOT2016 contain the same sequences, but the ground truth label in VOT2016 is more accurate than that in VOT2015.
In VOT2017, ten sequences from VOT2016 are replaced by new ones.
The VOT benchmarks evaluate a tracker with a reset-based methodology: whenever the tracker loses all overlap with the ground truth, it is re-initialized after five frames. The major evaluation metrics are accuracy (A), robustness (R) and expected average overlap (EAO). A good tracker has high A and EAO scores but a low R score.
\subsection{Ablation Analysis}
We use experiments to verify our claims and justify our design choices in SA-Siam. We use the OTB benchmark for the ablation analysis.
\textbf{The semantic branch and the appearance branch complement each other.}
We evaluate the performance of the semantic model alone (model $S_1$) and the appearance model alone (model $A_1$). $S_1$ is a basic version with only S-Net and the fusion module. Both $S_1$ and $A_1$ are trained from scratch with random initialization. The results are reported in the first two rows of Table \ref{tab:res-emp}. The third row presents the results of an SA-Siam model which combines $S_1$ and $A_1$. The improvement is substantial, and the combined model achieves state-of-the-art performance even without multilevel semantic features or channel attention.
\begin{table}
\begin{center}
\small{
\begin{tabular}{l|c c c c}
\hline
Model & $S_2$ & $A_2$ & $A_1A_2$ & $S_1S_2$ \\
\hline
AUC & 0.606 & 0.602 & 0.608 & 0.602 \\
Prec. & 0.822 & 0.806 & 0.813 & 0.811 \\
\hline
\end{tabular}}
\end{center}
\caption{Evaluation of separate and ensemble models on OTB-2013. $A_1A_2$ is an ensemble of appearance models and $S_1S_2$ is an ensemble of basic semantic models.}
\vspace{0pt}
\label{tab:sacombine}
\end{table}
In order to show the advantage of using heterogeneous features, we compare SA-Siam with two simple ensemble models. We train another semantic model $S_2$ using different initialization, and take the appearance model $A_2$ published by the SiamFC authors. Then we ensemble $A_1A_2$ and $S_1S_2$. Table \ref{tab:sacombine} shows the performance of the separate and the ensemble models. It is clear that the ensemble models $A_1A_2$ and $S_1S_2$ do not perform as well as the SA-Siam model. This confirms the importance of complementary features in designing a twofold Siamese network.
\textbf{Using multilevel features and channel attention bring gain.}
The last three rows in Table \ref{tab:res-emp} show how each component improves the tracking performance. Directly using multilevel features is slightly helpful, but a mechanism to balance features of different abstraction levels is lacking. The attention module plays this role: it effectively balances the intra-layer and inter-layer channel importance and brings a significant gain.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\columnwidth]{final_fig/att_david_conv4-crop.pdf}
\includegraphics[width=0.45\columnwidth]{final_fig/att_david_conv5-crop.pdf}
\includegraphics[width=0.45\columnwidth]{final_fig/att_bolt_conv4-crop.pdf}
\includegraphics[width=0.45\columnwidth]{final_fig/att_bolt_conv5-crop.pdf}
\end{center}
\caption{Visualizing the channel weights output by attention module for video \emph{david} and \emph{bolt}. Channels are sorted according to the weights. There is no correspondence between channel numbers for the two videos. }
\vspace{-10pt}
\label{fig:att}
\end{figure}
\begin{table*}
\begin{center}
\small{
\begin{tabular}{l|l|ccccccccccc}
\hline
& Tracker & SA-Siam & BACF & PTAV & ECOhc & DSiamM & EAST & Staple & SiamFC & CFNet & LMCF & LCT\\
&& (ours) & \cite{BACF} &\cite{PTAV} & \cite{ECO} &\cite{DSiam} &\cite{EAST} &\cite{STAPLE}&\cite{SiamFC} &\cite{CFNET} &\cite{LMCF} &\cite{LCT}\\
\hline
\multirow{2}{*}{OTB-2013}
& AUC & \textbf{0.677} & 0.656* & 0.663 & 0.652 & 0.656 & 0.638 & 0.593 & 0.607 & 0.611 & 0.628& 0.628 \\
&Prec. & \textbf{0.896} & 0.859 & 0.895 & 0.874 & 0.891 & - & 0.782 & 0.809 & 0.807 & 0.842& 0.848 \\
\hline
\multirow{2}{*}{OTB-50}
& AUC & \textbf{0.610} & 0.570 & 0.581 & 0.592 & - & - & 0.507 & 0.516 & 0.530 & 0.533 & 0.492 \\
&Prec. & \textbf{0.823} & 0.768 & 0.806 & 0.814 & - & - & 0.684 & 0.692 & 0.702 & 0.730 & 0.691 \\
\hline
\multirow{2}{*}{OTB-100}
& AUC & \textbf{0.657} & 0.621* & 0.635 &0.643 & - & 0.629 & 0.578 & 0.582 & 0.568 & 0.580& 0.562 \\
&Prec. & \textbf{0.865} & 0.822 & 0.849 & 0.856 & - & - & 0.784 & 0.771 & 0.748 & 0.789 & 0.762 \\
\hline
& FPS & 50 & 35 & 25 & 60 & 25 & 159 & 80 & 86 & 75 & 85 & 27 \\
\hline
\end{tabular}}
\end{center}
\caption{Comparison of state-of-the-art real-time trackers on OTB benchmarks. * The reported AUC scores of BACF on OTB-2013 (excluding \emph{jogging-2}) and OTB-100 are 0.678 and 0.630, respectively. They have used a slightly different measure.}
\vspace{-10pt}
\label{tab:res-soa}
\end{table*}
Fig.~\ref{fig:att} visualizes the channel weights for the two convolutional layers on the video sequences \emph{david} and \emph{bolt}. We use a Sigmoid function with bias $0.5$, so the weights fall in the range $[0.5,1.5]$. First, we observe that the weight distributions are quite different for $conv4$ and $conv5$. This hints at why the attention module has a larger impact on models using multilevel features. Second, the $conv4$ weight distributions differ between the two videos: the attention module tends to suppress more $conv4$ channels for \emph{bolt}.
\textbf{Separate vs. joint training.} We have claimed that the two branches in SA-Siam should be separately trained. In order to support this statement, we tried to jointly train the two branches (without multilevel features or attention). As we anticipated, the performance is not as good as that of the separately trained model. The AUC and precision of the jointly trained model on the OTB-2013/50/100 benchmarks are (0.630, 0.831), (0.546, 0.739) and (0.620, 0.819), respectively.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\columnwidth]{final_fig/2013-prc-crop.pdf}
\includegraphics[width=0.48\columnwidth]{final_fig/2013-auc-crop.pdf}
\includegraphics[width=0.48\columnwidth]{final_fig/50-prc-crop.pdf}
\includegraphics[width=0.48\columnwidth]{final_fig/50-auc-crop.pdf}
\includegraphics[width=0.48\columnwidth]{final_fig/100-prc-crop.pdf}
\includegraphics[width=0.48\columnwidth]{final_fig/100-auc-crop.pdf}
\end{center}
\caption{The precision plots and success plots over three OTB benchmarks. Curves and numbers are generated with OTB toolkit.}
\vspace{-10pt}
\label{fig:OPEplots}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\textwidth]{final_fig/qualitative-crop.pdf}
\end{center}
\caption{Qualitative results comparing SA-Siam with other trackers on the sequences \emph{bolt, soccer, matrix, human3, skiing} and \emph{motorrolling}. SA-Siam tracks accurately and robustly in these hard cases. In the very challenging \emph{skiing} and \emph{motorrolling} sequences, SA-Siam keeps tracking the target when most of the other trackers fail.}
\vspace{-10pt}
\label{fig:qualitative}
\end{figure*}
\subsection{Comparison with State-of-the-Arts}
We compare SA-Siam with state-of-the-art real-time trackers on both the OTB and VOT benchmarks. Conventionally, a tracking speed beyond 25 fps is considered real-time. Our tracker runs at 50 fps.
\textbf{OTB benchmarks:}
We compare SA-Siam with BACF\cite{BACF}, PTAV\cite{PTAV}, DSiamM\cite{DSiam}, EAST\cite{EAST}, Staple\cite{STAPLE}, SiamFC\cite{SiamFC}, CFNet\cite{CFNET}, LMCF\cite{LMCF} and LCT\cite{LCT} on the OTB-2013/50/100 benchmarks. The precision plots and success plots of one-pass evaluation (OPE) are shown in Fig.~\ref{fig:OPEplots}. More results are summarized in Table \ref{tab:res-soa}. The comparison shows that SA-Siam achieves the best performance among these real-time trackers on all three OTB benchmarks.
Note that the plots in Fig.~\ref{fig:OPEplots} are obtained by running the OTB evaluation kit on the tracking result files released by the authors. The numbers for BACF \cite{BACF} differ from those in the original paper because its authors evaluate only 50 videos (excluding \emph{jogging-2}) for OTB-2013, and use a per-video average instead of a per-frame average.
\begin{table}
\begin{center}
\small{
\begin{tabular}{l|c c c c}
\hline
Tracker & A & R & EAO & FPS \\
\hline
MDNet & 0.60 & 0.69 & 0.38 & 1\\
DeepSRDCF & 0.56 & 1.05 & 0.32 & $<1$\\
EBT & 0.47 & 1.02 & 0.31 & 4.4\\
SRDCF & 0.56 & 1.24 & 0.29 & 5\\
\hline
BACF & 0.59 & 1.56 & - & 35\\
EAST & 0.57 & 1.03 & 0.34 & 159\\
Staple & 0.57 & 1.39 & 0.30 & 80\\
SiamFC & 0.55 & 1.58 & 0.29 & 86 \\
\hline
\textbf{SA-Siam}& \emph{0.59} & \emph{1.26} & \emph{0.31} & \emph{50} \\
\hline
\end{tabular}}
\end{center}
\caption{Comparisons between SA-Siam and state-of-the-art real-time trackers on VOT2015 benchmark. Four top non-real-time trackers in the challenge are also included as a reference.}
\vspace{-10pt}
\label{tab:res-vot15}
\end{table}
\textbf{VOT2015 benchmark:}
VOT2015 is one of the most popular object tracking benchmarks, so we also report the performance of some of the best non-real-time trackers as a reference, including MDNet\cite{MDNET}, DeepSRDCF\cite{DeepSRDCF}, SRDCF\cite{SRDCF} and EBT\cite{EBT}. SA-Siam is compared with BACF\cite{BACF}, EAST\cite{EAST}, Staple\cite{STAPLE} and SiamFC\cite{SiamFC}.
Table~\ref{tab:res-vot15} shows the raw scores generated from VOT toolkit and the speed (FPS) of each tracker. SA-Siam achieves the highest accuracy among all real-time trackers. While BACF has the same accuracy as SA-Siam, SA-Siam is more robust.
\textbf{VOT2016 benchmark:}
We compare SA-Siam with the top real-time trackers in the VOT2016 challenge \cite{VOT2016}, as well as Staple\cite{STAPLE}, SiamFC\cite{SiamFC} and ECOhc\cite{ECO}.
The results are shown in Table~\ref{tab:res-vot16}. On this benchmark, SA-Siam appears to be the most robust real-time tracker. The A and EAO scores are also among the top three.
\begin{table}
\begin{center}
\small{
\begin{tabular}{l|c c c c}
\hline
Tracker & A & R & EAO & FPS \\
\hline
ECOhc & 0.54 & 1.19 & 0.3221 & 60\\
Staple & 0.54 & 1.42 & 0.2952 & 80\\
STAPLE+ & 0.55 & 1.31 & 0.2862& $>25$\\
SSKCF & 0.54 & 1.43 & 0.2771& $>25$\\
SiamRN & 0.55 & 1.36 & 0.2766& $>25$\\
DPT & 0.49 & 1.85 & 0.2358& $>25$\\
SiamAN & 0.53 & 1.91 & 0.2352 & 86\\
NSAMF & 0.5 & 1.25 & 0.2269& $>25$\\
CCCT & 0.44 & 1.84 & 0.2230& $>25$\\
GCF & 0.51 & 1.57 & 0.2179& $>25$\\
\hline
\textbf{SA-Siam}& \emph{0.54} & \emph{1.08} & \emph{0.2911} & \emph{50}\\
\hline
\end{tabular}}
\end{center}
\caption{Comparisons between SA-Siam and state-of-the-art real-time trackers on VOT2016 benchmark. Raw scores generated from VOT toolkit are listed. }
\vspace{-10pt}
\label{tab:res-vot16}
\end{table}
\textbf{VOT2017 benchmark:}
Table \ref{tab:res-vot17} shows the comparison of SA-Siam with ECOhc\cite{ECO}, CSRDCF++\cite{CSRDCF}, UCT\cite{UCT}, SiamFC\cite{SiamFC} and Staple\cite{STAPLE} on the VOT2017 benchmark. Different trackers have different advantages, but SA-Siam is always among the top tier over all the evaluation metrics.
\begin{table}
\begin{center}
\small{
\begin{tabular}{l|c c c c}
\hline
Tracker & A & R & EAO & FPS \\
\hline
SiamDCF & 0.500& 0.473& 0.249 & 60\\
ECOhc & 0.494& 0.435& 0.238 & 60\\
CSRDCF++ & 0.453& 0.370& 0.229 &$>25$\\
CSRDCFf & 0.479& 0.384& 0.227 &$>25$\\
UCT & 0.490& 0.482& 0.206 & 41\\
ATLAS & 0.488& 0.595& 0.195 &$>25$\\
SiamFC & 0.502& 0.585& 0.188 & 86\\
SAPKLTF & 0.482& 0.581& 0.184 & $>25$\\
Staple & 0.530& 0.688& 0.169 & 80\\
ASMS & 0.494& 0.623& 0.169 &$>25$\\
\hline
\textbf{SA-Siam} & \emph{0.500} & \emph{0.459} & \emph{0.236} & \emph{50}\\
\hline
\end{tabular}}
\end{center}
\caption{Comparisons between SA-Siam and state-of-the-art real-time trackers on VOT2017 benchmark. Accuracy, normalized weighted mean of robustness score, EAO and speed (FPS) are listed. }
\vspace{-10pt}
\label{tab:res-vot17}
\end{table}
More qualitative results are given in Fig.~\ref{fig:qualitative}. In the very challenging \emph{motorrolling} and \emph{skiing} sequences, SA-Siam tracks correctly while most other trackers fail.
\section{Conclusion} \label{sec:conclusion}
In this paper, we have presented the design of a twofold Siamese network for real-time object tracking. We make use of the complementary semantic features and appearance features, but do not fuse them at an early stage. The resulting tracker greatly benefits from the heterogeneity of the two branches. In addition, we have designed a channel attention module to achieve target adaptation. As a result, the proposed SA-Siam outperforms other real-time trackers by a large margin on the OTB benchmarks. It also performs favorably on the series of VOT benchmarks. In the future, we plan to continue exploring the effective fusion of deep features for the object tracking task.
\section*{Acknowledgement}
This work was supported in part by National Key Research and Development Program of China 2017YFB1002203, NSFC No.61572451, No.61390514, and No.61632019, Youth Innovation Promotion Association CAS CX2100060016, and Fok Ying Tung Education Foundation WF2100060004.
The authors would like to thank Wenxuan Xie and Bi Li for helpful discussions.
{\small
\bibliographystyle{ieee}
\section{Acknowledgements}
The authors would like to thank Andrzej Skrzypacz,
Emma Brunskill, Ben van Roy, Andreas Krause, and Carlos Riquelme
for their suggestions and feedback.
This work was supported by the Stanford TomKat Center, and by the National Science Foundation under
Grant No. CNS-1544548. Any opinions, findings, and conclusions or recommendations expressed in this
material are those of the author(s) and do not necessarily reflect the views of the National Science
Foundation.
\section{Proofs}
\label{sec:proofs}
\subsection{Threshold models}
\begin{proof}[Proof of Proposition~\ref{thm:fixed}]
The proof follows from defining an appropriate dynamic program and solving it using value iteration.
We denote the state by $x$, the best lower bound on $c$ obtained so far.
In particular, if the process survives up to time $t$ ($T > t$),
the state is $x = \max_{s \le t} x_s$.
It is also convenient to work with the survival function $S(x) = 1 - F(x)$.
It is easy to see that the optimal policy is non-decreasing,
so we can restrict our focus to non-decreasing policies.
The Bellman equation for the value function at state $x$ is given by
\[
V(x) = \max_{y \ge x} \frac{S(y)}{S(x)} (r(y) + \gamma V(y)).
\]
For convenience we define the following transformation $ J(x) = S(x)V(x) $
and note that we can equivalently use $J$ to find the optimal policy.
We now explicitly compute the limit of value iteration to find $J(x)$.
Start with $J_0(x) = 0$ for all $x$ and note that the iteration takes the form
\[
J_{k+1}(x) = \max_{y \ge x} \left( S(y) r(y) + \gamma J_k(y) \right) = \max_{y \ge x} \left( p(y) + \gamma J_k(y) \right).
\]
We prove the following two properties by induction for all $k > 0$:
\begin{enumerate}
\item $J_k(x) = p(x^*) \sum_{i=0}^{k-1} \gamma^i$ for all $x \le x^*$.
\item $J_k(x) < J_k(x^*)$ for all $x > x^*$.
\end{enumerate}
The above is immediately true for $k = 1$.
Now assume it is true for an arbitrary $k$, then
\[
J_{k+1}(x) = p(x^*) + \gamma J_k(x^*) \qquad \text{ for all } x \le x^*
\]
and
\[
J_{k+1}(x) < p(x^*) + \gamma J_k(x^*) = J_{k+1}(x^*) \qquad \text{ for all } x > x^*.
\]
The result follows from taking the limit as $k \to \infty$ and noting that for any state $x \le x^*$,
it is optimal to jump to state $x^*$ (and stay there).
We also immediately see that the value of the optimal policy is $p(x^*) / (1-\gamma)$, as required.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{thm:independent}]
It is immediate that the optimal policy must be constant: if the process
survives the play $x_t = x$, then at time $t+1$ we face the same problem as at time
$t$, so whatever action is optimal at time $t$ is also optimal at time $t+1$.
Let $V(x)$ denote the value of playing $x_t = x$ for all $t$.
Then the following relation holds
\[
V(x) = (1-F(x)) (r(x) + \gamma V(x))
\]
which leads to
\[
V(x) = \frac{r(x)(1 - F(x))}{1 - \gamma (1 - F(x))}.
\]
The result now follows immediately.
\end{proof}
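For a concrete instance of this closed form (with the illustrative choices $F = U[0,1]$, $r(x) = x$ and $\gamma = 0.9$, which are not part of the proposition), a grid search recovers the optimal constant level:

```python
def value_constant(x, gamma=0.9):
    """Closed form V(x) = r(x)(1 - F(x)) / (1 - gamma(1 - F(x)))
    for the illustrative case F = U[0,1] and r(x) = x."""
    surv = 1.0 - x                       # survival probability 1 - F(x)
    return x * surv / (1.0 - gamma * surv)

# Grid search for the maximizing activity level.
grid = [i / 1000 for i in range(1001)]
x_best = max(grid, key=value_constant)
```

In contrast to the fixed-threshold model, the maximizer here does depend on the discount factor: as $\gamma \to 0$ it approaches the myopic optimum $1/2$, while for $\gamma = 0.9$ it is roughly $0.24$.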
\subsection{Robustness}
\begin{proof}[Proof of Proposition~\ref{thm:small_noise}]
First we consider the constant policy $x_t = x^* - y$ for all $t$
in the noiseless case.
We note that
\[
r(x^* - y) S(x^* - y)
\ge (r(x^*) - yL) S(x^*)
\ge V(x^*) - yL
\]
where $V(x^*)$ is the value of the optimal constant policy for the noise-free model.
Now let us consider the best possible noise model,
in which $\epsilon_t = y$ for all $t$.
But this is equivalent to the noise-free model with the threshold shifted by $y$.
Hence, we know that a constant policy is optimal.
We can bound the value of this model by
\begin{align}
\max_x r(x) S(x-y)
&= \max_x r(x + y) S(x) \\
&\le \max_x ( r(x) + y L ) S(x) \\
&= \max_x r(x) S(x) + y L S(x) \\
&\le \max_x r(x) S(x) + y L\\
&= V(x^*) + y L
\end{align}
Hence the constant policy $x_t = x^* - y$
is at most $\frac{2yL}{1-\gamma}$ worse than the optimal policy for the
most optimistic noise model.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{thm:large_noise}]
Let $\bar \theta$ be the midpoint of the $\eta$ cover, $\bar \theta = \frac{l+u}{2}$.
Now we bound the expected value of an oracle policy, i.e.
a policy that knows the true threshold $\theta^*$ as follows
\begin{align*}
\ensuremath{\mathbb{E}}(v(\theta^*, \theta^*))
&\le \frac{2 \eta B}{1-\gamma} + \int_l^u v(\theta^*, \theta^*) dF_\theta\\
&\le \frac{2 \eta B}{1-\gamma} + \int_l^u v(\theta^*, \theta^*) + L|\bar \theta - \theta^*|dF_\theta\\
&\le \frac{2 \eta B}{1-\gamma} + \int_l^u v(\bar \theta, \theta^*) + L\frac{u - l}{2} dF_\theta\\
&\le \ensuremath{\mathbb{E}}(v(\bar \theta, \theta^*)) + \frac{2 \eta B}{1-\gamma} + (1-\eta) \frac{Lw}{2}
\end{align*}
which completes the proof.
\end{proof}
\subsection{Learning}
\begin{proof}[Proof of Proposition~\ref{thm:ucb_upper}]
Due to the discretization, the proof consists of two parts.
First, we show that the policy that plays the best arm $i^*$
suffers small regret with respect to the optimal policy.
Then we use the UCB regret bound to show that
the learning strategy has low regret with respect to
playing arm $i^*$.
Thus we can decompose regret into
\[
\textup{regret}(UCB) = \textup{regret}_D + \textup{regret}_U
\]
where the first term corresponds to the discretization error and the second
to the error from the learning policy.
Let $x^*$ be the optimal strategy, i.e.\ the maximizer of $r(x)D(x)$.
Then the discretization error from playing $i^*/K$, by Assumption~1
is
\[
\textup{regret}_D \le \frac{c_2 n}{2K^2} = \frac{c_2 \sqrt{n \log n} }{2}.
\]
Thus, the error due to the discretization is small.
Now let us bound the UCB regret with respect to action $i^* / K$.
As Kleinberg and Leighton (2003) note, the assumption that the pulls of different
arms are independent is not used in the proof,
so we can apply their Lemma 5.
First, we show that the arms are sub-Gaussian.
Since the rewards are bounded by $1$ and independent across time,
straightforward calculation shows that
\[
\var{(1-\gamma)\sum_{t=0}^\infty \gamma^t R_t(x)}
\le \frac{(1-\gamma)^2}{4 (1-\gamma^2)}
\le \frac{1}{4}.
\]
Then using the law of total variance, conditioning on the event $x < \theta_u$,
the variance of the total obtained reward for user $u$, $R_u$, can be bounded by
\begin{align}
\var{ R_u }
&= \ensuremath{\mathbb{E}}(\var{ R_u \mid \theta_u }) +
\var{ \ensuremath{\mathbb{E}}(R_u \mid \theta_u) }\\
&= \frac{(1-F(x_u)) M^2}{4} +
\left( r(x_u) \right)^2 F(x_u) (1-F(x_u))\\
&\le M^2/2
\end{align}
Thus we find that the reward for users is sub-Gaussian
with parameter $ \sigma^2 = \frac{M^2}{2} $.
Recall the UCB regret bound
\[
\textup{regret}(UCB) \le
\sum_{i : \Delta_i > 0}
\frac{8 \alpha \sigma^2}{\Delta_i} \log n + \frac{\alpha}{\alpha - 2}.
\]
We now focus on the $\sum_{i : \Delta_i > 0} \frac{1}{\Delta_i}$ term.
Let $\Delta_{(1)} \le \Delta_{(2)} \le \ldots \le \Delta_{(K-1)}$ denote
the ordered gaps with respect to the optimal arm.
Note that for $j \ge 2$, we know $\Delta_{(j)} > c_1(\frac{j}{2K})^2$
due to Assumption~1.
However, for the smallest gap, we only know $0 \le \Delta_{(1)} \le \frac{c_2}{K^2}$,
depending how close $i^*/K$ is to $x^*$.
We thus obtain
\begin{align}
\sum_{i=1}^K \frac{1}{\Delta_i}
&= \sum_{i=1}^{K-1} \frac{1}{\Delta_{(i)}} \\
&= \frac{1}{\Delta_{(1)}} + \sum_{j \ge 2} \frac{1}{\Delta_{(j)}}\\
&\le \frac{1}{\Delta_{(1)}} + \frac{4K^2}{c_1} \sum_{j \ge 2} j^{-2}\\
&\le \frac{1}{\Delta_{(1)}} + \frac{2\pi^2}{3 c_1} K^2
\end{align}
Thus regret is bounded by
\[
\textup{regret}_U \le
\frac{8 \alpha \sigma^2 \log n}{\Delta_{(1)}}
+ \frac{16 \alpha \sigma^2 \pi^2}{3 c_1} (K-2)^2 \log n
+ K \frac{\alpha}{\alpha - 2}
\]
However, the regret due to playing the second best action
is trivially bounded by $n \Delta_{(1)}$.
Thus, the worst case is bounded by taking $\Delta_{(1)} = 4 \sqrt{ \log n / n }$.
This leads to a bound of
\[
\textup{regret}_U \le
2\alpha \sigma^2 \sqrt{n\log n}
+ \frac{16 \alpha \sigma^2 \pi^2}{3 c_1} (K-2)^2 \log n
+ K \frac{\alpha}{\alpha - 2}
\]
Since there are $K = (n/\log n)^{1/4}$ arms, we get
\[
\textup{regret}_U \le
2\alpha\sigma^2 \sqrt{n\log n}
+ \frac{16\alpha \sigma^2 \pi^2}{3 c_1} \sqrt{n \log n}
+ o(\sqrt{n \log n})
\]
Combining this with the bound on $\textup{regret}_D$ completes the proof.
\end{proof}
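The discretized UCB strategy analyzed above can be sketched as follows; the uniform thresholds, linear reward, and the specific values of $K$ and $n$ are illustrative assumptions rather than part of the proof, and we use the standard UCB1 index.

```python
import math
import random

# Sketch (illustrative): thresholds theta ~ U[0,1], reward r(x) = x, and the
# (1-gamma)-normalized lifetime reward of level x for a user with threshold
# theta equals r(x) * 1{x < theta}.
random.seed(0)
K = 5                                   # number of discretized arms
n = 20000                               # number of arriving users
counts = [0] * K
sums = [0.0] * K

for u in range(n):
    theta = random.random()             # this user's threshold
    if u < K:
        i = u                           # play every arm once to initialize
    else:                               # UCB1 index
        i = max(range(K), key=lambda j: sums[j] / counts[j]
                + math.sqrt(2.0 * math.log(u) / counts[j]))
    x = (i + 1) / (K + 1)               # arm levels 1/6, ..., 5/6
    reward = x if x < theta else 0.0    # zero reward if the user abandons
    counts[i] += 1
    sums[i] += reward

most_played = (max(range(K), key=lambda j: counts[j]) + 1) / (K + 1)
```

The expected normalized reward of arm $x$ is $x(1-x)$, so the most-played arm should concentrate near the myopic optimum $1/2$.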
\subsection{Feedback}
\begin{proof}[Proof of Lemma~\ref{thm:lipschitz_value}]
The Bellman equation of the dynamic program for the feedback model can be
written as:
\[
V(l, u) =
\max_{l \le y \le u}
\frac{F(u) - F(y)}{F(u) - F(l)} (r(y) + \gamma V(y, u))
+ \frac{F(y) - F(l)}{F(u) - F(l)} \gamma V(l, y)
\]
where $l$ and $u$ are the lower bounds and upper bounds on $c$ based on the history.
Note that $V$ is finite and therefore value iteration
converges pointwise to $V$.
We use induction on the value iterates to find the Lipschitz constant for $V$.
Let $V_0, V_1,\ldots$ indicate the value iterates.
Since $V_0(l, u) = 0$ for all states $(l, u)$, the Lipschitz constant for $V_0$,
denoted by $L_0 = 0$.
We further claim that $L_{n+1} = L_p \frac{B}{1-\gamma} + \beta \gamma L_n$.
Suppose this holds for $n = 1,\ldots,i$; then for $n = i+1$ we consider
state $(l + \epsilon, u)$ and write $x^*$ for the optimal action
in that state, and $y^* = x^* - l$.
Then
\begin{multline}
V_{i+1}(l, u) \ge p(y^* \mid l, u) (r(x^*) + \gamma V_i(x^*, u))
+ (1-p(y^* \mid l, u)) \beta \gamma V_i(l, x^*)
\end{multline}
Also, $V_i(l, x^*) \le V_i(l, u)$.
Then we find
\begin{multline}
V_{i+1}(l + \epsilon, u) - V_{i+1}(l, u)
\le \left[p(y^* \mid l + \epsilon, u) - p(y^* \mid l, u)\right]
(r(x^*) + \gamma V_i(x^*, u) )\\
+ (1-p(y^* \mid l + \epsilon, u)) \beta \gamma V_i(l+\epsilon, x^*)
- (1-p(y^* \mid l, u)) \beta \gamma V_i(l, x^*)
\end{multline}
Using the Lipschitz continuity of $p$ we can bound
\[
p(y^* \mid l + \epsilon, u) - p(y^* \mid l, u) \le \epsilon L_p.
\]
Then note that
\[
r(x^*) + \gamma V_i(x^*, u) \le \frac{B}{1-\gamma}
\]
and for the final two terms we note
\begin{multline}
(1-p(y^* \mid l + \epsilon, u)) \beta \gamma V_i(l+\epsilon, x^*)
- (1-p(y^* \mid l, u)) \beta \gamma V_i(l, x^*)\\
\le \beta \gamma (V_i(l+\epsilon, x^*) - V_i(l, x^*))
\le \beta \gamma \epsilon L_i
\end{multline}
where we use the inductive assumption.
Because $l$, $u$ and $\epsilon$ are arbitrary, the claim follows, and the recursion yields
\[
L_n \le \frac{L_p B}{(1-\beta\gamma)(1-\gamma)},
\]
which implies $V$ is Lipschitz.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{thm:partial_learning}]
First we note that by Lemma~\ref{thm:lipschitz_value}, $V$ is Lipschitz, and
we write $L_v$ for its Lipschitz constant.
Fix $u$, and consider a state $(u-\nu, u)$ for some $\nu > 0$.
For notational convenience, for action $x$ we write $y = x - (u - \nu)$
for the difference from the lower bound.
We also use the shorthand $l = u-\nu$ and $p(y) = p(y \mid l,u)$.
We can upper bound the value function by
\begin{align}
V(l, u) &=
\max_y p(y) [r(x) + \gamma V(x, u)] + (1-p(y)) \beta\gamma V(l, x)\\
&\le p(y)[r(l) + L_r y + \gamma V(l, u)\\
&\quad+ \gamma L_v y] + (1-p(y)) \beta \gamma V(l, u)\\
&\le (1-\lambda(\nu)y) [r(l) + \gamma V(l, u) + Ly]\\
&\quad+ \lambda(\nu)y\beta\gamma V(l, u)
\end{align}
where we write $L = L_r + \gamma L_v$ and use the non-degeneracy of $p$.
The derivative for the above expression with respect to $y$ is
\begin{multline}
(1-2\lambda(\nu)y)L - \lambda(\nu) r(l) - \gamma \lambda(\nu) (1-\beta) V(l, u)\\
\le (1-2\lambda(\nu)y)L - \lambda(\nu) r(l).
\end{multline}
Since $r(l) > 0$ for all $l \in \mathop{Int}\mathbf{X}$, for $\nu$ sufficiently
small this derivative is negative for all $y\ge 0$.
To complete the proof, we need this upper bound to be tight at $y=0$,
which follows immediately
\begin{multline}
\left. (1-\lambda(\nu)y) [r(l) + \gamma V(l, u) + Ly] + \lambda(\nu)y\beta\gamma V(l, u) \right\rvert_{y=0} = \\
r(l) + \gamma V(l, u) \ge \frac{r(l)}{1-\gamma}.
\end{multline}
Since $r$ is increasing, it follows immediately that $\epsilon(u)$ is non-decreasing
in $u$.
\end{proof}
\section{Model}
\subsection{Setup}
The general model we consider is as follows.
A single threshold $c$ is drawn from known distribution function $F$.
At times $t=0,1,\ldots$, the platform sets activity level $x_t \in \mathbf{X}$,
where $\mathbf{X}$ is a given closed set.
While $x_t < c$, the platform obtains a (random) reward $R_t(x_t)$ with
expected value $\ensuremath{\mathbb{E}}(R_t(x_t)) = r(x_t) > 0$.
Let $T$ be the first time $x_t > c$: $T = \min\{\tau : x_\tau > c\}$.
The goal of the platform is to maximize the discounted sum of rewards up
to but not including the first crossing time $T$:
\[
\max_{\{x_t\}} \ensuremath{\mathbb{E}} \left[ \sum_{t=0}^{T-1} \gamma^t R_t(x_t) \right],
\]
where $\gamma \in (0, 1)$ is the discount factor.
For convenience we write $S(\cdot) = 1 - F(\cdot)$ for the survival function.
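This interaction protocol is easy to simulate directly. The sketch below is purely illustrative: it assumes $F = U[0,1]$, a deterministic reward $R_t(x) = x$ and $\gamma = 0.9$, none of which is imposed by the model, and truncates the discounted sum at a finite horizon.

```python
import random

def run_episode(policy, gamma=0.9, horizon=500, rng=random):
    """Simulate one user: discounted reward collected before the first crossing."""
    c = rng.random()                     # threshold c ~ F = U[0,1] (illustrative)
    total, discount = 0.0, 1.0
    for t in range(horizon):             # horizon truncates the infinite sum
        x = policy(t)
        if x > c:                        # crossing: the user abandons, T = t
            break
        total += discount * x            # deterministic reward R_t(x) = x
        discount *= gamma
    return total

random.seed(1)
# Constant policy at level 1/2; expected value 0.5 * P(c > 0.5) / (1 - gamma) = 2.5.
est = sum(run_episode(lambda t: 0.5) for _ in range(20000)) / 20000
```

For the constant policy $x_t = 1/2$, the Monte Carlo estimate should be close to $0.5 \cdot \Pr(c > 0.5)/(1-\gamma) = 2.5$: a user either abandons immediately or is retained forever.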
\subsection{Optimal policy}
In this section, we take a deep dive into the baseline model.
We show that under general conditions there exists an optimal constant policy,
and it is easily characterized.
The proof is a straightforward application of value iteration.
Besides the simplicity and generality of the result,
it is also noteworthy that the optimal policy does not depend on the discount factor.
\begin{theorem}
Assume $r$ is positive for all $x \in \mathbf{X}$.
Suppose the function $p: x \mapsto r(x) S(x)$ has a unique global optimum at $x^*$.
Then, the optimal policy is $x_t = x^*$ for all $t$, and the value is
$r(x^*)S(x^*) / (1-\gamma)$.
\end{theorem}
\begin{proof}
The proof follows from defining an appropriate dynamic program and
solving it using value iteration.
We denote the state by $x$, the best known lower bound on $c$: if the process
survives up to time $t$ (i.e., $T > t$), the state is $x = \max_{s \le t} x_s$.
It is easy to see that the optimal policy is non-decreasing,
so we can restrict our focus to non-decreasing policies.
The Bellman equation for the value function at state $x$ is given by
\[
V(x) = \max_{y \ge x} \frac{S(y)}{S(x)} (r(y) + \gamma V(y)).
\]
For convenience we define the following transformation $ J(x) = S(x)V(x) $
and note that we can equivalently use $J$ to find the optimal policy.
We now explicitly compute the limit of value iteration to find $J(x)$.
Start with $J_0(x) = 0$ for all $x$ and note that the iteration takes the form
\[
J_{k+1}(x) = \max_{y \ge x} \left( S(y) r(y) + \gamma J_k(y) \right) = \max_{y \ge x} \left( p(y) + \gamma J_k(y) \right).
\]
We prove the following two properties by induction for all $k > 0$:
\begin{enumerate}
\item $J_k(x) = p(x^*) \sum_{i=0}^{k-1} \gamma^i$ for all $x \le x^*$.
\item $J_k(x) < J_k(x^*)$ for all $x > x^*$.
\end{enumerate}
The above is immediately true for $k = 1$.
Now assume it is true for an arbitrary $k$, then
\[
J_{k+1}(x) = p(x^*) + \gamma J_k(x^*) \qquad \text{ for all } x \le x^*
\]
and
\[
J_{k+1}(x) < p(x^*) + \gamma J_k(x^*) = J_{k+1}(x^*) \qquad \text{ for all } x > x^*.
\]
The result follows from taking the limit as $k \to \infty$ and noting that for any state $x \le x^*$,
it is optimal to jump to state $x^*$ (and stay there).
We also immediately see that the value of the optimal policy is $p(x^*) / (1-\gamma)$, as required.
\end{proof}
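As a numerical sanity check of the theorem (under the illustrative, non-essential choices $F = U[0,1]$, $r(x) = x$ and $\gamma = 0.9$), the maximizer of the one-step objective and the resulting value can be computed as:

```python
def p_obj(x):
    """One-step objective p(x) = r(x) S(x), here with r(x) = x and S(x) = 1 - x."""
    return x * (1.0 - x)

grid = [i / 1000 for i in range(1001)]
x_star = max(grid, key=p_obj)           # myopic optimum; here exactly 0.5
gamma = 0.9
value = p_obj(x_star) / (1.0 - gamma)   # discounted value of the constant policy
```

The maximizer $x^* = 1/2$ is independent of $\gamma$; only the value scales with $1/(1-\gamma)$.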
A key insight that is useful as we explore extensions to the model
is that the state of the dynamic program can be completely characterized
by the posterior over the threshold after observing survival up to time $t$
(i.e. $T > t$).
In the baseline model, this posterior is completely characterized by
the maximum activity level up to time $t$.
The intuition behind the optimality of such a simple policy is also better understood from this perspective.
In a single time period, we are able to select exactly what subset of users (thresholds) we want to serve
to maximize rewards.
If, after an initial activity level of $x_0 = y$ for some $y$, it is better to set $x_1 = z > y$,
then we should have set $x_0 = z$ to begin with.
We note that the conditions for optimality of a constant policy are rather weak.
If $p$ has multiple global optima then any non-decreasing policy that jumps
between global optima of $p$ is optimal.
\subsection{Properties of optimal solution}
We now discuss properties of the optimal solution,
focusing on the case where the platform interacts with a user or agent.
The threshold $c$ models the user preference on when to abandon.
\subsubsection{Simple and widely applicable}
Practically speaking, the optimality of the constant policy has a number of advantages.
First of all, the result is widely applicable and barely relies on assumptions.
The policy is easily communicated to users,
and after a single time period users know whether they are happy or should abandon.
\subsubsection{Strategy-proof}
The proposed model implicitly assumes that agents are \emph{myopic}.
That is, the decision to abandon only depends on the current activity level.
Such agents fail to take into account how their decisions affect the actions of
the platform, and thus they might not act in their best interest.
In this section we show that the optimal policy is strategy-proof.
That is, also for \emph{strategic} (or rational) agents
the same constant policy is optimal.
To do this, we draw a parallel to the viewpoint of mechanism design.
The idea is to relax the constraint that there is permanent abandonment and
find an optimal mechanism for setting activity levels.
It turns out that this leads to a policy that is also optimal in the
constrained case, and equivalent to the solution based on dynamic programming.
\begin{prop}
Assume $r$ is positive for all $x \in \mathbf{X}$.
Suppose the function $p: x \mapsto r(x) S(x)$ has a unique global optimum at $x^*$.
The constant policy $x_t = x^*$ is an optimal policy for strategic agents.
\end{prop}
\begin{proof}
By the revelation principle \cite{myerson1981optimal} suppose the principal
queries the agent for her threshold $\tilde c$,
and implements the allocation rule $x_t(\tilde c)$.
The goal of the principal is to maximize her total discounted reward
\[
\max \sum_{t=0}^\infty \gamma^t r(x_t(\tilde c))
\]
under incentive compatibility and individual rationality constraints.
For now, we relax the problem by forgetting about the abandonment constraint.
We want to design a mechanism that maximizes the reward over allocation rules
$x_t(\tilde c)$.
The (scaled) total discounted utility for the consumer is
\[
(1-\gamma) \sum_{t=0}^\infty \gamma^t (c - x_t( \tilde c))
= c - (1-\gamma) \sum_{t=0}^\infty \gamma^t x_t(\tilde c)
= c - q(\tilde c)
\]
where we implicitly define $q$ as the cost of the allocation schedule to the
agent.
But this is exactly the monopoly pricing problem and we know that posted prices
are optimal.
That is, the principal implements $x_t = z$ if $\tilde c \ge z$ and $x_t = 0$
otherwise, for all time steps $t$, for some $z$.
This satisfies the incentive compatibility and individual rationality
constraints.
Moreover, we note that this also adheres to the abandonment condition that
$x_t = 0$ for all $t > s$ if $x_s > c$:
the agent switches once and for all to the outside alternative with utility $0$.
To maximize the total discounted reward, the principal selects $z$ to maximize
\[
\max_z r(z) S(z).
\]
Hence, the optimal mechanism is equal to the optimal policy.
\end{proof}
\section{Conclusion}
When machine learning algorithms are deployed in settings where
they interact with people, it is important to understand
how user behavior affects these algorithms.
In this work, we propose a novel model for personalization that takes into account
the risk that a dissatisfied user abandons the platform.
This leads to some unexpected results. We show that constant policies are
optimal under fixed threshold and independent threshold models.
We have shown that under small perturbations of these models,
constant policies are ``robust'' (i.e., perform well in the perturbed model),
though in general finding an optimal policy becomes intractable.
In a setting where a platform faces many users, but
does not know the reward function nor the population distribution over
thresholds, we have shown under suitable assumptions that
UCB-type algorithms perform well, both theoretically, by providing regret
bounds, and empirically, through simulations.
We also consider an explore-exploit strategy that is more efficient
in practice, but it requires knowledge of the reward function.
Feedback from users leads to more sophisticated optimal learning strategies
that exhibit partial learning; the optimal learning algorithm
personalizes to a certain degree to each user.
Also, we have found that the optimal policy is more conservative
when the probability of abandonment is high, and aggressive when
that probability is low.
\subsection{Further directions}
There are several interesting directions of further research
that are outside the scope of this work.
\paragraph{Abandonment models}
First, more sophisticated models of user abandonment behavior should be considered.
This could take many forms, such as a total \emph{patience budget}
that gets depleted as the threshold is crossed.
Another model is that of a user playing a learning strategy herself,
comparing this platform to one or multiple outside options.
In this scenario, the user and platform are simultaneously learning
about each other.
\paragraph{User information}
Second, we have not considered additional user information in
terms of covariates.
In the notification example, user activity seems like an important
signal of her preferences.
Models that are able to incorporate such information and are able
to infer the parameters from data are beyond the scope of this work
but an important direction of further research.
\paragraph{Empirical analysis}
This work focuses on theoretical understanding of the abandonment model,
and thus ignores important aspects of a real world system.
We believe there is a lot of potential to gain additional insight
from an empirical perspective using real-world systems with abandonment risk.
\section{Feedback}
\label{sec:feedback}
In this section, we consider a ``softer'' version of abandonment, where the platform receives some feedback before the user abandons.
As an example, consider optimizing the number of push notifications.
When a user receives a notification, she may decide to open the app,
or decide to turn off notifications.
However, her most likely action is to ignore the notification.
The platform can interpret this as a signal of dissatisfaction, and work to improve the policy.
In this section, we augment our model to capture such effects.
While the solution to this updated model is intractable,
we discuss interesting structure that the optimal policy exhibits:
{\em partial learning}, and the {\em aggressiveness} of the optimal policy.
\paragraph{Feedback model}
To incorporate user feedback, we expand the model as follows. Suppose that whenever the current action $x_t$
exceeds the threshold (i.e., $x_t > \theta_t$),
then with probability $p$ we receive no reward
but the user remains, and with probability $1-p$ the user abandons. Further, we assume that the platform at time $t$ both observes the reward $R(x_t)$,
if rewarded,
and an indicator $Z_t = {\mathbb{I}}_{x_t > \theta_t}$.
This is equivalent to assuming that a user has geometrically distributed
\emph{patience}: the number of times she allows the platform to cross her threshold.
As before the goal is to maximize expected discounted reward.
Note that because the platform does not receive a reward when the
threshold is crossed, the problem is nontrivial even when $p=1$.
We restrict our attention to the single threshold model, where
$\theta$ is drawn once and then fixed for all time periods.
Figure~\ref{fig:tree} shows the numerically computed optimal policy
when the threshold distribution is uniform on $[0,1]$,
the reward function is $r(x) = x$, the continuation probability is $p=0.5$,
and $\gamma = 0.9$.
Depending on whether or not a feedback signal is received, the
optimal policy follows the green or the red line as we step through
time from left to right.
We note that one can think of the optimal policy as a form of bisection, though
it does not explore the entire domain of $F$. In particular it is conservative
regarding users with large $\theta$. For example, consider a user with
threshold $0.9$. While the policy is initially increasing and thus partially
personalizes to her threshold, $x_t$ does not converge to $0.9$, and in fact
never comes close. We call this \emph{partial learning}; in the next section, we demonstrate that this is a key feature of the optimal policy in general.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{img/tree_d9_b5.pdf}
\caption{
Visualization of the optimal policy when the discount factor is $\gamma = 0.9$
in the $p=0.5$ model.
Follow the tree from left to right: if $Z_t = 0$ (reward obtained),
the next action follows the green line; if $Z_t = 1$ and the user has
not abandoned, the next action follows the red line.
}
\label{fig:tree}
\end{figure}
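A minimal value-iteration sketch of how such a policy can be computed numerically is given below; the grid size, the helper names, and the parameters ($F = U[0,1]$, $r(x) = x$, $p = 0.5$, $\gamma = 0.9$, matching the figure) are our illustrative choices.

```python
# Value iteration for the feedback model on a discretized state space (l, u).
# Illustrative assumptions: theta ~ U[0,1], r(x) = x, continuation probability
# p = 0.5 on a crossing, gamma = 0.9.

N = 41                                   # grid resolution for [0, 1]
gamma, p = 0.9, 0.5
grid = [i / (N - 1) for i in range(N)]
V = [[0.0] * N for _ in range(N)]        # V[li][ui] approximates V(l, u), li < ui

def action_value(li, ui, yi):
    """One-step Bellman value of playing y = grid[yi] in state (grid[li], grid[ui])."""
    l, u, y = grid[li], grid[ui], grid[yi]
    survive = (u - y) / (u - l)          # P(theta > y | l < theta < u), uniform F
    return survive * (y + gamma * V[yi][ui]) \
        + (1.0 - survive) * p * gamma * V[li][yi]

for sweep in range(100):                 # in-place fixed-point iteration
    for li in range(N):
        for ui in range(li + 1, N):
            V[li][ui] = max(action_value(li, ui, yi) for yi in range(li, ui + 1))

# First action from the uninformative state (0, 1).
x0 = grid[max(range(N), key=lambda yi: action_value(0, N - 1, yi))]
```

Since the constant policy $x_t = 1/2$ remains feasible, $V(0,1)$ is at least the no-feedback optimum of $2.5$; the surplus is the value of feedback.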
\paragraph{Partial learning}
Partial learning refers to the fact that the optimal policy does not fully
reduce its uncertainty (the posterior) on $\theta$.
Initially, the policy learns about the threshold using a bisection-type search.
However, at some point (dependent on the user's threshold), further learning
is too risky and the optimal policy switches to a constant policy.
We note that this happens even when there is no risk of abandonment at all ($p=1$),
because at some point even the risk of losing a reward is not offset by potential
gains in getting a more accurate posterior on $\theta$.
Partial learning occurs under some regularity conditions on the
threshold distribution that ensures the posterior does not collapse,
and is Lipschitz as defined in the following paragraph.
Write $F_l^u$ for the posterior distribution over $\theta$ given
lower bound $l$ and upper bound $u$ based on previous actions
\[
F_l^u(y) = \P(l + y < \theta \mid l < \theta < u) = \frac{F(u) - F(l+y)}{F(u) - F(l)}.
\]
We say that the posterior distribution is non-degenerate if the
following condition holds:
\begin{definition}[Non-degenerate posterior distribution]
For all $\lambda > 0$, there exists a $\nu$ such that for all $l, u$ where $u - l <
\nu$, $F_l^u(\epsilon) < 1 - \lambda \epsilon $ for $0 < \epsilon < \nu$.
\end{definition}
Thus, for sufficiently small intervals, the conditional probability decreases
rapidly as we move away from the lower bound of the interval.
Suppose $F$ is such that the posterior is non-degenerate and
is Lipschitz in the following sense.
\begin{assumption}[Lipschitz continuity of conditional distribution]
There exists an $L_p > 0$ such that
for all intervals $[l, u]$, all $\epsilon > 0$, and all $0 < y < u - l$, we have
\[
p(y \mid l + \epsilon, u) - p(y \mid l, u) \le \epsilon L_p.
\]
\end{assumption}
We can use this assumption to show that the value function corresponding to the dynamic program
that models the feedback model is Lipschitz.
\begin{lemma}[Lipschitz continuity of value function]
\label{thm:lipschitz_value}
Consider a bounded action space $\mathbf{X}$.
If $p$ is Lipschitz with Lipschitz constant $L_p$,
and the reward function $r$ is bounded by $B$,
there exists constant $L_V$ such that for all $l < u$
\[
V(l + \epsilon, u) - V(l, u) \le \epsilon L_V.
\]
\end{lemma}
Using these assumptions, we can then prove that
the optimal policy exhibits partial learning,
as stated in the following proposition.
\begin{prop}
\label{thm:partial_learning}
Suppose $r$ is increasing, $L_r$-Lipschitz,
non-zero on the interior of $\mathbf{X}$ and bounded by $B$.
Furthermore, assume $p$ is non-degenerate and Lipschitz as defined above.
For all $u \in \mathop{Int}(\mathbf{X})$ there exists an $\epsilon(u) >0$ such that for
all $l$ where $u - l < \epsilon(u)$, the optimal action in state
$(l, u)$ is $l$, that is
\[
V(l, u) = \frac{r(l)}{1-\gamma}.
\]
Furthermore, $\epsilon(u)$ is non-decreasing in $u$.
\end{prop}
We prove this result by analyzing the value function of the corresponding
dynamic program.
The result shows that at some point, the potential gains from a better posterior for the threshold
are not worth the risk of abandonment.
This is especially true when $\theta$ is quite likely under the posterior.
If, to the contrary, we believe the threshold is small, there is little to lose
in experimentation.
Note however that the result also holds for $p=1$, where there are only signals
and no abandonment.
In this case the risk of a signal (and no reward for the current time step) outweighs
(all) possible future gains.
Naturally, if the probability of override is small (i.e. $p$ is small),
the condition on $\lambda$ also weakens, leading to larger intervals of constant policies.
\paragraph{Aggressive and conservative policies}
Another salient feature of the structure of optimal policies in the feedback model
is the aggressiveness of the policy.
In particular, we say a policy is \emph{aggressive} if the first action $x_0$ is larger
than the optimal constant policy $x^*$ in the
absence of feedback (corresponding to $p=0$),
and \emph{conservative} if it is smaller.
As noted before, when there is no feedback, there is no benefit to adapting
to user thresholds.
However, there is value in personalization when users give feedback.
Empirically, we find that when there is low risk of abandonment,
i.e., $p \approx 1$, then the optimal policy is aggressive.
In this case, the optimal policy can aggressively target
high-value users because other users are unlikely to abandon immediately.
Thus the policy can personalize to high-value users in later periods.
However, when the risk of abandonment is large ($p \approx 0$)
and the discount factor is sufficiently close to one,
the optimal policy is more conservative than the optimal constant
policy when $p=0$.
In this case, the high risk of abandonment forces the policy
to be careful: over a longer horizon the algorithm can extract
value even from a low value user, but it has
to be careful not to lose her in the first few periods.
This long term value of a user with low threshold makes up
for the loss in immediate reward gained from aggressively
targeting users with a high threshold.
Figure~\ref{fig:first_action} illustrates this effect.
Here, we use deterministic rewards $r(x) = x$ and the
threshold distribution is uniform $F = U[0,1]$,
but a similar effect is observed for other distributions and reward functions
as well.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{img/first_action.pdf}
\caption{
The relation between the override probability $p$ and the (approximate) optimal initial action $x_0$
when the discount factor $\gamma = 0.9$.
The artifacts in the plot are due to the discretization error from numerical computations.
}
\label{fig:first_action}
\end{figure}
\section{Introduction}
Machine learning algorithms are increasingly intermediating interactions
between platforms and their users. As a result, users' interaction with the
algorithms will impact optimal learning strategies; we investigate this
consequence in our work. In the setting we consider, a platform wants to
personalize service to each user. The distinctive feature in this work is that
the platform faces the risk of a user abandoning the platform if she is
dissatisfied with the actions of the platform.
Algorithms designed by the platform thus need to be careful
to avoid losing users.
There are many examples of such settings.
In the near future, smart energy meters will be able to
throttle consumers' energy consumption to increase efficiency of the power grid
during peak demand, e.g., by raising or lowering the level of air conditioning.
This can lead to cost savings for both utility companies and consumers.
However, if the utility company is too aggressive in its throttling of
energy, a user might abandon the program.
Due to heterogeneity in housing, appliances and preferences of customers,
it is important that utility companies learn personalized strategies
for each consumer.
Content creators (e.g., news sites, blogs, etc.) face a similar problem with e-mail dissemination.
There is value in sending more e-mails, but each e-mail also risks
the recipient unsubscribing, taking away any opportunity of the creator
to interact with the user in the future.
Yet another example is that of mobile app notifications.
These can be used to improve user engagement and experience.
However if the platform sends too many notifications, an upset user might turn
off notifications from the application.
In all of the above scenarios, we face a decision problem where ``more is better;''
however, there is a threshold beyond which the user abandons
and no further rewards are gained.
This work focuses on developing insight into the structure of optimal learning strategies in such settings. We are particularly interested in understanding when such strategies take on a ``simple'' structure, as we elaborate below.
In Section~\ref{sec:model}, we introduce a benchmark model of learning with
abandonment. In the initial model we consider, a platform interacts with a
single user over time. The user has a {\em threshold} $\theta$ drawn from a
distribution $F$, and at each time $t = 0, 1, 2, \ldots$ the platform chooses
an action $x_t$. If $x_t$ ever exceeds $\theta$, the user abandons; otherwise,
the user stays, and the platform earns some reward dependent on $x_t$.
We first consider the case where the distribution $F$ and the reward function are
known (say, from prior estimation), and the challenge is finding an optimal
strategy for a given new user. We consider the problem of maximizing expected
discounted reward. Intuitively, we might expect that the optimal policy is
increasing and depends on the discount factor: in particular, we might try to
serve the user at increasing levels of $x_t$ as long as we see they did not
abandon. Surprisingly, our main result shows this is not the case: that in
fact, the {\em static} policy of maximizing one-step reward is optimal for this
problem. Essentially, because the user abandons if the threshold is ever
crossed, there is no value to trying to actively learn the threshold.
In Section~\ref{sec:learning}, we consider how to adapt our results when $F$ and/or the reward function are
unknown. In this case, the platform can learn over multiple user arrivals. We
relate the problem to one of learning an unknown demand curve, and suggest an
approach to efficiently learning the threshold distribution $F$ and the reward function.
Finally in Section \ref{sec:feedback}, we consider a more general model with
``soft'' abandonment: after a negative experience, users may not abandon
entirely, but continue with the platform with some probability. We
characterize the structure of an optimal policy to maximize expected discounted
reward on a per-user basis; in particular, we find that the policy adaptively
experiments until it has sufficient confidence, and then commits to a static
action. We empirically investigate the structure of the optimal policy as
well.
\paragraph{Related work}
The abandonment setting is unusual, and we are aware of only one
other work that addresses the same setting.
Independently from this work, \citet{LuMSOM} model
the abandonment problem using only two actions: the safe action and the
risky action. This naturally leads to rather different results.
There are some similarities with the mechanism design literature,
though there the focus is on strategic behavior by agents
\citep{rothschild1974two, myerson1981optimal, farias2010dynamic, pavan2014dynamic, lobel2017dynamic}.
As in this work, the revenue management literature considers
agents with heuristic behavior, but the main focus there is on dealing with a
finite inventory \citep{gallego1994optimal}.
It may seem that our problem is closely related to many problems in
reinforcement learning (RL) \citep{sutton1998reinforcement} due to the dynamic
structure of our problem. However, there are important differences. Our focus
is on personalization; viewed through the RL lens, this corresponds to having
only a single episode to learn, which is independent of other episodes (users).
On the other hand, in RL the focus is on learning an optimal policy using
multiple episodes where information carries over between episodes. These
differences present novel challenges in the abandonment setting, and
necessitate use of the structure present in this setting.
Also related is work on safe reinforcement learning, where catastrophic states
need to be avoided \citep{Moldovan2012SafeEI, Berkenkamp2017SafeMR}. In such a
setting, the learner usually has access to additional information, for example
a safe region is given. Finally, we note that in our work, unlike in safe RL,
avoiding abandonment is not a hard constraint.
\section{Learning thresholds}
\label{sec:learning}
Thus far, we have assumed that the heterogeneity across the population and the
mean reward function are known to the platform, and we have focused on
personalization for a single user. It is natural to ask what the platform
should do when it lacks such knowledge, and in this section we show how the
platform can learn an optimal policy efficiently across the population.
We study this problem within the context of the fixed threshold model described above, as it naturally lends itself to development of algorithms that learn about population-level heterogeneity.
In particular, we give theoretical performance guarantees on a UCB type \citep{auer2002finite}
algorithm, and show that a variant based on MOSS \citep{MOSS} performs better in practice.
We also empirically show that an explore-exploit strategy performs well.
\paragraph{Learning setting}
We focus our attention on the fixed threshold model,
and consider a setting where $n$ users arrive sequentially,
each with a fixed threshold $\theta_u$ ($u = 1, \ldots, n$)
drawn from unknown distribution $F$ with support on $[0, 1]$.
To emphasize the role of learning from users over time, we consider a stylized setting where the platform interacts with one user at a time, deciding on all
the actions and observing the outcomes for this user, before
the next user arrives.
Inspired by our preceding analysis, we consider a proposed algorithm that uses a constant policy
for each user.
Furthermore, we assume that the rewards $R_t(x)$ are
bounded between $0$ and $1$, but otherwise drawn from an arbitrary distribution
that depends on $x$.
\paragraph{Regret with respect to oracle}
We measure the performance of learning algorithms against the oracle
that has full knowledge about the threshold distribution $F$ and the
reward function $r$, but no access to realizations of random variables.
As discussed in Section~\ref{sec:model}, the optimal policy for the oracle
is thus to play the constant policy $x^* \in \arg\max_{x \in [0, 1]} r(x) (1 - F(x))$.
We define regret as
\begin{multline}
\textup{regret}_n(A) = nr(x^*)(1-F(x^*)) \\
- (1-\gamma) \sum_{u=1}^{n} \ensuremath{\mathbb{E}} \left[ \sum_{t=0}^{T_u - 1} \gamma^t r(x_{u, t}) \right]
\end{multline}
which we note is normalized on a per-user basis with respect to the discount factor $\gamma$.
\subsection{UCB strategy}
We propose a UCB algorithm \citep{auer2002finite} on a suitably discretized
space, and prove an upper bound on its regret in terms of the number of users.
This approach is based on earlier work by \citet[Section 3]{Kleinberg2003TheVO}
for learning demand curves. Before presenting the details, we introduce the
UCB algorithm for the standard multi-armed bandit problem.
In the standard setting, there are $K$ arms, each with its own mean $\mu_i$.
At each time $t$, UCB($\alpha$) selects the arm with the largest index $B_{i, t}$:
\[
B_{i, t} = \bar X_{i, n_i(t)} + \sigma \sqrt{\frac{2\alpha \log t}{n_i(t)}}
\]
where $n_i(t)$ is the number of pulls of arm $i$ at time $t$.
We assume $B_{i, t} = \infty$ if $n_i(t) = 0$.
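To make the index concrete, here is a minimal Python sketch of UCB($\alpha$) on a standard Bernoulli bandit; the arm means, horizon, and constants are illustrative, not taken from our setting (with rewards in $[0,1]$, $\sigma = 1/2$ suffices):

```python
import math
import random

def ucb_play(means, horizon, alpha=2.5, sigma=0.5):
    """Run UCB(alpha) on K Bernoulli arms; return the pull counts per arm."""
    K = len(means)
    counts = [0] * K
    sums = [0.0] * K
    for t in range(1, horizon + 1):
        def index(i):
            # B_{i,t}: empirical mean plus confidence radius; infinite if unpulled
            if counts[i] == 0:
                return float("inf")
            return sums[i] / counts[i] + sigma * math.sqrt(2 * alpha * math.log(t) / counts[i])
        arm = max(range(K), key=index)
        sums[arm] += 1.0 if random.random() < means[arm] else 0.0
        counts[arm] += 1
    return counts

random.seed(0)
counts = ucb_play([0.3, 0.5, 0.7], horizon=5000)
# the best arm (mean 0.7) receives the bulk of the pulls
```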
The following lemma bounds the regret of the UCB index policy.
\begin{lemma}[Theorem 2.1 \citep{bubeck2012regret}]
\label{thm:ucb}
Suppose rewards for each arm $i$ are independent across multiple pulls,
$\sigma$-sub-Gaussian and have mean $\mu_i$.
Define $\Delta_i = \max_j \mu_j - \mu_i$.
Then, UCB($\alpha$) attains regret bound
\[
\textup{regret}_n(UCB) \le
\sum_{i : \Delta_i > 0}
\frac{8 \alpha \sigma^2}{\Delta_i} \log n + \frac{\alpha}{\alpha - 2}.
\]
\end{lemma}
\citet{Kleinberg2003TheVO} adapt the above result to the problem of demand curve learning.
We follow their approach:
Discretize the action space and then use the standard UCB approach to
find an approximately optimal action.
For each user, the algorithm selects a constant action $x_u$ and
receives reward $R_u = 0$ if $x_u > \theta_u$, and
$R_u = \sum_{t=0}^{\infty} \gamma^t R_t(x_u)$ otherwise.
We impose the following assumptions:
$\theta \in [0, 1]$, $0 \le R(x) \le M$ for some $M > 0$,
and the function $f(x) = r(x)D(x) = r(x) (1-F(x))$ is strongly concave
and thus has a unique maximizer $x^*$.
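A sketch of the resulting per-user procedure follows; the uniform threshold distribution, the deterministic reward $r(x) = x$, and the constants are illustrative assumptions, not part of the algorithm:

```python
import math
import random

def discretized_ucb(n_users, gamma=0.9, alpha=2.5):
    """Sketch: one user per round; each discretized constant action is a bandit arm."""
    K = max(2, int(2.5 * (n_users / math.log(n_users)) ** 0.25))  # K = O((n/log n)^{1/4})
    arms = [(i + 1) / (K + 1) for i in range(K)]                  # grid of constant actions
    counts, sums = [0] * K, [0.0] * K
    for u in range(1, n_users + 1):
        def index(i):
            if counts[i] == 0:
                return float("inf")
            # sigma = 1/2 suffices for rewards normalized to [0, 1]
            return sums[i] / counts[i] + 0.5 * math.sqrt(2 * alpha * math.log(u) / counts[i])
        i = max(range(K), key=index)
        theta = random.random()              # illustrative: theta_u ~ U[0, 1]
        x = arms[i]
        # normalized discounted reward: (1-gamma) * sum_t gamma^t x = x if the user
        # never abandons (fixed threshold), and 0 if x exceeds theta immediately
        reward = x if x <= theta else 0.0
        counts[i] += 1
        sums[i] += reward
    return arms, counts, sum(sums)

random.seed(1)
arms, counts, total = discretized_ucb(5000)
```

Each arm is pulled at least once (unpulled arms have infinite index), and the average collected reward approaches the optimal value $\max_x x(1-x) = 0.25$ as the index concentrates.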
\begin{assumption}[Lemma 3.11 in \citealt{Kleinberg2003TheVO}]
\label{thm:concave}
There exist constants $c_1$ and $c_2$ such that
\[ c_1(x^* - x)^2 < f(x^*) - f(x) < c_2(x^* - x)^2\]
for all $x \in [0, 1]$.
\end{assumption}
Using these assumptions, we can prove the main learning result.
\begin{theorem}
\label{thm:ucb_upper}
Suppose that $f$ satisfies the concavity condition above.
Then UCB($\alpha$) on the discretized space with $K = O\left((n/\log n)^{1/4}\right)$ arms
satisfies
\[
\textup{regret}_n(UCB) \le O\left(\sqrt{n \log n}\right)
\]
for all $\alpha > 2$.
\end{theorem}
The proof consists of two parts. First, we use Assumption~\ref{thm:concave} to
bound the difference between the best action and the best arm in the
discretized action space.
Then, we use Lemma~\ref{thm:ucb} to show that the learning strategy
has small regret compared to the best arm.
Combining the two bounds proves the result.
It is important to note that the algorithm requires prior knowledge of the
number of users, $n$. In practice it is reasonable to assume that a platform
is able to estimate this accurately, but otherwise the well-known doubling trick
can be employed at a slight cost.
\subsection{Lower bound}
We now briefly discuss lower bounds on learning algorithms.
If we restrict ourselves to algorithms that play a constant policy for each
user, the lower bound of \citet{Kleinberg2003TheVO} applies immediately.
\begin{prop}[Theorem 3.9 in \citealt{Kleinberg2003TheVO}]
Any learning algorithm $A$ that plays a constant policy for each user
has regret at least
\[
\textup{regret}_n(A) \ge \Omega(\sqrt{n})
\]
for some threshold distribution.
\end{prop}
Thus, the discretized UCB strategy is near-optimal in the class
of constant policies.
However, algorithms with dynamic policies for users can obtain more
information on the user's threshold and therefore more easily estimate
the empirical distribution function.
Whether the $\Omega(\sqrt{n})$ lower bound carries over to dynamic policies
is an open problem.
\subsection{Simulations}
\label{sec:simulations}
In this section, we empirically compare the performance of the discretized UCB
against other policies.
For our simulations, we also include the MOSS
algorithm \citep{MOSS}, and an explore-exploit strategy.
\paragraph{MOSS}
\citet{MOSS} give an upper confidence bound algorithm that
has a tighter regret bound in the standard multi-armed bandit problem.
The MOSS algorithm is an index policy where the index for arm $i$ is given by
\[
B_{i, t} =
\bar X_{i, n_i(t)}
+ \sqrt{\left( \log \frac{t}{K n_i(t)} \right)_+ / n_i(t)}
\]
While the policy is quite similar to the UCB algorithm,
it does not suffer from an extra $\sqrt{\log n}$ term in the regret bound.
However, we cannot adapt its bound to the abandonment setting,
due to a worse dependence on the number of arms.
In practice, we expect this algorithm to perform better than the UCB algorithm,
as it is a superior multi-armed bandit algorithm.
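For concreteness, a minimal sketch of the MOSS index on a standard Bernoulli bandit (arm means and horizon are illustrative); note the bonus vanishes once an arm has been pulled more than $t/K$ times:

```python
import math
import random

def moss(n_rounds, means):
    """Sketch of the MOSS index policy on K Bernoulli arms."""
    K = len(means)
    counts, sums = [0] * K, [0.0] * K
    for t in range(1, n_rounds + 1):
        def index(i):
            if counts[i] == 0:
                return float("inf")
            # bonus sqrt( (log(t / (K n_i)))_+ / n_i )
            bonus = math.sqrt(max(math.log(t / (K * counts[i])), 0.0) / counts[i])
            return sums[i] / counts[i] + bonus
        arm = max(range(K), key=index)
        sums[arm] += 1.0 if random.random() < means[arm] else 0.0
        counts[arm] += 1
    return counts

random.seed(2)
counts = moss(5000, [0.2, 0.5, 0.8])
# the best arm (mean 0.8) dominates the pull counts
```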
\paragraph{Explore-exploit strategy}
Next, we consider an explore-exploit strategy that first estimates an empirical distribution function,
and then uses that to optimize a constant policy.
For this algorithm, we assume that, at the cost of zero reward, the learner
can observe $\theta_u$ for a particular user. This mimics a strategy
where the learner increases its action by $\epsilon$ at each time period
to learn the threshold $\theta_u$ of a particular user with arbitrary precision.
Because it directly estimates the empirical distribution function and does not
require discretization, it is better able to capture the structure of our model.
The explore-exploit strategy consists of two stages.
\begin{itemize}
\item First, obtain $m$ samples of $\theta_u$ to
find an empirical estimate of $F$, which we denote by $\hat F_m$.
\item For the remaining users, play the constant policy
$x_u = \arg\max_x r(x) (1 - \hat F_m(x))$.
\end{itemize}
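The two stages above can be sketched as follows; the uniform thresholds, $r(x) = x$, the grid resolution, and the sample sizes are illustrative assumptions:

```python
import random

def explore_exploit(thresholds, m, r=lambda x: x):
    """Two-stage sketch: observe the first m thresholds exactly (the zero-reward
    exploration idealization described above), then commit to the constant action
    maximizing r(x)(1 - F_hat(x)) over a fine grid."""
    explore, exploit = thresholds[:m], thresholds[m:]
    def survival(x):  # empirical 1 - F_hat(x)
        return sum(1 for th in explore if th >= x) / m
    grid = [i / 200 for i in range(201)]
    x_hat = max(grid, key=lambda x: r(x) * survival(x))
    # commit: normalized per-user reward r(x_hat) for every surviving user
    total = sum(r(x_hat) for th in exploit if th >= x_hat)
    return x_hat, total

random.seed(3)
thetas = [random.random() for _ in range(2000)]  # illustrative U[0,1] thresholds
x_hat, total = explore_exploit(thetas, m=110)
# x_hat should land near the true optimum 0.5 of x(1 - x)
```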
Note that compared to the previous algorithm,
we assume this learner has access to the reward function,
and only the threshold distribution $F$ is unknown.
If the signal-to-noise ratio in the stochastic rewards is large,
this is not unrealistic: the platform, while exploring,
is able to observe a large number of rewards and should therefore
be able to estimate the reward function reasonably well.
\paragraph{Setup}
For simplicity, our simulations focus on
a stylized setting; we observed similar results under
different scenarios.\footnote{Code to replicate the simulations under
a variety of scenarios is available at \url{https://github.com/schmit/learning-abandonment}.}
We assume that the rewards are deterministic and
follow the identity function $r(x) = x$, and
the threshold distribution (unknown to the learning algorithm)
is uniform on $[0, 1]$.
For each algorithm, we run 50 repetitions for $n = 2000$ time steps,
and plot all cumulative regret paths.
For the discretized policies, we set $K \approx 2.5 \left(\frac{n}{\log n}\right)^{1/4} = 12$.
The explore-exploit strategy first observes $20 + 2\sqrt{n} \approx 110$ samples
to estimate $F$, before committing to a fixed strategy.
\paragraph{Results}
The cumulative regret paths are shown in Figure~\ref{fig:regret}.
We observe that MOSS, while having higher variance,
indeed performs better than the standard UCB algorithm,
despite the lack of a theoretical bound.
However, the explore-exploit strategy obtains the lowest regret.
First, since it is aware of the reward function, it has less uncertainty.
More importantly, the algorithm leverages the structure of the problem because
it does not discretize the action space and then treat actions independently.
Finally, we note that when rewards are stochastic, UCB and MOSS fare even worse
compared to explore-exploit, as they have to estimate the mean reward function,
while the explore-exploit strategy assumes it is given.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{img/regret_unif_iden_3.pdf}
\caption{
Cumulative regret plots for $r(x) = x$ and $F = U[0, 1]$.
}
\label{fig:regret}
\end{figure}
\section{Threshold model}
\label{sec:model}
In this section, we formalize the problem of finding a personalized
policy for a single user without further feedback.
\subsection{Formal setup and notation}
We consider a setting where heterogeneous users interact with a platform at
discrete time steps indexed by $t$, and focus on the problem of finding a
personalized policy for a single user. The user is characterized by a
sequence of hidden thresholds $\{\theta_t\}_{t=0}^\infty$ jointly drawn from a
known distribution that models the heterogeneity across users. At every time
$t$, the platform selects an action $x_t \in \mathbf{X} \subset \ensuremath{\mathbb{R}}_+$ from
a given closed set $\mathbf{X}$. Based on the chosen action $x_t$, the
platform obtains the random reward $R_t(x_t) \ge 0$. The expected reward of
action $x$ is given by $r(x) = \ensuremath{\mathbb{E}}(R_t(x)) < \infty$, which we assume to be
stationary and known to the platform.\footnote{Section~\ref{sec:learning}
discusses the case when both $F$ and $r$ are unknown.} While not required for
our results, we expect $r$ to be increasing. When the action exceeds the
threshold at time $t$, the process stops. More formally, let $T$ be the
stopping time that denotes the first time the action $x_t$ exceeds the threshold $\theta_t$:
\[
T = \min \{ t : x_t > \theta_t \}.
\]
The goal is to find a sequence of actions $\{ x_t \}_{t=0}^\infty$ that
maximizes:
\[
\ensuremath{\mathbb{E}} \left[ \sum_{t=0}^{T-1} \gamma^t R_t(x_t) \right],
\]
where $\gamma \in (0,1)$ denotes the discount factor.
We note that this expectation is well defined even if $T = \infty$,
since $\gamma < 1$. We focus here on the discounted expected reward criterion. An alternative approach is to consider maximizing average reward on a finite horizon; considering this problem remains an interesting direction for future work.
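The objective can be evaluated directly by simulation; the sketch below is a minimal deterministic illustration, assuming the reward $r(x) = x$ and truncating the horizon where $\gamma^t$ is negligible:

```python
def discounted_reward(actions, thresholds, gamma=0.9):
    """Discounted reward of an action sequence: stops the first time x_t > theta_t."""
    total = 0.0
    for t, (x, theta) in enumerate(zip(actions, thresholds)):
        if x > theta:            # user abandons at time t; no reward from then on
            break
        total += gamma ** t * x  # illustrative deterministic reward r(x) = x
    return total

# immediate abandonment: the action exceeds the threshold at t = 0
v_gone = discounted_reward([0.5] * 200, [0.4] * 200)
# survival throughout: geometric sum 0.5 (1 - 0.9^200) / 0.1, essentially 5
v_stay = discounted_reward([0.5] * 200, [0.6] * 200)
```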
\subsection{Optimal policies}
Without imposing further restrictions on the structure of the stochastic
threshold process, the solution is intractable. Thus, we first consider two
extreme cases: (1) the threshold is sampled at the start and then remains fixed across time; and (2) the thresholds are independent across time.
Thereafter, we look at the robustness of the results when we
deviate from these extreme scenarios.
\paragraph{Fixed threshold}
We first consider a case where the threshold is sampled at the beginning of the horizon, but then remains fixed. In other words, for all $t$, $\theta_t = \theta \sim F$. Intuitively, we might expect that the platform might try to gradually learn this threshold, by starting with $x_t$ low and increasing it as long as the user does not abandon. In fact, we find something quite different: our main result is that the optimal policy is a constant policy.
\begin{prop}
\label{thm:fixed}
Suppose the function $x \mapsto r(x) (1-F(x))$ has a unique maximizer $x^* \in \mathbf{X}$.
Then, the optimal policy is $x_t = x^*$ for all $t$.
\end{prop}
All proofs can be found in the supplemental material.
We sketch an argument why there exists a constant policy that is optimal. Consider a policy
that is increasing and suppose it is optimal.\footnote{It is clear that the
optimal policy cannot be decreasing.} Then there exists a time $t$ such that
$x_t = y < x_{t+1} = z$. Compare these two actions with the policy that would
use action $z$ at both time periods. First suppose $\theta < y$; then the user
abandons under either alternative and so the outcome is identical. Now
consider $\theta \geq y$; then by the optimality of the first policy, given
knowledge that $\theta \geq y$, it is optimal to play $z$. But that means the
constant policy is at least as good as the optimal policy.
In the appendix, we provide another proof of the result using value iteration. This proof also characterizes the optimal policy and optimal value exactly (as in the proposition). Remarkably, the
optimal policy is independent of the discount factor $\gamma$.
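The claim can be sanity-checked numerically in a toy instance (uniform $F$ and $r(x) = x$, both illustrative): for a nondecreasing action sequence the user survives through time $t$ iff $\theta \ge x_t$, so the expected discounted reward is exactly $\sum_t \gamma^t x_t (1 - x_t)$, and no increasing "ladder" beats the static maximizer of $x(1-x)$:

```python
def value_nondecreasing(xs, gamma=0.9):
    """Exact expected discounted reward of a nondecreasing action sequence when
    theta ~ U[0, 1] and r(x) = x: the reward at time t is earned iff theta >= x_t."""
    return sum(gamma ** t * x * (1 - x) for t, x in enumerate(xs))

horizon = 300
static = value_nondecreasing([0.5] * horizon)   # x* = argmax x(1 - x) = 0.5
ladder = value_nondecreasing([min(0.2 + 0.05 * t, 0.9) for t in range(horizon)])
# static value is 0.25 (1 - 0.9^300) / 0.1, essentially 2.5; the ladder does worse
```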
\paragraph{Independent thresholds}
For completeness, we also note here the other extreme case: suppose the thresholds $\theta_t$ are drawn
independently from the same distribution $F$ at each $t$. Then since there is no correlation
between time steps, it follows immediately that the optimal policy is a
constant policy, with a simple form.
\begin{prop}
\label{thm:independent}
Suppose
\[
x^* \in \arg\max_{x \in \mathbf{X}} \frac{r(x) (1-F(x))}{1-\gamma(1-F(x))}
\]
is the unique optimum. Then the optimal policy under the independent threshold
assumption is $x_t = x^*$ for all $t$.
\end{prop}
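For intuition, the two objectives can be compared numerically in a toy instance (uniform thresholds, $r(x) = x$, $\gamma = 0.9$; all illustrative): with i.i.d. thresholds the abandonment risk recurs every period, so the maximizer is noticeably more conservative than the fixed-threshold optimum:

```python
def independent_value(x, gamma=0.9):
    """Normalized expected discounted reward of the constant action x when
    thresholds are i.i.d. U[0, 1] each period and r(x) = x (illustrative)."""
    s = 1 - x                      # per-period survival probability 1 - F(x)
    return x * s / (1 - gamma * s)

grid = [i / 1000 for i in range(1001)]
x_ind = max(grid, key=independent_value)         # roughly 0.24 for gamma = 0.9
x_fixed = max(grid, key=lambda x: x * (1 - x))   # fixed-threshold optimum 0.5
```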
\paragraph{Robustness}
So far, we have considered two extreme threshold models and have shown
that constant policies, albeit different ones, are optimal.
In this section we look at the robustness of those results by
understanding what happens when we interpolate between the two sides by
considering an additive noise threshold model.
Here, the threshold at time $t$ consists of a fixed element and
independent noise:
$\theta_t = \theta + \epsilon_t$,
where $\theta \sim F$ is drawn once,
and the noise terms are drawn independently.
In general, the optimal policy in this model is increasing and intractable because
the posterior over $\theta$ now depends on all previous actions.
However, there exist constant policies that are close to optimal
when the noise terms are either small or large, reflecting our preceding results in the extreme cases.
First consider the case where the noise terms are {\em small}.
In particular, suppose the noise terms have an arbitrary distribution supported on
a small interval $[-y, y]$.
\begin{prop}
\label{thm:small_noise}
Suppose $\epsilon_t \in [-y, y]$ and
the reward function $r$ is $L$-Lipschitz.
Then there exists a constant policy with
value $V_c$ such that
\[
V^* - V_c \le \frac{2yL}{1-\gamma}
\]
where $V^*$ is the value of the optimal policy for the noise model, and the
constant policy plays the action that is optimal under the worst-case shift $\epsilon_t = -y$.
\end{prop}
This result follows from comparing the most beneficial and most detrimental scenarios,
$\epsilon_t = y$ and $\epsilon_t = -y$ for all $t$, respectively,
and noting that in both cases the optimal policies are constant policies,
because the thresholds are simply shifted.
We can then show that the optimal policy for the worst scenario achieves
the gap above compared to the optimal policy in the best case.
The details can be found in the appendix.
Similarly, when the noise level is sufficiently {\em large} with respect to the
threshold distribution $F$ there also exists a constant policy that is close to
optimal. The intuition behind this is as follows. First, if the noise level
is large, the platform receives only little information at each step, and thus
cannot efficiently update the posterior on $\theta$. Furthermore, the high
variance in the thresholds also reduces the expected lifetime of any policy.
Combined, these two factors make learning ineffective.
We formalize this by comparing a constant policy to an oracle policy
that knows $\theta$ but not the noise terms $\epsilon_t$.
Let $G$ be the CDF of the noise distribution $\epsilon_t$ with
$\bar G$ denoting its complement: $\bar G(y) = 1 - G(y)$.
Then we note that for a given threshold $\theta$, the probability of survival
is $\bar G (x - \theta)$,
and thus the expected value for the constant policy $x_t = x$ for all $t$ is
\[
\frac{\bar G(x - \theta) r(x)} {1-\gamma \bar G(x-\theta)}.
\]
Define the optimal constant policy given knowledge of the fixed part of the
threshold, $\theta$ by $x(\theta)$:
\[
x(\theta) = \arg\max_x \frac{\bar G(x - \theta) r(x)} {1-\gamma \bar G(x-\theta)}.
\]
We can furthermore define the value of policy $x_t = x(\theta)$ when the
threshold is $\theta'$ by $v(\theta, \theta')$:
\[
v(\theta, \theta') = \frac{\bar G(x(\theta) - \theta') r(x(\theta))} {1-\gamma \bar G(x(\theta)-\theta')}.
\]
We note that $v$ is non-decreasing in $\theta'$.
We assume that $v$ is $L_v$-Lipschitz:
\[
|v(\theta, \theta') - v(s, \theta')| \le L_v |\theta - s|
\]
for all $\theta$ and $s$.
Note that noise distributions
$G$ that have high variance lead to a smaller Lipschitz constant.
To state our result in this case, we define an $\eta$-cover, which is a simple
notion of the spread of a distribution.
\begin{definition}
An interval $(l, u)$ provides an $\eta$-cover for distribution $F$
if $F(u) - F(l) \ge 1 - \eta$.
\end{definition}
In other words, with probability at least $1-\eta$, a random variable drawn from
distribution $F$ lies in the interval $(l, u)$.
\begin{prop}
\label{thm:large_noise}
Assume $r$ is bounded by $B$, and
$\ensuremath{\mathbf{X}}$ is an interval.
Suppose $v$ defined above is $L_v$-Lipschitz,
and there exists an $\eta$-cover for threshold distribution
$F_\theta$ with width $w = u - l$.
Then the constant policy $x_t = \frac{l + u}{2}$ with expected value
$V_\theta$ satisfies
\[
V^* - V_\theta \le V_o - V_\theta \le \frac{L_v w}{2} + 2\frac{\eta B}{1-\gamma}.
\]
\end{prop}
Here $V^*$ denotes the value of the optimal policy, and $V_o$ the value of the oracle policy that knows $\theta$ but not the noise terms.
The shape of $v$, and in particular its Lipschitz constant $L_v$ depend
on the threshold distribution $F$ and reward function $r$. As the noise distribution $G$ ``widens'', $L_v$ decreases.
As a result, the bound above is most relevant when the variance of $G$ is substantial relative to spread of $F$.
To summarize, our results show that in the extreme cases where the thresholds are drawn independently, or drawn once,
there exists a constant policy that is optimal. Further, the class of constant policies is robust when
the joint distribution over the thresholds is close to either of these scenarios.
\section{Related work}
\subsection{Dynamic pricing}
In the dynamic pricing literature, there are no repeat transactions; customers interact once, and the price can be adjusted over time.
\citet{rothschild1974two} considers learning the demand curve of a product
using a multi-armed bandit approach.
The work assumes that customers arrive only once,
and it is therefore not possible to learn about specific users.
\citet{farias2010dynamic} give an overview on dynamic pricing and consider
a setting where there is an initial inventory $x_0$ that gets sold over time and
the merchant is able to dynamically adjust prices $p(t)$.
They consider the case where customers arrive according to a Poisson process with
unknown arrival rate and their reservation price is drawn from a distribution $F$.
\subsection{Dynamic mechanism design}
There is a vast literature on mechanism design where the principal
designs pricing and allocation mechanisms to sell a single good or multiple goods,
starting with the work of \citet{myerson1981optimal}.
Recently, there has been a flurry of interest in the dynamic case,
where at each time $t=1,2,\ldots$ a good is sold \cite{pavan2014dynamic, bergemann2010dynamic,
Kakade2013OptimalDM, Skrzypacz2014MechanismsFR, athey2013efficient}.
This leads to interesting dynamics such as commitment problems
\cite{lobel2017dynamic} when information arrives over time.
There are two key differences between this line of research and our work.
First, our main focus is not on strategic agents, but on agents using simpler
heuristics, though we do mention where this leads to differences.
The second difference is that the setting of a single user that abandons
the principal forever has not been studied.
This can partly be explained by the fact that this might not be optimal from a
strategic agent's perspective; or it can be viewed as a severe restriction on
the action space of agents.
\subsection{Revenue management}
\citet{gallego1994optimal} consider the problem of selling a given stock of
items by a deadline to non-strategic agents, who do not take into account
future changes in price, and show that constant pricing policies are
asymptotically optimal.
There has been work on pricing under uncertainty, such as
\cite{bergemann2008pricing, bergemann2011robust, parakhonyak2015non,
caldentey2016intertemporal}.
Roughly speaking, we see that prices are lowered under uncertainty.
\section{Introduction}
\label{sec:intro}
Most of the baryons in the Universe lie outside galaxies. We can study baryonic halos around galaxies through absorption lines towards distant quasars, or through cooling radiation emitted in Ly$\alpha$. Absorption-line studies detect $\sim$100\,kpc halos of warm, T\,$\sim$\,10$^{4}$\,K, enriched gas around high-$z$ galaxies and quasars \citep[e.g.,][]{pro14,nee17}.
We recently discovered that the coldest gas phase can also exist in such environments, by revealing a molecular gas reservoir across the halo of the massive forming Spiderweb Galaxy at z\,=\,2.2 \citep[][hereafter EM16]{emo16}. The Spiderweb Galaxy is a conglomerate of starforming galaxies that surround the radio galaxy MRC\,1138-262, and that are embedded in a giant Ly$\alpha$ halo \citep{pen97,mil06}. We refer to the entire 200\,kpc region of the Ly$\alpha$ halo as the ``Spiderweb Galaxy'', because it will likely evolve into a single dominant cluster galaxy \citep{hat09}. The Spiderweb Galaxy is part of a larger proto-cluster \citep[][]{kur04,kod07,dan14}.
The halo gas spans a wide range of temperatures and densities (T\,$\sim$\,100-10$^{7}$\,K,\,n\,$\sim$\,10$^{-3}$-10$^{4}$\,cm$^{-3}$; \citealt{car02}, EM16). Across the inner $\sim$70 kpc, we detected $\sim$10$^{11}$ M$_{\odot}$ of molecular gas via CO(1-0) (EM16). The location and velocity of the CO, as well as its large angular scale (EM16, their Fig. S1), imply that the bulk of the molecular gas is found in the gaseous medium that lies between the brightest galaxies in the halo. We refer to this gaseous medium as the circumgalactic medium (CGM). There is also diffuse blue light across the halo, indicating that \textit{in-situ} star formation occurs within the CGM \citep{hat08}. Since the surface densities of the molecular gas and the rate of star formation fall along the Schmidt-Kennicutt relation, the CO(1-0) results provided the first direct link between star formation and cold molecular gas in the CGM of a forming massive galaxy at high-$z$ (EM16). Extended CO is also found in the CGM of a massive galaxy at $z$\,=\,3.47 \citep{gin17}.
Here we present observations sensitive to low-surface-brightness extended emission of atomic carbon, [C\,{\sc i}]~$^{3}$P$_{1}$-$^{3}$P$_{0}$\ (hereafter [C\,{\sc i}]) in the CGM of the Spiderweb Galaxy. We supplement these with observations of CO(1-0) and CO(4-3) to study the chemical composition and excitation conditions of the gas. [C\,{\sc i}]\ and CO(1-0) are fully concomitant in molecular clouds across the Milky Way \citep{ojh01,ike02}. They have a similar critical density, with the [C\,{\sc i}]\ $J$=1 level well populated down to T$_{\rm k}$\,$\sim$\,15\,K \citep{pap04}. A large positive K-correction means that, at comparable resolution, [C\,{\sc i}]\ is much brighter than CO(1-0). This becomes progressively more advantageous towards higher redshifts, as the instrumental T$_{\rm sys}$ at the corresponding frequencies become more comparable \citep{pap04,tom14}. Furthermore, a high cosmic ray flux from star formation or radio jets may reduce the CO abundance in favor of [C\,{\sc i}]\ \citep{bis15,bis17}.
We assume H$_{0} = 71$\,km s$^{-1}$\,Mpc$^{-1}$, $\Omega_\textrm{M} = 0.27$ and $\Omega_{\Lambda} = 0.73$, i.e., 8.4 kpc/$^{\prime\prime}$ and $D_{\rm L} = 17309$ Mpc at z\,=\,2.2 (EM16).
\section{Observations}
\label{sec:observations}
We observed the Spiderweb for 1.8~hrs on-source during ALMA cycle-3 on 16 Jan 2016 in its most compact 12m configuration (C36-1; baselines 15$-$161m). We placed two adjacent spectral windows of 1.875 GHz on [C\,{\sc i}]~$^{3}$P$_{1}$-$^{3}$P$_{0}$\ at $\nu_{\rm obs}$\,$\sim$\,155.7 GHz ($\nu_{\rm rest}$\,=\,492.16 GHz) and another two on CO(4-3) at $\nu_{\rm obs}$\,$\sim$\,145.8 GHz ($\nu_{\rm rest}$\,=\,461.04 GHz). The ALMA data were reduced in CASA (Common Astronomy Software Applications; \citealt{mcm07}). We binned the data to 30 km\,s$^{-1}$\ channels and cleaned signal $\geq$1 mJy\,bm$^{-1}$. We then Hanning smoothed the data to a resolution of 60 km\,s$^{-1}$. The resulting noise is 0.085 mJy\,beam$^{-1}$ channel$^{-1}$. We imaged our field using natural weighting out to $\sim$33$^{\prime\prime}$, where the primary beam response drops to $\sim$10$\%$ sensitivity. The synthesized beam is 2.3$^{\prime\prime}$\,$\times$1.5$^{\prime\prime}$ with PA=73.4$^{\circ}$.
Using the ATCA, we made a 2-pointing mosaic with an on-source time of $\sim$90\,hrs per pointing at $\nu_{\rm obs}$\,$\sim$\,36.5 GHz to observe CO(1-0). The first pointing was centred on the Spiderweb Galaxy (EM16). The second one was centred $\sim$23$^{\prime\prime}$ to the west and observed in Jan 2016 in 750C configuration for 42\,hrs and in April 2016 in H214 configuration for 40\,hrs on-source. The observing strategy and data reduction in MIRIAD followed EM16. Because the Spiderweb Galaxy was located near the edge of the primary beam in the second pointing, its strong continuum caused beam-smearing errors that could not be completely eliminated, even with model-based continuum subtraction in the ($u$,$v$)-domain \citep{all12}. This prevented us from improving the image of the faint CO(1-0) in the halo compared with EM16. We imaged both pointings separately using natural weighting, and combined them using the task LINMOS. We binned the channels to 34 km\,s$^{-1}$\ and applied a Hanning smooth, resulting in a resolution of 68 km\,s$^{-1}$. The noise in the center of the mosaic is 0.073 mJy\,bm$^{-1}$, with a beam of 4.7$^{\prime\prime}$\,$\times$\,4.1$^{\prime\prime}$ (PA\,36.3$^{\circ}$). Velocities are in the optical frame relative to $z$=2.1612.
\vspace{-1mm}
\section{Results}
\label{sec:results}
Fig.~\ref{fig:galaxies} shows [C\,{\sc i}], CO(4-3) and CO(1-0) from proto-cluster galaxies within 250\,kpc radius around the Spiderweb Galaxy. Six proto-cluster galaxies show line emission (Table\,\ref{tab:results}).
\begin{figure*}
\centering
\includegraphics[width=0.82\textwidth]{Emonts_Spiderweb_Fig1.pdf}
\caption{Spectra of [C\,{\sc i}]~$^{3}$P$_{1}$-$^{3}$P$_{0}$ (blue), CO(4-3) (black), and CO(1-0) (grey) in the six proto-cluster galaxies. Except for radio galaxy MRC\,1138-262, all are located far outside the Ly$\alpha$ halo of the Spiderweb Galaxy. Some of the CO(1-0) spectra are scaled up by a factor indicated at the bottom-right, to better visualize them. For MRC\,1138-262 we also show the [C\,{\sc i}]~$^{3}$P$_{2}$-$^{3}$P$_{1}$ line (light blue), derived by tapering and smoothing the ALMA data from \citet{gul16} to the resolution of our [C\,{\sc i}]~$^{3}$P$_{1}$-$^{3}$P$_{0}$\ data. Galaxy $\#$2 is H$\alpha$ emitter HAE\,229, for which \citet{dan17} detected CO(1-0) across a large disk. The CO(4-3) line of galaxy $\#$5 fell at the edge of the band, and the dotted line estimates the profile if it is symmetric. The top-right inset in each panel shows a 2$^{\prime\prime}$\,$\times$\,2$^{\prime\prime}$ region of the galaxy in {\it HST}/ACS F475W+F814W imaging \citep{mil06}. Coordinates in seconds and arcsec are relative to RA=11h40m and $\delta$=$-$26$^{\circ}$29$^{\prime}$.}
\label{fig:galaxies}
\end{figure*}
The central radio galaxy MRC\,1138-262 is covered fully by the ALMA beam and shows an extraordinarily high global $L^{\prime}_{\rm CO(4-3)}$/$L^{\prime}_{\rm CO(1-0)}$\,$\sim$\,1 (Table\,\ref{tab:results}). In metal-rich environments, such high global gas excitation states are hard to achieve with far-UV photons from star formation, and cloud-heating mechanisms due to cosmic rays, jet-induced shocks, or gas turbulence must prevail \citep{pap08,pap12,ivi12}. MRC\,1138-262 also has a high $L^{\prime}_\textrm{[CI]}$/$L^{\prime}_\textrm{CO}$\,$\sim$\,0.67, exceeding that of most submm galaxies (SMGs), quasi-stellar objects (QSOs), and lensed galaxies \citep{wal11, ala13, bot17}. We compare our [C\,{\sc i}]~$^{3}$P$_{1}$-$^{3}$P$_{0}$\ detection with [C\,{\sc i}]~$^{3}$P$_{2}$-$^{3}$P$_{1}$ data from \citet{gul16}, which we tapered and smoothed to the same spatial resolution (Fig.~\ref{fig:galaxies}). We derive a [C\,{\sc i}]\ fine-structure ratio of $L^{\prime}_\mathrm{[CI]2 \rightarrow 1}$/$L^{\prime}_\mathrm{[CI]1 \rightarrow 0}$\,$\sim$\,0.62, which implies an excitation temperature T$_\mathrm{ex}$\,$\sim$\,32\,K for optically thin gas \citep{stu97}.
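The quoted excitation temperature can be reproduced directly from the measured line ratio. The sketch below assumes the commonly used optically-thin relation of \citet{stu97}, $T_\mathrm{ex} = 38.8\,\mathrm{K}/\ln(2.11/R)$, with $R$ the [C\,{\sc i}]\ $^{3}$P$_{2}$-$^{3}$P$_{1}$/$^{3}$P$_{1}$-$^{3}$P$_{0}$ luminosity ratio; the exact form adopted in the paper is not stated, so this is an assumption:

```python
import math

def tex_ci(ratio_21_10):
    """Excitation temperature (K) of [CI] for optically thin gas,
    from the luminosity ratio R = L'_[CI](2-1) / L'_[CI](1-0).
    Assumes the relation T_ex = 38.8 K / ln(2.11 / R)."""
    return 38.8 / math.log(2.11 / ratio_21_10)

# Ratio measured here for MRC 1138-262:
print(round(tex_ci(0.62)))  # 32, matching the quoted T_ex ~ 32 K
```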
\begin{table*}
\caption{Emission-line properties. Velocity $v$ is relative to $z$=2.1612, while $v$ and FWHM are derived by fitting a Gaussian function to the CO(4-3) line ([C\,{\sc i}]\ for galaxy $\#$5). The ratios of the brightness luminosity ($L'$) are r$_\mathrm{CI/1-0}$ = $L^{\prime}_\mathrm{[CI]1 \rightarrow 0}$/$L^{\prime}_\mathrm{CO(1-0)}$, r$_\mathrm{CI/4-3}$ = $L^{\prime}_\mathrm{[CI]1 \rightarrow 0}$/$L^{\prime}_{\rm CO(4-3)}$, and r$_{\rm 4-3/1-0}$ = $L^{\prime}_{\rm CO(4-3)}$/$L^{\prime}_{\rm CO(1-0)}$. The molecular gas mass M$_\mathrm{H_2}$ is derived from $L'_{\rm CO(1-0)}$ \citep{sol05}, assuming $\alpha_{\rm CO}$\,=\,M$_{\rm H_2}$/$L'_{\rm CO(1-0)}$\,=\,0.8 M$_{\odot}$ (K km\,s$^{-1}$\ pc$^{2}$)$^{-1}$ for galaxies $\#$1$-$6, and $\alpha_{\rm CO}$\,=\,4 M$_{\odot}$ (K km\,s$^{-1}$\ pc$^{2}$)$^{-1}$ for the CGM (see EM16). The [C\,{\sc i}]\ mass (M$_{\rm [CI]}$) is estimated following \citet{wei05}, assuming $T_{\rm ex}$\,=\,30\,K. Errors (in brackets) include uncertainties in $I$ from both the noise (see Eqn. 2 of \citealt{emo14}; also \citealt{sag90}) and the absolute flux calibration (5$\%$ for ALMA; 20$\%$ for ATCA).}
\label{tab:results}
\begin{tabular}{lcccccccccc}
$\#$ & $v$ & FWHM & $I_{\rm [CI]1 \rightarrow 0}$ & $I_{\rm CO(1-0)}$ & $I_{\rm CO(4-3)}$ & r$_{\rm CI/1-0}$ & r$_{\rm CI/4-3}$ & r$_{\rm 4-3/1-0}$ & M$_{\rm H_2}$ & M$_{\rm [CI]}$ \\
& km/s & km/s & \multicolumn{3}{c}{Jy/bm$\times$km/s} & & & & 10$^{10}$\,M$_{\odot}$ & 10$^{6}$\,M$_{\odot}$ \\
\hline
1 & $-$ & 610\,(10) &1.32\,(0.07)& 0.11\,(0.03)$^{\dagger}$ & 1.77\,(0.09) & 0.66\,(0.18) & 0.67\,(0.04) & 1.00\,(0.28) & 2.0\,(0.6) & 21\,(1.0) \\
2 & -1290\,(10) & 375\,(20) & 0.60\,(0.08) & 0.15\,(0.04)$^{\ddagger}$ & 1.19\,(0.08) & 0.22\,(0.05) & 0.44\,(0.07) & 0.49\,(0.17) & 2.8\,(0.6) & 9.5\,(1.2) \\
3 & -450\,(10) & 255\,(15) & 0.29\,(0.03) & 0.06\,(0.02) & 0.54\,(0.03) & 0.27\,(0.09) & 0.47\,(0.05) & 0.56\,(0.19) & 1.1\,(0.4) & 4.6\,(0.5) \\
4 & 45\,(10) & 310\,(20) & $<$0.03 & $<$0.03 & 0.25\,(0.02) & $-$ & $<$0.11 & $>$0.52 & $<$0.6 & $<$0.5 \\
5 & -1655\,(10) & 220\,(15) & 0.08\,(0.01) & $<$0.03 & 0.16\,(0.01)$^{*}$ & $>$0.15 & 0.44\,(0.06) & $>$0.33 & $<$0.6 & 1.2\,(0.1) \\
6 & 60\,(15) & 200\,(25) & $<$0.04 & $<$0.03 & 0.13\,(0.02) & $-$ & $<$0.27 & $>$0.27 & $<$0.6 & $<$0.6 \\
\multicolumn{2}{l}{CGM \hspace{3mm} $-$} & $-$ & 0.56\,(0.09) & 0.11\,(0.04) & 0.79\,(0.07) & 0.28\,(0.11) & 0.62\,(0.11) & 0.45\,(0.17) & 10\,(4) & 8.9\,(1.4)\\
\hline
\end{tabular}
\flushleft
$^{\dagger}$ The ATCA profile in Fig.\,\ref{fig:galaxies} provides an upper limit to the CO(1-0) content of MRC\,1138-262, because the CO(1-0) data have a larger beam than the [C\,{\sc i}]\ and CO(4-3) data, and therefore include more extended CO(1-0). The corresponding $I_{\rm CO(1-0)}$\,$\la$\,0.126 Jy\,bm$^{-1}$ $\times$ km\,s$^{-1}$. To estimate lower limit values, we tapered existing high-resolution VLA data (EM16) to the spatial resolution of our ALMA data. This gives $I_{\rm CO(1-0)}$\,$\ga$\,0.101 Jy\,bm$^{-1}$\,$\times$\,km\,s$^{-1}$. However, these VLA data have lower sensitivity, hence likely underestimate the full width of the profile. $I_{\rm CO(1-0)}$ in the Table is a weighted average of the two values, although both values are within the uncertainties.\\
$^{\ddagger}$ \citet{dan17} previously reported a somewhat higher $I_{\rm CO(1-0)}$, although our estimate agrees to within the uncertainties.\\
$^{*}$ CO(4-3) falls at the edge of the band and half the profile is missing. We derive values assuming that the line profile is symmetric.
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=0.78\textwidth]{Emonts_Spiderweb_Fig2.pdf}
\caption{Channel maps of the [C\,{\sc i}]~$^{3}$P$_{1}$-$^{3}$P$_{0}$\ emission (blue contours) over-plotted on an \textit{HST}/ACS F475W+F814W image of the Spiderweb Galaxy \citep{mil06}. The magenta contours indicate the previously detected CO(1-0) emission in channels where it is bright enough to be reliably detected (EM16). The most prominent features seen in [C\,{\sc i}]\ and CO(1-0) are also detected in CO(4-3) (black contours in the insets). All data-sets were binned to a velocity resolution of 90 km\,s$^{-1}$, and the central velocity of each channel is indicated. Contour levels of [C\,{\sc i}]\ and CO(4-3) start at 2$\sigma$ and increase by a factor of 1.5, with $\sigma$\,=\,0.07 mJy\,beam$^{-1}$ (negative contours are shown in grey). CO(1-0) contour levels are at 2, 3, 4, 5$\sigma$, with $\sigma$\,=\,0.086 mJy\,beam$^{-1}$. The red contours indicate the 36 GHz radio continuum (EM16). The synthesized beams of the ALMA and ATCA data are shown in the bottom-left corner of the top-left plot.}
\label{fig:IGM}
\end{figure*}
While the bulk of the [C\,{\sc i}]\ and CO(4-3) in the Spiderweb Galaxy is associated with the central radio galaxy MRC\,1138-262, we also detect emission across $\sim$50 kpc in the CGM (Fig.~\ref{fig:IGM}). As with our previous CO(1-0) results, the extended [C\,{\sc i}]\ does not coincide in either location or velocity with ten of the brightest satellite galaxies visible in Fig.\,\ref{fig:IGM} (\citealt{kui11}; EM16). The [C\,{\sc i}]\ and CO(1-0) appear to follow the same distribution and kinematics across the velocity range where both lines are reliably detected ($-$87 to 273 km\,s$^{-1}$\ in Fig.\,\ref{fig:IGM}). At the highest velocities, the [C\,{\sc i}]~$^{3}$P$_{1}$-$^{3}$P$_{0}$\ peaks $\sim$7~kpc SE of the core of the radio galaxy, at a location where previous high-resolution ALMA data found a concentration of [C\,{\sc i}]\ $^{3}$P$_{2}$-$^{3}$P$_{1}$ \citep{gul16}. The CO(4-3) and [C\,{\sc i}]\ show similar morphologies.
The bright [C\,{\sc i}]\ and CO(4-3) in the central $\sim$2$^{\prime\prime}$ ($\sim$17 kpc) beam make it non-trivial to determine flux densities across the CGM. We therefore taper the ALMA data to a beam of $\sim$8$^{\prime\prime}$ ($\sim$70 kpc), which covers the full CO(1-0) halo. We then take the spectrum within this tapered beam and subtract the line profile of the central radio galaxy (Fig.\,\ref{fig:galaxies}). The resulting spectra of the CGM are shown in Fig.\,\ref{fig:spectraIGM}. For both [C\,{\sc i}]\ and CO(4-3), $\sim$30$\%$ of the total flux is spread across 17$-$70 kpc scales (Table\,\ref{tab:results}). The ten bright satellite galaxies with known redshifts, and likely also any fainter satellites \citep{hat08}, do not substantially contribute to this emission. The reasons are that the galaxies have a much higher velocity dispersion than the gas \citep[Fig.\,\ref{fig:spectraIGM};][]{kui11}, and the 3$\sigma$ upper limit for even the brightest satellite galaxies is $I_{\rm [CI]} < 0.028$ Jy\,beam$^{-1}$\,$\times$\,km\,s$^{-1}$\ (FWHM\,=\,200\,km\,s$^{-1}$), or $<$5$\%$ of the [C\,{\sc i}]\ brightness of the CGM.
\begin{figure}
\centering
\includegraphics[width=0.37\textwidth]{Emonts_Spiderweb_Fig3.pdf}
\caption{Emission on 17-70\,kpc scales in the Spiderweb's CGM. The spectra were extracted by tapering the various data to $\sim$8$^{\prime\prime}$ and subtracting the central 2$^{\prime\prime}$ spectra of MRC\,1138-262 (Fig.\,\ref{fig:galaxies}). For the central CO(1-0) spectrum of MRC\,1138-262, we used the average between the untapered ATCA spectrum and a tapered high-resolution VLA spectrum from EM16, as explained in Table\,\ref{tab:results}. The horizontal bar indicates the conservative velocity range over which we detect all three tracers in the CGM, which we used to determine intensities and ratios. The bottom histogram shows velocities of satellite galaxies that lie within the molecular halo, based on [O\,II], [O\,III] and H$\alpha$ \citep{kui11}.}
\label{fig:spectraIGM}
\end{figure}
\section{Discussion}
\label{sec:discussion}
Observations of [C\,{\sc i}]~$^{3}$P$_{1}$-$^{3}$P$_{0}$, CO(1-0), and CO(4-3) enable us to estimate the carbon abundance and excitation conditions of the molecular gas in the CGM and proto-cluster galaxies. Fig.\,\ref{fig:ratios} (top) shows that the values for $L'_{\rm [CI]}$/$L'_{\rm CO(4-3)}$ spread across a large range \citep[see also][]{wal11,ala13,bot17}. When instead comparing the ground-transitions of [C\,{\sc i}]~$^{3}$P$_{1}$-$^{3}$P$_{0}$\ and CO(1-0), Fig.\,\ref{fig:ratios} (bottom) shows two interesting results. First, the CGM has excitation conditions, $L'_\mathrm{CO(4-3)}$/$L'_\mathrm{CO(1-0)}$, and relative [C\,{\sc i}]\ brightness, $L'_\mathrm{[CI]}$/$L'_\mathrm{CO(1-0)}$, similar to those of the proto-cluster galaxies, as well as those of low-$z$ star-forming galaxies. Second, both the gas excitation and relative [C\,{\sc i}]\ brightness are substantially higher in the radio galaxy MRC\,1138-262. A possible explanation for the latter is that the CO(1-0) luminosity is reduced due to a high cosmic ray flux near the AGN \citep{bis17}. Alternatively, the [C\,{\sc i}]\ luminosity may depend on processes that also affect the gas excitation, and thus the luminosity of high-$J$ CO lines like CO(4-3) (Sect.\,\ref{sec:results}).
We estimate an H$_{2}$ mass in the CGM on 17$-$70 kpc scales of M$_\mathrm{H_2}$\,$\sim$\,1.0$\pm$0.4$\times$10$^{11}$($\alpha_\mathrm{CO}$/4)\,M$_{\odot}$ \citep[Table\,\ref{tab:results};][]{sol05}. The [C\,{\sc i}]\ mass in the CGM is M$_\mathrm{[CI]}$$\sim$8.9$\pm$1.4$\times$10$^{6}$\,M$_{\odot}$, assuming T$_\mathrm{ex}$$\sim$30\,K \citep{wei05}. This results in a [C\,{\sc i}]\ abundance of $X_\mathrm{[CI]}$/$X_\mathrm{H_2}$\,=\,M$_\mathrm{[CI]}$/(6M$_\mathrm{H_2}$) $\sim$ 1.5$\pm$0.6$\times$10$^{-5}$ (4/$\alpha_\mathrm{CO}$), close to that of the Milky Way ($\sim$2.2$\times$10$^{-5}$) and other high-$z$ star-forming galaxies \citep[][]{fre89,wei05,bot17}. The H$_{2}$ densities must be at least $\sim$100 cm$^{-3}$, which is the high end of densities of the cool neutral medium, where the H{\,\small I}\ to H$_{2}$ transition occurs \citep{bia17}. More likely, values will be close to $\sim$\,500 cm$^{-3}$, the critical density of [C\,{\sc i}]\ \citep{pap04}.
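The abundance quoted above is simple arithmetic on the tabulated masses; a minimal sketch (CGM values from Table\,\ref{tab:results}, in solar masses):

```python
m_h2 = 1.0e11   # M_H2 in the CGM, for alpha_CO = 4
m_ci = 8.9e6    # M_[CI] in the CGM, assuming T_ex ~ 30 K

# X_[CI]/X_H2 = M_[CI] / (6 M_H2): the factor 6 converts the
# carbon-to-H2 mass ratio into a number-abundance ratio
# (atomic mass 12 for C versus 2 for H2).
x_ci = m_ci / (6.0 * m_h2)
print(f"{x_ci:.1e}")  # 1.5e-05, close to the Milky Way value of ~2.2e-05
```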
\begin{figure}
\centering
\includegraphics[width=0.38\textwidth]{Emonts_Spiderweb_Fig4.pdf}
\caption{Ratios of the [C\,{\sc i}], CO(1-0) and CO(4-3) lines tracing a wide range of carbon abundances and excitation conditions. The open square represents the CGM of the Spiderweb Galaxy (17$-$70 kpc), the large solid dots the six proto-cluster galaxies from Fig.\,\ref{fig:galaxies}. The two stars represent two high-$z$ lensed SMGs \citep{dan11,les10,les11}, the small open circles low-$z$ star-forming galaxies \citep{kam16, isr15, ros15}. The histogram on the right ordinate of the top-panel is the r$_\mathrm{CI/CO(4-3)}$ distribution of high-$z$ SMGs and QSOs \citep{ala13,bot17}.}
\label{fig:ratios}
\end{figure}
\subsection{Mixing in the CGM}
Our findings of extended [C\,{\sc i}], CO(1-0) and CO(4-3) imply that the cold molecular CGM is metal-rich and not diffuse. As we showed in EM16, the surface densities of the molecular CGM and the rate of in-situ star formation across the halo fall on the same Kennicutt-Schmidt relation as for star-forming galaxies \citep{ken98}. The fact that the gas excitation and [C\,{\sc i}]\ abundance of the CGM are similar to that of the ISM in star-forming galaxies strengthens this claim.
Despite the similarities between the CGM of the Spiderweb and the ISM in surrounding proto-cluster galaxies, it is unlikely that the cold CGM consists mainly of gas that is currently being tidally stripped from proto-cluster galaxies. If the gas was originally stripped, the low velocity dispersion of the cold gas compared with that of the galaxies means that the gas must have had at least a dynamical time of t$_\mathrm{dyn}$\,$\gtrsim$\,10$^{8}$ yr to settle. Since the life-time of the OB stars across the CGM is only $\sim$10$^{7}$ yr, they must have formed long after the cold gas settled and cooled \citep[see][]{hat08}.
Our results have important implications for our understanding of galaxy formation. Most importantly, the [C\,{\sc i}]\ and CO properties do not corroborate models of efficient and direct stream-fed accretion of relatively pristine gas \citep[e.g.,][]{dek09}. Instead, they agree with more complex models where the gas in the CGM is a melange from various sources -- metal-enriched outflows, mass transfer among galaxies, gas accretion, and mergers \citep{mor06,nar15, angl17, fau16}. If the gas becomes multiphase and turbulent as it flows, the interaction and mixing of gas from these various sources is likely efficient \citep[][]{corn17}. The gradual build-up of carbon, oxygen and dust that is starting to be modeled across these extended regions mimics many of the properties that we observe in the CGM of the Spiderweb. Thus, our results support the hypothesis that galaxies grow from recycled gas in the CGM and not directly out of accreted cold gas from the cosmic web.
\section*{Acknowledgments}
We thank Padelis Papadopoulos for valuable feedback and expert advice on how to maximize carbon emissions and pollute the environments of galaxies. This paper makes use of the following ALMA data: ADS/JAO.ALMA$\#$2015.1.00851.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The Australia Telescope is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This research received funding from the Spanish Ministerio de Econom\'{i}a y Competitividad grants Ram\'{o}n y Cajal RYC2014-15686 (HD) and AYA2015-64346-C2-2-P (MV) and the Australian Research Council Centre of Excellence for All-sky Astrophysics in 3D project CE170100013 (JA).\\
\vspace{-10mm}\\
\section{Appendix}
\subsection{Algorithm for Network of SAGE}
Algorithms \ref{alg:sage} and \ref{alg:nsage}, respectively, define SAGE \cite{sage} and Network of SAGE (N-SAGE).
Algorithm \ref{alg:sage} assumes the mean-pool aggregation of \cite{sage}, which performs on par with their top-performing max-pool aggregation. Further, Algorithm \ref{alg:sage} operates in full-batch mode, while \cite{sage} offer a stochastic implementation with edge sampling. Nonetheless, their stochastic implementation could also be wrapped in a network, though we would need a way to approximate (e.g. sample entries of) the dense $\hat{A}^k$ as $k$ increases. We leave this as future work.
\begin{figure*}[h]
\begin{minipage}[t]{2.8in}
\begin{algorithm}[H]
\caption{SAGE Model \citep{sage}}
\label{alg:sage}
\begin{algorithmic}[1]
\Require{$\hat{A}$ is a normalization of $A$}
\Function{SageModel}{$\hat{A}$, $X$, $L$}
\State {$Z \leftarrow X$}
\For{$i = 1 $ to $L$}
\State{$Z \leftarrow \sigma(\left[ \begin{array}{c;{2pt/2pt}c}
Z & \hat{A} Z
\end{array} \right] W^{(i)})$ }
\State{$Z \leftarrow \textsc{L2NormalizeRows}(Z)$ }
\EndFor
\State{\Return $Z$}
\EndFunction
\end{algorithmic}
\end{algorithm}
\end{minipage}
\hfill
\begin{minipage}[t]{2.8in}
\begin{algorithm}[H]
\caption{N-SAGE}
\label{alg:nsage}
\begin{algorithmic}[1]
\Function{Nsage}{$A$, $X$}
\State{$D \leftarrow \textbf{diag}(A \mathbf{1}) $}
\Comment{Sum rows}
\State{$\hat{A} \leftarrow D^{-1}A$}
\State{\Return $\textsc{Network}(\textsc{SageModel}, \hat{A}, X, 2)$}
\EndFunction
\end{algorithmic}
\end{algorithm}
\end{minipage}
\hfill
\end{figure*}
Using SAGE with mean-pooling aggregation is very similar to a vanilla GCN model, with three differences. First, the choice of adjacency normalization ($D^{-1}A$ versus $D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$). Second, the skip connections in line 4, which concatenate the features with the adjacency-multiplied (i.e. diffused) features. We believe this is analogous in intuition to incorporating $\hat{A}^0$ in our model, which keeps the original features. Third, the use of node-wise L2 feature normalization at line 5, which is equivalent to applying a layernorm transformation \cite{layernorm}. Nonetheless, it is worth noting that \cite{sage}'s formulation of SAGE is flexible enough to allow different aggregations, such as max-pooling or LSTM, which further distinguishes SAGE from GCN.
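For concreteness, one mean-aggregation SAGE layer (lines 4$-$5 of Algorithm \ref{alg:sage}) can be sketched in a few lines of numpy; the toy graph and random weights below are purely illustrative:

```python
import numpy as np

def sage_layer(a_hat, z, w):
    """One mean-aggregation SAGE layer: concatenate node features with
    their neighbor-averaged (diffused) features, apply the layer weights
    and a ReLU, then L2-normalize every row (L2NormalizeRows)."""
    h = np.maximum(np.concatenate([z, a_hat @ z], axis=1) @ w, 0.0)
    norms = np.linalg.norm(h, axis=1, keepdims=True)
    return h / np.clip(norms, 1e-12, None)

# Toy path graph with row-normalized adjacency D^{-1} A:
a = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
a_hat = a / a.sum(axis=1, keepdims=True)
rng = np.random.RandomState(0)
z = sage_layer(a_hat, rng.randn(3, 4), rng.randn(8, 5))
print(z.shape)  # (3, 5); each row has unit (or zero) L2 norm
```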
\section{Background}
\label{sec:background}
\subsection{Semi-Supervised Node Classification}
Traditional label propagation algorithms \citep{semiemb, belkin2006} learn a model that transforms node features into node labels, and use the graph to add a regularizer term:
\begin{equation}
\label{eq:labelprop}
\mathcal{L}_\textrm{label.propagation} = \mathcal{L}_\textrm{classification}\left(f(X), \mathcal{V}_L\right) + \lambda f(X)^T \Delta f(X),
\end{equation}
The first term $\mathcal{L}_\textrm{classification}$ trains the model $f : \mathbb{R}^{N \times d_0} \rightarrow \mathbb{R}^{N \times C}$ to predict the known labels $\mathcal{V}_L$. The second term is the graph-based regularizer, ensuring that connected nodes have similar model outputs, with $\Delta$ being the graph Laplacian and $\lambda \in \mathbb{R}$ the regularization coefficient hyperparameter.
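A minimal numpy sketch of this objective, with an illustrative linear model $f(X) = XW$ and the combinatorial Laplacian $\Delta = D - A$ (both choices are assumptions for illustration, not prescribed by the cited methods):

```python
import numpy as np

def label_prop_loss(x, w, a, labels, labeled_idx, lam=0.1):
    """Squared-error loss on labeled nodes plus the Laplacian
    smoothness penalty tr(f^T Delta f), with f(X) = X W and
    Delta = D - A (illustrative choices)."""
    f = x @ w                                   # model output, N x C
    delta = np.diag(a.sum(axis=1)) - a          # combinatorial graph Laplacian
    cls = np.sum((f[labeled_idx] - labels[labeled_idx]) ** 2)
    reg = np.trace(f.T @ delta @ f)             # = 0.5 * sum_ij A_ij ||f_i - f_j||^2
    return cls + lam * reg

# Identical outputs on connected nodes incur no smoothness penalty:
a = np.array([[0., 1.], [1., 0.]])
loss = label_prop_loss(np.eye(2), np.ones((2, 3)), a,
                       labels=np.ones((2, 3)), labeled_idx=[0])
print(loss)  # 0.0
```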
\subsection{Graph Convolutional Networks}
Graph Convolution \citep{bruna} generalizes convolution from Euclidean domains to
graph-structured data.
Convolving a ``filter'' over a signal on graph nodes can be calculated by transforming both the filter and the signal to the Fourier domain, multiplying them, and then transforming the result back into the discrete domain.
The signal transform is achieved by multiplying with the eigenvectors of the graph Laplacian. The transformation requires a quadratic eigendecomposition of the symmetric Laplacian;
however, the low-rank approximation of the eigendecomposition can be calculated using truncated Chebyshev polynomials \citep{chebyshev}. For instance,
\cite{kipf} calculates a rank-1 approximation of the decomposition. They propose multi-layer Graph Convolutional Networks (GCNs) for semi-supervised graph learning. Every layer computes the transformation:
\begin{equation}
\label{eq:kipflayer}
H^{(l+1)} = \sigma\left(\hat{A} H^{(l)} W^{(l)} \right),
\end{equation}
where $H^{(l)} \in \mathbb{R}^{N \times d_l}$ is the input activation matrix to the $l$-th hidden layer, with row $H^{(l)}_i$ containing a $d_l$-dimensional feature vector for vertex $i \in \mathcal{V}$, and $W^{(l)} \in \mathbb{R}^{d_{l} \times d_{l+1}}$ is the layer's trainable weights. The first hidden layer $H^{(0)}$ is set to the input features $X$. A softmax on the last layer is used to classify labels. All layers use the same ``normalized adjacency'' $\hat{A}$, obtained by the ``renormalization trick'' of \cite{kipf}, as $\hat{A}=D^{-\frac12} A D^{-\frac12}$.\footnote{with self-connections added as $A_{ii} = 1$, as in \cite{kipf}}
Eq. \eqref{eq:kipflayer} is a first order approximation of convolving filter $W^{(l)}$ over signal $H^{(l)}$ \citep{chebyshev, kipf}. The left-multiplication with $\hat{A}$ averages node features with their direct neighbors; this signal is then passed through a non-linearity function $\sigma(\cdot)$ (e.g, ReLU$(z) = \max(0,z)$). Successive layers effectively \emph{diffuse} signals from nodes to neighbors.
A two-layer GCN model can be defined in terms of vertex features $X$ and normalized adjacency $\hat{A}$ as:
\begin{equation}
\label{eq:kipf2layers}
\textsc{GCN}_\textrm{2-layer}(\hat{A}, X; \theta) = \textrm{softmax}\left( \hat{A} \sigma(\hat{A}X W^{(0)}) W^{(1)}\right),
\end{equation}
where the GCN parameters $\theta = \left\{W^{(0)}, W^{(1)}\right\}$ are trained to minimize the cross-entropy error over labeled examples. The output of the GCN model is a matrix $\mathbb{R}^{N \times C}$, where $N$ is the number of nodes and $C$ is the number of labels. Each row contains the label scores for one node, assuming there are $C$ classes.
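A numpy sketch of Eq. \eqref{eq:kipf2layers}, including the construction of $\hat{A}$ via the renormalization trick (random weights stand in for the trained $W^{(0)}, W^{(1)}$; the sparse implementation is omitted):

```python
import numpy as np

def renormalized_adjacency(a):
    """The renormalization trick: add self-loops (A_ii = 1),
    then symmetrically normalize: D^{-1/2} (A + I) D^{-1/2}."""
    a = a + np.eye(a.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_two_layer(a_hat, x, w0, w1):
    """softmax( A_hat ReLU(A_hat X W0) W1 ); each row holds class scores."""
    h = np.maximum(a_hat @ x @ w0, 0.0)          # hidden layer with ReLU
    logits = a_hat @ h @ w1
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)      # row-wise softmax

rng = np.random.RandomState(0)
a = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # toy path graph
p = gcn_two_layer(renormalized_adjacency(a), rng.randn(3, 4),
                  rng.randn(4, 8), rng.randn(8, 2))
print(p.shape)  # (3, 2); each row sums to 1
```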
\subsection{Graph Embeddings}
Node Embedding methods represent graph nodes in a continuous vector space. They learn a dictionary $Z\in \mathbb{R}^{N \times d}$, with one $d$-dimensional embedding per node. Traditional methods use the adjacency matrix to learn embeddings. For example, Eigenmaps \citep{eigenmaps} performs the following constrained optimization:
\begin{equation}
\label{eq:eigenmaps}
\sum_{i, j} A_{ij} \left\| Z_i - Z_j \right\|^2 \ \ \textrm{s.t.} \ \ Z^T D Z = I,
\end{equation}
where $I$ is the identity matrix. Skipgram models on text corpora \citep{word2vec} inspired modern graph embedding methods, which simulate random walks to learn node embeddings \citep{deepwalk, node2vec}.
Each random walk generates a sequence of nodes. Sequences are converted to textual paragraphs, and are passed to a word2vec-style embedding learning algorithm \citep{word2vec}.
As shown in \cite{neuralwalk}, this learning-by-simulation is equivalent, in expectation, to the decomposition of a random walk co-occurrence statistics matrix $\mathcal{D}$. The expectation on $\mathcal{D}$ can be written as:
\begin{equation}
\mathbb{E}[\mathcal{D}] \propto \mathbb{E}_{q \sim \mathcal{Q}}\left[\left(\mathcal{T}\right)^q\right] = \mathbb{E}_{q \sim \mathcal{Q}} \left[\left(D^{-1}A\right)^q\right],
\label{eq:exp_D}
\end{equation}
where $\mathcal{T}=D^{-1}A$ is the row-normalized transition matrix (a.k.a.\ the right-stochastic adjacency matrix), and $\mathcal{Q}$ is a ``context distribution'' that is determined by random walk hyperparameters, such as the length of the random walk. The expectation therefore weights the influence of one node on another as a function of how well-connected they are, and the distance between them. The main difference between traditional node embedding methods and random walk methods is the optimization criterion: the former minimize a loss on representing the adjacency matrix $A$ (see Eq. \ref{eq:eigenmaps}), while the latter minimize a loss on representing the random walk co-occurrence statistics $\mathcal{D}$.
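Eq. \eqref{eq:exp_D} can be evaluated directly: a context distribution $\mathcal{Q}$ over walk steps weights the powers of the transition matrix (the uniform distribution over steps 1$-$3 below is illustrative):

```python
import numpy as np

def expected_cooccurrence(a, q_dist):
    """Q-weighted sum of powers of the row-normalized transition
    matrix T = D^{-1} A; q_dist[k] is the probability of step k+1."""
    t = a / a.sum(axis=1, keepdims=True)
    return sum(p * np.linalg.matrix_power(t, q)
               for q, p in enumerate(q_dist, start=1))

a = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])  # toy triangle graph
ed = expected_cooccurrence(a, [1/3, 1/3, 1/3])
print(np.round(ed.sum(axis=1), 6))  # rows sum to 1: a mixture of stochastic matrices is stochastic
```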
\section{Conclusions and Future Work}
\label{sec:conclusion}
In this paper, we propose a meta-model that can run arbitrary Graph Convolution models, such as GCN \citep{kipf} and SAGE \citep{sage}, on the output of random walks. Traditional Graph Convolution models operate on the normalized adjacency matrix. We make multiple instantiations of such models, feeding each instantiation a power of the adjacency matrix, and then concatenating the output of all instances into a classification sub-network. Each instantiation therefore operates on a different scale of the graph. Our model, Network of GCNs (and similarly, Network of SAGE), is end-to-end trainable, and is able to directly learn information across near or distant neighbors. We inspect the distribution of parameter weights in our classification sub-network, which reveals that our model is effectively able to circumvent adversarial perturbations on the input by shifting weights towards model instances consuming higher powers of the adjacency matrix. For future work, we plan to extend our methods to a stochastic implementation and tackle other (larger) graph datasets.
\section{Experiments}
\label{sec:experiments}
\subsection{Datasets}
We experiment on three citation graph datasets (Pubmed, Citeseer, and Cora) and one biological graph: Protein-Protein Interactions (PPI).
We choose the aforementioned datasets because they are available online and are used by our baselines. The citation datasets are prepared by \cite{planetoid}, and the PPI dataset is prepared by \cite{sage}. Table \ref{table:dataset-stats} summarizes dataset statistics.
Each node in the citation datasets represents an article published in the corresponding journal. An edge between two nodes represents a citation from one article to another, and a label represents the subject of the article. Each dataset contains a binary Bag-of-Words (BoW) feature vector for each node. The BoW features are extracted from the article abstract. Therefore, the task is to predict the subject of articles, given the BoW of their abstract and the citations to other (possibly labeled) articles. Following \cite{planetoid} and \cite{kipf}, we use 20 nodes per class for training, 500 (overall) nodes for validation, and 1000 nodes for evaluation. We note that the validation set is larger than the training set $|\mathcal{V}_L|$ for these datasets!
The PPI graph, as processed and described by \cite{sage}, consists of 24 disjoint subgraphs, each corresponding to a different human tissue. 20 of those subgraphs are used for training, 2 for validation, and 2 for testing, as partitioned by \cite{sage}.
\subsection{Baseline Methods}
For the citation datasets, we copy baseline numbers from \cite{kipf}. These include label propagation (LP, \cite{lp}); semi-supervised embedding (SemiEmb, \cite{semiemb}); manifold regularization (ManiReg, \cite{manireg}); skip-gram graph embeddings \citep[DeepWalk, ][]{deepwalk}; Iterative Classification Algorithm \citep[ICA, ][]{ica}; Planetoid \citep{planetoid}; vanilla GCN \citep{kipf}. For PPI, we copy baseline numbers from \citep{sage}, which include GraphSAGE with LSTM aggregation (SAGE-LSTM) and GraphSAGE with pooling aggregation (SAGE). Further, for all datasets, we use our implementation to run baselines DCNN \citep{diffusion-cnn}, GCN \citep{kipf}, and SAGE \citep[with pooling aggregation, ][]{sage}, as these baselines can be recovered as special cases of our algorithm, as explained in Section \ref{sec:nmodel}.
\subsection{Implementation}
We use TensorFlow \citep{tensorflow} to implement our methods, and we also use our implementation to measure the performance of the baselines GCN, SAGE, and DCNN.
For our methods and baselines, all GCN and SAGE modules that we train are 2 layers\footnote{except as clearly indicated in Table \ref{table:deepgcn}}, where the first outputs 16 dimensions per node and the second outputs the number of classes (dataset-dependent). DCNN baseline has one layer and outputs 16 dimensions per node, and its channels (one per transition matrix power) are concatenated into a fully-connected layer that outputs the number of classes.
We use $50\%$ dropout and L2 regularization of $10^{-5}$ for all of the aforementioned models.
\begin{table*}[t]
\begin{center}
\begin{tabular}{lcccccc}
\textbf{Dataset}
&\textbf{Type}
& \textbf{Nodes}
&\textbf{Edges}
&\textbf{Classes}
& \textbf{Features}
& \textbf{Labeled nodes}
\\
& & $|\mathcal{V}|$ & $|\mathcal{E}|$ & $C$ & $F$ & $|\mathcal{V}_L|$
\\ \hline
Citeseer & citation &3,327 &4,732 &6 (single class)&3,703 & 120 \\
Cora & citation &2,708 &5,429 &7 (single class) &1,433 & 140 \\
Pubmed & citation &19,717 &44,338 &3 (single class)&500 & 60 \\
PPI & biological & 56,944& 818,716 & 121 (multi-class) & 50 & 44,906 \\
\end{tabular}
\end{center}
\caption{Datasets used for experiments.
For the citation datasets, 20 training nodes per class are observed, with $|\mathcal{V}_L| = 20 \times C$.
}
\label{table:dataset-stats}
\end{table*}
\begin{table*}[t]
\begin{center}
\input{result_summary_table_2.tex}
\end{center}
\caption{
Node classification performance ($\%$ accuracy for the first three citation datasets, and micro-averaged F1 for the multi-class PPI dataset), using the data splits of \cite{planetoid, kipf} and \cite{sage}.
We report the test accuracy corresponding to the run with the highest validation accuracy.
Results in rows (a) through (g) are copied from \cite{kipf}, rows (h) and (i) from \citep{sage}, and (j) through (l) are generated using our code since we can recover other algorithms as explained in Section \ref{sec:nmodel}. Rows (m) and (n) are our models.
Entries with ``--'' indicate that the authors from whom we copied results did not run on those datasets. Nonetheless, we run all datasets using our implementation of the most competitive baselines.
}
\label{table:results}
\end{table*}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.3\linewidth}
\makebox[\textwidth]{
\includegraphics[trim={0cm 0 0 0},clip,width=\linewidth]{sweeps/plot2d_citeseer_gcn_fc_1.pdf}
}
\caption{N-GCN on Citeseer}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\makebox[\textwidth]{
\includegraphics[trim={0cm 0 0 0},clip,width=\linewidth]{sweeps/plot2d_cora_gcn_fc_1.pdf}
}
\caption{N-GCN on Cora}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\makebox[\textwidth]{
\includegraphics[trim={0cm 0 0 0},clip,width=\linewidth]{sweeps/plot2d_pubmed_gcn_fc_1.pdf}
}
\caption{N-GCN on Pubmed}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\makebox[\textwidth]{
\includegraphics[trim={0cm 0 0 0},clip,width=\linewidth]{sweeps/plot2d_citeseer_sage_fc_1.pdf}
}
\caption{N-SAGE on Citeseer}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\makebox[\textwidth]{
\includegraphics[trim={0cm 0 0 0},clip,width=\linewidth]{sweeps/plot2d_cora_sage_fc_1.pdf}
}
\caption{N-SAGE on Cora}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}
\makebox[\textwidth]{
\includegraphics[trim={0cm 0 0 0},clip,width=\linewidth]{sweeps/plot2d_pubmed_sage_a_1.pdf}
}
\caption{N-SAGE on Pubmed}
\end{subfigure}
\caption{Sensitivity Analysis. Model performance when varying random walk steps $K$ and replication factor $r$. Best viewed with zoom. Overall, model performance increases with larger values of $K$ and $r$. In addition, increasing the number of random walk steps (larger $K$) boosts performance more than increasing model capacity (larger $r$).}
\label{fig:sensitivity}
\end{figure*}
\begin{table*}[t]
\begin{center}
\input{pubmed_npc.tex}
\end{center}
\caption{Node classification accuracy (in $\%$) for our largest dataset (Pubmed) as we vary the size of the training data, $\frac{|\mathcal{V}_L|}{C} \in \{5, 10, 20, 100\}$ labeled nodes per class. We report means and standard deviations over 10 runs.
We use a different random seed for every run (i.e. selecting different labeled nodes), but the same 10 random seeds across models.
Convolution-based methods (e.g. SAGE) work well with few training examples, but \textit{unmodified} random walk methods (e.g. DCNN) work well with more training data. Our methods combine convolution and random walks, making them work well in both conditions.
}
\label{table:npc}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[trim={0 0.5cm 0 0},clip,width=\linewidth]{feat_r_plot.pdf}
\caption{
Classification accuracy for the Cora dataset with 20 labeled nodes per class $(|\mathcal{V}_L| = 20\times C)$, but with features removed at random, averaged over 10 runs.
We use a different random seed for every run (i.e. removing different features per node), but the same 10 random seeds across models.
}
\label{fig:featremoval}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[trim={0 0 0 0cm},clip,width=\linewidth]{feat_r_attention_gcn.pdf}
\caption{
Attention weights ($m$) for $\textrm{N-GCN}_\textrm{a}$
when trained with feature removal perturbation on the Cora dataset. Removing features shifts the attention weights to the right, suggesting the model is relying more on long range dependencies.
}
\label{fig:attentionfeatremoval}
\end{figure*}
\subsection{Node Classification Accuracy}
Table \ref{table:results} shows node classification accuracy results.
We run 20 different random initializations for every model (baselines and ours), train using the Adam optimizer \citep{adam} with a learning rate of 0.01 for 600 steps, and capture the model parameters at peak validation accuracy to avoid overfitting.
For our models, we sweep the hyperparameters $r$, $K$, and the choice of classification sub-network $\in \{\textrm{fc}, \textrm{a}\}$. For baselines and our models, we choose the model with the highest accuracy on the validation set, and use it to record metrics on the test set in Table \ref{table:results}.
Table \ref{table:results} shows that N-GCN outperforms GCN \citep{kipf} and N-SAGE improves on SAGE for all datasets, showing that \textit{unmodified} random walks indeed help in semi-supervised node classification. Finally, our proposed models achieve state-of-the-art results on all datasets.
\subsection{Sensitivity Analysis}
We analyze the impact of random walk length $K$ and replication factor $r$ on classification accuracy in Figure \ref{fig:sensitivity}. In general, model performance improves when increasing $K$ and $r$. We note that utilizing random walks by setting $K>1$ improves model accuracy due to the additional information, not due to increased model capacity: contrast $K=1, r>1$ (i.e. a mixture of GCNs, no random walks) with $K>1, r=1$ (i.e. N-GCN on random walks) -- in both scenarios the model has more capacity, but the latter shows better performance. The same holds for SAGE.
\subsection{Tolerance to feature noise}
We test our method under feature noise perturbations by removing node features at random. This is practical, as article authors might forget to include relevant terms in the article abstract,
and more generally not all nodes will have the same amount of detailed information.
Figure \ref{fig:featremoval} shows that when features are removed, methods utilizing unmodified random walks: N-GCN, N-SAGE, and DCNN, outperform convolutional methods including GCN and SAGE. Moreover, the performance gap widens as we remove more features. This suggests that our methods can somewhat recover removed features by \textit{directly} pulling-in features from nearby and distant neighbors.
We visualize in Figure \ref{fig:attentionfeatremoval} the attention weights as a function of \% features removed. With little feature removal, there is some weight on $\hat{A}^{0}$, and the attention weights for $\hat{A}^{1}, \hat{A}^{2}, \dots$ follow some decay function. Maliciously dropping features causes our model to shift its attention weights towards higher powers of $\hat{A}$.
\subsection{Random Walk Steps Versus GCN Depth}
A $K$-step random walk allows every node to accumulate information from its neighbors, up to distance $K$. Similarly, a $K$-layer GCN \citep{kipf} does the same. The difference between the two was mathematically explained in Section \ref{sec:motivation}. To summarize: the former averages node feature vectors according to random walk co-visit statistics, whereas the latter applies a matrix multiplication and a non-linearity at every step. So far, we displayed experiments where our models (N-GCN and N-SAGE) were able to use information from distant nodes (e.g. $K=5$), but for all GCN and SAGE modules, we used 2 GCN layers for baselines and our models.
Even though the authors of GCN \citep{kipf} and SAGE \citep{sage} suggest using two GCN layers, chosen by holdout validation, for a fair comparison with our models we run experiments utilizing deeper GCN and SAGE models so that their ``receptive field'' is comparable to ours.
\begin{table}
\begin{center}
\input{deep_gcn_accuracies.tex}
\end{center}
\caption{Performance of deeper GCN and SAGE models, both using our implementation. Deeper GCN (or SAGE) does not consistently improve classification accuracy, suggesting that N-GCN and N-SAGE are more performant and are easier to train. They use shallower convolution models that operate on multiple scales of the graph.}
\label{table:deepgcn}
\end{table}
Table \ref{table:deepgcn} shows test accuracies when training deeper GCN and SAGE models, using our implementation. We notice that, unlike our method which benefits from a wider ``receptive field'', there is no direct correspondence between depth and improved performance.
\section{Introduction}
\label{sec:intro}
Semi-supervised learning on graphs is important in many real-world applications, where the goal is to recover labels for all nodes given only a fraction of labeled ones.
Some applications include social networks, where one wishes to predict user interests, or in health care, where one wishes to predict whether a patient should be screened for cancer. In many such cases, collecting node labels can be prohibitive.
However, edges between nodes can be easier to obtain, either using an explicit graph (e.g. social network) or implicitly by calculating pairwise similarities \citep[e.g. using a patient-patient similarity kernel, ][]{selin}.
Convolutional Neural Networks \citep{lecun98} learn location-invariant hierarchical filters, enabling significant improvements on Computer Vision tasks \citep{alexnet, szegedy, resnet}. This success has motivated researchers \citep{bruna} to extend convolutions from spatial (i.e. regular lattice) domains to graph-structured (i.e. irregular) domains, yielding a class of algorithms known as Graph Convolutional Networks (GCNs).
Formally, we are interested in semi-supervised learning where we are given a graph $\mathcal{G}=(\mathcal{V}, \mathcal{E})$ with $N=|\mathcal{V}|$ nodes, adjacency matrix $A$, and matrix $X \in \mathbb{R}^{N \times F}$ of node features. Labels for only a subset of nodes $\mathcal{V}_L \subset \mathcal{V}$ are observed. In general, $|\mathcal{V}_L| \ll |\mathcal{V}|$. Our goal is to recover labels for all unlabeled nodes $\mathcal{V}_U = \mathcal{V} - \mathcal{V}_L$, using the feature matrix $X$, the known labels for nodes in $\mathcal{V}_L$, and the graph $\mathcal{G}$.
In this setting, one treats the graph as the ``unsupervised'' and labels of $\mathcal{V}_L$ as the ``supervised'' portions of the data.
Depicted in Figure \ref{fig:model}, our model for semi-supervised node classification builds on the GCN module proposed by Kipf and Welling \cite{kipf}, which operates on the normalized adjacency matrix $\hat{A}$, as in $\textrm{GCN}(\hat{A})$, where $\hat{A} = D^{-\frac12} A D^{-\frac12}$ and $D$ is the diagonal matrix of node degrees. Our proposed extension of GCNs is inspired by recent advancements in random walk based graph embeddings \citep[e.g.][]{deepwalk, node2vec, neuralwalk}.
We make a Network of GCN modules (N-GCN), feeding each module a different power of $\hat{A}$,
as in $\{\textrm{GCN}(\hat{A}^0), \textrm{GCN}(\hat{A}^1), \textrm{GCN}(\hat{A}^2), \dots\}$.
The $k$-th power contains statistics from the $k$-th step of a random walk on the graph.
Therefore, our N-GCN model is able to combine information from various step-sizes (i.e. graph scales). We then combine the output of all GCN modules into a classification sub-network, and we jointly train all GCN modules and the classification sub-network on the upstream objective for semi-supervised node classification.
Weights of the classification sub-network give us insight on how the N-GCN model works. For instance, in the presence of input perturbations, we observe that the classification sub-network weights shift towards GCN modules utilizing higher powers of the adjacency matrix, effectively widening the ``receptive field'' of the (spectral) convolutional filters.
We achieve state-of-the-art results on several semi-supervised graph learning tasks, showing that explicit random walks enhance the representational power of vanilla GCNs.
The rest of this paper is organized as follows. Section \ref{sec:background} reviews background work that provides the foundation for this paper. In Section \ref{sec:method}, we describe our proposed method, followed by experimental evaluation in Section \ref{sec:experiments}. We compare our work with recent closely-related methods in Section \ref{sec:related}. Finally, we conclude with our contributions and future work in Section \ref{sec:conclusion}.
\begin{figure*}[t]
\centering
\begin{subfigure}{0.47\linewidth}
\makebox[\textwidth]{
\scalebox{0.6}{
\input{fig1tikz.tex}
}
}
\caption{N-GCN Architecture}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\makebox[\textwidth]{
\includegraphics[width=\linewidth]{tsne_fc.pdf}
}
\caption{t-SNE visualization of the fully-connected (fc) hidden layer of N-GCN when trained on the Cora graph.}
\end{subfigure}
\caption{
Left: Model architecture, where $\hat{A}$ is the normalized adjacency matrix, $I$ is the identity matrix, $X$ is the node feature matrix, and $\times$ is the matrix-matrix multiplication operator. We calculate $K$ powers of $\hat{A}$, feeding each power into $r$ GCNs, along with $X$. The output of all $K \times r$ GCNs can be concatenated along the column dimension, then fed into fully-connected layers, outputting
$C$ channels per node, where $C$ is the size of the label space. We calculate the cross-entropy error between the rows of the $N \times C$ \textit{prediction} matrix and the known labels, and use it to update the parameters of the classification sub-network and all GCNs. Right: pre-ReLU activations after the first fully-connected layer of a 2-layer classification sub-network. Activations are reduced to 50 dimensions with PCA, then visualized using t-SNE.
}
\label{fig:model}
\end{figure*}
\section{Our Method}
\label{sec:method}
\subsection{Motivation}
\label{sec:motivation}
Graph Convolutional Networks and random walk graph embeddings are individually powerful.
\cite{kipf} uses GCNs for semi-supervised node classification. Instead of following traditional methods that use the graph for regularization (e.g. Eq. \ref{eq:eigenmaps}), \cite{kipf} use the adjacency matrix for training and inference, effectively diffusing information across edges at all GCN layers (see Eq. \ref{eq:kipf2layers}).
Separately,
recent work has shown that random walk statistics can be very powerful for learning an unsupervised representation of nodes that preserves the structure of the graph \citep{deepwalk, node2vec, neuralwalk}.
Under special conditions, it is possible for the GCN model to learn random walks. In particular, consider a two-layer GCN defined in Eq. \ref{eq:kipf2layers} with the assumption that first-layer activation is identity as $\sigma(z) = z$, and weight $W^{(0)}$ is an identity matrix (either explicitly set or learned to satisfy the upstream objective). Under these two identity conditions, the model reduces to:
\begin{align*}
\textsc{GCN}_\textrm{2-layer-special}(\hat{A}, X) &= \textrm{softmax}\left( \hat{A} \hat{A}X W^{(1)}\right) \\
&= \textrm{softmax}\left( \hat{A}^2 X W^{(1)}\right)
\end{align*}
where $\hat{A}^2$ can be expanded as:
\begin{align}
\nonumber
\hat{A}^2 &= \left(D^{-\frac12} A D^{-\frac12}\right) \left(D^{-\frac12} A D^{-\frac12}\right) = D^{-\frac12} A \left[D^{-1} A\right] D^{-\frac12} \\
&= D^{-\frac12} A \mathcal{T} D^{-\frac12}
\label{eq:a_hat_2}
\end{align}
Multiplying the adjacency matrix $A$ by the transition matrix $\mathcal{T}$ means that $\textsc{GCN}_\textrm{2-layer-special}$ effectively performs a one-step random walk, i.e. diffusing signals from nodes to neighbors without non-linearities, before applying a non-linear graph convolution layer.
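The identity in Eq. \ref{eq:a_hat_2} is easy to check numerically. The following self-contained NumPy sketch (ours, for illustration; not from the released implementation) verifies it on a toy three-node graph:

```python
import numpy as np

# Small undirected graph with self-loops (as in Kipf & Welling).
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)

d = A.sum(axis=1)                      # node degrees
D_inv_sqrt = np.diag(d ** -0.5)
A_hat = D_inv_sqrt @ A @ D_inv_sqrt    # symmetric normalized adjacency
T = np.diag(1.0 / d) @ A               # random-walk transition matrix D^{-1} A

lhs = A_hat @ A_hat                    # \hat{A}^2
rhs = D_inv_sqrt @ A @ T @ D_inv_sqrt  # D^{-1/2} A T D^{-1/2}
assert np.allclose(lhs, rhs)
```

The assertion passes because $\hat{A}^2 = D^{-\frac12} A (D^{-1} A) D^{-\frac12}$, with the inner $D^{-1}A$ being exactly the transition matrix $\mathcal{T}$.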
\subsection{Explicit Random Walks}
The special conditions described above are not true in practice. Although stacking hidden GCN layers allows information to flow through graph edges, this flow is \emph{indirect} as the information goes through feature reduction (matrix multiplication) and a non-linearity (activation function $\sigma(\cdot)$).
Therefore, the vanilla GCN cannot directly learn high powers of $\hat{A}$, and could struggle with modeling information across distant nodes.
We hypothesize that making the GCN directly operate on random walk statistics will allow the network to better utilize information across distant nodes, in the same way that node embedding methods (e.g. DeepWalk, \cite{deepwalk}) operating on random walk statistics are superior to traditional embedding methods operating on the adjacency matrix (e.g. Eigenmaps, \cite{eigenmaps}).
Therefore, in addition to feeding only $\hat{A}$ to the GCN model as proposed by \cite{kipf} (see Eq. \ref{eq:kipf2layers}), we propose to feed a $K$-degree polynomial of $\hat{A}$ to $K$ instantiations of GCN. Generalizing Eq.~\eqref{eq:a_hat_2} to arbitrary power $k$ gives:
\begin{equation}
\hat{A}^k = D^{-\frac12} A \mathcal{T}^{k-1} D^{-\frac12}.
\end{equation}
We also define $\hat{A}^0$ to be the identity matrix. Similar to \cite{kipf}, we add self-connections and convert directed graphs to undirected ones, making $\hat{A}$, and hence $\hat{A}^k$, symmetric matrices. The eigendecomposition of symmetric matrices is real. Therefore, the low-rank approximation of the eigendecomposition \cite{chebyshev} is still valid, and one layer of \cite{kipf} utilizing $\hat{A}^k$ should still approximate multiplication in the Fourier domain.
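In practice, the powers $\hat{A}^0, \dots, \hat{A}^{K-1}$ can be computed once upfront, each reusing the previous one. A minimal NumPy sketch (function names are ours, purely illustrative):

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def adjacency_powers(A_hat, K):
    """Return [A_hat^0, A_hat^1, ..., A_hat^{K-1}], reusing the previous power."""
    P = np.eye(A_hat.shape[0])
    powers = [P]
    for _ in range(K - 1):
        P = A_hat @ P
        powers.append(P)
    return powers
```

In a real implementation one would keep $\hat{A}$ sparse and apply $\hat{A} P$ as a sparse-dense product; the dense version above only illustrates the recurrence.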
\subsection{Network of GCNs}
Consider $K$ instantiations of $\{ \textsc{GCN}(\hat{A}^0, X)$, $\textsc{GCN}(\hat{A}^1, X)$, $\dots$,
$\textsc{GCN}(\hat{A}^{K-1}, X) \}$. Each GCN outputs a matrix in $\mathbb{R}^{N \times C_k}$, where the $v$-th row describes a latent representation of that particular GCN for node $v \in \mathcal{V}$, and $C_k$ is the latent dimensionality. Though $C_k$ can be different for each GCN, we set all $C_k$ to be the same for simplicity. We then combine the outputs of all $K$ GCNs and feed them into a classification sub-network, allowing us to jointly train all GCNs and the classification sub-network via backpropagation. This allows the classification sub-network to choose features from the various GCNs, effectively letting the overall model learn a combination of features using the raw (normalized) adjacency matrix, different steps of random walks (i.e. graph scales), and the input features $X$ (as they are multiplied by the identity $\hat{A}^0$).
\subsubsection{Fully-Connected Classification Network}
From a deep learning perspective, it is intuitive to represent the classification network as a fully-connected layer. We concatenate the output of the $K$ GCNs along the column dimension, i.e. concatenating all $\textrm{GCN}(\hat{A}^k, X)$, each $\in \mathbb{R}^{N \times C_k}$, into a matrix $\in \mathbb{R}^{N \times C_K}$ where $C_K = \sum_k C_k$. We add a fully-connected layer $f_\textrm{fc} : \mathbb{R}^{N \times C_K} \rightarrow \mathbb{R}^{N \times C}$, with trainable parameter matrix $W_\textrm{fc} \in \mathbb{R}^{C_K \times C}$, written as:
\begin{align}
\label{eq:ngcnfc}
&\textrm{N-GCN}_\textrm{fc}(\hat{A}, A; W_\textrm{fc}, \theta) = \textrm{softmax}\bigg( \\
&
\nonumber
\ \ \ \ \ \ \ \ \ \left[
\begin{array}{c;{2pt/2pt}c;{2pt/2pt}c}
\textrm{GCN}(\hat{A}^0, X; \theta^{(0)}) & \textrm{GCN}(\hat{A}^1, X; \theta^{(1)}) & \dots
\end{array}
\right] W_\textrm{fc} \bigg).
\end{align}
The classifier parameters $W_\textrm{fc}$ are jointly trained with GCN parameters $\theta=\{\theta^{(0)}, \theta^{(1)}, \dots \}$. We use subscript fc on N-GCN to indicate the classification network is a fully-connected layer.
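The combination step of Eq. \ref{eq:ngcnfc} reduces to a concatenation, a matrix product, and a row-wise softmax. A minimal NumPy sketch (ours, treating the GCN outputs as given matrices; not the authors' code):

```python
import numpy as np

def softmax(Z):
    """Row-wise softmax with max-subtraction for numerical stability."""
    e = np.exp(Z - Z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ngcn_fc(gcn_outputs, W_fc):
    """gcn_outputs: list of K matrices, each N x C_k (one per GCN instantiation);
    W_fc: (sum_k C_k) x C.  Concatenate along columns, apply the
    fully-connected layer, then softmax over classes."""
    H = np.concatenate(gcn_outputs, axis=1)   # N x C_K
    return softmax(H @ W_fc)                  # N x C class probabilities
```

Each row of the result is a probability distribution over the $C$ classes for one node.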
\subsubsection{Attention Classification Network}
We also propose a classification network based on ``softmax attention'', which learns a convex combination of the GCN instantiations. Our attention model ($\textrm{N-GCN}_\textrm{a}$) is parametrized by vector $\widetilde{m} \in \mathbb{R}^K$, one scalar for each GCN. It can be written as:
\begin{align}
\textrm{N-GCN}_\textrm{a}( \hat{A}, X; m, \theta) &= \sum_{k} m_k \textrm{GCN}(\hat{A}^k, X; \theta^{(k)})
\end{align}
where $m$ is output of a softmax: $m = \textrm{softmax}(\widetilde{m})$.
This softmax attention is similar to a ``Mixture of Experts'' model, especially if we set the number of output channels of all GCNs equal to the number of classes, as in $C_0 = C_1 = \dots = C$.
This allows us to add cross-entropy loss terms on all GCN outputs in addition to the loss applied at the output of the N-GCN, forcing all GCNs to be independently useful. It is possible to set the parameter vector $m \in \mathbb{R}^K$ ``by hand'' using the validation split, especially for reasonable $K$ such as $K \le 6$. One possible choice is setting $m_0$ to some small value and the remaining $m_1, \dots, m_{K-1}$ to the harmonic series $\frac{1}{k}$; another is linear decay $\frac{K - k}{K-1}$. These are respectively similar to the context distributions of GloVe \citep{glove} and word2vec \citep{word2vec, levy}. We note that if, on average, a node's information is captured by its direct or nearby neighbors, then the outputs of GCNs consuming lower powers of $\hat{A}$ should be weighted highly.
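The attention combination is a softmax over $K$ scalars followed by a convex combination of the GCN outputs. An illustrative NumPy sketch (ours, not the released implementation):

```python
import numpy as np

def ngcn_attention(gcn_outputs, m_tilde):
    """gcn_outputs: K matrices of identical shape N x C (one per GCN);
    m_tilde: K unnormalized attention scores.  Returns the convex
    combination of the GCN outputs and the softmax weights m."""
    m = np.exp(m_tilde - np.max(m_tilde))   # stable softmax
    m = m / m.sum()                         # convex combination weights
    return sum(w * Z for w, Z in zip(m, gcn_outputs)), m
```

Since the weights $m$ are non-negative and sum to one, the output stays a weighted average of the individual GCN predictions, which is what makes the learned weights interpretable.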
\subsection{Training}
We minimize the cross entropy between our model output and the known training labels $Y$ as:
\begin{equation}
\min \, -\textrm{diag}(\mathcal{V}_L) \left[Y \circ \log \textrm{N-GCN}(X, \hat{A})\right],
\end{equation}
where $\circ$ is the Hadamard product, and $\textrm{diag}(\mathcal{V}_L)$ denotes a diagonal matrix with entry $(i, i)$ set to 1 if $i \in \mathcal{V}_L$ and 0 otherwise. In addition, we can apply intermediate supervision for $\textrm{N-GCN}_\textrm{a}$ to encourage all GCNs to become independently useful, yielding the minimization objective:
\begin{align}
\nonumber
\min_{m, \theta} -\textrm{diag}(\mathcal{V}_L) \bigg[ & Y \circ \log \textrm{N-GCN}_\textrm{a}(\hat{A}, X; m, \theta) \\
& + \sum_k Y \circ \log \textrm{GCN}(\hat{A}^k, X; \theta^{(k)}) \bigg].
\end{align}
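The $\textrm{diag}(\mathcal{V}_L)$ masking above simply restricts the cross-entropy to labeled nodes. A minimal NumPy sketch of that masked loss (ours, for illustration only):

```python
import numpy as np

def masked_cross_entropy(probs, Y, labeled):
    """probs: N x C model outputs (rows sum to 1); Y: N x C one-hot labels;
    labeled: boolean mask selecting the nodes in V_L.  The loss is averaged
    over labeled nodes only, mirroring diag(V_L) in the objective."""
    log_p = np.log(probs + 1e-12)         # small epsilon guards log(0)
    per_node = -(Y * log_p).sum(axis=1)   # cross entropy per node
    return per_node[labeled].mean()
```

The intermediate-supervision variant would sum this same loss over the N-GCN output and each individual GCN output.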
\subsection{GCN Replication Factor $r$}
To simplify notation, our N-GCN derivations (e.g. Eq. \ref{eq:ngcnfc}) assume that there is one GCN per $\hat{A}$ power. However, our implementation feeds every $\hat{A}$ to $r$ GCN modules, as shown in Fig. \ref{fig:model}.
\subsection{Generalization to other Graph Models}
\label{sec:nmodel}
In addition to vanilla GCNs \citep[e.g. ][]{kipf}, our derivation also applies to other graph models including GraphSAGE \citep[SAGE,][]{sage}.
Algorithm \ref{alg:nmodel} shows a generalization that allows us to make a network of arbitrary graph models (e.g. GCN, SAGE, or others). Algorithms \ref{alg:gcn} and \ref{alg:sage}, respectively, show pseudo-code for the vanilla GCN \citep{kipf} and GraphSAGE\footnote{Our implementation assumes mean-pool aggregation by \cite{sage}, which performs on par with their top performer, max-pool aggregation. In addition, our Algorithm \ref{alg:sage} lists a full-batch implementation whereas \citep{sage} offer a mini-batch implementation.} \citep{sage}. Finally, Algorithm \ref{alg:ngcn} defines our full Network of GCN model (N-GCN) by plugging Algorithm \ref{alg:gcn} into Algorithm \ref{alg:nmodel}. Similarly, Algorithm \ref{alg:nsage} defines our N-SAGE model by plugging Algorithm \ref{alg:sage} into Algorithm \ref{alg:nmodel}.
\begin{figure*}[h]
\begin{minipage}[t]{5.52in}
\begin{algorithm}[H]
\caption{General Implementation: Network of Graph Models}
\label{alg:nmodel}
\begin{algorithmic}[1]
\Require{$\hat{A}$ is a normalization of $A$}
\Function{Network}{$\textsc{GraphModelFn}$, $\hat{A}$, $X$, $L$, $r=4$, $K=6$, \textsc{ClassifierFn}=\textsc{FcLayer}}
\State{$P \leftarrow I$}
\State{GraphModels $ \leftarrow $ []}
\For{$k = 1 $ to $K$}
\For{$i = 1 $ to $r$}
\State{GraphModels.append($\textsc{GraphModelFn}(P, X, L)$)}
\EndFor
\State{$P \leftarrow \hat{A} P$}
\EndFor
\State{\Return \textsc{ClassifierFn}(GraphModels)}
\EndFunction
\end{algorithmic}
\end{algorithm}
\end{minipage}
\hfill
\begin{minipage}[t]{2.5in}
\begin{algorithm}[H]
\caption{GCN Model \citep{kipf}}
\label{alg:gcn}
\begin{algorithmic}[1]
\Require{$\hat{A}$ is a normalization of $A$}
\Function{GcnModel}{$\hat{A}$, $X$, $L$}
\State {$Z \leftarrow X$}
\For{$i = 1 $ to $L$}
\State{$Z \leftarrow \sigma(\hat{A} Z W^{(i)})$ }
\EndFor
\State{\Return $Z$}
\EndFunction
\end{algorithmic}
\end{algorithm}
\end{minipage}
\hfill
\begin{minipage}[t]{2.5in}
\begin{algorithm}[H]
\caption{SAGE Model \citep{sage}}
\label{alg:sage}
\begin{algorithmic}[1]
\Require{$\hat{A}$ is a normalization of $A$}
\Function{SageModel}{$\hat{A}$, $X$, $L$}
\State {$Z \leftarrow X$}
\For{$i = 1 $ to $L$}
\State{$Z \leftarrow \sigma(\left[ \begin{array}{c;{2pt/2pt}c}
Z & \hat{A} Z
\end{array} \right] W^{(i)})$ }
\State{$Z \leftarrow \textsc{L2NormalizeRows}(Z)$ }
\EndFor
\State{\Return $Z$}
\EndFunction
\end{algorithmic}
\end{algorithm}
\end{minipage}
\hfill
\begin{minipage}[t]{2.5in}
\begin{algorithm}[H]
\caption{N-GCN}
\label{alg:ngcn}
\begin{algorithmic}[1]
\Function{Ngcn}{$A$, $X$, $L=2$}
\State{$D \leftarrow \textbf{diag}(A \mathbf{1}) $}
\Comment{Sum rows}
\State{$\hat{A} \leftarrow D^{-1/2}AD^{-1/2}$}
\State{\Return $\textsc{Network}(\textsc{GcnModel}, \hat{A}, X, L) $}
\EndFunction
\end{algorithmic}
\end{algorithm}
\end{minipage}
\hfill
\begin{minipage}[t]{2.5in}
\begin{algorithm}[H]
\caption{N-SAGE}
\label{alg:nsage}
\begin{algorithmic}[1]
\Function{Nsage}{$A$, $X$}
\State{$D \leftarrow \textbf{diag}(A \mathbf{1}) $}
\Comment{Sum rows}
\State{$\hat{A} \leftarrow D^{-1}A$}
\State{\Return $\textsc{Network}(\textsc{SageModel}, \hat{A}, X, 2)$}
\EndFunction
\end{algorithmic}
\end{algorithm}
\end{minipage}
\hfill
\end{figure*}
We can recover the original GCN \citep{kipf} and SAGE \citep{sage} algorithms, respectively, by using Algorithms \ref{alg:ngcn} (N-GCN) and \ref{alg:nsage} (N-SAGE) with $r=1$, $K=1$, an identity \textsc{ClassifierFn}, and modifying line 2 in Algorithm \ref{alg:nmodel} to $P \leftarrow \hat{A}$. Moreover, we can recover the original DCNN \citep{diffusion-cnn} by calling Algorithm \ref{alg:ngcn} with $L=1$, $r=1$, modifying line 3 to $\hat{A} \leftarrow D^{-1}A$, and keeping $K>1$, as their proposed model operates on the power series of the transition matrix, i.e. \textit{unmodified} random walks, like ours.
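For concreteness, the outer loop of Algorithm \ref{alg:nmodel} can be rendered in Python roughly as follows. This is an illustrative sketch (names are ours, not the released implementation); \texttt{graph\_model\_fn} stands in for \textsc{GraphModelFn}:

```python
import numpy as np

def network(graph_model_fn, A_hat, X, L, r=4, K=6):
    """Sketch of Algorithm 1: instantiate r graph models per power of A_hat,
    reusing P = A_hat^k across iterations.  Returns the list of model
    outputs, which a classifier sub-network would then combine."""
    P = np.eye(A_hat.shape[0])          # P starts at A_hat^0 = I
    outputs = []
    for _ in range(K):
        for _ in range(r):
            outputs.append(graph_model_fn(P, X, L))
        P = A_hat @ P                   # advance to the next power
    return outputs
```

Passing a one-layer linear model as \texttt{graph\_model\_fn} and dropping the inner loop recovers the DCNN-style power series described above.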
\section{Related Work}
\label{sec:related}
The field of graph learning algorithms is quickly evolving. We review work
most similar to ours.
Defferrard et al. \cite{fast-spectrals} define graph convolutions as a $K$-degree polynomial of the Laplacian, where the polynomial coefficients are learned. In their setup, the $K$-th power of the Laplacian is a sparse square matrix whose entry at $(i, j)$ is zero if nodes $i$ and $j$ are more than $K$ hops apart. Their sparsity analysis also applies here. A minor difference is the adjacency normalization: we use $\hat{A}$ whereas they use the Laplacian defined as $I-\hat{A}$. Raising $\hat{A}$ to power $K$ produces a square matrix whose entry $(i, j)$ is the probability of a random walker ending at node $i$ after $K$ steps from node $j$. The major difference is the order of random walk versus non-linearity. In particular, their model learns a linear combination of the $K$-degree polynomial terms and passes it through a classifier function $g$, as in $g(\sum_k q_k \widetilde{A}^k)$, while our (e.g. $\textrm{N-GCN}$) model calculates $\sum_k q_k g(\widetilde{A}^k)$, where $\widetilde{A}$ is $\hat{A}$ in our model and $I-\hat{A}$ in theirs, and our $g$ can be a GCN module. In fact, \cite{fast-spectrals} is also similar to the work of \cite{neuralwalk}, as both learn polynomial coefficients of some normalized adjacency matrix.
Atwood and Towsley \cite{diffusion-cnn} propose DCNN, which calculates powers of the transition matrix and keeps each power in a separate channel until the classification sub-network at the end. Their model is therefore similar to ours in that it also falls under $\sum_k q_k g(\widetilde{A}^k)$. However, where their model multiplies features with each power $\widetilde{A}^k$ once, our model makes use of GCNs \citep{kipf} that multiply by $\widetilde{A}^k$ at every GCN layer (see Eq. \ref{eq:kipflayer}). Thus, the DCNN model \citep{diffusion-cnn} is a special case of ours when the GCN module contains only one layer, as explained in Section \ref{sec:nmodel}.
\section{Introduction}
Content selection algorithms take data and other information as input, and -- given a user's properties and past behavior -- produce a personalized list of content to display \cite{goldfarb2011online,liu2010personalized}.
This personalization leads to higher utility and efficiency both for the platform, which can increase revenue by selling targeted advertisements, and also for the user, who sees content more directly related to their interests \cite{Forbes2017,farahat2012effective}.
However, it is now known that such personalization may result in propagating or even creating biases that can influence decisions and opinions.
Recently, field studies have shown that user opinions about political candidates can be manipulated by personalized rankings of search results \cite{Epstein2015}.
Concerns have also been raised about gender and racial inequality in serving personalized advertising \cite{datta2015automated, sweeney2013discrimination, farahat2012effective}.
Just in the US, over two-thirds of adults consume news online on social media sites \cite{Mitchell2015};
the impact of how social media personalizes content is immense.
One approach to eliminate such biases would be to hide certain user properties so that they cannot be used for personalization;
however, this could come at a loss of utility for both the user and the platform -- the content displayed would be less relevant, resulting in decreased attention from the user and less revenue for the platform
(see, e.g., \cite{sakulkar2016stochastic}).
\emph{Can we design personalization algorithms that allow us to be fair without a significant compromise in their utility?}
Here we focus on bandit-based personalization and introduce a rigorous algorithmic approach to this problem.
For concreteness we describe our approach for personalized news feeds, however it also applies to other personalization settings.
Here, content has different {\em types}, such as news stories that lean republican vs. democrat or ads for high-paying vs. low-paying jobs, and is classified into {\em groups}, often based on a single type or a combination of a small number of types or \emph{sensitive attributes}.
Users can also have different types/contexts and, hence, different preferences over groups, but for simplicity here we focus on the case of a single user.
Current personalization algorithms, at every time-step, select a piece of content for the user,\footnote{In order to create a complete feed, content is simply selected repeatedly to fill the screen as the user scrolls down. The formalization does not change and hence, for clarity, we describe the process of selecting a single piece of content.} and feedback is obtained in the form of whether they click on, purchase or hover over the item.
The goal of the content selection algorithm is to select content for each user in order to maximize the positive feedback (and hence revenue) received.
As this optimal selection is a-priori unknown, the process {is often} modeled as an online learning problem in which a probability distribution (from which one selects content) is maintained and updated according to the feedback \cite{pandey2006handling}.
Thus, as the content selection algorithm learns more about a user, the corresponding probability distributions tend to become sparse (i.e., concentrate their mass on a small subset of entries); the hypothesis is that this is what leads to extreme personalization, in which content feeds skew entirely toward a single type of content.
To counter this, we introduce a notion of \emph{online group fairness}, in which we require that the probability distribution from which content is sampled satisfies certain fairness constraints {\em at all time steps}; this ensures that the probability vectors do not become sparse (or specialize) to a \emph{single} group and thus the content shown to different types of users is not sparse across groups.
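Our framework leaves the constraint form general; as a purely illustrative instance (ours, not this paper's algorithm), consider per-group probability floors, i.e. requiring each group's total selection probability to stay above some level. One simple way to restore feasibility is to mix the current distribution with a strictly feasible reference distribution:

```python
def mix_to_feasible(p, group_of, lower):
    """p: selection distribution over k arms; group_of[a] names arm a's group;
    lower[g] is a floor on the total probability of group g (floors sum <= 1).
    Returns the smallest mixture (1-t)*p + t*u, where u is a strictly
    feasible reference distribution, that meets every group floor.
    Illustrative sketch only; names and the floor constraint are ours."""
    k = len(p)
    groups = set(group_of)
    slack = (1.0 - sum(lower.values())) / k
    size = {g: group_of.count(g) for g in groups}
    # Reference u spreads each group's floor over its arms, plus uniform slack.
    u = [lower[group_of[a]] / size[group_of[a]] + slack for a in range(k)]
    def mass(q, g):
        return sum(q[a] for a in range(k) if group_of[a] == g)
    t = 0.0
    for g in groups:
        pg, ug = mass(p, g), mass(u, g)
        if pg < lower[g]:
            t = max(t, (lower[g] - pg) / (ug - pg))  # smallest t fixing group g
    return [(1.0 - t) * pa + t * ua for pa, ua in zip(p, u)]
```

Because each group's mass is linear in the mixing weight $t$, the smallest $t$ that fixes the most-violated group fixes all of them, and the result is still a probability distribution.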
Subsequently, we present our model where we do not fix a notion of fairness, as this could depend on the application; instead, our framework allows for the specification of types or sensitive attributes, groups, and fairness constraints (in a similar spirit as \cite{yang2016measuring}).
Unlike previous work, our model can capture a general class of constraints; we give a few important examples showing how it can incorporate prevalent discrimination metrics.
At the same time, the constraints remain linear and allow us to leverage the bandit optimization framework.
While there are several polynomial time algorithms for this setting, the challenge is to come up with a {\em practical} algorithm for this constrained bandit setting.
Our main technical contribution is to show how an adaptation of an existing algorithm for the unconstrained bandit setting, along with the special structure of our constraints, can lead to a scalable algorithm (with provable guarantees) for the resulting computational problem of { maximizing revenue while satisfying fairness constraints}.
Finally, we experimentally evaluate our model and algorithms on both synthetic and real-world data sets.
{Our algorithms approach the theoretical optimum in the constrained setting on both synthetic and real-world data sets and \textsc{Constrained-$\varepsilon$-Greedy}\xspace is very fast in practice.
We study how guaranteeing a fixed amount of fairness with respect to standard fairness metrics (e.g., {\em Risk Difference}) affects revenue on the YOW Dataset \cite{zhang2005bayesian} and a synthetic dataset.
For instance, we observe that for ensuring that the risk difference is less than $1-x$ for $x < \nicefrac{1}{2}$, our algorithms lose roughly $20x\%$ in revenue.
Similarly, to satisfy the 80\% rule often used in legal rulings \cite{DisparateImpactBook}, we lose less than 5\% in revenue in our synthetic experiments.
Our results show that ensuring fairness is not necessarily at odds with maximizing revenue.}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.6\linewidth]{settingV2.png}
\caption{{The content selection algorithm decides what to show to the user based on feedback received from similar users. Different colors represent different types of content, e.g., news stories that lean republican vs. democrat.
Feedback could be past likes, purchases or follows.}}
\label{fig:setting}
\end{center}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=0.5\columnwidth]{biasV2.png}&
\label{fig:bias}
\includegraphics[width=0.5\columnwidth]{fairV2.png}\\
\label{fig:fair}
(a)&(b) \\
\end{tabular}
\caption{ (a) Existing algorithms can perpetuate systemic bias by presenting different types of content to different types of users. (b) Our proposed solution satisfies the fairness constraints and does not allow extreme differences in the types of content presented to different users. Different colors in the content represent different groups, e.g., ads for high vs low paying jobs. Our fair content selection algorithm does not permit extreme biases while personalizing content.}
\label{fig:bias_fair}
\end{figure}
\section{\bf Bandit Optimization and Personalization}
Algorithms for the general problem of {displaying content} to users largely fall within the multi-armed bandit framework, and \emph{stochastic contextual bandits} in particular \cite{BanditBook}.
At each time step $t = 1, \ldots, T$, a user views a {page} (e.g., Facebook, Twitter or Google News), the user's \emph{context} $s^t$ (that lies in a set $\mathcal S$) is given as input, and one {piece of content} (or \emph{arm}) $a^t \in [k]$ must be selected to be displayed.
A random \emph{reward} $r_{a^t, s^t}$ (hereafter denoted by $r_{a,s}^t$ for readability), which depends on both the given user context and the type of the selected {content}, is then received.
{This reward captures resulting clicks, purchases, or time spent viewing the given content and depends not only on the type of user $s$ (e.g., men may be more likely to click on a sports article) but also on the content $a$ itself (e.g., some news articles have higher quality or appeal than others).}
More formally, at each time step $t$, a sample
$(s^t, r_{1,s}^t, \ldots, r_{k,s}^t)$
is drawn from an unknown distribution $\mathcal D$, the context $s^t \in \mathcal S$ is revealed, the player (the content selection algorithm in this case) selects an arm $a \in [k]$ and receives reward $r_{a,s}^t \in [0,1]$.
As is standard in the literature, we assume that the $r_{a,s}$s are drawn independently across $a$ and $t$ (and not necessarily $s$).
The rewards $r_{a^\prime,s^\prime}$ for any $a^\prime \neq a$ and $s^\prime \neq s$ are assumed to be {\em unknown} -- indeed, there is no way to observe what a user's actions {would have been} had a different {piece of content} been displayed, or what a different user would have done.
The algorithm computes a probability distribution $p^{t}$ over the {arms} based on the previous observations $(s^1, a^1, r_{a,s}^1), \ldots, (s^{t-1},a^{t-1},r_{a,s}^{t-1})$ and the current user type $s^{t}$, and then selects {arm} $a^t \sim p^t$; as $p^t$ depends on the context $s^t$, we often write $p^t(s^t)$ for clarity.
The goal is to select $p^t(s^t)$s in order to maximize the cumulative rewards, and
the efficacy of such an algorithm is measured with respect to how well it minimizes \emph{regret} -- the difference between the algorithm's reward and the reward obtained from the (unknown) optimal policy.
Formally, let $f: \mathcal S \to [k]$ be a mapping from contexts to {arms}, and let $f^\star := \arg\max_{f} \mathbb E_{(s,\vec{r})\sim\mathcal D} [r_{f(s),s}];$ i.e., $f^\star$ is the policy that selects the best arm in expectation for each context.
Then, the regret is defined as
$ \mathsf{Regret}_T := T \cdot \mathbb E_{(s,\vec{r})\sim\mathcal D} [r_{f^\star(s),s}] - \sum_{t=1}^T r_{a,s}^t. $
Note that the regret is a random variable, as $a^t$ depends not only on the draws from $p^t$, but also on the realized \emph{history} of samples $ \{(s^t, a^t, r_{a,s}^t)\}_{t=1}^T$.
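To make the protocol concrete, the following is a minimal Python sketch of the interaction loop for a single fixed context, with hypothetical Bernoulli arm means; the (pseudo-)regret is computed against the best fixed arm.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(policy, mu, T):
    """Run a bandit policy for a single fixed context.

    policy -- maps the history [(arm, reward), ...] to a distribution over arms
    mu     -- hypothetical Bernoulli mean rewards, one per arm
    Returns the cumulative reward and the (pseudo-)regret w.r.t. the best arm.
    """
    k = len(mu)
    history, total = [], 0.0
    for t in range(T):
        p = policy(history)              # the distribution p^t
        a = rng.choice(k, p=p)           # play a^t ~ p^t
        r = float(rng.random() < mu[a])  # Bernoulli(mu_a) reward r^t
        history.append((a, r))
        total += r
    return total, T * max(mu) - total    # regret vs. the best-arm policy

uniform = lambda history: np.full(4, 0.25)
reward, reg = simulate(uniform, [0.28, 0.46, 0.64, 0.82], 2000)
```

A policy that learns from the history would replace `uniform`; the uniform baseline incurs regret linear in $T$.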
\section{Our Model}
We would like a model that can guarantee fairness with respect to sensitive attributes of content in the bandit framework used for personalization.
Guaranteeing such {\em group fairness} would involve controlling disproportionate representation across the sensitive attributes.
Towards defining our notion of group fairness,
let $G_1,$ $\ldots,$ $G_g \subseteq [k]$ be $g$ \emph{groups} of {content}.
{For instance, the $G_i$s could form a partition (e.g., ``republican-leaning'' news articles, ``democratic-leaning'' ones, and ``neutral'' ones).
An important feature of bandit algorithms, which ends up being the root cause of bias, is that the probability distribution converges to the action with the best expected reward for each context; i.e., the entire probability mass in each context ends up on a single group.
This leads to different users being shown very different groups of content,
which can be problematic when it results in outcomes such as only showing minimum-wage jobs to disenfranchised populations.}
In Section \ref{sec:metrics-app}, we show that many metrics for discrimination or bias boil down to quantifying how different these probability distributions across groups can be.
Thus, finding a mechanism to control these probability distributions would ensure fairness with respect to many metrics.
\begin{table}
\centering
\begin{tabular}{ |c|c|c| }
\hline
\textbf{Algorithm} & \textbf{Per-iteration running time} & \textbf{Regret bound}\\
\hline
\textsc{Confidence-Ball$_2$}\xspace \cite{dani2008stochastic} & NP-Hard problem & $O\left(\frac{k^2}{\gamma} \log^3 T\right)$ \\
\hline
\textsc{OFUL}\xspace \cite{YA2011} & NP-Hard problem & $\tilde{O}\left(\frac{1}{\gamma}\left(k^2+\log^2T\right)\right)$\\
\hline
\textsc{Confidence-Ball$_1$}\xspace \cite{dani2008stochastic} & $O\left(k^\omega\right) + 2k$ LPs & $O\left(\frac{k^3}{\gamma} \log^3 T\right)$\\
\hline
\textsc{$L_1$-OFUL}\xspace (Algorithm \ref{algo:foful}) & $O\left(k^\omega\right) + 2k$ LPs & $\tilde{O}\left(\frac{k}{\gamma}\left(k^2+\log^2T\right)\right)$ \\
\hline
\textsc{Constrained-$\varepsilon$-Greedy}\xspace (Algorithm \ref{algo:epsgreedy}) & $O\left(1\right) + 1$ LP & $O\left(\frac{k}{\gamma^2} \log T\right)$ \\
\hline
\end{tabular}
\caption{The complexity and problem-dependent regret bounds for various algorithms when the decision set is a polytope. }
\label{table:comparison}
\end{table}
This motivates our definition of \emph{group fairness constraints}.
For each group $G_i$, let $\ell_i$ be a lower bound and $u_i$ be an upper bound on the amount of probability mass that the content selection algorithm can place on this group. Formally, we impose
the following constraints:
\begin{equation}
\label{eq:fair}
\ell_i \leq \sum_{a \in G_i} p^t_a(s) \leq u_i \;\;\;\; \forall i \in [g], \forall t \in [T], \forall s \in \mathcal S.
\end{equation}
The bounds $\ell_i$s and $u_i$s provide a handle with which we can ensure that the probability mass placed on any given group is neither too high nor too low at each time step.
Rather than fixing the values of $u_i$s and $\ell_i$s, we allow them to be specified as input.
This allows one to control the extent of group fairness depending on the application, and hence (indirectly) encode bounds on a wide variety of existing fairness metrics.
This typically involves translating the fairness metric parameters into concrete values of $\ell_i$s and $u_i$s; see Section \ref{sec:metrics-app} for several examples.
For instance, given $\beta>0$, by setting $u_i$s and $\ell_i$s such that $u_i-\ell_i\leq \beta$ for all $i$, we can ensure that, what is referred to as, the {\em risk difference} is bounded by $\beta$.
An additional feature of our model is that no matter what the group structures, or the lower and upper bounds are, the constraints are always convex.
Importantly, note that unlike ignoring user contexts entirely, the constraints still allow for personalization \emph{across} groups.
For instance, if the groups are republican (R) vs democrat (D) articles, and the user contexts are known republicans (r) or democrats (d), we may require that $p^t_{\mbox{R}}(\cdot) \leq 0.75$ and $p^t_{\mbox{D}}(\cdot) \leq 0.75$ for all $t$.
This ensures that extreme polarization cannot occur -- at least 25\% of the articles a republican is presented with will be democrat-leaning.
Despite these constraints, personalization at the group level can still occur, e.g., by letting $p_{\mbox{R}}^t(\mbox{r}) = 0.75$ and $p_{\mbox{R}}^t(\mbox{d}) = 0.25$.
Furthermore, this framework allows for complete personalization \emph{within} a group; e.g., the republican-leaning articles shown to republicans and democrats may differ.
This is crucial, as the utility-maximizing republican articles for a republican may differ from those for a democrat.
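As a minimal sketch, the republican/democrat example above can be checked programmatically; the group structure and the bounds below mirror the $u_{\mbox{R}} = u_{\mbox{D}} = 0.75$ example, while the specific distributions are illustrative values.

```python
def satisfies_constraints(p, groups, lower, upper, tol=1e-9):
    """Check the group fairness constraints: l_i <= sum_{a in G_i} p_a <= u_i."""
    return all(l - tol <= sum(p[a] for a in G) <= u + tol
               for G, l, u in zip(groups, lower, upper))

# Arms 0-1 are republican-leaning articles, arms 2-3 democrat-leaning;
# u_R = u_D = 0.75, which with two groups implies l = 0.25 for each group.
groups, lower, upper = [[0, 1], [2, 3]], [0.25, 0.25], [0.75, 0.75]

assert satisfies_constraints([0.50, 0.25, 0.15, 0.10], groups, lower, upper)
assert not satisfies_constraints([0.90, 0.05, 0.03, 0.02], groups, lower, upper)
```

The first distribution personalizes toward republican content (0.75 group mass) without exceeding the cap; the second is the kind of extreme polarization the constraints rule out.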
To the best of our knowledge, such fairness constraints are novel for personalization.
Here the constraints allow us to address the concerns illustrated in our motivating examples.
The next question we address is how to measure an algorithm's performance against the best \emph{fair} solution.\footnote{The unconstrained regret may be arbitrarily bad, e.g., if $u_i = \varepsilon \ll 1$ for the group $i$ that contains the arm with the best reward.}
We say that a probability distribution $p$ on $[k]$ is \emph{fair} if it satisfies the upper and lower bound constraints in \eqref{eq:fair}, and let $\mathcal C$ be the set of all such probability distributions. Note that given the linear nature of the constraints, the set $\cC$ is a polytope.
Let $\mathcal B$ be the set of functions $g: \mathcal S \to [0,1]^k$ such that $g(s) \in \mathcal C$; i.e., all $g \in \mathcal B$ satisfy the fairness constraints.
Further, we let
$g^\star := \arg\max_{g \in \mathcal B} \mathbb E_{(s,\vec{r})\sim\mathcal D} [r_{g(s),s}];$ i.e., $g^\star$ is the fair policy that maximizes the expected reward for each context.
An algorithm is said to be fair if it only selects $p^t(s^t) \in \mathcal C$.
Thus, the \emph{fair regret} for such an algorithm can be defined as
$ \mathsf{FairRegret}_T := T \cdot \mathbb E_{(s,\vec{r})\sim\mathcal D} [r_{g^\star(s),s}] - \sum_{t=1}^T r_{a^t,s^t}.$
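When the means are known, $g^\star(s)$ can be computed for each context by solving one linear program over $\mathcal C$; below is a sketch using \texttt{scipy.optimize.linprog}, with hypothetical per-arm means and illustrative group bounds.

```python
import numpy as np
from scipy.optimize import linprog

def best_fair_policy(mu, groups, lower, upper):
    """Compute g*(s) for one context: argmax_{p in C} mu^T p as a linear
    program (linprog minimizes, so the objective is negated)."""
    k = len(mu)
    A = np.zeros((len(groups), k))
    for i, G in enumerate(groups):
        A[i, G] = 1.0                     # row i sums the mass on group G_i
    A_ub = np.vstack([A, -A])             # encodes l_i <= A p <= u_i
    b_ub = np.concatenate([np.asarray(upper), -np.asarray(lower)])
    res = linprog(-np.asarray(mu), A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, k)), b_eq=[1.0], bounds=(0, None))
    return res.x

mu = [0.28, 0.46, 0.64, 0.82]             # hypothetical per-arm means
p = best_fair_policy(mu, [[0, 1], [2, 3]], [0.25, 0.25], [0.75, 0.75])
```

With these means, the optimum places the maximum allowed mass, 0.75, on the best arm (arm 3) and the remaining 0.25 on the best arm of the other group (arm 1).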
\section{Other Related Work}
Studies considering notions of group fairness such as {statistical parity}, {disparate impact}, and others mentioned above (see, e.g., \cite{kamiran2009classifying, feldman2015certifying}) apply these metrics to the \emph{offline} problem; in our setting this would correspond to enforcing $p^T(s)$ to be roughly the same for all $s$, but would leave the intermediary $p^t(s)$ for $t < T$ unrestricted.
A subtle point is that most notions of (offline) group fairness primarily consider the selection or classification of groups of \emph{users}; in our context, while the goal is still fairness towards users, the selection is over \emph{content}.
However, as the constraints necessary to attain fairness remain on the selection process, we use the terminology \emph{online group fairness} to highlight these parallels.
In a completely different bandit setting, a recent work \cite{joseph2016fairness} defined a notion of \emph{online individual fairness} which, in our language, restricts $p^t$s so that all \emph{arms} are treated equally by only allowing the probability of one arm to be more than another if we are reasonably certain that it is better than the other.
When the arms correspond to \emph{users} and not content such individual fairness is indeed important, but for the personalized setting the requirement is both too strong (we are only concerned with \emph{groups} of content) and too weak (it still allows for convergence to different groups for different contexts).
Other constrained bandit settings that encode \emph{global} knapsack-like constraints (and locally place no restriction on $p^t$) have also been considered; see, e.g., \cite{agrawal2016linear}.
Our two fair bandit algorithms build on the classic work of \cite{auer2002finite} and works on linear bandit optimization over convex sets \cite{dani2008stochastic,YA2011}.
\section{Algorithmic Results}
We present two algorithms that attempt to minimize regret in the presence of fairness constraints.
Recall that our fairness constraints are linear and the reward function is linear.
For each arm $a \in [k]$ and each context $s \in \mathcal{S}$, let its mean reward be $\mu^\star_{a,s}$.
Our algorithms do not assume any relation between different contexts and, hence, function independently for each context.
Thus, we describe them for the case of a single fixed context.
In this case, the unknown parameters are the expectations of each arm $\mu^\star_a$ for $a \in [k]$.
We assume that the reward for the $t$-th time step is sampled from a Bernoulli distribution with probability of success $\mu^\star_{a^t}.$
In fact, the reward can be sampled from any bounded distribution with the above mean; we explain this further in Section \ref{sec:proofs_fofulreg}.
For this setting, we present two different algorithms, \textsc{$L_1$-OFUL}\xspace (Algorithm \ref{algo:foful}) and \textsc{Constrained-$\varepsilon$-Greedy}\xspace (Algorithm \ref{algo:epsgreedy}): the first has a better regret bound, and the second a better running time. The latter fact, as we discuss a bit later, is due to the special structure arising in our model when compared to the general linear bandit setting.
\begin{theorem}
Given the description of $\mathcal{C}$ and the sequence of rewards, the \textsc{$L_1$-OFUL}\xspace algorithm, run for $T$ iterations, has the following fair regret bound for each context $s \in \cS$:
$\mathbb{E}\left[{\sf FairRegret}_T\right]=O\left(\frac{k}{\gamma} \left(\log^2T + k\log T + k^2\log\log T \right)\right),$
where the expectation is taken over the histories and $a^t \sim p^t$, and $\gamma$ depends on $(\mu^\star_{a})_{a \in [k]}$ and $\mathcal{C}$ as defined in \eqref{eq:gamma}.
\label{theorem:fofulreg}
\end{theorem}
\begin{theorem} \label{thm:epsgreedy}
Given the description of $\cC$, a fair probability distribution $q_f \in \left\{q : B_{\infty}(q,\eta) \subset \mathcal{C} \right\}$,
and the sequence of rewards, the $\textsc{Constrained-$\varepsilon$-Greedy}\xspace$ algorithm, run for $T$ iterations, has the following fair regret bound for each context $s \in \mathcal{S}$:
$\mathbb{E}\left[{\sf FairRegret}_T\right] = \nonumber O
\left( \frac{\log T}{\eta \gamma^2} \right),
$
where $\epsilon_t = \min\{1,\nicefrac{4}{(\eta d^2t)}\}$ and $d = \min\{\gamma, \nicefrac{1}{2}\}$.\footnote{$B_\infty(q,\eta)$ is an $\ell_\infty$-ball of radius $\eta$ centered at $q$. The algorithm works for any lower bound $L$ on $\gamma$, with $L$ in place of $\gamma$ in the regret bound.}
\label{theorem:epsgreedy}
\end{theorem}
\noindent
The quantity $\gamma$ is the difference between the maximum and the second maximum of the expected reward with respect to the $\mu^\star$s over the vertices of the polytope $\mathcal{C}$.
Formally, let $V(\mathcal{C})$ denote the set of vertices of $\mathcal{C}$ and
$v^\star := \arg\max_{v \in V(\mathcal{C})} \sum_{a \in [k]} \mu^\star_a v_a.$ Then,
\begin{equation}\label{eq:gamma}
\gamma:= \sum_{a \in [k]} \mu^\star_a v_a^\star - \max_{v \in V(\mathcal{C}) \setminus \{v^\star\}} \sum_{a \in [k]} \mu^\star_a v_a.
\end{equation}
For general convex sets, $\gamma$ can be $0$, in which case the best achievable regret bound scales as $\sqrt{T}$ \cite{dani2008stochastic}.
Since our fairness constraints make $\mathcal{C}$ a polytope, $\gamma$ is non-zero unless there are degeneracies.
In general, $\gamma$ may be hard to estimate theoretically.
However, for the settings in which we conduct our experiments, we observe that the value of $\gamma$ is reasonably large.
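As a small sketch, $\gamma$ can be computed by enumerating the vertices of $\mathcal{C}$; the instance below, with two groups of two arms and group masses constrained to $[0.25, 0.75]$, uses illustrative means.

```python
import numpy as np

def gamma_gap(mu, vertices):
    """The gap gamma: best minus second-best expected reward over the
    vertices of the polytope C (cf. the definition of gamma in the text)."""
    vals = sorted((float(np.dot(mu, v)) for v in vertices), reverse=True)
    return vals[0] - vals[1]

# Vertices of C for groups {0,1} and {2,3} with group mass in [0.25, 0.75]:
# choose the mass split, then put each group's mass on a single arm.
vertices = []
for m in (0.25, 0.75):        # mass on group {0,1}; group {2,3} gets 1 - m
    for a in (0, 1):
        for b in (2, 3):
            v = [0.0] * 4
            v[a], v[b] = m, 1.0 - m
            vertices.append(v)

mu = [0.28, 0.46, 0.64, 0.82]
g = gamma_gap(mu, vertices)   # best 0.73 vs. second-best 0.685
```

Here the best vertex puts 0.75 on arm 3 and 0.25 on arm 1, and the runner-up swaps arm 1 for arm 0, giving $\gamma = 0.25\,(0.46-0.28) = 0.045$.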
In traditional multi-armed bandit algorithms, when the probability space is unconstrained, it suffices to solve $\argmax_{i \in [k]} \tilde{\mu}_i$, where $\tilde{\mu}_i$ is an estimate of the mean reward of the $i$-th arm.
It can be an optimistic estimate of the arm mean in the case of the UCB algorithm \cite{auer2002finite}, a sample drawn from a normal distribution centered at the empirical mean in the case of Thompson Sampling \cite{agrawal2012analysis}, etc.
When the probability distribution is constrained to lie in a polytope $\cC$, instead of a maximum over the arm mean estimates, we need to solve $\argmax_{p \in \cC} \tilde{\mu}^\top p$.
This necessitates the use of a linear program for any algorithm operating in this fashion.
At every iteration, \textsc{$L_1$-OFUL}\xspace solves $2k$ LPs, and \textsc{Constrained-$\varepsilon$-Greedy}\xspace solves one LP.
\textsc{Constrained-$\varepsilon$-Greedy}\xspace thus offers major improvements in running time over \textsc{$L_1$-OFUL}\xspace.
The regret bound of Algorithm \ref{algo:epsgreedy} has better dependence on $k$ and $T$, but is worse by a factor of $\nicefrac{1}{\gamma}$, as compared to Algorithm \ref{algo:foful}; see {Table \ref{table:comparison}}.
We now give overviews of both algorithms.
The full proofs of Theorems \ref{theorem:fofulreg} and \ref{theorem:epsgreedy} appear in Sections \ref{sec:proofs_fofulreg} and \ref{sec:proofs_epsgreedy} respectively.
\begin{figure}
\begin{algorithm}[H]
\caption{ \textsc{$L_1$-OFUL}\xspace}
\label{algo:foful}
\begin{algorithmic}[1]
{
\REQUIRE Constraint set $\cC$, maximum failure probability $\delta$, an $L_2$-norm bound on $\mu^\star$: $\norm{\mu^\star}_2 \leq \sigma$ and a positive integer $T$
\STATE Initialize $V_1 := \cI,$ $\hat{\mu}_1 := 0,$ and
$b_1 := 0$
\FOR{$t = 1, \ldots, T$}
\STATE Compute {${\beta_t(\delta)} := \left(\sqrt{2\log\left(\frac{\det(V_t)^{\frac{1}{2}}}{\delta}\right)} + \sigma \right)^2$}
\STATE Denote {$B_t^1 := \left\{\mu : \norm{\mu - \hat{\mu}_t}_{1, V_t} \leq \sqrt{k\beta_t(\delta)}\right\}$}
\STATE Compute {$p^t := \argmax_{p \in \cC} \max_{\mu \in B_t^1} \mu^\top p$}
\STATE Sample $a$ from the probability distribution $p^t$
\STATE {Observe reward $r_t = r^t_{a}$}
\STATE Update {$V_{t+1} := V_t + p^t{p^t}^\top$ }
\STATE Update {$b_{t+1} := b_t + r_tp^t$}
\STATE Update {$\hat{\mu}_{t+1} := V_{t+1}^{-1}b_{t+1}$}
\ENDFOR
}
\end{algorithmic}
\end{algorithm}
\end{figure}
\paragraph{\textsc{$\boldsymbol{L_1}$-OFUL}\xspace.}
At any given time $t$, \textsc{$L_1$-OFUL}\xspace maintains a regularized least-squares estimate for the optimal reward vector $\mu^\star$, which is denoted by $\hat{\mu}_t$.
At each time step $t$, the algorithm first constructs a suitable confidence set $B_t^1$ around $\hat{\mu}_t$.
Roughly, the definition of this set ensures that the confidence ball is ``flatter'' in the directions already explored by the algorithm, making it more likely to pick a probability vector from unexplored directions.
The algorithm chooses a probability distribution $p^t$ by solving a linear program on each of the $2k$ vertices of this confidence set, and plays an arm $a^t \sim p^t$.
Recall that for each arm $a\in[k]$, the mean reward is $\mu^\star_a\in[0,1]$.
The reward for each time step is generated as $r_t\sim\text{Bernoulli}(\mu^\star_{a^t})$, where $a^t \sim p^t$ is the arm the algorithm chooses at the $t^{th}$ time instant.
The algorithm observes this reward and updates its estimate to $\hat{\mu}_{t+1}$ for the next time-step appropriately.
\textsc{$L_1$-OFUL}\xspace (Algorithm \ref{algo:foful}) is an adaptation of the \textsc{OFUL}\xspace algorithm that appeared in \cite{YA2011}.
The key difference is that instead of using a scaled $L_2$-ball in each iteration, we use a scaled $L_1$-ball (Step 4 in Algorithm \ref{algo:foful}).
As we explain below, this makes Step 5 of our algorithm efficient, as opposed to that of \cite{YA2011}, where the equivalent step required solving an NP-hard, nonconvex optimization problem.
This idea is similar to how \textsc{Confidence-Ball$_2$}\xspace was adapted to \textsc{Confidence-Ball$_1$}\xspace in \cite{dani2008stochastic}.
In particular, our algorithm improves, by a multiplicative factor of $O\left(\log T\right)$, the $O\left(\frac{k^3}{\gamma} \log^3 T\right)$ regret bound of \textsc{Confidence-Ball$_1$}\xspace in \cite{dani2008stochastic}; see Table \ref{table:comparison}.
We show that the \textsc{$L_1$-OFUL}\xspace algorithm can be implemented in time polynomial in $k$ at each iteration.
Apart from Step 4, where we need to find
$\argmax_{p \in \cC}\max_{\mu \in B_t^1}\mu^\top p,
$
the other steps are quite easy to implement efficiently.
For Step 4, we assume oracle access to a linear programming algorithm which can efficiently (in $\mathrm{poly}(k)$ time) compute $\argmax_{p \in \cC}\mu^\top p$, where $\mu$ is the input to the oracle.
We first change the order of the maximization, i.e., $\max_{p \in \cC} \max_{\mu \in B_t^1} \mu^\top p = \max_{\mu \in B_t^1} \max_{p \in \cC} \mu^\top p$.
Using the linear programming oracle, we can solve the inner maximization problem of finding $\max_{p \in \cC} \mu^\top p$ for any given value of $\mu$.
It is enough to solve this at the $2k$ vertices of $B_t^1$ and take the maximum of these values as $\max_{p \in \cC} \max_{\mu \in B_t^1}\mu^\top p$, since the maximum is attained at one of these $2k$ vertices.
The value of $p$ corresponding to this maximum value would be the required value $\argmax_{p \in \cC}\max_{\mu \in B_t^1}\mu^\top p$.
Thus, in $2k$ calls to this oracle, we can find the desired probability distribution $p^t$.
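A sketch of this Step-4 computation follows, under the assumption that $\norm{x}_{1,V} = \norm{V^{1/2}x}_1$, so that the ball's vertices are $\hat{\mu}_t \pm \sqrt{k\beta_t}\,V_t^{-1/2}e_i$; the LP oracle is passed in as a callable, and the toy oracle below (for $\cC$ the full simplex) is illustrative.

```python
import numpy as np

def step4_argmax(mu_hat, V, beta, solve_lp):
    """Maximize mu^T p jointly over p in C and mu in the L1 confidence ball
    by enumerating the ball's 2k vertices and calling the LP oracle on each.

    Assumes ||x||_{1,V} = ||V^{1/2} x||_1, so the vertices are
    mu_hat +/- sqrt(k * beta) * V^{-1/2} e_i (an assumption of this sketch).
    """
    k = len(mu_hat)
    w, Q = np.linalg.eigh(V)                 # V is symmetric positive definite
    V_inv_sqrt = Q @ np.diag(w ** -0.5) @ Q.T
    radius = np.sqrt(k * beta)
    best_val, best_p = -np.inf, None
    for i in range(k):
        for sign in (1.0, -1.0):
            mu = mu_hat + sign * radius * V_inv_sqrt[:, i]
            p = solve_lp(mu)                 # oracle: argmax_{p in C} mu^T p
            val = float(mu @ p)
            if val > best_val:
                best_val, best_p = val, p
    return best_p

# Toy oracle for C = the probability simplex: the LP optimum is a one-hot vector.
solve_lp = lambda mu: np.eye(len(mu))[np.argmax(mu)]
p_best = step4_argmax(np.array([0.5, 0.1]), np.eye(2), 0.01, solve_lp)
```

Any polytope $\cC$ can be substituted by swapping in an LP solver for `solve_lp`.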
Note that in order to find the values of $\det V_{t+1}$ and $V_{t+1}^{-1}$ in Steps 3 and 10 of the algorithm, we can perform rank-one updates using the well-known Sherman-Morrison formula, which brings down the complexity of these steps from $O(k^3)$ to $O(k^2)$, since we already know $\det V_{t}$ and $V_{t}^{-1}$.
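These rank-one updates can be sketched as follows; the check against direct recomputation is illustrative.

```python
import numpy as np

def rank_one_update(V_inv, det_V, p):
    """Sherman-Morrison update of V^{-1} and det(V) after V <- V + p p^T,
    in O(k^2) instead of the O(k^3) cost of refactorizing."""
    Vp = V_inv @ p
    denom = 1.0 + p @ Vp                     # 1 + p^T V^{-1} p
    V_inv_new = V_inv - np.outer(Vp, Vp) / denom
    det_new = det_V * denom                  # matrix determinant lemma
    return V_inv_new, det_new

rng = np.random.default_rng(1)
V = np.eye(3)
p = rng.random(3)
V_inv, det_V = rank_one_update(np.linalg.inv(V), np.linalg.det(V), p)

# Agrees with direct recomputation of the inverse and determinant:
assert np.allclose(V_inv, np.linalg.inv(V + np.outer(p, p)))
assert np.isclose(det_V, np.linalg.det(V + np.outer(p, p)))
```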
\begin{figure}
\begin{algorithm}[H]
\caption{ \textsc{Constrained-$\varepsilon$-Greedy}\xspace}
\label{algo:epsgreedy}
\begin{algorithmic}[1]
{
\REQUIRE Constraint set $\cC$, a fair probability distribution $q_f \in \left\{q : B_{\infty}(q,\eta) \subset \mathcal{C} \right\}$, a positive integer $T$, a constant $L$ that controls the exploration
\STATE Initialize $\bar{\mu}_1 := 0$
\FOR{$t = 1, \ldots, T$}
\STATE Update $\epsilon_t := \min\{1,\nicefrac{4}{(\eta L^2t)}\}$
\STATE Compute {$p^t := \argmax_{p \in \cC} \bar{\mu}_t^\top p$}
\STATE Sample $a$ from the probability distribution $(1-\epsilon_t)p^t + \epsilon_t q_f$
\STATE {Observe reward $r_t = r^t_{a}$}
\STATE Update empirical mean {$\bar{\mu}_{t+1}$}
\ENDFOR
}
\end{algorithmic}
\end{algorithm}
\end{figure}
\paragraph{\textsc{Constrained-$\boldsymbol{\varepsilon}$-Greedy}\xspace.}
In contrast to \textsc{$L_1$-OFUL}\xspace, instead of a least-squares estimate of the optimal reward vector $\mu^\star$, {\textsc{Constrained-$\varepsilon$-Greedy}\xspace} maintains an empirical mean estimate of it, denoted by $\bar{\mu}_t$.
With probability $1-\epsilon_t$, the algorithm plays an arm drawn from $p^t = \argmax_{p \in \cC}\bar{\mu}_t^\top p$, and with probability $\epsilon_t$ it plays an arm drawn from a feasible fair distribution $q_f \in \mathcal{C}$ in the $\eta$-interior; equivalently, $a^t$ is sampled from the mixture $(1-\epsilon_t)p^t + \epsilon_t q_f$.
The reward for each time step is generated as $r_t\sim\text{Bernoulli}(\mu^\star_{a^t})$.
The algorithm observes this reward and updates its estimate to $\bar{\mu}_{t+1}$ for the next time-step appropriately.
Maintaining an empirical mean estimate instead of a least-squares estimate, and solving only one linear program instead of $2k$ linear programs at every iteration causes the main decrease in running time compared to \textsc{$L_1$-OFUL}\xspace.
\textsc{Constrained-$\varepsilon$-Greedy}\xspace is a variant of the classical $\epsilon$-Greedy approach \cite{auer2002finite}.
Recall that in our setting, an arm is a piece of content (a corner of the $k$-dimensional simplex) and not a vertex of the polytope $\mathcal{C}$.
The polytope $\mathcal{C}$ sits inside this simplex and may have exponentially many vertices.
This is not the case in the setting of \cite{dani2008stochastic,YA2011} -- there may not be any ambient simplex in which their polytope sits, and even if there is, they do not use the additional information about which vertex of the simplex was chosen at each time $t$.
Thus, while they are forced to maintain confidence intervals of rewards for all points in $\mathcal{C}$, this special structure in our model allows us to get away with maintaining confidence intervals only for the $k$ arms (vertices of the simplex) and then use these intervals to obtain a confidence interval for any point in $\mathcal{C}$.
Similar to $\epsilon$-Greedy, if we choose each arm enough times, we can build a good confidence interval around the mean reward of each arm.
The difference is that instead of converging to the optimal arm, the constraints keep the iterate inside $\mathcal{C}$, and it converges to a vertex of $\mathcal{C}$.
To ensure that we choose each arm with high probability, we fix a fair point $q_f \in \eta$-interior of $\mathcal{C}$ and sample from the point $(1-\eps)p^t + \eps q_f$.
Then, as in $\epsilon$-Greedy, we bound the regret by showing that once the confidence intervals are tight enough, the LP with the true mean $\mu^\star$ and the LP with the empirical mean $\bar{\mu}$ have the same optimizer.
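Below is a simplified, self-contained sketch of \textsc{Constrained-$\varepsilon$-Greedy}\xspace for a single context, with hypothetical means and a partitioned group structure (so the LP step reduces to a greedy routine); the values of $\eta$, $L$, and the uniform $q_f$ are illustrative, not tuned.

```python
import numpy as np

rng = np.random.default_rng(7)

def fair_argmax(mu_hat, groups, lower, upper):
    # argmax_{p in C} mu_hat^T p via the greedy routine valid when the
    # groups partition [k]: meet each group's lower bound on its best arm,
    # then pour the leftover mass into groups by their best arm's estimate.
    p = np.zeros(len(mu_hat))
    best = [max(G, key=lambda a: mu_hat[a]) for G in groups]
    for i in range(len(groups)):
        p[best[i]] = lower[i]
    rem = 1.0 - sum(lower)               # assumes sum(lower) <= 1 (feasible C)
    for i in sorted(range(len(groups)), key=lambda i: -mu_hat[best[i]]):
        extra = min(rem, upper[i] - lower[i])
        p[best[i]] += extra
        rem -= extra
    return p

def constrained_eps_greedy(mu_true, groups, lower, upper, q_f, eta, L, T):
    """Sketch of the algorithm: empirical means, one LP (here: greedy) per
    step, and an eps_t-mixture with the interior point q_f for exploration."""
    k = len(mu_true)
    counts, sums = np.ones(k), np.zeros(k)   # counts start at 1 to avoid 0/0
    total = 0.0
    for t in range(1, T + 1):
        eps = min(1.0, 4.0 / (eta * L ** 2 * t))
        p = fair_argmax(sums / counts, groups, lower, upper)
        mix = (1 - eps) * p + eps * np.asarray(q_f)
        a = rng.choice(k, p=mix)             # a^t ~ (1-eps) p^t + eps q_f
        r = float(rng.random() < mu_true[a]) # Bernoulli reward
        counts[a] += 1.0
        sums[a] += r
        total += r
    return total / T, p

# Illustrative run: two groups with group mass constrained to [0.25, 0.75].
avg, p_final = constrained_eps_greedy(
    [0.28, 0.46, 0.64, 0.82], [[0, 1], [2, 3]],
    [0.25, 0.25], [0.75, 0.75], [0.25] * 4, eta=0.1, L=0.5, T=4000)
```

Since the average fair-optimal reward here is $0.73$, the realized average reward approaches it as exploration decays, and every $p^t$ played satisfies the constraints by construction.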
\textbf{Solving the LP.} In both \textsc{$L_1$-OFUL}\xspace and \textsc{Constrained-$\varepsilon$-Greedy}\xspace, if the ``groups'' in the constraint set form a partition, one can solve the linear program in $O(k)$ time via a simple greedy algorithm.
This is because, since the parts are disjoint, a simple greedy process can decide how much probability mass goes to each group, and once that is decided, distributing the mass within each group is also trivially greedy.
This extends to a {\em laminar} family of constraints, for which we can solve the LP step exactly in $O(gk)$ time.
We provide more details in Section \ref{sec:laminar}.
For general group structures, given that the constraints are of packing/covering type, we believe that the algorithm of \cite{AllenZhuOrecchia} may be useful to obtain a fast and approximate LP solver.
\begin{figure*}[t!]
\centering
\begin{tabular}{ccc}
{\includegraphics[width=0.45\textwidth]{synthetic_lowerbound_varied_1000.png}}&
{\includegraphics[width=0.45\textwidth]{synthetic_alpha_varied.png}}&{\includegraphics[width=0.1\textwidth,trim = 0cm -4cm 0cm 0cm]{legend_ICML.JPG}}
\\
$\qquad \; \;$(a) & $\qquad \;\;$(b)
\end{tabular}
\caption{{Empirical results on Synthetic Data.} Depicts the normalized cumulative reward of the algorithms we consider
(a) as we vary the lower bound constraints $\ell_1 = \ell_2 = \ell$, and (b) as we vary the reward $\alpha$ lost when presenting content from a group the user does not prefer.
}
\label{fig:hm}
\end{figure*}
\section{Empirical Results}
\label{sec:experiments}
In this section we describe experimental evaluations of our algorithms when run on various constraint settings of our model on rewards derived both synthetically and from real-world data.
In each experiment we report the normalized cumulative reward for each of the following algorithms and benchmarks:
\begin{itemize}[leftmargin=*]
\item {\textsc{Naive}\xspace }. As a baseline, we consider a simple algorithm that satisfies the constraints as follows: For each group $i$, with probability $\ell_i$ it selects an arm uniformly at random from $G_i$, then, with any remaining probability, it selects an arm uniformly at random from the entire collection $[k]$ while respecting the upper bound constraints $u_i$.
\item {\textsc{Unc}\xspace}. For comparison, we also depict the performance of the \emph{unconstrained} \textsc{$L_1$-OFUL}\xspace algorithm (in which we let $\mathcal{C}$ be the set of {\em all} probability distributions over $[k]$).
\item {\textsc{Ran}\xspace}. At each time step, given the probability distribution $p^t$ specified by the unconstrained \textsc{$L_1$-OFUL}\xspace algorithm, take the largest $\theta \in [0,1]$ such that selecting an arm with probability $\theta \cdot p^t$ does not violate the fairness constraints. With the remaining probability $(1-\theta)$ it follows the same procedure as in \textsc{Naive}\xspace to select an arm at \emph{random} subject to the fairness constraints.
\item {\textsc{Fair-OFUL}\xspace}. Our implementation of \textsc{$L_1$-OFUL}\xspace with the given fairness constraints as input.
\item {\textsc{Fair-EPS}\xspace }. Our implementation of \textsc{Constrained-$\varepsilon$-Greedy}\xspace with the given fairness constraints as input\footnote{We set $\epsilon_t = \min(1, \nicefrac{10}{t})$. Tuning $\epsilon_t$ might give better results, depending on the dataset used.}.
\item {\textsc{Opt}\xspace}. For comparison, we often depict the performance of the hypothetical \emph{optimal} probability distribution, subject to the fairness constraints, that we could have used had we known the reward vector {$\mu^\star$} for the arms a priori. Note that this optimal distribution is easy to compute via a simple greedy algorithm; it simply places the most probability mass that satisfies the constraints on the best arm, the most remaining probability mass on the second-best arm subject to the constraints, and so on until the entire probability mass is exhausted.
\end{itemize}
Note that \textsc{Fair-OFUL}\xspace, \textsc{Ran}\xspace, and \textsc{Unc}\xspace all use \textsc{$L_1$-OFUL}\xspace as a subroutine; however, only \textsc{Fair-OFUL}\xspace and \textsc{Fair-EPS}\xspace take the constraints as input, while \textsc{Ran}\xspace satisfies the fairness constraints via a different approach and \textsc{Unc}\xspace does not satisfy the fairness constraints at all.
\subsection{Experiments on Synthetic Data}
\paragraph{Synthetic Data.}
We first consider a simple synthetic arm model in order to illustrate the tradeoffs between fairness, rewards, and arm structure.
We consider 2 groups, each containing 4 arms that give Bernoulli rewards. When a user prefers one group over the other, we decrease the rewards of the non-preferred group by a fixed value $\alpha$.
In the experiments, we let the means of the rewards be $\left[0.28, 0.46, 0.64, 0.82\right]$, and let $\alpha = .1$ unless specified otherwise.\footnote{These values were chosen because they are the expected values of the arm rewards, from smallest to largest, when 4 arms are sampled from $\cU[\alpha,1]$ with $\alpha = .1$.}
In each experiment we perform 100 repetitions and report the normalized cumulative reward after 1000 iterations; error bars represent the standard error of the mean.
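This synthetic reward model can be sketched as follows; the helper name and the two-context mean matrix are illustrative.

```python
import numpy as np

def synthetic_means(base, alpha):
    """Mean rewards for two user contexts over two groups of four arms each:
    the non-preferred group's means are reduced by alpha, as in the synthetic
    model (`synthetic_means` is an illustrative helper name)."""
    base = np.asarray(base, dtype=float)
    pref_group0 = np.concatenate([base, base - alpha])  # context prefers group 0
    pref_group1 = np.concatenate([base - alpha, base])  # context prefers group 1
    return np.vstack([pref_group0, pref_group1])

M = synthetic_means([0.28, 0.46, 0.64, 0.82], alpha=0.1)
# Row 0 (prefers group 0): arms 0-3 keep their means, arms 4-7 lose alpha.
```

Sampling a Bernoulli reward for arm $a$ under context $s$ then amounts to drawing with success probability `M[s, a]`.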
\paragraph{Effect of Fairness Constraints.}
As there are only two groups, setting a lower bound constraint $\ell_1 = \zeta$ is equivalent to setting an upper bound constraint $u_2 = 1-\zeta$. Hence, it suffices to see the effect as we vary the lower bounds.
We fix $\alpha = .1$, and vary $\ell_1 = \ell_2 = \ell$ from {0 to .5}; i.e., a completely unconstrained setting to a fully constrained one in which each group has exactly 50\% probability of being selected.
We observe that, even for very small values of $\ell$, the $\textsc{Fair-OFUL}\xspace$ and $\textsc{Fair-EPS}\xspace $ algorithms significantly outperform \textsc{Ran}\xspace. Indeed, the performance of the $\textsc{Fair-OFUL}\xspace$ and $\textsc{Fair-EPS}\xspace $ algorithms is effectively the same as the performance of the (unattainable) hypothetical optimum, and is only worse than the unconstrained (and hence unfair) algorithm by an additive factor of approximately {$\nicefrac{\ell}{10}$}.
\paragraph{Effect of Group Preference Strength.}
An important parameter in the above model is the amount of reward lost when a user is shown items from a group that they do not prefer.
We fix $\ell = .25$, and vary $\alpha$ (the reward subtracted when choosing an arm from a non-preferred group) from {0 to .25}.
We note that even for very small values, e.g., $\alpha = .05$, algorithms such as \textsc{Ran}\xspace attain significantly less reward, while $\textsc{Fair-OFUL}\xspace$ and $\textsc{Fair-EPS}\xspace $ are just slightly worse than the unconstrained (and hence unfair) algorithm.
As before, we note that $\textsc{Fair-OFUL}\xspace$ and $\textsc{Fair-EPS}\xspace $ perform almost as well as the unattainable optimum, and are only worse than the unconstrained (and hence unfair) algorithm by an additive factor of approximately {$\nicefrac{\alpha}{4}$}.
\subsection{Experiments on Real-World Data}
\paragraph{Dataset.} We consider the YOW dataset \cite{zhang2005bayesian}, which contains data from a collection of 24 paid users who read 5921 unique articles over a 4 week time period. The dataset contains the time at which each user read an article, a [0-5] rating for each article read by each user, and (optional) user-generated categories of articles viewed. We use this data to construct reward distributions for several different contexts (corresponding to different types of users) on a larger set of arms (corresponding to different articles with varying quality) of different types of content (corresponding to groups) that one can expect to see online.
We first created a simple ontology to categorize the 10010 user-generated labels into a total of $g = 7$ groups of content: Science, Entertainment, Business, World, Politics, Sports, and USA. We then removed all articles that did not have a unique label, in addition to any remaining articles that did not fit this ontology. This left us with 3403 articles, each belonging to a single group. We removed all users who did not view at least 100 of these articles; this left 21 users. We think of each user as a single context. We observe that on average there are $k=81$ unique articles in a day, and take this to be the number of ``arms'' in our experiment. The number of articles $k_i$ in group $G_i$ is simply the average number of unique articles observed from that group in a day.
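The preprocessing above can be sketched in Python as follows; the data structures (`article_groups`, `user_views`) are illustrative, not the YOW schema:

```python
# Sketch of the dataset filtering step, assuming each article carries a
# list of mapped group names and each user a set of viewed article ids.
GROUPS = ["Science", "Entertainment", "Business", "World",
          "Politics", "Sports", "USA"]

def preprocess(article_groups, user_views, min_views=100):
    """Keep articles whose labels map to exactly one group of the
    ontology, then keep users who viewed at least `min_views` of them."""
    kept = {a for a, gs in article_groups.items()
            if len(set(gs)) == 1 and gs[0] in GROUPS}
    users = {u: v & kept for u, v in user_views.items()
             if len(v & kept) >= min_views}
    return kept, users
```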
Lastly, we note that the articles suggested to users in the original experiment were selected via a recommendation system tailored for each user. We note that some users rarely, if ever, look at certain categories at any point in the 4 weeks; the difference in the \% of User Likes (see {Table~\ref{tab:data}}) suggests that some amount of polarization, either extrinsic or intrinsic, is present in the data. This makes it an interesting dataset to work with, and is, in a sense, a worst-case setup for our experiment -- we take the pessimistic view that presenting content from un-viewed categories gives a user 0 reward (see below), yet we attempt to enforce fairness by presenting these categories anyway.
\paragraph{Experimental Setup.} Let $\rho_a$ be the rating of article $a \in [k]$, given by averaging all of the ratings it received across all users; we consider this to be the underlying quality of an article.
The quality of our arms is determined as follows: for a group $i$ with $|G_i|$ articles, we sort the articles by rating and split them into $k_i$ buckets of $|G_i|/k_i$ articles each (the lowest-rated articles, the second-lowest, and so on); call each such bucket $G_i^h$ for $h \in [k_i]$. Then, we let the score of arm $h \in [k_i]$ in group $i$ be
$\rho_{i,h} = \frac{\sum_{a \in G_i^h} \rho_a}{|G_i^h|}.$
This gives the underlying arm qualities for our simulations.
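The bucketing step can be sketched as follows; for simplicity, the sketch assumes $k_i$ divides $|G_i|$:

```python
def bucket_scores(ratings, k_i):
    """Sort one group's article ratings, split them into k_i equal
    buckets (lowest-rated first), and return each bucket's mean rating
    as the arm quality rho_{i,h} for h = 1, ..., k_i."""
    r = sorted(ratings)
    size = len(r) // k_i  # assumes k_i divides |G_i|
    return [sum(r[h * size:(h + 1) * size]) / size for h in range(k_i)]
```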
To determine the user preferences across groups,
we first calculate the probability that an article belongs to category $i$, given that user $u$ read the article; formally,
$q_i^u = \P{ a \in G_i | \mbox{user $u$ viewed $a$}}.$
We let the \emph{average reward} of arm $h$ in group $G_i$ for a user $u$ be $\left({\mu^\star}\right)_{i,h}^u = q_i^u\cdot \rho_{i,h}.$
We normalize the average rewards for each user to lie in $[0.1,0.9]$, and assume that when we select an arm we receive its average reward plus 0-mean noise drawn from a Normal distribution $\overline{\mathcal N}(0,0.05)$, truncated so that the rewards always lie within $(0,1)$.
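The reward construction can be sketched as follows; interpreting the truncation as rejection sampling is our assumption, not a detail stated in the paper:

```python
import random

def make_rewards(q_u, rho, lo=0.1, hi=0.9):
    """Average reward of arm h in group i for one user,
    (mu*)_{i,h} = q_u[i] * rho[i][h], rescaled to [lo, hi]."""
    raw = [q_u[i] * r for i, arms in enumerate(rho) for r in arms]
    mn, mx = min(raw), max(raw)
    return [lo + (hi - lo) * (x - mn) / (mx - mn) for x in raw]

def draw_reward(mu, sigma=0.05):
    """Observed reward: mean plus 0-mean Normal noise, truncated
    (here: by rejection) so the realized reward lies in (0, 1)."""
    while True:
        r = mu + random.gauss(0.0, sigma)
        if 0.0 < r < 1.0:
            return r
```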
\begin{table}
\centering
\scalebox{1}{
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
\textbf{Category} & \textbf{\# Ratings} & \textbf{\# Articles/Day} & \textbf{Avg. Rating} & \textbf{\% Users Like} \\
\hline
\hline
Science & 1708&26&3.64&90.5 \\
\hline
Entertainment& 1170&18&3.31&76.2 \\
\hline
Business & 847&12&3.64&90.5 \\
\hline
World & 828&12&3.54&76.2 \\
\hline
Politics & 492&7&3.55&47.6 \\
\hline
Sports & 227&3&3.59&14.3 \\
\hline
USA & 227&3&3.48&28.6 \\
\hline
\hline
\textbf{Total} & \textbf{5501}&\textbf{81}&\textbf{3.54} &-\\
\hline
\end{tabular}
}
\caption{An overview of the dataset and resulting parameters used in our experiment.
We report the average number of unique articles each category has in a day across all users; in our experiment this is equivalent to the number of arms in each category. Lastly, we say that a user \emph{likes} a category if at least 5\% of the articles they read are from that category, and we report the \% of users who like each category.
}
\label{tab:data}
\end{table}
We have 21 users, each of which we think of as a different context.
We report the normalized cumulative reward averaged across all users for each of the algorithms described above.
Error bars depict the standard error of the mean.
\begin{figure*}[t!]
\centering
\begin{tabular}{ccc}
{\includegraphics[width=0.45\textwidth]{real_world_2000_upper_bound.png}}&
{\includegraphics[width=0.44\textwidth]{real_world_10000_convergence.png}}&
{\includegraphics[width=0.1\textwidth,trim = 0cm -4cm 0cm 0cm]{legend_ICML.JPG}}
\\
$\qquad\;\;$(a) & $\qquad\;\; $ (b)
\end{tabular}
\caption{{Empirical results on Real-World Data.} (a) {\em Tradeoff between Fairness and Reward.} The $x$ axis depicts the fairness of the constrained algorithms as measured by risk difference, and its effect on the normalized cumulative reward is reported.
The same risk difference is achieved by instead varying the upper bounds $u_i = u$ and leaving the lower bounds unconstrained, i.e., $\ell_i = 0$ for all $i$. (b) {\em Convergence Over Time.} We observe that, for sufficiently many iterations, the normalized cumulative rewards from \textsc{Fair-OFUL}\xspace and \textsc{Fair-EPS}\xspace converge to \textsc{Opt}\xspace, but \textsc{Ran}\xspace does not.
}
\label{fig:realworld}
\end{figure*}
We consider the risk difference fairness metric (see Section \ref{sec:metrics-app}), and study how guaranteeing a fixed amount of fairness affects the normalized cumulative reward. Note that, if $\ell_i = \ell$ and $u_i = u$ for all $i$, then the risk difference is upper bounded by $u - \ell$, where 1 is the most unfair (users can be served completely different groups) and 0 is the most fair (all users see the same proportion of each group). Thus, for any fixed value $x$ of the risk difference, any pair of bounds with $u-\ell = x$ satisfies the guarantee.
We consider the extreme setting where $\ell = 0$ and we vary only the upper bound $u$ in order to satisfy the desired amount of fairness (see Figure \ref{fig:realworld}).\footnote{Note that in order to compute the risk difference, one must take $\ell = \max\{0,1-(g-1)\cdot u\}$, i.e., one must consider the implicit lower bound implied by the set of upper bounds. Similarly, $u = \min\{1, 1-(g-1)\cdot \ell\}$. The risk difference is guaranteed to be at most $u - \ell$ given these implicit definitions. }
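The implicit bounds described in the footnote can be computed as follows; combining each explicit bound with its implied counterpart via $\max$/$\min$ is our reading of the footnote, not code from the paper's experiments:

```python
def implied_bounds(ell, u, g):
    """Effective per-group bounds when all g groups share (ell, u):
    the mass left after the other g-1 groups take their maximum
    (resp. minimum) share determines the implicit lower (resp. upper)
    bound, as in the footnote."""
    ell_eff = max(ell, 1.0 - (g - 1) * u)
    u_eff = min(u, 1.0 - (g - 1) * ell)
    return ell_eff, u_eff

def risk_difference_bound(ell, u, g):
    """Guaranteed upper bound u_eff - ell_eff on the risk difference."""
    ell_eff, u_eff = implied_bounds(ell, u, g)
    return u_eff - ell_eff
```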
We observe that across the board \textsc{Fair-OFUL}\xspace and \textsc{Fair-EPS}\xspace outperform \textsc{Ran}\xspace. However, for the tightest constraints, none give much of an advantage over the \textsc{Naive}\xspace algorithm. As discussed above, this is due to the pessimistic nature of our reward estimates, where we assume that if a viewer does not prefer a given category, they receive 0 reward from viewing such an article. Hence, when enforcing strict constraints, the algorithm must make many such 0-reward decisions. However, for more moderate constraints the performance improves significantly as compared to \textsc{Naive}\xspace. Furthermore, this is where the advantage of \textsc{Fair-EPS}\xspace and \textsc{Fair-OFUL}\xspace over \textsc{Ran}\xspace can be seen; in effect, \textsc{Fair-EPS}\xspace and \textsc{Fair-OFUL}\xspace optimize over the (non-zero) sub-optimal groups that the fairness constraints dictate, giving them an advantage over the other randomized approaches to satisfying fairness constraints.
Unlike the synthetic data, there is now a gap between the performance of the \textsc{Fair-OFUL}\xspace, \textsc{Fair-EPS}\xspace and \textsc{Opt}\xspace algorithms; this is largely due to the fact that, with 81 arms, there is a higher learning cost.
Indeed, even in the completely unconstrained case, we see that \textsc{Opt}\xspace outperforms \textsc{Unc}\xspace by an additive gap of approximately $0.05$. We now compare the empirical regret and running time for \textsc{Fair-OFUL}\xspace and \textsc{Fair-EPS}\xspace .
\paragraph{Empirical Regret.}We note that \textsc{Fair-EPS}\xspace outperforms \textsc{Fair-OFUL}\xspace in most constrained settings. This reflects the worse dependence of the regret of \textsc{Fair-OFUL}\xspace on $k$: while the regret of \textsc{Fair-OFUL}\xspace grows as $O\left(k^3\right)$ with the number of arms $k$, the regret of \textsc{Fair-EPS}\xspace grows only as $O\left(k\right)$, and with 81 arms the difference between the two becomes apparent.
Given enough time, the normalized reward of \textsc{Fair-OFUL}\xspace does converge to \textsc{Opt}\xspace as predicted by {Theorem \ref{theorem:fofulreg}} (we report the empirical convergence, for $\ell = 0$ and $u = \nicefrac{6}{7}$, in Figure \ref{fig:realworld}b).
\paragraph{Empirical Running Time.}With $\ell = 0$, $u = \nicefrac{6}{7}$, and $T=2000$, the empirical running time of \textsc{Fair-EPS}\xspace is 3.9 seconds, whereas that of \textsc{Fair-OFUL}\xspace is 187.7 seconds. This demonstrates the efficiency gains of $\textsc{Constrained-$\varepsilon$-Greedy}\xspace$ over $\textsc{$L_1$-OFUL}\xspace$.
Lastly, we observe that satisfying the same risk difference using upper bound constraints as opposed to lower bound constraints results in higher reward when the constraints are not tight. This occurs, again, because of the pessimistic reward calculations -- upper bound constraints allow more probability mass to be kept away from the 0-reward groups for longer while satisfying the same fairness guarantee.
In general, for a given fairness metric, there could be multiple ways of setting constraints to achieve the same fairness guarantee; an important open question that remains is how to optimize over these different possibilities.
\section{Proofs}
\subsection{Proof of Theorem \ref{theorem:fofulreg}}
\label{sec:proofs_fofulreg}
In this section we prove Theorem \ref{theorem:fofulreg}.
In fact we prove the following more precise version of it.
Since the bound is the same for any context $s \in \mathcal{S}$, we omit the context henceforth.
\begin{theorem}\label{thm:main2} Suppose we are given the description of $\mathcal{C}$ and a sequence of rewards drawn from an $O(1)$-subgaussian distribution with expectation vector $\mu^\star$.
Assume that $\norm{\mu^\star}_2 \leq \sigma$ for some $\sigma \geq 1$.
Then, with probability at least $1-\delta$, the regret of \textsc{$L_1$-OFUL}\xspace after time $T$ is:
\begin{align*}
&{\sf FairRegret}_T \leq \frac{8k\sigma^2}{\gamma}\bigg(\log T + (k-1)\log\frac{64 \sigma^2}{\gamma^2} +\\
& 2(k-1)\log\big(k\log\big(1 + \nicefrac{T}{k}\big) + 2\log\left(\nicefrac{1}{\delta}\right)\big)+ 2\log\left(\nicefrac{1}{\delta}\right)\bigg)^2.
\end{align*}
\end{theorem}
\paragraph{Notations for the proof.}
For a positive definite matrix $A\in\bR^{k\times k}$, the weighted $1$-norm and $2$-norm of a vector $x \in \bR^k$ is defined by
$$\norm{x}_{1,A} := \sum_{i = 1}^k\abs{A^{\nicefrac{1}{2}}x}_i \quad \mbox{and} \quad \norm{x}_{2,A} := \sqrt{x^\top A x}.$$
Let $p^\star := \argmax _{p \in \mathcal{C}} \langle \mu^\star, p\rangle$.
Let the instantaneous regret $R_t$ at time $t$ of \textsc{$L_1$-OFUL}\xspace be defined as the difference between the expected values of the reward received for $p^\star$ and the chosen arm $p^t$:
$$R_t= \ip{\mu^\star}{p^\star-p^t}.$$
The cumulative regret until time $T$, ${\sf FairRegret}_T$, is defined as $\sum_{t = 1}^{T}R_t$.
Recall that $r^t$ is the reward that the algorithm receives at the $t$-th time instance.
Note that the expected value of reward $r^t$ is $\ip{\mu^\star}{p^t}$.
Let $$\eta_t := r^t - \ip{\mu^\star}{p^t}.$$
The fact that $r^t$ is $O(1)$-subgaussian implies that $\eta_t$ is also $O(1)$-subgaussian.
Finally, recall that we denote our estimate of $\mu^\star$ at the $t$-th iteration by $\hat{\mu}_t$.
\paragraph{Technical lemmas.}
Towards the proof of Theorem \ref{thm:main2}, we need some results
from \cite{dani2008stochastic} and \cite{YA2011} that we restate in our setting.
The first is a theorem from \cite{YA2011} which helps us to prove that $\mu^\star$ lies in the confidence set $B_t^1$ at each time-step with high probability.
\begin{theorem}[Theorem 2 in \cite{YA2011}] Assume that the rewards are drawn from an $O(1)$-subgaussian distribution with the expectation vector $\mu^\star$.
%
Then, for any $0 < \delta < 1$, with probability at least $1-\delta$, for all $t \geq 0, \mu^\star$ lies in the set
\begin{align*}
B_t^2 := \left\{\mu \in \bR^k : \norm{\mu - \hat{\mu}_t}_{2, V_t} \leq \sqrt{\beta_t(\delta)}\right\}
\end{align*}
where $\beta_t$ is defined in Step 5 of the \textsc{$L_1$-OFUL}\xspace algorithm.
\label{theorem:YAconf}
\end{theorem}
\noindent
As a simple consequence of this theorem we prove that $\mu^\star$ lies inside $B_t^1$ with high probability.
\begin{lemma} $\mu^\star$ lies in the confidence set $B_t^1$ with probability at least $1-\delta$ for all $t \leq T$.
\label{lemma:confidenceset}
\end{lemma}
\begin{proof}
\begin{align*}
\norm{\mu^\star - \hat{\mu}_t}_{1, V_t}\leq \sqrt{k}\norm{\mu^\star - \hat{\mu}_t}_{2, V_t}
\leq \sqrt{k\beta_t(\delta)}.
\end{align*}
Here, the first inequality follows from Cauchy-Schwarz and the second inequality holds with probability at least $1-\delta$ for all $t$ due to Theorem \ref{theorem:YAconf}.
\end{proof}
\noindent
The following four lemmas will be needed in the proof of our main theorem.
\begin{lemma}[Lemma 7 in \cite{dani2008stochastic}] For all $\mu \in B_t^1$ (as defined in Step 6 of \textsc{$L_1$-OFUL}\xspace) and all $p \in \cC$, we have:
$$\abs{(\mu-\hat\mu_t)^\top p} \leq \sqrt{k\beta_t(\delta)p^\top V_t^{-1}p}$$ where $V_t$ is defined in Step 10 of the \textsc{$L_1$-OFUL}\xspace algorithm.
\label{lemma:dani7}
\end{lemma}
\begin{lemma}[Lemma 8 in \cite{dani2008stochastic}] If $\mu^\star \in B_t^1$, then $$R_t \leq 2\min\left(\sqrt{k\beta_t(\delta)p^{t\top} V_t^{-1}p^t}, 1\right).$$
\label{lemma:dani8}
\end{lemma}
\begin{lemma}[Lemma 11 in \cite{YA2011}] Let $\left\{p^t\right\}_{t=1}^{T}$ be a sequence in $\bR^k$, $V = \cI_k$ be the $k\times k$ identity matrix, and define $V_t := V + \sum_{\tau=1}^{t}p^{\tau}p^{\tau\top}$. Then, we have that:
\begin{align*}
\log\det(V_T) \leq \sum_{t = 1}^{T}\norm{p^t}_{V_{t-1}^{-1}}^2 \leq 2\log\det(V_T).
\end{align*}
\label{lemma:ya11}
\end{lemma}
\noindent Finally, we state another result from the proof of Theorem 5 in \cite{YA2011}.
\begin{lemma} For any $T$, we have the following upper bound on the value of $\frac{\beta_T(\delta)}{\gamma}\log\det(V_T)$:
\begin{align*}
&\frac{\beta_T(\delta)}{\gamma}\log\det(V_T) \leq \frac{\sigma^2}{\gamma}\bigg(\log T + (k-1)\log\frac{64\sigma^2}{\gamma^2}\\
&+2(k-1)\log(k\log\left(1 + \nicefrac{T}{k}\right) + 2\log(\nicefrac{1}{\delta}))+ 2\log(\nicefrac{1}{\delta})\bigg)^2
\end{align*}
where $\gamma$ is defined as in Equation \ref{eq:gamma}, $\sigma$, $\delta$ are as in Theorem \ref{thm:main2}.
\label{lemma:appendixE}
\end{lemma}
\paragraph{Proof of Theorem \ref{thm:main2}.}
Start by noting that
\begin{align}
{\sf FairRegret}_T=\sum_{t=1}^T R_t \leq \sum_{t=1}^T \frac{R_t^2} {\gamma}. \tag{since $R_t = 0$ or $R_t \geq \gamma$}
\end{align}
From Lemma \ref{lemma:confidenceset}, we know that $\mu^\star$ lies in $B_t^1$ with probability at least $1-\delta$.
Hence, with probability at least $1-\delta$, we have:
\begin{align*}
\sum_{t=1}^T\frac{R_t^2}{\gamma} &\leq \sum_{t=1}^T\frac{4k\beta_t(\delta)}{\gamma}\norm{p^t}_{2, V_t^{-1}}^2 \tag{from Lemma \ref{lemma:dani8}}\\
&\leq \frac{4k\beta_T(\delta)}{\gamma}\sum_{t=1}^T\norm{p^t}_{2, V_t^{-1}}^2 \tag{since $\beta_t$ is increasing with $t$}\\
&\leq \frac{8k\beta_T(\delta)}{\gamma}\log\det(V_T) \tag{from the second ineq. in Lemma \ref{lemma:ya11}}.
\end{align*}
To conclude the proof of the theorem, we combine the bound on ${\sf FairRegret}_T$ with Lemma \ref{lemma:appendixE} to give us the required upper bound on the RHS of the last inequality above:
\begin{align*}
&\sum_{t=1}^TR_t \leq \frac{8k\sigma^2}{\gamma}\bigg(\log T + (k-1)\log\frac{64\sigma^2}{\gamma^2}+\\
& 2(k-1)\log(k\log\left(1 + \nicefrac{T}{k}\right) + 2\log(\nicefrac{1}{\delta}))+ 2\log(\nicefrac{1}{\delta})\bigg)^2.
\end{align*}
\subsection{Proof of Theorem \ref{theorem:epsgreedy}}
\label{sec:proofs_epsgreedy}
Next, we give the proof of Theorem \ref{theorem:epsgreedy}.
\begin{proof}
Let $v^\star = [v^\star_1,\cdots,v^\star_k]$ be the optimal probability vector.
Conditioned on the history at time $t$, the expected regret of the $\textsc{Constrained-$\varepsilon$-Greedy}\xspace$ at iteration $t$ can be bounded as follows
\begin{align*}
R(t) &= \mu^{\star \top} v^\star - \left( (1-\epsilon_t)\mu^{\star \top} \bar{v}^t + \frac{\epsilon_t}{k}\sum_{a=1}^{k}\mu^\star_a \right) \\ &
\leq (1-\epsilon_t) \mu^{\star \top} (v^\star - \bar{v}^t) + \mu^{\star \top}v^\star\epsilon_t \\&
\leq (1-\epsilon_t) \mu^{\star \top}v^\star 1\{\bar{v}^t \neq v^\star \} +\mu^{\star \top}v^\star \epsilon_t.
\end{align*}
Let $n= \nicefrac{4}{(\eta d^2)}$. For $t\leq n$ we have $\epsilon_t=1$. The expected regret of $\textsc{Constrained-$\varepsilon$-Greedy}\xspace$ is
\begin{align} \label{eq:regret_greedy}
& \mathbb{E}\left[{\sf FairRegret}_T\right] \leq \nonumber \\& \mu^{\star \top}v^\star \sum_{t=n+1}^{T} \mathbb{P}\{\bar{v}^t \neq v^\star \} + \mu^{\star \top}v^\star\sum_{t=1}^{T} \epsilon_t.
\end{align}
%
Let $\Delta \mu = \bar{\mu} - \mu^\star$.
Without loss of generality, let $\mu^{\star \top} v_i > \mu^{\star \top} v_j$ for any $v_i,v_j \in V(C)$ with $i<j$.
%
Hence, $v_1 = v^\star$. Let $\Delta_i = \mu^{\star \top} (v_1 - v_i)$.
As a result $\Delta_2 = \gamma$.
%
The event $\bar{v}^t \neq v^\star$ happens when $\bar{\mu}^\top_t v_i > \bar{\mu}^\top_t v_1$ for some $i>1$.
%
That is $(\mu^\star + \Delta \mu_t)^\top(v_i-v_1) = -\Delta_i + \Delta \mu_t^\top(v_i-v_1)\geq0$. As a result, we have
\begin{align}
\mathbb{P}\{\bar{v}^t \neq v^\star \} &= \mathbb{P}\{\bigcup_{v_i \in V(C) \backslash v_1 } \Delta \mu_t^\top(v_i-v_1) \geq \Delta_i \} \nonumber\\&
\leq \mathbb{P}\{\bigcup_{v_i \in V(C) \backslash v_1 } \|\Delta \mu_t\|_\infty \|v_i-v_1\|_1 \geq \Delta_i \} \label{eq:temp}\\&
\leq \mathbb{P}\{\bigcup_{v_i \in V(C) \backslash v_1 } \|\Delta \mu_t\|_\infty \geq \frac{\Delta_i}{2} \} \nonumber\\&
= \mathbb{P}\{\|\Delta \mu_t\|_\infty \geq \frac{\gamma}{2} \} \nonumber\\&
= \mathbb{P}\{\bigcup_{j \in[k] } |\Delta \mu_{t,j}| \geq \frac{\gamma}{2} \}\nonumber\\&
\leq \sum_{j\in [k]}\mathbb{P}\{ |\Delta \mu_{t,j} |\geq \frac{\gamma}{2} \} .
\end{align}
In \eqref{eq:temp} we use H\"older's inequality.
%
Let $E_t = \eta \sum_{\tau=1}^{t} \nicefrac{\epsilon_\tau}{2} $ and
%
let $N_{t,j}$ be the number of times that we have chosen arm $j$ up to time $t$.
Next, we bound $\mathbb{P}\{| \Delta \mu _{t,j}|\geq \frac{\gamma}{2} \}$ (recall that $\Delta_2 = \gamma$).
%
\begin{align}
&\mathbb{P}\{ |\Delta \mu_{t,j}| \geq \frac{\gamma}{2} \} \nonumber\\&
= \mathbb{P}\{ |\Delta \mu_{t,j}| \geq \frac{\gamma}{2} | N_{t,j} \geq E_t\} \mathbb{P}( N_{t,j} \geq E_t) \nonumber \\&+ \mathbb{P}\{ |\Delta \mu_{t,j}|\geq \frac{\gamma}{2} | N_{t,j} < E_t\} \mathbb{P}( N_{t,j} < E_t) \nonumber \\ &
\leq \mathbb{P}\{ |\Delta \mu_{t,j}|\geq \frac{\gamma}{2} | N_{t,j} \geq E_t\} +
\mathbb{P}( N_{t,j} < E_t). \label{eq:temp2}
\end{align}
%
%
As $q_f \in \left\{q : B_{\infty}(q,\eta) \subset \mathcal{C} \right\}$, we have $q_{a,f} > \eta $.
Next, we bound each term of \eqref{eq:temp2}. First, using Chernoff-Hoeffding bound we have
%
\begin{equation} \label{eq:temp3}
\mathbb{P}\{ |\Delta \mu_{t,j}|\geq \frac{\gamma}{2} | N_{t,j} \geq E_t\} \leq 2\exp(-\frac{E_t\gamma^2}{2}).
\end{equation}
Second, using Bernstein inequality we have
%
\begin{equation} \label{eq:temp4}
\mathbb{P}( N_{t,j} < E_t) \leq \exp(-\frac{E_t}{5}).
\end{equation}
For $t\leq n$, $\epsilon_t=1$ and $E_t = \eta t/ 2$. For $t > n$ we have
\begin{align}
E_t = \frac{\eta \cdot n}{2} + \sum_{i=n+1}^{t} \frac{2 }{d^2i} &\geq\frac{2}{d^2} + \frac{2}{d^2}\ln(\frac{t}{n}) \nonumber \\&
= \frac{2}{d^2}\ln(e\frac{t}{n}).\label{eq:temp5}
\end{align}
By plugging \eqref{eq:temp3}, \eqref{eq:temp4} and \eqref{eq:temp5} in \eqref{eq:temp2} and noting that $\gamma <1/2$ we get
\begin{align} \label{eq:temp6}
\mathbb{P}\{ |\Delta \mu_{t,j}|&\geq \frac{\gamma}{2} \} \leq
\left(\frac{n}{et} \right)^{\frac{\gamma^2}{d^2}} +
\left(\frac{n}{et} \right)^{\frac{4}{10d^2}} \nonumber \\&
\leq \left(\frac{n}{et} \right) +
\left(\frac{n}{et} \right)^{\frac{4}{10d^2}} \leq 2\left(\frac{n}{et} \right) .
\end{align}
Plugging \eqref{eq:temp6} in \eqref{eq:regret_greedy} yields
\begin{align} \label{eq:regret_greedy2}
&\mathbb{E}\left[{\sf FairRegret}_T\right] \leq \nonumber \\& \mu^{\star \top}v^\star
\left( (1+\frac{2n}{e}) \ln T + n \right).
\end{align}
By substituting $n = \nicefrac{4}{(\eta d^2)} $ in the regret above and noting that $\gamma\leq 2d$ we conclude the proof
\begin{align} \label{eq:regret_greedy3}
& \mathbb{E}\left[{\sf FairRegret}_T\right] \leq \nonumber \\& \mu^{\star \top}v^\star
\left( \left(1+\frac{4}{\eta d^2}\right) \ln T + \frac{4}{\eta d^2} \right) = O\left( \frac{\log T}{\eta \gamma^2} \right).
\end{align}
%
\end{proof}
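For concreteness, the exploration schedule implicit in this proof ($\epsilon_t = 1$ for $t \leq n$ and, matching the $\nicefrac{2}{d^2 i}$ terms accumulated by $E_t$, $\epsilon_t = \nicefrac{n}{t}$ thereafter, with $n = \nicefrac{4}{(\eta d^2)}$) can be sketched as follows; this is an inferred illustration, not the algorithm's pseudocode:

```python
def eps(t, eta, d):
    """Exploration probability at round t, inferred from the proof:
    eps_t = 1 for t <= n and eps_t = n / t afterwards, where
    n = 4 / (eta * d**2). With this choice, eta * eps_t / 2 equals
    2 / (d^2 * t) for t > n, as used in the bound on E_t."""
    n = 4.0 / (eta * d ** 2)
    return 1.0 if t <= n else n / t
```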
\subsection{Laminar Constraints}
\label{sec:laminar}
In this section, we consider constraints with a laminar structure. Let the groups $G_1,\ldots,G_g \subseteq [k]$ be such that $G_i \cap G_j \neq \emptyset$ implies $G_i \subseteq G_j$ or $G_j \subseteq G_i$.
In this case, the linear programming problem can be solved efficiently by a greedy algorithm.
The groups form a tree data structure, where the children of a node are the maximal groups that are subsets of it.
For example in Figure~\ref{fig:Laminar}, the groups $G_1$ and $G_2$ are subsets of the arms $[k]$ and $G_1 \cap G_2 = \emptyset$. Similarly, the groups $G_3$ and $G_4$ are subsets of the group $G_1$ and $G_3 \cap G_4 = \emptyset$. $G_5$ is a subset of $G_2$.
\begin{figure}[t]
\centering
\hspace{-.2in}
\includegraphics[width=.6\linewidth]{fast_bandit.pdf}
\caption{Laminar Group structure.}
\label{fig:Laminar}
\end{figure}
If the lower bound $\ell_i$ for a group $G_i$ is smaller than the sum of the lower bounds of its children groups, then we increase it to that sum.
For example, in Figure~\ref{fig:Laminar}, if $\ell_1< \ell_3+ \ell_4$, then we increase it to $\ell_3+ \ell_4$. This is because satisfying the lower bounds of $G_3$ and $G_4$ automatically satisfies the lower bound of $G_1$.
Similarly, if the upper bound $u_i$ for a group $G_i$ is larger than the sum of the upper bounds of its children groups, then we decrease it to that sum.
For example, in Figure~\ref{fig:Laminar}, if $u_1 > u_3+ u_4$, then we decrease it to $u_3+ u_4$. This is again because the total probability that an arm in group $G_i$ is selected cannot exceed the sum of the upper bounds of its children.
This change of the upper and lower bounds does not change the optimum of the LP problem.
In the greedy algorithm, first we satisfy the lower bounds, then we allocate the remaining probability such that the upper bounds are not violated.
In satisfying the lower bounds, we take a bottom-up approach.
We start from the leaves and satisfy the lower bound by giving the item to the arm $a$ with the largest reward in the group, i.e., $\argmax_{a \in G_i} \mu_a$. In our example, we set the probability of $\argmax_{a \in G_3} \mu_a$ to $\ell_3$, the probability of $\argmax_{a \in G_4} \mu_a$ to $\ell_4$ and the probability of $\argmax_{a \in G_5} \mu_a$ to $\ell_5$.
Next, we proceed with satisfying the lower bound for the parents. In our example, we add the probability of $\ell_1- (\ell_3+\ell_4)$ to $\argmax_{a \in G_1} \mu_a$, and the probability of $\ell_2- \ell_5$ to $\argmax_{a \in G_2} \mu_a$.
We continue the process until every lower bound is satisfied. Finally, we assign the remaining probability (in our example, $1- (\ell_1+\ell_2)$) to the highest-reward arms while respecting the upper bounds.
The remaining probability is first allocated to $\argmax_{a \in [k]} \mu_a$ until one of the upper bound constraints becomes tight.
Then, we eliminate the arms inside that group, and we allocate some probability to the arm with the maximum reward until another upper bound constraint is reached.
We continue this process until either our distribution over arms is a probability distribution or we cannot allocate more probability to any arm without violating a constraint.
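The greedy procedure above can be sketched for a single layer of disjoint groups (deeper laminar trees apply the same two phases bottom-up); this is an illustrative sketch, not the paper's implementation:

```python
def laminar_greedy(mu, groups, ell, u):
    """Greedy allocation over arms 0..k-1 partitioned into disjoint
    groups G_1..G_g with per-group bounds (ell_i, u_i).
    Phase 1: give each group's best arm its lower bound ell_i.
    Phase 2: pour the remaining mass into the globally best arm whose
    group's upper bound is not yet saturated."""
    v = [0.0] * len(mu)
    mass = [0.0] * len(groups)
    group_of = {a: i for i, G in enumerate(groups) for a in G}
    # Phase 1: satisfy the lower bounds.
    for i, G in enumerate(groups):
        best = max(G, key=lambda a: mu[a])
        v[best] += ell[i]
        mass[i] += ell[i]
    remaining = 1.0 - sum(ell)
    # Phase 2: allocate the rest greedily, respecting the upper bounds.
    for a in sorted(range(len(mu)), key=lambda a: -mu[a]):
        if remaining <= 0:
            break
        i = group_of[a]
        take = min(remaining, u[i] - mass[i])
        v[a] += take
        mass[i] += take
        remaining -= take
    return v
```

Because arms are scanned in decreasing order of reward, each group's best arm absorbs all of the group's remaining room before any other arm in that group is reached, matching the elimination step described above.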
Let the probability that an arm from the group $G_i$ is selected be $\sum_{a\in G_i} v_a = q_i$, and let the children of group $G_i$ be $G_{i1}, G_{i2},\ldots,G_{im}$. We denote the optimal allocation (subject to the constraints) at node $G_i$ by $OPT(G_i,q_i,\ell,u)$. Then, we have
\begin{align} \label{eq:temp_opt}
&OPT(G_i,q_i,\ell,u) = \\&\sum_{j=1}^{m} OPT(G_{ij},\ell_{ij},\ell,1) + OPT(G_i,q_i-\sum_{j=1}^{m}\ell_{ij},0,u). \nonumber
\end{align}
In satisfying the lower bounds (i.e., first term in \eqref{eq:temp_opt})
if we do not change the probability of selecting an arm inside a group $G_i$, i.e., $\sum_{a\in G_i} v_a$, then the parent groups are not affected; hence, we can locally optimize the problem beginning from the smaller groups.
We can show by contradiction that the procedure for allocating the remaining probability (i.e., second term in \eqref{eq:temp_opt}) is optimal.
This is because, if the probability of $\argmax_{a \in [k]} \mu_a$ can be increased without violating an upper bound, or can be increased by reducing another arm's probability, then the current probability allocation is not optimal.
The running time of the algorithm is linear in the number of arms $k$ and the height of the tree.
Given that the height of the tree is at most $g$, the total running time is $O(gk)$.
\section{Conclusion}
In this paper we initiate a formal study of incorporating fairness in bandit-based personalization algorithms.
We present a general framework that allows one to ensure that a fairness metric of choice attains a desired fixed value by providing appropriate upper and lower bounds on the probability that a given group of content is shown.
We present two new bandit algorithms that perform well with respect to reward, improving the regret bound over the state-of-the-art.
\textsc{Fair-EPS}\xspace is particularly fast and we expect it to scale well in web-level applications.
Empirically, we observe that our \textsc{Fair-OFUL}\xspace and \textsc{Fair-EPS}\xspace algorithms indeed perform well; they not only converge quickly to the theoretical optimum, but this optimum, even for the tightest constraints (which attain a risk difference of 0) on the pessimistic arm values selected, is within a factor of 2 of the unconstrained rewards.
From an experimental standpoint, it would be interesting to explore the effect of the group structure in conjunction with the rewards structure on the tradeoff between fairness and regret.
Additionally, testing this algorithm in the field, in particular to measure user satisfaction given diversified news feeds, would be of significant interest.
Such an experiment would give deeper insight into the benefits and tradeoffs between personalization and diversification of content, which could then be leveraged to set the appropriate rewards and parameters in our algorithm.
\bibliographystyle{plain}
\section{Introduction}
Online learning has received little attention from physics education researchers relative to topics such as conceptual understanding or student discussions in the classroom \citep{docktor_synthesis_2014}. Physics courses are comparatively rare in online offerings, in part because of the hands-on laboratory courses required by the introductory sequence. However, many instructors are interested in promoting more student discussion in their classes, and web-based forums are a readily available tool for this purpose \citep{howard_discussion_2015}. Some work in physics has analyzed student discussion posts about homework problems \citep{kortemeyer_analysis_2006} or in textbook annotation \citep{miller_analysis_2016}, but more general-purpose forums of the type commonly discussed in distance learning literature are only beginning to be studied \citep{traxler_coursenetworking_2016,gavrin_connecting_2017}.
Forums are included with learning management systems at universities, and are freely available on various stand-alone platforms. Thus, they represent a tool that is available to instructors regardless of their choice of homework system or textbook.
To better understand these tools, this paper turns to network methods, which are a natural framework for analyzing the intricate record of transactions produced by discussion forums \citep{garton_studying_1997}.
We consider data from three semesters of an introductory physics course, taking the entire forum transcript (on- or off-topic) as our data. We use network analysis to explore and compare the structure of the discussions between semesters, drawing on the ``map'' of student connectivity that electronic records preserve. Since online environments have not been extensively studied in physics, we will summarize key results and questions of interest from
educational technology and distance learning.
This study applies network analysis in a new context for physics education research and aims to begin building a physics-specific understanding of how students form asynchronous discussion communities that help their learning.
\subsection{Computer-mediated communication}
Research about how students talk online is typically published
under keywords such as computer-mediated communication (CMC) and computer-supported collaborative learning (CSCL).
Numerous CSCL studies compare online to offline classes in terms of student achievement or satisfaction, and in many cases find that the online environment does at least as well as face-to-face classes \citep{johnson_synchronous_2006}.
Potential strengths of asynchronous forums include longer ``think time'' and the ability to easily reference comments from previous weeks, while drawbacks include reluctance to participate and high variability in comment quality \citep{guzdial_effective_2000,howard_discussion_2015}. The reduced-social-cues nature of text communication can lead to an unpredictable social gestalt in CMC. Researchers have observed both impersonal, highly task-focused environments, and equally strong interpersonal groups where a sense of community can even interfere with ``on-task'' discussions
if members hesitate to disagree with each other \citep{walther_computer-mediated_1996}. A review by \citet{walther_computer-mediated_1996} synthesizes early results to suggest that the speed and quality of community development are shaped by a sense of shared purpose among users, longevity of
the group, and outside cues or facilitation.
Educational settings vary in formality from technical, highly-focused project work to free-for-all socializing, producing a range of conversation styles from
expository to epistolary \citep{fahy_epistolary_2002}.
Shared purpose might be expected as a given in course forums, but in practice is often missing, and this is one area where instructor guidance can be very influential \citep{guzdial_effective_2000,howard_discussion_2015}.
To analyze the cognitive level of discussions, many researchers have turned to content analysis. Key results from this area are summarized by \citet{de_wever_content_2006} in their review of 15
content analysis schemes for asynchronous discussion groups. They find that content analysis tools vary widely in how clearly they connect learning theory to content codes and how (or if) they report inter-rater reliability measures. Few schemes were used in more than one study, and there is no wide consensus about how to break online conversations into an appropriate ``length scale'' (post, sentence, etc.) for analysis \citep{rourke_methodological_2001}.
Many researchers instead seek purely quantitative ways to study online talk, including social network analysis. \citet{garton_studying_1997} argue that social network analysis can effectively describe online interactions with concepts like tie strength, multiplexity (different channels or purposes of communication), or structural roles of nodes in the network. \citet{wortham_nodal_1999} notes that different network topologies could be productively linked to claims about communities of practice or cognitive apprenticeship. Though network analysis does not speak to the details of messages between students, it can show who talks to whom, the density and frequency of those ties, and how they evolve over time. For instructors trying to build a useful community for an online or online-supplemented course, there are many open questions, some of them first posed decades ago \citep{rice_electronic_1987,guzdial_effective_2000}: What time scales are appropriate to characterize discussions? What does reciprocity in relationships mean online, where many students might read a post but few respond? How much instructor involvement is needed to promote useful conversation?
In this study, we include data from the entire semester, to eliminate possible selection effects from only sampling a slice of weeks. The question of reciprocity is taken up again in Section \ref{sec:meth-nw} where our network model is discussed. We found no obvious link between the instructor's posting frequency and the discussion network that develops, but a future content analysis of the data may better address this question.
A final caution in generalizing from the CSCL literature is that most results come from fully online courses, and graduate-level courses are overrepresented.
It may be possible to draw on the discussion strengths of forums without the isolating effects of a distance course by using a web-based forum to supplement a traditional live class. Studies of this type of forum use are still relatively rare \citep{guzdial_effective_2000,yang_effects_2003}, especially at the introductory undergraduate level. This adjunct or ``anchored'' mode may be of the most interest to physics educators, whose courses are typically offered face-to-face
and who increasingly want to build community as part of active learning.
\subsection{Network analyses of online learning}
In a recent review of social network analyses in educational technology, \citet{sie_social_2012} classify study goals as visualization, analysis, simulation, or intervention. Work reviewed here fits in the first two types, and
can be grouped into two broad categories: descriptive studies of the structure of student networks in online education, and research connecting students' network positions with performance measures.
In the first category, work appearing in distance learning or online education literature has used network methods to understand online community structures (or lack thereof).
Researchers use network analysis to show power relations in the group or the engagement level of learners \citep{wortham_nodal_1999,de_laat_investigating_2007}. Other work contrasts between semesters or between student groups within a semester \citep{reffay_social_2002,aviv_network_2003}, and uses visual displays or clustering analysis to show differences in the community structure.
These studies all function as proofs of concept for analyzing online talk via networks, and some suggest best practices for constructing learning environments, but they are primarily exploratory.
They also span a range of communication channels, from synchronous text chat to asynchronous forums or email lists.
One larger pattern that emerges from this literature review is that
the communication medium affects network models---for example, using emails to link the network may produce many one-way but few reciprocal connections.
We will return to this issue in Section \ref{sec:methods}.
A second category of studies chooses one or more markers of course success and tries to link students' network centrality with those outcomes. \citet{yang_effects_2003} correlated centrality in friendship, advice, and adversarial networks with several components of course grade in an undergraduate business course that used a forum to supplement the face-to-face class. They found that centrality in the advice network was positively correlated with performance in both online and offline class activities. Centrality in the adversarial network (\textit{e.g.}, ``Which of the following individuals are difficult to keep a good relationship with?'') was negatively correlated with final exam and overall course grade. \citet{cho_social_2007} collected survey-based networks at the beginning and end of a two-semester online course sequence on aerospace system design. They looked for links between centrality and final grade and between a Willingness-to-Communicate (WTC) construct and network growth. They found that post-course (but not pre-course) degree and closeness centrality were positively correlated with final grade, and that students with higher WTC were more likely to form new ties during the two semesters.
Other approaches use different positive outcomes or look for network characteristics of successful students rather than course-wide correlations.
\citet{dawson_study_2008} correlated students' centrality in course forum networks and their sense of course community as measured by Rovai's Classroom Community Scale \citep{rovai_development_2002}. He found that degree and closeness centrality were positively correlated and betweenness centrality was negatively correlated with greater feelings of classroom community.
However, the data pools 25 courses at undergraduate and graduate levels, different amounts of online integration, and different communication channels, so direct comparisons with these results are difficult.
In a second study \citep{dawson_seeing_2010}, the same author examined student participation in an optional (but encouraged) discussion forum used as a supplement to a large introductory chemistry course. Focusing on the ``ego networks'' (immediate connections, see \citep{hanneman_introduction_2005}) of individual students in the top and bottom 10\% of the grade distribution, he found that students in the high-performing group had larger ego networks, and the members of those networks had higher average grades. Additionally, there was a higher percentage of instructor presence in the networks of high-scoring students, who tended to ask a larger number of conceptual questions. Students in the lower-performing group often asked more fact-based questions, which were typically answered by other students, leading to an unintended ``rich get richer'' effect of the higher-performing students receiving a larger share of instructor attention.
There is thus evidence to support networks' ability to distinguish between at least some types of online dialog structure, and to support links between network centrality and final grade. The latter point has been observed in some physics classrooms \citep{bruun_talking_2013}, but not previously sought in electronic forums.
With some exceptions \citep{aviv_network_2003}, most of the online network studies either give results for a single course offering or pool multiple courses together. They thus provide interesting cases, but it is unclear how stable their results may be from one semester to the next. Since network analysis requires start-up time for data cleaning and analysis, it is also reasonable to ask if it shows anything not already evident from the participation statistics reported by most forum software.
Building on the literature above, we consider three research questions:
\begin{enumerate}
\item How do discussion forum networks differ among multiple semesters of an introductory physics course, and can this information be extracted more easily from participation statistics?
\item How much are student final grades correlated with their centrality in the discussion forum network?
\item Do centrality/grade correlations, if present, strengthen when reducing the network to a more simplified ``backbone?''
\end{enumerate}
The third question has not been considered in any prior educational network studies we could find, but emerged from the high density of our discussion networks (Sec.\ \ref{sec:results}) and recent work piloting network sparsification in physics education research \citep{brewe_using_2016}.
\section{Methods\label{sec:methods}}
Below, we describe data collection, the process of building forum networks, comparison of network measures with final course grades, and how we simplified the network using backbone extraction. Further details on the backbone process, including source code, are in the Supplemental Material.
\subsection{Data collection}
We adopted the CourseNetworking (CN) platform \citep{theCN}, which combines a robust forum tool with features typical of learning management systems. CN is a cloud-based platform, accessible either through a web browser or through apps on iOS and Android mobile devices. We selected CN primarily because the interface is ``student-centric,'' that is, student work occupies the majority of the view, and faculty-focused tools are secondary. Although it is possible to use CN as a standalone LMS, the instructor coupled it with another system (Canvas) and used CN exclusively as a forum. The CN forum has a look and feel similar to other popular social media, so students pick it up with minimal introduction. The forum supports starting threads as either posts or polls and allows hyperlinks, embedded images and videos, and downloadable files.
Polls may be structured as multiple choice, ranking, free response, and other formats, allowing students to create and post ``sample questions'' for one another. Students may also post Reflections (comments) beneath Posts and Polls, and rate Posts and Polls using a 1--3 star system. Our network analysis is based on which students post comments on one another's work, as detailed in the next subsection.
One of us (AG) used the CN forum in three full semesters of a calculus-based introductory mechanics class. Initial enrollment exceeded 150 students each semester, with the majority of the students being engineering and computer science majors. The institutional context is an urban, public university enrolling approximately 30,000 students.
In all three semesters, the university had undergraduate racial/ethnic demographics of 71--72\% white, 10\% African American, 6--7\% Hispanic/Latino, other groups (including international students) 4\% or less.
The majority of students commute, and most work part- or full-time in off campus jobs.
The course is heavily interactive, using Peer Instruction \citep{mazur_peer_1997} and Just-in-Time Teaching (JiTT) \citep{novak_just--time_1999} in the lectures, and group problem solving in the recitations. Students received extra credit (maximum 5\% of the course grade) for use of the forum. (All calculations below involving student grades exclude forum bonus points.)
Further course details are described by \citet{gavrin_students_2016}. In all semesters, CN was introduced on the first day of class with a brief demonstration. In Semesters 1 and 3, the instructor used the CN ``Tasks'' feature to provide an optional weekly discussion topic, which took place in the forum and did not involve extra class time. Finally, in Semester 3, the first-day introduction included mention of a new ability in the software to tag posts with instructor- or user-created ``hashtags.'' In all other respects, the CN implementation was identical across terms.
\subsection{Casting forum data as networks\label{sec:meth-nw}}
The forum transcript contains the following data: Content ID (Post, Poll, or Reflection); a unique student identifier code; the date, time, and text of the post; the number of attachments (pictures or ``other''); and the star rating (pre-2016, number of ``likes'') accumulated by the post or comment.
In this analysis, the fields for text, number of attachments, and stars or Likes are not used; content ID, student code, date, and time are retained. The transcript also groups all reflections below their parent post or poll, showing a threaded view that corresponds to the student view of the forum. The CN software does track the ``nesting'' level of a reply (whether a student hit the reply button for the original post, or for another reply to that post). In practice, most students did not organize their replies in a multi-layer fashion, using a single reply layer even when the content was clearly a response to another comment. For this analysis, we treat each thread as consisting of a root plus single reply level (Fig.\ \ref{fig:nesting}, left). This has consequences for constructing a network---in
some other studies using transcript data \citep{aviv_network_2003,de_laat_investigating_2007}, clear
nested structure in the electronic logs
led the authors to draw links only between a poster and the person to whom they were immediately replying. In our data, accurate nesting information is largely unavailable, requiring a different model for drawing connections between participants in a thread.
\begin{figure}[bthp]
\begin{center}
\includegraphics[width=0.9\linewidth]{fig1}
\caption{Structure of forum transcript data. The CN data largely shows a post with a single reply layer (left), in contrast to studies where more nested structure is retained and informs the network construction (right). \label{fig:nesting}}
\end{center}
\end{figure}
Though it is intuitive that students talking in a forum are interacting with each other somehow, some set of assumptions must be chosen to map the logs into a network object. Prior studies of forum networks have used several different approaches: adding a link between a student commenter and the poster they were directly replying to~\citep{aviv_network_2003,de_laat_investigating_2007}, surveying students at the beginning and end of the semester \citep{yang_effects_2003,cho_social_2007}, or unspecified methods \citep{dawson_study_2008,fahy_patterns_2001}.
We used a bipartite network model, a standard choice when both people (``actors'') and some set of shared activities (``events'') are of interest \citep{borgatti_network_1997}. This approach has been used to model scientific collaboration networks and is starting to see use in online education studies \citep{ouyang_influences_2017,rodriguez_exploring_2011}.
After constructing a bipartite network, the analysis presented here focuses on the actor projection (see Fig.\ \ref{fig:bipartite}), which links together all students who posted together in the same discussion thread \citep{borgatti_analyzing_2011,traxler_coursenetworking_2016}.
For the full-semester forum network, this process creates a dense, heavily-interlinked representation of student nodes, including the instructor's place in this web. People who post in many threads in common with each other will be connected by high-weight links (``edges''), while those who post in only one or two threads can only have low-weight links to others.
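As a concrete illustration, the projection step can be sketched in a few lines of Python using the \texttt{networkx} library (the analysis in this paper used R; the log below is a hypothetical toy example, not course data):

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical forum log: (student, thread) pairs, one per contribution.
posts = [("A", "t1"), ("B", "t1"), ("C", "t1"),
         ("A", "t2"), ("B", "t2"),
         ("D", "t3")]  # D starts a thread that draws no replies

B = nx.Graph()
students = {s for s, _ in posts}
B.add_nodes_from(students, bipartite=0)                # "actor" nodes
B.add_nodes_from({t for _, t in posts}, bipartite=1)   # "event" nodes
B.add_edges_from(posts)

# Actor projection: students are linked with weight = number of shared threads.
G = bipartite.weighted_projected_graph(B, students)
# A and B share threads t1 and t2, so their edge carries weight 2;
# D ends up as an isolate, like student D in the figure.
```

Students who co-post frequently accumulate high-weight edges automatically, while one-time posters pick up at most a few weight-1 links.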
\begin{figure}[tbph]
\begin{center}
\includegraphics[width=0.9\linewidth]{fig2}
\caption{Bipartite network model for transforming forum transcript into a network object. Students are ``actor'' nodes, who post to thread or ``event'' nodes. The actor network projection links together student nodes who posted to the same thread.\label{fig:bipartite}}
\end{center}
\end{figure}
\subsection{Network measures}
Even a small network quickly becomes unwieldy to describe by naming all actors and listing their connections. (But for very small classes this kind of description can be very illuminating, see \citet{de_laat_investigating_2007}.)
Structural measures condense broad features of network objects, and centrality measures quantify the position and importance of a particular node. Basic structure descriptors include the number of nodes ($N$) and edges ($N_E$) and the network density, defined as the ratio of total to possible edges:
\begin{equation}
\rho = \frac{N_E}{N(N-1)/2} \label{eq:dens}
\end{equation}
for an undirected network.
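As a quick numeric check of Eq.\ (\ref{eq:dens}), a hypothetical five-node network with four edges has $\rho = 4/10 = 0.4$:

```python
import networkx as nx

# Toy undirected network (hypothetical): 5 nodes and 4 edges.
G = nx.Graph([(1, 2), (1, 3), (2, 3), (4, 5)])

n, n_e = G.number_of_nodes(), G.number_of_edges()
rho = n_e / (n * (n - 1) / 2)   # observed edges over possible edges
print(rho)                      # 4 / 10 = 0.4
```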
Larger social networks tend to be less dense---mathematically, because the denominator of Eq.\ (\ref{eq:dens}) scales as $N^2$, and practically because any individual can only sustain relationships with so many other people \citep{dunbar_neocortex_1992}. In a forum environment, where both rare and frequent interactions are recorded, higher density values may be expected unless some thresholding process is used (see Sec.\ \ref{sec:meth-bb}).
Uncertainty in the network density can be estimated using bootstrap methods~\citep{snijders_non-parametric_1999}. In this technique, a new sample of $N$ nodes is drawn (with replacement) from the network, and an artificial network is constructed using the connections belonging to those nodes. The density of this artificial network represents one possible value, and the process is repeated many times, generating a distribution of artificial density values. This distribution can be used to calculate a standard error for the observed statistic. We use the bootstrap method of~\citet{snijders_non-parametric_1999} with 5000 samples.
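A simplified version of this node bootstrap can be sketched as follows. This is only a sketch: dyads between duplicated copies of the same node are skipped here, whereas the full procedure of \citet{snijders_non-parametric_1999} resamples them from elsewhere in the network.

```python
import random
import networkx as nx

def bootstrap_density_se(G, n_boot=5000, seed=0):
    """Rough standard error of network density via node resampling."""
    rng = random.Random(seed)
    nodes = list(G.nodes)
    n = len(nodes)
    densities = []
    for _ in range(n_boot):
        # Resample N nodes with replacement, then rebuild the network
        # from the ties those nodes carry.
        sample = [rng.choice(nodes) for _ in range(n)]
        present = possible = 0
        for i in range(n):
            for j in range(i + 1, n):
                u, v = sample[i], sample[j]
                if u == v:
                    continue  # self-pairs from duplicated nodes are skipped
                possible += 1
                present += G.has_edge(u, v)
        densities.append(present / possible if possible else 0.0)
    mean = sum(densities) / n_boot
    var = sum((d - mean) ** 2 for d in densities) / (n_boot - 1)
    return var ** 0.5
```

On a complete graph every resample has density 1, so the estimated standard error collapses to zero; a real forum network gives a nonzero spread.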
Centrality measures describe the position or importance of a node in a network. ``Position'' does not refer to physical location on a network diagram, as plotting algorithms use randomized processes to find reasonable display configurations (for example, minimizing overlap of edges). The number and strength of a node's connections to others, and the extent to which that person is at the core or periphery of the whole network, form the basis of centrality.
The most basic measure is degree centrality, which counts the number of edges (connections) attached to a node. In weighted networks, this concept is often expanded to node strength \citep{hanneman_introduction_2005}, the sum of the weights of a node's edges. In directed networks such as the reduced backbones described below, directionality of links can be tracked using in- and out-degree or in- and out-strength. All of these values account for only the direct neighbors of a node, but in many networks the
wider set of the neighbors' connections can also constrain or boost a node's access to information or resources (for example, study group invitations).
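In \texttt{networkx} terms (a Python sketch on a hypothetical weighted network; the paper's analysis used R), degree and strength differ only in whether edge weights are summed:

```python
import networkx as nx

# Hypothetical weighted network.
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 3), ("A", "C", 1), ("B", "C", 2)])

deg = dict(G.degree())                       # degree: number of edges
strength = dict(G.degree(weight="weight"))   # strength: summed edge weights
# Node A has 2 edges (degree 2) totaling weight 4 (strength 4).
```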
A later generation of centrality measures accounts for both the number of neighbors of a node and the importance of each of those neighbors. Their importance, in turn, depends on that of their own neighbors, requiring simultaneous solution over the whole network. Measures of this type can be computed as eigenvalue problems \citep{bonacich_power_1987}. One of the most popular measures of this type is PageRank, the same base algorithm used by the Google search engine to rank the importance of pages on the internet \citep{brin_anatomy_1998}. PageRank designates a node as being important if a large number of important pages point to it. It was developed for directed networks (on the internet, linking to another page makes a directed network edge), but can be used in undirected networks as well. PageRank is one of the three centrality measures used below.
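Our PageRank values come from the R \texttt{igraph} package; an equivalent computation on a hypothetical weighted network, here via Python's \texttt{networkx}, looks like:

```python
import networkx as nx

# Hypothetical weighted, undirected network (not course data).
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 3), ("A", "C", 1),
                           ("B", "C", 1), ("C", "D", 2)])

# On an undirected graph each edge is treated as reciprocal; edge weights
# bias the random walk, so heavily co-posting pairs pass on more "importance."
# The damping factor alpha=0.85 is PageRank's standard default and is
# unrelated to the backbone parameter alpha used later.
pr = nx.pagerank(G, alpha=0.85, weight="weight")
# Scores sum to 1 across the network and rank nodes by importance.
```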
The other two centrality values we will test, Target Entropy (TE) and Hide \citep{sneppen_hide-and-seek_2005}, have been used in network analyses of classroom interactions between physics students over a university term \citep{bruun_talking_2013}. Target Entropy is a measure of the diversity of a node's information sources; high TE nodes will have many neighbors who themselves talk to a wide array of other students. Conversely, Hide quantifies how difficult it is to ``find'' a node in the network. High-Hide nodes will have few neighbors, who may themselves be more sparsely connected than average.
For each semester, we calculate PageRank, Target Entropy, and Hide for all nodes, with PageRank using the \texttt{igraph} package in R~\citep{R_software,igraph} and the other two measures using code from \citet[][Supplemental Material]{bruun_talking_2013}.
We then calculate Pearson correlations between each of these three centrality measures and final course grade.
Network centralities inherently violate the assumption of independence that underlies typical correlation calculations. To correct for this issue, permutation tests can be used, where the outcome values (here, grades) are repeatedly shuffled among the nodes and the correlation re-calculated, typically thousands or tens of thousands of times~\citep{grunspan_understanding_2014}. The resulting distribution of correlation coefficients gives an estimate of how likely the observed correlation was to occur by chance in a network of the same size and density---in other words, an empirical $p$-value.
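A minimal sketch of such a permutation test, assuming hypothetical centrality and grade vectors, is:

```python
import random

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def permutation_test(centrality, grades, n_perm=10000, seed=0):
    """Empirical two-sided p-value: how often does shuffling grades among
    the nodes produce a correlation at least as extreme as observed?"""
    rng = random.Random(seed)
    r_obs = pearson(centrality, grades)
    shuffled = list(grades)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)  # break the node-to-grade pairing
        hits += abs(pearson(centrality, shuffled)) >= abs(r_obs)
    return r_obs, hits / n_perm
```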
Though network measures are our primary interest, for comparison we also report Pearson correlations between final grade and a student's total contributions to the forum (their combined number of Posts, Polls, and Reflections).
\subsection{Backbone extraction\label{sec:meth-bb}}
The forum networks generated by the process described above are much more dense than typical survey-based networks in a physics class of comparable size~\citep{brewe_changing_2010,traxler_community_2015,sandt_TNT_2016}. Since they are built from thousands of posts, with content ranging from physics-based conversations to ``post count'' boosting, it seems reasonable that not all interactions are equally important. The most active individuals might be connected by some core structure underlying the ``noisy'' full network, and it is these types of structures that backbone extraction is designed to uncover \citep{serrano_extracting_2009}.
Various methods exist for extracting backbones, and for this work we used the Locally Adaptive Network Sparsification (LANS) algorithm of~\citet{foti_nonparametric_2011}, which has been tested on several real-world dense networks including answer distributions from the Force Concept Inventory~\citep{brewe_using_2016}.
LANS is tuned through a parameter $\alpha$: for each node in the network, all edges below the $(1-\alpha)$ percentile of edge weight are discarded. Thus, an $\alpha$ value of 0.05 corresponds to keeping only a node's links at or above the 95th percentile of weight. For a node with edges of weights 1, 5, and 10, a threshold of $\alpha=0.05$ would remove all but the weight-10 edge(s). There is no single value of $\alpha$ that suits all network problems; rather, each analysis should test several values and select one that simplifies the network to the desired density while still preserving necessary information. Here, we test several values of $\alpha$ and re-run permutation correlation tests between centrality
and final grade, investigating whether backbone extraction strengthens the correlations by removing the effect of extraneous low-weight connections.
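The per-node thresholding at the heart of LANS can be sketched as below. This is a simplification of \citet{foti_nonparametric_2011}, which phrases the rule through the empirical CDF of fractional edge weights; within a single node the two orderings agree. An edge survives if it lies in the top $\alpha$ tail for at least one of its endpoints, so even sparsely connected nodes retain their strongest tie.

```python
import networkx as nx

def lans_backbone(G, alpha):
    """Keep an edge if, for at least one endpoint, its weight reaches the
    (1 - alpha) empirical percentile of that node's edge weights."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes)  # preserve isolates
    for node in G.nodes:
        weights = [d["weight"] for _, _, d in G.edges(node, data=True)]
        k = len(weights)
        for _, nbr, d in G.edges(node, data=True):
            # fraction of this node's edges no heavier than this one
            frac = sum(w <= d["weight"] for w in weights) / k
            if frac >= 1 - alpha:
                H.add_edge(node, nbr, weight=d["weight"])
    return H
```

For the weight-(1, 5, 10) node in the example above, $\alpha=0.05$ keeps only the weight-10 edge from that node's side, though a weaker edge can still survive if it is the strongest tie of the neighbor at its other end.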
\section{Results\label{sec:results}}
\subsection{Participation and network statistics}
\begin{table*}
\begin{ruledtabular}
\caption{Forum participation and network statistics by semester. Participation includes students enrolled ($N_{class}$), percent who posted in the forum, total number of threads and replies, and average replies per thread and posts per student plus or minus standard deviation. Network statistics include number of nodes ($N$), isolates, average degree ($\pm SD$), and network density ($\pm$ bootstrapped standard error).\label{tab:part-nw}}
\begin{center}
\begin{tabular}{c|cccccc|cccc}
Semester & $N_{class}$ & Part.\ (\%) & Threads & Replies & Replies/thread & Posts/student & $N$ & Isolates & Degree & Density \\ \hline
1 & 173 & 90 & 936 & 2376 & $2.5\pm 3.6$ & $21\pm16$ & 156 & 12 & $53\pm30$ & $0.32\pm0.03$ \\
2 & 152 & 86 & 912 & 2253 & $2.5\pm2.4$ & $23\pm24$ & 131 & 5 & $29\pm22$ & $0.22\pm0.03$ \\
3 & 166 & 87 & 762 & 2508 & $3.3\pm3.3$ & $22\pm22$ & 145 & 6 & $42\pm29$ & $0.28\pm0.03$
\end{tabular}
\end{center}
\end{ruledtabular}
\end{table*}
Table \ref{tab:part-nw} shows summary participation statistics for the forum. Each semester, 85--90\% of the enrolled students posted at least once. The number of threads was similar between the first two semesters and lower in the third, when the average number of replies per thread increased. We compared thread length and posts per student between semesters using pairwise Wilcoxon tests, which account for the non-normal distribution and presence of outliers in the data. Only Semester 3 had a significantly different ($p < 10^{-5}$) average number of replies per thread.
There were no significant differences in the number of posts per student between semesters.
Average posts per student can mask very different posting patterns, if some semesters have a few high-volume participants and others have a lower but more widespread posting rate.
Figure \ref{fig:par} shows the distribution of forum contributions among students.
To control for varying class size, the figure shows a kernel density estimate, essentially a smoothed histogram normalized to integrate to 1 for each semester.
All three semesters have a peak at low activity (0--15 contributions), a few very active members around 75--100 contributions, and a high-activity ``tail.'' Semester 1 has its largest peak around 25 contributions, while the other two semesters show a less prominent ``shoulder'' there.
\begin{figure}[ptbh]
\begin{center}
\includegraphics[width=0.95\linewidth]{fig3}
\caption{Density distribution of forum activity (combined posts, polls, and comments) for class members by semester. The instructor's contribution totals are included and are 94, 182, and 141 by semester.\label{fig:par}}
\end{center}
\end{figure}
Table \ref{tab:part-nw} also shows descriptive statistics for the forum discussion networks. Nodes are all students who posted at least once, and isolates are students who only posted one thread, which received no replies (see student D in Fig.\ \ref{fig:bipartite}). Though there are fewer isolates in the second semester compared to the first, the average degree of nodes is lower, as is the network density. Because larger networks will tend to have lower density,
the ``natural'' ranking of density values in the three semesters would be (2, 3, 1) for a comparable level of network structure. The observed ranking reverses this.
The aggregate forum network for the whole semester is too dense to be visually useful without extensive filtering of low weight edges (see Fig.\ 1 in \citet{traxler_coursenetworking_2016}). Fig.\ \ref{fig:networks} shows the week 7--8 subset of Semesters 1 and 2,
a time of similar activity in the middle of the semester. Each circle shows a student,
sized by total contributions over the semester and colored by final grade.
Darker connecting lines indicate higher-weight edges, resulting from more common posting by a pair of students.
Though total forum activity was similar between the two semesters, the Semester 2 network is less dense and more dominated by a few high-participation, high-grade students during the time shown.
Semester 3 (not shown) is visually ``between'' the two pictures, with fewer students posting than Semester 1 during this time slice but notably higher density than Semester 2.
\begin{figure*}
\begin{center}
\includegraphics[width=3.3in]{fig4a}
\includegraphics[width=3.3in]{fig4b}
\caption{Forum networks from weeks 7--8 in Semester 1 (left) and Semester 2 (right). Line opacity is scaled by edge weight, so darker lines indicate more threads in common for a student pair. Nodes are sized by total contributions over the semester and colored by grade (red low, yellow medium, blue high). Withdrawals and instructor or CN staff accounts are white, and the instructor's node is labeled ``I.''\label{fig:networks}}
\end{center}
\end{figure*}
\subsection{Centrality/grade correlations}
Table \ref{tab:corr} shows the results of the permutation-tested correlations between final grade and centrality in the discussion forum network. In the first semester, PageRank and Target Entropy are positively correlated with final grade and Hide is negatively correlated, all at small effect sizes (using Cohen's suggested thresholds of (0.1, 0.3, 0.5) for size of effect \citep{cohen_power_1992}). In the second semester, no correlations are significant. The third semester repeats the pattern of semester 1, with the PageRank and Hide correlations now above the threshold for medium effect size.
The table also gives the Pearson correlation between total number of forum contributions and final grade for each semester. This correlation is only significant in Semester 3, at a medium effect size.
\begin{table}[htbp]
\begin{ruledtabular}
\caption{Correlation coefficients between final grade, the network centrality measures PageRank (PR), Target Entropy (TE), and Hide, and forum participation (total threads$+$comments). Asterisks show the level of statistical significance ($^* p < 0.05$, $^{**} p < 0.01$, and $^{***} p < 0.001$).\label{tab:corr}}
\begin{center}
\begin{tabular}{cllll}
Semester & PR & TE & Hide & Participation \\ \hline
1 & 0.18 $^*$ & 0.29 $^{**}$ & -0.27 $^{**}$ & 0.091 \\
2 & 0.13 & 0.17 & -0.18 & 0.12 \\
3 & 0.34 $^{***}$ & 0.28 $^{**}$ & -0.31 $^{**}$ & 0.33 $^{***}$
\end{tabular}
\end{center}
\end{ruledtabular}
\end{table}
\subsection{Backbone extraction}
The goal of backbone extraction is to simplify a network to its essential structure, so high-density forum networks are ideal candidates for this technique.
For each semester of data, we calculated the LANS backbone extraction at values of $\alpha=(0.5, 0.1, 0.05, 0.01)$. Table~\ref{tab:bbstats} shows the number of edges and the fraction of the original total edge weight remaining~\citep{serrano_extracting_2009} for each reduction of the three semesters.
\begin{table}[htbp]
\begin{ruledtabular}
\caption{Edges ($N_E$) and fraction of total original weight (\%$W_T$) at each level of backbone extraction; $\alpha=1$ is the original network.\label{tab:bbstats}}
\begin{center}
\begin{tabular}{lrcrcrc}
& \multicolumn{2}{c}{Semester 1} & \multicolumn{2}{c}{Semester 2} & \multicolumn{2}{c}{Semester 3} \\
$\alpha$ & $N_E$ & \%$W_T$ & $N_E$ & \%$W_T$ & $N_E$ & \%$W_T$ \\ \hline
1 & 7628 & 1.00 & 3704 & 1.00 & 5858 & 1.00 \\
0.5 & 5635 & 0.88 & 2476 & 0.88 & 4042 & 0.88 \\
0.1 & 1173 & 0.36 & 572 & 0.39 & 1000 & 0.39 \\
0.05 & 661 & 0.24 & 334 & 0.26 & 530 & 0.25 \\
0.01 & 194 & 0.09 & 186 & 0.12 & 221 & 0.10
\end{tabular}
\end{center}
\end{ruledtabular}
\end{table}
There are competing criteria for judging a backbone extraction to be appropriate, or a value of $\alpha$ to be suitably small. One heuristic is that a large portion of the original network weight (the sum of its edge weights) should remain~\citep{serrano_extracting_2009}.
Another possible metric is to lower $\alpha$ until the forum network reaches a density or average degree comparable to a survey-based classroom network of similar size~\citep{traxler_community_2015}. By the first measure, values of $\alpha=0.05$ or lower may be cause for concern in this data, since they retain only a quarter or less of the original network weight (a small amount in comparison to the example backbones of \citet{serrano_extracting_2009}). By the second measure, values of $\alpha=0.05$ or 0.01 might be most appropriate.
To resolve this possible contradiction, the ultimate arbiter is what happens to the centrality values of the nodes: their relative distribution and their correlations with students' final grades. For all three semesters, backbone reduction appears to destroy rather than strengthen correlations between network centrality and final grade.
The negative Hide/grade correlation vanishes immediately,
with similar though less severe effects on PageRank and Target Entropy (see Supplemental Material for details). In the third semester, there is some suggestion that backbone reduction does not hurt and may even help the PageRank and Target Entropy correlations down to $\alpha=0.1$. However, the overall effect of the technique is to reduce rather than highlight the useful information.
Figure \ref{fig:boxplot-PR} shows boxplots of the PageRank scores of nodes for the original ($\alpha=1$) and reduced networks for Semester 1. These distributions help to explain why correlations with final grade decrease as supposedly ``extraneous'' links are removed. Backbone extraction flattens calculated centrality values for most nodes in the network as $\alpha$ decreases, with the distribution skewing lower and many nodes eventually occupying the minimum possible PageRank value.
Plots for the other semesters and the other two centrality measures show a similar effect.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.93\linewidth]{fig5.eps}
\caption{Boxplot of PageRank centrality values for the Semester 1 network backbones.
The bottom, middle, and top line of the boxes show first quartile, median, and third quartile.
The upper and lower ``whiskers'' extend to the maximum/minimum values or 1.5 times the inter-quartile range, whichever is larger.
As $\alpha$ decreases, more node centralities cluster at the minimum value.\label{fig:boxplot-PR}}
\end{center}
\end{figure}
\section{Discussion}
\subsection{Network analysis reveals important differences in forum use between semesters}
Our first research question was: {\em How do discussion forum networks differ among multiple semesters of an introductory physics course, and can this information be extracted more easily from participation statistics?} From the analyses summarized in Table \ref{tab:part-nw} and Fig.\ \ref{fig:networks}, we find that the forum networks have different densities, average degree, and breadth of participation between semesters. In particular, Semesters 1 and 3 show a higher level of connectivity that is not easily explained by fluctuations in class size or numbers of discussion threads and comments.
In contrast, non-network participation statistics show few significant differences between the classes, with only Semester 3 having longer discussion threads (but not more activity overall).
Network analysis thus captures essential structure of the discussions that is not available from participation tracking alone, without requiring time-consuming content analysis of posts.
Our second research question was: {\em How much are student final grades correlated with their centrality in the discussion forum network?}
Students who are more central in the forum network tend to score higher in the course, but not in every semester---in particular, the higher-density networks are those in which centrality is correlated with grade.
Target Entropy and Hide seem to be the most reliable predictors,
with PageRank somewhat less consistent. Exploratory analysis shows that in this data set, Target Entropy and Hide are highly correlated, so we focus our discussion below on Target Entropy.
This result builds on the findings of question 1: not only do networks better capture the discussion connectivity, but they track a kind of interaction that benefits students in the course.
Our third research question was: {\em Do centrality/grade correlations, if present, strengthen when reducing the network to a more simplified ``backbone?''}
We predicted that using backbone extraction on the forum networks would clarify correlations between centrality and final grade, by streamlining low-weight links that proliferate in long ``chat'' threads and leaving the most important connections between students. We found that instead, this ``noise'' is part of the signal, and that reducing the forum networks to backbone representations destroys correlations between centrality and grade.
It is possible that
a backbone extraction method developed specifically for bipartite networks \citep{neal_backbone_2014} might improve this result. However, plotting PageRank, Target Entropy, and Hide at successive $\alpha$ levels shows that backbone extraction flattens these centrality distributions and pushes more and more nodes to the minimum value. This issue seems likely to recur even with a change in algorithm.
\subsection{Implications for network research}
One recommendation that emerges from the literature review of this paper is for researchers to carefully document their choices in using network models to describe online learning. Some past studies have used survey methods to gather network data \citep{yang_effects_2003,cho_social_2007}, while others draw from electronic logs \citep{wortham_nodal_1999,reffay_social_2002,aviv_network_2003,de_laat_investigating_2007,dawson_seeing_2010}. Studies in the first category base their approach on earlier social network analysis studies of business organizations, though physics education researchers have tied similar data collection to theoretical frameworks of transformation of participation or communities of practice \citep{brewe_investigating_2012,bruun_networks_2012}.
Studies that derive their data from electronic logs are more common in the CSCL literature, and this is a promising direction given the growing amount of data that is available from learning management systems.
\citet{kortemeyer_analysis_2006} argues that these data open a more natural window onto students' thought processes than think-aloud interviews, where students may be trying to perform to the interviewer's expectations.
For instructors, detecting differences in student participation early in the semester, based on their use of resources like forums, can give early warnings about at-risk students in live or online courses \citep{dawson_seeing_2010,reffay_social_2002}.
A few studies do not specify how they constructed their networks.
Both the data source (survey or logs) and the assumptions made about how to connect the network have consequences for the density and structures that result. In other words, the network model---what constitutes a link between students?---is an interaction model \citep{freeman_centrality_1978}, which makes a statement about what communication processes the researcher thinks are important to learning in a given environment. Our bipartite model generated far denser networks than survey-based classroom studies, even those drawn from weekly sampling (see \citep{bruun_talking_2013}, Supplemental Material Fig.\ 5 for link weight distribution of their densest network). We chose an expansive definition of interaction, and find that centrality in the resulting network is an equally strong predictor of grades as a sparser survey approach.
Our measured correlations between network centrality and grades are also comparable to those found between annotation quality and exam grade in a physics content analysis study \citep{miller_analysis_2016}.
Different online learning studies have used a variety of centrality measures, and it is not at all clear that a ``best'' set will emerge. Only by documenting their assumptions can researchers allow for any hope of comparing between or replicating results.
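As a concrete illustration of how the network model is an interaction model: a bipartite student--thread record can be projected onto a weighted student network by counting shared threads. The sketch below uses hypothetical data and one possible link definition; it is not our exact construction, but it shows how expansive co-participation rules produce the denser networks discussed above.

```python
from collections import defaultdict
from itertools import combinations

def project_students(threads):
    """Project a bipartite student-thread record onto a weighted
    student-student network.

    `threads` maps a thread id to the students who posted in it; two
    students are linked with weight equal to the number of threads they
    share.  This co-participation rule is one illustrative choice among
    many possible link definitions.
    """
    weight = defaultdict(int)
    for students in threads.values():
        for u, v in combinations(sorted(set(students)), 2):
            weight[(u, v)] += 1
    return dict(weight)
```

Even on toy data, every pair of co-participants becomes linked, so a few long threads rapidly drive up density relative to a survey that asks each student to name a handful of contacts.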
\subsection{Implications for online learning research}
As outlined above, the range of data sources, network statistics, and outcome measures makes it challenging to check results between CSCL network analyses. However, we can look for alignment in trends or effect sizes of results. \citet{dawson_seeing_2010}
found that high-performing students had more connections and were more likely to be linked to the instructor. High Target Entropy students in our Semesters 1 and 3, who were more likely to do well in the course, would tend to have a large number of connections like the high-scoring students in Dawson's study. Similarly, low Target Entropy---signaling limited sources of information---would generally correspond to student ego networks with only a few connections.
Though the instructor in our data was not intentionally making an anchored forum with the traits recommended by \citet{guzdial_effective_2000}, the CN interface builds in two of those authors' recommendations: a thread-grouped view with always-visible archives and the ability to choose a post category (through instructor- or user-created ``hashtags''). The authors make a third recommendation of ``anchor'' threads that prompt students with a few key discussion topics and include a link to post their contributions. In Semesters 1 and 3, the instructor created anchor threads via the Tasks feature on CN. Tasks show at the top of the forum page, and were updated once a week in those two semesters.
The instructor did not use these weekly tasks in Semester 2, and this change came with (though we cannot say it was the sole cause of) a loss of network connectivity.
\citet{aviv_network_2003} compared two semesters and found that the level of integration between the forum and class assignments was linked to substantial differences in the amount and cognitive level of discussion by students. Our results match theirs in part: the raw amount of discussion was not necessarily tied to facilitation, but the resulting network between students was more dense and appears to be more educationally useful in the more-structured semesters.
The work by Aviv and collaborators is one of a small but growing number of studies that combine network measures with content analysis of posts \citep{rice_electronic_1987,fahy_epistolary_2002,de_laat_investigating_2007}. Work in physics has shown links between the cognitive level of student comments on homework problems \citep{kortemeyer_analysis_2006} or textbook annotation \citep{miller_analysis_2016} with their grades \citep{kortemeyer_analysis_2006,miller_analysis_2016} or conceptual gains \citep{miller_analysis_2016}.
Content analysis of the CN data, currently in progress, will let us look for interplay between the quantitative network structures and qualitative content of discussions.
\citet{cho_social_2007} and \citet{yang_effects_2003} found that degree centrality positively correlated with final grade in survey-based classroom networks, though in the first study, the correlation was only marginally significant and a significant correlation instead appeared with closeness centrality. Though their network construction methods were different, the correlations found ($r=0.442$ for \citep{cho_social_2007}, $r=0.4$ or 0.46 for \citep{yang_effects_2003}) are similar to the results of this study as well as the correlations with PageRank, Target Entropy, and Hide found by \citet{bruun_talking_2013}.
The closest comparison study in physics is \citet{bruun_talking_2013}, who used weekly surveys to build an aggregate network for an introductory mechanics course. We found that the three centrality measures that emerge as most important in their study---PageRank, Target Entropy, and Hide---are also useful for exploring position/grade correlations in the forum data. Of these, Target Entropy and Hide seem to show the most consistent signal; these measures originate from a theoretical perspective of quantifying information flow \citep{bruun_talking_2013,sneppen_hide-and-seek_2005}, which may be especially suited for describing long post chains in forum networks.
\section{Limitations and future work}
Like most CSCL studies \citep{johnson_synchronous_2006}, this is not a control-group experimental study. One possible reading of our results is that more engaged students tend to participate in the forum, and that high-centrality nodes are merely the ``good'' students (however a reader might define that) who would succeed regardless of the presence of a forum or discussion prompts. Certainly, there is evidence that students who are inclined to talk to others are more likely to benefit from forums \citep{cho_social_2007}. However, the lack of forum/grade correlations in Semester 2 suggests that this explanation is incomplete. First, and as a general argument for forum use, even students who are predisposed to talk about class material can benefit from tools for doing so outside of class hours at commuter schools. Furthermore, the differences in Semester 2 show that even a similarly-active forum may not be equally useful. There is no reason to believe that the fraction of engaged, self-starting students was substantially different between our three semesters, but there are significant differences in network structure and in correlations between forum position and grade. Taken together, these points suggest that not only does instructor facilitation matter, but that network analysis can detect this difference even when participation tracking does not.
Finally, a detailed content analysis is beyond the scope of this paper, but spot-checking suggests that the most active threads (which contribute to higher network density) are a mixture of physics-based and social topics. This further weakens the idea that the correlations we found only show the ``best'' students using the forum for strictly studious purposes. The nature of the conversations and community that arise are more complicated than an on/off-topic dichotomy \citep{rourke_assessing_1999}, so the
next stage of this project will use post text to analyze the discussion differences between semesters and the effect of anchoring by weekly tasks.
Ultimately, content analysis results can be combined with a time-developing picture of the network characteristics \citep{ouyang_influences_2017} to better understand instructor facilitation, the student discussion culture that emerges, and the benefits that both have for learning in physics forums.
\section{Introduction}
Consider the Cayley graph $\Gamma(S_n)$ of the symmetric group $S_n$ with generators given by adjacent transpositions $\pi_i = (i, i + 1), i \in \{1, \mathellipsis n -1\}$. A {\bf sorting network} is a minimal length path in $\Gamma(S_n)$ from the identity permutation $\text{id}_n = 1 2 \cdots n$ to the reverse permutation $\text{rev}_n = n \cdots 2 1$. The length of such paths is $N = {n \choose 2}$.
\medskip
Sorting networks are also known as {\bf reduced decompositions} of the reverse permutation, as any sorting network can equivalently be represented as a minimal length decomposition of the reverse permutation as a product of adjacent transpositions: $\text{rev}_n = \pi_{k_N} \mathellipsis \pi_{k_1}$. In this setting, the path in the Cayley graph is the sequence
$$
\left\{ \pi_{k_i} \cdots \pi_{k_2} \pi_{k_1}: i \in \left\{0, \mathellipsis N \right\} \right\} .
$$
The combinatorics of sorting networks have been studied in detail under this name. There are connections between sorting networks and Schubert calculus, quasisymmetric functions, zonotopal tilings of polygons, and aspects of representation theory. For more background in this direction, see Stanley \cite{stanley1984number}; Bjorner and Brenti \cite{bjorner2006combinatorics}; Garsia \cite{garsia2002saga}; Tenner \cite{tenner2006reduced}; and Manivel \cite{manivel2001symmetric}.
\medskip
In computer science, sorting networks are viewed as $N$-step algorithms for sorting a list of $n$ numbers. At step $i$ of the algorithm, we sort the elements at positions $k_i$ and $k_i + 1$ into increasing order. This process sorts any list in $N$ steps.
\medskip
In order to understand the geometry of sorting networks, we think of the numbers $\{1, \mathellipsis, n \}$ as labelled particles being sorted in time (see Figure \ref{fig:wiring}).
We use the notation $\sigma(x, t) = \pi_{k_\floor{t}} \mathellipsis \pi_{k_2}\pi_{k_1}(x)$ for the position of particle $x$ at time $t$.
\begin{figure}
\centering
\includegraphics[scale=0.7]{Wiring-3.pdf}
\caption{A ``wiring diagram'' for a sorting network with $n = 4$. In
this diagram, trajectories are drawn as continuous curves for
clarity.}
\label{fig:wiring}
\end{figure}
\medskip
Angel, Holroyd, Romik, and Vir\'ag \cite{angel2007random} initiated the study of uniform random sorting networks. Based on numerical evidence, they made striking conjectures about their global behaviour.
\medskip
Their first conjecture concerns the rescaled trajectories of a uniform random sorting network. In this rescaling, space is scaled by $2/n$ and shifted so that particles are located in the interval $[-1, 1]$. Time is scaled by $1/N$ so that the sorting process finishes at time $1$. Specifically, we define the {\bf global trajectory} of particle $x$ by
$$
\sigma_G(x, t) = \frac{2\sigma(x, Nt)}n - 1.
$$
In \cite{angel2007random}, the authors conjectured that global trajectories converge to sine curves (see Figure \ref{fig:sinecurves}). They proved that limiting trajectories are H\"older-$1/2$ continuous with H\"older constant $\sqrt{8}$. To precisely state their conjecture, we use the notation $\sigma^n$ for an $n$-element uniform random sorting network.
\begin{conj}
\label{CJ:sine-curves}
For each $n$ there exist random variables $\{(A^n_x, \Theta^n_x) : x = 1, \mathellipsis n \}$ such that for any $\epsilon > 0$,
$$
\mathbb{P} \left( \max_{x \in [1, n] } \sup_{t \in [0, 1]} \card{ \sigma^n_G(x, t) - A_x^n \sin(\pi t + \Theta_x^n) } > \epsilon \right) \to 0 \qquad \;\text{as}\; n \to \infty.
$$
\end{conj}
\begin{figure}
\centering
\includegraphics[scale=0.8]{RSN-sinecurves.pdf}
\caption{A diagram of selected particle trajectories in a 2000 element sorting network. This image is taken from \cite{angel2007random}.}
\label{fig:sinecurves}
\end{figure}
Their second conjecture concerns the time-$t$ permutation matrices of a uniform sorting network. First, let the Archimedean measure $\mathfrak{Arch}_{1/2}$ on the square $[-1, 1]^2$ be the probability measure with Lebesgue density
$$
f(x, y) = \frac{1}{2\pi \sqrt{1 - x^2 - y^2}}
$$
on the unit disk, and $0$ outside. The measure $\mathfrak{Arch}_{1/2}$ is the projected surface area measure of the $2$-sphere.
For general $t$, define $\mathfrak{Arch}_t$ to be the distribution of
$$
(X, X\cos(\pi t) + Y \sin (\pi t)), \qquad \text{where } (X, Y) \sim \mathfrak{Arch}_{1/2}.
$$
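Since $\mathfrak{Arch}_{1/2}$ is the projected surface-area measure of the $2$-sphere, it can be sampled via Archimedes' theorem: the $z$-coordinate of a uniform point on the sphere is uniform on $[-1, 1]$. The sketch below is only a numerical illustration of the definition:

```python
import math, random

def sample_arch(t, rng=random):
    """Draw one point from Arch_t.

    A uniform point on the unit 2-sphere has uniform z-coordinate
    (Archimedes), so its (x, y)-projection has the Arch_{1/2} density
    1 / (2 pi sqrt(1 - x^2 - y^2)) on the unit disk.  Arch_t is then the
    law of (X, X cos(pi t) + Y sin(pi t)).
    """
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    x, y = r * math.cos(phi), r * math.sin(phi)
    return x, x * math.cos(math.pi * t) + y * math.sin(math.pi * t)
```

At $t = 0$ the sample lies on the diagonal and at $t = 1$ on the antidiagonal, matching the boundary conditions of the sorting process.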
In \cite{angel2007random}, the authors conjectured that the time-$t$ permutation matrix of a uniform sorting network converges to $\mathfrak{Arch}_t$ (see Figure \ref{fig:circles}). They proved that for any $t$, the support of the time-$t$ permutation matrix is contained in a particular octagon with high probability.
\begin{conj}
\label{CJ:matrices}
Consider the random measures
\begin{equation}
\label{E:eta-n-t}
\eta^n_t = \frac{1}n \sum_{i = 1}^n \delta(\sigma^n_G(i, 0), \sigma^n_G(i, t)).
\end{equation}
Here $\delta(x, y)$ is a $\delta$-mass at $(x, y)$. Then for any $t \in [0, 1]$,
$$
\eta^n_t \to \mathfrak{Arch}_t \qquad \text{in probability in the weak topology}.
$$
That is, for any weakly open neighbourhood $O$ of $\mathfrak{Arch}_t$, $\mathbb{P}(\eta^n_t \in O) \to 1$ as $n \to \infty.$
\end{conj}
\begin{figure}
\centering
\includegraphics[scale=0.9]{RSN-circles.pdf}
\caption{A diagram of the measures $\{\eta^n_t : t \in \{0, 1/10, 2/10, \mathellipsis 1\} \}$ in a $500$-element sorting network. The octagon bounds from \cite{angel2007random} are given in blue in this picture. One of our main results in this paper is proving the ellipse bounds (red). As can be seen from the figure, simulations suggest that this bound is tight. This figure is from \cite{angel2007random}.}
\label{fig:circles}
\end{figure}
\medskip
The main results of this paper work towards proving the above two conjectures. To state these results, let $\mathcal{D}$ be the closure of the space of all possible sorting network trajectories under the uniform norm. Let $Y_n \in \mathcal{D}$ be a uniformly chosen particle trajectory from the set of $n$-element sorting network trajectories. That is, if $\sigma^n$ is a uniform $n$-element sorting network, and $I_n$ is an independent uniform random variable on $\{1, \mathellipsis n\}$, then
$$
Y_n = \sigma^n_G(I_n, \cdot).
$$
The following lemma, proven in Section \ref{S:prelim}, guarantees that subsequential limits of $Y_n$ exist in distribution. This is a version of the H\"older continuity result from \cite{angel2007random}.
\begin{lemma}
\label{L:precompact}
(i) The sequence $\{Y_n : n \in \mathbb{N}\}$ is uniformly tight.
(ii) Let $Y$ be a subsequential limit of $\{Y_n : n \in \mathbb{N}\}$ in distribution. Then
$$
\mathbb{P} \left( Y \text{ is H\"older-$1/2$ continuous with H\"older constant $\sqrt{8}$} \;\text{and}\; Y(0) = -Y(1) \right) =1.
$$
Moreover, for each $t \in [0, 1]$, $Y(t)$ is uniformly distributed on $[-1, 1]$.
\end{lemma}
\medskip
We say that a path $y \in \mathcal{D}$ is $g(y)$-Lipschitz if $y$ is absolutely continuous and if for almost every $t$, $|y'(t)| \le g(y(t))$. We can now state the main theorem of this paper.
\begin{theorem}
\label{T:main}
Suppose that $Y$ is a distributional subsequential limit of $Y_n$. Then
$$
\mathbb{P} \left(Y \text{ is } \pi\sqrt{1 - y^2}\text{-Lipschitz}\right) = 1.
$$
\end{theorem}
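A quick numerical sanity check (not part of the proof) confirms that the conjectural sine-curve trajectories are consistent with Theorem \ref{T:main}: for $y(t) = A\sin(\pi t + \theta)$ with $|A| \le 1$, we have $|y'(t)| = \pi\sqrt{A^2 - y^2(t)} \le \pi\sqrt{1 - y^2(t)}$, with equality exactly when $A = 1$.

```python
import math

def lipschitz_slack(amplitude, theta, steps=10000):
    """Return the maximum over a time grid of |y'(t)| - pi*sqrt(1 - y(t)^2)
    for the sine curve y(t) = amplitude * sin(pi t + theta).  A nonpositive
    value means the pi*sqrt(1 - y^2)-Lipschitz bound holds on the grid."""
    worst = -math.inf
    for i in range(steps + 1):
        t = i / steps
        y = amplitude * math.sin(math.pi * t + theta)
        dy = amplitude * math.pi * math.cos(math.pi * t + theta)
        worst = max(worst, abs(dy) - math.pi * math.sqrt(max(0.0, 1.0 - y * y)))
    return worst
```

Full-amplitude sine curves saturate the bound (slack $\approx 0$), while smaller amplitudes have strictly negative slack, which is why the edge trajectories in Theorem \ref{T:main-2} are the rigid ones.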
As a consequence of Theorem \ref{T:main}, we show that any weak limit of the time-$t$ permutation matrices is contained in the elliptical support of $\mathfrak{Arch}_t$. We also show that trajectories near the top of sorting networks are close to sine curves.
\begin{theorem}
\label{T:main-3}
Let $t \in [0, 1]$, and let $\eta_t$ be a subsequential limit of $\eta^n_t$. Then the support of the random measure $\eta_t$ is almost surely contained in the support of $\mathfrak{Arch}_t$.
\end{theorem}
\begin{theorem}
\label{T:main-2}
Suppose that $Y$ is a subsequential limit of $Y_n$. Then for any $\epsilon > 0$,
\begin{align*}
\mathbb{P} \left( Y(0) \ge 1 - \epsilon \;\text{and}\; ||Y(t) - \cos(\pi t)||_u \ge \sqrt{2\epsilon}\right) &= 0, \qquad \;\text{and}\; \\
\mathbb{P} \left(Y(0) \le -1 + \epsilon \;\text{and}\; ||Y(t) + \cos(\pi t)||_u \ge \sqrt{2\epsilon}\right) &= 0.
\end{align*}
Here $||\cdot||_u$ is the uniform norm.
\end{theorem}
\subsection{Local limit theorems}
In order to prove Theorem \ref{T:main}, we analyze the interactions between the local and global structure of sorting networks. As a by-product of this analysis, we prove that in the local limit of random sorting networks, particles have bounded speeds and swap rates. To state these theorems, we first give an informal description of the local limit (a precise description is given in Section \ref{S:prelim}). The existence of this limit was established independently by Angel, Dauvergne, Holroyd, and Vir\'ag \cite{angel2017local}, and by Gorin and Rahman \cite{gorin2017}. Define the local scaling of trajectories
$$
U_n (x, t) = \sigma^n(\floor{n/2} + x, nt) - \floor{n/2}.
$$
With an appropriate notion of convergence, we have that
$$
U_n \stackrel{d}{\to} U,
$$
where $U$ is a random function from $\mathds{Z} \times [0, \infty) \to \mathds{Z}$. $U$ is the local limit centred at particle $\floor{n/2}$. We can also take a local limit centred at particle $\floor{\alpha n}$ for any $\alpha \in (0, 1)$. The result is the process $U$ with time rescaled by a semicircle factor $2\sqrt{\alpha(1 - \alpha)}$. We now state our two main theorems about $U$.
\begin{theorem}
\label{T:main-local}
For every $x \in \mathds{Z}$, the following limit
$$
S(x) = \lim_{t \to \infty} \frac{U(x, t) - U(x, 0)}{t} \qquad \text{exists } \text{almost surely}.
$$
$S(x)$ is a symmetric random variable with distribution $\mu$ independent of $x$. The support of $\mu$ is contained in the interval $[-\pi, \pi]$.
Moreover, the random function $S: \mathds{Z} \to \mathbb{R}$ is stationary and mixing of all orders with respect to the spatial shift $\tau$ given by $\tau S(x) = S(x + 1)$.
\end{theorem}
We call $\mu$ the {\bf local speed distribution}. Theorem \ref{T:main-local} is proven as Corollary \ref{C:speed-exist} and Theorem \ref{T:bounded-speed}.
To state the second theorem, let $Q(x, t)$ be the number of swaps made by particle $x$ in the interval $[0, t]$.
\begin{theorem}
\label{T:swap-rate}
Let $x \in \mathds{Z}$, and let $S(x)$ be as in Theorem \ref{T:main-local}. Then
$$
\lim_{t \to \infty} \frac{Q(x, t)}t = \int |y - S(x)|d\mu(y) \qquad \text{almost surely and in $L^1$.}
$$
\end{theorem}
Note that the speed distribution $\mu$ is not supported on a single point, so the process $U$ is not ergodic in time. In fact, Corollary \ref{C:av-swap-rate} shows that if $X$ and $X'$ are two independent samples from $\mu$, then $\mathbb{E} |X - X' | = 8/\pi$.
\subsection*{Further Work}
In a subsequent paper \cite{dauvergne3}, the first author uses the results of this paper as a starting point for proving all the sorting network conjectures from \cite{angel2007random}. In particular, this proves Conjectures \ref{CJ:sine-curves} and \ref{CJ:matrices}.
\subsection*{Related Work}
Different aspects of random sorting networks have been studied by Angel and Holroyd \cite{angel2010random}; Angel, Gorin and Holroyd \cite{angel2012pattern}; Reiner \cite{reiner2005note}; Tenner \cite{tenner2014expected}; and Fulman and Stein \cite{fulman2014stein}. In much of the previous work on sorting networks, the main tool is a bijection of Edelman and Greene \cite{edelman1987balanced} between Young tableaux of shape $(n-1, n-2, \mathellipsis 1)$ and sorting networks of size $n$. Little \cite{little2003combinatorial} found another bijection between these two sets, and Hamaker and Young \cite{HY} proved that these bijections coincide.
\medskip
Interestingly, in our work and in the subsequent work \cite{dauvergne3}, we are able to work purely with previously known results about sorting networks and avoid direct use of the combinatorics of Young tableaux. As mentioned above, our starting point is the local limit of random sorting networks \cite{angel2017local, gorin2017}, though we only use a few probabilistic facts about this limit and never use any of the determinantal structure proved in \cite{gorin2017}. Other than basic sorting network symmetries, the only other previously known results that enter into our proofs and those in \cite{dauvergne3} are a bound on the longest increasing subsequence in a random sorting network from \cite{angel2007random} and consequences of this bound (H\"older continuity and the permutation matrix ``octagon'' bound).
\medskip
Problems involving limits of sorting networks under a different measure and with different restrictions on the length of the path in $\Gamma(S_n)$ have been considered by Angel, Holroyd, and Romik \cite{angel2009oriented}; Kotowski and Vir\'ag \cite{kotowski2016limits}; Rahman, Vir\'ag, and Vizer \cite{rahman2016geometry}; and Young \cite{young2014markov}.
\medskip
In particular, in \cite{kotowski2016limits} (see also \cite{rahman2016geometry}), the authors prove that trajectories in reduced decompositions of $\text{rev}_n$ of length $n^{2 + \epsilon}$ for some $\epsilon \in (0, 1)$ converge to sine curves, proving the `relaxed' analogues of Conjectures \ref{CJ:sine-curves} and \ref{CJ:matrices}. They do this by using large deviations techniques from the field of interacting particle systems. However, it appears to be very difficult to say anything about random sorting networks using this approach. Instead, both this paper and the subsequent work \cite{dauvergne3} take an entirely different approach based around patching together local swap rate information to deduce global structure.
\subsection*{Overview of the proofs and structure of the paper}
The guiding principle behind our proofs is that we can gain insight into both the local and global structure of random sorting networks by thinking of a large-$n$ sorting network as consisting of many local limit-like blocks. By doing this, we can show that if the local limit behaves too badly, then this contradicts a global bound, and similarly if the local limit behaves well, then this forces global structure.
\medskip
We first show that particle speeds exist and are bounded in the local limit. The existence of particle speeds follows from stationarity properties of the local limit, and is proven in Section \ref{S:existence}.
To show that speeds are bounded, we connect the local and global structure of sorting networks. If the local speed distribution is not supported in $[-\pi, \pi]$, then spatial ergodicity of the local limit guarantees that there are particles travelling with local speed greater than $\pi$ in most places in a typical large-$n$ sorting network $\sigma$. By patching together the movements of these fast particles, we can create a long increasing subsequence in the swap sequence for $\sigma$. This contradicts a theorem from \cite{angel2007random} and finishes the proof of Theorem \ref{T:main-local}. This is done in Section \ref{S:bounded-speed}.
\medskip
In Sections \ref{S:local-swap-rates} and \ref{S:Lipschitz}, we complete the proof of Theorem \ref{T:main} by showing that control over the local speed of particles gives us control over their global speeds. By the bound on local speeds, most particles in a typical large-$n$ sorting network move with local speed in $[-\pi, \pi]$ most of the time. To control what happens when particles do not move with speeds in this range, we first prove a lower bound on the number of swaps that occur when particles do move with speed in $[-\pi, \pi]$ (essentially Theorem \ref{T:swap-rate}). This shows that not too many swaps, and hence not too much particle movement, can occur when particle speeds are not in this range.
\medskip
Theorems \ref{T:main-2} and \ref{T:main-3} follow easily from Theorem \ref{T:main} and are proven in Section \ref{S:corollaries}. In particular, edge trajectories are close to sine curves because a particle starting near the edge can only reach its destination along a $\pi\sqrt{1-y^2}$-Lipschitz trajectory by moving with speed close to $\pi$ most of the time.
\section{Preliminaries}
\label{S:prelim}
In this section we collect necessary facts about sorting networks, and recall a precise definition of the local limit. We also prove Lemma \ref{L:precompact}.
\medskip
A basic fact about sorting networks is that they exhibit time-stationarity. Specifically, we have the following theorem, first observed in \cite{angel2007random}.
\begin{theorem}
\label{T:basic-time-stat}
Let $(K_1, \mathellipsis K_N)$ be the swap sequence of an $n$-element uniform random sorting network $\sigma^n$. We have that
$$
(K_1, \mathellipsis K_N) \stackrel{d}{=} (K_2, \mathellipsis K_N, n - K_1).
$$
\end{theorem}
\noindent This theorem follows from the observation that the map
$$
(k_1, \mathellipsis k_N) \mapsto (k_2, \mathellipsis k_N, n - k_1)
$$
is a bijection on the space of $n$-element sorting network swap sequences. The second theorem that we need bounds the length of the longest increasing subsequence in an initial segment of the swap sequence of a random sorting network. This result is proven in \cite{angel2007random} as Corollary 15 and Lemma 18 (though it is not stated there as a standalone theorem).
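Since each swap location lies in $\{1, \mathellipsis, n - 1\}$, the reflection $k \mapsto n - k$ maps valid locations to valid locations, and the rotation map preserves the sorting-network property. A brute-force sanity check of this (an illustrative sketch, with the $n = 3$ case):

```python
def is_sorting_network(swaps, n):
    """A swap sequence is a sorting network iff applying the adjacent
    transpositions in order carries the identity permutation to the
    reverse permutation in exactly n(n-1)/2 steps."""
    perm = list(range(1, n + 1))
    for k in swaps:
        perm[k - 1], perm[k] = perm[k], perm[k - 1]
    return perm == list(range(n, 0, -1)) and len(swaps) == n * (n - 1) // 2

def rotate(swaps, n):
    """The time-stationarity map: drop the first swap location and append
    its reflection n - k_1."""
    return swaps[1:] + (n - swaps[0],)
```

Repeatedly rotating any sorting network stays inside the set of sorting networks, which is exactly the bijectivity used in the proof above.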
\begin{theorem}
\label{T:subsequence}
Let $L_n(t)$ be the length of the longest increasing subsequence of $(K_1, K_2, \mathellipsis K_\ceil{Nt})$.
Then for any $\epsilon > 0$, we have that
$$
\mathbb{P} \left( \max_{t \in [0, 1]} \big|L_n(t) - n\sqrt{t(2 - t)}\big| > \epsilon n\right) \to 0 \qquad \;\text{as}\; n \to \infty.
$$
\end{theorem}
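The statistic $L_n(t)$ can be computed in $O(N \log N)$ time by patience sorting. The sketch below (strictly increasing subsequences, which is the convention we assume here) is only an illustration of the quantity being bounded:

```python
from bisect import bisect_left

def lis_length(seq):
    """Length of the longest strictly increasing subsequence of seq, via
    patience sorting: tails[j] holds the smallest possible last element of
    an increasing subsequence of length j + 1."""
    tails = []
    for x in seq:
        j = bisect_left(tails, x)
        if j == len(tails):
            tails.append(x)
        else:
            tails[j] = x
    return len(tails)
```

Applied to the prefix $(K_1, \mathellipsis K_{\ceil{Nt}})$ of a swap sequence, this returns the quantity $L_n(t)$ of Theorem \ref{T:subsequence}.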
We also need the result regarding H\"older continuity of trajectories from \cite{angel2007random}.
\begin{theorem}
\label{T:holder}
For any $\epsilon > 0$, the global particle trajectories of $\sigma^n$ satisfy
$$
\lim_{n \to \infty} \mathbb{P} \left( |\sigma^n_G(x, t) - \sigma^n_G(x, s)| \le \sqrt8|t - s|^{1/2} + \epsilon \text{ for all } x \in [1, n], s, t \in [0, 1] \right) = 1.
$$
\end{theorem}
Theorem \ref{T:holder} can be used to immediately prove Lemma \ref{L:precompact}. Recall that $Y_n$ is the trajectory random variable on $n$-element sorting networks.
\begin{proof}[Proof of Lemma \ref{L:precompact}.]
Let
$$
A_\epsilon = \left\{ f \in \mathcal{D} : |f(t) - f(s)| \le \sqrt{8}|t -s|^{1/2} + \epsilon \text{ for all } s, t \in [0, 1]\right\}.
$$
By Theorem \ref{T:holder}, we can find a sequence $\epsilon_n \to 0$ such that
\begin{equation}
\label{E:A-ep-n}
\lim_{n \to \infty} \mathbb{P} \left(Y_n \in A_{\epsilon_n} \right) = 1.
\end{equation}
For a function $f \in \mathcal{D}$, define the $m$th linearization $f_m$ of $f$ by letting $f_m(i/m) = f(i/m)$ for all $i \in \{0, \mathellipsis m \}$, and by setting $f_m$ to be linear at times in between.
\medskip
Now fix $\delta > 0$. There exists a sequence $m_\delta(n) \to \infty$ as $n \to \infty$ such that for large enough $n$, if $f \in A_{\epsilon_n}$, then $f_{m_\delta(n)}$ is H\"older-$1/2$ continuous with H\"older constant $\sqrt{8} + \delta$. Moreover,
there exists a sequence $c_n \to 0$ such that if $f \in A_{\epsilon_n}$, then the uniform norm
\begin{equation}
\label{E:error-lin}
||f - f_{m_\delta(n)}||_u \le c_n.
\end{equation}
For each $n$, define the random variable $Y_{n, m}$ to be the $m$th linearization of $Y_n$. By \eqref{E:A-ep-n} and \eqref{E:error-lin}, a subsequence $Y_{n_i} \to Y$ in distribution if and only if $Y_{n_i, m_\delta(n_i)} \to Y$ in distribution. Moreover, \eqref{E:A-ep-n} implies that the probability that $Y_{n_i, m_\delta(n_i)}$ is H\"older-$1/2$ continuous with H\"older constant $\sqrt{8} + \delta$ approaches $1$ as $i \to \infty$.
Therefore both $Y_{n, m_\delta(n)}$ and $Y_n$ are uniformly tight, and any subsequential limit $Y$ of $Y_n$ must be supported on the set of H\"older-$1/2$ continuous functions with H\"older constant $\sqrt{8} + \delta$.
\medskip
This holds for all $\delta > 0$, giving the H\"older continuity in the statement of the lemma. The rest of part (ii) of Lemma \ref{L:precompact} follows directly from the definition of $Y_n$.
\end{proof}
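The linearization step in the proof above is concrete enough to experiment with numerically. Below is an illustrative Python sketch (our own, not part of the argument) that builds the $m$th linearization of a path and measures its H\"older-$1/2$ constant on a finite grid of times.

```python
import math

def linearize(f, m):
    """Return the m-th linearization of f: agrees with f at times i/m,
    and is linear in between."""
    vals = [f(i / m) for i in range(m + 1)]
    def f_m(t):
        i = min(int(t * m), m - 1)   # index of the segment containing t
        frac = t * m - i             # position within the segment
        return (1 - frac) * vals[i] + frac * vals[i + 1]
    return f_m

def holder_constant(g, grid):
    """Largest |g(t) - g(s)| / |t - s|^{1/2} over pairs of grid times."""
    best = 0.0
    for i, s in enumerate(grid):
        for t in grid[i + 1:]:
            best = max(best, abs(g(t) - g(s)) / math.sqrt(t - s))
    return best

# Test path: f(t) = sqrt(8 t), whose Holder-1/2 constant on [0, 1] is sqrt(8).
f = lambda t: math.sqrt(8 * t)
f_8 = linearize(f, 8)
grid = [k / 100 for k in range(101)]
```

On this grid the linearization does not increase the H\"older constant of the test path, in line with the proof's claim that $f_{m_\delta(n)}$ inherits the constant $\sqrt{8} + \delta$.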
\begin{remark}
For a sorting network $\sigma$, let $\nu_\sigma$ be uniform measure on the trajectories $\{\sigma_G(i, \cdot)\}_{i \in \{1, \dots, n\}}$. Letting $\Omega_n$ be the space of all $n$-element sorting networks, consider the random measure
$$
\nu_n = \frac{1}{\card{\Omega_n}} \sum_{\sigma \in \Omega_n} \nu_\sigma.
$$
Let $\mathcal{M}(\mathcal{D})$ be the space of probability measures on $\mathcal{D}$ with the topology of weak convergence, and let $\mathcal{M}(\mathcal{M}(\mathcal{D}))$ be the space of probability measures on $\mathcal{M}(\mathcal{D})$ with the topology of weak convergence.
Essentially the same proof as that of Lemma \ref{L:precompact} can be used to show that the sequence $\{\nu_n\}_{n \in \mathbb{N}}$ is precompact in $\mathcal{M}(\mathcal{M}(\mathcal{D}))$. This is stronger than the statement that the sequence $\{Y_n\}_{n \in \mathbb{N}}$ is precompact, since the law of $Y_n$ can be thought of as the expectation of $\nu_n$.
Theorems \ref{T:main}, \ref{T:main-3}, and \ref{T:main-2} can also all be stated for subsequential limits of $\nu_n$.
\end{remark}
\subsection{The local limit}
Define a {\bf swap function} as a map $U:\mathds{Z} \times [0, \infty) \to \mathds{Z}$ satisfying the following properties:
\smallskip
\begin{enumerate}[nosep,label=(\roman*)]
\item For each $x$, $U(x,\cdot)$ is cadlag with nearest neighbour jumps.
\item For each $t$, $U(\cdot,t)$ is a bijection from $\mathds{Z}$ to $\mathds{Z}$.
\item Define $U^{-1}(x, t)$ by $U(U^{-1}(x, t),t) = x$. Then for each $x$, $U^{-1}(x, \cdot)$
is a cadlag path with nearest neighbour jumps.
\item For any time $t \in (0, \infty)$ and any $x \in \mathds{Z}$,
$$
\lim_{s \to t^-} U^{-1}(x, s) = U^{-1}(x + 1, t) \qquad \text{if and only if} \qquad \lim_{s \to t^-} U^{-1}(x +1, s) = U^{-1}(x, t).
$$
\end{enumerate}
We think of a swap function as a collection of particle trajectories $\{U(x, \cdot) : x \in \mathds{Z}\}$.
Condition (iv) guarantees that the only way that a particle at position $x$ can move up at time $t$ is if the particle at position $x+1$ moves down. That is, particles move by swapping with their neighbours.
\medskip
Let $\mathcal{A}$ be the space of swap functions endowed with the following topology. A sequence of swap functions $U_n \to U$ if each of the cadlag paths $U_n(x, \cdot) \to U(x, \cdot)$ and $U^{-1}_n(x, \cdot) \to U^{-1}(x, \cdot)$. Convergence of cadlag paths is convergence in the Skorokhod topology. We refer to a random swap function as a \textbf{swap process}.
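For intuition, a swap function restricted to finitely many sites and finitely many swap events can be coded directly; every event exchanges two neighbouring particles, so the neighbour-swap condition (iv) holds by construction. The following toy sketch (notation ours, purely illustrative) evaluates $U(x, t)$ from a list of timed swaps.

```python
def evaluate(swaps, labels, t):
    """Apply all swap events (time, pos) with time <= t to the initial
    configuration `labels`; a swap at pos exchanges the particles at
    positions pos and pos + 1.  Returns the position -> label map."""
    config = list(labels)
    for time, pos in sorted(swaps):
        if time <= t:
            config[pos], config[pos + 1] = config[pos + 1], config[pos]
    return config

def U(swaps, n, x, t):
    """Position at time t of the particle with initial position x (cadlag)."""
    return evaluate(swaps, range(n), t).index(x)

# A toy swap function on 4 sites, with swap events given as (time, position).
swaps = [(0.2, 0), (0.5, 2), (0.7, 1)]
```

For each fixed $t$ the configuration is a permutation of the labels, matching condition (ii).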
\medskip
For a swap function $U$ and a time $t \in (0, \infty)$, define
$$
U(\cdot, t, s) = U(U^{-1}(\cdot, t), t + s).
$$
The function $U(\cdot, t, s)$ is the increment of $U$ from time $t$ to time $t + s$.
\medskip
Now let $\alpha \in (-1, 1)$, and let $\{k_n \}_{n \in \mathbb{N}}$ be any sequence of integers such that $k_n/n \to (1 + \alpha)/2$. Consider the shifted, time-scaled swap process
$$
U_{n}^{k_n}(x, s) = \sigma^n \left(k_n + x, \frac{ns}{\sqrt{1- \alpha^2}} \right) - k_n.
$$
To ensure that $U_{n}^{k_n}$ fits the definition of a swap process, we can extend it to a random function from $\mathds{Z} \times [0, \infty) \to \mathds{Z}$ by letting $U_{n}^{k_n}$ be constant after time $\frac{n-1}{2\sqrt{1 - \alpha^2}}$, and with the convention that $U_{n}^{k_n}(x, s)= x$ whenever $x \notin \{1 - k_n, \dots, n - k_n\}$. In the swap processes $U_{n}^{k_n}$, all particles are labelled by their initial positions.
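Concretely, if $\sigma^n$ is given as its list of swap locations (one swap per unit of time, so that $\sigma^n(x, t)$ is the position of particle $x$ after the first $\floor{t}$ swaps), the shifted, time-scaled process can be computed directly. The sketch below is illustrative only; the choice $k_n = \mathrm{round}(n(1+\alpha)/2)$ is one admissible sequence with $k_n/n \to (1+\alpha)/2$.

```python
import math

def sigma(network, n, x, t):
    """Position of particle x after the first floor(t) swaps of the
    sorting network (a list of swap locations in {1, ..., n-1})."""
    config = list(range(1, n + 1))
    for pos in network[:int(t)]:
        config[pos - 1], config[pos] = config[pos], config[pos - 1]
    return config.index(x) + 1

def U_local(network, n, alpha, x, s):
    """Shifted, time-scaled swap process U_n^{k_n}(x, s)."""
    k = round(n * (1 + alpha) / 2)
    if not 1 - k <= x <= n - k:
        return x  # particles outside the window are frozen by convention
    t = n * s / math.sqrt(1 - alpha ** 2)
    return sigma(network, n, k + x, t) - k

# One of the two 3-element sorting networks: swap locations 1, 2, 1.
network = [1, 2, 1]
```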
The following is shown in \cite{angel2017local}, and also essentially in \cite{gorin2017}.
\begin{theorem}
\label{T:local}
There exists a swap process $U$ such that for any $\alpha, k_n$ satisfying the above conditions,
$$
U_{n}^{k_n} \stackrel{d}{\to} U \qquad \;\text{as}\; \; n \to \infty.
$$
The swap process $U$ has the following properties:
\begin{enumerate}[nosep,label=(\roman*)]
\item $U$ is stationary and mixing of all orders with respect to the spatial shift $\tau U(x, t) = U(x + 1, t) - 1$.
\item $U$ has stationary increments in time: for any $t \ge 0$, the
process $U(\cdot, t, s)_{s\ge 0}$ has the same law
as $U(\cdot,s)_{s\geq 0}$.
\item $U$ is symmetric: $U(\cdot, \cdot) \stackrel{d}{=} -U(-\cdot, \cdot)$.
\item For any $t \in [0, \infty)$, $\mathbb{P}\left(\text{There exists } x \in \mathds{Z} \text{ such that } U(x, t) \neq \lim_{s \to t^-} U(x, s)\right) = 0$.
\item $U(y, 0) = y$ for all $y \in \mathds{Z}$.
\end{enumerate}
\smallskip
Moreover, for any sequence of times $\{t_n : n \in \mathbb{N}\}$ such that $(n-1)/2 - t_n \to \infty$ as $n \to \infty$,
$$
U^{k_n}_n (\cdot, t_n, \cdot) \stackrel{d}{\to} U \qquad \;\text{as}\; n \to \infty.
$$
\end{theorem}
We will need one more result from \cite{angel2017local} regarding the expected number of swaps at a given location in $U$.
Let $C(x, y)$ be the {\bf swap time} for particles $x$ and $y$ in the limit $U$. That is,
$$
C(x, y) = \sup \{ t: [U(x, t) - U(y, t)][U(x, 0) - U(y, 0)] > 0 \}.
$$
If $x$ and $y$ never cross in $U$, then $C(x, y) = \infty.$
On the event that $C(x, y) < \infty$, we can define the swap location
$$
B(x, y) = \min \{U(x, C(x, y)), U(y, C(x, y))\}.
$$
For $i \in \mathds{Z}$ and $t \in [0, \infty)$, we can now define
$$
W(i, t) = \card{ \{(x, y) : B(x, y) = i, C(x, y) \le t \}}.
$$
The function $W(i, t)$ counts the number of swaps at location $i$ up to time $t$.
\begin{theorem}
\label{T:swaps}
Let $i \in \mathds{Z}$ and $t \in [0, \infty)$. Then $\mathbb{E} W(i, t) = \frac{4t}{\pi}.$
\end{theorem}
\begin{section}{Existence of local speeds}
\label{S:existence}
In this section, we prove that particles have speeds in the local limit $U$. To do this, we first show that the environment of $U$ is stationary from the point of view of a particle.
\begin{theorem}
\label{T:time-stat}
For any particle $y$, and any time $t \in [0, \infty)$, we have that
\begin{equation}
\label{E:complicated-stat}
\left[U(U(y, t) + \cdot, t, s) - U(y, t) \right]_{s \ge 0} \stackrel{d}{=} \left[ U(y + \cdot, s) - U(y, 0) \right]_{s \ge 0}.
\end{equation}
This implies that all particle trajectories have stationary increments. That is, for any $y \in \mathds{Z}$ and $t \in [0, \infty)$, we have that
\begin{equation}
\label{E:stat-inc}
\left[U(y, t + s) - U(y, t) \right]_{s \ge 0} \stackrel{d}{=} \left[U(y, s) - U(y, 0) \right]_{s \ge 0}.
\end{equation}
\end{theorem}
\begin{proof}
We will first prove \eqref{E:stat-inc} and then discuss what changes need to be made to prove the more general version \eqref{E:complicated-stat}. By spatial stationarity it suffices to prove \eqref{E:stat-inc} when $y = 0$. Let $A$ be any set in the Borel $\sigma$-algebra generated by the Skorokhod topology on cadlag functions from $[0, \infty)$ to $\mathds{Z}$. We compute
\begin{equation}
\label{E:U-split}
\mathbb{P} \left(\left[U(0, t + s) - U(0, t) \right]_{s \ge 0} \in A \right)
\end{equation}
by splitting up the event above depending on the value of $U(0, t)$. This gives that \eqref{E:U-split} is equal to
\begin{align*}
\sum_{j \in \mathds{Z}} \mathbb{P} \left(\left[U(0, t + s) - j \right]_{s \ge 0} \in A, \;\;U(0, t) = j \right) &= \sum_{j \in \mathds{Z}} \mathbb{P} \left(U( - j, t + s)_{s \ge 0} \in A, \;\; U(- j, t) = 0 \right) \\
&= \mathbb{P} \left(U(U^{-1}(0, t), t + s)_{s \ge 0} \in A \right) \\
&= \mathbb{P} \left(U(0, t, s)_{s \ge 0} \in A\right) \\
&= \mathbb{P} (U(0, s)_{s \ge 0} \in A).
\end{align*}
The first equality above follows from spatial stationarity of $U$. The third equality is the definition of the time increment of $U$, and the final equality follows from the stationarity of time increments.
\medskip
The proof of \eqref{E:complicated-stat} is notationally more cumbersome, but follows the exact same steps in terms of splitting up the sum into the values of $U(y, t)$ and then applying spatial stationarity and stationarity of time increments.
\end{proof}
Now recall that $Q(x, t)$ is the number of swaps made by particle $x$ in $U$ in the interval $[0, t]$. Specifically,
$$
Q(x, t) = \card{\left\{ r \in [0, t] : \lim_{s \to r^-} U(x, s) \ne U(x, r)\right\}}.
$$
In order to apply the ergodic theorem to prove that particles have speeds, it is necessary to show that $Q(x, t) \in L^1$. To do this, we use a spatial stationarity argument to relate $Q(x, t)$ to $W(0, t)$, the number of swaps at location $0$ up to time $t$. Recall that $C(x, y)$ is the swap time of particles $x$ and $y$, and $B(x, y)$ is the swap location.
\begin{lemma}
\label{L:finite-exp-chunks}
In the local limit $U$, for any $x$ we have $\mathbb{E} Q(x, t) = 8t/\pi$.
\end{lemma}
\begin{proof}
We have
\begin{align*}
\mathbb{E} Q(x, t) &= \sum_{\substack{y \in \mathds{Z} \\ y \ne x}} \sum_{i \in \mathds{Z}} \mathbb{P}(C(x, y) \le t, B(x, y) = i) \\
&= \sum_{\substack{y \in \mathds{Z} \\ y \ne x}} \; \sum_{i \in \mathds{Z}} \mathbb{P}(C(x - i , y - i) \le t, B(x - i, y - i) = 0) \\
&= \sum_{\substack{y, z \in \mathds{Z} \\ y \ne z}} \mathbb{P}(C(z , y) \le t, B(z, y) = 0) \\
&= 2 \mathbb{E} W(0, t).
\end{align*}
The second equality here comes from spatial stationarity of the process $U$. By Theorem \ref{T:swaps}, $\mathbb{E} W(0, t) = 4t/\pi$, completing the proof.
\end{proof}
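The factor of $2$ in the proof reflects a finite identity: every swap is counted once by its location and once by each of the two particles involved, so the particle tally is exactly twice the location tally. A toy numerical check (notation ours):

```python
from collections import Counter

def swap_counts(swaps):
    """Given swap events (time, location, particle_a, particle_b), return
    the per-location counts W and the per-particle counts Q."""
    W, Q = Counter(), Counter()
    for _, loc, a, b in swaps:
        W[loc] += 1   # one count for the swap's location
        Q[a] += 1     # one count for each participating particle
        Q[b] += 1
    return W, Q

events = [(0.1, 0, 'p', 'q'), (0.4, 1, 'q', 'r'), (0.9, 0, 'p', 'r')]
W, Q = swap_counts(events)
```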
We can now prove every part of Theorem \ref{T:main-local} except for the fact that the speed distribution is bounded. First define
\begin{equation}
\label{E:St}
S(x, t) = \frac{U(x, t) - U(x, 0)}{t}
\end{equation}
to be the average speed of particle $x$ up to time $t$.
\begin{corollary}
\label{C:speed-exist}
For every $x \in \mathds{Z}$, the limit
$$
S(x) = \lim_{t \to \infty} S(x, t) \qquad \text{exists } \text{almost surely}.
$$
$S(x)$ is a symmetric random variable with distribution $\mu$ independent of $x$.
Moreover, the random function $S: \mathds{Z} \to \mathbb{R}$ is stationary and mixing of all orders with respect to the spatial shift $\tau$ given by $\tau S(x) = S(x + 1)$.
\end{corollary}
\begin{proof}
The function $U(x, 1) - U(x, 0)$ is in $L^1$ by Lemma \ref{L:finite-exp-chunks} since $|U(x, 1) - U(x, 0)|$ is bounded by $Q(x, 1)$. The existence of the limit follows from the stationarity of particle increments in Theorem \ref{T:time-stat} and Birkhoff's ergodic theorem.
\medskip
The fact that the distribution of $S(x)$ is independent of $x$ follows from spatial stationarity of $U$, and all the properties of $S(\cdot)$ come from the corresponding properties of $U$.
\end{proof}
\end{section}
\section{Boundedness of local speeds}
\label{S:bounded-speed}
In this section, we prove that the local speed distribution $\mu$ is bounded, completing the proof of Theorem \ref{T:main-local}.
\begin{theorem}
\label{T:bounded-speed}
$\text{supp}(\mu) \subset [-\pi, \pi]$.
\end{theorem}
We first prove a lemma concerning the existence of fast particles at finite times in the local limit $U$.
\begin{lemma}
\label{L:box-implies-diagonals}
For every $\epsilon > 0$, we have that
$$
\liminf_{t\to \infty}\mathbb{P} (\text{There exists } x < 0 \;\text{such that}\; \; U(x, t) > (\pi + \epsilon)t ) < 1.
$$
\end{lemma}
\begin{proof} Let $A_{t, \epsilon}$ be the event in the statement of the lemma.
Suppose that for some $\epsilon > 0$,
$
\lim_{t\to \infty}\mathbb{P} (A_{t, \epsilon}) = 1.
$
Fix $\delta > 0$, and let $h \in \mathbb{N}$ be large enough so that
\begin{equation}
\label{E:h-lower-bd}
\frac{h \epsilon}{2(\pi + \epsilon)\sqrt{1 - \delta^2}} \ge 2.
\end{equation}
For each $\alpha \in (-1, 1)$, define
$$
t_{\alpha, n} = \floor{{n \choose 2}\frac{\arcsin(\alpha) + \pi/2}{\pi + \epsilon/2}}\;, \qquad t_{\alpha,n}^+=t_{\alpha,n}+ \frac{hn}{(\pi + \epsilon)\sqrt{1 - \alpha^2}} ,\qquad j_{\alpha, n} = \floor{\frac{n(1+\alpha)}2}.
$$
For each $n \in \mathbb{N}$ and $\alpha \in (-1, 1)$, consider the random variable
$$
Z_{\alpha, n} := \mathbbm{1} \bigg( \exists x \in [1, n] \text{ such that }
\sigma^n \left(x, t_{\alpha, n} \right) < j_{\alpha, n}, \;
\sigma^n \left(x, t_{\alpha, n}^+ \right ) >j_{\alpha, n}+ h \bigg).
$$
When $Z_{\alpha, n}=1$, there exists an increasing subsequence of swaps in the time interval $[t_{\alpha,n},t_{\alpha,n}^+]$ at locations $j_{\alpha,n},j_{\alpha,n}+1, \ldots, j_{\alpha,n}+h-1$. Consider the set
$$
A_n=\{\alpha\in(-1+\delta,1-\delta)\,:\,j_{\alpha,n}\in h \mathbb Z\}.
$$
A straightforward computation shows that for all large enough $n$, when $\alpha,\alpha'\in A_n$ and $j_{\alpha,n}\not=j_{\alpha',n}$ then the time intervals
$[t_{\alpha,n}, t_{\alpha,n}^+]$ and $[t_{\alpha',n}, t_{\alpha',n}^+]$ are disjoint (this is where condition \eqref{E:h-lower-bd} is used). This implies that if $\alpha_1 < \alpha_2 < \dots < \alpha_m$ is a sequence in $A_n$ with $j_{\alpha_i, n} \neq j_{\alpha_{i+1}, n}$ for all $i$, and $Z_{\alpha_i, n} = 1$ for all $i$, then the increasing subsequences for each $\alpha_i$ can be concatenated to get an increasing subsequence of length $mh$ in the time interval $[t_{\alpha_1, n} , t_{\alpha_m, n}^+]$.
Now we can also assume that $n$ is large enough so that
$$
t_{\alpha, n}^+ \le {n \choose 2}\frac{\pi}{\pi + \epsilon/2}
$$
whenever $\alpha \in A_n$. Since the intervals $\{\alpha:j_{\alpha,n}=j\}$ are of
Lebesgue measure $2/n$, the longest increasing subsequence in the first $\pi/(\pi + \epsilon/2)$ fraction of swaps satisfies
\begin{align}
\label{E:L-not-exp-yet}
L _n \left( \frac{\pi}{\pi + \epsilon/2} \right) &\ge \frac{nh}2\int_{A_n} Z_{\alpha, n} d\alpha.
\end{align}
Observe that the boundary of $A_{t, \epsilon}$ in the space of swap functions is contained in the set of swap functions that have a swap at time $t$. This is a null set in $\mathbb{P}_U$ by Theorem \ref{T:local} (iv), so $A_{t, \epsilon}$ is a set of continuity for $U$. The weak convergence in Theorem \ref{T:local} then implies that $\mathbb{E} Z_{\alpha, n} \to \mathbb{P}(A_{h/(\pi + \epsilon), \epsilon})$ for every $\alpha$.
\medskip
Choose $h$ large enough so that $\mathbb{P}(A_{h/(\pi + \epsilon), \epsilon}) > 1 - \delta$. Then $\lim_{n \to \infty} \mathbb{E} Z_{\alpha, n} \ge 1-\delta$ for every $\alpha$, and so by bounded convergence,
\begin{align*}
2-2\delta = \lim_{n \to \infty} \int_{-1+\delta}^{1-\delta}\frac{\mathbb{E} Z_{\alpha,n}}{1-\delta} {\wedge} 1\,d\alpha \le \liminf_{n\to\infty} \int_{A_n} \frac{\mathbb{E} Z_{\alpha,n}}{1-\delta}\,d\alpha+ (2-2\delta) \left(1-\frac{1}h \right).
\end{align*}
The last term is the limiting Lebesgue measure of $(-1+\delta,1-\delta)\setminus A_n$. Taking expectations in \eqref{E:L-not-exp-yet} and applying Fubini's Theorem then gives that for large enough $n$,
\begin{align*}
\mathbb{E} L _n \left( \frac{\pi}{\pi + \epsilon/2} \right) \ge \frac{n}{2}(2-2\delta)(1-\delta).
\end{align*}
Taking $\delta$ small enough given $\epsilon$ then contradicts Theorem \ref{T:subsequence}.
\end{proof}
We now show that the condition in Lemma \ref{L:box-implies-diagonals} implies that the speed is bounded, completing the proof of Theorem \ref{T:main-local}.
\begin{proof}[Proof of Theorem \ref{T:main-local}.]
Suppose that $\mu(\pi, \infty) > 0$, and fix $\epsilon > 0$ such that $\mu(\pi + 3\epsilon, \infty) > 0$. Then for any fixed $\delta > 0$, by spatial ergodicity we can find an $m \in \mathbb{N}$ such that
$$
\mathbb{P} \left( \text{There exists } x \in [-m, -1] \;\text{such that}\; \; S(x) > \pi + 3\epsilon \right) > 1 - \delta/2.
$$
Then there exist some $t_0 > 0$ such that for every $t > t_0$,
$$
\mathbb{P} \left( \text{There exists } x \in [-m, -1] \;\text{such that}\; \; \frac{U(x, t) - x}t > \pi + 2\epsilon \right) > 1 - \delta.
$$
If $t$ is chosen large enough so that $(\pi + 2\epsilon)t - m > (\pi + \epsilon)t$, then the above inequality immediately implies that
$
\mathbb{P} \left(A_{t, \epsilon} \right) > 1 - \delta.
$
As $\delta$ was chosen arbitrarily, this contradicts Lemma \ref{L:box-implies-diagonals}.
\end{proof}
\section{Local swap rates}
\label{S:local-swap-rates}
The main goal of this section is to prove Theorem \ref{T:swap-rate}. We first recall the statement here.
\begin{customthm}{1.8}
Let $x \in \mathds{Z}$, and let $Q(x, t)$ be the number of swaps made by particle $x$ up to time $t$ in the local limit $U$. Let $S(x)$ be the asymptotic speed of $x$, and let $\mu$ be the local speed distribution, as in Theorem \ref{T:main-local}. Then
$$
\lim_{t \to \infty} \frac{Q(x, t)}t = \int |y - S(x)|d\mu(y) \qquad \text{almost surely and in $L^1$.}
$$
\end{customthm}
This theorem allows us to control the number of swaps in $\sigma^n$ between ``typical particles'' moving with local speed at most $\pi + \epsilon$. This will imply a lower bound on the number of swaps in a random sorting network made by particles with speed greater than $\pi + \epsilon$, which in turn will allow us to prove that limiting trajectories are $\pi\sqrt{1-y^2}$-Lipschitz. Specifically, we will need the following corollary in our proof of Theorem \ref{T:main}:
\begin{corollary}
\label{C:av-swap-rate}
(i) For any $x$, the following statement holds almost surely for the local limit $U$.
$$
\lim_{t \to \infty} \frac{Q(x, t)}t \in [0, \pi].
$$
(ii) Let $X$ and $X'$ be two independent samples from the local speed distribution $\mu$. Then
$$
\mathbb{E} |X - X'| = \mathbb{E}\left[\lim_{t \to \infty} \frac{Q(0, t)}{t} \right]= \frac{8}{\pi}.
$$
\end{corollary}
The intuition behind Theorem \ref{T:swap-rate} is very simple. Since particles in $U$ have asymptotic speeds and $U$ is spatially ergodic, we can imagine $U$ as a collection of particles moving along linear trajectories with independent slopes sampled from the local speed distribution. With this heuristic, the quantity $\frac{Q(x, t)}t$ can be estimated as a sum of two integrals for large $t$:
$$
\int_{S(x)}^\pi \mu(y, \infty) dy + \int_{-\pi}^{S(x)} \mu(-\infty, y)dy.
$$
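These two integrals indeed recover the limit in Theorem \ref{T:swap-rate}. Since $\text{supp}(\mu) \subset [-\pi, \pi]$, Fubini's theorem gives
$$
\int_{S(x)}^\pi \mu(y, \infty) dy = \int \int_{S(x)}^\pi \mathbbm{1}(w > y) \, dy \, d\mu(w) = \int (w - S(x))^+ d\mu(w),
$$
and similarly $\int_{-\pi}^{S(x)} \mu(-\infty, y)dy = \int (w - S(x))^- d\mu(w)$, so the two terms sum to $\int |w - S(x)| d\mu(w)$.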
\medskip
To make this intuition rigorous, we first prove a corresponding theorem for lines, and then use these lines to approximate the trajectory of $x$ in $U$. Let $L$ be a line with slope $c \in \mathbb{R}$ given by the formula $L(t) = ct + d$. Define
\begin{equation*}
C^+(x, L, t) = \begin{cases}
1, \qquad \text{if} \;\; U(x, 0) \le L(0) \;\text{and}\; U(x, t) > L(t).\\
0, \qquad \text{else}.
\end{cases}
\end{equation*}
Define $C^+(L, t)$, the number of net upcrossings of the line $L$ in the interval $[0, t]$, by $C^+(L, t) = \sum_{x \in \mathds{Z}} C^+(x, L, t)$. We then have the following proposition:
\begin{prop}
\label{P:line-rate} Let $L(t) = ct + d$. Then
\begin{equation*}
\frac{C^+(L, t)}t \to \int (y - c)^+d\mu(y) \qquad \text{almost surely} \text{ and in $L^1$}.
\end{equation*}
\end{prop}
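A finite-window analogue of the upcrossing count is straightforward to compute for explicit trajectories; the sketch below (names ours) is purely illustrative.

```python
def upcrossings(trajs, c, d, t):
    """Count trajectories x with x(0) <= d and x(t) > c*t + d: a finite
    analogue of C^+(L, t) for the line L(s) = c*s + d."""
    return sum(1 for x in trajs if x(0) <= d and x(t) > c * t + d)

# Three linear toy trajectories with slopes -1, 0, and 2, against L(s) = s:
# only the trajectory with slope exceeding c = 1 upcrosses the line.
trajs = [lambda t: -t, lambda t: 0.0, lambda t: -1 + 2 * t]
```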
We first show that the limit always exists.
\begin{lemma}
\label{P:line-rate-exists}
For any line $L(t) = ct + d$, there exists a random $C^+(L) \in L^1(\mathbb{R})$ such that
\begin{equation*}
\frac{C^+(L, t)}t \to C^+(L) \qquad \text{almost surely} \text{ and in $L^1$}.
\end{equation*}
\end{lemma}
To prove this lemma, we introduce a space-time shift $\tau_{a, t}$ on the space of swap functions. Here $a \in \mathds{Z}$ and $t \in [0, \infty)$. The shift $\tau_{a, t}$ shifts the swap function $U$ by $a$ in space and then looks at the increment starting from time $t$:
$$
\tau_{a, t}U(x, s) = U(x + a, t, s) - a.
$$
\begin{proof}
This will follow immediately from Kingman's subadditive ergodic theorem. We first consider the case $c \ne 0$. Define $\tau := \tau_{\text{sgn}(c), |c|^{-1}}$. Since $U$ is stationary in both space and time, $\tau U \stackrel{d}{=} U$.
\medskip
Now let $f_n(U)= C^+(L, |c|^{-1}n)$. The sequence $f_n$ satisfies a subadditivity relation with respect to the shift $\tau$ given by
$$
f_{n+m}(U) \le f_n(U) + f_m(\tau^nU).
$$
Moreover, $f_n(U) \in L^1$ for all $n$. To see this, observe that if $x \le L(0)$ and $U(x, t) > L(t)$, then either $L(t) < x \le L(0)$, or else $x$ swaps at a time $s \in [0, t]$ at some position in the spatial interval $[L(t) -1, L(0)]$ (if $c < 0$) or $[L(0) -1, L(t)]$ (if $c > 0)$.
\medskip
The expected number of particles that can make swaps in this region is finite by Theorem \ref{T:swaps}, and the number of particles $x$ with $L(t) < x \le L(0)$ is bounded by $|c|t + 1$.
\medskip
Therefore by Kingman's subadditive ergodic theorem, the sequence $f_n(U)/n$ has an almost sure and $L^1$ limit $C^+(L) \in L^1(\mathbb{R})$, and therefore so does $\frac{C^+(L, t)}t$. To modify this in the case when $c=0$, consider the usual time-shift by 1.
\end{proof}
To find the value of the limit in Lemma \ref{P:line-rate-exists} we introduce a collection of approximations of $C^+(L, t)$. Let $A(x, \epsilon, s)$ be the event that
$$
\left| \frac{U(x, t') - U(x, 0)}{t'} - S(x) \right| < \epsilon \text{ for all } t' > s.
$$
For any $s \in [0, \infty)$ and $\epsilon > 0$, define
$$
C^+_{s, \epsilon}(L, t) = \sum_{x \in \mathds{Z}} C^+(x, L, t) \mathbbm{1}(A(x, \epsilon, s)).
$$
\begin{lemma}
\label{L:C-ep-C-1}
For any $\epsilon > 0$, we have that
\begin{equation}
\label{E:lim-want}
\lim_{t \to \infty} \frac{C^+(L, t)}t = \lim_{s \to \infty} \limsup_{t \to \infty} \frac{C^+_{s, \epsilon}(L, t)}t \qquad \text{almost surely}.
\end{equation}
\end{lemma}
\begin{proof} We show that for any $\epsilon > 0$,
\begin{equation}
\label{E:C-ep-C}
\lim_{s \to \infty} \liminf_{t \to \infty} \frac{C^+(L, t) - C^+_{s, \epsilon}(L, t)}t = 0\qquad \text{almost surely}.
\end{equation}
We have
\begin{align}
\nonumber
0 &\le \frac{C^+(L, t)}t -\frac{C^+_{s, \epsilon}(L, t)}t \\
\nonumber
&\le \frac{1}t\sum_{x = d- (2\pi-c)t}^{d} C^+(x, L, t) \mathbbm{1}(A(x, \epsilon, s)^c) + \frac{1}t\sum_{x < d - (2\pi-c)t} C^+(x, L, t) \\
\label{E:two-terms-1} &\le \frac{1}{t}\sum_{x = d- (2\pi -c)t}^{d}\mathbbm{1}(A(x, \epsilon, s)^c) +\frac{1}t\sum_{x < d - (2\pi-c)t} C^+(x, L, t).
\end{align}
Define
$$
B(L, t) = \sum_{x < d - (2\pi-c)t} C^+(x, L, t), \qquad \text{ and let } \qquad B(L) = \liminf_{t \to \infty} \frac{B(L, t)}t.
$$
Birkhoff's ergodic theorem implies that as $t$ approaches $\infty$, the first term in \eqref{E:two-terms-1} approaches $|2\pi - c|\mathbb{P}(A(x, \epsilon, s)^c)$.
Therefore
\begin{align*}
0 \le \liminf_{t \to \infty} \frac{C^+(L, t) - C^+_{s, \epsilon}(L, t)}t \le |2\pi - c|\mathbb{P}(A(x, \epsilon, s)^c) + B(L).
\end{align*}
We have that $\mathbb{P}(A(x, \epsilon, s)^c) \to 0$ as $s \to \infty$. Therefore to prove \eqref{E:C-ep-C}, it is enough to show that $B(L) = 0$ almost surely. We first show that it is almost surely constant.
\medskip
Letting $[L + i](t) = ct + d + i$, we claim that $|B(L, t) - B(L + i, t)| \le 2i.$
To see this when $i > 0$, first observe that the only particles that can upcross $L + i$ but not $L$ in the interval $[0, t]$ are those that start between $L(0)$ and $[L +i](0)$. There are at most $i$ such particles. Similarly, the only particles that can upcross $L$ but not $L+i$ in the interval $[0, t]$ are those that are between $L(t)$ and $[L+i](t)$ at time $t$. Again, there are at most $i$ such particles. This proves the desired bound. Similar reasoning works when $i < 0$.
\medskip
Therefore the limit $B(L + i)$ is the same for all $i$, and so the random variable $B(L)$ lies in the invariant $\sigma$-algebra of the spatial shift. By spatial ergodicity, $B(L)$ is almost surely constant. We have that
\begin{align}
\nonumber \mathbb{P} \left( \frac{B(L, t)}t > 0 \right) &= \mathbb{P} \big(\text{There exists $x < d -(2\pi -c)t$ such that $U(x, t) > d + ct$} \big) \\
\label{E:prob-B-+}
&=\mathbb{P} \left(\text{There exists $x < 0$ such that $U(x, t) > 2\pi t$} \right),
\end{align}
where the second equality follows by spatial stationarity. By Lemma \ref{L:box-implies-diagonals}, \eqref{E:prob-B-+} does not approach $1$ as $t \to \infty$, and thus $B(L) = 0$ almost surely. This proves \eqref{E:C-ep-C}.
\medskip
The almost sure existence of the limit $\lim_{t \to \infty} C^+(L, t)/t$ by Lemma \ref{P:line-rate-exists} allows us to rearrange \eqref{E:C-ep-C} to get \eqref{E:lim-want}.
\end{proof}
We now establish bounds on the limits of $C^+_{s, \epsilon}$. For this we need the following lemma about sequences. The proof is straightforward, so we omit it.
\begin{lemma}
\label{L:sequence-avg}
Let $(a_n : n \in \mathbb{N})$ be a sequence such that
$$
\lim_{n \to \infty} \frac{1}{n + 1} \sum_{i=0}^n a_i = a.
$$
Then for any sequence $j(m) \in \mathds{Z}_+$ such that $j(m)/m \to k > 0$, and any $c > 0$, we have that
$$
\lim_{m \to \infty} \frac{1}{m + 1} \sum_{i = j(m)}^{j(m) + \floor{cm}} a_i = ca.
$$
\end{lemma}
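As a sanity check, the conclusion of Lemma \ref{L:sequence-avg} is easy to test numerically; the sketch below (ours) uses $a_i = 1 + (-1)^i$, which has Ces\`aro average $a = 1$, with $j(m) = m$ (so $k = 1$) and $c = 2$, for which the window average should approach $ca = 2$.

```python
def window_average(a, m, j, c):
    """(1/(m+1)) * sum of a(i) for i = j, ..., j + floor(c*m)."""
    return sum(a(i) for i in range(j, j + int(c * m) + 1)) / (m + 1)

# a_i = 1 + (-1)^i oscillates between 0 and 2, with Cesaro average 1.
a = lambda i: 1 + (-1) ** i
approx = window_average(a, 10 ** 5, 10 ** 5, 2.0)
```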
\begin{lemma}
\label{L:C-ep-C}
For any line $L(t) = ct + d$ and any $\epsilon > 0$, we have that
\begin{align}
\label{E:limsup}
\int_{c + \epsilon}^\infty \mu(y, \infty) dy \le
\lim_{s \to \infty} \limsup_{t \to \infty} \frac{C^+_{s, \epsilon}(L, t)}t& \le \int_{c - \epsilon}^\infty \mu(y, \infty)dy.\end{align}
\end{lemma}
\begin{proof}
We will prove this when $d = 0$. The result follows for all other $d$ by the spatial stationarity of $U$.
We can write $C^+_{s, \epsilon}(L, t)$ as follows.
\begin{align*}
C^+_{s, \epsilon}(L, t) &= \sum_{ x < 0} \mathbbm{1}(U(x, t) - x > ct - x)\mathbbm{1}(A(x, \epsilon, s)) \\
&= \sum_{ x < 0} \mathbbm{1}\left(\frac{U(x, t) - x}t - c > -\frac{x}t \right)\mathbbm{1}(A(x, \epsilon, s)).
\end{align*}
On the event $A(x, \epsilon, s)$, for $t > s$, $\frac{U(x, t) - x}t \in (S(x) - \epsilon, S(x) + \epsilon)$. This gives the following two almost sure bounds on $C^+_{s, \epsilon}(L, t).$
\begin{align*}
C^+_{s, \epsilon}(L, t) &\le \sum_{x < 0} \mathbbm{1}\left(S(x) - (c - \epsilon) > -\frac{x}t \right)\mathbbm{1}(A(x, \epsilon, s)) \\
&\le \sum_{x \in [-(\pi - c + \epsilon)t, 0)} \mathbbm{1}\left(S(x) - (c - \epsilon) > -\frac{x}t \right)\qquad \;\text{and}\; \\
C^+_{s, \epsilon}(L, t) &\ge \sum_{x \in [-(\pi - c - \epsilon)t, 0)} \mathbbm{1}\left(S(x) - (c + \epsilon) > -\frac{x}t \right)\mathbbm{1}(A(x, \epsilon, s)).
\end{align*}
Here the change in the range of $x$-values follows since $S(x) \in [-\pi, \pi]$ almost surely for every $x$ by Theorem \ref{T:bounded-speed}.
We now prove the upper bound in \eqref{E:limsup}.
For any $m \in \mathds{Z}$, we have the following:
\begin{align*}
&\sum_{x \in [-(\pi - c + \epsilon)t, 0)} \mathbbm{1}\left(S(x) - (c - \epsilon) > -\frac{x}t \right) \\
&\qquad \qquad \le \sum_{z=0}^{\lceil (\pi -c + \epsilon)m \rceil} \sum_{x = 0}^{\ceil{t/m} - 1} \mathbbm{1}\left(S\big(-x - z\ceil{t/m}\big) > \frac{z}m + c - \epsilon \right).
\end{align*}
Applying Birkhoff's ergodic theorem and Lemma \ref{L:sequence-avg} implies that for each $z$, almost surely as $t \to \infty$, we have
\begin{align*}
\frac{1}{t} \sum_{x = 0}^{\ceil{t/m} -1} \mathbbm{1}\left(S\big(-x - z\ceil{t/m}\big) > \frac{z}m + c - \epsilon\right) &\to \frac{1}m\mu \left(\frac{z}{m} + c - \epsilon, \infty \right).
\end{align*}
Summing over $z$, we get that
$$
\limsup_{t \to \infty} \frac{C^+_{s, \epsilon}(L, t)}t \le \frac{1}{m} \sum_{z=0}^{\ceil{(\pi + \epsilon - c)m}}\mu \left(\frac{z}{m} + c - \epsilon, \infty \right) \qquad \text{almost surely} \text{ for every $m$}.
$$
Taking $m \to \infty$, the above Riemann sum converges to the corresponding integral, proving the upper bound in \eqref{E:limsup}. To prove the lower bound in \eqref{E:limsup}, first observe that for any $m \in \mathds{Z}_+$, we have
\begin{align*}
\sum_{x \in [-(\pi - c - \epsilon)t, 0)} &\mathbbm{1}\left(S(x) - (c + \epsilon) > -\frac{x}t \right)\mathbbm{1}(A(x, \epsilon, s)) \\
&\ge \sum_{z=0}^{ \floor{(\pi - c - \epsilon)m} - 1} \sum_{x = 0}^{\floor{t/m} - 1} \mathbbm{1}\bigg(S(x + z\floor{t/m}) - (c + \epsilon) > \frac{z+1}{m}\;\text{and}\; \\
&\qquad \qquad |S(x + z\floor{t/m}, t') - S(x + z\floor{t/m})| < \epsilon \text{ for all } t' > s\bigg).
\end{align*}
In the above inequality, we have used the notation $S(x, t)$ for the average speed of particle $x$ up to time $t$ (see the definition in Equation \eqref{E:St}).
From here we take $t \to \infty$, and apply Lemma \ref{L:sequence-avg} as in the proof of the upper bound in \eqref{E:limsup}. This gives the almost sure bound
\begin{equation}
\label{E:almost-reimann-ready}
\limsup_{t \to \infty} \frac{C^+_{s, \epsilon}(L, t)}t \ge \frac{1}{m} \sum_{z=0}^{\floor{(\pi - \epsilon - c)m} -1}\mathbb{P} \left(S(0) - (c + \epsilon) \ge \frac{z + 1}{m} \;\text{and}\; |S(0, t') - S(0)| < \epsilon \text{ for all } t' > s\right).
\end{equation}
Now observe that as $s \to \infty$,
$$
\mathbb{P} \left(S(0) - (c + \epsilon) \ge \frac{z + 1}{m} \;\text{and}\; |S(0, t') - S(0)| < \epsilon \text{ for all } t' > s\right) \to \mu \left(\frac{z + 1}{m} + c + \epsilon , \infty\right).
$$
Therefore taking $s \to \infty$ in \eqref{E:almost-reimann-ready}, and then letting $m$ tend to infinity proves the lower bound in \eqref{E:limsup}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{P:line-rate}.]
Applying Lemmas \ref{L:C-ep-C-1} and \ref{L:C-ep-C} gives that for any $\epsilon > 0$, that
$$
\int_{c + \epsilon}^\infty \mu(y, \infty)dy \le \lim_{t \to \infty} \frac{C^+(L, t)}t \le \int_{c - \epsilon}^\infty \mu(y, \infty)dy
$$
almost surely.
Taking $\epsilon$ to $0$ then completes the proof of almost sure convergence.
The fact that convergence also takes place in $L^1$ follows from Lemma \ref{P:line-rate-exists}.
\end{proof}
We can analogously define $C^-(L, t)$ as the number of net downcrossings of the line $L$ by the time $t$, and define $C(L, t) = C^+(L, t) + C^-(L, t)$. By the symmetry of the local limit $U$, analogues of Proposition \ref{P:line-rate} hold for $C^-(L, t)$ and $C(L, t)$.
\begin{theorem}
\label{T:line-rate}
Let $L(t) = ct + d$. Then as $t \to \infty$, we have that
$$
\frac{C^+(L, t)}{t} \to \int(y-c)^+d\mu(y), \;\;\;\; \frac{C^-(L, t)}{t} \to \int(y-c)^-d\mu(y), \; \;\;\; \frac{C(L, t)}{t} \to \int|y-c|d\mu(y).
$$
All three convergences are both almost sure and in $L^1$.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{T:swap-rate}.]
By Theorem \ref{T:time-stat}, the process of swap times for the particle $x$ is stationary in time. Moreover, $Q(x, 1) \in L^1$ by Lemma \ref{L:finite-exp-chunks}. Therefore we can apply Birkhoff's ergodic theorem to get that $Q(x, t)/t$ converges both almost surely and in $L^1$ to a (possibly random) limit. We now identify that limit.
\medskip
Let $L_q(t) = qt + x$. By Theorem \ref{T:line-rate}, we have that with probability $1$,
\begin{equation}
\label{E:C-L-q}
\frac{C(L_q, t)}{t} \to \int |y - q|d\mu(y) \qquad \text{for every $q \in \mathbb{Q}$}.
\end{equation}
At time $t$, there are fewer than $|U(x, t) - L_q(t)| + 1$ particles that either have crossed the line $L_q(t)$ by time $t$ but have not swapped with particle $x$, or have swapped with particle $x$ by time $t$ but have not crossed the line $L_q(t)$. Therefore we have that almost surely,
\begin{equation}
\label{E:t-finite}
\left|\frac{Q(x,t) - C(L_q, t)}{t}\right| \le \frac{|U(x, t) - L_q(t)| + 1}t = |S(x) - q| + o(1).
\end{equation}
The last equality follows from Theorem \ref{T:main-local}. Letting $t \to \infty$ in \eqref{E:t-finite}, the convergence in \eqref{E:C-L-q} implies that almost surely,
\begin{equation*}
\left| \lim_{t \to \infty} \frac{Q(x, t)}t - \int |y - q|d\mu(y) \right| \le |S(x) - q| \qquad \text{for every $q \in \mathbb{Q}$}.
\end{equation*}
By the continuity of the function $F(z) = \int |y - z|d\mu(y)$, this implies that
\[
\lim_{t \to \infty} \frac{Q(x, t)}t = \int |y - S(x)|d\mu(y) \qquad \text{almost surely}. \qquad \qedhere
\]
\end{proof}
\begin{proof}[Proof of Corollary \ref{C:av-swap-rate}.]
For (i), observe that for $c \in [-\pi, \pi]$,
we have that
\begin{align*}
2\int |y - c| d\mu(y) &= \int \big( |y - c| + |y + c| \big) d\mu(y) \\
&\le \int\big( |y - \pi| + |y + \pi| \big) d\mu(y) = \int 2\pi d\mu(y) = 2\pi.
\end{align*}
Here the first equality follows from the symmetry of $\mu$, and the second equality follows since $\text{supp}(\mu) \subset [-\pi, \pi]$.
Therefore by Theorem \ref{T:bounded-speed} and Theorem \ref{T:swap-rate},
$$
0 \le \lim_{t \to \infty} \frac{Q(x, t)}t \le \pi \qquad \text{almost surely}.
$$
For (ii), Birkhoff's ergodic theorem implies that
\begin{equation*}
\mathbb{E}\left[\lim_{t \to \infty} \frac{Q(x, t)}{t}\right] = \mathbb{E} Q(x, 1).
\end{equation*}
Here the left hand side above is equal to $\mathbb{E}|X - X'|$ by Theorem \ref{T:swap-rate}, where $X$ and $X'$ are independent random variables with distribution $\mu$. By Lemma \ref{L:finite-exp-chunks}, the right hand side is equal to $8/\pi$.
\end{proof}
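The constant $8/\pi$ can be checked numerically under the additional assumption, made only for this illustration and not asserted in the proofs above, that $\mu$ is the arcsine law on $[-\pi, \pi]$, i.e.~the law of $\pi\cos\Theta$ with $\Theta$ uniform on $[0, \pi]$; this assumption is consistent with the symmetry and support properties of $\mu$ used above. A short Monte Carlo sketch:

```python
import numpy as np

# Monte Carlo sanity check of the constant 8/pi. Assumption (for illustration
# only): mu is the arcsine law on [-pi, pi], i.e. the law of pi*cos(Theta)
# with Theta uniform on [0, pi].
rng = np.random.default_rng(0)
n = 2_000_000
x = np.pi * np.cos(rng.uniform(0.0, np.pi, size=n))
x_prime = np.pi * np.cos(rng.uniform(0.0, np.pi, size=n))
estimate = np.abs(x - x_prime).mean()   # should be close to 8/pi ~ 2.5465

assert abs(estimate - 8 / np.pi) < 0.01
```

Under the same assumption one can also compute the constant exactly: $\mathbb{E}|X - X'| = \frac{\pi}{\pi^2}\int_0^\pi\!\!\int_0^\pi |\cos\theta - \cos\varphi|\, d\theta\, d\varphi = \frac{\pi}{\pi^2} \cdot 8 = 8/\pi$.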
\section{Limiting trajectories are Lipschitz}
\label{S:Lipschitz}
Recall that a path $y(t)$ is $\pi\sqrt{1 - y^2}$-Lipschitz if it is absolutely continuous, and if $|y'(t)| \le \pi\sqrt{1 - y^2(t)}$ for almost every time $t$. The goal of this section is to prove Theorem \ref{T:main}, showing that weak limits of the trajectory random variables $Y_n$ are supported on $\pi\sqrt{1 - y^2}$-Lipschitz paths.
\medskip
Theorem \ref{T:bounded-speed} allows us to conclude that most particles move with bounded local speed most of the time. In order to translate this into a global speed bound we need to bound the amount of particle movement during the times that particles are not moving with bounded speed.
For $p, q \in [0, 1]$, and a path $y: [0, 1] \to [-1, 1]$, define
$$
m_{p ,q}(y) = \inf \{ |y(t)| : t \in [p, q]\}.
$$
\begin{lemma}
\label{L:bad-box}
For any $\epsilon > 0$ and $q \in (0, 1]$, the following holds:
$$
\frac{1}n \mathbb{E} \sum_{x=1}^n \left(|\sigma^n_G(x, q) - \sigma^n_G(x, 0)| - (\pi + \epsilon)q\sqrt{1 - m^2_{0, q}(\sigma^n_G(x, \cdot))} \right)^+ \to 0 \qquad \;\text{as}\; n \to \infty.
$$
\end{lemma}
\begin{proof}
We first reduce the lemma to a statement about the number of swaps made by fast-moving particles. Fix $t \in [0, \infty)$. For $j \in \{0, 1, \mathellipsis, \floor{(n-1)q/2t} \}$, define
$$
\Delta_n = \frac{2(t + 1)}{n-1}, \qquad t_{n, j} = \frac{\floor{ntj}}{{n \choose 2}} \qquad \;\text{and}\; \qquad t^+_{n, j} = t_{n, j} + \Delta_n.
$$
These intervals $[t_{n, j}, t^+_{n, j}]$ cover the interval $[0, q]$ with some overlap. The reason for adding the overlap is so that all intervals have the same length and start at multiples of ${n \choose 2}^{-1}$; this is necessary for applying the time-stationarity of sorting networks. Let $Q_n(x, t_1, t_2)$ be the number of swaps made by particle $x$ in the interval $[t_1, t_2]$ in the random sorting network $\sigma^n$. Then for large enough $n$, we have that
\begin{align}
\nonumber
&\frac{1}n \mathbb{E} \sum_{x=1}^n \left(|\sigma^n_G(x, q) - \sigma^n_G(x, 0)| - (\pi + \epsilon)q\sqrt{1 - m^2_{0, q}(\sigma^n_G(x, \cdot))} \right)^+ \\
\nonumber
&\qquad \le \frac{1}n \mathbb{E} \sum_{x=1}^n \sum_{j=0}^{\floor{(n-1)q/2t}} \left(|\sigma^n_G(x, t_{n, j + 1}) - \sigma^n_G(x, t_{n, j})| - \frac{2(\pi + \epsilon/2)t}{n-1}\sqrt{1 - m^2_{0, q}(\sigma^n_G(x, \cdot))} \right)^+.
\end{align}
This inequality comes from using the convexity of the function $f(x) = x^+$ and the triangle inequality. We can now bound the distance $|\sigma^n_G(x, t_{n, j + 1}) - \sigma^n_G(x, t_{n, j})|$ by the number of swaps $Q_n(x, t_{n, j}, t_{n, j+1})$ made by particle $x$ in that interval. Then using that $Q_n(x, t, \cdot)$ is an increasing function and that $t_{n, j+1} \le t_{n, j}^+$, the right hand side above can be bounded by
\begin{align}
\label{E:ready-to-Q-bd}
\frac{1}n \mathbb{E} \sum_{x=1}^n \sum_{j=0}^{\floor{(n-1)q/2t}} \left(\frac{2Q_n(x, t_{n, j}, t_{n, j}^+)}n - \frac{2(\pi + \epsilon/2)t}{n-1}\sqrt{1 - \left[\sigma^n_G(x, t_{n, j})\right]^2} \right)^+
\end{align}
\medskip
It is enough to show that for any $\delta > 0$, there exists a $t \in [0, \infty)$ such that for large enough $n$, \eqref{E:ready-to-Q-bd} is bounded by $\delta$. By time stationarity of sorting networks, it is enough to show that there exists some $t$ such that for all large enough $n$, the quantity
\begin{align*}
\mathcal{F}_n := \mathbb{E} \sum_{x=1}^n \left(\frac{2Q_n(x, 0, \Delta_n)}n - \frac{2(\pi + \epsilon/2)t}{n-1}\sqrt{1 - \left(\frac{2x}n - 1\right)^2} \right)^+
\end{align*}
is at most $t\delta$.
For $\alpha \in (-1, 1)$, let $j_{n, \alpha} = \floor{\frac{n(\alpha + 1)}2}$, and define the random variable
$$
Z^n_{\alpha, t} = Q_n\left(j_{n, \alpha}, 0, \Delta_n\right)\mathbbm{1} \left( Q_n\left(j_{n, \alpha}, 0, \Delta_n\right) < (\pi + \epsilon/2)t\sqrt{1 - \left(\frac{2j_{n, \alpha}}n - 1\right)^2} \right).
$$
We can bound $\mathcal{F}_n$ in terms of the random variables $Z_{\alpha, t}^n$:
\begin{align}
\nonumber
\nonumber
\mathcal{F}_n &\le \frac{2}n \mathbb{E} \sum_{x=1}^n Q_n\left(x, 0, \Delta_n\right) \mathbbm{1}\left( Q_n\left(x, 0, \Delta_n\right) \ge (\pi + \epsilon/2)t\sqrt{1 - \left(\frac{2x}n - 1\right)^2}\right) \\
\label{E:Z-want}
&= 4t - \mathbb{E} \int_{-1}^1 Z^n_{\alpha, t} d\alpha.
\end{align}
It remains to bound $\mathbb{E} \int_{-1}^1 Z^n_{\alpha, t} d\alpha.$ Recall that in the local limit, $Q(0, t)$ is the number of swaps made by particle $0$ in the interval $[0, t]$. Define the random variable
$$
Z(t) := Q(0, t + 1) \mathbbm{1}(Q(0, t + 1) < (\pi + \epsilon/2)t).
$$
We can think of $Z$ as a function on the product space $\mathcal{A} \times [0, \infty)$, where $\mathcal{A}$ is the space of swap functions. Thought of in this way, if $U_n \to U$ in $\mathcal{A}$, and $t_n \to t$, then $Z (U_n, t_n) \to Z(U, t)$ as long as particle $0$ does not swap in $U$ at time $t + 1$.
\medskip
For any $t$, the probability that the local limit $U$ has a swap at time $t$ is $0$ by Theorem \ref{T:local} (iv). Therefore by the weak convergence in Theorem \ref{T:main-local}, since $2j_{n, \alpha}/n - 1 \to \alpha$ as $n \to \infty$ for any $\alpha \in (-1, 1)$, we get that
$$
Z^n_{\alpha, t} \stackrel{d}{\to} Z(\sqrt{1- \alpha^2}t).
$$
Therefore by the bounded convergence theorem, we have that
\begin{equation}
\label{E:Z-to-A}
\lim_{n \to \infty} \int_{-1}^1 \mathbb{E} Z^n_{\alpha, t} d\alpha = \int_{-1}^1 \mathbb{E} Z(\sqrt{1 - \alpha^2}t) d\alpha.
\end{equation}
Now by Theorem \ref{T:swap-rate} and Corollary \ref{C:av-swap-rate}(i), $Z(t)/t \to \int |S(0) - y|d \mu(y)$ almost surely, and so by the bounded convergence theorem and Corollary \ref{C:av-swap-rate}(ii), $\mathbb{E} Z(t)/t \to 8/\pi.$ Therefore we have that
$$
\int_{-1}^1 \frac{\mathbb{E} Z(\sqrt{1 - \alpha^2}t)}t d\alpha \xrightarrow[\;\; t \to \infty \;\;]{} \frac{8}\pi \int_{-1}^1 \sqrt{1 - \alpha^2} d\alpha = 4,
$$
where the bounded convergence theorem is once again used to establish the limit. Combining the above convergence with \eqref{E:Z-to-A} implies that there exists a $t$ such that for all large enough $n$,
$$
\int_{-1}^1 \mathbb{E} Z^n_{\alpha, t} d\alpha \ge (4 - \delta)t.
$$
Using Fubini's Theorem and then plugging the above inequality into \eqref{E:Z-want} then gives that $\mathcal{F}_n \le t\delta$ for large enough $n$, as desired.
\end{proof}
\begin{proof}[Proof of Theorem \ref{T:main}.]
By Lemma \ref{L:bad-box} and Markov's inequality, for any $\epsilon > 0$ and $q \in (0, 1]$, we have that
\begin{equation}
\label{E:translate}
\lim_{n \to \infty} \mathbb{P} \left(|Y_n(q) - Y_n(0)| \le \pi q\sqrt{1 - m^2_{0, q}(Y_n)} + \epsilon \right) = 1.
\end{equation}
Moreover, by the time-stationarity of sorting networks, the above holds with any pair $p < q$ inserted in place of $0$ and $q$. Now for any $p, q \in [0, 1]$ and $C \in \mathbb{R}$, the set $\{|f(p) - f(q)| \le C\}$ is closed in $\mathcal{D}$ under the uniform norm. Therefore since any subsequential limit $Y$ of $Y_n$ is supported on continuous paths by Lemma \ref{L:precompact}, by \eqref{E:translate},
$$
\mathbb{P} \left(\text{For all } p, q \in \mathbb{Q} \cap [0, 1] \;\text{and}\; k \in \mathbb{N}, \text{ we have } \frac{|Y(q) - Y(p)|}{q - p} \le \pi\sqrt{1 - m^2_{p, q}(Y)} + \frac{1}k \right) = 1.
$$
Since $Y$ is almost surely continuous, this implies that
$$
\mathbb{P} \left(\text{For all } s, t \in [0, 1], \text{ we have } \frac{|Y(t) - Y(s)|}{|t - s|} \le \pi\sqrt{1 - m^2_{s, t}(Y)} \right) = 1.
$$
This condition is equivalent to $Y$ being almost surely $\pi\sqrt{1 - y^2}$-Lipschitz.
\end{proof}
\section{Elliptical support and sine curve trajectories at the edge}
\label{S:corollaries}
In this section, we use Theorem \ref{T:main} to prove Theorems \ref{T:main-3} and \ref{T:main-2}. Recall that $\eta^n_t$ is the permutation matrix measure at time $t$ in a uniform $n$-element sorting network.
Recall the statement of Theorem \ref{T:main-3}:
\begin{customthm}{1.5}
Let $t \in [0, 1]$, and let $\eta_t$ be a subsequential limit of $\eta^n_t$. Then the support of the random measure $\eta_t$ is almost surely contained in the support of $\mathfrak{Arch}_t$.
\end{customthm}
\begin{proof}
Fix $t$, and suppose that $\eta_t$ is the distributional limit of the subsequence $\eta^{n_i}_t$. Since the sequence $Y_n$ is precompact by Lemma \ref{L:precompact}, there must be a subsubsequence $Y_{n_{i_k}}$ which converges in distribution to a random variable $Y$ in $\mathcal{D}$. Then the support of the random measure $\eta_t$ is almost surely contained in the support of the law of $(Y(0), Y(t))$. Therefore we just need to check that $(Y(0), Y(t)) \in \text{supp}(\mathfrak{Arch}_t)$ almost surely.
\medskip
For $x \in [-1, 1]$, let $\mathbb{P}_{x}$ be the conditional distribution of $Y$ given that $Y(0) = x$. By Theorem \ref{T:main}, for almost every $x \in [-1, 1]$,
\begin{equation}
\label{E:circle}
\mathbb{P}_{x}(Y\text{ is }\pi\sqrt{1-y^2}\text{-Lipschitz}) = 1.
\end{equation}
Now if $y(t)$ is a $\pi\sqrt{1 - y^2}$-Lipschitz path with $y(0) = -y(1) = x$, then $y$ is bounded between the solutions of the initial value problems $f'(t) = \pm \pi\sqrt{1 - f^2(t)} ; f(0) = x$ and $f'(t) = \pm \pi\sqrt{1 - f^2(t)} ; f(1) = - x$; these solutions are the sine curves $f(t) = \cos(\pi t \pm \arccos x)$. Therefore for any $t \in [0, 1]$, we have that
\begin{equation}
\label{E:two-ineq}
x \cos (\pi t) - \sqrt{1- x^2} \sin (\pi t) \le y(t) \le x \cos (\pi t) + \sqrt{1- x^2} \sin (\pi t).
\end{equation}
By using that $\mathfrak{Arch}_t \stackrel{d}{=} (X, X \cos(\pi t) + Z \sin( \pi t))$, where $(X, Z) \stackrel{d}{=} \mathfrak{Arch}_{1/2}$, and by using that the support of $\mathfrak{Arch}_{1/2}$ is the unit disk, the above inequality implies that $(x, y(t)) \in \text{supp}(\mathfrak{Arch}_t).$ Combining this with \eqref{E:circle} completes the proof.
\end{proof}
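The extremal curves in \eqref{E:two-ineq} are the sine curves $x\cos(\pi t) \pm \sqrt{1 - x^2}\sin(\pi t)$, which saturate the Lipschitz bound. A quick numerical sketch (an illustration only, not part of the argument) confirming that they satisfy $|y'(t)| = \pi\sqrt{1 - y^2(t)}$:

```python
import numpy as np

def check_extremal(x, sign=+1, n=10_000):
    """Max deviation between |y'(t)| and pi*sqrt(1 - y(t)^2) for the
    extremal sine curve y(t) = x*cos(pi t) + sign*sqrt(1-x^2)*sin(pi t)."""
    t = np.linspace(0.0, 1.0, n)
    y = x * np.cos(np.pi * t) + sign * np.sqrt(1 - x**2) * np.sin(np.pi * t)
    dy = np.gradient(y, t)                            # numerical derivative
    lhs = np.abs(dy)
    rhs = np.pi * np.sqrt(np.clip(1 - y**2, 0.0, None))
    # away from the endpoints the two agree up to discretization error
    return np.max(np.abs(lhs - rhs)[5:-5])

assert check_extremal(0.3) < 1e-2
assert check_extremal(-0.7, sign=-1) < 1e-2
```

Indeed, writing $y(t) = \cos(\pi t \mp \arccos x)$ gives $|y'(t)| = \pi|\sin(\pi t \mp \arccos x)| = \pi\sqrt{1 - y^2(t)}$ exactly.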
Again, recall the statement of Theorem \ref{T:main-2}.
\begin{customthm}{1.6}
Suppose that $Y$ is a subsequential limit of $Y_n$. Then for any $\epsilon > 0$,
\begin{align*}
\mathbb{P} \left( Y(0) \ge 1 - \epsilon \;\text{and}\; ||Y(t) - \cos(\pi t)||_u \ge \sqrt{2\epsilon}\right) &= 0, \qquad \;\text{and}\; \\
\mathbb{P} \left(Y(0) \le -1 + \epsilon \;\text{and}\; ||Y(t) + \cos(\pi t)||_u \ge \sqrt{2\epsilon}\right) &= 0.
\end{align*}
\end{customthm}
\begin{proof}
By Theorem \ref{T:main}, we have that almost surely
\begin{equation*}
Y(0) \cos (\pi t) - \sqrt{1- Y^2(0)} \sin (\pi t) \le Y(t) \le Y(0) \cos (\pi t) + \sqrt{1- Y^2(0)} \sin (\pi t)
\end{equation*}
for every $t$. This is simply \eqref{E:two-ineq} applied to $Y$. Since the amplitude of the sinusoid $(Y(0) - 1)\cos(\pi t) \pm \sqrt{1 - Y^2(0)}\sin(\pi t)$ is $\sqrt{(1 - Y(0))^2 + 1 - Y^2(0)} = \sqrt{2(1 - Y(0))}$, elementary calculus gives that $||Y(t) - \cos(\pi t)||_u \le \sqrt{2(1 - Y(0))}$, and similarly that $||Y(t) + \cos(\pi t)||_u \le \sqrt{2(1 + Y(0))}$. The two statements follow immediately.
\end{proof}
\section{Open problems}
The subsequent paper \cite{dauvergne3} proves Conjecture \ref{CJ:sine-curves}, Conjecture \ref{CJ:matrices}, and the other sorting network conjectures from \cite{angel2007random}. This gives a full description of the global limit of random sorting networks. In this section, we give a set of conjectures that focus on refining the understanding of convergence to this limit. Some of these conjectures are implicit in other papers or pictures, or have arisen in previous discussions but were not written down.
\medskip
Recall that $\sigma_G^n$ is an $n$-element uniform random sorting network in the global scaling. Let $j \in \{1, \mathellipsis, n\}$, and consider the random complex-valued function
$$
Z^n_j(t) = e^{\pi i t} \left[\sigma^n_G(j, t) + i\sigma^n_G(j, t + 1/2) \right], \qquad t \in [0, 1/2].
$$
For a fixed $t$, $(Z^n_1(t), \mathellipsis, Z^n_n(t))$ is the set of points in the scaled permutation matrix for $\sigma^n(\cdot, t + 1/2)(\sigma^n)^{-1}(\cdot, t)$ after a counterclockwise rotation by $\pi t$. The random vector-valued function $F^n(\cdot) = (Z^n_1(\cdot), \mathellipsis, Z^n_n(\cdot))$ then gives a ``halfway permutation matrix evolution" for $\sigma^n$ modulo uniform rotation (see Figure \ref{fig:window}). Conjecture \ref{CJ:sine-curves} implies that
$$
\max_{j \in \{1, \mathellipsis, n\}} \max_{s, t \in [0, 1/2]} |Z^n_j(t) - Z^n_j(s)| \stackrel{\prob}{\to} 0 \qquad \;\text{as}\; n \to \infty.
$$
Figure \ref{fig:window} suggests that the size of the fluctuations for each of the functions $Z^n_j$ is of order $n^{-1/2}$, and that the size is inversely proportional to the density of the Archimedean distribution at the point $Z^n_j(0)$. This leads to the first conjecture.
\medskip
\begin{figure}
\centering
\includegraphics[scale= 1]{RSN-window.pdf}
\caption{Images of the functions $Z^{500}_j(t)$. All paths are localized, and the distribution of these localized paths within $[-1, 1]^2$ is given by $\mathfrak{Arch}_{1/2}$. This figure originally appeared in \cite{angel2007random}.}
\label{fig:window}
\end{figure}
\begin{conj}
\label{CJ:fluct}
Let $U$ be a uniform random variable on $[0, 1]$, independent of all the random sorting networks $\sigma^n$. For each $n$, let $J_n$ be a uniform random variable on $\{1, \mathellipsis, n \}$, independent of $\sigma^n$ and $U$.
\begin{enumerate}[nosep,label=(\roman*)]
\smallskip
\item The sequence of random variables $\{ n\text{\fontfamily{ppl}\selectfont Var}(Z^n_{J_n}(U) \; | \; J_n, \sigma^n) : n \in \mathbb{N}\}$ is tight.
\item There exist independent random variables $X_1, X_2$ such that
\begin{equation*}
\left(n\text{\fontfamily{ppl}\selectfont Var}(Z^n_{J_n}(U) \; | \; J_n, \sigma^n), |Z^n_{J_n}(0)| \right) \stackrel{d}{\to} (X_1\sqrt{1 - X_2^2}, X_2).
\end{equation*}
\end{enumerate}
\end{conj}
The second conjecture concerns the maximum value of the fluctuations.
\begin{conj}
\label{CJ:fluct-max}
For any $\epsilon > 0$,
\begin{equation*}
\max_{j \in \{1, \mathellipsis, n\}} \sup_{s, t \in [0, 1/2]} n^{1/2- \epsilon}|Z^n_{j}(t) - Z^n_j(s)| \to 0 \qquad \text{ in probability} \;\text{as}\; n \to \infty.
\end{equation*}
\end{conj}
We now look at the local structure of the half-way permutation (see Figure \ref{fig:halfway}). Let $(x, y)$ be a point in the open unit disk, and consider the point process $\Pi_n(x, y) \subset \mathbb{R}^2$ given by
$$
\Pi_n(x, y) = \left \{ \frac{\sqrt{n}}{\sqrt{2\pi}(1 - x^2 - y^2)^{1/4}}\left(\sigma^n_G(i, 0) - x,\sigma^n_G(i, 1/2) - y\right) : i \in \{1, \mathellipsis, n\} \right\}.
$$
Heuristically, the $\sqrt{n}$ scaling, combined with the density factor of $\sqrt{2\pi}(1 - x^2 - y^2)^{1/4}$ from the Archimedean measure, should imply that for large $n$, the expected number of points of $\Pi_n(x, y)$ in a box $[a, b] \times [c, d] \subset \mathbb{R}^2$ is approximately $(b-a)(d-c)$.
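This heuristic can be made explicit under the assumption, consistent with the normalization above but stated here only as a consistency check, that $\mathfrak{Arch}_{1/2}$ has density $\rho(x, y) = \big(2\pi\sqrt{1 - x^2 - y^2}\big)^{-1}$ on the open unit disk. Writing $\lambda_n = \sqrt{n}\big(\sqrt{2\pi}(1 - x^2 - y^2)^{1/4}\big)^{-1}$ for the scaling factor in the definition of $\Pi_n(x, y)$, the box $[a, b] \times [c, d]$ in the rescaled coordinates corresponds to a box of area $(b-a)(d-c)/\lambda_n^2$ near $(x, y)$, so the expected number of points of $\Pi_n(x, y)$ it contains is approximately
$$
n \rho(x, y) \frac{(b-a)(d-c)}{\lambda_n^2} = n \cdot \frac{1}{2\pi\sqrt{1 - x^2 - y^2}} \cdot \frac{2\pi\sqrt{1 - x^2 - y^2}}{n} \, (b-a)(d-c) = (b-a)(d-c).
$$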
\begin{conj}
\label{CJ:local-limit}
There exists a rotationally symmetric, translation invariant point process $\Pi$ on $\mathbb{R}^2$ such that for any $(x, y)$ in the open unit disk, we have the following convergence in distribution:
$$
\Pi_n(x, y) \stackrel{d}{\to} \Pi.
$$
\end{conj}
We also consider deviations of the permutation matrix measures $\eta^n_t$ (see \eqref{E:eta-n-t} for the definition) from the Archimedean path $\{\mathfrak{Arch}_t : t \in [0, 1]\}$.
\begin{conj}
Let $U$ be any open set in the space of probability measures on $[-1, 1]^2$ with the topology of weak convergence, containing each of the measures $\mathfrak{Arch}_t$. There exist constants $c_1, c_2 > 0$ such that for all $n$,
$$
\mathbb{P} \left( \text{There exists } t \in [0, 1] \text{ such that } \eta^n_t \notin U \right) \le c_1e^{-c_2n^2}.
$$
\end{conj}
Finally, Conjecture \ref{CJ:sine-curves} implies that if we know the location of particle $i$ after $\epsilon n^2$ swaps, then we know its trajectory. It is natural to ask to what extent this can be improved upon.
The nature of the local limit suggests that the trajectory of particle $i$ should be determined after $O(n)$ steps.
\medskip
Again, let $J_n$ be a uniform random variable on $\{1, \mathellipsis, n \}$, independent of $\sigma^n$. Let $S^n_t(i, \cdot)$ be the unique random curve of the form $s \mapsto A\sin(\pi s + \Theta)$ such that
$$
S^n_t(i, 0) = \sigma_G^n(i, 0) \quad \text{and} \quad S^n_t(i, t) = \sigma_G^n(i, t).
$$
\begin{conj}
For any $\epsilon > 0$, there exists a constant $C > 0$ such that
$$
\liminf_{n \to \infty} \mathbb{P} \left( ||\sigma^n_G(J_n, \cdot) - S^n_{C/n}(J_n, \cdot)||_u < \epsilon \right) \ge 1 - \epsilon.
$$
\end{conj}
\subsection*{Acknowledgements} D.D. was supported by an NSERC CGS D scholarship. B.V. was supported by the
Canada Research Chair program, the NSERC Discovery Accelerator
grant, the MTA Momentum Random Spectra research group, and the ERC
consolidator grant 648017 (Abert). B.V. thanks Mustazee Rahman for several interesting and useful
discussions about the topic of this paper.
\bibliographystyle{alpha}
\section{Introduction}
\label{sec:intro}
In recent years there has been a lot of interest in algorithmic fairness in machine learning (see, e.g.,~\cite{dwork2018decoupled,hardt2016equality,zafar2017fairness,zemel2013learning,kilbertus2017avoiding,kusner2017counterfactual,calmon2017optimized,joseph2016fairness,chierichetti2017fair,jabbari2016fair,yao2017beyond,lum2016statistical,zliobaite2015relation} and references therein).
The central question is how to enhance supervised learning algorithms with fairness requirements, namely ensuring that sensitive information (e.g.~knowledge about the ethnic group of an individual)
does not `unfairly' influence the outcome of a learning algorithm.
For example, if the learning problem is to decide whether a person should be offered a loan based on her previous credit card scores, we would like to build a model which does not unfairly use additional sensitive information such as race or sex.
Several notions of fairness and associated learning methods have been introduced in machine learning in the past few years, including Demographic Parity~\cite{calders2009building}, Equal Odds and Equal Opportunities~\cite{hardt2016equality}, and Disparate Treatment, Disparate Impact, and Disparate Mistreatment~\cite{zafar2017fairness}.
The underlying idea behind such notions is to balance decisions of a classifier among the different sensitive groups and label sets.
In this paper, we build upon the notion of Equal Opportunity (EO), which defines fairness as the requirement that the true positive rate of the classifier is the same across the sensitive groups.
In Section~\ref{sec:luca:th:Fairness} we introduce a generalization of this notion of fairness, which constrains the conditional risk of a classifier, associated with the positively labeled samples of a group, to be approximately constant with respect to group membership.
The risk is measured according to a prescribed loss function and an approximation parameter $\epsilon$.
When the loss is the misclassification error and $\epsilon = 0$, we recover the notion of EO above.
We study the problem of minimizing the expected risk within a prescribed class of functions subject to the fairness constraint.
As a natural estimator associated with this problem, we consider a modified version of Empirical Risk Minimization (ERM) which we call Fair ERM (FERM).
We derive both risk and fairness bounds, which show that FERM is statistically consistent, in a sense which we make precise in Section~\ref{sec:luca:th:FERM}.
Since the FERM approach is impractical due to the non-convex nature of the constraint, we propose, still in Section~\ref{sec:luca:th:FERM}, a surrogate convex FERM problem which relates, under a natural condition, to the original goal of minimizing the misclassification error subject to a relaxed EO constraint.
We further observe that our condition can be empirically verified to judge the quality of the approximation in practice.
As a concrete example of the framework, in Section~\ref{sec:luca:th:FK} we describe how kernel methods such as support vector machines (SVMs) can be enhanced to satisfy the fairness constraint.
We observe that a particular instance of the fairness constraint for $\epsilon=0$ reduces to an orthogonality constraint.
Moreover, in the linear case, the constraint translates into a preprocessing step that implicitly imposes the fairness requirement on the data, making fair any linear model learned with them.
We report numerical experiments using both linear and nonlinear kernels, which indicate that our method improves on the state-of-the-art in four out of five datasets and is competitive on the fifth dataset\footnote{
Additional technical steps and experiments are presented in the supplementary materials.}.
In summary, the contributions of this paper are twofold. First, we outline a general framework for empirical risk minimization under fairness constraints. The framework can be used as a starting point to develop specific algorithms for learning under fairness constraints. Second, we show how a linear fairness constraint arises naturally in the framework and allows us to develop a novel convex learning method. The method is supported by consistency properties both in terms of EO and risk of the selected model, and performs favorably against state-of-the-art alternatives on a series of benchmark datasets.
\noindent {\bf Previous Work.} Work on algorithmic fairness
can be divided into three families. Methods in the first family modify a pretrained classifier in order to increase its fairness properties while maintaining as much as possible the classification performance:~\cite{pleiss2017fairness,beutel2017data,hardt2016equality,feldman2015certifying} are examples of these methods, but they provide neither consistency properties nor comparisons with state-of-the-art proposals.
Methods in the second family enforce fairness directly during the training step:~\cite{agarwal2017reductions,agarwal2018reductions,woodworth2017learning,zafar2017fairness,menon2018cost,zafar2017parity,bechavod2018Penalizing,zafar2017fairnessARXIV,kamishima2011fairness,kearns2017preventing} are examples of this approach, which either take non-convex routes to the solution of the problem, or derive consistency results only for a non-convex formulation and later resort to a convex approach that is not theoretically grounded. \cite{Prez-Suay2017Fair,dwork2018decoupled,berk2017convex,alabi2018optimizing} are further examples of convex approaches, which however do not compare with other state-of-the-art solutions and do not provide consistency properties, except for~\cite{dwork2018decoupled}, which, contrary to our proposal, does not enforce a fairness constraint directly in the learning phase, and~\cite{olfat2018spectral}, which proposes a computationally tractable fair SVM starting from a constraint on the covariance matrices. Specifically, the latter leads to a non-convex constraint, which is imposed iteratively through a sequence of relaxations exploiting spectral decompositions.
Finally, the third family of methods implements fairness by modifying the data representation and then employs standard machine learning methods:~\cite{adebayo2016iterative,calmon2017optimized,kamiran2009classifying,zemel2013learning,kamiran2012data,kamiran2010classification} are examples of these methods but, again, they provide neither consistency properties nor comparisons with state-of-the-art proposals.
Our method belongs to the second family of methods, in that it directly optimizes a fairness constraint related to the notion of EO discussed above.
Furthermore, in the case of linear models, our method translates into an efficient preprocessing of the input data, as in methods of the third family.
As we shall see, our approach is theoretically grounded and performs favorably against the state-of-the-art\footnote{A detailed comparison between our proposal and state-of-the-art is reported in the supplementary materials.}.
\section{Fair Empirical Risk Minimization}
\label{sec:luca:th:Fairness}
In this section, we present our approach to learning with fairness. We begin by introducing our notation. We let $\mathcal{D} = \left\{ (\boldsymbol{x}_1,s_1,y_1),\dots, (\boldsymbol{x}_n,s_n,y_n) \right\}$ be a sequence of $n$ samples drawn independently from an unknown probability distribution $\mu$ over $\mathcal{X} \times \mathcal{S} \times \mathcal{Y}$, where $\mathcal{Y} = \{ -1, +1 \}$ is the set of binary output labels, $\mathcal{S} = \{a,b\}$ represents group membership among two groups\footnote{The extension to multiple groups (e.g.~ethnic group) is briefly discussed in the supplementary material.} (e.g.~`female' or `male'), and $\mathcal{X}$ is the input space.
We note that $\boldsymbol{x} \in \mathcal{X}$ may or may not contain the sensitive feature $s \in \mathcal{S}$.
We also denote, for $g \in \{ a,b \}$, the set $\mathcal{D}^{+,g} {=}\{(\boldsymbol{x}_i,s_i,y_i) : y_i {=} 1,s_i {=} g \}$ and $n^{+,g} = |\mathcal{D}^{+,g}|$.
Let us consider a function (or model) $f: \mathcal{X} \rightarrow \mathbb{R}$ chosen from a set $\mathcal{F}$ of possible models.
The error (risk) of $f$ in approximating $\mu$ is measured by a prescribed loss function $\ell:\mathbb{R} \times \mathcal{Y} \rightarrow \mathbb{R}$.
The risk of $f$ is defined as ${L}(f) = \mathbb{E} \left[ \ell(f(\boldsymbol{x}),y) \right]$.
When necessary we will indicate with a subscript the particular loss function used, i.e.~${L}_p(f) = \mathbb{E} \left[ \ell_p(f(\boldsymbol{x}),y) \right]$.
The purpose of a learning procedure is to find a model that minimizes the risk.
Since the probability measure $\mu$ is usually unknown, the risk cannot be computed, however we can compute the empirical risk $\hat{L}(f) = \hat{\mathbb{E}} [\ell(f(\boldsymbol{x}),y)]$, where $\hat{\mathbb{E}}$ denotes the empirical expectation.
A natural learning strategy, called Empirical Risk Minimization (ERM), is then to minimize the empirical risk within a prescribed set of functions.
\subsection{Fairness Definitions}
\label{sec:luca:th:Definitions}
In the literature there are different definitions of fairness of a model or learning algorithm~\cite{hardt2016equality,dwork2018decoupled,zafar2017fairness}, but there is not yet a consensus on which definition is most appropriate.
In this paper, we introduce a general notion of fairness which encompasses some previously used notions, and which allows us to introduce new ones by varying the loss function used below.
\begin{definition}
\label{def:fairness}
Let ${L}^{+,g}(f) {=} \mathbb{E} [ \ell(f(\boldsymbol{x}),y) | y {=} 1, s {=} g ]$ be the risk of the positive labeled samples in the $g$-th group, and let $\epsilon \in [0,1]$.
We say that a function $f$ is $\epsilon$-fair if ~$| {L}^{+,a}(f) - {L}^{+,b}(f)| \leq \epsilon$.
\end{definition}
This definition says that a model is fair if it commits approximately the same error on the positive class independently of the group membership.
That is, the conditional risk $L^{+,g}$ is approximately constant across the two groups.
Note that if $\epsilon = 0$ and we use the hard loss function, $\ell_h(f(\boldsymbol{x}),y) = \mathds{1}_{\{y f(\boldsymbol{x}) \leq 0\}}$, then Definition~\ref{def:fairness} is equivalent to the definition of EO proposed by~\cite{hardt2016equality}, namely
\begin{align}
\mathbb{P}\left\{ f(\boldsymbol{x}) > 0 ~|~ y = 1, s = a \right\} =
\mathbb{P}\left\{ f(\boldsymbol{x}) > 0 ~|~ y = 1, s = b \right\}.
\label{eq:DEO}
\end{align}
This equation means that the true positive rate is the same across the two groups.
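As an illustration only (the function below is hypothetical and not part of the paper), the empirical counterpart of this quantity — the absolute difference of empirical true positive rates across the two groups — can be computed as follows:

```python
import numpy as np

def empirical_deo(scores, y, s, group_a, group_b):
    """Absolute difference of empirical true positive rates across two groups.

    scores : real-valued outputs f(x_i); a positive score predicts class +1
    y      : labels in {-1, +1}
    s      : group membership of each sample
    """
    scores, y, s = map(np.asarray, (scores, y, s))
    tpr = {}
    for g in (group_a, group_b):
        pos = (y == 1) & (s == g)          # positive-labeled samples of group g
        tpr[g] = np.mean(scores[pos] > 0)  # empirical P(f(x) > 0 | y=1, s=g)
    return abs(tpr[group_a] - tpr[group_b])

# toy check: a classifier correct on every positive example has zero gap
y = np.array([1, 1, 1, 1, -1, -1])
s = np.array(["a", "a", "b", "b", "a", "b"])
assert empirical_deo(np.where(y == 1, 1.0, -1.0), y, s, "a", "b") == 0.0
```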
Furthermore, if we use the linear loss function
$\ell_l(f(\boldsymbol{x}),y) = (1 - y f(\boldsymbol{x}))/2 $ and set $\epsilon = 0$, then Definition~\ref{def:fairness} gives
\begin{align}
\mathbb{E}[f(\boldsymbol{x}) ~|~ y = 1, s = a] = \mathbb{E}[f(\boldsymbol{x}) ~|~ y = 1, s = b ].
\label{eq:lollo}
\end{align}
By reformulating this expression we obtain a notion of fairness that has been proposed by~\cite{dwork2018decoupled}:
\begin{align}
\sum_{g \in \{a,b\}} \big| \mathbb{E}[f(\boldsymbol{x}) ~|~ y = 1, s = g] - \mathbb{E}[f(\boldsymbol{x}) ~|~ y = 1] \big| = 0.
\nonumber
\end{align}
Yet another implication of Eq.~\eqref{eq:lollo} is that the output of the model is uncorrelated with the group membership, conditionally on the label being positive; that is, for every $g {\in} \{ a, b \}$, we have
\begin{align}
\mathbb{E}\big[ f(\boldsymbol{x}) \mathds{1}_{\{s{=}g\}}~|~y=1 \big] = \mathbb{E} \big[f(\boldsymbol{x})|y=1\big] \mathbb{E} \big[\mathds{1}_{\{s=g\}}~|~y=1\big].
\nonumber
\end{align}
Finally, we observe that our approach naturally generalizes to other fairness measures, e.g.~equal odds~\cite{hardt2016equality}, which could be subject of future work.
Specifically, we would require in Definition~\ref{def:fairness} that $| {L}^{y,a}(f) - {L}^{y,b}(f)| \leq \epsilon$ for both $y \in \{-1,1\}$.
\subsection{Fair Empirical Risk Minimization}
\label{sec:luca:th:FERM}
In this paper, we aim at minimizing the risk subject to a fairness constraint.
Specifically, we consider the problem
\begin{align}
\min\Big\{L(f) : f {\in} \mathcal{F} ,~
\big| {L}^{+,a}(f) - {L}^{+,b}(f)\big| \leq \epsilon\Big\}
\label{eq:alg:deterministic},
\end{align}
where $\epsilon \in [0,1]$ is the amount of unfairness that we are willing to bear.
Since the measure $\mu$ is unknown we replace the deterministic quantities with their empirical counterparts.
That is, we replace Problem~\ref{eq:alg:deterministic} with
\begin{align}
\min\Big\{\hat{L}(f) : f {\in} \mathcal{F} ,~
\big| \hat{L}^{+,a}(f) - \hat{L}^{+,b}(f)\big| \leq \hat{\epsilon}\Big\}
\label{eq:alg:empirical},
\end{align}
where $\hat{\epsilon} \in [0,1]$.
We will refer to~Problem~\ref{eq:alg:empirical} as FERM.
We denote by $f^*$ a solution of Problem~\ref{eq:alg:deterministic}, and by $\hat{f}$ a solution of Problem~\ref{eq:alg:empirical}.
In this section we show that these solutions are closely linked.
In particular, if the parameter $\hat{\epsilon}$ is chosen appropriately, we will show that, in a certain sense, the estimator $\hat{f}$ is consistent.
In order to present our observations, we require that it holds with probability at least $1-\delta$ that
\begin{align}
\sup_{f \in \mathcal{F}} \big|L(f) - \hat{L}(f)\big| \leq B(\delta,n,\mathcal{F})
\label{eq:bartlett}
\end{align}
where the bound $B(\delta,n,\mathcal{F})$ goes to zero as $n$ grows to infinity if the class $\mathcal{F}$ is learnable with respect to the loss~\cite[see e.g.][and references therein]{shalev2014understanding}.
For example, if $\mathcal{F}$ is a compact subset of linear separators in a Reproducing Kernel Hilbert Space (RKHS), and the loss is Lipschitz in its first argument, then $B(\delta,n,\mathcal{F})$ can be obtained via Rademacher bounds~\cite[see e.g.][]{bartlett2002rademacher}.
In this case $B(\delta,n,\mathcal{F})$ goes to zero at least as fast as ${\sqrt{1/n}}$ as $n$ grows to infinity, where $n = |\mathcal{D}|$.
We are ready to state the first result of this section (proof is reported in supplementary materials).
\begin{theorem}
\label{thm:mainresult1}
Let $\mathcal{F}$ be a learnable set of functions with respect to the loss function $\ell: \mathbb{R} \times {\cal Y} \rightarrow \mathbb{R}$, let $f^*$ be a solution of Problem (\ref{eq:alg:deterministic}) and let $\hat{f}$ be a solution of Problem (\ref{eq:alg:empirical}) with
\begin{align}
\textstyle
\hat{\epsilon} = \epsilon + \sum_{g \in \{a,b\}} B(\delta,n^{+,g},\mathcal{F}).
\end{align}
With probability at least $1-6 \delta$ it holds simultaneously that
\begin{align}
\textstyle
L(\hat{f}) - L(f^*) \leq 2 B(\delta,n,\mathcal{F}) \quad
\text{and} \quad
\textstyle
\Big| L^{+,a}(\hat{f}) - L^{+,b}(\hat{f}) \Big| \leq \epsilon + 2 \sum_{g \in \{a,b\}} B(\delta,n^{+,g},\mathcal{F}).
\nonumber
\end{align}
\end{theorem}
A consequence of the first statement of Theorem~\ref{thm:mainresult1} is that as $n$ tends to infinity $L(\hat{f})$ tends to a value which is not larger than $L(f^*)$, that is, FERM is consistent with respect to the risk of the selected model.
The second statement of Theorem~\ref{thm:mainresult1}, instead, implies that as $n$ tends to infinity we have that $\hat{f}$ tends to be $\epsilon$-fair.
In other words, FERM is consistent with respect to the fairness of the selected model.
Thanks to Theorem~\ref{thm:mainresult1} we can state that $f^{*}$ is close to $\hat{f}$ both in terms of its risk and its fairness.
Nevertheless, our final goal is to find an $f^*_h$ which solves the following problem
\begin{align}\label{eq:problemHard}
\min\Big\{L_h(f) : f {\in} \mathcal{F} ,~
\big| {L}^{+,a}_h(f) - {L}^{+,b}_h(f)\big| \leq \epsilon\Big\}.
\end{align}
Note that the objective function in Problem~\ref{eq:problemHard} is the misclassification error of the classifier $f$, whereas the fairness constraint is a relaxation of the EO constraint in Eq.~\eqref{eq:DEO}.
Indeed, the quantity $\big| {L}^{+,a}_h(f) - {L}^{+,b}_h(f)\big|$ is equal to
\begin{align}
\!\!\big| \mathbb{P}\left\{ f(\boldsymbol{x}) > 0 ~|~ y = 1,\! s = a \right\}\! -
\mathbb{P}\left\{ f(\boldsymbol{x}) > 0 ~|~ y = 1,\! s = b \right\}\! \big|.
\label{def:DEOmichele}
\end{align}
We refer to this quantity as difference of EO (DEO).
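The DEO in Eq.~\eqref{def:DEOmichele} can be estimated directly from a sample as the absolute difference of empirical true positive rates; a minimal sketch (function and variable names are ours):

```python
def deo(pred_scores, y, s, group_a, group_b):
    """Empirical difference of equal opportunity (DEO): the absolute
    difference of true positive rates of the classifier sign(f(x))
    between the two sensitive groups."""
    def tpr(group):
        # Indices of positively labelled examples in the given group.
        idx = [i for i in range(len(y)) if y[i] == 1 and s[i] == group]
        return sum(pred_scores[i] > 0 for i in idx) / len(idx)
    return abs(tpr(group_a) - tpr(group_b))

# Toy example: group a has TPR 3/4, group b has TPR 1/2, so DEO = 0.25.
scores = [1.0, 1.0, -1.0, 1.0, 1.0, -1.0]
labels = [1, 1, 1, 1, 1, 1]
groups = ['a', 'a', 'a', 'a', 'b', 'b']
value = deo(scores, labels, groups, 'a', 'b')
```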
Although Problem~\ref{eq:problemHard} cannot be solved, by exploiting Theorem~\ref{thm:mainresult1} we can safely search for a solution $\hat{f}_h$ of its empirical counterpart
\begin{align}\label{eq:problemHardempirical}
\min\Big\{\hat{L}_h(f) : f {\in} \mathcal{F} ,~
\big| \hat{L}^{+,a}_h(f) - \hat{L}^{+,b}_h(f)\big| \leq \hat{\epsilon}\Big\}.
\end{align}
Unfortunately, Problem~\ref{eq:problemHardempirical} is a difficult nonconvex nonsmooth problem, and for this reason it is more convenient to solve a convex relaxation.
That is, we replace the hard loss in the risk with a convex loss function $\ell_c$ (e.g.~the Hinge loss $\ell_{c} = \max\{0, \ell_l \}$) and the hard loss in the constraint with the linear loss $\ell_l$.
In this way, we look for a solution $\hat{f}_c$ of the convex FERM problem
\begin{align}\label{eq:problemSoft}
\min\Big\{\hat{L}_c(f) : f {\in} \mathcal{F} ,~
\big| \hat{L}^{+,a}_l(f) - \hat{L}^{+,b}_l(f)\big| \leq \hat{\epsilon}\Big\}.
\end{align}
The questions that arise here are whether, and how closely, $\hat{f}_c$ approximates $\hat{f}_h$, and under which assumptions.
The following proposition sheds some light on these issues (the proof is reported in the supplementary material, Section~\ref{sec:SMproofs}).
\begin{proposition}
\label{thm:mainresult2}
If $\ell_c$ is the Hinge loss then $
\hat{L}_{h}(f) \leq \hat{L}_{c}(f)$.
Moreover, if for $f: \mathcal{X} \rightarrow \mathbb{R}$ the following condition is true
\begin{align} \label{eq:hp1}
\textstyle
\frac{1}{2} \sum_{g \in \{a,b\}} \left| \hat{\mathbb{E}} \left[ \operatorname{sign}\big(f(\boldsymbol{x})\big)- f(\boldsymbol{x}) ~\big|~ y = 1, s = g \right] \right| \leq \hat{\Delta},
\end{align}
then it also holds that
\begin{align}
\textstyle
\big| \hat{L}^{+,a}_h(f) - \hat{L}^{+,b}_h(f) \big| \leq \big| \hat{L}^{+,a}_l(f) - \hat{L}^{+,b}_l(f)\big| + \hat{\Delta}.
\nonumber
\end{align}
\end{proposition}
The first statement of Proposition~\ref{thm:mainresult2} tells us that exploiting $\ell_c$ instead of $\ell_h$ is a good approximation if $\hat{L}_{c}(\hat{f}_c)$ is small.
The second statement of Proposition~\ref{thm:mainresult2}, instead, tells us that if inequality (\ref{eq:hp1}) holds, then the linear loss based fairness is close to the EO.
Obviously the smaller $\hat{\Delta}$ is, the closer they are.
Inequality (\ref{eq:hp1}) says that the functions $\operatorname{sign}\big(f(\boldsymbol{x})\big)$ and $ f(\boldsymbol{x})$ distribute, on average, in a similar way.
This condition is quite natural and it has been exploited in previous work~\cite[see e.g.][]{maurer2004note}.
Moreover, in Section~\ref{sec:exps} we present experiments showing that $\hat{\Delta}$ is small.
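On a sample, $\hat{\Delta}$ from inequality (\ref{eq:hp1}) can be computed directly from the model outputs; a minimal sketch (the function name and the convention $\operatorname{sign}(f(\boldsymbol{x})) = -1$ when $f(\boldsymbol{x}) \leq 0$ are our choices):

```python
def delta_hat(f_values, y, s, groups=('a', 'b')):
    """Empirical Delta-hat: half the sum over groups g of
    |mean(sign(f(x)) - f(x)) over positively labelled examples with s = g|."""
    total = 0.0
    for g in groups:
        diffs = [(1.0 if f_values[i] > 0 else -1.0) - f_values[i]
                 for i in range(len(y)) if y[i] == 1 and s[i] == g]
        total += abs(sum(diffs) / len(diffs))
    return 0.5 * total

# When f(x) is already close to +-1 on positives, Delta-hat is small:
# group a contributes |mean(0.1, -0.2)| = 0.05, group b contributes 0.
value = delta_hat([0.9, -0.8, 1.0], [1, 1, 1], ['a', 'a', 'b'])
```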
The bound in Proposition~\ref{thm:mainresult2} may be tightened by using different nonlinear approximations of the EO~\citep[see e.g.][]{calmon2017optimized}.
However, the linear approximation proposed in this work gives a convex problem and, as we shall see in Section~\ref{sec:exps}, works well in practice.
In summary, the combination of Theorem~\ref{thm:mainresult1} and Proposition~\ref{thm:mainresult2} provides conditions under which a solution $\hat{f}_c$ of Problem~\ref{eq:problemSoft}, which is convex, is close, {\em both in terms of classification accuracy and fairness}, to a solution $f^*_h$ of Problem~\ref{eq:problemHard}, which is our final goal.
\section{Fair Learning with Kernels}
\label{sec:luca:th:FK}
In this section, we specify the FERM framework to the case that the underlying space of models is a reproducing kernel Hilbert space (RKHS)~\cite[see e.g.][and references therein]{shawe2004kernel,smola2001}.
We let $\kappa: \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$ be a positive definite kernel and let $\boldsymbol{\phi}: \mathcal{X} \rightarrow \mathbb{H}$ be an induced feature mapping such that $\kappa(\boldsymbol{x},\boldsymbol{x}') = \langle \boldsymbol{\phi}(\boldsymbol{x}),\boldsymbol{\phi}(\boldsymbol{x}')\rangle$, for all $\boldsymbol{x},\boldsymbol{x}' \in \mathcal{X}$, where $\mathbb{H}$ is the Hilbert space of square summable sequences.
Functions in the RKHS can be parametrized as
\begin{equation}
f(\boldsymbol{x}) = \langle \boldsymbol{w} , \boldsymbol{\phi}(\boldsymbol{x})\rangle,~~~\boldsymbol{x} \in \mathcal{X},
\label{eq:222}
\end{equation}
for some vector of parameters $\boldsymbol{w} \in \mathbb{H}$.
In practice a bias term (threshold) can be added to $f$ but to ease our presentation we do not include it here.
We solve Problem~\eqref{eq:problemSoft} with $\mathcal{F}$ a ball in the RKHS and employ a convex loss function $\ell$.
As for the fairness constraint, we use the linear loss function, which makes the constraint convex.
Let $\boldsymbol{u}_g$ be the barycenter in the feature space of the positively labelled points in the group $g\in \{a,b\}$, that is
\begin{align}
\textstyle
\boldsymbol{u}_g= \frac{1}{n^{+,g}} \sum_{ i \in \mathcal{I}^{+,g}}
\boldsymbol{\phi}(\boldsymbol{x}_i),
\end{align}
where $\mathcal{I}^{+,g} = \{i: y_i {=} 1, x_{i,1} {=} g \}$.
Then using Eq.~\eqref{eq:222} the constraint in Problem~\eqref{eq:problemSoft} takes the form $\big|\langle \boldsymbol{w},\boldsymbol{u}_a-\boldsymbol{u}_b\rangle\big| \leq \epsilon$.
In practice, we solve the Tikhonov regularization problem
\begin{align}
\textstyle
\min\limits_{\boldsymbol{w} \in \mathbb{H}} \
\sum_{i =1}^n \ell(\langle \boldsymbol{w},\boldsymbol{\phi}(\boldsymbol{x}_i)\rangle ,y_i) + \lambda \|\boldsymbol{w}\|^2 \quad
\text{s.t.}\ \big|\langle \boldsymbol{w},\boldsymbol{u}\rangle\big| \leq \epsilon
\label{prob:ker}
\end{align}
where $\boldsymbol{u} = \boldsymbol{u}_a - \boldsymbol{u}_b$ and $\lambda$ is a positive parameter which controls model complexity.
In particular, if $\epsilon = 0$ the constraint in Problem~\eqref{prob:ker} reduces to an orthogonality constraint that has a simple geometric interpretation.
Specifically, the vector $\boldsymbol{w}$ is required to be orthogonal to the vector formed by the difference between the barycenters of the positively labelled input samples in the two groups.
By the representer theorem~\cite{scholkopf2001generalized}, the solution to Problem~\eqref{prob:ker} is a linear combination of the feature vectors $\boldsymbol{\phi}(\boldsymbol{x}_1),\dots,\boldsymbol{\phi}(\boldsymbol{x}_n)$ and the vector $\boldsymbol{u}$.
However, in our case $\boldsymbol{ u}$ is itself a linear combination of the feature vectors (in fact only those corresponding to the subset of positive labeled points) hence $\boldsymbol{w}$ is a linear combination of the input points, that is $\boldsymbol{ w}=\sum_{i=1}^n \alpha_i\phi(\boldsymbol{x}_i)$.
The corresponding function used to make predictions is then given by $f(\boldsymbol{x}) = \sum_{i=1}^n \alpha_i \kappa(\boldsymbol{x}_i,\boldsymbol{x})$.
Let $K$ be the Gram matrix.
The vector of coefficients $\boldsymbol{\alpha}$ can then be found by solving
\begin{align}
\min_{\boldsymbol{\alpha} \in \mathbb{R}^n} \hspace{-.04truecm}\Bigg\{\hspace{-.04truecm} \sum_{i=1}^n \ell\bigg(\sum_{j=1}^n K_{ij}\alpha_j,y_i\bigg) {+} \lambda \! \! \sum_{i,j=1}^n\alpha_i \alpha_j K_{ij} \quad \hspace{-.04truecm}
\text{s.t.} \ &
\bigg|
\hspace{-.04truecm}\sum_{i=1}^n \alpha_i
\bigg[
\frac{1}{n^{+,a}} \!\!\! \hspace{-.04truecm}\sum_{j \in \mathcal{I}^{+,a}} \!\!\! K_{ij}
{-}
\frac{1}{n^{+,b}} \!\!\! \sum_{j \in \mathcal{I}^{+,b}} \!\!\! K_{ij}
\bigg]
\bigg| \leq \epsilon\Bigg\}.
\nonumber
\end{align}
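As a sanity check, one can verify numerically that, for the linear kernel, the Gram-matrix form of the constraint above coincides with $|\langle \boldsymbol{w},\boldsymbol{u}_a-\boldsymbol{u}_b\rangle|$ for $\boldsymbol{w} = \sum_i \alpha_i \boldsymbol{x}_i$; a small sketch with synthetic data (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))                  # inputs x_1, ..., x_n
y = np.array([1, 1, 1, 1, 1, -1, -1, -1])    # labels
s = np.array(['a', 'a', 'b', 'b', 'b', 'a', 'b', 'a'])  # sensitive groups
alpha = rng.normal(size=8)                   # expansion coefficients

K = X @ X.T                                  # linear-kernel Gram matrix
Ia = np.flatnonzero((y == 1) & (s == 'a'))   # I^{+,a}
Ib = np.flatnonzero((y == 1) & (s == 'b'))   # I^{+,b}

# Constraint left-hand side expressed through the Gram matrix.
lhs_gram = abs(alpha @ (K[:, Ia].mean(axis=1) - K[:, Ib].mean(axis=1)))

# Same quantity computed directly as |<w, u_a - u_b>| with w = sum_i alpha_i x_i.
w = alpha @ X
u = X[Ia].mean(axis=0) - X[Ib].mean(axis=0)
lhs_direct = abs(w @ u)
```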
In our experiments below we consider this particular case of Problem~\eqref{prob:ker} and furthermore choose the loss function $\ell_c$ to be the Hinge loss.
The resulting method is an extension of SVM.
The fairness constraint and, in particular, the orthogonality constraint when $\epsilon = 0$, can easily be added within standard SVM solvers\footnote{In the supplementary material we derive the dual of Problem~\eqref{prob:ker} when $\ell_c$ is the Hinge loss.}.
It is instructive to consider Problem~\eqref{prob:ker} when $\boldsymbol{\phi}$ is the identity mapping (i.e.~$\kappa$ is the linear kernel on $\mathbb{R}^d$) and $\epsilon=0$.
In this special case we can solve the orthogonality constraint $\langle \boldsymbol{w},\boldsymbol{u}\rangle = 0$ for $w_i$, where the
index $i$ is such that $| u_i | = \|\boldsymbol{u}\|_\infty$, obtaining that $w_{i} = - \sum_{j=1, j \neq i}^d w_j \frac{u_j}{u_i}$.
Consequently the linear model can be rewritten as
$\sum_{j=1}^{d} w_j x_j = \sum_{j=1, j \neq i}^d w_j (x_j - x_i \frac{u_j}{u_i})$.
In this way, we see that the fairness constraint is implicitly enforced by making the change of representation $\boldsymbol{x} \mapsto \boldsymbol{\tilde{x}} \in \mathbb{R}^{d-1}$, with
\begin{equation}
\textstyle
\tilde{x}_j = x_j - x_i \frac{u_j}{u_i}, \quad j \in \{ 1, \dots, i-1, i+1, \dots, d \}.
\label{eq:gggg}
\end{equation}
In other words, we are able to obtain a fair linear model without any further constraint, by using a representation that has one feature fewer than the original one\footnote{
The generalization of this argument to the kernel case for SVM is reported in the supplementary material.}.
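This change of representation can be sketched directly; the code below builds $\tilde{\boldsymbol{x}}$ with $\tilde{x}_j = x_j - x_i u_j / u_i$ for $i = \arg\max_j |u_j|$ and verifies that any linear model on $\tilde{\boldsymbol{x}}$ lifts to a $\boldsymbol{w}$ with $\langle \boldsymbol{w},\boldsymbol{u}\rangle = 0$ and identical predictions (function names are ours):

```python
import numpy as np

def fair_preprocess(X, u):
    """Map x -> x_tilde in R^{d-1}: drop coordinate i = argmax |u_i| and
    set x_tilde_j = x_j - x_i * u_j / u_i for every remaining j."""
    i = int(np.argmax(np.abs(u)))
    keep = [j for j in range(X.shape[1]) if j != i]
    Xt = X[:, keep] - np.outer(X[:, i], u[keep] / u[i])
    return Xt, i, keep

def lift(w_tilde, u, i, keep):
    """Recover the full-dimensional w: w_j = w_tilde_j for j != i and
    w_i = -sum_{j != i} w_j u_j / u_i, so that <w, u> = 0 by construction."""
    w = np.zeros(len(u))
    w[keep] = w_tilde
    w[i] = -(w_tilde @ u[keep]) / u[i]
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5))
u = rng.normal(size=5)
Xt, i, keep = fair_preprocess(X, u)
w_tilde = rng.normal(size=4)   # any linear model learned on the new features
w = lift(w_tilde, u, i, keep)
```

Any off-the-shelf linear learner run on `Xt` therefore yields a model satisfying the orthogonality constraint.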
\section{Experiments}
\label{sec:exps}
In this section, we present numerical experiments with the proposed method on one synthetic and five real datasets.
The aim of the experiments is threefold.
First, we show that our approach is effective in selecting a fair model, incurring only a moderate loss in accuracy.
Second, we provide an empirical study of the properties of the method, which supports our theoretical observations in Section~\ref{sec:luca:th:Fairness}.
Third, we highlight the generality of our approach by showing that it can be used effectively within other linear models such as Lasso.
We use our approach with $\epsilon {=} 0$ in order to simplify the hyperparameter selection procedure.
For the sake of completeness, a set of results for different values of $\epsilon$ is presented in the supplementary material, and we briefly comment on these below.
In all the experiments, we collect statistics concerning the classification accuracy and DEO of the selected model.
We recall that the DEO is defined in Eq.~\eqref{def:DEOmichele} and is the absolute difference of the true positive rate of the classifier applied to the two groups.
In all experiments, we performed a 10-fold cross validation (CV) to select the best hyperparameters\footnote{The regularization parameter $C$ (for both SVM and our method) took $30$ values, equally spaced in logarithmic scale
between $10^{-4}$ and $10^{4}$; we used either the linear or the RBF kernel (i.e.~for two examples $\boldsymbol{x}$ and $\boldsymbol{z}$, the RBF kernel is $e^{-\gamma ||\boldsymbol{x}-\boldsymbol{z}||^2}$) with $\gamma \in \{0.001, 0.01, 0.1, 1\}$.
In our case, $C=\frac{1}{2 \lambda}$ of Eq.~\eqref{prob:ker}.}.
For the Arrhythmia, COMPAS, German and Drug datasets, this procedure was repeated $10$ times, and we report the average performance on the test set alongside its standard deviation.
For the Adult dataset, we used the provided split of train and test sets.
Unless otherwise stated, we employ two steps in the 10-fold CV procedure.
In the first step, the value of the hyperparameters with highest accuracy is identified.
In the second step, we shortlist all the hyperparameters with accuracy close to the best one (in our case, above $90 \%$ of the best accuracy).
Finally, from this list, we select the hyperparameters with the lowest DEO.
This novel validation procedure, which we will call NVP, is a sanity check to ensure that fairness cannot be achieved by a simple modification of the hyperparameter selection procedure.
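The two-step NVP selection rule can be sketched as follows (the function name and the toy grid are ours):

```python
def nvp_select(results, frac=0.9):
    """Two-step selection: shortlist hyperparameter settings whose accuracy
    is within `frac` of the best one, then pick the shortlisted setting
    with the lowest DEO. `results` maps a hyperparameter key to an
    (accuracy, deo) pair estimated by cross validation."""
    best_acc = max(acc for acc, _ in results.values())
    shortlist = {k: v for k, v in results.items() if v[0] >= frac * best_acc}
    return min(shortlist, key=lambda k: shortlist[k][1])

# Toy grid: all settings are within 90% of the best accuracy (0.84),
# so the one with the lowest DEO is selected.
grid = {('C', 0.1): (0.80, 0.12), ('C', 1.0): (0.84, 0.10), ('C', 10.0): (0.79, 0.03)}
choice = nvp_select(grid)
```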
{\bf The code of our method is available at:} \url{https://github.com/jmikko/fair_ERM}.
\noindent \textbf{Synthetic Experiment.}
The aim of this experiment is to study the behavior of our method, in terms of both DEO and classification accuracy, in comparison to standard SVM (with our novel validation procedure).
To this end, we generated a synthetic binary classification dataset with two sensitive groups in the following manner.
For each group in the class $-1$ and for the group $a$ in the class $+1$, we generated $1000$ examples for training and the same amount for testing.
For the group $b$ in the class $+1$, we generated $200$ examples for training and the same number for testing.
Each set of examples is sampled from a $2$-dimensional isotropic Gaussian distribution with different mean $\mu$ and variance $\sigma^2$: (i) Group $a$, Label $+1$: $\mu=(-1, -1)$, $\sigma^2=0.8$; (ii) Group $a$, Label $-1$: $\mu=(1, 1)$, $\sigma^2=0.8$; (iii) Group $b$, Label $+1$: $\mu=(0.5, -0.5)$, $\sigma^2=0.5$; (iv) Group $b$, Label $-1$: $\mu=(0.5, 0.5)$, $\sigma^2=0.5$.
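A sketch of this generation procedure (the function name and code structure are ours; the means, variances and sample sizes are those listed above):

```python
import numpy as np

def make_toy(n_major=1000, n_minor=200, seed=0):
    """Sample the synthetic dataset described above: isotropic 2-D Gaussians,
    with group b under-represented in the positive class."""
    rng = np.random.default_rng(seed)
    spec = [  # (group, label, n, mean, variance)
        ('a', +1, n_major, (-1.0, -1.0), 0.8),
        ('a', -1, n_major, (1.0, 1.0), 0.8),
        ('b', +1, n_minor, (0.5, -0.5), 0.5),
        ('b', -1, n_major, (0.5, 0.5), 0.5),
    ]
    X, y, s = [], [], []
    for g, lab, n, mu, var in spec:
        X.append(rng.normal(mu, np.sqrt(var), size=(n, 2)))
        y += [lab] * n
        s += [g] * n
    return np.vstack(X), np.array(y), np.array(s)

X, y, s = make_toy()
```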
When a standard machine learning method is applied to this toy dataset, the generated model is unfair with respect to the group $b$, in that the classifier tends to negatively classify the examples in this group.
We trained different models, varying the value of the hyperparameter $C$, and using the standard linear SVM and our linear method.
Figure~\ref{fig:toydistrib} (Left) shows the performance of the various generated models with respect to the classification error and DEO on the test set.
Note that our method generated models with a higher level of fairness, while maintaining a good level of accuracy.
The grid in the plots emphasizes the fact that both the error and DEO have to be simultaneously considered in the evaluation of a method.
\iffalse
\begin{figure}
\centering
\includegraphics[trim={0.0cm 2.0cm 0.0cm 0.0cm},clip,width=.35\columnwidth]{Toy.png} \quad \quad
\includegraphics[trim={0.2cm 0.2cm 1.5cm 0.0cm},clip,width=.35\columnwidth]{Toytest2.png}
\caption{({Left}) Different distributions used to generate the toy problem, with the following averages $\mu$ and variances $\sigma^2$: (i) Group $a$, Label $+1$: $\mu=(-1, -1)$, $\sigma^2=0.8$; (ii) Group $a$, Label $-1$: $\mu=(1, 1)$, $\sigma^2=0.8$; (iii) Group $b$, Label $+1$: $\mu=(0.5, -0.5)$, $\sigma^2=0.5$; (iv) Group $b$, Label $-1$: $\mu=(0.5, 0.5)$, $\sigma^2=0.5$.
({Right}) Test classification error and DEO for different values of hyperparameter $C$ for standard linear SVM (green circles) and our modified linear SVM (magenta stars).
The desiderata would be near the point $(0,0)$.}
\label{fig:toyhypers}
\end{figure}
\fi
Figure~\ref{fig:toydistrib} (Center and Right) depicts the histograms of the values of $\langle \boldsymbol{w}, \boldsymbol{x}\rangle $ (where $\boldsymbol{w}$ is the generated model) for test examples with true label equal to $+1$ for each of the two groups.
The results are reported both for our method (Right) and standard SVM (Center).
Note that our method generates a model with a similar true positive rate among the two groups (i.e.~the areas of the value when the horizontal axis is greater than zero are similar for groups $a$ and $b$).
Moreover, due to the simplicity of the toy test, the distribution with respect to the two different groups is also very similar when our model is used.
\begin{figure}
\centering
\includegraphics[trim={0.2cm 0.2cm 1.5cm 0.0cm},clip,width=.29\columnwidth]{Toytest2.png}\hspace{.5truecm}
\includegraphics[trim={0.8cm 0.5cm 1.5cm 0.8cm},clip,width=.31\columnwidth]{d_toytest_svm2.png} \hspace{.5truecm}
\includegraphics[trim={2.1cm 0.5cm 1.5cm 0.8cm},clip,width=.285\columnwidth]{d_toytest_fair2.png}
\caption{Left: Test classification error and DEO for different values of hyperparameter $C$ for standard linear SVM (green circles) and our modified linear SVM (magenta stars).
Center and Right: histograms of the distribution of the values $\langle \boldsymbol{w}, \boldsymbol{x}\rangle$ for the two groups ($a$ in blue and $b$ in light orange) for test examples with label equal to $+1$.
The results are collected by using the optimal validated model for the classical linear SVM (Center) and for our linear method (Right).}
\label{fig:toydistrib}
\end{figure}
\noindent \textbf{Real Data Experiments.}
We next compare the performance of our model to a set of different methods on $5$ publicly available datasets:
Arrhythmia, COMPAS, Adult,
German, and Drug.
A description of the datasets is provided in the supplementary material.
These datasets have been selected from standard dataset repositories (UCI, mldata and Fairness-Measures\footnote{Fairness-Measures website: \url{fairness-measures.org}}).
We considered only datasets with a DEO higher than $0.1$, when the model is generated by an SVM validated with the NVP.
For this reason, some of the commonly used datasets have been discarded (e.g.~Diabetes, Heart, SAT, PSU-Chile, and SOEP).
We compared our method, both in the linear and nonlinear case, against: (i) Na\"{i}ve SVM, validated with a standard nested 10-fold CV procedure.
This method ignores fairness in the validation procedure, simply trying to optimize accuracy; (ii) SVM with the NVP.
As noted above, this baseline is the simplest way to inject the fairness into the model; (iii) Hardt method~\cite{hardt2016equality} applied to the best SVM; (iv) Zafar method~\cite{zafar2017fairness}, implemented with the code provided by the authors for the linear case\footnote{Python code for~\cite{zafar2017fairness}: \url{https://github.com/mbilalzafar/fair-classification}}.
Concerning our method, in the linear case, it exploits the preprocessing presented in Section~\ref{sec:luca:th:FK}.
\begin{table*}
\tiny
\centering
\setlength{\tabcolsep}{0.05cm}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\hline
& \multicolumn{2}{c|}{Arrhythmia}
& \multicolumn{2}{c|}{COMPAS}
& \multicolumn{2}{c|}{Adult}
& \multicolumn{2}{c|}{German}
& \multicolumn{2}{c|}{Drug} \\
Method & ACC & DEO & ACC & DEO & ACC & DEO & ACC & DEO & ACC & DEO \\
\hline
\hline
\multicolumn{11}{c}{$s$ not inside $\boldsymbol{x}$}\\
\hline
\hline
Na\"{i}ve Lin. SVM & $0.75 {\pm} 0.04$ & $0.11 {\pm} 0.03$ & $0.73 {\pm} 0.01$ & $0.13 {\pm} 0.02$ & $0.78$ & $0.10$ & $0.71 {\pm} 0.06$ & $0.16 {\pm} 0.04$ & $0.79 {\pm} 0.02$ & $0.25 {\pm} 0.03$ \\
Lin. SVM & $0.71 {\pm} 0.05$ & $0.10 {\pm} 0.03$ & $0.72 {\pm} 0.01$ & $0.12 {\pm} 0.02$ & $0.78$ & $0.09$ & $0.69 {\pm} 0.04$ & $0.11 {\pm} 0.10$ & $0.79 {\pm} 0.02$ & $0.25 {\pm} 0.04$ \\
Hardt & - & - & - & - & - & - & - & - & - & - \\
Zafar & $0.67 {\pm} 0.03$ & $0.05 {\pm} 0.02$ & $0.69 {\pm} 0.01$ & $0.10 {\pm} 0.08$ & $0.76$ & $0.05$ & $0.62 {\pm} 0.09$ & $0.13 {\pm} 0.10$ & $0.66 {\pm} 0.03$ & $0.06 {\pm} 0.06$ \\
Lin. Ours & $0.75 {\pm} 0.05$ & $0.05 {\pm} 0.02$ & $0.73 {\pm} 0.01$ & $0.07 {\pm} 0.02$ & $0.75$ & $0.01$ & $0.69 {\pm} 0.04$ & $0.06 {\pm} 0.03$ & $0.79 {\pm} 0.02$ & $0.10 {\pm} 0.06$ \\
\hline
Na\"{i}ve SVM & $0.75 {\pm} 0.04$ & $0.11 {\pm} 0.03$ & $0.72 {\pm} 0.01$ & $0.14 {\pm} 0.02$ & $0.80$ & $0.09$ & $0.74 {\pm} 0.05$ & $0.12 {\pm} 0.05$ & $0.81 {\pm} 0.02$ & $0.22 {\pm} 0.04$ \\
SVM & $0.71 {\pm} 0.05$ & $0.10 {\pm} 0.03$ & $0.73 {\pm} 0.01$ & $0.11 {\pm} 0.02$ & $0.79$ & $0.08$ & $0.74 {\pm} 0.03$ & $0.10 {\pm} 0.06$ & $0.81 {\pm} 0.02$ & $0.22 {\pm} 0.03$ \\
Hardt & - & - & - & - & - & - & - & - & - & - \\
Ours & $0.75 {\pm} 0.05$ & $0.05 {\pm} 0.02$ & $0.72 {\pm} 0.01$ & $0.08 {\pm} 0.02$ & $0.77$ & $0.01$ & $0.73 {\pm} 0.04$ & $0.05 {\pm} 0.03$ & $0.79 {\pm} 0.03$ & $0.10 {\pm} 0.05$ \\
\hline
\hline
\multicolumn{11}{c}{$s$ inside $\boldsymbol{x}$}\\
\hline
\hline
Na\"{i}ve Lin. SVM & $0.79 {\pm} 0.06$ & $0.14 {\pm} 0.03$ & $0.76 {\pm} 0.01$ & $0.17 {\pm} 0.02$ & $0.81$ & $0.14$ & $0.71 {\pm} 0.06$ & $0.17 {\pm} 0.05$ & $0.81 {\pm} 0.02$ & $0.44 {\pm} 0.03$ \\
Lin. SVM & $0.78 {\pm} 0.07$ & $0.13 {\pm} 0.04$& $0.75 {\pm} 0.01$ & $0.15 {\pm} 0.02$ & $0.80$ & $0.13$ & $0.69 {\pm} 0.04$ & $0.11 {\pm} 0.10$ & $0.81 {\pm} 0.02$ & $0.41 {\pm} 0.06$ \\
Hardt & $0.74 {\pm} 0.06$ & $0.07 {\pm} 0.04$& $0.67 {\pm} 0.03$ & $0.21 {\pm} 0.09$ & $0.80$ & $0.10$ & $0.61 {\pm} 0.15$ & $0.15 {\pm} 0.13$ & $0.77 {\pm} 0.02$ & $0.22 {\pm} 0.09$ \\
Zafar & $0.71 {\pm} 0.03$ & $0.03 {\pm} 0.02$ & $0.69 {\pm} 0.02$ & $0.10 {\pm} 0.06$ & $0.78$ & $0.05$ & $0.62 {\pm} 0.09$ & $0.13 {\pm} 0.11$ & $0.69 {\pm} 0.03$ & $0.02 {\pm} 0.07$ \\
Lin. Ours & $0.79 {\pm} 0.07$ & $0.04 {\pm} 0.03$ & $0.76 {\pm} 0.01$ & $0.04 {\pm} 0.03$ & $0.77$ & $0.01$ & $0.69 {\pm} 0.04$ & $0.05 {\pm} 0.03$ & $0.79 {\pm} 0.02$ & $0.05 {\pm} 0.03$ \\
\hline
Na\"{i}ve SVM & $0.79 {\pm} 0.06$ & $0.14 {\pm} 0.04$& $0.76 {\pm} 0.01$ & $0.18 {\pm} 0.02$ & $0.84$ & $0.18$ & $0.74 {\pm} 0.05$ & $0.12 {\pm} 0.05$ & $0.82 {\pm} 0.02$ & $0.45 {\pm} 0.04$ \\
SVM & $0.78 {\pm} 0.06$ & $0.13 {\pm} 0.04$& $0.73 {\pm} 0.01$ & $0.14 {\pm} 0.02$ & $0.82$ & $0.14$ & $0.74 {\pm} 0.03$ & $0.10 {\pm} 0.06$ & $0.81 {\pm} 0.02$ & $0.38 {\pm} 0.03$ \\
Hardt & $0.74 {\pm} 0.06$ & $0.07 {\pm} 0.04$ & $0.71 {\pm} 0.01$ & $0.08 {\pm} 0.01$ & $0.82$ & $0.11$ & $0.71 {\pm} 0.03$ & $0.11 {\pm} 0.18$ & $0.75 {\pm} 0.11$ & $0.14 {\pm} 0.08$ \\
Ours & $0.79 {\pm} 0.09$ & $0.03 {\pm} 0.02$& $0.73 {\pm} 0.01$ & $0.05 {\pm} 0.03$ & $0.81$ & $0.01$ & $0.73 {\pm} 0.04$ & $0.05 {\pm} 0.03$ & $0.80 {\pm} 0.03$ & $0.07 {\pm} 0.05$ \\
\hline
\hline
\end{tabular}
\caption{Results (average $\pm$ standard deviation, when a fixed test set is not provided) for all the datasets, concerning accuracy (ACC) and DEO.}
\vspace{-.5cm}
\label{tab:results}
\end{table*}
Table~\ref{tab:results} shows our experimental results for all the datasets and methods, both when $s$ is inside $\boldsymbol{x}$ and when it is not.
These results suggest that our method performs favorably against the competitors, in that it decreases the DEO substantially with only a moderate loss in accuracy.
Moreover, having $s$ inside $\boldsymbol{x}$ increases accuracy but, for methods not specifically designed to produce fair models, decreases fairness. On the other hand, having $s$ inside $\boldsymbol{x}$ allows our method to improve fairness by exploiting the value of $s$ also in the prediction phase. This is to be expected, since knowing the group membership increases the available information, but it also enables behaviours that influence the fairness of the predictive model.
In order to quantify this effect, we present in Figure~\ref{fig:tableexplanation} the results of Table~\ref{tab:results} for linear (left) and nonlinear (right) methods, when the error (one minus accuracy) and the DEO are normalized in $[0,1]$ column-wise and $s$ is inside $\boldsymbol{x}$\footnote{The case when $s$ is not inside $\boldsymbol{x}$ is reported in the supplementary material.}.
In the figure, different symbols and colors refer to different datasets and methods, respectively.
The closer a point is to the origin, the better the result is.
The best accuracy is, in general, reached by using the Na\"{i}ve SVM (in red) both for the linear and nonlinear case.
This behavior is expected due to the absence of any fairness constraint.
On the other hand, Na\"{i}ve SVM has unsatisfactory levels of fairness.
The methods of Hardt~\cite{hardt2016equality} (in blue) and Zafar~\cite{zafar2017fairness} (in cyan, for the linear case) are able to obtain a good level of fairness, but at the price of a strong decrease in accuracy.
Our method (in magenta) obtains a similar or better DEO while preserving accuracy.
In particular, in the nonlinear case, our method reaches the lowest DEO among all the methods.
For the sake of completeness, in the nonlinear (right) part of Figure~\ref{fig:tableexplanation}, we also show our method when the parameter $\epsilon$ is set to $0.1$ (in brown) instead of $0$ (in magenta).
As expected, the generated models are less fair, with a (small) improvement in accuracy.
An in-depth analysis of the role of $\epsilon$ is presented in the supplementary material.
\begin{figure}
\centering
\includegraphics[width=0.45\columnwidth]{LI.png} \quad
\includegraphics[width=0.45\columnwidth]{NL.png}
\caption{{Results of Table~\ref{tab:results} of linear (left) and nonlinear (right) methods, when the error and the DEO are normalized in $[0,1]$ column-wise and when $s$ is inside $\boldsymbol{x}$. Different symbols and colors refer to different datasets and method respectively. The closer a point is to the origin, the better the result is.}}
\vspace{-.4cm}
\label{fig:tableexplanation}
\end{figure}
\noindent \textbf{Application to Lasso.}
Owing to the proposed methodology, our method can in principle be applied to any learning algorithm. In particular, when the algorithm generates a linear model we can exploit the data preprocessing in
Eq.~\eqref{eq:gggg} to directly impose fairness in the model. Here, we show how a sparse and fair model can be obtained by exploiting the standard Lasso algorithm in synergy with this preprocessing step. For this purpose, we selected the Arrhythmia dataset, as the Lasso works well in the high-dimensional / small-sample setting.
We performed the same experiment described above,
where we used the Lasso algorithm in place of the SVM.
In this case, by Na\"{i}ve Lasso, we refer to the Lasso when it is validated with a standard nested 10-fold CV procedure, whereas by Lasso we refer to the standard Lasso with the NVP outlined above.
The method of~\cite{hardt2016equality} has been applied to the best Lasso model.
Moreover, we report the results obtained using the Na\"{i}ve Linear SVM and Linear SVM. We also repeated the experiment using a reduced training set, in order to highlight the effect of sparsity.
Table~\ref{tab:results_lasso}, reported in the supplementary material, shows the results. Note that, as the training set is reduced, the generated models become less fair (i.e.\ the DEO increases). Using our method, we are able to maintain a fair model while reaching satisfactory accuracy.
\noindent \textbf{The Value of $\hat{\Delta}$.}
Finally, we show experimental results to highlight that the hypothesis of Proposition~\ref{thm:mainresult2} (Section~\ref{sec:luca:th:FERM}) is reasonable in real cases. We know that, if inequality (\ref{eq:hp1}) is satisfied, the linear loss based fairness is close to the EO. Specifically, the two quantities are closer when $\hat{\Delta}$ is small. We evaluated $\hat{\Delta}$ for the benchmark and toy datasets. The results are reported in Table~\ref{tab:delta} of the supplementary material: $\hat{\Delta}$ has order of magnitude $10^{-2}$ on all the datasets. Consequently, our method obtains a good approximation of the DEO.
\vspace{-.3cm}
\section{Conclusion and Future Work}
\label{sec:conc}
\vspace{-.1cm}
We have presented a generalized notion of fairness, which encompasses previously introduced notions and can be used to constrain ERM in order to learn fair classifiers. The framework is appealing both theoretically and practically. Our theoretical observations provide a statistical justification for this approach, and our algorithmic observations suggest a way to implement it efficiently in the setting of kernel methods. Experimental results suggest that our approach is promising for applications, generating models with improved fairness properties while maintaining classification accuracy. We close by mentioning directions of future research. On the algorithmic side, it would be interesting to study whether our method can be improved by other relaxations of the fairness constraint beyond the linear loss used here. Applications of the fairness constraint to multi-class classification or to regression tasks would also be valuable. On the theory side, it would be interesting to study how the choice of the parameter $\epsilon$ affects the statistical performance of our method and to derive the optimal accuracy-fairness trade-off as a function of this parameter.
\section{Introduction}
The choice of a particular policy or plan of action involves consideration of the costs and benefits of the policy/plan under consideration and also of alternative policies/plans that might be undertaken. Examples abound; to mention just a few: Which course of treatment will lead to the most rapid recovery? Which mode of advertisement will lead to the most orders? Which investment strategy will lead to the greatest returns? Obtaining information about the costs and benefits of alternative plans that might have been undertaken is a {\em counterfactual exercise}. One possible way to estimate the counterfactual information is by conducting controlled experiments. However, controlled experiments are expensive, involve small samples, and are frequently not available. It is therefore important to make decisions entirely on the basis of observational data in which the actions/decisions taken in the data have been selected by an existing \textit{logging} policy. Because the existing logging policy creates a selection bias, learning from observational studies is a challenging problem. This paper presents theoretical bounds on estimation errors for the evaluation of a new policy from observational data and a principled algorithm to learn the optimal policy. The methods and algorithms we develop are widely applicable (perhaps with some modifications) to an enormous range of settings, from healthcare to education to recommender systems to finance to smart cities. (See \cite{athey2015machine}, ~\cite{hoiles2016bounded} and \cite{bottou2013counterfactual}, for just a few examples.)
As we have noted, our algorithm applies in many settings. In the medical context, features are the information included in electronic health records, actions are choices of different treatments, and outcomes are the success of treatment. In the financial context, features are the aspects of the macroeconomic environment, actions are the choices of different investment decisions and outcomes are the revenues made by the investment decisions. In the recommender system context, features are the information about the user, the actions are choices of items, and outcomes are binary values indicating whether the user purchased the item or not.
Our theoretical results show that the {\em true policy outcome} is at least as good as the {\em policy outcome estimated from the observational data} minus the product of the number of actions and the $\mathcal{H}$-divergence between the observational and randomized distributions. Our theoretical bounds differ from those derived in \cite{swaminathan15counterfactual} because ours do not require the propensity scores to be known. We use our theory to develop an algorithm that learns balanced representations of the instances, so that they are indistinguishable between the randomized and observational distributions while remaining predictive for the decision problem at hand. We present experiments on a semi-synthetic breast cancer dataset and the supervised Statlog dataset, showing that our algorithm outperforms various methods, and we explain why.
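A divergence term of this kind can be estimated, as is common in domain adaptation, by how well a classifier separates samples drawn from the two distributions (a proxy $\mathcal{A}$-distance). The sketch below uses a trivial nearest-centroid classifier as the discriminator; it is a crude stand-in of ours for illustration, not the paper's estimator:

```python
import numpy as np

def proxy_divergence(X_obs, X_rand):
    """Proxy for the divergence between two samples: classify each point to
    the nearer of the two class centroids and return 2 * (1 - 2 * error).
    Near 0 when the samples are indistinguishable, near 2 when separable."""
    c_obs, c_rand = X_obs.mean(axis=0), X_rand.mean(axis=0)
    def err(X, own, other):
        d_own = np.linalg.norm(X - own, axis=1)
        d_other = np.linalg.norm(X - other, axis=1)
        return np.mean(d_own > d_other)  # fraction misclassified
    error = 0.5 * (err(X_obs, c_obs, c_rand) + err(X_rand, c_rand, c_obs))
    return 2.0 * (1.0 - 2.0 * error)

rng = np.random.default_rng(0)
# Identical distributions: error near 0.5, divergence near 0.
same = proxy_divergence(rng.normal(0, 1, (500, 2)), rng.normal(0, 1, (500, 2)))
# Well-separated distributions: error near 0, divergence near 2.
far = proxy_divergence(rng.normal(0, 1, (500, 2)), rng.normal(5, 1, (500, 2)))
```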
\begin{table*}[t]
\normalsize
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Literature & Propensities known & Objective & Actions & Solution \\ \hline
\cite{shalit2017estimating} & no & ITE estimation & $2$ & Balancing representations\\ \hline
\cite{alaa2017bayesian} & no & ITE estimation & $2$ & Risk based empirical Bayes\\ \hline
\cite{beygelzimer2009offset}& yes & policy optimization & $>2$ & Rejection sampling \\ \hline
\cite{swaminathan15counterfactual,swaminathan2015self}& yes & policy optimization & $>2$ & IPS reweighing \\ \hline
Ours & no & policy optimization & $>2$ & Balancing representations \\ \hline
\end{tabular}
\caption{Comparison with the related literature}\label{table:illustrate}
\end{table*}
\section{Related Work}
Roughly speaking, work on counterfactual learning from observational data falls into two categories: estimation of ITEs~\cite{johansson2016learning,shalit2017estimating,alaa2017bayesian} and policy optimization~\cite{swaminathan15counterfactual,swaminathan2015self}. The work on ITEs aims to estimate the expected difference between outcomes for \textit{treated} and \textit{control} patients, given the feature vector; this work focuses on settings with only two actions (treat/don't treat) and notes that the approaches derived do not generalize well to settings with more than two actions. The work on policy optimization aims to find a policy that maximizes the expected outcome (minimizes the risk). The policy optimization objective is somewhat easier than the ITE objective, in the sense that one can turn ITE estimates into action recommendations but not the other way around. In many applications, there are many more than $2$ actions, and one is more interested in learning a good action than in learning the outcome of each action for each instance.
The work on ITE estimation that is most closely related to ours focuses on learning balanced representations~\cite{johansson2016learning,shalit2017estimating}. These papers develop neural network algorithms to minimize the mean squared error between predictions and actual outcomes in the observational data and also the discrepancy between the representations of the factual and counterfactual data. As these papers note, there is no principled approach to extend them to more than two treatments. Other recent works in ITE estimation include tree-based methods~\cite{hill2011bayesian,athey2015machine,wager2015estimation} and Gaussian processes~\cite{alaa2017bayesian}. The last is perhaps the most successful, but the computational complexity is $O(n^3)$ (where $n$ is the number of instances) so it is not easy to apply to large observational studies.
In the policy optimization literature, the work most closely related to ours is \cite{swaminathan15counterfactual,swaminathan2015self}, which develops the Counterfactual Risk Minimization (CRM) principle. The objective of the CRM principle is to minimize both the estimated mean and the variance of the Inverse Propensity Score (IPS) estimator; to do so the authors propose the POEM algorithm. Our work differs from POEM in several ways: (i) POEM minimizes an objective over the class of linear policies; we allow for arbitrary policies, (ii) POEM requires the propensity scores to be available in the data; our algorithm addresses the selection bias without using propensity scores, (iii) POEM addresses selection bias by re-weighting each instance with the inverse propensities; our algorithm addresses the selection bias by learning representations. Another related paper on policy optimization is \cite{beygelzimer2009offset}, which requires the propensity scores to be known and addresses the selection bias via rejection sampling. (For a more detailed comparison see Table 1.)
Off-policy evaluation methods include the IPS estimator~\cite{rosenbaum1983central, strehl2010learning}, the self-normalized estimator~\cite{swaminathan2015self}, direct estimation, the doubly robust estimator~\cite{dudik2011doubly, jiang2015doubly} and matching-based methods~\cite{hill2006interval}. The IPS and self-normalized estimators address the selection bias by re-weighting each instance by its inverse propensity. The doubly robust estimation techniques combine the direct and IPS methods and generate more robust counterfactual estimates. Propensity Score Matching (PSM) replaces the missing counterfactual outcomes of an instance by the outcome of the instance with the closest propensity score.
Our theoretical bounds have a strong connection with the domain adaptation bounds given in~\cite{ben2007analysis,blitzer2008learning}. In particular, we show that the expected policy outcome is bounded below by the estimate of the policy outcome from the observational data minus the product of the number of actions and the $\mathcal{H}$-divergence between the observational and randomized data. Our algorithm is based on domain adaptation as in~\cite{ganin2016domain}. Other domain adaptation techniques include \cite{zhang2013domain,daume2009frustratingly}.
\section{Problem Setup}
In this section, we describe our formal model.
\subsection{Observational Data}
We denote by $\mathcal{A}$ the set of $k$ actions, by $\mathcal{X}$ the $s$-dimensional space of features and by
$\mathcal{Y} \subseteq \mathbb{R}$ the space of outcomes. We assume that an outcome can be identified with a real number and normalize so that outcomes lie in the interval $\left[0,1\right]$. In some cases, the outcome will be either $1$ or $0$ (success or failure); in other cases the outcome may be interpreted as the probability of success or failure. We follow the potential outcome model described in the Rubin-Neyman causal model~\cite{rubin2005causal}; that is, for each instance $x \in \mathcal{X}$, there are $k$ potential outcomes: $Y^{(0)}, Y^{(1)}, \ldots, Y^{(k-1)} \in \mathcal{Y}$, corresponding to the $k$ different actions. The fundamental problem in this setting is that only the outcome of the action {\em actually performed} is recorded in the data: $Y = Y^{(A)}$, where $A$ denotes the action taken. (This is called \textit{bandit feedback} in the machine learning literature~\cite{swaminathan15counterfactual}.) In our work, we focus on the setting in which the action assignment is {\em not} independent of the feature vector, i.e., $A \not\independent X$; that is, action assignments are {\em not random}. This dependence is modeled by the conditional distribution $\gamma(a,x) = P(A = a| X = x)$, also known as the \textit{propensity score}.
In this paper, we make the following common assumptions:
\begin{itemize}
\item \textbf{Unconfoundedness:} Potential outcomes $(Y^{(0)}, Y^{(1)}, \ldots, Y^{(k-1)})$ are independent of the action assignment given the features, that is $(Y^{(0)}, Y^{(1)}, \ldots, Y^{(k-1)}) \independent A | X$.
\item \textbf{Overlap:} For each instance $x \in \mathcal{X}$ and each action $a \in {\mathcal A}$, there is a non-zero probability that a patient with feature $x$ received the action $a$: $0 < \gamma(a,x) <1$ for all $a,x$.
\end{itemize}
These assumptions are sufficient to identify the optimal policy from the data~\cite{imbens2009recent,pearl2017detecting}.
We are given a data set
$$
\mathcal{D}^n = \{ (x_i, a_i, y_i) \}_{i=1}^n
$$
where each instance $i$ is generated by the following stochastic process:
\begin{itemize}
\item Each feature-action pair is drawn according to a fixed but unknown distribution $\mathcal{D}_S$, i.e., $(x_i, a_i) \sim \mathcal{D}_S$.
\item Potential outcomes are drawn with respect to a conditional distribution $\mathcal{P}$; that is, $(Y_i^{(0)}, Y_i^{(1)}, \ldots, Y_i^{(k-1)}) \sim \mathcal{P}(\cdot| X = x_i, A = a_i)$.
\item Only the outcome of the action actually performed is recorded in the data, that is, $y_i = Y_i^{(a_i)}$.
\end{itemize}
We denote the marginal distribution on the features by $\mathcal{D}$; i.e., $\mathcal{D}(x) = \sum_{a \in \mathcal{A}} \mathcal{D}_S(x,a)$.
\subsection{Definition of Policy Outcome}
A {\em policy} is a mapping $h$ from features to actions. In this paper, we are interested in learning a policy $h$ that maximizes the policy outcome, defined as:
$$
V(h) = \mathbb{E}_{x \sim \mathcal{D}} \left[ \mathbb{E} \left[ Y^{(h(x))}| X = x\right] \right].
$$
We denote by $m_a(x) = \mathbb{E}\left[ Y^{(a)} | X = x\right]$ the expected outcome of action $a$ on an instance with feature $x$. Based on these definitions, we can re-write the policy outcome of $h$ as $V(h) = \mathbb{E}_{x \sim \mathcal{D}}\left[ m_{h(x)}(x) \right]$. Estimating $V(h)$ from the data is a challenging task because the counterfactuals are missing and there is a selection bias.
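In simulation (as in the semi-synthetic experiments later), where the expected outcome functions $m_a$ are known, $V(h)$ can be approximated by Monte Carlo over feature draws. A minimal numpy sketch; the policy, outcome functions and dimensions below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, k = 5000, 4, 3  # toy sizes: feature samples, feature dim, number of actions

# Illustrative ground-truth expected outcomes m_a(x) in [0, 1], one per action.
def m(X):
    # returns an n x k matrix whose column a is m_a(x_i); assumes s >= k, purely for illustration
    return 1.0 / (1.0 + np.exp(-X[:, :k]))

X = rng.normal(size=(n, s))                   # x ~ D (marginal feature distribution)
h = lambda X: np.argmax(X[:, :k], axis=1)     # some deterministic policy h: X -> A

M = m(X)                                      # expected outcomes for all actions
V_h = M[np.arange(n), h(X)].mean()            # Monte-Carlo estimate of V(h) = E[m_{h(x)}(x)]
assert 0.0 <= V_h <= 1.0
```

Note that this requires the full matrix of expected outcomes, which is available only in simulation; the estimation problem from bandit feedback is treated next.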
\section{Counterfactual Estimation Bounds}
In this section, we provide a criterion that we will use to learn a policy $h^{*}$ that maximizes the outcome. We handle the selection bias in our dataset by mapping the features to representations that are relevant to the policy outcomes and less biased. Let $\Phi: \mathcal{X} \rightarrow \mathcal{Z}$ denote a representation function which maps the features to representations. The representation function induces a distribution $\mathcal{D}^{\Phi}$ over the representation space $\mathcal{Z}$ and induced outcome functions $m_a^{\Phi}$, defined as follows:
\begin{eqnarray*}
\mathbb{P}_{\mathcal{D}^{\Phi}}(\mathcal{B}) &=& \mathbb{P}_{\mathcal{D}}(\Phi^{-1}(\mathcal{B})), \notag \\
m_a^{\Phi}(z) &=& \mathbb{E}_{x \sim \mathcal{D}}[m_a(x) | \Phi(x) = z],
\end{eqnarray*}
for any $\mathcal{B} \subset \mathcal{Z}$ such that $\Phi^{-1}(\mathcal{B})$ is $\mathcal{D}$-measurable. That is, the probability of an event $\mathcal{B}$ according to $\mathcal{D}^{\Phi}$ is the probability of the inverse image of $\mathcal{B}$ according to $\mathcal{D}$. Our learning setting is defined by our choice of the representation function and a hypothesis class $\mathcal{H} = \{h: \mathcal{Z} \rightarrow \mathcal{A} \}$ of (deterministic) policies.
We now connect our problem to domain adaptation. Recall that $\mathcal{D}_S$ is the source distribution that generated the feature-action samples in our observational data. Define the target distribution $\mathcal{D}_{T}$ by $\mathcal{D}_{T}(x, a) = (1/k) \mathcal{D}(x)$. Note that $\mathcal{D}_S$ represents an observational study in which the actions are not randomized, while
$\mathcal{D}_T$ represents a clinical study in which actions {\em are} randomized. Let $\mathcal{D}_S^{\Phi}$ and $\mathcal{D}_{T}^{\Phi}$ denote the source and target distributions induced by the representation function $\Phi$ over the space $\mathcal{Z} \times \mathcal{A}$, respectively. Let $\mathcal{D}^{\Phi}$ denote the marginal distribution over the representations and write $V^{\Phi}(h)$ for the induced policy outcome of $h$, that is, $V^{\Phi}(h) = \mathbb{E}_{z \sim \mathcal{D}^{\Phi}}\left[m_{h(z)}^{\Phi}(z)\right]$.
For the remainder of the theoretical analysis, suppose that the representation function $\Phi$ is fixed. The missing counterfactual outcomes can be addressed by importance sampling. Let $V_S^{\Phi}(h)$ and $V_T^{\Phi}(h)$ denote the expected policy outcome with respect to distributions $\mathcal{D}_S$ and $\mathcal{D}_T$, respectively. They are given by
\begin{eqnarray*}
V_S^{\Phi}(h) &=& \mathbb{E}_{(z,a) \sim \mathcal{D}_S^{\Phi}} \left[ \frac{ m_a^{\Phi}(z) 1(h(z) = a)}{1/k}\right], \\
V_T^{\Phi}(h) &=& \mathbb{E}_{(z,a) \sim \mathcal{D}_T^{\Phi}} \left[ \frac{ m_a^{\Phi}(z) 1(h(z) = a)}{1/k}\right].
\end{eqnarray*}
where $1(\cdot)$ is the indicator function, equal to $1$ if the statement is true and $0$ otherwise. We can only estimate $V_S^{\Phi}(h)$ from the observational data. First, we connect $V_T^{\Phi}(h)$ with $V^{\Phi}(h)$, and then provide theoretical bounds based on the distance between the source and target distributions.
\begin{proposition} Let $\Phi$ be a fixed representation function. Then: $V_T^{\Phi}(h) = V^{\Phi}(h)$.
\end{proposition}
\begin{proof}
By the definition of $\mathcal{D}_T^{\Phi}$,
\begin{eqnarray*}
V^{\Phi}_T(h) &=& \mathbb{E}_{z \sim \mathcal{D}^{\Phi}} \left[ \sum_{a \in \mathcal{A}} 1/k \frac{m_a^{\Phi}(z) 1(h(z) = a)}{1/k} \right] \\
&=& \mathbb{E}_{z \sim \mathcal{D}^{\Phi} } \left[ m_{h(z)}^{\Phi}(z) \right] = V^{\Phi}(h).
\end{eqnarray*}
\end{proof}
We cannot create a Monte-Carlo estimator for $V^{\Phi}_T(h)$ since we do not have samples from the target distribution - we only have samples from the source distribution. Hence, we use domain adaptation theory to bound the difference between $V_S^{\Phi}(h)$ and $V^{\Phi}_T(h)$ in terms of the $\mathcal{H}$-divergence. In order to do that, we first need to introduce a distance metric between distributions. For any policy $h \in \mathcal{H}$, let $\mathcal{I}_h$ denote the characteristic set of $h$: the set of all representation-action pairs $(z,a)$ for which $h$ selects action $a$, i.e., $\mathcal{I}_h = \{ (z,a): h(z) = a\}$.
\begin{definition} Let $\mathcal{D}$, $\mathcal{D}'$ be probability distributions over $\mathcal{Z} \times \mathcal{A}$ such that every characteristic set $\mathcal{I}_h$ of $h \in \mathcal{H}$ is measurable with respect to both distributions. Then, the $\mathcal{H}$-divergence between the distributions $\mathcal{D}$ and $\mathcal{D}'$ is
$$
d_{\mathcal{H}}(\mathcal{D}, \mathcal{D}') = \sup_{h \in \mathcal{H}} \left| \mathbb{P}_{(z,a) \sim \mathcal{D}}(\mathcal{I}_h) - \mathbb{P}_{(z,a) \sim \mathcal{D}^{'}}(\mathcal{I}_h) \right|.
$$
\end{definition}
The $\mathcal{H}$-divergence measures the difference between the behavior of policies in $\mathcal{H}$ when examples are drawn from $\mathcal{D}$, $\mathcal{D}'$; this plays an important role in theoretical bounds. In the next lemma, we establish a bound on the difference between $V_S^{\Phi}(h)$ and $V^{\Phi}_T(h)$ based on the $\mathcal{H}$-divergence between source and target.
\begin{lemma} Let $h \in \mathcal{H}$ and let $\Phi$ be a representation function. Then
$$
V^{\Phi}(h) \geq V_S^{\Phi}(h) - k d_{\mathcal{H}}(\mathcal{D}_T^{\Phi}, \mathcal{D}_S^{\Phi})
$$
\end{lemma}
\begin{proof} The proof is similar to \cite{ben2007analysis,blitzer2008learning}. The following inequality holds:
\begin{eqnarray*}
V_S^{\Phi}(h) &=& \mathbb{E}_{(z,a) \sim \mathcal{D}_S^{\Phi}} \left[ \frac{m_a^{\Phi}(z)}{1/k} 1(h(z) = a) \right] \\
& \leq & \mathbb{E}_{(z,a) \sim \mathcal{D}_T^{\Phi}} \left[ \frac{m_a^{\Phi}(z)}{1/k} 1(h(z) = a) \right] \\
&& +\, k \left| \mathbb{P}_{(z,a) \sim \mathcal{D}_T^{\Phi}}(\mathcal{I}_h) - \mathbb{P}_{(z,a) \sim \mathcal{D}_{S}^{\Phi}}(\mathcal{I}_h) \right| \\
&\leq& V^{\Phi}(h) + k d_{\mathcal{H}}(\mathcal{D}_{T}^{\Phi}, \mathcal{D}_S^{\Phi})
\end{eqnarray*}
where the first inequality holds because $\frac{m_a^{\Phi}(z)}{1/k} \leq k$ for all pairs $(z,a)$, since outcomes lie in the interval
$\left[0,1\right]$.
\end{proof}
Lemma 1 shows that the true policy outcome is at least as good as the policy outcome in the observational data minus the product of the number of actions and the $\mathcal{H}$-divergence between the observational and randomized data. (So, if the divergence is small, a policy that is found to be good with respect to the observational data is guaranteed to be a good policy with respect to the true distribution.) We create a Monte-Carlo estimator $\widehat{V}_S^{\Phi}(h)$ for the policy outcome in the source data and then use the lower bound we have just established to find the best action recommendation policy.
\begin{definition} Let $\Phi$ be a representation function such that $\Phi(x_i) = z_i$. The {\em Monte-Carlo estimator} for the policy outcome in source data is given by:
$$
\widehat{V}_S^{\Phi}(h)= \frac{1}{n} \sum_{i=1}^n \frac{y_i 1(h(z_i) = a_i)}{1/k}.
$$
\end{definition}
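In code, the estimator is a single reweighted average; the logged data and policy below are illustrative stand-ins:

```python
import numpy as np

def v_hat_source(y, a, z, h, k):
    """Monte-Carlo estimator: (1/n) sum_i y_i 1(h(z_i) = a_i) / (1/k)."""
    match = (h(z) == a).astype(float)       # indicator 1(h(z_i) = a_i)
    return float(np.mean(y * match * k))    # dividing by 1/k is multiplying by k

# Toy logged data (illustrative sizes and policy).
rng = np.random.default_rng(1)
n, k = 1000, 4
z = rng.normal(size=(n, 8))                 # representations z_i = Phi(x_i)
a = rng.integers(k, size=n)                 # logged actions
y = rng.uniform(size=n)                     # observed (bandit) outcomes
policy = lambda z: np.argmax(z[:, :k], axis=1)
v = v_hat_source(y, a, z, policy, k)
assert v >= 0.0
```

Only instances where the candidate policy agrees with the logged action contribute, and the factor $k$ compensates for the $1/k$ target propensity.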
In order to provide uniform bounds on the Monte-Carlo estimator for an infinitely large class of policies, we first need to define a complexity term for the class $\mathcal{H}$. For $\epsilon > 0$, a policy class $\mathcal{H}$ and an integer $n$, the uniform covering number is defined as
$$
\mathcal{N}_{\infty}(\epsilon, \mathcal{H}, n) = \sup_{\boldsymbol{z} \in \boldsymbol{Z}^n} \mathcal{N}(\epsilon, \mathcal{H}(\boldsymbol{z}), \|\cdot \|_{\infty}),
$$
where $\mathcal{H}(\boldsymbol{z}) =\{\left(h(z_1), \ldots, h(z_n)\right): h\in\mathcal{H}\} \subset \mathbb{R}^n$, $\boldsymbol{Z}^n$ is the set of all possible $n$-tuples of representations and, for $\mathcal{B} \subset \mathbb{R}^n$, the number $\mathcal{N}(\epsilon, \mathcal{B}, \|\cdot\|_{\infty})$ is the cardinality $|\mathcal{B}_0|$ of the smallest set $\mathcal{B}_0 \subseteq \mathcal{B}$ such that $\mathcal{B}$ is contained in the union of $\epsilon$-balls centered at points in $\mathcal{B}_0$ in the metric induced by $\|\cdot\|_{\infty}$. (This is often called the covering number.) Set $\mathcal{M}(n) = 10 \mathcal{N}_{\infty}(1/n, \mathcal{H}, 2n)$. The following result provides an inequality between the estimated and true $V_S^{\Phi}(h)$ for all $h \in \mathcal{H}$.
\begin{lemma} \cite{maurer2009empirical} Fix $\delta \in \left(0,1\right)$, $n \geq 16$. Then, with probability $1 - \delta$, we have for all $h \in \mathcal{H}$:
$$
V_S^{\Phi}(h) \geq \widehat{V}_S^{\Phi}(h) - \sqrt{\frac{18 \ln(\mathcal{M}(n)/\delta)}{n}} - \frac{15\ln(\mathcal{M}(n)/\delta)}{n}
$$
\end{lemma}
In order to provide a data-dependent bound on the estimation error between $V(h)$ and $\widehat{V}_S(h)$, we need data-dependent bounds on the $\mathcal{H}$-divergence between the source and target distributions. However, we are not given samples from the target distribution, so we need to generate (random) target data. Let $\widehat{\mathcal{D}}_S^{\Phi} = \{ (Z_i, A_i) \}_{i=1}^n$ denote the empirical distribution of the source data. From the source data, we can generate target data by simply resampling the actions uniformly, that is, $\widehat{\mathcal{D}}_T^{\Phi} = \{ (Z_i, \widetilde{A}_i)\}_{i=1}^n$ where $\widetilde{A}_i \sim \operatorname{Multinomial}(\left[1/k, \ldots, 1/k \right])$; then $\widehat{\mathcal{D}}_S^{\Phi}$ and $\widehat{\mathcal{D}}_T^{\Phi}$ are samples from $\mathcal{D}_S^{\Phi}$ and $\mathcal{D}_T^{\Phi}$, respectively. Define the empirical probability estimates of the characteristic sets as
\begin{eqnarray*}
\mathbb{P}_{(z,a) \sim \widehat{\mathcal{D}}_S^{\Phi}}(\mathcal{I}_h) &=& \frac{1}{n}\sum_{i=1}^n 1(h(Z_i) = A_i), \\
\mathbb{P}_{(z,a) \sim \widehat{\mathcal{D}}_T^{\Phi}}(\mathcal{I}_h) &=& \frac{1}{n}\sum_{i=1}^n 1(h(Z_i) = \tilde{A}_i).
\end{eqnarray*}
One can then compute the empirical $\mathcal{H}$-divergence between the two samples $\widehat{\mathcal{D}}_S^{\Phi}$ and $\widehat{\mathcal{D}}_T^{\Phi}$ by
\begin{equation}
d_{\mathcal{H}}(\widehat{\mathcal{D}}_T^{\Phi}, \widehat{\mathcal{D}}_S^{\Phi}) = \sup_{h \in \mathcal{H}} \left| \mathbb{P}_{(z,a) \sim \widehat{\mathcal{D}}_T^{\Phi}}(\mathcal{I}_h) - \mathbb{P}_{(z,a) \sim \widehat{\mathcal{D}}_S^{\Phi}}(\mathcal{I}_h) \right|.
\end{equation}
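For a finite candidate class, the supremum in the empirical $\mathcal{H}$-divergence is a maximum and can be computed exactly; the toy class of random linear policies below is an illustrative assumption:

```python
import numpy as np

def empirical_h_divergence(z, a_src, a_tgt, policies):
    """d_H between the empirical source {(z_i, A_i)} and target {(z_i, A~_i)}
    samples, maximized over a finite set of candidate policies."""
    best = 0.0
    for h in policies:
        pred = h(z)
        p_src = np.mean(pred == a_src)   # P over source sample of I_h
        p_tgt = np.mean(pred == a_tgt)   # P over target sample of I_h
        best = max(best, abs(p_tgt - p_src))
    return best

rng = np.random.default_rng(2)
n, k = 2000, 3
z = rng.normal(size=(n, 5))
a_src = rng.integers(k, size=n)          # logged actions
a_tgt = rng.integers(k, size=n)          # uniformly resampled actions A~_i
# A small, illustrative finite policy class: random linear score maximizers.
policies = [lambda z, W=rng.normal(size=(5, k)): np.argmax(z @ W, axis=1)
            for _ in range(4)]
d = empirical_h_divergence(z, a_src, a_tgt, policies)
assert 0.0 <= d <= 1.0
```

Over the full class $\mathcal{H}$ the maximization is intractable, which is why the algorithm later approximates it with a trained domain classifier.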
In the next lemma, we provide estimation bounds between the empirical $\mathcal{H}$-divergence and true $\mathcal{H}$-divergence.
\begin{lemma} Fix $\delta \in \left(0,1\right)$, $n \geq 16$. Then, with probability $1 - 2\delta$, we have for all $h \in \mathcal{H}$:
\begin{eqnarray*}
d_{\mathcal{H}}(\mathcal{D}_T^{\Phi}, \mathcal{D}_S^{\Phi}) &\geq& d_{\mathcal{H}}(\widehat{\mathcal{D}}_T^{\Phi}, \widehat{\mathcal{D}}_S^{\Phi}) \\
&-& 2\left[\sqrt{\frac{18 \ln(\mathcal{M}(n)/\delta)}{n}} + \frac{15\ln(\mathcal{M}(n)/\delta)}{n}\right]
\end{eqnarray*}
\end{lemma}
\begin{proof}
Define
$
\beta(\delta, n) = \sqrt{\frac{18 \ln(\mathcal{M}(n)/\delta)}{n}} + \frac{15\ln(\mathcal{M}(n)/\delta)}{n}.
$
By the two-sided version of the bound in \cite{maurer2009empirical}, each of the following holds, with probability $1 - \delta$, for all $h \in \mathcal{H}$:
\begin{eqnarray*}
\left| \mathbb{P}_{(z,a) \sim \mathcal{D}_T^{\Phi}}(\mathcal{I}_h) - \mathbb{P}_{(z,a) \sim \widehat{\mathcal{D}}_T^{\Phi}}(\mathcal{I}_h) \right| &\leq& \beta(\delta, n), \\
\left| \mathbb{P}_{(z,a) \sim \mathcal{D}_S^{\Phi}}(\mathcal{I}_h) - \mathbb{P}_{(z,a) \sim \widehat{\mathcal{D}}_S^{\Phi}}(\mathcal{I}_h) \right| &\leq& \beta(\delta, n).
\end{eqnarray*}
Hence, by the union bound and the triangle inequality, the following holds for all $h \in \mathcal{H}$ with probability $1 - 2 \delta$:
\begin{multline*}
\left|\mathbb{P}_{(z,a) \sim \mathcal{D}_T^{\Phi}}(\mathcal{I}_h) - \mathbb{P}_{(z,a) \sim \mathcal{D}_S^{\Phi}}(\mathcal{I}_h)\right| \\
\geq \left|\mathbb{P}_{(z,a) \sim \widehat{\mathcal{D}}_T^{\Phi}}(\mathcal{I}_h) - \mathbb{P}_{(z,a) \sim \widehat{\mathcal{D}}_S^{\Phi}}(\mathcal{I}_h)\right| - 2\beta(\delta,n).
\end{multline*}
Taking the supremum over $\mathcal{H}$ preserves the inequality with probability $1 - 2\delta$, that is,
\begin{eqnarray*}
&&d_{\mathcal{H}}(\mathcal{D}_T^{\Phi}, \mathcal{D}_S^{\Phi}) \\
&\geq& \sup_{h \in \mathcal{H}} \left|\mathbb{P}_{(z,a) \sim \widehat{\mathcal{D}}_T^{\Phi}}(\mathcal{I}_h) - \mathbb{P}_{(z,a) \sim \widehat{\mathcal{D}}_S^{\Phi}}(\mathcal{I}_h)\right| - 2\beta(\delta,n) \\
&=& d_{\mathcal{H}}(\widehat{\mathcal{D}}_T^{\Phi}, \widehat{\mathcal{D}}_S^{\Phi}) - 2 \beta(\delta, n).
\end{eqnarray*}
\end{proof}
Finally, by combining Lemmas 1, 2 and 3, we obtain a data-dependent bound on the counterfactual estimation error.
\begin{theorem}
Fix $\delta \in (0,1)$, $n \geq 16$. Let $\Phi$ be the representation function and let $\mathcal{H}$ be the set of policies. Then, with probability at least $1 - 3\delta$, we have for all $h \in \mathcal{H}$:
\begin{eqnarray*}
V^{\Phi}(h) &\geq& \widehat{V}_S^{\Phi}(h) - k d_{\mathcal{H}}(\widehat{\mathcal{D}}_S^{\Phi}, \widehat{\mathcal{D}}_T^{\Phi}) \\
&-& 3k \bigg[\sqrt{\frac{18\ln(\mathcal{M}(n)/\delta)}{n}} + \frac{15\ln(\mathcal{M}(n)/\delta)}{n} \bigg]
\end{eqnarray*}
\end{theorem}
The result provided in Theorem 1 is constructive and motivates our optimization criterion.
\begin{figure}[h]
\includegraphics[width = 0.5\textwidth]{nn_model.png}
\caption{Neural network model based on~\cite{ganin2016domain}}
\label{fig:sigma}
\end{figure}
\section{Counterfactual Policy Optimization (CPO)}
Theorem 1 motivates a general framework for policy learning from observational data with bandit feedback. A learning algorithm following this criterion solves:
$$
\widehat{\Phi}, \widehat{h} = \arg\max_{\Phi, h} \; \widehat{V}_S^{\Phi}(h) - \lambda d_{\mathcal{H}}(\widehat{\mathcal{D}}_S^{\Phi}, \widehat{\mathcal{D}}_T^{\Phi}),
$$
where $\lambda > 0$ is the trade-off parameter between the empirical policy outcome in the source data and the empirical $\mathcal{H}$-divergence between the source and target distributions. This optimization criterion seeks a representation function under which the source and target domains are indistinguishable. Computing the empirical $\mathcal{H}$-divergence between the source and target distributions is known to be NP-hard~\cite{ganin2016domain}, but we can use recent developments in domain adversarial neural networks to find a good approximation.
\subsection{Domain Adversarial Neural Networks}
In this paper, we follow the recent work on domain adversarial training of neural networks~\cite{ganin2016domain}. For this, we need samples from the observed data - sometimes referred to as source data ($\mathcal{D}_S$) - and unlabeled samples from an ideal dataset - referred to as target data ($\mathcal{D}_T$). As mentioned, we do not have samples from an ideal dataset. Hence, we first describe batch sampling of source and target data from our dataset $\mathcal{D}$. Given a batch size of $m$, we randomly sample $m$ instances from $\mathcal{D}$ and set the domain variable $d = 0$, indicating that this is the source data. Then, we sample $m$ additional instances, excluding the instances in the source batch, and randomly assign to each an action drawn from the distribution
$\operatorname{Multinomial}(\left[1/k, \ldots, 1/k\right])$; finally, we set the domain variable $d = 1$, indicating that this is the target data. The batch generation procedure is depicted in Algorithm 1.
\begin{algorithm}[t]
\caption{Procedure: $\text{Generate}-\text{Batch}$}
\label{alg:gen_batch}
\normalsize
\begin{algorithmic}[1]
\STATE Input: Data: $\mathcal{D}_n$, Batch size: $m$
\STATE Sample $\mathcal{U} = \{u_1, \ldots, u_m\} \subset \mathcal{N} =\{1,\ldots,n \}$.
\STATE Set source set $\mathcal{S} = \{ (x_{u_i}, a_{u_i}, y_{u_i}, d_{i} =0) \}_{i=1}^m$.
\STATE Sample $\mathcal{V} =\{ v_1, \ldots, v_m\} \subset \mathcal{N} \setminus \mathcal{U}$.
\STATE Set $\mathcal{T} = \emptyset$
\FOR{i = 1, \ldots, m:}
\STATE Sample $\widetilde{a}_i \sim \operatorname{Multinomial}([1/k, \ldots, 1/k])$.
\STATE $\mathcal{T} = \mathcal{T} \cup \{ (x_{v_i}, \widetilde{a}_i, d_i = 1) \}$.
\ENDFOR
\STATE Output: $\mathcal{S}$, $\mathcal{T}$.
\end{algorithmic}
\end{algorithm}
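Algorithm 1 translates directly into a few lines of numpy; the array-based data layout and toy sizes are illustrative assumptions:

```python
import numpy as np

def generate_batch(X, A, Y, m, k, rng):
    """Sample a source batch (d = 0) and a disjoint,
    uniformly re-actioned target batch (d = 1)."""
    idx = rng.permutation(len(X))
    u, v = idx[:m], idx[m:2 * m]                 # disjoint index sets U and V
    source = (X[u], A[u], Y[u], np.zeros(m))     # logged actions, domain d = 0
    a_tilde = rng.integers(k, size=m)            # a~ ~ Multinomial([1/k, ..., 1/k])
    target = (X[v], a_tilde, np.ones(m))         # no outcomes needed, domain d = 1
    return source, target

rng = np.random.default_rng(3)
n, s, k, m = 500, 6, 4, 32                       # toy sizes
X = rng.normal(size=(n, s))
A = rng.integers(k, size=n)
Y = rng.uniform(size=n)
(Sx, Sa, Sy, Sd), (Tx, Ta, Td) = generate_batch(X, A, Y, m, k, rng)
assert len(Sx) == len(Tx) == m
```

The target batch deliberately discards the logged actions and outcomes: only the representation-action pairs matter for the domain classifier.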
Our algorithm consists of three blocks: a representation block, a domain block and a policy block. In the representation block, we seek a map $\Phi: \mathcal{X} \rightarrow \mathcal{Z}$ combining two objectives: (i) high predictive power on the outcomes, and (ii) low predictive power on the domain. Let $F_r$ denote a parametric function that maps the features to representations, that is, $z_i = F_r(x_i; \theta_r)$ where $\theta_r$ is the parameter vector of the representation block. The representations are input to both the domain and policy blocks. Let $F_p$ denote the mapping from a representation $z_i$ to a probability distribution over the actions $\hat{q}_i = \left[ \hat{q}_{i,0}, \ldots, \hat{q}_{i, k-1}\right]$, i.e., $\hat{q}_i = F_p(z_i; \theta_p)$ where $\theta_p$ is the parameter vector of the policy block. For an instance with features $x_i$, the element $\hat{q}_{i,a}$ of the policy block output is the probability of recommending action $a$ for instance $i$. The estimated policy outcome in the source data is then given by
$$
\widehat{V}_{S}^{\Phi}(h) = \frac{1}{n} \sum_{i=1}^n \frac{y_i \hat{q}_{i, a_i}}{1/k}.
$$
Although our theory applies only to deterministic policies, we allow for stochastic policies in order to make the optimization problem tractable. This is not optimal; however, as we show in our numerical results, this approach still achieves significant gains with respect to the benchmark algorithms. Let $G_d$ be a mapping from a representation-action pair $(z_i, a_i)$ to the probability that the instance was generated from the target domain, i.e., $\hat{p}_i = G_d(z_i, a_i; \theta_d)$ where $\theta_d$ is the parameter vector of the domain block.
Note that the last layer of the policy block is a softmax operation, which has exponential terms. Instead of directly maximizing
$\widehat{V}_{S}^{\Phi}(h)$, we use a modified cross-entropy loss to make the optimization criterion more robust. The policy loss is then
$$
\mathcal{L}_p^i(\theta_r, \theta_p) = \frac{-y_i \log(\hat{q}_{i, a_i})}{1/k}.
$$
At the testing stage, we convert these probabilities into action recommendations simply by recommending the action with the highest probability $\hat{q}_{i,a}$. We set the domain loss to be the standard cross-entropy loss between the estimated domain probability $\hat{p}_i$ and the actual domain label $d_i$; this is the standard classification loss used in the literature and is given by
$$
\mathcal{L}_d^i(\theta_r, \theta_d) = -\left[ d_i \log(\hat{p}_i) + (1 - d_i) \log (1 - \hat{p}_i) \right].
$$
Our goal in this paper is to find the saddle point that optimizes the weighted sum of the policy and domain losses. The total loss is given by
\begin{eqnarray*}
\mathcal{E}(\theta_r, \theta_p, \theta_d) &=& \sum_{i \in \mathcal{S}} \mathcal{L}_p^i(\theta_r, \theta_p) \notag \\
&\;&\;\; - \lambda \left( \sum_{i \in \mathcal{S}} \mathcal{L}_d^i(\theta_r, \theta_d) + \sum_{i \in \mathcal{T}} \mathcal{L}_d^i(\theta_r, \theta_d) \right)
\end{eqnarray*}
where $\lambda > 0$ is the trade-off between the policy and domain losses. The saddle point is
\begin{eqnarray*}
\left(\hat{\theta}_{r}, \hat{\theta}_{p} \right) &=& \arg\min_{\theta_{r}, \theta_{p}} \; \mathcal{E}(\theta_{r}, \theta_p, \hat{\theta}_d), \notag \\
\hat{\theta}_d &=& \arg\max_{\theta_{d}} \; \mathcal{E}(\hat{\theta}_{r}, \hat{\theta}_p, \theta_d).
\end{eqnarray*}
The training procedure of DACPOL (Domain Adversarial Counterfactual POLicy optimization) is depicted in Algorithm 2. The neural network architecture is depicted in Figure 1.
For a test instance with covariates $x^{*}$, we compute the action recommendation with the following procedure: we first compute the representation $z^{*}= F_r(x^{*}; \theta_r)$, then compute the action probabilities $\hat{q}^{*} = F_p(z^{*}; \theta_p)$. We finally recommend the action $\hat{A}(x^{*}) = \arg\max_{a \in \mathcal{A}} \hat{q}^{*}_{a}$.
\begin{algorithm}[t]
\caption{Training Procedure: $\text{DACPOL}$}
\label{alg:dacpol}
\normalsize
\begin{algorithmic}[1]
\STATE Input: Data: $\mathcal{D}$, Batch size: $m$, Learning rate: $\mu$
\FOR{until convergence}
\STATE $(\mathcal{S}, \mathcal{T}) = \text{Generate-Batch}(\mathcal{D}, m)$.
\STATE Compute $\mathcal{L}_p^{\mathcal{S}}( \theta_{r}, \theta_p) = \frac{1}{|\mathcal{S}|} \sum_{i \in \mathcal{S}} \mathcal{L}_p^i( \theta_{r}, \theta_p)$
\STATE Compute $\mathcal{L}_d^{\mathcal{S}}(\theta_{r}, \theta_d) = \frac{1}{|\mathcal{S}|} \sum_{i \in \mathcal{S}}\mathcal{L}_d^i(\theta_{r}, \theta_d)$
\STATE Compute $\mathcal{L}_d^{\mathcal{T}}(\theta_{r}, \theta_d) = \frac{1}{|\mathcal{T}|} \sum_{i \in \mathcal{T}}\mathcal{L}_d^i(\theta_{r}, \theta_d)$
\STATE Compute $\mathcal{L}_d(\theta_{r}, \theta_d) = \mathcal{L}_d^{\mathcal{S}}(\theta_{r}, \theta_d) + \mathcal{L}_d^{\mathcal{T}}(\theta_{r}, \theta_d)$
\STATE $\theta_{r} \leftarrow \theta_{r} - \mu \left (\frac{\partial \mathcal{L}_p^{\mathcal{S}}( \theta_{r}, \theta_p)}{\partial \theta_{r}} -\lambda \frac{\partial \mathcal{L}_d( \theta_{r}, \theta_d)}{\partial \theta_{r}} \right)$
\STATE $\theta_p \leftarrow \theta_p - \mu \frac{\partial \mathcal{L}_p^{\mathcal{S}}( \theta_{r}, \theta_p)}{\partial \theta_p}$
\STATE $\theta_d \leftarrow \theta_d - \mu \frac{\partial \mathcal{L}_d( \theta_{r}, \theta_d)}{\partial \theta_d}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
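To make the update steps of Algorithm 2 concrete, here is a hand-derived numpy sketch of one saddle-point step for a linear representation block and a logistic domain block; the linearity, shapes and step sizes are illustrative assumptions, and the policy-loss term of the $\theta_r$ update is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(5)
n, s, r = 256, 6, 4                          # batch size, feature dim, representation dim
X = rng.normal(size=(n, s))
d = rng.integers(2, size=n).astype(float)    # domain labels: 0 = source, 1 = target
Wr = rng.normal(size=(s, r)) * 0.1           # representation block (linear, illustrative)
wd = rng.normal(size=r) * 0.1                # domain block (logistic, illustrative)
mu, lam = 0.05, 1.0

def dom_loss(Wr, wd):
    p = 1.0 / (1.0 + np.exp(-(X @ Wr) @ wd))
    return float(np.mean(-(d * np.log(p + 1e-12) + (1 - d) * np.log(1 - p + 1e-12))))

# One saddle-point step.
Z = X @ Wr
p = 1.0 / (1.0 + np.exp(-Z @ wd))
g = (p - d) / n                              # d L_d / d logit, per instance
grad_wd = Z.T @ g                            # gradient of L_d w.r.t. the domain block
grad_Wr = X.T @ np.outer(g, wd)              # gradient of L_d w.r.t. the representation
wd = wd - mu * grad_wd                       # domain block descends L_d
Wr = Wr + mu * lam * grad_Wr                 # representation ascends L_d (gradient reversal)
assert np.isfinite(dom_loss(Wr, wd))
```

The sign flip on the representation update is exactly the gradient-reversal trick of \cite{ganin2016domain}; in a deep implementation the same effect is obtained with a gradient reversal layer rather than hand-written gradients.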
\section{Numerical Results}
Here we describe the performance of our algorithm. Note that it is difficult (almost impossible) to test and validate the algorithm on real data with missing counterfactual outcomes. In this paper, we provide results both on a semi-synthetic breast cancer dataset and on a supervised UCI dataset (Statlog).
\subsection{Dataset Description}
\textbf{Breast cancer dataset: }
The dataset includes $10,000$ records of breast cancer patients participating in the National Surgical Adjuvant Breast and Bowel Project (NSABP); see \cite{yoon2016discovery}. Each instance consists of the following information about the patient: age, menopausal status, race, estrogen receptor, progesterone receptor, human epidermal growth factor receptor 2 (HER2NEU), tumor stage, tumor grade, Positive Axillary Lymph Node Count (PLNC), WHO score, surgery type, prior chemotherapy, prior radiotherapy and histology. The treatment is a choice among six chemotherapy regimens, of which only five are used: AC, ACT, CAF, CEF, CMF. The outcomes for these regimens were derived based on 32 references from PubMed Clinical Queries. The data contains the feature vector $x$ and all derived outcomes $\{ Y^{(a)} \}_{a \in \mathcal{A}}$.
\textbf{UCI Statlog Dataset:}
This dataset includes the multi-spectral values of pixels in a satellite image. The feature vector contains $36$ pixel values and the aim is to predict the true description of the plot (barren soil, grass, cotton crop, etc.) from the feature vector. We follow the same procedure summarized in \cite{beygelzimer2009offset}. That is, we treat each label as an action and set the outcome of the action that matches the label to $1$ and the rest to $0$.
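The supervised-to-bandit conversion just described can be sketched as follows; the uniform logging policy and variable names are illustrative:

```python
import numpy as np

def supervised_to_bandit(labels, k, logging_policy, rng):
    """Treat each class label as an action; the observed outcome is 1 iff the
    logged action matches the true label (all other outcomes are 0)."""
    a = logging_policy(rng)                  # actions chosen by some logging policy
    y = (a == labels).astype(float)          # bandit feedback: 1 on match, else 0
    return a, y

rng = np.random.default_rng(6)
n, k = 1000, 6                               # Statlog has 6 land-cover classes
labels = rng.integers(k, size=n)             # stand-in for the true labels
a, y = supervised_to_bandit(labels, k, lambda r: r.integers(k, size=n), rng)
assert set(np.unique(y)) <= {0.0, 1.0}
```

Under a uniform logging policy the observed reward rate is about $1/k$, which is why a learned policy can be scored against the known labels at test time.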
\vspace{-.5em}
\subsection{Experimental Setup}
We generate an artificially biased dataset $\mathcal{D}^n =\{ (X_i, A_i, Y_i)\}$ by the following procedure: (i) we first draw random weight vectors $w_a \in \mathbb{R}^s$, one per action, with $w_{a} \sim \mathcal{N}(0, \sigma I)$, where $\sigma > 0$ is a parameter used to generate datasets with different levels of selection bias; (ii) we then generate the actions in the data according to the multinomial logistic model $P(A = a | X = x) = \exp(x^T w_a)/ \sum_{a' \in \mathcal{A}} \exp(x^T w_{a'})$.
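The biasing procedure can be sketched as follows; larger $\sigma$ makes the logged actions more strongly feature-dependent and hence more biased:

```python
import numpy as np

def biased_actions(X, k, sigma, rng):
    """Draw logging actions from the multinomial logistic model
    P(A = a | X = x) ~ exp(x^T w_a), with w_a ~ N(0, sigma I)."""
    s = X.shape[1]
    W = rng.normal(scale=sigma, size=(s, k))         # larger sigma -> stronger bias
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return np.array([rng.choice(k, p=p) for p in probs])

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 10))
a = biased_actions(X, k=5, sigma=2.0, rng=rng)
assert a.shape == (200,)
```

With $\sigma \to 0$ the logging policy becomes uniform (no selection bias), so sweeping $\sigma$ produces the family of datasets used in the experiments.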
For the breast cancer dataset, we generate a $56/24/20$ split of the data to train, validate and test our DACPOL. For the Statlog data, we reserve $30\%$ of the training data for validation and use the provided testing set to evaluate our algorithm. The hyperparameter list we use in our validation step is $10^\gamma/2$ with $\gamma \in \left[ -4, -3, -2, -1, 0, 0.5, 0.75, 1, 1.5, 2, 3\right]$. We generate $100$ different datasets by following the procedure described above and report the average and $95\%$ confidence intervals.
The performance metric we use to evaluate our algorithm in this paper is loss, defined as $1 - \text{accuracy}$, where accuracy is the fraction of test instances in which the recommended and best actions match. Note that we can evaluate the accuracy metric because we have the ground truth outcomes in the testing set; of course, the ground truth outcomes are not used by any algorithm during training and validation. In our experiments, we use 1-1-2 representation/domain/outcome fully-connected layers. The neural network is trained by backpropagation via the Adam optimizer~\cite{kingma2014adam} with an initial learning rate of $.01$. We begin with an initial learning rate $\mu$ and tradeoff parameter $\lambda$ and adapt both iteratively to get our result: along the way, we decrease the learning rate $\mu$ and increase the tradeoff parameter $\lambda$. This is standard procedure in training domain adversarial neural networks~\cite{ganin2016domain}. We implement DACPOL in the TensorFlow environment.
\vspace{-.5em}
\subsection{Benchmarks}
We compare the performance of DACPOL with two benchmarks:
\begin{itemize}
\item \textbf{POEM}~\cite{swaminathan15counterfactual} is a linear policy optimization algorithm which minimizes the empirical risk of IPS estimator and variance.
\item \textbf{IPS} is POEM without variance regularization.
\end{itemize}
Both IPS and POEM deal with the selection bias in the data by using the propensity scores. Note that DACPOL does not require the propensity scores to be known in order to address the selection bias. Hence, in order to make fair comparisons, we estimate the propensity scores from the data, and use these estimates in IPS and POEM.
\vspace{-.5em}
\subsection{Results}
\subsubsection{Comparisons with the benchmarks}
Table~\ref{table:res_benchmarks} shows the discriminative performance of DACPOL (in which we optimize $\lambda$) and DACPOL(0) (in which we set $\lambda = 0$) against the benchmark algorithms. We compare the algorithms on the breast cancer data with $\sigma = 0.3$ and the Statlog data with $\sigma = 0.2$. As seen from the table, our algorithm outperforms the benchmark algorithms in terms of the loss metric defined above. The empirical gain with respect to the POEM algorithm has three sources: (i) DACPOL does not need propensity scores; (ii) DACPOL optimizes over all policies, not just linear policies; (iii) DACPOL trades off between the predictive power of the features and the bias they introduce. (We further illustrate the last source of gain in the next subsection with a toy example.)
\begin{table}[h]
\normalsize
\centering
\begin{tabular}{|c|c|c|}
\hline
Algorithm & Breast Cancer & Statlog \\ \hline
DACPOL & $.292 \pm .006$ & $.249 \pm .015$ \\ \hline
DACPOL(0) & $.321 \pm .006$ & $.261 \pm .020$ \\ \hline \hline
POEM & $.394 \pm .004$ & $.432 \pm .016$ \\ \hline
IPS & $.397 \pm .004$ & $.454 \pm .017$ \\ \hline
\end{tabular}
\caption{Loss Comparisons for Breast Cancer and Statlog Dataset; Means and 95\% Confidence Intervals}
\label{table:res_benchmarks}
\end{table}
\vspace{-1em}
\subsubsection{Domain Loss and Policy Loss}
The hyperparameter $\lambda$ controls the domain loss in the training procedure. As $\lambda$ increases, the domain loss in training DACPOL increases; eventually the source and target become indistinguishable, the representations become balanced, and the loss of DACPOL reaches a minimum. If we increase $\lambda$ beyond that point, the algorithm classifies the source as the target and the target as the source, the representations become unbalanced, and the loss of DACPOL increases again. Figure 2 illustrates this effect for the breast cancer dataset.
\begin{figure}[h]
\includegraphics[width = 0.5\textwidth]{lambda}
\caption{The effect of the domain loss on DACPOL performance}
\label{fig:lambda}
\end{figure}
\vspace{-1.2em}
\subsubsection{The effect of selection bias in DACPOL}
In this subsection, we show the effect of selection bias on the performance of our algorithm by varying the parameter
$\sigma$ in our data generation process: a larger value of $\sigma$ creates more biased data. Figure 3 highlights two important points: (i) as the selection bias increases, the loss of DACPOL increases; (ii) as the selection bias increases, domain adversarial training becomes more effective, and hence the improvement of DACPOL over DACPOL(0) increases.
\begin{figure}[h]
\includegraphics[width = 0.5\textwidth]{sigma}
\caption{The effect of selection bias on DACPOL performance}
\label{fig:sigma}
\end{figure}
\vspace{-0.5em}
\subsubsection{Empirical Gains with respect to CRM }
In this subsection, we show an advantage of our CPO principle over the CRM principle: selection bias arising from irrelevant features is less harmful to CPO. This is because our representation optimization removes the effect of the irrelevant features from the learned representations and then uses only the relevant features to directly estimate the policy outcome, as if it had access to randomized data. In contrast, the performance of the CRM principle (whose objective is to maximize the IPS estimator minus the variance of the policy outcome) degrades with additional irrelevant features, because the inverse propensities due to irrelevant features become large, and hence the variance of the IPS estimator also becomes large. To see this, we use a toy example. We begin with 15 relevant features $x$. We then generate $d$ additional irrelevant features $z \sim \mathcal{N}(0, I)$. We create a logging policy that depends only on the irrelevant features using the logistic distribution. As $d$ increases, the selection bias also increases. As Figure 4 shows, POEM is more sensitive than DACPOL to this increase in the selection bias.
\begin{figure}[h]
\includegraphics[width = 0.5\textwidth]{d}
\caption{The effect of irrelevant features on DACPOL vs. POEM}
\end{figure}
\vspace{-2em}
\section{Conclusion} \vspace{-.5em}
This paper presented estimation bounds on the error between actual and estimated policy outcomes from observational data. Our theoretical results show that the estimation error from observational data depends on the $\mathcal{H}$-divergence between the observational and randomized data. This result motivated the development of a domain adversarial neural network to learn an optimal policy from observational data. We illustrated various features of our algorithm on semi-synthetic and real data. Future work includes multi-stage actions and time-varying features.
\bibliographystyle{icml2018}
\section{Asynchronous Proximal Gradient Descent}
\label{sec:aspg}
We now present our \emph{asynchronous proximal gradient descent} (Asyn-ProxSGD) algorithm, which is the main contribution in this paper. In the asynchronous algorithm, different workers may be in different local iterations due to random delays in computation and communication.
For ease of presentation, let us first assume each worker uses only one random sample at a time to compute its stochastic gradient, which naturally generalizes to using a mini-batch of random samples to compute a stochastic gradient. In this case, each worker will independently and asynchronously repeat the following steps:
\begin{itemize}
\item Pull the latest model $x$ from the server;
\item Calculate a gradient $\tilde{G}(x;\xi)$ based on a random sample $\xi$ locally;
\item Push the gradient $\tilde{G}(x; \xi)$ to the server.
\end{itemize}
Here we use $\tilde{G}$ to emphasize that the gradient computed on workers \emph{may be delayed}. For example, all workers but worker $j$ have completed their tasks of iteration $t$, while worker $j$ still works on iteration $t-1$. In this case, the gradient $\tilde{G}$ is not computed based on the current model $x^t$ but from a delayed one $x^{t-1}$.
In our algorithm, the server performs an averaging over the received sample gradients as soon as $N$ gradients are received, and then performs a proximal gradient descent update on the model $x$, no matter which workers these $N$ gradients come from. This means that the server may have received multiple gradients from one worker while receiving none from another.
In general, when each mini-batch has $N$ samples, and each worker processes $N/m$ random samples to calculate a stochastic gradient to be pushed to the server, the proposed Asyn-ProxSGD algorithm is described in Algorithm~\ref{alg:asyn-prox-sgd} leveraging a parameter server architecture. The server maintains a counter $s$. Once $s$ reaches $m$, the server has received gradients that contain information about $N$ random samples (no matter where they come from) and will perform a proximal model update.
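The server-side logic described above can be sketched as follows. This is our illustrative simplification (not the paper's implementation): it uses an $\ell_1$ regularizer $h = \lambda\norm{\cdot}_1$ so that the proximal step has the closed-form soft-thresholding update, and it accumulates pushed worker gradients in a buffer until the counter reaches $m$.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (an example choice of h)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

class ProxServer:
    """Server sketch: accumulate pushed gradients; once m gradients
    (covering N samples' worth of information) have arrived, perform
    one proximal gradient step and reset the counter."""
    def __init__(self, x0, m, eta, lam):
        self.x, self.m, self.eta, self.lam = x0.copy(), m, eta, lam
        self.buf, self.s = np.zeros_like(x0), 0

    def push(self, grad):
        self.buf += grad
        self.s += 1
        if self.s == self.m:  # counter s reached m: proximal update
            avg = self.buf / self.m
            self.x = soft_threshold(self.x - self.eta * avg,
                                    self.eta * self.lam)
            self.buf[:] = 0.0
            self.s = 0

    def pull(self):
        return self.x.copy()
```

A worker simply alternates `x = server.pull()`, computes a stochastic gradient at `x`, and calls `server.push(grad)`; no synchronization barrier is needed.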
\section{Concluding Remarks}
\label{sec:conclude}
In this paper, we study asynchronous parallel implementations of stochastic proximal gradient methods for solving nonconvex optimization problems, with convex yet possibly nonsmooth regularization.
However, compared to asynchronous parallel stochastic gradient descent (Asyn-SGD), which targets smooth optimization, the understanding of the convergence and speedup behavior of stochastic algorithms for nonsmooth regularized optimization problems is quite limited, especially when the objective function is nonconvex. To fill this gap, we propose an asynchronous proximal stochastic gradient descent (Asyn-ProxSGD) algorithm with convergence rates provided for nonconvex problems. Our theoretical analysis suggests that the same order of convergence rate can be achieved for asynchronous ProxSGD on nonsmooth problems as for asynchronous SGD, under constant minibatch sizes, without making additional assumptions on variance reduction. Furthermore, a linear speedup is proven to be achievable for asynchronous ProxSGD when the number of workers is bounded by $O({K}^{1/4})$.
\section{Convergence Analysis}
\label{sec:theory}
To facilitate the analysis of Algorithm~\ref{alg:asyn-prox-sgd}, we rewrite it in an equivalent global view (from the server's perspective), as described in Algorithm~\ref{alg:apsgd-global}. In this algorithm, we use an iteration counter $k$ to keep track of how many times the model $x$ has been updated on the server; $k$ increments every time a push request (model update request) is completed. Note that such a counter $k$ is \emph{not} required by workers to compute gradients and is different from the counter $t$ in Algorithm~\ref{alg:asyn-prox-sgd}---$t$ is maintained by each worker to count how many sample gradients have been computed locally.
In particular, for every $N$ stochastic sample gradients received, the server simply aggregates them by averaging:
{
\begin{equation}
\tilde{G}^k := \frac{1}{N}\sum_{i=1}^N \nabla F(x^{k-\tau(k,i)}; \xi_{k, i}),
\label{eq:grad_aspg}
\end{equation}
}
where $\tau(k, i)$ indicates that the stochastic gradient $\nabla F(x^{k-\tau(k,i)}; \xi_{k, i})$ received at iteration $k$ could have been computed based on an older model $x^{k-\tau(k,i)}$ due to communication delay and asynchrony among workers. Then, the server updates $x^k$ to $x^{k+1}$ using proximal gradient descent.
\begin{algorithm}[htpb]
\caption{Asyn-ProxSGD (from a Global Perspective)}
\label{alg:apsgd-global}
\begin{algorithmic}[1]
\State Initialize $x^1$.
\For {$k=1,\ldots, K$}
\State {Randomly select $N$ training samples indexed by $\xi_{k, 1}, \ldots, \xi_{k, N}$.}
\State {Calculate the averaged gradient $\tilde{G}^k$ according to \eqref{eq:grad_aspg}.}
\State {$x^{k+1} \gets \prox{\eta_k h}(x^{k} - \eta_k \tilde{G}^{k})$.}
\EndFor
\end{algorithmic}
\end{algorithm}
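A runnable sketch of this global view follows. The delays $\tau(k,i)$ are simulated here as uniform random staleness bounded by $T$ (an assumption for illustration; in a real system the delays come from communication and scheduling), and an $\ell_1$ regularizer stands in for $h$ so that the prox step is a simple soft-threshold. All names are ours.

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t * ||.||_1, used as an example of prox_{eta h}."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def asyn_proxsgd_global(grad_f, x0, K, N, eta, lam, T, seed=0):
    """Global-view Asyn-ProxSGD sketch: at iteration k, each of the N
    stochastic gradients may be evaluated at a stale iterate
    x^{k - tau} with tau <= min(T, k-1)."""
    rng = np.random.default_rng(seed)
    hist = [x0.copy()]          # window of the last T+1 iterates
    x = x0.copy()
    for k in range(1, K + 1):
        g = np.zeros_like(x)
        for _ in range(N):
            tau = rng.integers(0, min(T, k - 1) + 1)
            g += grad_f(hist[-(tau + 1)], rng)   # stale stochastic gradient
        g /= N                                   # averaging as in (grad_aspg)
        x = soft_threshold(x - eta * g, eta * lam)  # proximal step
        hist.append(x.copy())
        if len(hist) > T + 1:
            hist.pop(0)
    return x
```

With bounded staleness and a small enough step size, the iterates settle near the minimizer of the composite objective, mirroring the behavior guaranteed by the analysis below.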
\subsection{Assumptions and Metrics}
We make the following assumptions for convergence analysis. We assume that $f(\cdot)$ is a smooth function with the following properties:
\begin{assumption}[Lipschitz Gradient]
For function $f$ there is a Lipschitz constant $L>0$ such that
{
\begin{equation}
\norm{\nabla f(x) - \nabla f(y)} \leq L \norm{x - y}, \forall x, y \in \real{d}.
\end{equation}
}
\label{asmp:smooth}
\vspace{-2mm}
\end{assumption}
As discussed above, we assume that $h$ is a proper, closed and convex function, which is not necessarily smooth.
If the algorithm has been executed for $k$ iterations, we let $\mathcal{F}_k$ denote the set that consists of all the samples used up to iteration $k$. Since $\mathcal{F}_k \subseteq \mathcal{F}_{k'}$ for all $k \leq k'$, the collection of all such $\mathcal{F}_k$ forms a \emph{filtration}. Under such settings, we restrict our attention to stochastic gradients that are unbiased and have bounded variance, assumptions that are common in the analysis of \emph{stochastic} gradient descent or \emph{stochastic} proximal gradient algorithms, e.g., \citep{lian2015asynchronous,ghadimi2016mini}.
\begin{assumption}[Unbiased gradient]
For any $k$, we have $\mathbb{E}[G_k | \mathcal{F}_{k}] = g_k$.
\label{asmp:unbias_grad}
\end{assumption}
\begin{assumption}[Bounded variance]
The variance of the stochastic gradient is bounded by $\mathbb{E}[\norm{G(x;\xi) - \nabla f(x)}^2] \leq \sigma^2$.
\label{asmp:var_grad}
\end{assumption}
We make the following assumptions on the delay and independence:
\begin{assumption}[Bounded delay]
All delay variables $\tau(k,i)$ are bounded by $T$: $\max_{k, i} \tau(k,i) \leq T$.
\label{asmp:bound1}
\end{assumption}
\begin{assumption}[Independence]
All random variables $\xi_{k, i}$ for all $k$ and $i$ in Algorithm~\ref{alg:apsgd-global} are mutually independent.
\label{asmp:indie1}
\end{assumption}
The assumption of bounded delay is to guarantee that gradients from workers should not be too old. Note that the maximum delay $T$ is roughly \emph{proportional to the number of workers} in practice. This is also known as \emph{stale synchronous parallel} \citep{ho2013more} in the literature. Another assumption on independence can be met by selecting samples with \emph{replacement}, which can be implemented using some distributed file systems like HDFS \citep{borthakur2008hdfs}. These two assumptions are common in convergence analysis for asynchronous parallel algorithms, e.g., \citep{lian2015asynchronous,davis2016sound}.
\subsection{Theoretical Results}
We present our main convergence theorem as follows:
\begin{theorem}
If Assumptions~\ref{asmp:bound1} and \ref{asmp:indie1} hold and the step length sequence $\{\eta_k\}$ in Algorithm~\ref{alg:apsgd-global} satisfies
{
\begin{equation}
\eta_k \leq \frac{1}{16L},\quad 6\eta_k L^2 T \sum_{l=1}^T \eta_{k+l} \leq 1,
\end{equation}
}
for all $k=1,2,\ldots, K$, we have the following ergodic convergence rate for Algorithm~\ref{alg:apsgd-global}:
{
\begin{equation}
\begin{split}
&\quad \frac{\sum_{k=1}^K (\eta_k - 8L\eta_k^2) \mathbb{E}[\norm{P(x^k, g^k, \eta_k)}^2]}{\sum_{k=1}^K (\eta_k - 8L\eta_k^2) } \\
&\leq \frac{8(\Psi(x^1) - \Psi(x^*))}{\sum_{k=1}^K (\eta_k-8L\eta_k^2)} + \frac{\sum_{k=1}^K \left(8 L\eta_k^2 + 12\eta_kL^2T \sum_{l=1}^{T}\eta_{k-l}^2 \right) \sigma^2}{N\sum_{k=1}^K (\eta_k - 8L\eta_k^2)},
\end{split}
\end{equation}
}
where the expectation is taken in terms of all random variables in Algorithm~\ref{alg:apsgd-global}.
\label{thm:aspg_convergence}
\end{theorem}
Taking a closer look at Theorem~\ref{thm:aspg_convergence}, we can properly choose the learning rate $\eta_k$ as a constant value and derive the following convergence rate:
\begin{corollary}
Let the step length be a constant, i.e.,
{
\begin{equation}
\eta := \sqrt{\frac{(\Psi(x^1) - \Psi(x^*))N}{2KL\sigma^2}}.
\end{equation}
}
If the number of iterations $K$ and the delay bound $T$ satisfy
{
\begin{equation}\label{eq:T_bound}
K \geq \frac{128(\Psi(x^1) - \Psi(x^*))NL}{\sigma^2} (T+1)^4,
\end{equation}
}
then the output of Algorithm~\ref{alg:apsgd-global} satisfies the following ergodic convergence rate:
{
\begin{equation}
\mathop{\min}_{k=1,\ldots,K} \mathbb{E}[\norm{P(x^k, g^k, \eta_k)}^2]
\leq \frac{1}{K}\sum_{k=1}^K \mathbb{E}[\norm{P(x^k, g^k, \eta_k)}^2]
\leq 32\sqrt{\frac{2(\Psi(x^1) - \Psi(x^*))L\sigma^2}{KN}}.
\label{eq:convergence_1}
\end{equation}
}
\label{corr:aspg_convergence}
\end{corollary}
\begin{remark}[Consistency with ProxSGD]
When $T=0$, our proposed Asyn-ProxSGD reduces to the vanilla ProxSGD (e.g., \citep{ghadimi2016mini}). Thus, the iteration complexity is $O(1/\epsilon^2)$ according to \eqref{eq:convergence_1}, attaining the same result as that in \citep{ghadimi2016mini} \emph{yet without assuming increased mini-batch sizes}.
\end{remark}
\begin{remark}[Linear speedup w.r.t. the staleness]
From \eqref{eq:convergence_1} we can see that linear speedup is achievable, as long as the delay $T$ is bounded by $O(K^{1/4})$ (if other parameters are constants). The reason is that by \eqref{eq:T_bound} and \eqref{eq:convergence_1}, as long as $T$ is no more than $O(K^{1/4})$, the iteration complexity (from a global perspective) to achieve $\epsilon$-optimality is $O(1/\epsilon^2)$, which is independent from $T$.
\end{remark}
\begin{remark}[Linear speedup w.r.t. number of workers]
As the iteration complexity is $O(1/\epsilon^2)$ to achieve $\epsilon$-optimality, it is also independent from the number of workers $m$ if assuming other parameters are constants. It is worth noting that the delay bound $T$ is roughly proportional to the number of workers. As the iteration complexity is independent from $T$, we can conclude that the total iterations will be shortened to $1/T$ of a single worker's iterations if $\Theta(T)$ workers work in parallel, achieving nearly linear speedup.
\end{remark}
\begin{remark}[Comparison with Asyn-SGD]
Compared with asynchronous SGD \citep{lian2015asynchronous}, in which $T$ or the number of workers should be bounded by $O(\sqrt{K/N})$ to achieve linear speedup, here Asyn-ProxSGD is more sensitive to delays and more suitable for a smaller cluster.
\end{remark}
\section{Preliminaries}
\label{sec:prelim}
In this paper, we use $f(x)$ as defined in \eqref{eq:original}, and $F(x;\xi)$ as a function whose stochastic nature comes from the random variable $\xi$, representing a random index selected from the training set $\{1,\ldots, n\}$. We use $\norm{x}$ to denote the $\ell_2$ norm of the vector $x$, and $\innerprod{x, y}$ to denote the inner product of two vectors $x$ and $y$. We use $g(x)$ to denote the ``true'' gradient $\nabla f(x)$, and $G(x;\xi)$ to denote the stochastic gradient $\nabla F(x;\xi)$.
For a random variable or vector $X$, let $\mathbb{E}[X|\mathcal{F}]$ be the conditional expectation of $X$ w.r.t. a sigma algebra $\mathcal{F}$. We denote by $\partial h(x)$ the \emph{subdifferential} of $h$. A point $x$ is a critical point of $\Psi$ iff $0 \in \nabla f(x) + \partial h(x)$.
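Our convergence results are stated in terms of the \emph{gradient mapping} $P(x, g, \eta)$; we record the standard definition here (following \citep{ghadimi2016mini}):
\begin{equation*}
P(x, g, \eta) := \frac{1}{\eta}\left(x - \prox{\eta h}(x - \eta g)\right).
\end{equation*}
A point $x$ is a critical point of $\Psi$ if and only if $P(x, \nabla f(x), \eta) = 0$ for any $\eta > 0$; hence $\norm{P(x^k, g^k, \eta_k)}^2$ is a natural measure of stationarity for the composite problem, generalizing $\norm{\nabla f(x^k)}^2$ from the smooth case.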
\subsection{Stochastic Optimization Problems}
In this paper, we consider the following \emph{stochastic} optimization problem instead of the original deterministic version \eqref{eq:original}:
\begin{equation}
\begin{split}
\mathop{\min}_{x\in \real{d}} &\quad \Psi(x):= \mathbb{E}_\xi [F(x; \xi)] + h(x),
\end{split}
\label{eq:stochastic}
\end{equation}
where the stochastic nature comes from the random variable $\xi$, which in our problem settings, represents a random index selected from the training set $\{1, \ldots, n\}$. Therefore, \eqref{eq:stochastic} attempts to minimize the expected loss of a random training sample plus a regularizer $h(x)$. In this work, we assume the function $h$ is proper, closed and convex, yet \emph{not necessarily smooth}.
\subsection{Proximal Gradient Descent}
The proximal operator is fundamental to many algorithms to solve problem \eqref{eq:original} as well as its stochastic variant \eqref{eq:stochastic}.
\begin{definition}[Proximal operator]
The proximal operator $\prox{}$ of a point $x \in \real{d}$ under a proper and closed function $h$ with parameter $\eta > 0$ is defined as:
\begin{equation}
\prox{\eta h}(x) = \mathop{\arg \min}_{y \in \real{d}} \left\{h(y) + \frac{1}{2\eta} \norm{y-x}^2\right\}.
\end{equation}
\end{definition}
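As a concrete instance, for $h = \lambda\norm{\cdot}_1$ the proximal operator has the well-known soft-thresholding closed form. The sketch below (our illustration) checks this closed form against a brute-force evaluation of the argmin definition in one dimension.

```python
import numpy as np

def prox_l1(x, eta, lam):
    """Closed-form prox of eta * lam * ||.||_1: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - eta * lam, 0.0)

def prox_numeric(x, eta, h, grid):
    """Brute-force 1-D prox via the argmin definition, for checking:
    argmin_y h(y) + (y - x)^2 / (2 * eta) over a dense grid."""
    vals = h(grid) + (grid - x) ** 2 / (2 * eta)
    return grid[np.argmin(vals)]
```

The same pattern (closed form vs. numeric argmin) is a useful sanity check whenever a new regularizer $h$ is plugged into the algorithm.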
In its vanilla version, \emph{proximal gradient descent} performs the following iterative updates:
\begin{equation*}
x^{k+1} \gets \prox{\eta_k h}(x^{k} - \eta_k \nabla f(x^k)),
\end{equation*}
for $k=1,2,\ldots$, where $\eta_k > 0$ is the step size at iteration $k$.
To solve stochastic optimization problem \eqref{eq:stochastic}, we need a variant called \emph{proximal stochastic gradient descent} (ProxSGD), with its update rule at each (synchronized) iteration $k$ given by
\begin{equation}
x^{k+1} \gets \prox{\eta_k h}\left( x^{k} - \frac{\eta_k}{N}\sum_{\xi \in \Xi_k}\nabla F(x^k; \xi) \right),
\label{eq:proxsgd_step}
\end{equation}
where $N:=|\Xi_k|$ is the mini-batch size. In ProxSGD, the aggregate gradient $\nabla f$ over all the samples is replaced by the gradients from a random subset of training samples, denoted by $\Xi_k$ at iteration $k$. Since $\xi$ is a random variable indicating a random index in $\{1,\ldots, n\}$, $F(x; \xi)$ is a random loss function for the random sample $\xi$, such that $f(x) := \mathbb{E}_\xi [F(x; \xi)]$.
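The update \eqref{eq:proxsgd_step} can be sketched as follows; the with-replacement sampling scheme and the $\ell_1$ prox used in the test are our illustrative choices, not prescribed by the analysis.

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t * ||.||_1, used below as an example regularizer."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd(grad_sample, x0, data, K, N, eta, prox, seed=0):
    """Minibatch ProxSGD sketch for min f(x) + h(x): average N sampled
    gradients (drawn with replacement), then apply prox(., eta)."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    n = len(data)
    for _ in range(K):
        idx = rng.integers(0, n, size=N)   # minibatch Xi_k, with replacement
        g = np.mean([grad_sample(x, data[i]) for i in idx], axis=0)
        x = prox(x - eta * g, eta)
    return x
```

Here `grad_sample(x, z)` plays the role of $\nabla F(x;\xi)$ and `prox(v, eta)` the role of $\prox{\eta h}(v)$.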
\subsection{Parallel Stochastic Optimization}
Recent years have witnessed rapid development of parallel and distributed computation frameworks for large-scale machine learning problems. One popular architecture is called \emph{parameter server} \citep{dean2012large,li2014scaling}, which consists of some worker nodes and server nodes. In this architecture, one or multiple master machines play the role of parameter servers, which maintain the model $x$. Since these machines serve the same purpose, we can simply treat them as one \emph{server node} for brevity. All other machines are \emph{worker nodes} that communicate with the server for training machine learning models. In particular, each worker has two types of requests: \textbf{pull} the current model $x$ from the server, and \textbf{push} the computed gradients to the server.
Before proposing an asynchronous Proximal SGD algorithm in the next section, let us first introduce its \emph{synchronous} version. Let us use an example to illustrate the idea. Suppose we execute ProxSGD with a mini-batch of 128 random samples on 8 workers. We can let each worker randomly take 16 samples, compute a summed gradient over these 16 samples, and push it to the server. In the synchronous case, the server will finally receive 8 summed gradients (containing information of all 128 samples) in each iteration. The server then updates the model by performing the proximal gradient descent step. In general, if we have $m$ workers, each worker will be assigned $N/m$ random samples in an iteration.
Note that in this scenario, all workers contribute to the computation of the sum of gradients on $N$ random samples in parallel, which corresponds to \emph{data parallelism} in the literature (e.g., \citep{agarwal2011distributed,ho2013more}). Another type of parallelism is called \emph{model parallelism}, in which each worker uses all $N$ random samples in the batch to compute a partial gradient on a specific block of $x$ (e.g., \citep{recht2011hogwild,pan2016cyclades}). Typically, data parallelism is more suitable when $n \gg d$, i.e., large dataset with moderate model size, and model parallelism is more suitable when $d \gg n$. We focus on data parallelism.
\section{Introduction}
\label{sec:intro}
With rapidly growing data volumes and variety, the need to scale up machine learning has sparked broad interests in developing efficient parallel optimization algorithms.
A typical parallel optimization algorithm usually decomposes the original problem into multiple subproblems, each handled by a worker node. Each worker iteratively downloads the global model parameters and computes its local gradients to be sent to the master node or servers for model updates. Recently, asynchronous parallel optimization algorithms \citep{recht2011hogwild,li2014communication,lian2015asynchronous}, exemplified by the Parameter Server architecture \citep{li2014scaling}, have been widely deployed in industry to solve practical large-scale machine learning problems. Asynchronous algorithms can largely reduce overhead and speedup training, since each worker may individually perform model updates in the system without synchronization.
Another trend to deal with large volumes of data is the use of \emph{stochastic} algorithms.
As the number of training samples $n$ increases, the cost of updating the model $x$ taking into account all error gradients becomes prohibitive. To tackle this issue, stochastic algorithms make it possible to update $x$ using only a small subset of all training samples at a time.
Stochastic gradient descent (SGD) is one of the first algorithms widely implemented in an asynchronous parallel fashion; its convergence rates and speedup properties have been analyzed for both convex \citep{agarwal2011distributed,mania2017} and nonconvex \citep{lian2015asynchronous} optimization problems.
Nevertheless, SGD is mainly applicable to the case of smooth optimization, and yet is not suitable for problems with a \emph{nonsmooth} term in the objective function, e.g., an $\ell_1$ norm regularizer. In fact, such nonsmooth regularizers are commonplace in many practical machine learning problems or constrained optimization problems. In these cases, SGD becomes ineffective, as it is hard to obtain gradients for a nonsmooth objective function.
We consider the following nonconvex regularized optimization problem:
\begin{equation}
\begin{split}
\mathop{\min}_{x\in \real{d}} &\quad \Psi(x):= f(x) + h(x),
\end{split}
\label{eq:original}
\end{equation}
where $f(x)$ takes a finite-sum form of $f(x) := \frac{1}{n}\sum_{i=1}^n f_i(x)$, and each $f_i(x)$ is a smooth (but not necessarily convex) function. The second term $h(x)$ is a convex (but \emph{not necessarily smooth}) function. This type of problem is prevalent in machine learning, as exemplified by deep learning with regularization \citep{dean2012large,chen2015mxnet,zhang2015deep}, LASSO \citep{tibshirani2005sparsity}, sparse logistic regression \citep{liu2009large}, robust matrix completion \citep{xu2010robust,sun2015guaranteed}, and sparse support vector machine (SVM) \citep{friedman2001elements}. In these problems, $f(x)$ is a loss function of model parameters $x$, possibly in a nonconvex form (e.g., in neural networks), while $h(x)$ is a convex regularization term, which is, however, possibly \emph{nonsmooth}, e.g., the $\ell_1$ norm regularizer.
Many classical deterministic (non-stochastic) algorithms are available to solve problem \eqref{eq:original}, including the proximal gradient (ProxGD) method \citep{parikh2014proximal} and its accelerated variants \citep{li2015accelerated} as well as the alternating direction method of multipliers (ADMM) \citep{hong2016convergence}. These methods leverage the so-called \emph{proximal operators} \citep{parikh2014proximal} to handle the nonsmoothness in the problem. Although implementing these deterministic algorithms in a \emph{synchronous} parallel fashion is straightforward, extending them to asynchronous parallel algorithms is much more complicated than it appears. In fact, existing theory on the convergence of asynchronous proximal gradient (PG) methods for nonconvex problem \eqref{eq:original} is quite limited. An asynchronous parallel proximal gradient method has been presented in \citep{li2014communication} and has been shown to converge to stationary points for nonconvex problems. However, \citep{li2014communication} has essentially proposed a non-stochastic algorithm and has not provided its convergence rate.
In this paper, we propose and analyze an asynchronous parallel \emph{proximal stochastic gradient descent} (ProxSGD) method for solving the nonconvex and nonsmooth problem \eqref{eq:original}, with provable convergence and speedup guarantees. The analysis of ProxSGD has attracted much attention in the community recently.
Under the assumption of an \emph{increasing} minibatch size used in the stochastic algorithm, the non-asymptotic convergence of ProxSGD to stationary points has been shown in \citep{ghadimi2016mini} for problem \eqref{eq:original} with a convergence rate of $O(1/\sqrt{K})$, where $K$ is the number of times the model is updated. Moreover, additional variance reduction techniques have been introduced \citep{reddi2016proximal} to guarantee
the convergence of ProxSGD, which is different from the stochastic method we discuss here. The stochastic algorithm considered in this paper assumes that each worker selects a minibatch of randomly chosen training samples to calculate the gradients at a time, which is a scheme widely used in practice.
To the best of our knowledge, the convergence behavior of ProxSGD---under a \emph{constant} minibatch size without variance reduction---is still unknown (even for the synchronous or sequential version).
Our main contributions are summarized as follows:
\begin{itemize}
\item We propose asynchronous parallel ProxSGD (a.k.a. Asyn-ProxSGD) and prove that it can converge to stationary points of nonconvex and nonsmooth problem \eqref{eq:original} with an ergodic convergence rate of $O(1/\sqrt{K})$, where $K$ is the number of times that the model $x$ is updated. This rate matches the convergence rate known for asynchronous SGD. The latter, however, is suitable only for smooth problems. To our knowledge, this is the first work that offers convergence rate guarantees for any stochastic proximal methods in an asynchronous parallel setting.
\item Our result also suggests that the sequential (or synchronous parallel) ProxSGD can converge to stationary points of problem \eqref{eq:original}, with a convergence rate of $O(1/\sqrt{K})$. To the best of our knowledge, this is also the first work that provides convergence rates of any \emph{stochastic} algorithm for nonsmooth problem \eqref{eq:original} under a \emph{constant} batch size, while prior literature on such stochastic proximal methods assumes an increasing batch size or relies on variance reduction techniques.
\item We provide a linear speedup guarantee as the number of workers increases, provided that the number of workers is bounded by $O({K}^{1/4})$. This result has laid down a theoretical ground for the scalability and performance of our Asyn-ProxSGD algorithm in practice.
\end{itemize}
\section{Experiments}
\label{sec:simu}
\begin{figure*}[t]
\centering
\subfigure[\texttt{a9a}]{
\includegraphics[height=1.6in]{figures/a9a_grad_loss.pdf}
\label{fig:obj1-a9a}
}
\subfigure[\texttt{mnist}]{
\includegraphics[height=1.6in]{figures/mnist_grad_loss.pdf}
\label{fig:obj1-mnist}
}
\vspace{-1mm}
\subfigure[\texttt{a9a}]{
\includegraphics[height=1.6in]{figures/a9a_grad_loss_log.pdf}
\label{fig:obj2-a9a}
}
\subfigure[\texttt{mnist}]{
\includegraphics[height=1.6in]{figures/mnist_grad_loss_log.pdf}
\label{fig:obj2-mnist}
}
\caption{Performance of ProxGD and Async-ProxSGD on \texttt{a9a} (left) and \texttt{mnist} (right) datasets. Here the x-axis represents how many sample gradients are computed (divided by $n$), and the y-axis is the function suboptimality $f(x) - f(\hat{x})$, where $\hat{x}$ is obtained by running gradient descent for many iterations with multiple restarts. Note all values on the y-axis are normalized by $n$.}
\label{fig:simu_grad_loss}
\end{figure*}
We now present experimental results to confirm the capability and efficiency of our proposed algorithm in solving challenging non-convex non-smooth machine learning problems. We implemented our algorithm on TensorFlow \citep{abadi2016tensorflow}, a flexible and efficient deep learning library. We execute our algorithm on \texttt{Ray} \citep{moritz2017ray}, a general-purpose framework that enables parallel and distributed execution of Python as well as TensorFlow functions. A key feature of \texttt{Ray} is that it provides a unified task abstraction, which can serve as workers, and an actor abstraction, which stores state and can act as a parameter server.
We use a cluster of 9 instances on Google Cloud. Each instance has one CPU core with 3.75 GB RAM, running 64-bit Ubuntu 16.04 LTS. Each server or worker uses only one core, with 9 CPU cores and 60 GB RAM used in total. Only one instance is the server node, while the other nodes are workers.
\textbf{Setup:}
In our experiments, we consider the problem of non-negative principal component analysis (NN-PCA) \citep{reddi2016proximal}. Given a set of $n$ samples $\{z_i\}_{i=1}^n$, NN-PCA solves the following optimization problem:
{
\begin{equation}
\mathop{\min}_{\norm{x} \leq 1, x \geq 0} -\frac{1}{2}x^\top \left(\sum_{i=1}^n z_i z_i^\top \right)x.
\end{equation}
}
This NN-PCA problem is NP-hard in general. To apply our algorithm, we can rewrite it with $f_i(x) = -(x^\top z_i)^2 / 2$ for all samples $i \in [n]$. Since the feasible set $C=\{x \in \real{d} | \norm{x} \leq 1, x \geq 0\}$ is convex, we can replace the optimization constraint by a regularizer in the form of
an indicator function $h(x) = I_C(x)$, such that $h(x)=0$ if $x \in C$ and $h(x)=\infty$ otherwise.
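To make the proximal step concrete: for this particular $C$, the proximal operator of $h = I_C$ is the Euclidean projection onto $C$, which can be computed by clipping negative entries and then rescaling into the unit ball. The sketch below is our own illustration (not the paper's implementation) and also includes the per-sample gradient $\nabla f_i(x) = -(x^\top z_i)\, z_i$:

```python
import numpy as np

def grad_fi(x, z_i):
    """Per-sample gradient of f_i(x) = -(x^T z_i)^2 / 2."""
    return -(x @ z_i) * z_i

def prox_indicator_C(v):
    """Proximal operator of h = I_C, i.e. Euclidean projection onto
    C = {x : ||x|| <= 1, x >= 0} (independent of the step size eta).
    For this set, projecting onto the nonnegative orthant first and
    then rescaling into the unit ball gives the exact projection."""
    y = np.maximum(v, 0.0)
    norm_y = np.linalg.norm(y)
    return y / norm_y if norm_y > 1.0 else y
```

A single ProxSGD step is then `x = prox_indicator_C(x - eta * g)`, with `g` a minibatch average of `grad_fi`.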
\begin{table}[]
\centering
\caption{Description of the two classification datasets used.}
\label{table:dataset}
\begin{tabular}{c|cc}
\specialrule{.1em}{.05em}{.05em}
datasets & dimension & sample size \\ \hline
a9a & 123 & 32,561 \\
mnist & 780 & 60,000 \\
\specialrule{.1em}{.05em}{.05em}
\end{tabular}
\end{table}
The hyper-parameters are set as follows. The step size is set using the popular $t$-inverse step size choice $\eta_k = \eta_0/(1 + \eta' (k/k'))$, which is the same as the one used in \citep{reddi2016proximal}. Here $\eta_0, \eta' > 0$ determine how learning rates change, and $k'$ controls for how many steps the learning rate would change.
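As a small illustration (the constants below are placeholders, not the tuned values used in our experiments), the $t$-inverse schedule can be implemented as:

```python
def t_inverse_step(k, eta0=0.1, eta_prime=0.5, k_prime=100):
    """t-inverse step size: eta_k = eta0 / (1 + eta_prime * (k / k_prime)).

    eta0 is the initial rate; eta_prime and k_prime control how fast
    and over how many steps the rate decays."""
    return eta0 / (1.0 + eta_prime * (k / k_prime))
```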
We conduct experiments on two datasets\footnote{Available at \url{http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/}}, with their information summarized in Table~\ref{table:dataset}. All samples have been normalized, i.e., $\norm{z_i} = 1$ for all $i \in [n]$. In our experiments, we use a batch size of $N=8192$ in order to evaluate the performance and speedup behavior of the algorithm under constant batches.
We consider the \emph{function suboptimality} value as our performance metric. In particular, we run proximal gradient descent (ProxGD) for a large number of iterations with multiple random initializations, and obtain a solution $\hat{x}$. For all experiments, we evaluate function suboptimality, which is
the gap $f(x) - f(\hat{x})$, against
the number of sample gradients processed by the server (divided by the total number of samples $n$), and then against time.
\begin{figure*}[t]
\centering
\subfigure[\texttt{a9a}]{
\includegraphics[height=1.6in]{figures/a9a_time_loss.pdf}
\label{fig:obj3-a9a}
}
\subfigure[\texttt{mnist}]{
\includegraphics[height=1.6in]{figures/mnist_time_loss.pdf}
\label{fig:obj3-mnist}
}
\vspace{-1mm}
\subfigure[\texttt{a9a}]{
\includegraphics[height=1.6in]{figures/a9a_time_loss_log.pdf}
\label{fig:obj4-a9a}
}
\subfigure[\texttt{mnist}]{
\includegraphics[height=1.6in]{figures/mnist_time_loss_log.pdf}
\label{fig:obj4-mnist}
}
\caption{Performance of ProxGD and Asyn-ProxSGD on \texttt{a9a} (left) and \texttt{mnist} (right) datasets. Here the x-axis represents the actual running time, and the y-axis is the function suboptimality. Note that all values on the y-axis are normalized by $n$.}
\label{fig:simu_time_loss}
\end{figure*}
\textbf{Results:}
Empirically, Assumption~\ref{asmp:bound1} (bounded delays) is observed to hold for this cluster. For our proposed Asyn-ProxSGD algorithm, we are particularly interested in the speedup in terms of iterations and running time.
In particular, if we need $T_1$ iterations (with $T_1$ sample gradients processed by the server) to achieve a certain suboptimality level using one worker, and $T_p$ iterations (with $T_p$ sample gradients processed by the server) to achieve the same suboptimality with $p$ workers, the iteration speedup is defined as $p \times T_1 / T_p$ \citep{lian2015asynchronous}. Note that all iterations are counted on the server side, i.e., how many sample gradients are processed by the server. On the other hand, the running time speedup is defined as the ratio between the running time of using one worker and that of using $p$ workers to achieve the same suboptimality.
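These two metrics can be computed directly from the experiment logs; a minimal sketch (the function names are ours):

```python
def iteration_speedup(T1, Tp, p):
    """Iteration speedup with p workers: p * T1 / Tp, where T1 and Tp
    count the sample gradients processed by the server to reach the
    same suboptimality level (cf. Lian et al., 2015)."""
    return p * T1 / Tp

def time_speedup(t1, tp):
    """Running-time speedup: wall-clock time with one worker divided by
    wall-clock time with p workers, at the same suboptimality level."""
    return t1 / tp
```

Under perfectly linear scaling, $T_p = T_1$ (the same total number of gradients is processed), so the iteration speedup equals $p$.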
The iteration and running time speedups on both datasets are shown in Fig.~\ref{fig:simu_grad_loss} and
Fig.~\ref{fig:simu_time_loss}, respectively. The speedups achieved at the suboptimality level of $10^{-3}$ are presented in Tables~\ref{table:iter_speedup} and~\ref{table:time_speedup}. We observe that nearly linear speedup can be achieved, although there is a loss of efficiency due to communication as the number of workers increases.
\begin{table}[]
\centering
\caption{Iteration speedup and time speedup of Asyn-ProxSGD at the suboptimality level $10^{-3}$. (\texttt{a9a})}
\label{table:iter_speedup}
\begin{tabular}{c|cccc}
\specialrule{.1em}{.05em}{.05em}
Workers & 1 & 2 & 4 & 8 \\ \hline
Iteration Speedup & 1.000 & 1.982 & 3.584 & 5.973 \\
Time Speedup & 1.000 & 2.219 & 3.857 & 5.876 \\
\specialrule{.1em}{.05em}{.05em}
\end{tabular}
\end{table}
\begin{table}[]
\centering
\caption{Iteration speedup and time speedup of Asyn-ProxSGD at the suboptimality level $10^{-3}$. (\texttt{mnist})}
\label{table:time_speedup}
\begin{tabular}{c|cccc}
\specialrule{.1em}{.05em}{.05em}
Workers & 1 & 2 & 4 & 8 \\ \hline
Iteration Speedup & 1.000 & 2.031 & 3.783 & 7.352 \\
Time Speedup & 1.000 & 2.285 & 4.103 & 5.714 \\
\specialrule{.1em}{.05em}{.05em}
\end{tabular}
\end{table}
\section{Related Work}
\label{sec:related}
Stochastic optimization problems have been studied since the seminal work in 1951 \citep{robbins1951stochastic}, in which a classical stochastic approximation algorithm is proposed for solving a class of strongly convex problems. Since then, a series of studies on stochastic programming have focused on convex problems using SGD \citep{bottou1991stochastic,nemirovskii1983problem,moulines2011non}. The convergence rates of SGD for convex and strongly convex problems are known to be $O(1/\sqrt{K})$ and $O(1/K)$, respectively. For nonconvex optimization problems using SGD, Ghadimi and Lan \citep{ghadimi2013stochastic} proved an ergodic convergence rate of $O(1/\sqrt{K})$, which is consistent with the convergence rate of SGD for convex problems.
When $h(\cdot)$ in \eqref{eq:original} is not necessarily smooth, there are other methods to handle the nonsmoothness. One approach is closely related to mirror descent stochastic approximation, e.g., \citep{nemirovski2009robust,lan2012optimal}. Another approach is based on proximal operators \citep{parikh2014proximal}, and is often referred to as the \emph{proximal stochastic gradient descent} (ProxSGD) method. Duchi et al. \citep{duchi2009efficient} prove that under a diminishing learning rate $\eta_k=1/(\mu k)$ for $\mu$-strongly convex objective functions, ProxSGD can achieve a convergence rate of $O(1/\mu K)$. For a nonconvex problem like \eqref{eq:original}, rather limited studies on ProxSGD exist so far.
The closest approach to the one we consider here is \citep{ghadimi2016mini}, in which the convergence analysis is based on the assumption of an increasing minibatch size. Furthermore, Reddi et al. \citep{reddi2016proximal} prove convergence for nonconvex problems under a constant minibatch size, yet relying on additional mechanisms for variance reduction. We fill the gap in the literature by providing convergence rates for ProxSGD under constant batch sizes without variance reduction.
To deal with big data, asynchronous parallel optimization algorithms have been heavily studied. Recent work on asynchronous parallelism is mainly limited to the following categories: stochastic gradient descent for smooth optimization, e.g., \citep{recht2011hogwild,agarwal2011distributed,lian2015asynchronous,pan2016cyclades,mania2017} and deterministic ADMM, e.g. \citep{zhang2014asynchronous,hong2017distributed}. A non-stochastic asynchronous ProxSGD algorithm is presented by~\citep{li2014communication}, which however did not provide convergence rates for nonconvex problems.
\section{Auxiliary Lemmas}
\begin{lemma}[\citep{ghadimi2016mini}]
For $y = \prox{\eta h}(x - \eta g)$, we have:
\begin{equation}
\innerprod{g, y-x} + (h(y) - h(x)) \leq -\frac{\norm{y-x}_2^2}{\eta}.
\end{equation}
\label{lem:lem-1}
\end{lemma}
Due to slightly different notations and definitions in \citep{ghadimi2016mini}, we provide a proof here for completeness. We refer readers to \citep{ghadimi2016mini} for more details.
\begin{proof}
By the optimality condition of the proximal operator, there exists a $p \in \partial h(y)$ such that:
\begin{equation*}
\begin{split}
\innerprod{g + \frac{y-x}{\eta} + p, x - y} &\geq 0, \\
\innerprod{g, x-y} &\geq \frac{1}{\eta} \innerprod{y-x, y-x} + \innerprod{p, y-x} \\
\innerprod{g, x-y} + (h(x) - h(y)) &\geq \frac{1}{\eta} \norm{y-x}_2^2,
\end{split}
\end{equation*}
where the last line uses the convexity of $h$, i.e., $\innerprod{p, y-x} \geq h(y) - h(x)$. This proves the lemma.
\end{proof}
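Lemma~\ref{lem:lem-1} can also be checked numerically. The sketch below uses $h(x) = \lambda \norm{x}_1$, whose proximal operator is soft-thresholding, as a stand-in regularizer (this is our own illustration and is not tied to the NN-PCA setting):

```python
import numpy as np

def prox_l1(v, eta, lam=0.1):
    """Proximal operator of h(x) = lam * ||x||_1 with step eta:
    soft-thresholding at level eta * lam."""
    return np.sign(v) * np.maximum(np.abs(v) - eta * lam, 0.0)

# Check <g, y-x> + h(y) - h(x) <= -||y-x||^2 / eta on random instances.
rng = np.random.default_rng(0)
eta, lam = 0.05, 0.1
for _ in range(100):
    x = rng.normal(size=5)
    g = rng.normal(size=5)
    y = prox_l1(x - eta * g, eta, lam)
    lhs = g @ (y - x) + lam * (np.abs(y).sum() - np.abs(x).sum())
    assert lhs <= -np.linalg.norm(y - x) ** 2 / eta + 1e-10
```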
\begin{lemma}[\citep{ghadimi2016mini}]
For all $x,g,G \in \real{d}$, if $h: \real{d} \to \real{}$ is a convex function, we have
\begin{equation}
\norm{\prox{\eta h}(x - \eta G) - \prox{\eta h}(x - \eta g)} \leq \eta \norm{G-g}.
\end{equation}
\label{lem:lem-2}
\end{lemma}
\begin{proof}
Let $y$ denote $\prox{\eta h}(x - \eta G)$ and $z$ denote $\prox{\eta h}(x - \eta g)$. By definition of the proximal operator, for all $u \in \real{d}$, we have
\begin{align*}
\innerprod{G + \frac{y-x}{\eta} + p, u-y} &\geq 0, \\
\innerprod{g + \frac{z-x}{\eta} + q, u-z} &\geq 0,
\end{align*}
where $p \in \partial h(y)$ and $q \in \partial h(z)$. Substituting $u = z$ into the first inequality and $u = y$ into the second, we have
\begin{align*}
\innerprod{G + \frac{y-x}{\eta} + p, z-y} &\geq 0, \\
\innerprod{g + \frac{z-x}{\eta} + q, y-z} &\geq 0.
\end{align*}
Then, we have
\begin{align}
\innerprod{G, z-y} &\geq \innerprod{\frac{y-x}{\eta}, y-z} + \innerprod{p, y-z}, \\
&= \frac{1}{\eta}\innerprod{y-z, y-z} + \frac{1}{\eta}\innerprod{z-x, y-z} + \innerprod{p, y-z}, \\
&\geq \frac{\norm{y-z}^2}{\eta} + \frac{1}{\eta}\innerprod{z-x, y-z} +h(y) - h(z),
\label{eq:lem-2-1}
\end{align}
and
\begin{align}
\innerprod{g, y-z} &\geq \innerprod{\frac{z-x}{\eta} + q, z-y}, \\
&= \frac{1}{\eta} \innerprod{z-x, z-y} + \innerprod{q, z-y} \\
&\geq \frac{1}{\eta} \innerprod{z-x, z-y} + h(z) - h(y). \label{eq:lem-2-2}
\end{align}
By adding \eqref{eq:lem-2-1} and \eqref{eq:lem-2-2}, we obtain
\begin{align*}
\norm{G-g} \norm{z-y} \geq \innerprod{G-g, z-y} \geq \frac{1}{\eta} \norm{y-z}^2,
\end{align*}
which proves the lemma.
\end{proof}
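The nonexpansiveness in Lemma~\ref{lem:lem-2} admits a similar numerical check with the soft-thresholding proximal operator of $h(x) = \lambda\norm{x}_1$ (again a stand-in for a generic convex $h$):

```python
import numpy as np

def prox_l1(v, eta, lam=0.1):
    """Proximal operator of h(x) = lam * ||x||_1: soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - eta * lam, 0.0)

# Check ||prox(x - eta*G) - prox(x - eta*g)|| <= eta * ||G - g||.
rng = np.random.default_rng(1)
eta = 0.2
for _ in range(100):
    x, g, G = (rng.normal(size=4) for _ in range(3))
    d = np.linalg.norm(prox_l1(x - eta * G, eta) - prox_l1(x - eta * g, eta))
    assert d <= eta * np.linalg.norm(G - g) + 1e-10
```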
\begin{lemma}[\citep{ghadimi2016mini}]
For any $g_1$ and $g_2$, we have
\begin{equation}
\norm{P(x, g_1, \eta) - P(x, g_2, \eta)} \leq \norm{g_1 - g_2}.
\end{equation}
\label{lem:lem-3}
\end{lemma}
\begin{proof}
It can be obtained by directly applying Lemma~\ref{lem:lem-2} and the definition of gradient mapping.
\end{proof}
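Here $P$ is the gradient mapping $P(x, g, \eta) = \frac{1}{\eta}\big(x - \prox{\eta h}(x - \eta g)\big)$, so Lemma~\ref{lem:lem-3} is Lemma~\ref{lem:lem-2} divided by $\eta$. A numerical sketch (with an $\ell_1$ stand-in for $h$, as in the previous checks):

```python
import numpy as np

def prox_l1(v, eta, lam=0.1):
    """Proximal operator of h(x) = lam * ||x||_1: soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - eta * lam, 0.0)

def grad_mapping(x, g, eta, lam=0.1):
    """Gradient mapping P(x, g, eta) = (x - prox_{eta h}(x - eta g)) / eta,
    with h = lam * ||.||_1 standing in for the regularizer."""
    return (x - prox_l1(x - eta * g, eta, lam)) / eta
```

By Lemma~\ref{lem:lem-2}, `grad_mapping(x, g1, eta) - grad_mapping(x, g2, eta)` has norm at most `norm(g1 - g2)`.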
\begin{lemma}[\citep{reddi2016proximal}]
Suppose we define $y = \prox{\eta h} (x - \eta g)$ for some $g$. Then for $y$, the following inequality holds:
\begin{equation}
\begin{split}
\Psi(y) \leq \Psi(z) + &\innerprod{y-z, \nabla f(x) - g} \\
&+ \left( \frac{L}{2} - \frac{1}{2\eta} \right) \norm{y-x}^2
+ \left( \frac{L}{2} + \frac{1}{2\eta} \right) \norm{z-x}^2
- \frac{1}{2\eta} \norm{y-z}^2,
\end{split}
\end{equation}
for all $z$.
\label{lem:grad_diff}
\end{lemma}
We recall and define some notation for the subsequent convergence analysis. We denote by $\tilde{G}_k$ the average of \emph{delayed} stochastic gradients and by $\tilde{g}_k$ the average of \emph{delayed} true gradients, respectively:
\begin{align*}
\tilde{G}_k &:= \frac{1}{N}\sum_{i=1}^N \nabla F(x_{t(k,i)}; \xi_{t(k,i), i}) \\
\tilde{g}_k &:= \frac{1}{N}\sum_{i=1}^N \nabla f(x_{t(k,i)}).
\end{align*}
Moreover, we denote by $\delta_k := \tilde{g}_k - \tilde{G}_k$ the difference between these two averaged gradients.
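In a toy simulation, these delayed averages can be formed as follows (the variable names are illustrative and not taken from Algorithm~\ref{alg:apsgd-global}; the additive noise models the stochastic gradient error):

```python
import numpy as np

def delayed_averages(grad_f, xs, t_idx, noises):
    """Form the averaged delayed stochastic gradient G_tilde, the averaged
    delayed true gradient g_tilde, and their gap delta = g_tilde - G_tilde.

    xs[t]     : iterate at step t
    t_idx[i]  : the (possibly stale) step whose iterate worker i read
    noises[i] : additive error of worker i's stochastic gradient
    """
    G_tilde = np.mean([grad_f(xs[t]) + n for t, n in zip(t_idx, noises)], axis=0)
    g_tilde = np.mean([grad_f(xs[t]) for t in t_idx], axis=0)
    return G_tilde, g_tilde, g_tilde - G_tilde
```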
\section{Convergence analysis for Asyn-ProxSGD}
\subsection{Milestone lemmas}
We state some key results of the convergence analysis as milestone lemmas below; the detailed proofs are deferred to Section~\ref{subsec:milestone1}.
\begin{lemma}[Descent Lemma]
\begin{equation}
\mathbb{E}[\Psi(x_{k+1})|\mathcal{F}_k] \leq \mathbb{E}[\Psi(x_k)|\mathcal{F}_k] - \frac{\eta_k - 4L\eta_k^2}{2} \norm{P(x_k, \tilde{g}_k, \eta_k)}^2
+ \frac{\eta_k}{2}\norm{g_k-\tilde{g}_k}^2 + \frac{L\eta_k^2}{N}\sigma^2.
\label{eq:desc_1}
\end{equation}
\label{lem:desc_1}
\end{lemma}
\begin{lemma}
Suppose $\{x_k\}$ is the sequence generated by Algorithm~\ref{alg:apsgd-global}. Then we have:
\begin{align}
\mathbb{E}[\norm{x_k - x_{k-\tau}}^2 ]
\leq \left(\frac{2 \tau}{N} \sum_{l=1}^{\tau}\eta_{k-l}^2\right)\sigma^2 + 2\Norm{\sum_{l=1}^{\tau} \eta_{k-l} P(x_{k-l}, \tilde{g}_{k-l}, \eta_{k-l}) }^2.
\label{eq:xk_diff_bound}
\end{align}
for all $\tau > 0$.
\label{lem:xk_diff_1}
\end{lemma}
\begin{lemma}
Suppose $\{x_k\}$ is the sequence generated by Algorithm~\ref{alg:apsgd-global}. Then we have:
\begin{equation}
\mathbb{E}[\norm{g_k-\tilde{g}_k}^2]
\leq \left( \frac{2L^2T}{N} \sum_{l=1}^{T}\eta_{k-l}^2 \right) \sigma^2 + 2L^2T\sum_{l=1}^{T}\eta_{k-l}^2 \norm{ P(x_{k-l}, \tilde{g}_{k-l}, \eta_{k-l}) }^2.
\end{equation}
\label{lem:gk_diff_1}
\end{lemma}
\subsection{Proof of Theorem~\ref{thm:aspg_convergence}}
\begin{proof}
From the fact $2\norm{a}^2 + 2\norm{b}^2 \geq \norm{a+b}^2$, we have
\begin{align*}
\norm{P(x_k, \tilde{g}_k, \eta_k)}^2 + \norm{g_k - \tilde{g}_k}^2
&\geq \norm{P(x_k, \tilde{g}_k, \eta_k)}^2 + \norm{P(x_k, g_k, \eta_k) - P(x_k, \tilde{g}_k, \eta_k)}^2 \\
&\geq \frac{1}{2} \norm{P(x_k, g_k, \eta_k)}^2,
\end{align*}
which implies that
\begin{align*}
\norm{P(x_k, \tilde{g}_k, \eta_k)}^2 &\geq \frac{1}{2} \norm{P(x_k, g_k, \eta_k)}^2 - \norm{g_k - \tilde{g}_k}^2.
\end{align*}
We start the proof from Lemma~\ref{lem:desc_1}. According to our condition $\eta_k \leq \frac{1}{16L}$, we have $8L\eta_k^2 - \eta_k < 0$ and therefore
{
\begin{equation*}
\begin{split}
&\quad\ \mathbb{E}[\Psi(x_{k+1})|\mathcal{F}_k] \\
&\leq \mathbb{E}[\Psi(x_k)|\mathcal{F}_k] + \frac{\eta_k}{2}\norm{g_k-\tilde{g}_k}^2
+ \frac{4L\eta_k^2-\eta_k}{2} \norm{P(x_k, \tilde{g}_k, \eta_k)}^2 + \frac{L\eta_k^2}{N}\sigma^2 \\
&= \mathbb{E}[\Psi(x_k)|\mathcal{F}_k] + \frac{\eta_k}{2}\norm{g_k-\tilde{g}_k}^2
+ \frac{8L\eta_k^2-\eta_k}{4} \norm{P(x_k, \tilde{g}_k, \eta_k)}^2 - \frac{\eta_k}{4}\norm{P(x_k, \tilde{g}_k, \eta_k)}^2 + \frac{L\eta_k^2}{N}\sigma^2 \\
&\leq \mathbb{E}[\Psi(x_k)|\mathcal{F}_k] + \frac{\eta_k}{2}\norm{g_k-\tilde{g}_k}^2
+ \frac{L\eta_k^2}{N}\sigma^2 + \frac{8L\eta_k^2-\eta_k}{4} (\frac{1}{2} \norm{P(x_k, g_k, \eta_k)}^2 - \norm{g_k - \tilde{g}_k}^2) \\
&\quad\ - \frac{\eta_k}{4}\norm{P(x_k, \tilde{g}_k, \eta_k)}^2 \nonumber \\
&\leq \mathbb{E}[\Psi(x_k)|\mathcal{F}_k] - \frac{\eta_k-8L\eta_k^2}{8} \norm{P(x_k, g_k, \eta_k)}^2
+ \frac{3\eta_k}{4}\norm{g_k-\tilde{g}_k}^2 - \frac{\eta_k}{4}\norm{P(x_k, \tilde{g}_k, \eta_k)}^2 + \frac{L\eta_k^2}{N}\sigma^2.
\end{split}
\end{equation*}
}
Applying Lemma~\ref{lem:gk_diff_1}, we have
\begin{equation*}
\begin{split}
&\quad\ \mathbb{E}[\Psi(x_{k+1})|\mathcal{F}_k] \\
&\leq \mathbb{E}[\Psi(x_k)|\mathcal{F}_k] - \frac{\eta_k-8L\eta_k^2}{8} \norm{P(x_k, g_k, \eta_k)}^2
+ \frac{L\eta_k^2}{N}\sigma^2 - \frac{\eta_k}{4}\norm{P(x_k, \tilde{g}_k, \eta_k)}^2 \\
&\quad + \frac{3\eta_k}{4}\left( \frac{2L^2T}{N} \sum_{l=1}^{T}\eta_{k-l}^2 \sigma^2 + 2L^2T\sum_{l=1}^{T}\eta_{k-l}^2 \norm{ P(x_{k-l}, \tilde{g}_{k-l}, \eta_{k-l}) }^2 \right) \\
&= \mathbb{E}[\Psi(x_k)|\mathcal{F}_k] - \frac{\eta_k-8L\eta_k^2}{8} \norm{P(x_k, g_k, \eta_k)}^2
+ \left(\frac{L\eta_k^2}{N} + \frac{3\eta_kL^2T}{2N} \sum_{l=1}^{T}\eta_{k-l}^2 \right) \sigma^2 \\
&\quad - \frac{\eta_k}{4}\norm{P(x_k, \tilde{g}_k, \eta_k)}^2
+ \frac{3\eta_kL^2T}{2} \sum_{l=1}^{T}\eta_{k-l}^2 \norm{ P(x_{k-l}, \tilde{g}_{k-l}, \eta_{k-l}) }^2.
\end{split}
\end{equation*}
By taking the telescoping sum, we have
{
\begin{align*}
&\quad\ \mathbb{E}[\Psi(x_{K+1})|\mathcal{F}_K] \\
&\leq \Psi(x_1)
- \sum_{k=1}^K \frac{\eta_k-8L\eta_k^2}{8} \norm{P(x_k, g_k, \eta_k)}^2
- \sum_{k=1}^K \left( \frac{\eta_k}{4}- \frac{3\eta_k^2L^2T}{2}\sum_{l=1}^{l_k}\eta_{k+l} \right) \norm{ P(x_{k}, \tilde{g}_{k}, \eta_{k}) }^2 \\
&\quad + \sum_{k=1}^K \left(\frac{L\eta_k^2}{N} + \frac{3\eta_kL^2T}{2N} \sum_{l=1}^{T}\eta_{k-l}^2 \right) \sigma^2
\end{align*}
}
where $l_k := \min(k+T-1, K)$, and we have
{
\begin{align*}
&\quad\ \sum_{k=1}^K \frac{\eta_k-8L\eta_k^2}{8} \norm{P(x_k, g_k, \eta_k)}^2 \\
&\leq \Psi(x_1) - \mathbb{E}[\Psi(x_{K+1})|\mathcal{F}_K] - \sum_{k=1}^K \left( \frac{\eta_k}{4}- \frac{3\eta_k^2L^2T}{2}\sum_{l=1}^{l_k}\eta_{k+l} \right) \norm{ P(x_{k}, \tilde{g}_{k}, \eta_{k}) }^2 \\
&\quad + \sum_{k=1}^K \left(\frac{L\eta_k^2}{N} + \frac{3\eta_kL^2T}{2N} \sum_{l=1}^{T}\eta_{k-l}^2 \right) \sigma^2.
\end{align*}
}
When $6\eta_kL^2T \sum_{l=1}^{T}\eta_{k+l} \leq 1$ for all $k$, which is the condition of Theorem~\ref{thm:aspg_convergence}, we have
\begin{align*}
&\quad\ \sum_{k=1}^K \frac{\eta_k-8L\eta_k^2}{8} \norm{P(x_k, g_k, \eta_k)}^2 \\
&\leq \Psi(x_1) - \mathbb{E}[\Psi(x_{K+1})|\mathcal{F}_K]
+ \sum_{k=1}^K \left(\frac{L\eta_k^2}{N} + \frac{3\eta_kL^2T}{2N} \sum_{l=1}^{T}\eta_{k-l}^2 \right) \sigma^2 \\
&\leq \Psi(x_1) - F^* + \sum_{k=1}^K \left(\frac{L\eta_k^2}{N} + \frac{3\eta_kL^2T}{2N} \sum_{l=1}^{T}\eta_{k-l}^2 \right) \sigma^2,
\end{align*}
which proves the theorem.
\end{proof}
\subsection{Proof of Corollary~\ref{corr:aspg_convergence}}
\begin{proof}
From the condition of the corollary, we have
\begin{align*}
\eta \leq \frac{1}{16L(T+1)^2}.
\end{align*}
It is clear that the above inequality also satisfies the condition in Theorem~\ref{thm:aspg_convergence}. Furthermore, we have
\begin{align*}
\frac{3LT^2\eta}{2} &\leq \frac{3LT^2}{2}\cdot \frac{1}{16L(T+1)^2} \leq 1, \\
\frac{3L^2T^2\eta^3}{2} &\leq L\eta^2.
\end{align*}
Since $\eta \leq \frac{1}{16L}$, we have $2-16L\eta \geq 1$ and thus
\begin{align*}
\frac{8}{\eta - 8L\eta^2} = \frac{16}{\eta(2-16L\eta)} \leq \frac{16}{\eta}.
\end{align*}
Following Theorem~\ref{thm:aspg_convergence} and the above inequality, we have
\begin{align*}
&\quad\ \frac{1}{K}\sum_{k=1}^K \mathbb{E}[\norm{P(x_k, g_k, \eta_k)}^2] \\
&\leq \frac{16(\Psi(x_1) - \Psi(x_*))}{K\eta} + 16\left(\frac{L\eta^2}{N} + \frac{3\eta L^2T}{2N} \sum_{l=1}^{T}\eta^2 \right) \frac{K\sigma^2}{K\eta} \\
&= \frac{16(\Psi(x_1) - \Psi(x_*))}{K\eta} + 16\left(\frac{L\eta^2}{N} + \frac{3L^2T^2 \eta^3}{2N} \right) \frac{\sigma^2}{\eta} \\
&\leq \frac{16(\Psi(x_1) - \Psi(x_*))}{K\eta} + \frac{32L\eta^2}{N}\cdot \frac{\sigma^2}{\eta} \\
&= \frac{16(\Psi(x_1) - \Psi(x_*))}{K\eta} + \frac{32L\eta \sigma^2}{N} \\
&= 32\sqrt{\frac{2(\Psi(x_1) - \Psi(x_*))L\sigma^2}{KN}},
\end{align*}
which proves the corollary.
\end{proof}
\subsection{Proofs of milestone lemmas}
\label{subsec:milestone1}
\begin{proof}[Proof of Lemma~\ref{lem:desc_1}]
Let $\bar{x}_{k+1} = \prox{\eta_k h}(x_k - \eta_k \tilde{g}_k)$. Applying Lemma~\ref{lem:grad_diff}, we have
\begin{equation}
\begin{split}
\Psi(x_{k+1}) &\leq \Psi(\bar{x}_{k+1}) + \innerprod{x_{k+1}-\bar{x}_{k+1}, \nabla f(x_k) - \tilde{G}_k} + \left( \frac{L}{2} - \frac{1}{2\eta_k} \right) \norm{x_{k+1}-x_k}^2\\
&\quad\ + \left( \frac{L}{2} + \frac{1}{2\eta_k} \right) \norm{\bar{x}_{k+1}-x_k}^2 - \frac{1}{2\eta_k} \norm{x_{k+1}-\bar{x}_{k+1}}^2.
\end{split}
\label{eq:yk_baryk_async}
\end{equation}
Now we turn to bound $\Psi(\bar{x}_{k+1})$ as follows:
\begin{displaymath}
\begin{split}
f(\bar{x}_{k+1}) &\leq f(x_k) + \innerprod{\nabla f(x_k), \bar{x}_{k+1} - x_k} + \frac{L}{2}\norm{\bar{x}_{k+1} - x_k}^2 \\
&= f(x_k) + \innerprod{g_k, \bar{x}_{k+1} - x_k} + \frac{\eta_k^2 L}{2}\norm{P(x_k, \tilde{g}_k, \eta_k)}^2 \\
&= f(x_k) + \innerprod{\tilde{g}_k, \bar{x}_{k+1} - x_k}+ \innerprod{g_k - \tilde{g}_k, \bar{x}_{k+1} - x_k} + \frac{\eta_k^2 L}{2}\norm{P(x_k, \tilde{g}_k, \eta_k)}^2 \\
&= f(x_k) - \eta_k \innerprod{\tilde{g}_k, P(x_k, \tilde{g}_k, \eta_k)} + \innerprod{g_k - \tilde{g}_k, \bar{x}_{k+1} - x_k} + \frac{\eta_k^2 L}{2}\norm{P(x_k, \tilde{g}_k, \eta_k)}^2 \\
&\leq f(x_k) - [\eta_k \norm{P(x_k, \tilde{g}_k, \eta_k)}^2 + h(\bar{x}_{k+1}) - h(x_k)] + \innerprod{g_k - \tilde{g}_k, \bar{x}_{k+1} - x_k} \\
&\quad\ + \frac{\eta_k^2 L}{2}\norm{P(x_k, \tilde{g}_k, \eta_k)}^2,
\end{split}
\end{displaymath}
where the last inequality follows from Lemma~\ref{lem:lem-1}. By rearranging terms on both sides, we have
\begin{equation}
\Psi(\bar{x}_{k+1}) \leq \Psi(x_k) - (\eta_k - \frac{\eta_k^2 L}{2}) \norm{P(x_k, \tilde{g}_k, \eta_k)}^2 + \innerprod{g_k - \tilde{g}_k, \bar{x}_{k+1} - x_k}
\label{eq:baryk_xk_async}
\end{equation}
Taking the summation of \eqref{eq:yk_baryk_async} and \eqref{eq:baryk_xk_async}, we have
\begin{displaymath}
\begin{split}
&\quad\ \Psi(x_{k+1}) \\
&\leq \Psi(x_k) + \innerprod{x_{k+1}-\bar{x}_{k+1}, \nabla f(x_k) - \tilde{G}_k} \\
&\quad + \left( \frac{L}{2} - \frac{1}{2\eta_k} \right) \norm{x_{k+1}-x_k}^2
+ \left( \frac{L}{2} + \frac{1}{2\eta_k} \right) \norm{\bar{x}_{k+1}-x_k}^2- \frac{1}{2\eta_k} \norm{x_{k+1}-\bar{x}_{k+1}}^2 \\
&\quad - (\eta_k - \frac{\eta_k^2 L}{2}) \norm{P(x_k, \tilde{g}_k, \eta_k)}^2 + \innerprod{g_k - \tilde{g}_k, \bar{x}_{k+1} - x_k} \\
&= \Psi(x_k) + \innerprod{x_{k+1}-x_{k}, g_k - \tilde{g}_k} + \innerprod{x_{k+1}-\bar{x}_{k+1}, \delta_k} \\
&\quad + \left( \frac{L\eta_k^2}{2} - \frac{\eta_k}{2} \right) \norm{P(x_k, \tilde{G}_k, \eta_k)}^2
+ \left( \frac{L\eta_k^2}{2} + \frac{\eta_k}{2} \right) \norm{P(x_k, \tilde{g}_k, \eta_k)}^2 \\
&\quad - \frac{1}{2\eta_k} \norm{x_{k+1}-\bar{x}_{k+1}}^2 - (\eta_k - \frac{\eta_k^2 L}{2}) \norm{P(x_k, \tilde{g}_k, \eta_k)}^2 \\
&= \Psi(x_k) + \innerprod{x_{k+1}-x_k, g_k - \tilde{g}_k} + \innerprod{x_{k+1}-\bar{x}_{k+1}, \delta_k}
+ \frac{L\eta_k^2 - \eta_k}{2} \norm{P(x_k, \tilde{G}_k, \eta_k)}^2 \\
&\quad + \frac{2L\eta_k^2-\eta_k}{2} \norm{P(x_k, \tilde{g}_k, \eta_k)}^2 - \frac{1}{2\eta_k} \norm{x_{k+1}-\bar{x}_{k+1}}^2
\end{split}
\end{displaymath}
By taking the expectation conditioned on the filtration $\mathcal{F}_k$ and according to Assumption~\ref{asmp:unbias_grad}, we have
\begin{equation}
\begin{split}
&\quad\ \mathbb{E}[\Psi(x_{k+1})|\mathcal{F}_k] \\
&\leq \mathbb{E}[\Psi(x_k)|\mathcal{F}_k] + \mathbb{E}[\innerprod{x_{k+1}-x_k, g_k-\tilde{g}_k}|\mathcal{F}_k] + \frac{L\eta_k^2 - \eta_k}{2} \mathbb{E}[\norm{P(x_k, \tilde{G}_k, \eta_k)}^2|\mathcal{F}_k] \\
&\quad + \frac{2L\eta_k^2-\eta_k}{2} \norm{P(x_k, \tilde{g}_k, \eta_k)}^2 - \frac{1}{2\eta_k} \norm{x_{k+1}-\bar{x}_{k+1}}^2.
\end{split}
\end{equation}
Therefore, we have
\begin{equation*}
\begin{split}
&\quad\ \mathbb{E}[\Psi(x_{k+1})|\mathcal{F}_k] \\
&\leq \mathbb{E}[\Psi(x_k)|\mathcal{F}_k] + \mathbb{E}[\innerprod{x_{k+1}-x_k, g_k-\tilde{g}_k}|\mathcal{F}_k] + \frac{L\eta_k^2 - \eta_k}{2} \mathbb{E}[\norm{P(x_k, \tilde{G}_k, \eta_k)}^2|\mathcal{F}_k] \\
&\quad + \frac{2L\eta_k^2-\eta_k}{2} \norm{P(x_k, \tilde{g}_k, \eta_k)}^2 - \frac{1}{2\eta_k} \norm{x_{k+1}-\bar{x}_{k+1}}^2 \\
&\leq \mathbb{E}[\Psi(x_k)|\mathcal{F}_k] + \frac{\eta_k}{2}\norm{g_k-\tilde{g}_k}^2 + \frac{L\eta_k^2}{2} \mathbb{E}[\norm{P(x_k, \tilde{G}_k, \eta_k)}^2|\mathcal{F}_k]
+ \frac{2L\eta_k^2-\eta_k}{2} \norm{P(x_k, \tilde{g}_k, \eta_k)}^2 \\
&\leq \mathbb{E}[\Psi(x_k)|\mathcal{F}_k] - \frac{\eta_k - 4L\eta_k^2}{2} \norm{P(x_k, \tilde{g}_k, \eta_k)}^2
+ \frac{\eta_k}{2}\norm{g_k-\tilde{g}_k}^2 + \frac{L\eta_k^2}{N}\sigma^2.
\end{split}
\end{equation*}
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:xk_diff_1}]
Following the definition of $x_k$ from Algorithm~\ref{alg:apsgd-global}, we have
\begin{equation*}
\begin{split}
&\quad\ \norm{x_k - x_{k-\tau}}^2 \\
&= \Norm{\sum_{l=1}^{\tau}x_{k-l} - x_{k-l+1}}^2 \\
&= \Norm{\sum_{l=1}^{\tau} \eta_{k-l} P(x_{k-l}, \tilde{G}_{k-l}, \eta_{k-l})}^2 \\
&\leq 2 \Norm{\sum_{l=1}^{\tau} \eta_{k-l} [P(x_{k-l}, \tilde{G}_{k-l}, \eta_{k-l})-P(x_{k-l}, \tilde{g}_{k-l}, \eta_{k-l})]}^2 + 2\Norm{\sum_{l=1}^{\tau} \eta_{k-l} P(x_{k-l}, \tilde{g}_{k-l}, \eta_{k-l}) }^2 \\
&\leq 2\tau \sum_{l=1}^{\tau} \eta_{k-l}^2\norm{P(x_{k-l}, \tilde{G}_{k-l}, \eta_{k-l})-P(x_{k-l}, \tilde{g}_{k-l}, \eta_{k-l})}^2 + 2\Norm{\sum_{l=1}^{\tau} \eta_{k-l} P(x_{k-l}, \tilde{g}_{k-l}, \eta_{k-l}) }^2 \\
&\leq 2\tau \sum_{l=1}^{\tau}\eta_{k-l}^2 \norm{\tilde{G}_{k-l}-\tilde{g}_{k-l}}^2
+ 2\Norm{\sum_{l=1}^{\tau} \eta_{k-l} P(x_{k-l}, \tilde{g}_{k-l}, \eta_{k-l}) }^2,
\end{split}
\end{equation*}
where the last inequality is from Lemma~\ref{lem:lem-3}. By taking the expectation on both sides, we have
\begin{equation*}
\begin{split}
\mathbb{E}[\norm{x_k - x_{k-\tau}}^2 ]
&\leq 2\tau \sum_{l=1}^{\tau} \eta_{k-l}^2 \mathbb{E}[\norm{\tilde{G}_{k-l}-\tilde{g}_{k-l}}^2] + 2\Norm{\sum_{l=1}^{\tau} \eta_{k-l} P(x_{k-l}, \tilde{g}_{k-l}, \eta_{k-l}) }^2 \\
&\leq \frac{2 \tau}{N} \sigma^2 \sum_{l=1}^{\tau}\eta_{k-l}^2 + 2\Norm{\sum_{l=1}^{\tau} \eta_{k-l} P(x_{k-l}, \tilde{g}_{k-l}, \eta_{k-l}) }^2,
\end{split}
\end{equation*}
which proves the lemma.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:gk_diff_1}]
From Assumption~\ref{asmp:smooth} we have
\begin{align*}
\norm{g_k - \tilde{g}_k}^2 = \Norm{\frac{1}{N}\sum_{i=1}^N \left( \nabla f(x_k) - \nabla f(x_{t(k,i)}) \right)}^2
\leq \frac{L^2}{N} \sum_{i=1}^N \norm{x_k - x_{k-\tau(k,i)}}^2.
\end{align*}
By applying Lemma~\ref{lem:xk_diff_1}, we have
\begin{align*}
\mathbb{E}[\norm{x_k - x_{k-\tau(k,i)}}^2] &\leq \frac{2\tau(k,i)}{N} \sigma^2 \sum_{l=1}^{\tau(k,i)}\eta_{k-l}^2 + 2\Norm{\sum_{l=1}^{\tau(k,i)} \eta_{k-l} P(x_{k-l}, \tilde{g}_{k-l}, \eta_{k-l}) }^2.
\end{align*}
Therefore, we have
\begin{equation*}
\begin{split}
\mathbb{E}[\norm{g_k-\tilde{g}_k}^2] &\leq \frac{L^2}{N} \sum_{i=1}^N \norm{x_k - x_{k-\tau(k,i)}}^2 \nonumber \\
&\leq \frac{L^2}{N} \sum_{i=1}^N \left( \frac{2\tau(k,i)}{N} \sigma^2 \sum_{l=1}^{\tau(k,i)}\eta_{k-l}^2 + 2\tau(k,i)\sum_{l=1}^{\tau(k,i)}\eta_{k-l}^2 \norm{ P(x_{k-l}, \tilde{g}_{k-l}, \eta_{k-l}) }^2 \right) \nonumber \\
&\leq \left( \frac{2L^2T}{N} \sum_{l=1}^{T}\eta_{k-l}^2 \right) \sigma^2 + 2L^2T\sum_{l=1}^{T}\eta_{k-l}^2 \norm{ P(x_{k-l}, \tilde{g}_{k-l}, \eta_{k-l}) }^2,
\end{split}
\end{equation*}
where the last inequality follows from the bounded delay $\tau(k,i) \leq T$ in Assumption~\ref{asmp:bound1}, which proves the lemma.
\end{proof}
{
"timestamp": "2018-09-18T02:03:59",
"yymm": "1802",
"arxiv_id": "1802.08880",
"language": "en",
"url": "https://arxiv.org/abs/1802.08880"
}
\section{Introduction}
Majorana fermions (MF) have attracted much attention in recent years due to their implications for particle physics and potential applications to fault-tolerant topological quantum computation~\cite{MFnuclear,alicea2011non,sarma2015majorana}. Many protocols for the realization of MFs and implementing nonabelian statistics have been proposed, yet no single platform has been identified to be ideal for studies in all aspects~\cite{MFphasetrans,MFEquil,1DTOPChain,MFquasi1D,MFvort,zerobias,propobserv,MFferr,obMFferr}. In versatile platforms, such as condensed matter and quantum gas systems, MFs arise as Bogoliubov quasiparticle excitations at the defect sites (vortices, interfaces, system edges, etc.).
Some recently studied systems are related to topological superconductors that derive from proximity effects of conventional superconductors and spin textures~\cite{MFferr,MFpwaveSF,SCprox,obMFferr,highres,contrfinite,localoscil,majquasi}. Theoretical models of MFs in electronic materials originate from
p-wave pairing states of fermions with broken parity and time-reversal symmetry (TRS)~\cite{pairstate}. Much attention is given to techniques for the detection and control of MFs, such as the preparation of spin-triplet pairing in p-wave superconductors, and to obviating the need for precise parameter tuning~\cite{obMFferr,highres,mixtrip,tunepair,PrepProb}.
The implementation of MFs with cold atoms in optical lattices is also of interest~\cite{PrepProb,1DTOPChain,MFquasi1D,MFpwaveSF,TOPquanmatter,Createsingmaj}. This possibility is based on progress in creating topological phases of cold atoms using the development of synthetic spin-orbit coupling (SOC) and magnetic fields, s-wave and p-wave superfluidity, and single-site addressing techniques~\cite{1DTOPChain,MFEquil,MFquasi1D,zhang2008p,MeasureTOP,exprealiz,TImetal,designlaser,liu2016detecting}. In optical lattices, most theoretical models begin with Kitaev's 1-dimensional (1-D) p-wave superconducting (SC) quantum wire model~\cite{unpairMF} with SOCs, and extend it to 2-D using multiple parallel chains with interchain couplings~\cite{MFEquil,1DTOPChain,MFquasi1D,fateheir}. Such approaches yield
single or multiple 1-D topologically non-trivial chains in a background of trivial higher-dimensional optical lattices; isolated MFs emerge at chain ends in an odd-number-chain phase. This exotic topology still originates from the 1-D Kitaev model, while weak transverse tunneling suppresses quantum fluctuations and stabilizes the long-range order~\cite{MFquasi1D}. It is desirable to identify schemes that naturally include tunneling in different directions and incorporate the techniques in topological fermionic optical lattices to advance research on MFs.
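The Kitaev-wire intuition can be checked numerically: at $\mu = 0$ and $t = \Delta$ (the "sweet spot"), the Bogoliubov--de Gennes (BdG) spectrum of an open Kitaev chain contains an exact pair of zero modes coming from the unpaired end Majoranas, separated from the bulk by a finite gap. A minimal sketch follows (conventions for overall factors vary in the literature; the zero modes do not depend on them):

```python
import numpy as np

def kitaev_bdg(N, t=1.0, delta=1.0, mu=0.0):
    """BdG matrix of the open Kitaev chain in the (c, c^dagger) basis:
    H = [[h, D], [D^dag, -h^T]], with h_{j,j+1} = h_{j+1,j} = -t,
    h_{jj} = -mu, and antisymmetric pairing D_{j,j+1} = delta = -D_{j+1,j}."""
    h = -mu * np.eye(N)
    D = np.zeros((N, N))
    for j in range(N - 1):
        h[j, j + 1] = h[j + 1, j] = -t
        D[j, j + 1] = delta
        D[j + 1, j] = -delta
    return np.block([[h, D], [D.conj().T, -h.T]])

# At the sweet spot (mu = 0, t = delta) the two smallest |E| vanish
# (unpaired end Majoranas), while the bulk stays gapped.
E = np.sort(np.abs(np.linalg.eigvalsh(kitaev_bdg(30))))
assert E[0] < 1e-8 and E[1] < 1e-8 and E[2] > 0.5
```

Moving away from the sweet spot (e.g., $|\mu| > 2t$) gaps out these end modes, consistent with the topological phase boundary of the Kitaev model.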
In this paper, we propose an effective model to create MFs at an edge of the honeycomb lattice by introducing textured pairings into a 2-D topologically nontrivial Haldane model~\cite{MeasureTOP}. The key idea is to incorporate both the spin-singlet and textured spin-triplet pairings in the pseudospin-state dependent honeycomb optical lattice, which breaks the TRS with complex next-nearest-neighbor (NNN) hopping. By tuning the pair coupling strength to match the amplitude and phase of the hopping terms, MFs with flat bands (also called Majorana zero modes, "MZMs") will arise on a single edge of the lattice.
This is similar to the "sweet spot" conditions in the Kitaev model. This suggests that to realize such MZMs, it is critical to break the 3-fold rotational symmetry of the pairing terms of the Hamiltonian, leading to a specific type of Majorana coupling. This requirement on the angular dependence of the sign of the spin-triplet pairing term is reminiscent of textured pairings in paired states of fermions~\cite{pairstate}.
In our model, the cold atom system has a gapped SC phase and a gapless phase for parameters in the "sweet spot". Either phase can have a winding number $w=0$ or $w\neq 0$. The phase diagram can be represented in the domain of phase parameters in the complex NNN hoppings. We demonstrate a method to reduce the gap-closing condition of the bulk Hamiltonian to the calculation of the discriminant.
This circumvents the analytical complexity of a 4-band model. The value of this discriminant distinguishes the gapped SC and gapless phases. It actually measures the "strength" of TRS breaking, thus further dividing the TRS-broken class into two groups. In the gapped SC phase, there always exist two pairs of MZMs, while the winding number of bulk bands, $w=\pm 1$, is associated with extra normal gapless edge states. One of the MZM pairs can be fully pseudospin-polarized and localized at an edge in special cases, while the other pair usually extends to deeper layers with exponentially decaying amplitudes. In the gapless phase, the second pair of MZMs vanishes due to their coupling with the bulk modes. It remains to be determined whether the two pairs of MZMs in topologically trivial cases will have an energy splitting in an extended model that incorporates the interaction of MFs or other coupling channels~\cite{stronginter}.
This paper is organized as follows: In section $\textrm {\uppercase\expandafter{\romannumeral 2}}$, we introduce our theoretical model and the intuition of generating MZMs. In section $\textrm {\uppercase\expandafter{\romannumeral 3}}$, we identify the MZMs from the aspects of the band structure, density profile and wavefunction symmetry by numerical simulation. In section $\textrm {\uppercase\expandafter{\romannumeral 4}}$, the phase diagram of a cold atom system is presented and the degeneracy of MZMs in each phase is discussed. We describe a mathematical method to find a discriminant that gives the phase boundary between gapped SC and gapless phases. This discriminant characterizes the "strength" of TRS breaking. Finally, our model is compared with previous models of creating MFs in 2-D cold atom systems.
\section{Model and physical intuition}
\captionsetup[subfigure]{position=top,singlelinecheck=off,justification=raggedright}
\begin{figure}[tbp]
\centering
\subfloat[] {\includegraphics[width=0.5\textwidth]{intuition1.pdf} \label{fig1a}}\\
\subfloat[] {\includegraphics[width=0.5\textwidth]{intuition2.pdf} \label{fig1b}}\\
\captionsetup{justification=raggedright}
\caption{Physical intuition of generating unpaired MFs at an edge in the case of $\mu=0$. (a) Kitaev's 1-D spinless p-wave SC quantum wire. The two neighboring MFs constitute a normal fermion. The blue arrows in the upper chain signify the internal pairing of MFs with no unpaired MFs remaining. The red arrows in the lower chain indicate the inter-cell pairing of MFs with two unpaired MFs at the ends of the chain. (b) MF coupling at a single armchair edge of a 2-D honeycomb lattice. The upper two and lower left subfigures are for our model, in which there is no 3-fold rotational symmetry (RS). The solid bonds are the net Majorana couplings contributed by terms related to $(\Delta, t)$, $(\Delta_{\uparrow}, t_{\uparrow})$ and $(\Delta_{\downarrow}, t_{\downarrow})$ in $\hat{H}$, respectively. The shaded and colored cells denote the dangling MFs in a hexagon at a single armchair edge of the lattice. The lower right subfigure shows an example of unexpected MF couplings in the presence of rotational symmetry, shown for comparison with our model.}
\end{figure}
Our model is based on a generalized Haldane model in a pseudospin-state dependent honeycomb optical lattice~\cite{MeasureTOP}, which is among many protocols proposed to implement topological phases in systems of noninteracting fermions. In the realization of our system, ultracold atoms with two different hyperfine states are described as two pseudospin states (spin-up "$\uparrow$" and spin-down "$\downarrow$"), each localized at one of two inequivalent sublattices (A and B). The natural tunneling between sites on the same sublattice and the laser-induced coupling between different sublattices implement the next-nearest-neighbor (NNN) and nearest-neighbor (NN) hoppings, respectively. To generate unpaired MFs, we add spin-dependent pair interactions between atoms to obtain the total effective Hamiltonian:
\begin{equation}
\hat{H}=\hat{H}_{0}+\hat{H}_{p},
\label{eq1}
\end{equation}
where
\begin{eqnarray}
\hat{H}_{0}&=&-t\sum_{\langle j,m\rangle }(a_{\vec{r}_{j}}^{\dagger} b_{\vec{r}_{m}}+h.c.)-t_{\uparrow}\sum_{\langle \langle j,j^{\prime}\rangle \rangle }(e^{i\phi_{A}} a_{\vec{r}_{j}}^{\dagger} a_{\vec{r}_{j^{\prime}}}+h.c.) \nonumber \\
&&-t_{\downarrow}\sum_{\langle \langle m,m^{\prime}\rangle \rangle }(e^{i\phi_{B}} b_{\vec{r}_{m}}^{\dagger} b_{\vec{r}_{m^{\prime}}}+h.c.) \nonumber \\
&&+\mu(\sum_{j}a_{\vec{r}_{j}}^{\dagger} a_{\vec{r}_{j}}-\sum_{m}b_{\vec{r}_{m}}^{\dagger} b_{\vec{r}_{m}}),
\label{eq2}\\
\hat{H}_{p}&=&\sum_{\langle j,m\rangle }(\Delta a_{\vec{r}_{j}}^{\dagger} b_{\vec{r}_{m}}^{\dagger}+h.c.)
+\sum_{\langle \langle j,j^{\prime}\rangle \rangle ,y_{j}< y_{j^{\prime}}}(\Delta_{\uparrow} a_{\vec{r}_{j}}^{\dagger}a_{\vec{r}_{j^{\prime}}}^{\dagger}+h.c.) \nonumber \\
&&+\sum_{\langle \langle m,m^{\prime}\rangle \rangle ,y_{m}< y_{m^{\prime}}}(\Delta_{\downarrow} b_{\vec{r}_{m}}^{\dagger}b_{\vec{r}_{m^{\prime}}}^{\dagger}+h.c.).
\label{eq3}
\end{eqnarray}
In the equations above, $\hat{H}_{0}$ is the original effective Hamiltonian following a unitary basis transformation as described in Ref.~\cite{MeasureTOP}; $t$ is the NN hopping amplitude and $t_{\uparrow}$ ($t_{\downarrow}$) is the NNN hopping amplitude in sublattice A (B). These three hopping amplitudes are real numbers. The NNN hopping phases
are given by $\phi_{A}$ ($\phi_{B}$), and $\vec{r}_{j}$ ($\vec{r}_{m}$) denotes the site index of sublattice A (B) (note that there is only one spin state per site, so $\vec{r}_{j}$ and $\vec{r}_{m}$ are drawn from different sets of displacement vectors). $\mu$ denotes half of the chemical potential difference between sublattices A and B. $\hat{H}_{p}$ is the spin-dependent pairing Hamiltonian introduced by our model. The $\Delta$ terms denote pairings of atoms with opposite spins (spin-singlet), while the $\Delta_{\uparrow}$ and $\Delta_{\downarrow}$ terms denote pairings of atoms with the same spin (spin-triplet).
For simplicity, we first analyze a special case in which the on-site staggered potential vanishes, i.e., $\mu$ is zero. Then, the system can be viewed in the Majorana representation (Fig. \ref{fig1b}) by rewriting the Hamiltonian with Majorana operators:
\begin{eqnarray}
a_{\vec{r}_{j}}=\frac{1}{2}(\gamma_{\vec{r}_{j},\uparrow,1}+i\gamma_{\vec{r}_{j},\uparrow,2}),
b_{\vec{r}_{m}}=\frac{1}{2}(\gamma_{\vec{r}_{m},\downarrow,1}+i\gamma_{\vec{r}_{m},\downarrow,2}). \nonumber
\end{eqnarray}
Here, $\{\gamma_{\vec{r}_{j},\sigma,\alpha},\gamma_{\vec{r}_{m},\sigma^{\prime},\beta}\}=2\delta_{\vec{r}_{j},\vec{r}_{m}}
\delta_{\sigma\sigma^{\prime}}\delta_{\alpha\beta}$ with each Majorana operator, $\gamma$, carrying three subscripts that denote the position, spin and Majorana type, respectively. In the following discussion, a Majorana coupling means an $i\gamma\gamma^{\prime}$ term in the Hamiltonian and is denoted by a bond (double arrow) between two sites in Figs. \ref{fig1a} and \ref{fig1b}. A Majorana coupling
corresponds to an ordinary fermion whose occupation costs the energy of a bulk mode in the band structure. Based on the physical intuition shown in Fig. \ref{fig1b}, we introduce $\hat{H}_{p}$ in the above form in order to cancel part of the Majorana couplings introduced by the $t$, $t_{\uparrow}$ and $t_{\downarrow}$ hopping terms. This choice yields the Hamiltonian in Appendix A.
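As a quick sanity check, the Majorana algebra above can be verified with an explicit matrix representation of a single fermionic mode (a minimal NumPy sketch; the $2\times 2$ matrices are the standard Fock-space representation of one mode, not specific to our lattice):

```python
import numpy as np

# Single fermionic mode on the 2-dimensional Fock space {|0>, |1>}.
a = np.array([[0.0, 1.0],
              [0.0, 0.0]], dtype=complex)   # annihilation operator
adag = a.conj().T

# Majorana decomposition a = (gamma1 + i*gamma2)/2, as in the text.
gamma1 = a + adag            # equals Pauli sigma_x
gamma2 = -1j * (a - adag)    # equals Pauli sigma_y

I2 = np.eye(2)

def anticomm(x, y):
    return x @ y + y @ x

# Majorana algebra {gamma_alpha, gamma_beta} = 2*delta_{alpha,beta}.
assert np.allclose(anticomm(gamma1, gamma1), 2 * I2)
assert np.allclose(anticomm(gamma2, gamma2), 2 * I2)
assert np.allclose(anticomm(gamma1, gamma2), np.zeros((2, 2)))

# Each Majorana operator is Hermitian (its own "antiparticle"),
# and the decomposition reproduces the original fermion operator.
assert np.allclose(gamma1, gamma1.conj().T)
assert np.allclose(gamma2, gamma2.conj().T)
assert np.allclose((gamma1 + 1j * gamma2) / 2, a)
```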
The new "sweet spot" conditions that create dangling MFs at edges can now be deduced. We inherit the key idea of Kitaev's 1-D spinless p-wave SC chain model, which is to choose a specific type of Majorana coupling that leaves unpaired MFs at the edges of the finite lattice (Fig. \ref{fig1a}). In our model, we make the net effect of the $\Delta$ and $t$ terms the coupling $\gamma_{\vec{r}_{j},\uparrow,2}\gamma_{\vec{r}_{m},\downarrow,1}$ between NN sites, and the net effect of the $\Delta_{\uparrow(\downarrow)}$ and $t_{\uparrow(\downarrow)}$ terms the two couplings $\gamma_{\vec{r}_{j},\uparrow(\downarrow),2(1)}\gamma_{\vec{r}_{j^{\prime}},\uparrow(\downarrow),\alpha}$ between NNN sites with $y_{j}<y_{j^{\prime}}$ and $\alpha=1,2$ (Fig. \ref{fig1b}), where $y_{j}$ and $y_{j^{\prime}}$ are the $\hat{y}$ components of $\vec{r}_{j}$ and $\vec{r}_{j^{\prime}}$, respectively. This means that the coupling connecting the A and B sublattices is only between type-2 MFs in A and type-1 MFs in B, and that the coupling within the A (B) sublattice runs only from type-2 (type-1) MFs to MFs of both types at larger $\hat{y}$ coordinates. These requirements reduce to Eqs. (\ref{eq4})-(\ref{eq6}), which define the "sweet spot" in the parameter space. The final result is that the type-1 MFs of atoms in the A sublattice and the type-2 MFs of atoms in the B sublattice at one armchair edge (the shaded and colored cells in Fig. \ref{fig1b}) are isolated, i.e., $\gamma_{y_{1},\uparrow,1}$ and $\gamma_{y_{1},\downarrow,2}$ do not appear in $\hat{H}$, since there are no other MFs outside the lattice to couple with them.\\
\begin{eqnarray}
\Delta&=&-t,
\label{eq4}\\
\Delta_{\uparrow}&=&-t_{\uparrow}e^{-i\phi_{A}},
\label{eq5}\\
\Delta_{\downarrow}&=&t_{\downarrow}e^{-i\phi_{B}}.
\label{eq6}
\end{eqnarray}
\captionsetup[subfigure]{position=top,singlelinecheck=off,justification=raggedright}
\begin{figure}[tbp]
\centering
\subfloat[] {\includegraphics[width=0.20\textwidth]{intuition3.pdf} \label{fig2a}}
%
\subfloat[] {\includegraphics[width=0.25\textwidth]{intuition4.pdf} \label{fig2b}}\\
\subfloat[] {\includegraphics[width=0.5\textwidth]{intuition5.pdf} \label{fig2c}}
\captionsetup{justification=raggedright}
\caption{Schematic of our physical system. (a) The three vectors ($\vec{\delta}_{j}$) connecting NN sites and the three vectors ($\vec{r}_{j}$) connecting NNN sites, for $j=1,2,3$. (b) Angular distribution of the sign of the amplitude and phase of the defined pairing, which resembles a domain wall structure. (c) The distribution of MZMs at a single armchair edge. The heights of the pillars denote the wavefunction amplitudes at each site.}
\end{figure}
Therefore, our model has generalized "sweet spot" conditions analogous to those of a 1-D Kitaev chain, and it possesses textured pairings analogous to those in the original model of fermionic paired states~\cite{pairstate}. Note first that the $\Delta_{\uparrow(\downarrow)}$ terms are written in a particular order ($y_{j(m)}< y_{j^{\prime}(m^{\prime})}$) to keep the definition unambiguous. Second, it is critical to break the 3-fold rotational symmetry of the net Majorana couplings within a hexagon (Fig. \ref{fig1b}). Since $t$ is real, the net coupling between NN A and B sites can be reduced to one bond (the upper left subfigure in Fig. \ref{fig1b}). However, there is no degree of freedom that reduces the net coupling between NNN A or B sites to fewer than two bonds, since $t_{\sigma}$ is complex. Thus, any pairing Hamiltonian with 3-fold rotational symmetry (such as the lower right subfigure in Fig. \ref{fig1b}) does not allow dangling MFs at edges. We must impose requirements on the sign of the amplitude and phase of $\Delta_{\sigma}$ to break the 3-fold rotational symmetry.
We present the requirement on the pairing terms in an alternative way that highlights the concept of textures in the pairing. Following Ref.~\cite{MeasureTOP}, we choose the Peierls phase associated with the NNN hopping path $a_{\vec{r}_{j}}^{\dagger} a_{\vec{r}_{j^{\prime}}}$ to be:
\begin{equation}
\phi_{A}(j,j^{\prime})=-\vec{p}\cdot (\vec{r}_{j}-\vec{r}_{j^{\prime}})/2. \nonumber
\end{equation}
A pair creation term in $\hat{H}$ is $\Delta_{\uparrow}(\theta)a_{\vec{r}_{j}}^{\dagger} a_{\vec{r}_{j^{\prime}}}^{\dagger}$, where $\theta$ is the angle between $(\vec{r}_{j^{\prime}}-\vec{r}_{j})$ and the $\hat{x}$ axis. Then, we can deduce from the above requirements:
\begin{displaymath}
\Delta_{\uparrow}(\theta)=
\left\{
\begin{array}{l}
-t_{\uparrow}e^{-i\vec{p}\cdot (\vec{r}_{j^{\prime}}-\vec{r}_{j})/2}, \hspace{0.3in} (y_{j}<y_{j^{\prime}}) \\
t_{\uparrow}e^{i\vec{p}\cdot (\vec{r}_{j^{\prime}}-\vec{r}_{j})/2}. \hspace{0.52in} (y_{j}>y_{j^{\prime}}) \\
\end{array}
\right.
\end{displaymath}
We see that the sign of $\Delta_{\uparrow}(\theta)$, in front of both the amplitude $t_{\uparrow}$ and the phase $\vec{p}\cdot (\vec{r}_{j^{\prime}}-\vec{r}_{j})/2$, changes across $[0, 2\pi]$ (Fig. \ref{fig2b}). This requirement of broken rotational symmetry leads to exotic textures in the pairing terms~\cite{pairstate}, and the angular distribution of the sign of the spin-triplet pairing term has a reorientation similar to that of a domain wall structure in magnetism~\cite{catalan2012domain}. In analogy with the spin textures closely related to topological superconductivity, such as vortices, skyrmions, spirals and helices, the discrete texture in the pairing described here is a generalization and may play an important role in future studies of MFs in honeycomb lattice structures.
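The sign structure of $\Delta_{\uparrow}(\theta)$ is easy to encode in a few lines. The sketch below (with assumed values for $t_{\uparrow}$ and $\vec{p}$, and $a=1$) checks that the textured pairing is antisymmetric under bond reversal, $\Delta_{\uparrow}(-\vec{d})=-\Delta_{\uparrow}(\vec{d})$, as required for a pairing of identical fermions, while its amplitude stays uniform:

```python
import numpy as np

t_up = 0.4                       # NNN hopping amplitude (assumed value)
p = np.array([0.3, 0.7])         # laser momentum transfer (assumed value)

def delta_up(d):
    """Spin-triplet pairing for the NNN bond vector d = r_j' - r_j."""
    if d[1] > 0:                               # y_j < y_j'
        return -t_up * np.exp(-1j * np.dot(p, d) / 2)
    else:                                      # y_j > y_j'
        return t_up * np.exp(1j * np.dot(p, d) / 2)

# The three NNN vectors of Fig. 2(a), with a = 1 (all have positive y).
r = [np.array([-1.5, np.sqrt(3) / 2]),
     np.array([0.0, np.sqrt(3)]),
     np.array([1.5, np.sqrt(3) / 2])]

for d in r:
    # Fermionic (p-wave-like) antisymmetry of the bond amplitude.
    assert np.isclose(delta_up(-d), -delta_up(d))
    # Uniform amplitude; only the sign/phase is textured.
    assert np.isclose(abs(delta_up(d)), t_up)
```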
It should be noted that we derive our intuition from the particular case $\mu=0$, but the effectiveness of our model in creating MFs is not limited to this case. The case $\mu \neq 0$ is discussed in Section IV below. Note also that we present only one particular type of MF coupling; it determines the location of the unpaired MFs, which are not necessarily at the armchair edge of the lattice.
\section{Identification of Majorana zero modes}
\captionsetup[subfigure]{position=top,singlelinecheck=off,justification=raggedright}
\begin{figure}[tbp]
\centering
\subfloat[Gapped SC, $w=0$] {\includegraphics[width=0.25\textwidth]{bandtopsc0.pdf} \label{fig3a}}
%
\subfloat[Gapped SC, $w=\pm 1$] {\includegraphics[width=0.25\textwidth]{bandtopscneg1.pdf} \label{fig3b}}\\
\subfloat[Gapless, $w=0$] {\includegraphics[width=0.25\textwidth]{bandsemimetal0.pdf} \label{fig3c}}
%
\subfloat[Gapless, $w=\pm 1$] {\includegraphics[width=0.25\textwidth]{bandsemimetalneg1.pdf} \label{fig3d}}
\captionsetup{justification=raggedright}
\caption{Band structure for $\mu=0$ in four cases characterized by different gap conditions and winding numbers $w$. Fixed parameters: $t=1$, $t_{\uparrow}=0.4$, $t_{\downarrow}=0.6$. (a) $\vec{p}=(0.9K_{x},0)$; (b) $\vec{p}=(0,3K_{y})$; (c) $\vec{p}=(0.1K_{x},0.2K_{y})$; (d) $\vec{p}=(0.6K_{x},3K_{y})$, where $K_{x}=2\pi/3$ and $K_{y}=K_{x}/\sqrt{3}$. Each group of blue curves represents a bulk band. The dark red straight line is the MZM and the brown and green curves in (b) and (d) are gapless edge modes. In (a) and (b), the zero modes are 4-fold degenerate; in (c) and (d), the zero modes are 2-fold degenerate. The inset in (c) zooms in on the split zero modes (magenta) and flat MZMs (dark red) separately.}
\end{figure}
We identify MZMs from the band structure, density profile and wavefunction symmetry obtained by numerical simulations. The geometry of our model is depicted in Fig. \ref{fig2a}. We use three displacement vectors $\vec{\delta}$ to denote the NN hoppings and another three, $\vec{r}$, to denote the NNN hoppings:
\begin{eqnarray}
\vec{\delta}_{1}=a(\frac{1}{2},\frac{\sqrt{3}}{2}),
\vec{\delta}_{2}=a(\frac{1}{2},-\frac{\sqrt{3}}{2}),
\vec{\delta}_{3}=a(-1,0), \nonumber \\
\vec{r}_{1}=a(-\frac{3}{2},\frac{\sqrt{3}}{2}),
\vec{r}_{2}=a(0,\sqrt{3}),
\vec{r}_{3}=a(\frac{3}{2},\frac{\sqrt{3}}{2}). \nonumber
\end{eqnarray}
\noindent Here, $a$ is the side length of a hexagonal plaquette and is set as the unit of length in this paper. Following current techniques for implementing the complex NNN hopping via laser-induced transitions~\cite{MeasureTOP}, we choose the Peierls phase associated with $a_{\vec{r}_{j}}^{\dagger} a_{\vec{r}_{j^{\prime}}}$ to be $\phi_{A}(j,j^{\prime})=-\vec{p}\cdot (\vec{r}_{j}-\vec{r}_{j^{\prime}})/2$ and, similarly, $\phi_{B}(m,m^{\prime})=\vec{p}\cdot (\vec{r}_{m}-\vec{r}_{m^{\prime}})/2$, where $\vec{p}$ is the momentum transfer associated with the laser-induced tunneling.
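These geometric relations can be verified numerically. The short check below (NumPy, $a=1$) confirms that each NNN vector is a difference of two NN vectors, and that $\vec{r}_{1}+\vec{r}_{3}=\vec{r}_{2}$, which underlies the constraint $\Phi_{2}=\Phi_{1}+\Phi_{3}$ on the Peierls phases used later:

```python
import numpy as np

a = 1.0  # hexagon side length (the unit of length in this paper)

# NN displacement vectors.
d1 = a * np.array([0.5, np.sqrt(3) / 2])
d2 = a * np.array([0.5, -np.sqrt(3) / 2])
d3 = a * np.array([-1.0, 0.0])

# NNN displacement vectors.
r1 = a * np.array([-1.5, np.sqrt(3) / 2])
r2 = a * np.array([0.0, np.sqrt(3)])
r3 = a * np.array([1.5, np.sqrt(3) / 2])

# Each NNN vector is a difference of two NN vectors, i.e., every NNN site
# of an A site is reached by a two-step path A -> B -> A.
assert np.allclose(r1, d3 - d2)
assert np.allclose(r2, d1 - d2)
assert np.allclose(r3, d1 - d3)

# r1 + r3 = r2, so the Peierls phases Phi_j = p.r_j/2 obey Phi_2 = Phi_1 + Phi_3.
assert np.allclose(r1 + r3, r2)
```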
We calculate the band structure using a momentum space representation based on a Fourier transformation in the $\hat{x}$ direction:
\begin{eqnarray}
\hat{a}_{\vec{r}_{j}}&=&\frac{1}{\sqrt{N_{x}}}\sum_{k_{x}}e^{i k_{x}x_{j}}\hat{a}_{k_{x},y_{j}}, \nonumber \\ \hat{b}_{\vec{r}_{m}}&=&\frac{1}{\sqrt{N_{x}}}\sum_{k_{x}}e^{i k_{x}x_{m}}\hat{b}_{k_{x},y_{m}}. \nonumber
\end{eqnarray}
$x_{j(m)}$ and $y_{j(m)}$ are the components of $\vec{r}_{j(m)}$ in the $x$ and $y$ directions, respectively. $N_{x}$ is the number of cells along the $\hat{x}$ direction, which is much larger than the number along the $\hat{y}$ direction. The basis vector of the Bogoliubov-de-Gennes Hamiltonian of $\hat{H}$ for a particular $k_{x}$ then becomes $(\hat{a}_{k_{x},y_{1}}, ..., \hat{b}_{k_{x},y_{1}}, ..., \hat{a}^{\dagger}_{-k_{x},y_{1}}, ..., \hat{b}^{\dagger}_{-k_{x},y_{1}}, ...)^{T}$, in which the subscripts denoting the $y$ component range over all the rows (also called layers in this paper). The band structure containing both the edge modes and the bulk bands is then obtained by diagonalizing the Hamiltonian in this basis. The results are shown in Figs. \ref{fig3a}-\ref{fig3d}.
Our model has a gapless and a gapped SC phase at the "sweet spot", either of which may have a zero or nonzero winding number of the first excited band. As a common feature, there are two groups of bulk bands (blue curves) in each of the upper and lower half-planes. These are inherited from the Haldane model due to the number of inequivalent sites in a unit cell. The gapped SC phase and gapless phase are distinguished by the gap-closing condition between the zero-energy line and the first excited band in the upper half-plane ("band 1"). In the gapped SC phase (Figs. \ref{fig3a}, \ref{fig3b}), there are two pairs of MZMs with completely flat bands, shown as straight dark red lines coinciding with each other in Figs. \ref{fig3a}-\ref{fig3d}. By contrast, the gapless phase retains one pair of MZMs while the other pair partially merges into the bulk modes in some ranges of $k_{x}$ (Figs. \ref{fig3c}, \ref{fig3d}). These "partial" MZMs, which terminate at band-closing points, are lower-dimensional Majorana analogues of the Fermi arcs in 3-D Weyl semimetals~\cite{fateheir}. Furthermore, each of the gapped SC and gapless phases can have winding number $w=0$ or $w=\pm 1$ of band 1. The nonzero-winding-number phase hosts additional ordinary gapless edge modes between band 1 and the second excited bulk band in the upper half-plane (band 2).
To further support the correctness of our model, we obtain the wavefunction of the zero modes in the band structure by numerical simulation:
\begin{eqnarray}
\hat{\Psi}_{k_{x}}=\sum_{y_{j}}(u_{k_{x},y_{j}}, v_{k_{x},y_{j}}, u_{k_{x},y_{j}}, -v_{k_{x},y_{j}})\cdot \nonumber \\
(\hat{a}_{k_{x},y_{j}},
\hat{b}_{k_{x},y_{j}}, \hat{a}^{\dagger}_{-k_{x},y_{j}}, \hat{b}^{\dagger}_{-k_{x},y_{j}})^{T} \nonumber \\
\approx \sum_{x_{j}}(\frac{1}{\sqrt{N_{x}}}u_{k_{x},y_{1}}e^{-i k_{x}x_{j}})\hat{\gamma}^{(A)}_{x_{j},y_{1},1} \nonumber \\
+\sum_{x_{m}}(\frac{1}{\sqrt{N_{x}}}v_{k_{x},y_{1}}e^{-i k_{x}x_{m}})\hat{\gamma}^{(B)}_{x_{m},y_{1},2},
\label{eq7}
\end{eqnarray}
where $u_{k_{x},y_{i}}$ and $v_{k_{x},y_{i}}$ are the wavefunctions of the $i$th layer in the $\hat{y}$ direction in the partial Fourier-transformed basis. Note that there is a degree of freedom in choosing the coefficient in front of a specific eigenstate, which ensures that the above solution is a Majorana. We use the approximation sign and keep only the first-layer wavefunctions because the numerical simulation indicates that the solution amplitudes generally decay exponentially (Fig. \ref{fig2c}).
The two pairs of MZMs in the gapped SC phase display several notable features. One pair can be fully pseudospin-polarized and localized only at an edge layer when $\mu=0$. Such a pair has the form:
\begin{eqnarray}
(\hat{a}_{k_{x},y_{1}}+\hat{a}^{\dagger}_{-k_{x},y_{1}})|0\rangle, (\hat{b}_{k_{x},y_{1}}-\hat{b}^{\dagger}_{-k_{x},y_{1}})|0\rangle. \nonumber
\end{eqnarray}
It can be shown that they are always two zero-energy eigenvectors of the Bogoliubov-de-Gennes Hamiltonian of $\hat{H}$ in the partial Fourier-transformed basis, where $|0\rangle$ is the vacuum state. Both of these MZMs persist in the gapless phase, similar to the persistence of edge MFs in a 1-D Kitaev chain with zero chemical potential (in which case the Kitaev chain is always in a topologically nontrivial phase). The other pair of MZMs usually extends to deeper layers with exponentially decaying amplitudes; in the gapless phase it acquires an energy splitting due to its coupling with the bulk modes. It should be noted that our model applies to noninteracting fermions and suggests a new scheme for finding MFs. It remains for future research to determine whether the two pairs of MZMs in topologically trivial cases will acquire an energy splitting when interactions between MFs or other coupling channels are added~\cite{stronginter}. If so, there will be detectable effective MFs only in the $w\neq0$ phases.
\section{Phase diagram at the sweet spot}
\captionsetup[subfigure]{position=top,singlelinecheck=off,justification=raggedright}
\begin{figure}[tbp]
\centering
\subfloat[Gapped SC vs. gapless]{\includegraphics[width=0.25\textwidth]{phasediaggap.pdf} \label{fig4a}}
%
\subfloat[$w=0$ vs. $w=\pm 1$] {\includegraphics[width=0.25\textwidth]{phasediagnumband1.pdf} \label{fig4b}}\\
\captionsetup{justification=raggedright}
\caption{Phase diagram at the generalized "sweet spot" when $\mu=0$, obtained by numerical simulation. (a) shows the phase boundary between the gapped SC phase (purple) and gapless phase (light green). (b) shows the phase boundary between phases with different winding numbers. The green region indicates $w=0$, the blue region $w=1$ and the red region $w=-1$. A combination of these two figures shows the full 4-phase diagram.}
\end{figure}
The phase diagram of the cold atom system at the "sweet spot" is worth analyzing. For fixed amplitude parameters ($t$, $t_{\uparrow}$ and $t_{\downarrow}$), the phase of the system varies with the NNN hopping phases
$\phi_{A}$ and $\phi_{B}$. This is displayed by the phase diagram in the momentum coordinates $p_{x}$ and $p_{y}$, which is shown in Fig. \ref{fig4a}, \ref{fig4b}.
There are a total of four phases associated with the two alternatives "gapped SC vs. gapless" and "$w=0$ vs. $w=\pm 1$".
As illustrated in Section III, the phase boundary between the gapped SC phase and the gapless phase can be deduced from the bulk-edge correspondence of topological physics. In the above version of our model, the pseudospin space and the particle-hole space each contribute 2 degrees of freedom. We therefore have a 4-band model, whose description requires the solutions of a quartic equation, which in general have complicated analytical forms. To circumvent this mathematical complexity, we demonstrate a method that rigorously reduces the parameter conditions for the gap closing between band 1 and band 2, identified in Section III, to a discriminant.
In the gapped SC phase, the gap between bands 1 and 2 is open due to TRS breaking by complex NNN hoppings, protecting the MZMs from coupling with bulk modes. In a certain range, this gap is approximately proportional to the amplitude of the relative complex hopping $|\frac{t_{\sigma}}{t}|$. At the "sweet spot", the Bogoliubov-de-Gennes Hamiltonian in momentum space, $H_{BdG}(\vec{k})$, reduces to:
\begin{displaymath}
\left( \begin{array}{cccc}
-2t_{\uparrow}f_{-}+\mu & -tg^{*} & |\Delta_{\uparrow}|h & \Delta g^{*} \\
-tg & -2t_{\downarrow}f_{+}-\mu & -\Delta g & |\Delta_{\downarrow}|h^{*} \\
|\Delta_{\uparrow}|h^{*} & -\Delta g^{*} & 2t_{\uparrow}f_{+}-\mu & tg^{*} \\
\Delta g & |\Delta_{\downarrow}|h & tg & 2t_{\downarrow}f_{-}+\mu
\end{array} \right),
\end{displaymath}
where
\begin{eqnarray}
&&f_{+}=f_{+}(\vec{k},\vec{p})=\sum_{j=1}^{3}\cos((\vec{k}+\vec{p}/2)\cdot \vec{r}_{j}), \nonumber \\
&&f_{-}=f_{-}(\vec{k},\vec{p})=\sum_{j=1}^{3}\cos((\vec{k}-\vec{p}/2)\cdot \vec{r}_{j}), \nonumber \\
&&g=g(\vec{k})=\sum_{j=1}^{3}e^{-i\vec{k}\cdot \vec{\delta}_{j}}, \nonumber \\
&&h=h(\vec{k},\vec{p})=2i\sum_{j=1}^{3}e^{-i\frac{\vec{p}}{2}\cdot \vec{r}_{j}}\sin(\vec{k}\cdot \vec{r}_{j}). \nonumber
\end{eqnarray}
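The matrix above can be assembled directly. The sketch below (with the illustrative parameter values of Fig. 3 and an assumed $\vec{p}$) checks that $H_{BdG}(\vec{k})$ is Hermitian and that its spectrum satisfies the particle-hole relation $E(\vec{k})=-E(-\vec{k})$:

```python
import numpy as np

# "Sweet spot" parameters (values assumed for illustration, cf. Fig. 3).
t, t_up, t_dn, mu = 1.0, 0.4, 0.6, 0.0
Delta, D_up, D_dn = -t, t_up, t_dn       # Eqs. (4)-(6) fix |Delta_sigma| = t_sigma
p = np.array([0.3, 0.5])                 # assumed momentum transfer

a = 1.0
r = [a * np.array([-1.5, np.sqrt(3)/2]), a * np.array([0.0, np.sqrt(3)]),
     a * np.array([1.5, np.sqrt(3)/2])]
d = [a * np.array([0.5, np.sqrt(3)/2]), a * np.array([0.5, -np.sqrt(3)/2]),
     a * np.array([-1.0, 0.0])]

def hbdg(k):
    """4x4 BdG Hamiltonian in the basis (a_k, b_k, a^dag_{-k}, b^dag_{-k})."""
    fp = sum(np.cos(np.dot(k + p/2, rj)) for rj in r)
    fm = sum(np.cos(np.dot(k - p/2, rj)) for rj in r)
    g = sum(np.exp(-1j * np.dot(k, dj)) for dj in d)
    h = 2j * sum(np.exp(-1j * np.dot(p, rj)/2) * np.sin(np.dot(k, rj)) for rj in r)
    return np.array([
        [-2*t_up*fm + mu,   -t*np.conj(g),     D_up*h,           Delta*np.conj(g)],
        [-t*g,              -2*t_dn*fp - mu,   -Delta*g,         D_dn*np.conj(h)],
        [D_up*np.conj(h),   -Delta*np.conj(g), 2*t_up*fp - mu,   t*np.conj(g)],
        [Delta*g,           D_dn*h,            t*g,              2*t_dn*fm + mu]])

k = np.array([0.7, -0.2])
H = hbdg(k)
assert np.allclose(H, H.conj().T)        # Hermiticity

# Particle-hole symmetry: the spectrum at k is minus the spectrum at -k.
Ek = np.sort(np.linalg.eigvalsh(hbdg(k)))
Emk = np.sort(np.linalg.eigvalsh(hbdg(-k)))
assert np.allclose(Ek, -Emk[::-1])
```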
The energy eigenvalues, $E$, are the roots of the characteristic polynomial $F(E)=\det(H_{BdG}(\vec{k})-E I_{4\times4})=0$, where $\det(\cdot)$ denotes the determinant. For a general quartic equation in $E$, $F(E)=\prod_{j=1}^{4}(E-E_{j})=0$, a two-fold root $E=0$, corresponding to the band-touching condition, implies the constraint $F(E=0)=\frac{dF}{dE}(E=0)=0$. By invoking the "sweet spot" conditions Eqs. (\ref{eq4})-(\ref{eq6}), $\mu=0$ and this constraint, we get:
\begin{eqnarray}
F(E=0)&=&(t_{\uparrow}t_{\downarrow})^{2}(4f_{+}f_{-}+|h|^{2})^{2},
\label{eq8}
\\
\frac{dF}{dE}(E=0)&=&2t_{\uparrow}t_{\downarrow}(t_{\uparrow}-t_{\downarrow})(f_{+}-f_{-})(4f_{+}f_{-}+|h|^{2}).
\nonumber \\
\label{eq9}
\end{eqnarray}
We make three observations that lead to a fuller understanding of the solutions of Eqs. (\ref{eq8})-(\ref{eq9}). First, these equations contain a common nonnegative factor (discriminant)
\begin{equation}
4f_{+}f_{-}+|h|^{2}=4|\sum_{j=1}^{3}e^{i\vec{k}\cdot \vec{r}_{j}}\cos(\vec{p}\cdot \vec{r}_{j}/2)|^{2}.
\label{eq10}
\end{equation}
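This algebraic identity can be spot-checked numerically over random $(\vec{k},\vec{p})$ (a short sketch; the lattice vectors are those of Fig. 2(a) with $a=1$):

```python
import numpy as np

rng = np.random.default_rng(0)
a = 1.0
r = [a * np.array([-1.5, np.sqrt(3)/2]), a * np.array([0.0, np.sqrt(3)]),
     a * np.array([1.5, np.sqrt(3)/2])]

for _ in range(100):
    k = rng.uniform(-np.pi, np.pi, 2)
    p = rng.uniform(-np.pi, np.pi, 2)
    fp = sum(np.cos(np.dot(k + p/2, rj)) for rj in r)
    fm = sum(np.cos(np.dot(k - p/2, rj)) for rj in r)
    h = 2j * sum(np.exp(-1j * np.dot(p, rj)/2) * np.sin(np.dot(k, rj)) for rj in r)
    # Left-hand side: the common factor 4 f+ f- + |h|^2.
    lhs = 4 * fp * fm + abs(h)**2
    # Right-hand side: 4 |sum_j exp(i k.r_j) cos(p.r_j / 2)|^2.
    rhs = 4 * abs(sum(np.exp(1j * np.dot(k, rj)) * np.cos(np.dot(p, rj)/2)
                      for rj in r))**2
    assert np.isclose(lhs, rhs)
```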
Thus, when Eq. (\ref{eq8}) vanishes (assuming $t_{\uparrow}t_{\downarrow}\neq 0$, as is typical), Eq. (\ref{eq9}) also vanishes. Hence either zero or at least two bands simultaneously touch the zero-energy line. This is consistent with the fact that the two intermediate bulk bands touch the zero-energy line at the same points in the band structure.
Second, $t$ and $g(\vec{k})$ do not affect the band-touching condition. Thus, for our particular MF coupling, the NN interactions have no net effect on the phase at the "sweet spot".
Third, the effects of geometry and of parameter strengths are independent. Since $t_{\uparrow}$ and $t_{\downarrow}$ are nonzero and unequal in most cases, we just need to focus on the equality $4f_{+}f_{-}+|h|^{2}=0$. As shown in Appendix B, this is equivalent to:
\begin{eqnarray}
|\cos(\vec{p}\cdot \vec{r}_{j}/2)|+|\cos(\vec{p}\cdot \vec{r}_{k}/2)| \ge |\cos(\vec{p}\cdot \vec{r}_{l}/2)|,
\label{eq11}
\end{eqnarray}
for any $(j, k, l)$ being a permutation of $(1, 2, 3)$.\\
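Because the three inequalities reduce to a single triangle-type condition on the sorted $|\cos\Phi_{j}|$, the phase of a given $\vec{p}$ can be classified in a few lines (a sketch; the two test points correspond to the parameter choices of Figs. 3(b) and 3(c)):

```python
import numpy as np

a = 1.0
r = [a * np.array([-1.5, np.sqrt(3)/2]), a * np.array([0.0, np.sqrt(3)]),
     a * np.array([1.5, np.sqrt(3)/2])]

def is_gapless(p):
    """Gapless iff |cos Phi_j| + |cos Phi_k| >= |cos Phi_l| for all permutations."""
    c = sorted(abs(np.cos(np.dot(p, rj) / 2)) for rj in r)
    # Only the smallest-two-versus-largest inequality can be violated.
    return c[0] + c[1] >= c[2]

Kx, Ky = 2 * np.pi / 3, 2 * np.pi / (3 * np.sqrt(3))

assert not is_gapless(np.array([0.0, 3 * Ky]))     # gapped SC, cf. Fig. 3(b)
assert is_gapless(np.array([0.1 * Kx, 0.2 * Ky]))  # gapless, cf. Fig. 3(c)
```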
\begin{figure}[tbp]
\centering
\includegraphics[width=0.35\textwidth]{gappedspace.pdf}
\captionsetup{justification=raggedright}
\caption{Regions of the gapped SC phase in the 3-D parameter space $(\sin\Phi_{1}, \sin\Phi_{2}, \sin\Phi_{3})$. The shaded surface near the 12 edges of the cube shows the domain in which at least one inequality $|\cos\Phi_{j}|+|\cos\Phi_{k}| \ge |\cos\Phi_{l}|$ is violated, where $(j, k, l)$ is a permutation of $(1, 2, 3)$. This surface defines the gapped SC phase. Note that the only points in this parameter space that have physical significance are those for which $\Phi_{2}=\Phi_{1}+\Phi_{3}$ (including both the near-edge and kernel regions).}
\label{fig5}
\end{figure}
The above inequalities describe the parameter range of the gapless phase, as compared to the gapped SC phase, and in fact give a measure of the "strength" of TRS breaking. The phase diagram in Fig. \ref{fig4a}, obtained by numerical simulations of band structures at every point in the $p_{x}-p_{y}$ plane, coincides exactly with that obtained from the three inequalities. Defining the Peierls phases associated with the complex NNN hoppings inside a hexagon as $\vec{p}\cdot \vec{r}_{j}/2=\Phi_{j}$ (with the restriction $\Phi_{2}=\Phi_{1}+\Phi_{3}$), the distribution of the gapped SC phase in the $(\sin\Phi_{1}, \sin\Phi_{2}, \sin\Phi_{3})$ parameter space is shown in Fig. \ref{fig5}. The gapped SC phase is localized mainly near the edges of the cube, which indicates that at least two of the three $|\sin\Phi_{j}|$ are near $1$. Since $2t_{\sigma}\sin\Phi_{j}$ is the amplitude difference of the NNN hopping terms before and after the TRS transformation, we identify the gapped SC phase with "strong" TRS breaking, in which at least one of the three inequalities (\ref{eq11}) is violated. For "strong" TRS breaking, two of the three Peierls phases lead to a relatively large energy difference after the TRS transformation, and the energy gap is large enough to protect the MZMs from coupling with bulk modes. By contrast, the gapless phase corresponds to "weak" TRS breaking, which is supported by its occupying the central part of the phase diagram near $(p_{x}=0, p_{y}=0)$. This measure thus further divides the TRS-broken class into two groups.
Moreover, the phase diagram showing "$w=0$ vs. $w=\pm 1$" (Fig. \ref{fig4b}) has 3-fold rotational symmetry. This phase diagram is obtained by numerically calculating~\cite{Chernnumdis} the winding number, $w$, for every point in $p_{x}-p_{y}$ plane:
\begin{eqnarray}
w=\frac{i}{2\pi}\int_{T^{2}}(\langle\partial_{k_{x}}u_{1}|\partial_{k_{y}}u_{1}\rangle-\langle\partial_{k_{y}}u_{1}|\partial_{k_{x}}u_{1}\rangle)d^{2}k,
\label{eq12}
\end{eqnarray}
where $|u_{1}\rangle$ is the eigenstate of band 1 and $T^{2}$ is the first Brillouin zone. Since the sum of the topological invariants of the neighboring band 1 and band 2 is zero, it is enough to perform this calculation for one band. In the phase diagram, the center is mainly the topologically trivial region, $w=0$, while the surrounding $w=1$ (blue) and $w=-1$ (red) nontrivial regions are separated by "thin-ribbon" regions with $w=0$. The central lines of the six "thin-ribbon" green regions indicate $\vec{p}\perp \vec{r}_{j}$, for which one of the three Peierls phases is zero. A topological transition occurs only when the gap between band 1 and band 2 closes, so that the topology of the bulk bands changes intrinsically. The bulk bands touch only at the typical Dirac points $(K_{x}, \pm K_{y})$ ($K_{x}=\frac{2\pi}{3}$ and $K_{y}=\frac{2\pi}{3\sqrt{3}}$), which indicates the decoupling of the two sublattices. Band touching at only one of the two inequivalent Dirac points changes the winding number by $1$, while touching at both has not been observed in this model. Additionally, when $w$ is nonzero, the system exhibits ordinary gapless edge modes connecting band 1 and band 2 at the two armchair edges. The interaction between the gapless edge modes and the Majorana flat bands will be the subject of future study.
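For reference, a standard lattice algorithm for evaluating Eq. (\ref{eq12}) numerically is the Fukui-Hatsugai-Suzuki discretization. The sketch below illustrates the algorithm on the well-known two-band Qi-Wu-Zhang model rather than on our 4-band Hamiltonian; the model and mass values are chosen only as a generic, easily checkable test case:

```python
import numpy as np

def chern_number(hk, n=24, band=0):
    """Fukui-Hatsugai-Suzuki lattice computation of one band's Chern number."""
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    # Eigenvector of the chosen band on an n x n Brillouin-zone grid.
    u = np.empty((n, n, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(hk(kx, ky))
            u[i, j] = vecs[:, band]
    c = 0.0
    for i in range(n):
        for j in range(n):
            ip, jp = (i + 1) % n, (j + 1) % n
            # Link variables around one plaquette; the plaquette Berry
            # flux is the phase of their product.
            u1 = np.vdot(u[i, j], u[ip, j])
            u2 = np.vdot(u[ip, j], u[ip, jp])
            u3 = np.vdot(u[ip, jp], u[i, jp])
            u4 = np.vdot(u[i, jp], u[i, j])
            c += np.angle(u1 * u2 * u3 * u4)
    return c / (2 * np.pi)

# Qi-Wu-Zhang two-band model as a test case (NOT the model of this paper).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def qwz(u_mass):
    return lambda kx, ky: (np.sin(kx) * sx + np.sin(ky) * sy
                           + (u_mass + np.cos(kx) + np.cos(ky)) * sz)

assert round(abs(chern_number(qwz(1.0)))) == 1   # topological regime
assert round(abs(chern_number(qwz(3.0)))) == 0   # trivial regime
```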
In general, when $\mu\neq 0$, the system displays changes in both the gapped SC and gapless phases. When $|\mu|$ increases from zero in the gapped SC phase, the gap between band 1 and the zero-energy line gradually decreases until two MZMs couple with the bulk modes, producing a drop in the degeneracy of the MZMs. The pair of fully pseudospin-polarized MZMs that is exactly localized at the first layer in the $\mu=0$ case extends to deeper layers with exponentially decreasing amplitudes. In the gapless phase, the pairing occurs in a partially filled band. The Dirac points connecting band 1 and band 2 vanish, accompanied by a relative displacement of the upper and lower Dirac cones. The gapless property is preserved, but the gap becomes indirect.
Finally, it should be mentioned that our model differs in several notable respects from previous models for realizing MFs in 2-D optical lattices~\cite{MFquasi1D,MFEquil,1DTOPChain}. First, the geometric structure of our model is not one or more topological chains with transverse tunneling in a topologically trivial background; it naturally includes tunnelings in different directions (longitudinal/transverse in a square lattice, slanted in a honeycomb lattice).
Second, the commonly used ten-fold classification of fermionic phases~\cite{schnyder2008classification} may not be directly applicable to our model because of its dependence on $\vec{p}$ as a parameter. Additional mathematical tools may be needed to apply the classification, such as constructing new charge, parity and time-reversal symmetry operators, which we will address in future research. In addition, the gapless edge modes in our model are reminiscent of quantum anomalous edge states~\cite{wang2013anomalous}. By contrast, most other models are covered by the ten-fold classification: they inherit the topological properties of the 1-D TRS-broken chain and have invariants of type $Z_{2}$, while the number of effective MFs varies with the number of chains being even or odd.
Third, our theoretical model is somewhat ahead of the development of experimental techniques in fermionic optical lattices. Some of the necessary techniques are already well developed, as described in the introduction. The biggest challenge may be the precise tuning of both the spin-singlet and spin-triplet pairings. Recent progress in implementing pairings in cold atoms includes finite-momentum Cooper pairings, which can induce Fulde-Ferrell (FF) states with superconducting order parameters of uniform amplitude but spatially dependent phase. Protocols for creating such FF superfluids in spinful~\cite{qu2013topological,liu2016detecting,dong2013fulde,xu2014competing,he2018realizing} and spinless~\cite{zheng2018fulde} cold atoms may help implement the crucial Hamiltonian components of our model, though a substantial gap remains. Further experimental progress is needed to realize our model, but our study of this model Hamiltonian may be insightful for researchers in cold atom physics.
\section{Conclusion}
We have proposed a method for creating Majorana fermions at the edge of a honeycomb optical lattice of ultracold atoms. This is done by generalizing a 2-D topologically nontrivial Haldane model and introducing textured pairings. Both the spin-singlet and textured spin-triplet pairings are added to a pseudospin-state dependent lattice, whose time-reversal symmetry is broken by complex next-nearest-neighbor hoppings. If generalized ``sweet spot'' conditions are satisfied, Majorana zero modes will arise on a single edge of the lattice. We have analyzed their properties, such as pseudospin polarization. We find that this system has a gapped superconducting phase and a gapless phase, each of which can have zero or nonzero winding numbers, and have calculated the phase diagrams of the system. We have simplified the understanding of the bandgap-closing condition of the bulk Hamiltonian by identifying a discriminant that distinguishes the gapped superconducting and gapless phases and provides a measure of the ``strength'' of time-reversal symmetry breaking. Further developments of this model may include interactions between Majorana fermions and the interaction between Majorana zero modes and ordinary gapless edge modes.
\section{Acknowledgments}
We thank Dong-ling Deng and Haiping Hu for helpful discussions. This material is based upon work supported by the US National Science Foundation Physics Frontier Center at JQI.
\section{Introduction}
Single image Super-Resolution (SISR) aims to generate a visually pleasing high-resolution (HR) image from its degraded low-resolution (LR) measurement. SISR is used in various computer vision tasks, such as security and surveillance imaging~\cite{zou2012very}, medical imaging~\cite{shi2013cardiac}, and image generation~\cite{karras2018progressive}. However, image SR is an ill-posed inverse problem, since a multitude of solutions exist for any LR input. To tackle this inverse problem, plenty of image SR algorithms have been proposed, including interpolation-based~\cite{zhang2006edge}, reconstruction-based~\cite{zhang2012single}, and learning-based methods~\cite{timofte2013anchored,timofte2014a+,peleg2014statistical,dong2014learning,schulter2015fast,huang2015single,kim2016accurate,tong2017image,zhang2018learning}.
\begin{figure}[t]
\centering
\centerline{
\subfigure[{ Residual block}]{
\label{fig:blocks_RB}
\includegraphics[scale = 0.68]{blocks_RB.pdf}}
\subfigure[{ Dense block}]{
\label{fig:blocks_DB}
\includegraphics[scale = 0.68]{blocks_DB.pdf}}
}
\centerline{
\subfigure[{ Residual dense block}]{
\label{fig:blocks_RDB}
\includegraphics[scale = 0.68]{blocks_RDB.pdf}}
}
\caption{Comparison of prior network structures (a,b) and our residual dense block (c). (a) Residual block in MDSR~\cite{lim2017enhanced}. (b) Dense block in SRDenseNet~\cite{tong2017image}. (c) Our residual dense block.}
\label{fig:blocks_RB_DB_RDB}
\vspace{-4mm}
\end{figure}
Among them, Dong~et al.~\cite{dong2014learning} first introduced a three-layer convolutional neural network (CNN) into image SR and achieved significant improvement over conventional methods. Kim et al. increased the network depth in VDSR~\cite{kim2016accurate} and DRCN~\cite{kim2016deeply}, using gradient clipping, skip connections, and recursive supervision to ease the difficulty of training deep networks. With effective building modules, networks for image SR have been made deeper and wider with better performance. Lim et al. used residual blocks (Fig.~\ref{fig:blocks_RB}) to build a very wide network, EDSR~\cite{lim2017enhanced}, with residual scaling~\cite{szegedy2017inception}, and a very deep one, MDSR~\cite{lim2017enhanced}. Tai et al. proposed the memory block to build MemNet~\cite{tai2017memnet}. As network depth grows, the features in each convolutional layer become hierarchical, with different receptive fields. However, these methods neglect to fully use the information from each convolutional layer. Although the gate unit in the memory block was proposed to control short-term memory~\cite{tai2017memnet}, the local convolutional layers do not have direct access to the subsequent layers, so the memory block hardly makes full use of the information from all the layers within it.
Furthermore, objects in images have different scales, viewing angles, and aspect ratios. Hierarchical features from a very deep network would provide more clues for reconstruction. However, most deep learning (DL) based methods (e.g., VDSR~\cite{kim2016accurate}, LapSRN~\cite{lai2017deep}, and EDSR~\cite{lim2017enhanced}) neglect to use hierarchical features for reconstruction. Although the memory block~\cite{tai2017memnet} also takes information from preceding memory blocks as input, the multi-level features are not extracted from the original LR image: MemNet interpolates the original LR image to the desired size to form the input. This pre-processing step not only increases computational complexity quadratically, but also loses some details of the original LR image. Tong et al. introduced the dense block (Fig.~\ref{fig:blocks_DB}) for image SR with a relatively low growth rate (e.g., 16). According to our experiments (see Section~\ref{subsec:study_DCG}), a higher growth rate can further improve the performance of the network. However, it would be hard to train a wider network with the dense blocks in Fig.~\ref{fig:blocks_DB}.
To address these drawbacks, we propose the residual dense network (RDN) (Fig.~\ref{fig:RDN}) to make full use of all the hierarchical features from the original LR image via our proposed residual dense block (Fig.~\ref{fig:blocks_RDB}). It is hard and impractical for a very deep network to directly extract the output of each convolutional layer in the LR space, so we propose the residual dense block (RDB) as the building module for RDN. An RDB consists of densely connected layers and local feature fusion (LFF) with local residual learning (LRL). Our RDB also supports contiguous memory among RDBs: the output of one RDB has direct access to each layer of the next RDB, resulting in a contiguous state pass. Each convolutional layer in an RDB has access to all the subsequent layers and passes on information that needs to be preserved~\cite{huang2017densely}. Concatenating the states of the preceding RDB and all the preceding layers within the current RDB, LFF extracts local dense features by adaptively preserving the information. Moreover, LFF allows a very high growth rate by stabilizing the training of wider networks. After extracting multi-level local dense features, we further conduct global feature fusion (GFF) to adaptively preserve the hierarchical features in a global way. As depicted in Figs.~\ref{fig:RDN} and~\ref{fig:RDB}, each layer has direct access to the original LR input, leading to implicit deep supervision~\cite{lee2015deeply}.
In summary, our main contributions are three-fold:
\begin{itemize}
\item We propose a unified framework, the residual dense network (RDN), for high-quality image SR with different degradation models. The network makes full use of all the hierarchical features from the original LR image.
\item We propose the residual dense block (RDB), which can not only read state from the preceding RDB via a contiguous memory (CM) mechanism, but also fully utilize all the layers within it via local dense connections. The accumulated features are then adaptively preserved by local feature fusion (LFF).
\item We propose global feature fusion to adaptively fuse hierarchical features from all RDBs in the LR space. With global residual learning, we combine the shallow features and deep features together, resulting in global dense features from the original LR image.
\end{itemize}
\section{Related Work}
Recently, deep learning (DL)-based methods have achieved dramatic advantages over conventional methods in computer vision~\cite{zhang2017image,zhang2018density,zhang2018densely,li2018tell}. Due to the limited space, we only discuss some works on image SR. Dong et al. proposed SRCNN~\cite{dong2014learning}, establishing an end-to-end mapping between the interpolated LR images and their HR counterparts for the first time. This baseline was then further improved mainly by increasing network depth or sharing network weights. VDSR~\cite{kim2016accurate} and IRCNN~\cite{zhang2017learning} increased the network depth by stacking more convolutional layers with residual learning. DRCN~\cite{kim2016deeply} first introduced recursive learning in a very deep network for parameter sharing. Tai et al. introduced recursive blocks in DRRN~\cite{tai2017image} and the memory block in MemNet~\cite{tai2017memnet} for deeper networks. All of these methods need to interpolate the original LR images to the desired size before feeding them into the networks. This pre-processing step not only increases computational complexity quadratically~\cite{dong2016accelerating}, but also over-smooths and blurs the original LR image, from which some details are lost. As a result, these methods extract features from the interpolated LR images, failing to establish an end-to-end mapping from the original LR to HR images.
To solve the problem above, Dong et al.~\cite{dong2016accelerating} directly took the original LR image as input and introduced a transposed convolution layer (also known as a deconvolution layer) for upsampling to the fine resolution. Shi et al. proposed ESPCN~\cite{shi2016real}, where an efficient sub-pixel convolution layer was introduced to upscale the final LR feature maps into the HR output. The efficient sub-pixel convolution layer was then adopted in SRResNet~\cite{ledig2017photo} and EDSR~\cite{lim2017enhanced}, which took advantage of residual learning~\cite{he2016deep}. All of these methods extract features in the LR space and upscale the final LR features with a transposed or sub-pixel convolution layer. By doing so, these networks can either be capable of real-time SR (e.g., FSRCNN and ESPCN), or be built to be very deep/wide (e.g., SRResNet and EDSR). However, all of these methods stack building modules (e.g., the Conv layer in FSRCNN, the residual block in SRResNet and EDSR) in a chain. They neglect to adequately utilize information from each Conv layer and only adopt CNN features from the last Conv layer in the LR space for upscaling.
\begin{figure*}[htbp]
\centering
\includegraphics[scale = 1]{RDN.pdf}
\caption{The architecture of our proposed residual dense network (RDN).}
\label{fig:RDN}
\vspace{-4mm}
\end{figure*}
Recently, Huang et al. proposed DenseNet, which allows direct connections between any two layers within the same dense block~\cite{huang2017densely}. With the local dense connections, each layer reads information from all the preceding layers within the same dense block. The dense connection was later introduced among memory blocks~\cite{tai2017memnet} and dense blocks~\cite{tong2017image}. More differences between DenseNet/SRDenseNet/MemNet and our RDN are discussed in Section~\ref{sec:discussions}.
The aforementioned DL-based image SR methods have achieved significant improvement over conventional SR methods, but all of them lose some useful hierarchical features from the original LR image. Hierarchical features produced by a very deep network are useful for image restoration tasks (e.g., image SR). To address this, we propose the residual dense network (RDN) to extract and adaptively fuse features from all the layers in the LR space efficiently. We detail our RDN in the next section.
\section{Residual Dense Network for Image SR}
\subsection{Network Structure}
\label{sec:network}
As shown in Fig.~\ref{fig:RDN}, our RDN mainly consists of four parts: the shallow feature extraction net (SFENet), residual dense blocks (RDBs), dense feature fusion (DFF), and finally the up-sampling net (UPNet). Let $I_{LR}$ and $I_{SR}$ denote the input and output of RDN. Specifically, we use two Conv layers to extract shallow features. The first Conv layer extracts features $F_{-1}$ from the LR input:
\begin{align}
\begin{split}
\label{eq:SFE1}
F_{-1}=H_{SFE1}\left ( I_{LR} \right ),
\end{split}
\end{align}
where $H_{SFE1}\left ( \cdot \right )$ denotes the convolution operation of the first shallow feature extraction layer. $F_{-1}$ is then used for further shallow feature extraction and for global residual learning, so we further have
\begin{align}
\begin{split}
\label{eq:SFE2}
F_{0}=H_{SFE2}\left ( F_{-1} \right ),
\end{split}
\end{align}
where $H_{SFE2}\left ( \cdot \right )$ denotes the convolution operation of the second shallow feature extraction layer, whose output is used as input to the residual dense blocks. Supposing we have $D$ residual dense blocks, the output $F_{d}$ of the $d$-th RDB can be obtained by
\begin{align}
\begin{split}
\label{eq:F_d_RDN}
F_{d}&=H_{RDB,d}\left ( F_{d-1} \right )\\
&=H_{RDB,d}\left ( H_{RDB,{d-1}}\left ( \cdots \left ( H_{RDB,1}\left ( F_{0} \right ) \right ) \cdots \right ) \right ),
\end{split}
\end{align}
where $H_{RDB,d}$ denotes the operations of the $d$-th RDB. $H_{RDB,d}$ can be a composite function of operations, such as convolution and rectified linear units (ReLU)~\cite{glorot2011deep}. As $F_{d}$ is produced by the $d$-th RDB fully utilizing each convolutional layer within the block, we can view $F_{d}$ as a local feature. More details about the RDB are given in Section~\ref{subsec:RDB}.
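The recursion above is plain function composition: each block maps the previous state to the next. A minimal Python sketch with toy stand-in blocks (the helper name and the stand-ins are ours, for illustration only):

```python
from functools import reduce

def chain_rdbs(blocks, F0):
    """Apply D residual dense blocks in sequence:
    F_d = H_{RDB,d}(F_{d-1}), starting from F_0."""
    return reduce(lambda F, H: H(F), blocks, F0)

# Toy stand-ins for H_{RDB,d}: each "block" just adds its index.
blocks = [lambda F, d=d: F + d for d in range(1, 4)]
print(chain_rdbs(blocks, 0))  # 0 + 1 + 2 + 3 = 6
```

In the real network each callable would be an RDB with its own weights; the composition structure is identical.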
After extracting hierarchical features with a set of RDBs, we further conduct dense feature fusion (DFF), which includes global feature fusion (GFF) and global residual learning (GRL). DFF makes full use of features from all the preceding layers and can be represented as
\begin{align}
\begin{split}
\label{eq:DFF}
F_{DF}=H_{DFF}\left ( F_{-1},F_{0},F_{1},\cdots ,F_{D} \right ),
\end{split}
\end{align}
where $F_{DF}$ denotes the output feature-maps of DFF obtained by utilizing the composite function $H_{DFF}$. More details about DFF are given in Section~\ref{subsec:DFF}.
After extracting local and global features in the LR space, we stack an up-sampling net (UPNet) in the HR space. Inspired by~\cite{lim2017enhanced}, we utilize ESPCN~\cite{shi2016real} in the UPNet, followed by one Conv layer. The output of RDN can be obtained by
\begin{align}
\begin{split}
\label{eq:I_SR}
I_{SR}=H_{RDN}\left ( I_{LR} \right ),
\end{split}
\end{align}
where $H_{RDN}$ denotes the function of our RDN.
\begin{figure}[htpb]
\centering
\includegraphics[scale = 0.8]{RDB.pdf}
\caption{Residual dense block (RDB) architecture.}
\label{fig:RDB}
\vspace{-4mm}
\end{figure}
\subsection{Residual Dense Block}
\label{subsec:RDB}
Now we present details of our proposed residual dense block (RDB) in Fig.~\ref{fig:RDB}. Our RDB contains densely connected layers, local feature fusion (LFF), and local residual learning, leading to a contiguous memory (CM) mechanism.
\textbf{Contiguous memory} mechanism is realized by passing the state of the preceding RDB to each layer of the current RDB. Let $F_{d-1}$ and $F_{d}$ be the input and output of the $d$-th RDB respectively, both with G$_{0}$ feature-maps. The output of the $c$-th Conv layer of the $d$-th RDB can be formulated as
\begin{align}
\begin{split}
\label{eq:F_d_c}
F_{d,c}=\sigma \left ( W_{d,c}\left [ F_{d-1},F_{d,1},\cdots ,F_{d,c-1} \right ] \right ),
\end{split}
\end{align}
where $\sigma$ denotes the ReLU~\cite{glorot2011deep} activation function and $W_{d,c}$ denotes the weights of the $c$-th Conv layer (the bias term is omitted for simplicity). We assume $F_{d,c}$ consists of G (the growth rate~\cite{huang2017densely}) feature-maps. $\left [ F_{d-1},F_{d,1},\cdots ,F_{d,c-1} \right ]$ refers to the concatenation of the feature-maps produced by the $(d-1)$-th RDB and by convolutional layers $1,\cdots ,\left ( c-1 \right )$ in the $d$-th RDB, resulting in G$_{0}$+$\left ( c-1 \right )\times $G feature-maps. The outputs of the preceding RDB and of each layer have direct connections to all subsequent layers, which not only preserves the feed-forward nature, but also extracts local dense features.
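The channel bookkeeping implied by the concatenation above can be checked in a few lines of Python (G$_0$=64 and G=32 follow the settings used later in the paper; the helper name is ours):

```python
def rdb_conv_in_channels(c, G0=64, G=32):
    """Number of input channels seen by the c-th Conv layer of an RDB:
    the G0 maps of F_{d-1} plus G maps from each of the c-1 earlier layers."""
    return G0 + (c - 1) * G

# With C = 6 Conv layers per RDB, the last Conv layer reads:
print(rdb_conv_in_channels(6))      # 64 + 5*32 = 224
# and the concatenation fed to LFF afterwards has G0 + C*G channels:
print(rdb_conv_in_channels(6 + 1))  # 64 + 6*32 = 256
```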
\textbf{Local feature fusion} is then applied to adaptively fuse the states from the preceding RDB and all the Conv layers in the current RDB. Since the feature-maps of the $\left ( d-1 \right )$-th RDB are introduced directly into the $d$-th RDB by concatenation, it is essential to reduce the number of features. On the other hand, inspired by~MemNet~\cite{tai2017memnet}, we introduce a $1\times 1$ convolutional layer to adaptively control the output information. We name this operation local feature fusion (LFF), formulated as
\begin{align}
\begin{split}
\label{eq:F_d_LF}
F_{d,LF}=H_{LFF}^{d}\left ( \left [ F_{d-1},F_{d,1},\cdots ,F_{d,c},\cdots ,F_{d,C}\right ] \right ),
\end{split}
\end{align}
where $H_{LFF}^{d}$ denotes the function of the $1\times 1$ Conv layer in the $d$-th RDB. We also find that as the growth rate G becomes larger, a very deep dense network without LFF would be hard to train.
\textbf{Local residual learning} is introduced in RDB to further improve the information flow, as there are several convolutional layers in one RDB. The final output of the $d$-th RDB can be obtained by
\begin{align}
\begin{split}
\label{eq:F_d_RDB}
F_{d}=F_{d-1}+F_{d,LF}.
\end{split}
\end{align}
It should be noted that LRL can also further improve the network representation ability, resulting in better performance. We present more results about LRL in Section~\ref{sec:results}. Because of the dense connectivity and local residual learning, we refer to this block architecture as the residual dense block (RDB). More differences between the RDB and the original dense block~\cite{huang2017densely} are summarized in Section~\ref{sec:discussions}.
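Since a $1\times 1$ convolution is a pixel-wise channel mixing, LFF followed by the residual add of the equation above can be sketched with plain matrix products. A NumPy illustration (random weights stand in for the trained operator; function and variable names are ours):

```python
import numpy as np

def lff_lrl(F_prev, layer_outs, W_lff):
    """Local feature fusion + local residual learning for one RDB.
    F_prev:     (G0, H, W) input state F_{d-1}
    layer_outs: list of (G, H, W) outputs F_{d,1..C}
    W_lff:      (G0, G0 + C*G) weights of the 1x1 fusion Conv layer."""
    concat = np.concatenate([F_prev] + layer_outs, axis=0)      # channel concat
    G0, H, W = F_prev.shape
    fused = (W_lff @ concat.reshape(concat.shape[0], -1)).reshape(G0, H, W)
    return F_prev + fused                                        # F_d = F_{d-1} + F_{d,LF}

rng = np.random.default_rng(0)
G0, G, C, H, W = 8, 4, 3, 5, 5
F_prev = rng.standard_normal((G0, H, W))
outs = [rng.standard_normal((G, H, W)) for _ in range(C)]
W_lff = rng.standard_normal((G0, G0 + C * G))
F_d = lff_lrl(F_prev, outs, W_lff)
print(F_d.shape)  # (8, 5, 5): same shape as F_{d-1}, so blocks can be chained
```

Note that the output shape equals the input shape, which is exactly what lets RDBs be stacked and what makes the residual add well defined.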
\subsection{Dense Feature Fusion}
\label{subsec:DFF}
After extracting local dense features with a set of RDBs, we further propose dense feature fusion (DFF) to exploit hierarchical features in a global way. Our DFF consists of global feature fusion (GFF) and global residual learning.
\textbf{Global feature fusion} is proposed to extract the global feature $F_{GF}$ by fusing features from all the RDBs
\begin{align}
\begin{split}
\label{eq:GFF}
F_{GF}=H_{GFF}\left ( \left [ F_{1},\cdots ,F_{D} \right ] \right ),
\end{split}
\end{align}
where $\left [ F_{1},\cdots ,F_{D} \right ]$ refers to the concatenation of feature-maps produced by residual dense blocks $1,\cdots ,D$. $H_{GFF}$ is a composite function of $1\times 1$ and $3\times 3$ convolution. The $1\times 1$ convolutional layer is used to adaptively fuse a range of features with different levels. The following $3\times 3$ convolutional layer is introduced to further extract features for global residual learning, which has been demonstrated to be effective in~\cite{ledig2017photo}.
\textbf{Global residual learning} is then utilized to obtain the feature-maps before conducting up-scaling by
\begin{align}
\begin{split}
\label{eq:DFF_GF}
F_{DF}=F_{-1}+F_{GF},
\end{split}
\end{align}
where $F_{-1}$ denotes the shallow feature-maps. All the other layers before global feature fusion are fully utilized with our proposed residual dense blocks (RDBs). RDBs produce multi-level local dense features, which are further adaptively fused to form $F_{GF}$. After global residual learning, we obtain dense feature $F_{DF}$.
It should be noted that Tai~et al.~\cite{tai2017memnet} utilized long-term dense connections in MemNet to recover more high-frequency information. However, in the memory block~\cite{tai2017memnet}, the preceding layers do not have direct access to all the subsequent layers, so the local feature information is not fully used, limiting the ability of the long-term connections. In addition, MemNet extracts features in the HR space, increasing computational complexity. In contrast, inspired by~\cite{dong2016accelerating,shi2016real,lai2017deep,lim2017enhanced}, we extract local and global features in the LR space. More differences between our residual dense network and MemNet are given in Section~\ref{sec:discussions}. We also demonstrate the effectiveness of global feature fusion in Section~\ref{sec:results}.
\subsection{Implementation Details}
\label{sec:implement}
In our proposed RDN, all convolutional layers have kernel size $3\times 3$ except those in local and global feature fusion, whose kernel size is $1\times 1$. For convolutional layers with kernel size $3\times 3$, we pad zeros on each side of the input to keep the size fixed. The shallow feature extraction layers and the local and global feature fusion layers have G$_{0}$=64 filters. The other layers in each RDB have G filters and are followed by ReLU~\cite{glorot2011deep}. Following~\cite{lim2017enhanced}, we use ESPCN~\cite{shi2016real} to upscale the coarse-resolution features to fine ones in the UPNet. The final Conv layer has $3$ output channels, as we output color HR images; however, the network can also process gray images.
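The ESPCN-style sub-pixel upscaling used in the UPNet rearranges channels into space: each group of $r^2$ channels becomes an $r\times r$ spatial block. A NumPy sketch of just that rearrangement (the learned Conv layers that precede it are omitted; the function name is ours):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel rearrangement of ESPCN: (C*r^2, H, W) -> (C, H*r, W*r)."""
    Cr2, H, W = x.shape
    C = Cr2 // (r * r)
    x = x.reshape(C, r, r, H, W)     # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)   # interleave: (C, H, r, W, r)
    return x.reshape(C, H * r, W * r)

x = np.arange(2 * 4 * 3 * 3).reshape(2 * 4, 3, 3).astype(float)
y = pixel_shuffle(x, 2)
print(y.shape)  # (2, 6, 6)
```

This is why the last Conv layers before upscaling must output $3\times r^2$ channels for a color image at scaling factor $r$.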
\section{Discussions}
\label{sec:discussions}
\textbf{Difference to DenseNet.} Inspired by DenseNet~\cite{huang2017densely}, we adopt local dense connections in our proposed residual dense block (RDB). In general, DenseNet is widely used in high-level computer vision tasks (e.g., object recognition), while RDN is designed for image SR. Moreover, we remove the batch normalization (BN) layers, which consume the same amount of GPU memory as convolutional layers, increase computational complexity, and hinder the performance of the network. We also remove the pooling layers, which could discard some pixel-level information. Furthermore, in DenseNet transition layers are placed between two adjacent dense blocks, while in RDN we combine densely connected layers with local feature fusion (LFF) by using local residual learning, which is demonstrated to be effective in Section~\ref{sec:results}. As a result, the output of the $(d-1)$-th RDB has direct connections to each layer in the $d$-th RDB and also contributes to the input of the $(d+1)$-th RDB. Last but not least, we adopt global feature fusion to fully use the hierarchical features, which are neglected in DenseNet.
\textbf{Difference to SRDenseNet.} There are three main differences between SRDenseNet~\cite{tong2017image} and our RDN. The first is the design of the basic building block. SRDenseNet introduces the basic dense block from DenseNet~\cite{huang2017densely}. Our residual dense block (RDB) improves it in three ways: (1) we introduce the contiguous memory (CM) mechanism, which allows the state of the preceding RDB to have direct access to each layer of the current RDB; (2) our RDB allows a larger growth rate by using local feature fusion (LFF), which stabilizes the training of wide networks; (3) local residual learning (LRL) is utilized in the RDB to further encourage the flow of information and gradients. The second difference is that there are no dense connections among RDBs. Instead, we use global feature fusion (GFF) and global residual learning to extract global features, because our RDBs with contiguous memory have already fully extracted features locally. As shown in Sections~\ref{subsec:study_DCG} and~\ref{subsec:ablation}, all of these components increase the performance significantly. The third difference is that SRDenseNet uses the $L_2$ loss function, whereas we utilize the $L_1$ loss function, which has been demonstrated to be more powerful for performance and convergence~\cite{lim2017enhanced}. As a result, our proposed RDN achieves better performance than SRDenseNet.
\textbf{Difference to MemNet.} In addition to the different choice of loss function ($L_2$ in MemNet~\cite{tai2017memnet}), we summarize three further differences between MemNet and our RDN. First, MemNet needs to upsample the original LR image to the desired size using bicubic interpolation, so features are extracted and the image reconstructed in the HR space. RDN instead extracts hierarchical features from the original LR image, reducing computational complexity significantly and improving performance. Second, the memory block in MemNet contains recursive and gate units, and most layers within one recursive unit do not receive information from their preceding layers or memory block. In our proposed RDN, the output of an RDB has direct access to each layer of the next RDB, and the information of each convolutional layer flows into all the subsequent layers within one RDB. Furthermore, local residual learning in the RDB improves the flow of information and gradients as well as performance, as demonstrated in Section~\ref{sec:results}. Third, as analyzed above, the current memory block does not fully make use of the information from the output of the preceding block and its layers. Even though MemNet adopts dense connections among memory blocks in the HR space, it fails to fully extract hierarchical features from the original LR inputs. In contrast, after extracting local dense features with RDBs, our RDN further fuses the hierarchical features from all the preceding layers in a global way in the LR space.
\iffalse
\textbf{Difference to EDSR.} Being very wide, EDSR~\cite{lim2017enhanced} has achieved top ranks in both the standard benchmark datasets and the DIV2K~\cite{timofte2017ntire} dataset. As we would also use the DIV2K dataset for training, here we also discuss three main differences between EDSR and RDN. The first is the design of the basic building module in network. EDSR utilizes residual block to build the network by removing unnecessary modules, resulting in a chain structure and create short paths from early layers to later layers. While, we propose RDB as the basic building block and extract local dense features. The local dense features not only flow into each layer of the next RDB, but also prepare for efficient global feature fusion. Second, we propose global feature fusion to utilize all local dense features, obtaining global dense features. While, EDSR doesn't use the hierarchical features in a global way. Third, EDSR uses very wide network (e.g., 256 feature channels per Conv layer), which causes the training procedure to be numerically unstable. To resolve this issue, residual scaling~\cite{szegedy2017inception} is adopted into EDSR. While, we use far less number of feature channels per Conv layer in the LR space (e.g., 64). As a result, RDN don't have to adopt residual scaling to stabilize the training procedure.
\fi
\section{Experimental Results}
\label{sec:results}
\vspace{-2mm}
The source code of the proposed method can be downloaded at \href{https://github.com/yulunzhang/RDN}{https://github.com/yulunzhang/RDN}.
\subsection{Settings}
\label{subsec:settings}
\vspace{-2mm}
\textbf{Datasets and Metrics.}
Recently, Timofte et al. have released a high-quality (2K resolution) dataset DIV2K for image restoration applications~\cite{timofte2017ntire}. DIV2K consists of 800 training images, 100 validation images, and 100 test images. We train all of our models with 800 training images and use 5 validation images in the training process. For testing, we use five standard benchmark datasets: Set5~\cite{bevilacqua2012low}, Set14~\cite{zeyde2012single}, B100~\cite{martin2001database}, Urban100~\cite{huang2015single}, and Manga109~\cite{matsui2017sketch}. The SR results are evaluated with PSNR and SSIM~\cite{wang2004image} on Y channel (\textit{i.e.}, luminance) of transformed YCbCr space.
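The PSNR metric used for evaluation can be computed as follows (a NumPy sketch; the full-range BT.601 luma coefficients below are a common simplification and may differ slightly from the Matlab YCbCr transform typically used in SR evaluation):

```python
import numpy as np

def luma(rgb):
    """Full-range BT.601 luma approximation of the Y channel (assumption;
    the standard evaluation uses the Y channel of a YCbCr transform,
    which applies additional offsets/scaling)."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100.0)
noisy = ref + 10.0          # constant error of 10 -> MSE = 100
print(psnr(ref, noisy))     # 10*log10(255^2/100) ≈ 28.13 dB
```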
\textbf{Degradation Models.} To fully demonstrate the effectiveness of our proposed RDN, we use three degradation models to simulate LR images. The first is bicubic downsampling using the Matlab function \textit{imresize} with the option \textit{bicubic} (denoted as \textbf{BI} for short). We use the \textbf{BI} model to simulate LR images with scaling factors $\times2$, $\times3$, and $\times4$. Similar to~\cite{zhang2017learning}, the second is to blur the HR image with a Gaussian kernel of size $7\times7$ and standard deviation 1.6; the blurred image is then downsampled with scaling factor $\times3$ (denoted as \textbf{BD} for short). We further produce LR images in a more challenging way: we first bicubic-downsample the HR image with scaling factor $\times3$ and then add Gaussian noise with noise level 30 (denoted as \textbf{DN} for short).
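The ingredients of the BD and DN models can be sketched in NumPy. Note that plain strided subsampling stands in for Matlab bicubic \textit{imresize} here, so this illustrates the recipe rather than the exact pipeline (function names are ours):

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.6):
    """Normalized 2-D Gaussian kernel (the BD model uses 7x7, sigma=1.6)."""
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-ax ** 2 / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def degrade_dn(hr, scale=3, noise_level=30, seed=0):
    """DN model sketch: downsample then add Gaussian noise with sigma=30.
    Strided subsampling is a simplified stand-in for bicubic imresize."""
    lr = hr[::scale, ::scale]
    rng = np.random.default_rng(seed)
    return lr + rng.normal(0.0, noise_level, lr.shape)

k = gaussian_kernel()
print(k.shape, round(k.sum(), 6))            # (7, 7) 1.0
print(degrade_dn(np.zeros((12, 12))).shape)  # (4, 4)
```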
\textbf{Training Setting.}
Following the settings of~\cite{lim2017enhanced}, in each training batch we randomly extract 16 LR RGB patches of size $32\times32$ as inputs. We randomly augment the patches by flipping horizontally or vertically and rotating by 90$^{\circ}$. 1,000 iterations of back-propagation constitute an epoch. We implement our RDN with the Torch7 framework and update it with the Adam optimizer~\cite{kingma2014adam}. The learning rate is initialized to 10$^{-4}$ for all layers and halved every 200 epochs. Training an RDN for 200 epochs roughly takes 1 day with a Titan Xp GPU.
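The step learning-rate schedule described above is simply (function name is ours):

```python
def learning_rate(epoch, base=1e-4, halve_every=200):
    """Step schedule from the training setup: halve the rate every 200 epochs."""
    return base * 0.5 ** (epoch // halve_every)

print(learning_rate(0))    # 0.0001
print(learning_rate(200))  # 5e-05
print(learning_rate(450))  # 2.5e-05
```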
\vspace{-4mm}
\begin{figure}[htbp]
\scriptsize
\centering
\centerline{
\subfigure[]{
\label{fig:investeD}
\includegraphics[width = 27mm]{InvestigateD.pdf}}
\subfigure[]{
\label{fig:investeC}
\includegraphics[width = 27mm]{InvestigateC.pdf}}
\subfigure[]{
\label{fig:investeG}
\includegraphics[width = 27mm]{InvestigateG.pdf}}
}
\vspace{-1mm}
\caption{Convergence analysis of RDN with different values of D, C, and G.}
\label{fig:study_D_C_G}
\vspace{-4mm}
\end{figure}
\subsection{Study of D, C, and G.}
\label{subsec:study_DCG}
In this subsection, we investigate the basic network parameters: the number of RDBs (denoted as D for short), the number of Conv layers per RDB (denoted as C for short), and the growth rate (denoted as G for short). We use the performance of SRCNN~\cite{dong2016image} as a reference. As shown in Figs.~\ref{fig:investeD} and~\ref{fig:investeC}, a larger D or C leads to higher performance, mainly because the network becomes deeper with larger D or C. As our proposed LFF allows a larger G, we also observe that a larger G (see Fig.~\ref{fig:investeG}) contributes to better performance. On the other hand, an RDN with smaller D, C, or G suffers some performance drop in training, but still outperforms SRCNN~\cite{dong2016image}. More importantly, our RDN allows deeper and wider networks, from which more hierarchical features are extracted for higher performance.
\begin{table}[tbp]
\scriptsize
\centering
\begin{center}
\begin{tabular*}{82.4mm}{@{\extracolsep{-0.75mm}}|c|c|c|c|c|c|c|c|c|}
\hline
& \multicolumn{8}{c|}{Different combinations of CM, LRL, and GFF}
\\
\hline
\hline
CM & \XSolid & \Checkmark & \XSolid & \XSolid & \Checkmark & \Checkmark & \XSolid & \Checkmark
\\
LRL & \XSolid & \XSolid & \Checkmark & \XSolid & \Checkmark & \XSolid & \Checkmark & \Checkmark
\\
GFF & \XSolid & \XSolid & \XSolid & \Checkmark & \XSolid & \Checkmark & \Checkmark & \Checkmark
\\
\hline
\hline
PSNR & 34.87 & 37.89 & 37.92 & 37.78 & 37.99 & 37.98 & 37.97 & 38.06
\\
\hline
\end{tabular*}
\end{center}
\vspace{-3mm}
\caption{Ablation investigation of contiguous memory (CM), local residual learning (LRL), and global feature fusion (GFF). We observe the best performance (PSNR) on Set5 with scaling factor $\times2$ in 200 epochs.}
\label{tab:results_ablation}
\vspace{-6mm}
\end{table}
\subsection{Ablation Investigation}
\label{subsec:ablation}
Table~\ref{tab:results_ablation} shows the ablation investigation of the effects of contiguous memory (CM), local residual learning (LRL), and global feature fusion (GFF). The eight networks have the same RDB number (D = 20), Conv number per RDB (C = 6), and growth rate (G = 32). We find that local feature fusion (LFF) is needed to train these networks properly, so LFF is not removed by default. The baseline (denoted as RDN\_CM0LRL0GFF0) is obtained without CM, LRL, or GFF and performs very poorly (PSNR = 34.87 dB). This is caused by the difficulty of training~\cite{dong2016image} and also demonstrates that stacking many basic dense blocks~\cite{huang2017densely} in a very deep network does not by itself result in better performance.
We then add one of CM, LRL, or GFF to the baseline, resulting in RDN\_CM1LRL0GFF0, RDN\_CM0LRL1GFF0, and RDN\_CM0LRL0GFF1 respectively (from the 2$^{nd}$ to the 4$^{th}$ combination in Table~\ref{tab:results_ablation}). We can validate that each component efficiently improves the performance of the baseline, mainly because each component contributes to the flow of information and gradients.
We further add two components to the baseline, resulting in RDN\_CM1LRL1GFF0, RDN\_CM1LRL0GFF1, and RDN\_CM0LRL1GFF1 respectively (from the 5$^{th}$ to the 7$^{th}$ combination in Table~\ref{tab:results_ablation}). It can be seen that any two components perform better than only one. A similar phenomenon can be observed when we use all three components simultaneously (denoted as RDN\_CM1LRL1GFF1), which performs the best.
We also visualize the convergence process of these eight combinations in Fig.~\ref{fig:study_CM_LRL_GFF}. The convergence curves are consistent with the analyses above and show that CM, LRL, and GFF further stabilize the training process without an obvious performance drop. These quantitative and visual analyses demonstrate the effectiveness and benefits of our proposed CM, LRL, and GFF.
\begin{figure}[t]
\scriptsize
\centering
\centerline{
\includegraphics[scale =0.40]{ablation_CM_LRL_GFF.pdf}}
\caption{Convergence analysis on CM, LRL, and GFF. The curves for each combination are based on the PSNR on Set5 with scaling factor~$\times2$ in 200 epochs.}
\label{fig:study_CM_LRL_GFF}
\vspace{-5mm}
\end{figure}
\begin{table*}[htpb]
\scriptsize
\centering
\begin{center}
\begin{tabular*}{169.85mm}{@{\extracolsep{-0.928mm}}|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Dataset} & \multirow{2}{*}{Scale} & \multirow{2}{*}{Bicubic} & SRCNN & LapSRN & DRRN & SRDenseNet & MemNet & MDSR & RDN & RDN+
\\
& & & \cite{dong2016image} & \cite{lai2017deep} & \cite{tai2017image} & \cite{tong2017image} & \cite{tai2017memnet} & \cite{lim2017enhanced} & (ours) & (ours)
\\
\hline
\hline
\multirow{3}{*}{Set5}
& $\times2$
& 33.66/0.9299
& 36.66/0.9542
& 37.52/0.9591
& 37.74/0.9591
& -/-
& 37.78/0.9597
& 38.11/0.9602
& 38.24/0.9614
& \textbf{38.30}/\textbf{0.9616}
\\
& $\times3$
& 30.39/0.8682
& 32.75/0.9090
& 33.82/0.9227
& 34.03/0.9244
& -/-
& 34.09/0.9248
& 34.66/0.9280
& 34.71/0.9296
& \textbf{34.78}/\textbf{0.9300}
\\
& $\times4$
& 28.42/0.8104
& 30.48/0.8628
& 31.54/0.8855
& 31.68/0.8888
& 32.02/0.8934
& 31.74/0.8893
& 32.50/0.8973
& 32.47/0.8990
& \textbf{32.61}/\textbf{0.9003}
\\
\hline
\hline
\multirow{3}{*}{Set14}
& $\times2$
& 30.24/0.8688
& 32.45/0.9067
& 33.08/0.9130
& 33.23/0.9136
& -/-
& 33.28/0.9142
& 33.85/0.9198
& 34.01/0.9212
& \textbf{34.10}/\textbf{0.9218}
\\
& $\times3$
& 27.55/0.7742
& 29.30/0.8215
& 29.79/0.8320
& 29.96/0.8349
& -/-
& 30.00/0.8350
& 30.44/0.8452
& 30.57/0.8468
& \textbf{30.67}/\textbf{0.8482}
\\
& $\times4$
& 26.00/0.7027
& 27.50/0.7513
& 28.19/0.7720
& 28.21/0.7721
& 28.50/0.7782
& 28.26/0.7723
& 28.72/0.7857
& 28.81/0.7871
& \textbf{28.92}/\textbf{0.7893}
\\
\hline
\hline
\multirow{3}{*}{B100}
& $\times2$
& 29.56/0.8431
& 31.36/0.8879
& 31.80/0.8950
& 32.05/0.8973
& -/-
& 32.08/0.8978
& 32.29/0.9007
& 32.34/0.9017
& \textbf{32.40}/\textbf{0.9022}
\\
& $\times3$
& 27.21/0.7385
& 28.41/0.7863
& 28.82/0.7973
& 28.95/0.8004
& -/-
& 28.96/0.8001
& 29.25/0.8091
& 29.26/0.8093
& \textbf{29.33}/\textbf{0.8105}
\\
& $\times4$
& 25.96/0.6675
& 26.90/0.7101
& 27.32/0.7280
& 27.38/0.7284
& 27.53/0.7337
& 27.40/0.7281
& 27.72/0.7418
& 27.72/0.7419
& \textbf{27.80}/\textbf{0.7434}
\\
\hline
\hline
\multirow{3}{*}{Urban100}
& $\times2$
& 26.88/0.8403
& 29.50/0.8946
& 30.41/0.9101
& 31.23/0.9188
& -/-
& 31.31/0.9195
& 32.84/0.9347
& 32.89/0.9353
& \textbf{33.09}/\textbf{0.9368}
\\
& $\times3$
& 24.46/0.7349
& 26.24/0.7989
& 27.07/0.8272
& 27.53/0.8378
& -/-
& 27.56/0.8376
& 28.79/0.8655
& 28.80/0.8653
& \textbf{29.00}/\textbf{0.8683}
\\
& $\times4$
& 23.14/0.6577
& 24.52/0.7221
& 25.21/0.7553
& 25.44/0.7638
& 26.05/0.7819
& 25.50/0.7630
& 26.67/0.8041
& 26.61/0.8028
& \textbf{26.82}/\textbf{0.8069}
\\
\hline
\hline
\multirow{3}{*}{Manga109}
& $\times2$
& 30.80/0.9339
& 35.60/0.9663
& 37.27/0.9740
& 37.60/0.9736
& -/-
& 37.72/0.9740
& 38.96/0.9769
& 39.18/0.9780
& \textbf{39.38}/\textbf{0.9784}
\\
& $\times3$
& 26.95/0.8556
& 30.48/0.9117
& 32.19/0.9334
& 32.42/0.9359
& -/-
& 32.51/0.9369
& 34.17/0.9473
& 34.13/0.9484
& \textbf{34.43}/\textbf{0.9498}
\\
& $\times4$
& 24.89/0.7866
& 27.58/0.8555
& 29.09/0.8893
& 29.18/0.8914
& -/-
& 29.42/0.8942
& 31.11/0.9148
& 31.00/0.9151
& \textbf{31.39}/\textbf{0.9184}
\\
\hline
\end{tabular*}
\end{center}
\vspace{-2mm}
\caption{Benchmark results with \textbf{BI} degradation model. Average PSNR/SSIM values for scaling factor $\times2$, $\times3$, and $\times4$.}
\label{tab:results_BI_5sets}
\vspace{-3mm}
\end{table*}
\begin{figure*}[htbp]
\centering
\includegraphics[width = 170mm]{visual_BI.pdf}
\caption{Visual results with \textbf{BI} model ($\times4$). The SR results are for image ``119082'' from B100 and ``img\_043'' from Urban100 respectively.}
\label{fig:visual_BI}
\vspace{-5mm}
\end{figure*}
\subsection{Results with BI Degradation Model}
\label{subsec:BI-degradation}
Simulating the LR image with the BI degradation model is widely used in image SR settings. For the BI degradation model, we compare our RDN with 6 state-of-the-art image SR methods: SRCNN~\cite{dong2016image}, LapSRN~\cite{lai2017deep}, DRRN~\cite{tai2017image}, SRDenseNet~\cite{tong2017image}, MemNet~\cite{tai2017memnet}, and MDSR~\cite{lim2017enhanced}. Similar to~\cite{timofte2016seven,lim2017enhanced}, we also adopt the self-ensemble strategy~\cite{lim2017enhanced} to further improve our RDN and denote the self-ensembled RDN as RDN+. As analyzed above, a deeper and wider RDN would lead to better performance. On the other hand, as most methods for comparison use only about 64 filters per Conv layer, we report results of RDN using D = 16, C = 8, and G = 64 for a fair comparison. EDSR~\cite{lim2017enhanced} is skipped here, because it uses far more filters (\textit{i.e.}, 256) per Conv layer, leading to a very wide network with a large number of parameters. Nevertheless, our RDN still achieves comparable or even better results than EDSR~\cite{lim2017enhanced}.
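As an illustration, the geometric self-ensemble strategy behind RDN+ can be sketched as follows (a minimal sketch, not the authors' implementation; the NumPy-based interface, where \texttt{model} maps a square LR array to an SR array, is our assumption):

```python
import numpy as np

def self_ensemble(model, lr):
    """Geometric self-ensemble: super-resolve the 8 flip/rotation
    variants of the LR input, undo each transform on the output,
    and average the results."""
    outputs = []
    for k in range(4):                      # rotations by 0/90/180/270 degrees
        for flip in (False, True):
            x = np.rot90(lr, k)
            if flip:
                x = np.fliplr(x)
            y = model(x)
            if flip:                        # invert the transforms in reverse order
                y = np.fliplr(y)
            outputs.append(np.rot90(y, -k))
    return np.mean(outputs, axis=0)
```

Each of the 8 transforms is inverted on the output before averaging, so the ensemble reduces to the identity for a transform-equivariant model.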
Table~\ref{tab:results_BI_5sets} shows quantitative comparisons for $\times2$, $\times3$, and $\times4$ SR. Results of SRDenseNet~\cite{tong2017image} are cited from their paper. When compared with persistent CNN models (SRDenseNet~\cite{tong2017image} and MemNet~\cite{tai2017memnet}), our RDN performs the best on all datasets with all scaling factors. This indicates the greater effectiveness of our residual dense block (RDB) over the dense block in SRDenseNet~\cite{tong2017image} and the memory block in MemNet~\cite{tai2017memnet}. When compared with the remaining models, our RDN also achieves the best average results on most datasets. Specifically, for the scaling factor $\times2$, our RDN performs the best on all datasets. When the scaling factor becomes larger (e.g., $\times3$ and $\times4$), RDN does not hold a similar advantage over MDSR~\cite{lim2017enhanced}. There are mainly three reasons for this. First, MDSR is deeper (160 vs. 128 layers), having about 160 layers to extract features in LR space. Second, MDSR utilizes multi-scale inputs as VDSR does~\cite{kim2016accurate}. Third, MDSR uses a larger input patch size for training (65 vs. 32). As most images in Urban100 contain self-similar structures, a larger training patch size allows a very deep network to grasp more information through its larger receptive field. As we mainly focus on the effectiveness of our RDN and a fair comparison, we do not use a deeper network, multi-scale information, or a larger input patch size. Moreover, our RDN+ can achieve further improvement with self-ensemble~\cite{lim2017enhanced}.
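For reference, the PSNR metric underlying these tables can be sketched as follows (a hedged sketch: the border-shave width, commonly set to the scaling factor, and the function name are our assumptions; SR evaluation is typically done on the Y channel of 8-bit images):

```python
import numpy as np

def psnr(ref, out, shave=4):
    """Peak signal-to-noise ratio in dB for 8-bit images.

    Borders of width `shave` (assumed here; commonly the scaling
    factor) are cropped before comparison, as is standard in SR
    evaluation."""
    ref = ref[shave:-shave, shave:-shave].astype(np.float64)
    out = out[shave:-shave, shave:-shave].astype(np.float64)
    mse = np.mean((ref - out) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```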
In Fig.~\ref{fig:visual_BI}, we show visual comparisons at scale $\times4$. For image ``119082'', we observe that most of the compared methods produce noticeable artifacts and blurred edges. In contrast, our RDN recovers sharper and clearer edges, more faithful to the ground truth. For the tiny line (pointed to by the {\color{red}red arrow}) in image ``img\_043'', all the compared methods fail to recover it, while our RDN recovers it clearly. This is mainly because RDN uses hierarchical features through dense feature fusion.
\begin{table*}[htbp]
\scriptsize
\centering
\begin{center}
\begin{tabular*}{170.9mm}{@{\extracolsep{-0.928mm}}|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Dataset} & \multirow{2}{*}{Model} & \multirow{2}{*}{Bicubic} & SPMSR & SRCNN & FSRCNN & VDSR & IRCNN\_G & IRCNN\_C & RDN & RDN+
\\
& & & \cite{peleg2014statistical} & \cite{dong2016image} & \cite{dong2016accelerating} & \cite{kim2016accurate} & \cite{zhang2017learning} & \cite{zhang2017learning} & (ours) & (ours)
\\
\hline
\hline
\multirow{2}{*}{Set5}
& \textbf{BD}
& 28.78/0.8308
& 32.21/0.9001
& 32.05/0.8944
& 26.23/0.8124
& 33.25/0.9150
& 33.38/0.9182
& 33.17/0.9157
& 34.58/0.9280
& \textbf{34.70}/\textbf{0.9289}
\\
& \textbf{DN}
& 24.01/0.5369
& -/-
& 25.01/0.6950
& 24.18/0.6932
& 25.20/0.7183
& 25.70/0.7379
& 27.48/0.7925
& 28.47/0.8151
& \textbf{28.55}/\textbf{0.8173}
\\
\hline
\hline
\multirow{2}{*}{Set14}
& \textbf{BD}
& 26.38/0.7271
& 28.89/0.8105
& 28.80/0.8074
& 24.44/0.7106
& 29.46/0.8244
& 29.63/0.8281
& 29.55/0.8271
& 30.53/0.8447
& \textbf{30.64}/\textbf{0.8463}
\\
& \textbf{DN}
& 22.87/0.4724
& -/-
& 23.78/0.5898
& 23.02/0.5856
& 24.00/0.6112
& 24.45/0.6305
& 25.92/0.6932
& 26.60/0.7101
& \textbf{26.67}/\textbf{0.7117}
\\
\hline
\hline
\multirow{2}{*}{B100}
& \textbf{BD}
& 26.33/0.6918
& 28.13/0.7740
& 28.13/0.7736
& 24.86/0.6832
& 28.57/0.7893
& 28.65/0.7922
& 28.49/0.7886
& 29.23/0.8079
& \textbf{29.30}/\textbf{0.8093}
\\
& \textbf{DN}
& 22.92/0.4449
& -/-
& 23.76/0.5538
& 23.41/0.5556
& 24.00/0.5749
& 24.28/0.5900
& 25.55/0.6481
& 25.93/0.6573
& \textbf{25.97}/\textbf{0.6587}
\\
\hline
\hline
\multirow{2}{*}{Urban100}
& \textbf{BD}
& 23.52/0.6862
& 25.84/0.7856
& 25.70/0.7770
& 22.04/0.6745
& 26.61/0.8136
& 26.77/0.8154
& 26.47/0.8081
& 28.46/0.8582
& \textbf{28.67}/\textbf{0.8612}
\\
& \textbf{DN}
& 21.63/0.4687
& -/-
& 21.90/0.5737
& 21.15/0.5682
& 22.22/0.6096
& 22.90/0.6429
& 23.93/0.6950
& 24.92/0.7364
& \textbf{25.05}/\textbf{0.7399}
\\
\hline
\hline
\multirow{2}{*}{Manga109}
& \textbf{BD}
& 25.46/0.8149
& 29.64/0.9003
& 29.47/0.8924
& 23.04/0.7927
& 31.06/0.9234
& 31.15/0.9245
& 31.13/0.9236
& 33.97/0.9465
& \textbf{34.34}/\textbf{0.9483}
\\
& \textbf{DN}
& 23.01/0.5381
& -/-
& 23.75/0.7148
& 22.39/0.7111
& 24.20/0.7525
& 24.88/0.7765
& 26.07/0.8253
& 28.00/0.8591
& \textbf{28.18}/\textbf{0.8621}
\\
\hline
\end{tabular*}
\end{center}
\vspace{-3mm}
\caption{Benchmark results with \textbf{BD} and \textbf{DN} degradation models. Average PSNR/SSIM values for scaling factor $\times3$.}
\label{tab:results_BD_DN_5sets}
\vspace{-5mm}
\end{table*}
\subsection{Results with BD and DN Degradation Models}
\label{subsec:BD-DN-degradation}
Following~\cite{zhang2017learning}, we also show the SR results with the BD degradation model and further introduce the DN degradation model. Our RDN is compared with SPMSR~\cite{peleg2014statistical}, SRCNN~\cite{dong2016image}, FSRCNN~\cite{dong2016accelerating}, VDSR~\cite{kim2016accurate}, IRCNN\_G~\cite{zhang2017learning}, and IRCNN\_C~\cite{zhang2017learning}. We re-train SRCNN, FSRCNN, and VDSR for each degradation model. Table~\ref{tab:results_BD_DN_5sets} shows the average PSNR and SSIM results on Set5, Set14, B100, Urban100, and Manga109 with scaling factor $\times3$. Our RDN and RDN+ perform the best on all the datasets with both BD and DN degradation models. The performance gains over other state-of-the-art methods are consistent with the visual results in Figs.~\ref{fig:visual_BD} and~\ref{fig:visual_DN}.
For the \textbf{BD} degradation model (Fig.~\ref{fig:visual_BD}), the methods using the interpolated LR image as input produce noticeable artifacts and fail to remove the blurring. In contrast, our RDN suppresses the blurring artifacts and recovers sharper edges. This comparison indicates that extracting hierarchical features from the original LR image alleviates the blurring artifacts. It also demonstrates the strong ability of RDN for the \textbf{BD} degradation model.
For the \textbf{DN} degradation model (Fig.~\ref{fig:visual_DN}), the LR image is corrupted by noise, which destroys some details. We observe that these noisy details are hard for other methods~\cite{dong2016image,kim2016accurate,zhang2017learning} to recover. However, our RDN not only handles the noise efficiently, but also recovers more details. This comparison indicates that RDN is applicable to joint image denoising and SR. These results with \textbf{BD} and \textbf{DN} degradation models demonstrate the effectiveness and robustness of our RDN model.
\vspace{-3mm}
\begin{figure}[htbp]
\centering
\includegraphics[width = 84mm]{visual_BDx3.pdf}
\caption{Visual results using \textbf{BD} degradation model with scaling factor $\times3$. The SR results are for image ``img\_096'' from Urban100 and ``img\_099'' from Urban100 respectively.}
\label{fig:visual_BD}
\vspace{-3mm}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width = 84mm]{visual_DN.pdf}
\caption{Visual results using \textbf{DN} degradation model with scaling factor $\times3$. The SR results are for image ``302008'' from B100 and ``LancelotFullThrottle'' from Manga109 respectively.}
\label{fig:visual_DN}
\vspace{-5mm}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width = 84mm]{visual_realdata.pdf}
\caption{Visual results on real-world images with scaling factor $\times4$. The two rows show SR results for images ``chip'' and ``hatc'' respectively.}
\label{fig:visual_realdata}
\vspace{-5mm}
\end{figure}
\vspace{-1mm}
\subsection{Super-Resolving Real-World Images}
\label{subsec:realworld}
\vspace{-1mm}
We also conduct SR experiments on two representative real-world images, ``chip'' (with 244$\times$200 pixels) and ``hatc'' (with 133$\times$174 pixels)~\cite{zhang2017collaborative}. In this case, the original HR images are not available and the degradation model is unknown as well. We compare our RDN with VDSR~\cite{kim2016accurate}, LapSRN~\cite{lai2017deep}, and MemNet~\cite{tai2017memnet}. As shown in Fig.~\ref{fig:visual_realdata}, our RDN recovers sharper edges and finer details than the other state-of-the-art methods. These results further indicate the benefits of learning dense features from the original input image. The hierarchical features perform robustly for different or unknown degradation models.
\vspace{-3mm}
\section{Conclusions}
\vspace{-2mm}
In this paper, we proposed a very deep residual dense network (RDN) for image SR, where the residual dense block (RDB) serves as the basic building module. In each RDB, the dense connections among the layers allow full usage of local layers. The local feature fusion (LFF) not only stabilizes the training of the wider network, but also adaptively controls the preservation of information from the current and preceding RDBs. An RDB further allows direct connections between the preceding RDB and each layer of the current block, leading to a contiguous memory (CM) mechanism. The local residual learning (LRL) further improves the flow of information and gradient. Moreover, we propose global feature fusion (GFF) to extract hierarchical features in the LR space. By fully using local and global features, our RDN leads to a dense feature fusion and deep supervision. We use the same RDN structure to handle three degradation models and real-world data. Extensive benchmark evaluations demonstrate that our RDN achieves superiority over state-of-the-art methods.
\vspace{-2mm}
\section{Acknowledgements}
\vspace{-2mm}
This research is supported in part by the NSF IIS award 1651902, ONR Young Investigator Award N00014-14-1-0484, and U.S. Army Research Office Award W911NF-17-1-0367.
{\small
\bibliographystyle{ieee}
}
\section{Introduction}
In this note we study a hydrodynamic model of collective behavior given by
\begin{equation}\label{e:main}
\left\{
\begin{split}
\rho_t + \nabla \cdot (\rho u) & = 0, \\
u_t + u \cdot \nabla u &= [\mathcal{L}_\phi, u](\rho),
\end{split} \right. \qquad (x,t)\in \T^n \times \R_+.
\end{equation}
where the commutator $[\mathcal{L}_\phi, u](\rho):=\mathcal{L}_\phi(\rho u)-\mathcal{L}_\phi(\rho) u$
involves a non-negative communication kernel $\phi$, and $\mathcal{L}_\phi$ is given by
\begin{equation}\label{e:L}
\mathcal{L}_\phi(f):=\int_{\T^n} \phi(|x-y|)(f(y)-f(x))dy.
\end{equation}
This model represents a macroscopic description of an agent-based Cucker-Smale dynamics. The basic objective of such models is to explain emergence of flocking and alignment in a wide range of biological and social systems, see \cite{CCTT2016,CS2007a,MT2014,PS2012} and references therein. The system is driven by a self-organization process where agents exert influence on each other's momenta through the kernel $\phi$. The long time behavior is characterized by alignment, i.e. convergence of velocities $u$ to a single value $\bar{u}$, and flocking, which in our context is defined by convergence of the density $\rho$ to a traveling wave $\rho_\infty(x - t \bar{u})$. A common deficiency of existing models is the use of long range connections expressed, for example, by $\int_1^\infty \phi(r) dr =\infty$. Such connections, albeit necessary for the alignment, should be less dominant in a realistic system. In a new class of models introduced by E. Tadmor and the author in \cite{ST3,ST1,ST2} and independently by T. Do, et al \cite{DKRT2017}, we proposed to use the singular kernel of the fractional Laplacian given by
\begin{equation}\label{e:kernel}
\phi_\alpha(x) := \sum_{k\in \Z^n} \frac{1}{|x+ 2\pi k|^{n+\alpha}}, \qquad 0<\alpha< 2,
\end{equation}
which puts more emphasis on local interactions. Global well-posedness theory for these singular models has been developed only in 1D, mainly due to the presence of an additional conserved quantity
\[
e = u_x + \mathcal{L}_\phi \rho.
\]
It establishes control over higher order terms by means of a pointwise bound $|e| < C \rho$. In multi-dimensional settings the corresponding quantity is given by
\[
e = \nabla \cdot u + \mathcal{L}_\phi \rho,
\]
and satisfies
\begin{equation}\label{e:e}
e_t + \nabla \cdot(u e) = (\nabla \cdot u)^2 - \tr(\nabla u)^2 ,
\end{equation}
see \cite{HeT2017} and Section~\ref{s:proof} below for the derivation. The lack of control on $e$ in this case is part of the reason why in multiple dimensions the model has no developed regularity theory. The two notable exceptions are the result of Ha et al.~\cite{Ha2014} demonstrating global existence in the case of a smooth communication kernel $\phi$ with small initial data in higher order Sobolev spaces, $\|u\|_{H^{s+1}} < \varepsilon_0$, where $\varepsilon_0$ depends on $\|\rho_0\|_{H^s}$; and that of He and Tadmor~\cite{HeT2017} with global existence in 2D under a smallness assumption only on the initial amplitude of $u$, a spectral gap of the stretch tensor $\nabla u + \nabla^\top u$, and under the threshold condition $e\geq 0$ (also necessary in 1D for smooth kernels, \cite{CCTT2016}).
The goal of this note is to address global existence of smooth solutions for singular models \eqref{e:main} -- \eqref{e:kernel} in any dimension under periodic boundary conditions. We establish a small initial data result where smallness is expressed only in terms of the initial amplitude of the solution. To be precise, let us introduce some notation and terminology. We define the amplitude by
\begin{equation}
A(t) = \sup_{x \in \T^n} |u(x,t) - \bar{u}|,
\end{equation}
where $\bar{u} = \mathcal{P}/ \mathcal{M}$, $\mathcal{P} = \int_{\T^n} u\rho \, \mbox{d}x$, $\mathcal{M} = \int_{\T^n} \rho \, \mbox{d}x$. Observe that the momentum $\mathcal{P}$ and mass $\mathcal{M}$ are conserved.
Exactly the same argument as in \cite{ST1} applied to each component of $u$ proves the following a priori bound:
\begin{equation}\label{e:A}
A(t) \leq A_0 e^{- \phi_{min} \mathcal{M} t}, \quad \phi_{min} = \min_{x\in \T^n} \phi(x).
\end{equation}
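For the reader's convenience we sketch the maximum-principle argument behind \eqref{e:A} (following \cite{ST1}, applied componentwise): at a point $x^+$ where $u_i$ attains its maximum, the transport term vanishes and
\[
\frac{\mbox{d\,\,}}{\mbox{d}t}\, u_i(x^+) = \int_{\T^n} \phi(|x^+ - y|)\, \rho(y)\big(u_i(y) - u_i(x^+)\big) \, \mbox{d}y \leq \phi_{min}\, \mathcal{M} \big(\bar{u}_i - u_i(x^+)\big),
\]
since $u_i(y) - u_i(x^+) \leq 0$ and $\int_{\T^n} \rho u_i \, \mbox{d}x = \mathcal{M} \bar{u}_i$. The symmetric bound at a minimum point and Gr\"onwall's lemma yield \eqref{e:A}.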
Hence, global solutions automatically align exponentially fast. Note that $(\bar{u}, \bar{\rho})$, where $\bar{\rho} = \rho_\infty(x - t \bar{u})$ and $\bar{u}$ is constant, is a traveling wave solution to \eqref{e:main}. We call it a flocking state. As shown in \cite{ST3,ST2} any solution to \eqref{e:main} in 1D for any $0<\alpha<2$ converges exponentially fast to some flocking state in smooth regularity classes.
We use $|\cdot|_X$ to denote standard metrics and $[\cdot]_s$ to denote the homogeneous metric of $\dot{W}^{s,\infty}$. We now state our main result.
\begin{theorem}\label{t:main}
Let $0 < \alpha < 2$. There exists an $N\in \N$ such that, for any sufficiently large $R>0$, any initial condition $(u_0,\rho_0) \in H^{m}(\T^n) \times H^{m-1+\alpha}(\T^n)$, $m\geq n+4$, satisfying
\begin{equation}
\begin{split}
|\rho_0|_\infty, |\rho^{-1}_0|_\infty, [u_0]_3, [\rho_0]_3 & \leq R,\\
A_0 & \leq \frac{1}{R^N},
\end{split}
\end{equation}
gives rise to a unique global solution in class $C([0,\infty): H^{m} \times H^{m-1+\alpha})$. Moreover, the solution converges to a flocking state exponentially fast at least in $C^1$:
\[
|\rho(t) - \bar{\rho}(t)|_{C^1} < C e^{-\delta t}.
\]
\end{theorem}
Finally we note that the argument of \thm{t:main} establishes a uniform control on $C^2$-norms of $u$ and the distance between the initial density $\rho_0$ and its final distribution $\rho_\infty$. As a consequence we obtain a stability result for flocking states.
\begin{theorem}\label{t:stab}
Let $(\bar{u},\bar{\rho})$ be a flocking state, where $\bar{\rho}(x) = \rho_\infty(x - t \bar{u})$, and let $(u_0,r_0)$ be an initial data satisfying
the conditions of \thm{t:main}. Suppose $|u_0 - \bar{u}|_\infty + |r_0 - \rho_\infty|_\infty < \varepsilon$. Then the solution will converge to another flock $r_\infty$ with $|r_\infty - \rho_\infty|_\infty < \varepsilon^\theta$, where $\theta\in (0,1)$ depends only on $\alpha$.
\end{theorem}
\section{Proof of the main result}\label{s:proof}
\noindent \textsc{Local existence and regularity criterion}. First, to see \eqref{e:e}, we take the divergence of the momentum equation, denoting $d = \nabla \cdot u$,
\[
d_t+u\cdot \nabla d + \tr(\nabla u)^2 = \mathcal{L}_\phi( \nabla \cdot (u \rho)) - u \cdot \nabla \mathcal{L}_\phi \rho - d \mathcal{L}_\phi \rho.
\]
Replacing $\mathcal{L}_\phi( \nabla \cdot (u \rho)) = - \mathcal{L}_\phi \rho_t$ and collecting the terms we obtain \eqref{e:e}.
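Spelled out: since $\mathcal{L}_\phi \rho_t = -\mathcal{L}_\phi(\nabla \cdot (u\rho))$ by the continuity equation, adding $\mathcal{L}_\phi \rho_t + u \cdot \nabla \mathcal{L}_\phi \rho$ to both sides of the $d$-equation gives
\[
e_t + u \cdot \nabla e = - \tr(\nabla u)^2 - d\, \mathcal{L}_\phi \rho,
\]
and adding $d\, e = d^2 + d\, \mathcal{L}_\phi \rho$ to both sides turns the left hand side into $e_t + \nabla \cdot (u e)$ and the right hand side into $(\nabla \cdot u)^2 - \tr(\nabla u)^2$.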
The proof of local existence in the space $u,\rho \in H^m$ for $m\geq n+4$ carries over verbatim from the 1D case, see \cite{ST1}. Although the right hand side of \eqref{e:e} is not zero, it is quadratic in $\nabla u$. This results in the same a priori bound on the $e$-quantity:
\begin{equation}\label{eq:leb}
\partial_t |e|_{H^{m-1}}^2 \leq C (|e|_{H^{m-1}}^2 + |u|_{H^{m}}^2)( |\nabla u|_\infty + |e|_\infty).
\end{equation}
The bound on $u$ is also the same as in \cite{ST1}. We still have the Riccati bound for $Y = |u|_{H^m} + |e|_{H^{m-1}} + |\rho|_2 \sim |u|_{H^m} + |\rho|_{H^{m-1+\alpha}}$:
\[
Y_t \leq C Y^2.
\]
Along with the local existence estimates comes a Beale-Kato-Majda type regularity criterion: as long as $|\nabla u|_\infty$ is bounded on an interval $[0,T]$, all the higher order norms remain bounded as well, and hence the solution can be extended beyond $T$. The only difference is the $\nabla u$-term in the $e$-equation, which remains bounded under the BKM condition; hence so does $|e|_\infty$, which closes the estimate \eqref{eq:leb}.
In order to control $|\nabla u|_\infty$ it is sufficient to establish control over a higher order H\"older metric. We choose to work with the $C^{2+\gamma}$-norm as opposed to, say, $C^{1+\gamma}$-norm for two technical reasons. First, it will be necessary to overcome the singularity of the kernel for values of $\alpha$ greater than $1$, for which $C^{1+\gamma}$ is insufficient. Second, as a byproduct we establish control over $|\nabla^2 u|_\infty$ as well, which will lead us to the proof of strong flocking as in \cite{ST2}.
From now on we will fix an exponent $0<\gamma<1$ to be identified later but dependent only on $\alpha$. The following notation will be used throughout:
\[
\begin{split}
\delta_h u(x) &= u(x+h) - u(x), \quad \tau_z u(x) = u(x+z) \\
\delta^2_h u & = \delta_h(\delta_h u), \quad \delta^3_h u = \delta_h(\delta_h(\delta_h u)).
\end{split}
\]
We use the H\"older metric defined via third order finite differences
\begin{equation}\label{e:holder}
[u]_{2+\gamma} = \sup_{x,h \in \T^n} \frac{|\delta^3_h u(x)|}{|h|^{2+\gamma}}.
\end{equation}
The equivalence of \eqref{e:holder} to the classical norm $[\nabla^2 u]_\gamma$ is a well known result in approximation theory, see \cite{Triebel}.
\medskip
\noindent \textsc{Breakthrough scenario}. We assume that we are given a local solution $(u,\rho) \in C([0,\infty): H^{m} \times H^{m-1+\alpha})$ satisfying the assumptions of the Theorem. Note that in view of the smallness assumption on $A_0$, the norm $[u(t) ]_{2+\gamma}$ will remain smaller than $1$ at least for a short period of time. We thus study a possible critical time $t^*<T$ at which the solution reaches size $R$ for the first time:
\begin{equation}\label{e:R}
[u(t^*) ]_{2+\gamma} = R , \quad [u(t) ]_{2+\gamma} < R, \quad t< t^*.
\end{equation}
A contradiction will be achieved if we show that $\partial_t [u(t^*) ]_{2+\gamma} <0$. This would establish the bound $[u(t) ]_{2+\gamma} <R$ on the entire interval of existence, and hence extension to a global solution.
\medskip
\noindent \textsc{Preliminary estimates on $[0,t^*]$}. First we observe two simple bounds:
\begin{equation}\label{e:u12}
[u(t)]_1, [u(t)]_2 < R^{-\frac{6}{\alpha}} e^{-\frac{c_0 t}{R}}, \text{ for all } t\leq t^*,
\end{equation}
provided $R$ and $N$ are sufficiently large. Indeed, in view of \eqref{e:A} and $\mathcal{M} \geq 1/R$,
\[
[u]_1 \leq A^{\frac{1+\gamma}{2+\gamma}} [u]_{2+\gamma}^{\frac{1}{2+\gamma}} \leq R^{1-N/2} e^{-c_0 t / R} <R^{-\frac{6}{\alpha}} e^{-\frac{c_0 t}{R}},
\]
and similarly,
\[
[u]_2 \leq A^{\frac{\gamma}{2+\gamma}} [u]_{2+\gamma}^{\frac{2}{2+\gamma}} < R^{1 - N \frac{\gamma}{2+\gamma}} e^{-c_0 t/R} \leq R^{-\frac{6}{\alpha}} e^{-c_0 t/R}.
\]
Next, we provide bounds on the density. Let us denote $\underline{\rho}$ and $\overline{\rho}$ the minimum and maximum of $\rho$, respectively. Denote $d = \nabla \cdot u$. The classical estimates imply
\[
\underline{\rho}_0 \exp\left\{- \int_0^t |d(s)|_\infty \, \mbox{d}s \right\} \leq \underline{\rho}(t),\quad \overline{\rho}(t) \leq \overline{\rho}_0 \exp\left\{ \int_0^t |d(s)|_\infty \, \mbox{d}s \right\} .
\]
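These follow from the continuity equation along the flow map $\dot{X}(t) = u(X(t),t)$:
\[
\frac{\mbox{d\,\,}}{\mbox{d}t}\, \rho(X(t),t) = - d(X(t),t)\, \rho(X(t),t), \qquad \rho(X(t),t) = \rho_0(X(0)) \exp\left\{-\int_0^t d(X(s),s) \, \mbox{d}s \right\}.
\]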
By \eqref{e:u12}, $|d|_\infty \leq R^{-3} e^{-c_0 s / R}$. Thus,
\[
\int_0^t |d(s)|_\infty \, \mbox{d}s \leq c R^{-2} \leq \ln 2.
\]
Hence, we obtain the estimates
\begin{equation}\label{e:rrRR}
\frac{1}{2R} \leq \underline{\rho}(t),\quad \overline{\rho}(t) \leq 2R.
\end{equation}
To get similar bounds for higher order derivatives of $\rho$ we resort to the $e$-quantity. Note that the right hand side of the e-equation is bounded by
\[
| (\nabla \cdot u)^2 - \tr(\nabla u)^2 | \leq c [u]_1^2 \lesssim R^{-6 } e^{-c_0 t /R}.
\]
From \eqref{e:e} we thus obtain
\[
\frac{\mbox{d\,\,}}{\mbox{d}t} |e|_\infty \leq R^{-3} e^{-c_0 t / R} |e|_\infty + R^{-6} e^{-c_0 t /R}.
\]
Again, by Gr\"onwall, and using that $|e_0|_\infty <R$,
\begin{equation}\label{e:eR}
|e|_\infty \leq 2R.
\end{equation}
The estimate for $\nabla e$ follows from a similar calculation. Differentiating and performing standard estimates we obtain
\[
\frac{\mbox{d\,\,}}{\mbox{d}t} [e]_1 \lesssim [u]_1 [e]_1 + [u]_2 |e|_\infty + c [u]_1 [u]_2 .
\]
Using \eqref{e:u12} we obtain
\[
\frac{\mbox{d\,\,}}{\mbox{d}t} [e]_1 \lesssim R^{-3} e^{-c_0 t / R} [e]_1 + 2 R^{-2} e^{-c_0 t/R} + c R^{-3} e^{-c_0 t/R}.
\]
Note that initially $[e_0]_1 \leq [u_0]_2 + [\rho_0]_3 < 2R$. So, by Gr\"onwall,
\[
[e]_1 \leq 4R,
\]
and hence, for $\alpha\neq 1$, we obtain
\begin{equation}\label{e:r1a}
[\rho]_{1+\alpha} \leq 5R,
\end{equation}
while for $\alpha=1$,
\[
[\mathcal{L}_1 \rho]_1 \leq 5R.
\]
The latter does not guarantee a bound in $W^{2,\infty}$; however, it implies bounds in other borderline classes such as the Zygmund class or the Besov space $B^2_{\infty,\infty}$. It will be sufficient for what follows to reduce the exponent $2$ by $\gamma>0$, which will ultimately depend only on $\alpha$, and state the case $\alpha\geq 1$ as
\begin{equation}\label{e:rR}
[\rho]_{1+\alpha - \gamma} \leq C_\alpha R.
\end{equation}
\medskip
\noindent \textsc{Nonlinear bound on dissipation}. We establish another auxiliary bound on the dissipation term similar to the nonlinear maximum principle estimate of Constantin and Vicol \cite{CV2012}, see also \cite{CCotiV2016}.
Denote
\[
D_\alpha f (x) = \int_{\R^n} |f(x+z) - f(x)|^2 \frac{\, \mbox{d}z}{|z|^{n+\alpha}}.
\]
\begin{lemma} There is an absolute constant $c_0>0$ such that
\begin{equation}\label{e:D}
D_\alpha \delta_h^3 u (x) \geq c_0 \frac{| \delta_h^3 u (x) |^{2 + \alpha}}{ [u]_2^\alpha |h|^{3 \alpha}}.
\end{equation}
\end{lemma}
\begin{proof}
Let us fix a smooth cut-off function $\psi$, and fix an $r>0$. We obtain
\[
\begin{split}
D_\alpha \delta_h^3 u (x) & \geq \int |\delta_z \delta_h^3 u (x)|^2 \frac{1-\psi(z/r)}{|z|^{n+\alpha}} \, \mbox{d}z \\
& \geq \int (|\delta_h^3 u (x)|^2 - 2 \delta_h^3 u (x) \delta_h^3 u (x+z)) \frac{1-\psi(z/r)}{|z|^{n+\alpha}} \, \mbox{d}z\\
&\geq | \delta_h^3 u (x)|^2 \frac{1}{r^\alpha} - 2 \delta_h^3 u (x) \int \delta_h^3 u (x+z) \frac{1-\psi(z/r)}{|z|^{n+\alpha}} \, \mbox{d}z
\end{split}
\]
Notice that
\[
\delta_h^3 u (x+z) = \int_0^1\int_0^1\int_0^1 \nabla^3_z u(x+z+(\theta_1+\theta_2+\theta_3)h) (h,h,h) \, \mbox{d}\theta_1\, \mbox{d}\theta_2 \, \mbox{d}\theta_3.
\]
Integrating by parts in $z$ once, and using the bound
\[
\left| \nabla_z \frac{1-\psi(z/r)}{|z|^{n+\alpha}} \right| \leq \frac{c}{|z|^{n+\alpha+1}} \chi_{|z|>r},
\]
we obtain
\[
\left| \int \delta_h^3 u (x+z) \frac{1-\psi(z/r)}{|z|^{n+\alpha}} \, \mbox{d}z \right| \leq C [u]_2 \frac{|h|^3}{r^{\alpha+1}}.
\]
We continue with the estimate:
\[
D_\alpha \delta_h^3 u (x) \geq | \delta_h^3 u (x)|^2 \frac{1}{r^\alpha} - C [u]_2 |\delta_h^3 u (x)| \frac{|h|^3}{r^{\alpha+1}}.
\]
Optimizing in $r$ yields the result.
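Explicitly, one admissible choice is $r = 2C [u]_2 |h|^3 / |\delta_h^3 u(x)|$, which makes the second term half of the first and leaves
\[
D_\alpha \delta_h^3 u (x) \geq \frac{|\delta_h^3 u(x)|^2}{2 r^\alpha} = \frac{|\delta_h^3 u(x)|^{2+\alpha}}{2\, (2C)^\alpha\, [u]_2^\alpha\, |h|^{3\alpha}},
\]
which is \eqref{e:D} with $c_0 = \big(2(2C)^\alpha\big)^{-1}$.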
\end{proof}
In view of the preliminary estimates we established and the assumption of the breakthrough scenario, we consequently obtain
\begin{equation}\label{e:diss}
\frac{1}{|h|^{4+ 2\gamma}} D_\alpha \delta_h^3 u (x) \geq \frac{R^{8+\alpha}}{|h|^{\alpha(1-\gamma)}}.
\end{equation}
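Indeed, at the critical time the maximizing pair $(x,h)$ satisfies $|\delta_h^3 u(x)| = R |h|^{2+\gamma}$, while $[u]_2 \leq R^{-6/\alpha}$ by \eqref{e:u12}. Substituting into \eqref{e:D},
\[
\frac{1}{|h|^{4+2\gamma}}\, D_\alpha \delta_h^3 u (x) \geq c_0\, \frac{R^{2+\alpha}\, |h|^{(2+\gamma)(2+\alpha)}}{R^{-6}\, |h|^{3\alpha + 4 + 2\gamma}} = c_0\, \frac{R^{8+\alpha}}{|h|^{\alpha(1-\gamma)}},
\]
since $(2+\gamma)(2+\alpha) - 3\alpha - 4 - 2\gamma = -\alpha(1-\gamma)$; the constant $c_0$ is absorbed into the large power of $R$.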
\medskip
\noindent \textsc{Main estimates.} With all ingredients at hand we are now ready to use the equation to estimate the derivative of $[u]_{2+\gamma}$. Let $(x,h)$ be a pair for which the supremum \eqref{e:holder} is attained. We write the equation for the third order difference:
\begin{equation}
\partial_t \delta^3_h u + \delta^3_h ( u \nabla u) = \int_{\R^n} \delta_h^3[\rho(\cdot +z)( u(\cdot +z) - u(\cdot))] \frac{\, \mbox{d}z}{|z|^{n+\alpha}}.
\end{equation}
Denote
\begin{equation}
\begin{split}
B& = \delta^3_h ( u \nabla u), \\
I &= \int_{\R^n} \delta_h^3[\rho(\cdot +z)( u(\cdot +z) - u(\cdot))] \frac{\, \mbox{d}z}{|z|^{n+\alpha}}.
\end{split}
\end{equation}
We will be testing the equation with $\delta_h^3 u(x)/|h|^{4+2\gamma}$. To expand the bilinear terms we make use of the product formula
\[
\delta^3_h(fg) = \delta_h^3 f \tau_{3h} g + 3 \delta_h^2 f \delta_h \tau_{2h} g + 3 \delta_h f \delta_h^2 \tau_h g + f \delta_h^3 g.
\]
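This discrete Leibniz rule can be verified numerically for smooth test functions (a standalone sketch; the function names are ours):

```python
import math

def fd(F, x, h, n):
    # n-th order forward finite difference of F at x with step h:
    # delta_h^n F(x) = sum_j (-1)^(n-j) C(n,j) F(x + j h)
    return sum((-1) ** (n - j) * math.comb(n, j) * F(x + j * h)
               for j in range(n + 1))

f, g = math.sin, math.exp
x0, h = 0.3, 0.17

lhs = fd(lambda t: f(t) * g(t), x0, h, 3)
# delta_h^3 f * tau_{3h} g + 3 delta_h^2 f * delta_h tau_{2h} g
#   + 3 delta_h f * delta_h^2 tau_h g + f * delta_h^3 g
rhs = (fd(f, x0, h, 3) * g(x0 + 3 * h)
       + 3 * fd(f, x0, h, 2) * fd(g, x0 + 2 * h, h, 1)
       + 3 * fd(f, x0, h, 1) * fd(g, x0 + h, h, 2)
       + f(x0) * fd(g, x0, h, 3))
assert abs(lhs - rhs) < 1e-12
```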
For the B-term we obtain
\[
B = \delta_h^3 u \tau_{3h} \nabla u + 3 \delta_h^2 u \delta_h \tau_{2h} \nabla u + 3 \delta_h u \delta_h^2 \tau_h \nabla u + u \nabla \delta_h^3 u.
\]
Note that the last term vanishes due to criticality. Thus, we can estimate
\[
\frac{1}{|h|^{2+\gamma}} |B| \leq [u]_{2+\gamma} [u]_1 + 3|h|^{1-\gamma}[u]_2^2 + 3 [u]_1 [u]_{2+\gamma} \lesssim [u]_{2+\gamma} [u]_1 + |h|^{1-\gamma} [u]_2^2.
\]
Multiplying by another $[u]_{2+\gamma} = R$ and using \eqref{e:u12} we obtain
\begin{equation}\label{e:B}
\frac{[u]_{2+\gamma}}{|h|^{2+\gamma}} |B| \lesssim R^{-1} + R^{-5} < 1.
\end{equation}
We now turn to the I-term which contains dissipation.
The integrand is given by $\delta^3_h [ \tau_z \rho\, \delta_z u]$. So, we expand similarly using commutativity $\delta_h \delta_z = \delta_z \delta_h$:
\begin{equation}
\begin{split}
\delta^3_h [ \tau_z \rho\, \delta_z u] = \delta_h^3 \tau_z \rho\, \tau_{3h} \delta_z u + 3 \delta_h^2 \tau_z \rho\, \tau_{2h} \delta_h \delta_z u+ 3 \delta_h \tau_z \rho\, \tau_h \delta_h^2 \delta_z u + \tau_z \rho\, \delta_z \delta_h^3 u.
\end{split}
\end{equation}
Multiplying by $\delta_h^3 u(x)$, the last term becomes dissipative:
\[
\tau_z \rho\, \delta_z \delta_h^3 u \, \delta_h^3 u \leq - \frac12 \underline{\rho}\, |\delta_z \delta_h^3 u |^2.
\]
Dividing by $|h|^{4+2 \gamma}$ and using \eqref{e:diss} we obtain
\begin{equation}\label{e:Duh}
\frac{1}{2|h|^{4+2 \gamma}} \underline{\rho} D_\alpha \delta_h^3 u (x) \geq \frac{ R^8}{ |h|^{\alpha(1-\gamma)}}.
\end{equation}
At this point it is clear that the transport term estimated in \eqref{e:B} is completely absorbed into dissipation at the critical time $t^*$:
\[
\partial_t [u]_{2+\gamma}^2 \leq - \frac{R^7}{ |h|^{\alpha(1-\gamma)}} + \frac{\delta_h^3 u(x)}{|h|^{4+2\gamma}} II.
\]
Here $II$ collects the three remaining terms of $I$:
\[
II = \int_{\R^n}[ \delta_h^3 \tau_z \rho\, \tau_{3h} \delta_z u + 3 \delta_h^2 \tau_z \rho\, \tau_{2h} \delta_h \delta_z u+ 3 \delta_h \tau_z \rho\, \tau_h \delta_h^2 \delta_z u ] \frac{\, \mbox{d}z}{|z|^{n+\alpha}} = II_1+3 II_2+3 II_3.
\]
Let us now turn to estimates on each of the remaining $II_i$ terms. Specifically, we will be aiming to obtain bounds of the form
\begin{equation}\label{e:aim}
\frac{1}{|h|^{2+\gamma}} |II_i| \lesssim \frac{ |h|^\varepsilon }{ |h|^{\alpha(1-\gamma)}}
\end{equation}
for some $\varepsilon>0$, provided $\gamma$ is sufficiently small. Such bounds ensure that the dissipation term absorbs all the remaining terms in the equation.
We start with $II_2$. For $\alpha<1$, we use \eqref{e:u12} and \eqref{e:r1a} to obtain
\[
\begin{split}
|\delta_h^2 \tau_z \rho | & \leq [\rho]_{1+\alpha} |h|^{1+\alpha} \lesssim R|h|^{1+\alpha}\\
|\tau_{2h} \delta_h \delta_z u |& \leq [u]_2 |h| \min\{|z|,1\} \lesssim R^{-1} |h| \min\{|z|,1\}.
\end{split}
\]
Thus, the singularity is removed and we obtain
\[
\frac{1}{|h|^{2+\gamma}}|II_2| \lesssim |h|^{\alpha - \gamma},
\]
which clearly implies \eqref{e:aim} for sufficiently small $\gamma$. In the case $\alpha \geq 1$ we first symmetrize
\[
II_2 = \frac12 \int_{\R^n}[ \delta_h^2 (\tau_z - \tau_{-z})\rho\, \tau_{2h} \delta_h \delta_z u+\delta_h^2 \tau_z \rho\, \tau_{2h} \delta_h (\delta_z + \delta_{-z}) u ] \frac{\, \mbox{d}z}{|z|^{n+\alpha}}.
\]
For the first part we use \eqref{e:rR}:
\[
\begin{split}
|\delta_h^2 (\tau_z - \tau_{-z})\rho | & \leq R \min\{|h|^{1+\alpha-\gamma}, |h|^{\alpha-\gamma} |z|\} \\
|\tau_{2h} \delta_h \delta_z u |& \leq R^{-1} |h| \min\{|z|,1\}.
\end{split}
\]
Hence,
\[
\frac{1}{|h|^{2+\gamma}} \int_{\R^n} | \delta_h^2 (\tau_z - \tau_{-z})\rho\, \tau_{2h} \delta_h \delta_z u | \frac{\, \mbox{d}z}{|z|^{n+\alpha}} \leq \frac{|h|^{1+\alpha - \gamma}}{|h|^{2+\gamma}} \leq \frac{ |h|^{\alpha -2 \gamma -1 + \alpha(1-\gamma)} }{ |h|^{\alpha(1-\gamma)}}.
\]
Clearly, $\alpha - 2 \gamma -1 + \alpha(1-\gamma)>0$. Lastly, using that $(\delta_z + \delta_{-z}) u $ is the second difference,
\[
| \delta_h^2 \tau_z \rho\, \tau_{2h} \delta_h (\delta_z + \delta_{-z}) u | \leq |h|^{2-\gamma} \min\{ |z|^2,1\}
\]
we obtain
\[
\frac{1}{|h|^{2+\gamma}} \int_{\R^n} | \delta_h^2 \tau_z \rho\, \tau_{2h} \delta_h (\delta_z + \delta_{-z}) u | \frac{\, \mbox{d}z}{|z|^{n+\alpha}} \leq \frac{|h|^{2-\gamma}}{|h|^{2+\gamma}} \leq \frac{|h|^{\alpha(1-\gamma) - 2\gamma }}{|h|^{\alpha(1-\gamma)}}.
\]
This completes the bounds on $II_2$.
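The smallness requirement on $\gamma$ in the $II_2$ bounds can be made concrete. The check below (an illustration, not part of the argument) confirms that both exponents produced above are positive throughout $\alpha \in [1,2)$ for, e.g., $\gamma = 0.05$, while a large $\gamma$ ruins the first one.

```python
# Quick check (illustration) that the exponents in the II_2 bounds are
# positive for alpha >= 1 once gamma is small:
#   e1 = alpha - 2*gamma - 1 + alpha*(1 - gamma)   (antisymmetrized part)
#   e2 = alpha*(1 - gamma) - 2*gamma               (second-difference part)
def e1(alpha, gamma): return alpha - 2*gamma - 1 + alpha*(1 - gamma)
def e2(alpha, gamma): return alpha*(1 - gamma) - 2*gamma

gamma = 0.05                                   # "sufficiently small"
alphas = [1.0 + 0.01*k for k in range(100)]    # alpha in [1, 2)
assert all(e1(a, gamma) > 0 for a in alphas)
assert all(e2(a, gamma) > 0 for a in alphas)
assert e1(1.0, 0.4) < 0                        # too large a gamma fails
```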
As to $II_3$ we proceed similarly. For $\alpha<1$, we use
\[
|\delta_h \tau_z \rho\, \tau_h \delta_h^2 \delta_z u | \leq |h|^2 \min\{|z|,1\}.
\]
Hence,
\[
\frac{1}{|h|^{2+\gamma}} |II_3| \lesssim \frac{|h|^2}{|h|^{2+\gamma}} \leq \frac{|h|^{\alpha(1-\gamma) - \gamma }}{|h|^{\alpha(1-\gamma)}}.
\]
The power in the numerator is positive for sufficiently small $\gamma$. For $\alpha\geq 1$, we again symmetrize first
\[
II_3 = \frac12 \int_{\R^n}[ \delta_h (\tau_z - \tau_{-z})\rho\, \tau_{h} \delta^2_h \delta_z u+\delta_h \tau_z \rho\, \tau_{h} \delta^2_h (\delta_z + \delta_{-z}) u ] \frac{\, \mbox{d}z}{|z|^{n+\alpha}}.
\]
Thus,
\[
\begin{split}
|\delta_h (\tau_z - \tau_{-z})\rho\, \tau_{h} \delta^2_h \delta_z u | & \leq \min\{|h|^{3+\alpha - \gamma}, |h|^{1+ \alpha} |z|^2 \} \\
| \delta_h \tau_z \rho\, \tau_{h} \delta^2_h (\delta_z + \delta_{-z}) u | & \leq \min\{|h|^3, |h| |z|^2\}.
\end{split}
\]
The first term results in an estimate as before. For the second we split the integration into regions $|z|<r$ and $|z|>r$ to obtain the bound by $|h| r^{2-\alpha} + |h|^3 r^{-\alpha}$. Setting $r = |h|$ leads to a further bound by $|h|^{3-\alpha}$ which implies the desired \eqref{e:aim}.
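The splitting step can be verified in a one-dimensional radial model (an illustration, not the actual $n$-dimensional integral): integrating $\min\{|h|^3, |h|z^2\} z^{-1-\alpha}$ in $z>0$ gives closed forms for the two regions, and $r=|h|$ exactly balances them, producing the $|h|^{3-\alpha}$ bound.

```python
# Illustrative check of the splitting |z|<r vs |z|>r used for II_3,
# in a 1-D radial model: integrating min(h^3, h z^2) z^{-1-alpha} yields
#   h r^(2-alpha)/(2-alpha) + h^3 r^(-alpha)/alpha,
# and r = h balances the two terms, giving the h^(3-alpha) bound.
alpha, h = 1.5, 1e-3     # illustrative values, 1 < alpha < 2

def split_bound(r):
    return h * r**(2 - alpha) / (2 - alpha) + h**3 * r**(-alpha) / alpha

balanced = split_bound(h)
expected = h**(3 - alpha) * (1/(2 - alpha) + 1/alpha)
assert abs(balanced - expected) < 1e-9 * expected

# r = h is in fact the exact minimizer of this model bound
samples = [h * 10**(0.1*k - 3) for k in range(61)]   # r from h/1000 to 1000h
assert min(split_bound(r) for r in samples) >= balanced * 0.999999
```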
Estimates on $II_1$ are somewhat more involved. For the case $\alpha \geq 1$ we symmetrize:
\[
II_1 = \frac12 \int_{\R^n}[ \delta_h^3 ( \tau_z \rho - \tau_{-z}\rho) \, \tau_{3h} \delta_z u + \delta_h^3 \tau_z \rho\, \tau_{3h} (\delta_z u + \delta_{-z} u) ] \frac{\, \mbox{d}z}{|z|^{n+\alpha}}.
\]
For the first half we use
\[
| \delta_h^3 ( \tau_z \rho - \tau_{-z}\rho) \, \tau_{3h} \delta_z u | \leq |h|^{\alpha-\gamma} \min\{|z|^2, |h|\}.
\]
For the second half we use
\[
| \delta_h^3 \tau_z \rho\, \tau_{3h} (\delta_z u + \delta_{-z} u) | \leq |h|^{1+\alpha-\gamma} \min\{|z|^2,1\},
\]
so this is estimated as before. One can see that, in fact, the estimates above extend to the range $\alpha>\frac12$, but not all the way down to zero. The problem is that the density absorbs all the variations in $h$ without fully using them, while $u$ cannot contribute directly. So, we will swap one $h$-difference back onto $u$. We start from the original formula
\[
II_1 = \int_{\R^n}\delta_h^3 \tau_z \rho(x) \, \tau_{3h} \delta_z u (x) \frac{\, \mbox{d}z}{|z|^{n+\alpha}}.
\]
Over the region $|z|<10 |h|$ we estimate directly using the same cut-off function $\psi$ as earlier:
\[
\int_{\R^n} | \delta_h^3 \tau_z \rho(x) \, \tau_{3h} \delta_z u (x) | \, \psi\left(\frac{z}{10|h|}\right) \frac{ \, \mbox{d}z}{|z|^{n+\alpha}} \leq \int_{|z|<10 |h| } |h|^{1+\alpha} \frac{\, \mbox{d}z}{|z|^{n+\alpha-1}} \lesssim |h|^2,
\]
which results in \eqref{e:aim}. For the remaining part, let us denote for clarity $f = \delta_h^2 \rho$. So, $\delta_h^3 \tau_z \rho(x) = f(x+h+z) - f(x+z)$. We write
\[
\begin{split}
& \int_{\R^n} (f(x+h+z) - f(x+z)) \, \tau_{3h} \delta_z u (x) \frac{(1-\psi(\frac{z}{10|h|})) \, \mbox{d}z}{|z|^{n+\alpha}} \\
&= \int_{\R^n} f(x+z) \left( \tau_{3h} \delta_{z-h} u (x) \frac{(1-\psi(\frac{z-h}{10|h|}))}{|z - h|^{n+\alpha}} - \tau_{3h} \delta_{z} u (x) \frac{(1-\psi(\frac{z}{10|h|}))}{|z |^{n+\alpha}} \right) \, \mbox{d}z \\
& = \int_{\R^n} f(x+z) \tau_{3h} (\delta_{z-h} u (x) - \delta_{z} u (x)) \frac{(1-\psi(\frac{z-h}{10|h|}))}{|z - h|^{n+\alpha}} \, \mbox{d}z \\
& - \int_{\R^n} f(x+z) \tau_{3h} \delta_{z} u (x) \left( \frac{(1-\psi(\frac{z-h}{10|h|}))}{|z - h|^{n+\alpha}} - \frac{(1-\psi(\frac{z}{10|h|}))}{|z |^{n+\alpha}} \right) \, \mbox{d}z
\end{split}
\]
Note that the integrals are still supported on $|z| > 9|h|$, where $|z - h| \sim |z|$. Estimating the first part we use
\[
\begin{split}
|\delta_{z-h} u (x) - \delta_{z} u (x)| &= | u(x+z - h) - u(x+z)| \leq |h| \\
|f(x+z)| & \leq |h|^{1+\alpha}.
\end{split}
\]
Thus,
\[
\left| \int_{\R^n} f(x+z) \tau_{3h} (\delta_{z-h} u (x) - \delta_{z} u (x)) \frac{(1-\psi(\frac{z-h}{10|h|}))}{|z - h|^{n+\alpha}} \, \mbox{d}z \right| \leq |h|^{2+\alpha} \int_{|z|\geq |h|} \frac{\, \mbox{d}z}{|z |^{n+\alpha}} \leq |h|^2,
\]
which implies \eqref{e:aim}. Finally, for the second part we use
\[
\left| \frac{(1-\psi(\frac{z-h}{10|h|}))}{|z - h|^{n+\alpha}} - \frac{(1-\psi(\frac{z}{10|h|}))}{|z |^{n+\alpha}} \right| \leq |h| \frac{I_{|z|>9|h|}}{|z - \theta h |^{n+\alpha+1}} \lesssim \frac{ |h| I_{|z|>9|h|}}{|z |^{n+\alpha+1}} ,
\]
and
\[
| f(x+z) \tau_{3h} \delta_{z} u (x) | \leq |h|^{1+\alpha} |z|.
\]
Integration produces the same estimate as for the first part.
We have established that $\partial_t [u(t^*) ]^2_{2+\gamma} <0$ at the critical time, which finishes the proof.
\medskip
\noindent \textsc{Flocking}. We have constructed solutions which enjoy the global bounds \eqref{e:u12} and \eqref{e:rR}, which in turn implies $|\nabla \rho|_\infty < C R$.
Arguing as in \cite{ST2}, we denote $\widetilde{\rho}(x,t) := \rho(x+ t \bar{u},t)$:
\[
\partial_t \widetilde{\rho} = - (u - \bar{u}) \cdot \nabla \widetilde{\rho} - d \widetilde{\rho} ,
\]
where all the $u$'s are evaluated at $x+ t \bar{u}$. According to the established bounds, the right-hand side is an exponentially decaying quantity in $L^\infty$:
\[
|(u - \bar{u}) \cdot \nabla \widetilde{\rho} + d \widetilde{\rho}|_\infty
\leq C e^{-\delta t}.
\]
Hence, $\widetilde{\rho}(t) $ is Cauchy as $t \rightarrow \infty$, and therefore there exists a unique limiting state, $\rho_\infty(x)$, such that
\[
| \widetilde{\rho} (\cdot,t) - \rho_\infty(\cdot)|_\infty < C_1 e^{-\delta t}.
\]
Shifting back to the original labels $x$ and setting $\bar{\rho}(x,t)=\rho_\infty(x-t\bar{u})$, we have
\[
| \rho(\cdot,t ) - \bar{\rho}(\cdot,t)|_\infty < C_1 e^{-\delta t}.
\]
We also have $\bar{\rho} \in W^{1+\alpha - \gamma,\infty}$ by compactness. Using again \eqref{e:rR} and by interpolation we have convergence in the $W^{1,\infty}$-metric as well:
\[
[ \rho(\cdot,t ) - \bar{\rho}(\cdot,t)]_1 < C_2 e^{-\delta t}.
\]
\medskip
\noindent \textsc{Stability}. The computation above shows that, in fact, the limiting flock $r_\infty$ differs little from the initial density $r_0$ under the conditions of \thm{t:stab}. Indeed, setting $R$ such that $\varepsilon = 1/R^N$ (here $\varepsilon>0$ is small), we obtain via \eqref{e:u12},
\[
| \partial_t \tilde{r} |_\infty \leq C R^{-2} e^{-c_0 t /R}.
\]
Hence, $| r_\infty - r_0 |_\infty \leq \frac{C}{c_0 R} = \varepsilon^\theta$. Since $|r_0 - \rho_\infty|<\varepsilon$, this completes the proof.
\section{Introduction}
\def\theequation{\arabic{section}.\arabic{equation}}
\label{sec:intro}
Nuclear forces have been extensively studied within the framework
of chiral effective field theory (EFT) over the past two decades; see
Refs.~\cite{Epelbaum:2008ga,Machleidt:2011zz} for review articles. In this approach, two-,
three- and more-nucleon forces are calculated from the most general
effective Lagrangian order by order in the
chiral expansion, i.e., a perturbative expansion in powers of $Q \in
\{p/\Lambda_b, \; M_\pi / \Lambda_b \}$ with $p$, $M_\pi$ and $\Lambda_b$ referring
to the magnitude of the typical nucleon three-momenta, the pion
mass and the breakdown scale, respectively.
Most of the calculations available so far utilize the heavy-baryon
formulation of chiral EFT with pions and nucleons being the only
active degrees of freedom and make use
of Weinberg's power counting for contact interactions based on naive
dimensional analysis \cite{Weinberg:1990rz,Weinberg:1991um}; see, however,
Refs.~\cite{Nogga:2005hy,Birse:2005um,Valderrama:2009ei,Epelbaum:2012ua,Gasparyan:2012km,Oller:2014uxa}
and
references therein for alternative formulations. Much progress has
been made within this framework in the past two years to improve the description of the
nucleon-nucleon (NN) force. First, the order-$Q^5$
(i.e., N$^4$LO) \cite{Entem:2014msa} and even most of the
order-$Q^6$ (N$^5$LO) contributions to the
NN force have been worked out \cite{Entem:2015xwa}. Second, a new
generation of NN potentials up to N$^4$LO has been developed
using semilocal \cite{Epelbaum:2014efa,Epelbaum:2014sza} and
nonlocal \cite{Entem:2017gor} regularization schemes; see also
Refs.~\cite{Piarulli:2014bda,Carlsson:2015vda,Piarulli:2016vel,Ekstrom:2017koy}
for related studies along similar lines.
In contrast to the previous
order-$Q^4$ (N$^3$LO) chiral NN potentials of
Refs.~\cite{Entem:2003ft,Epelbaum:2004fk}, the long-range part of
the interaction introduced in
Refs.~\cite{Epelbaum:2014efa,Epelbaum:2014sza} is regularized in
coordinate space by multiplying
with the function
\begin{equation}
\label{DefReg}
f \bigg( \frac{r}{R} \bigg) = \bigg[ 1 - \exp \bigg( -\frac{r^2}{R^2}
\bigg) \bigg]^n\,, \quad \quad n=6\,,
\end{equation}
while the contact interactions are regularized in momentum space using a non-local
Gaussian regulator with the cutoff $\Lambda = 2 R^{-1}$.
(See Refs.~\cite{Gezerlis:2013ipa,Gezerlis:2014zia,Piarulli:2014bda,Piarulli:2016vel}
for recently constructed chiral potentials with
locally regularized long-range interactions.)
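As a small numerical illustration of Eq.~(\ref{DefReg}) (values chosen purely for illustration): with $n=6$ the regulator is essentially $1$ for $r \gtrsim 3R$, vanishes like $(r/R)^{12}$ at short distances, and $R=0.9$~fm corresponds to $\Lambda = 2R^{-1} \approx 438$~MeV.

```python
# Illustration of the long-range regulator f(r/R) of Eq. (DefReg), n = 6,
# and the associated momentum cutoff Lambda = 2/R (example numbers only).
import math

def f_reg(r, R, n=6):
    return (1.0 - math.exp(-(r / R)**2))**n

R = 0.9          # fm (one of the available SCS cutoffs)
Lam = 2.0 / R    # fm^-1; with hbar*c = 197.327 MeV fm this is ~438 MeV
assert f_reg(3.0 * R, R) > 0.999     # long-range physics left untouched
assert f_reg(0.2 * R, R) < 1e-8      # short distances strongly suppressed
assert abs(Lam * 197.327 - 438.5) < 1.0
```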
The resulting semilocal coordinate-space regularized (SCS) chiral
potentials are available for $R=0.8$, $0.9$, $1.0$, $1.1$, and
$1.2$~fm. The use of a local regulator for the short-range part of
the interaction allows one to reduce the amount of finite-cutoff
artifacts; see, however, Ref.~\cite{Dyhdalo:2016ygz} for a discussion of regulator
artifacts in uniform matter. Furthermore, in contrast to the
first generation of the chiral N$^3$LO potentials, all pion-nucleon ($\pi$N)
low-energy constants (LECs) were determined from the $\pi$N system
without any fine tuning. Consequently, the long-range part of the
NN force is predicted in a parameter-free way. In fact, clear
evidence of the resulting (parameter-free) contributions to the two-pion
exchange at orders $Q^3$ (i.e., N$^2$LO) and $Q^5$ was found in NN
phase shifts \cite{Epelbaum:2014efa,Epelbaum:2014sza}. We further
emphasize that the approximate independence of the results for phase
shifts on the functional form of the coordinate-space regulator in Eq.~(\ref{DefReg})
was demonstrated in Ref.~\cite{Epelbaum:2014efa} at N$^3$LO by employing different
exponents $n=5$ and $n=7$ and introducing an additional spectral
function regularization with the momentum cutoff in the range of
$\Lambda = 1$ to $2$~GeV.
Very recently, a new family of semilocal
momentum-space regularized (SMS) chiral
NN potentials was introduced \cite{Reinert:2017usi}. In addition to
employing a momentum-space version of a local regulator for the
long-range interactions and using the $\pi N$ LECs from matching
pion-nucleon Roy-Steiner equations to chiral perturbation theory
\cite{Hoferichter:2015tha} (see also
Refs.~\cite{Krebs:2012yv,Alarcon:2012kn,Chen:2012nx,Wendt:2014lja,Yao:2016vbz,Siemens:2016hdi,Siemens:2016jwj,Siemens:2017opr,Hoferichter:2015hva,Hoferichter:2015tha}
for related work on
the determination of the $\pi N$ LECs),
the SMS potentials of Ref.~\cite{Reinert:2017usi} differ from the SCS
ones of Ref.~\cite{Epelbaum:2014efa,Epelbaum:2014sza} in the
determination of the NN contact interactions.
That is, the SMS potentials of Ref.~\cite{Reinert:2017usi} were fitted
directly to NN scattering data rather than to the Nijmegen partial wave
analysis \cite{NIJMI}. Another important difference concerns the
implementation of the contact interactions. In particular,
the SMS potentials of Ref.~\cite{Reinert:2017usi} utilize a
specific choice for $3$ redundant N$^3$LO contact operators out of
$15$,
which parametrize the unitary ambiguity in the
short-range part of the
nuclear force at this chiral order. This is in contrast to the
potentials of
Refs.~\cite{Entem:2003ft,Epelbaum:2004fk,Epelbaum:2014efa,Epelbaum:2014sza,Entem:2017gor},
where all $15$
order-$Q^4$ contact interactions were fitted to Nijmegen PWA and/or NN
scattering data.
Another important recent development is the establishment of a simple
algorithm for estimating the theoretical uncertainty from the
truncation of the chiral expansion \cite{Epelbaum:2014efa}. The new
method uses the available information on the chiral expansion of a
given observable to estimate the magnitude of neglected higher
order terms. To be specific, consider some few-nucleon observable $X(p)$ with $p$
being the corresponding momentum scale. The chiral expansion of $X$ up
to order $Q^n$ can be written in the form
\begin{equation}
X^{(n)} = X^{(0)} + \Delta X^{(2)} + \ldots + \Delta X^{(n)} \,,
\end{equation}
where we have defined
\begin{equation}
\Delta X^{(2)} \equiv X^{(2)} - X^{(0)}, \quad \quad \Delta X^{(i)}
\equiv X^{(i)} - X^{(i-1)} \quad
\mbox{for} \quad i \ge 3\,.
\end{equation}
Assuming that the chiral expansion of the nuclear force
translates into a similar expansion of the observable, one expects
\begin{equation}
\label{PCAssumption}
\Delta X^{(i)} = \mathcal{O} (Q^i X^{(0)})\,.
\end{equation}
In \cite{Epelbaum:2014efa}, the size of truncated contributions at a given order $Q^i$
was then
estimated via
\begin{equation}
\label{ErrorOrig}
\delta X^{(0)} = Q^2 | X^{(0)}|, \quad \quad \delta X^{(i)} =
\max_{2 \le j \le i} \Big( Q^{i+1} | X^{(0)} |, \; Q^{i+1-j} | \Delta
X^{(j)} | \Big) \; \; \mbox{for} \; \; i \ge 2\,,
\end{equation}
subject to the additional constraint
\begin{equation}
\label{ErrorOrig2}
\delta X^{(i)} \ge \max \Big( | X^{(j \ge i)} - X^{(k \ge i)} | \Big)\,,
\end{equation}
where the expansion parameter $Q$ was chosen as
\begin{equation}
\label{ExpPar}
Q = \max \bigg( \frac{p}{\Lambda_b}, \; \frac{M_\pi}{\Lambda_b} \bigg)\,.
\end{equation}
For the breakdown scale of the chiral expansion $\Lambda_b$, the
values of $\Lambda_b = 600$~MeV for $R=0.8,~0.9$ and $1.0$~fm, $\Lambda_b
= 500$~MeV for $R=1.1$~fm and $\Lambda_b= 400$~MeV for $R=1.2$~fm were
adopted based on an analysis of error plots~\cite{Epelbaum:2014efa}. Smaller
values of the breakdown scale for softer cutoffs reflect an
increasing amount of regulator artifacts.
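As a hedged illustration (the observable values below are invented toy numbers, not taken from any fit), the prescription of Eqs.~(\ref{ErrorOrig}), (\ref{ErrorOrig2}) can be implemented in a few lines:

```python
# Toy-number sketch of the truncation-error estimate of Eqs. (ErrorOrig),
# (ErrorOrig2). All inputs are invented for illustration.
def delta_X(X, Q):
    """X: dict {chiral order: prediction}, e.g. orders 0, 2, 3, 4, 5.
    Returns dict {order: estimated uncertainty delta X^(i)}."""
    orders = sorted(X)
    # order-by-order corrections Delta X^(i)
    dX = {orders[k]: X[orders[k]] - X[orders[k - 1]] for k in range(1, len(orders))}
    err = {0: Q**2 * abs(X[0])}
    for i in orders[1:]:
        cand = [Q**(i + 1) * abs(X[0])]
        cand += [Q**(i + 1 - j) * abs(dX[j]) for j in dX if 2 <= j <= i]
        e = max(cand)
        # additional constraint: at least the spread of all higher orders
        spread = max(abs(X[j] - X[k]) for j in orders for k in orders
                     if j >= i and k >= i)
        err[i] = max(e, spread)
    return err

# fictitious observable converging toward ~10.0 as the order increases
X = {0: 12.0, 2: 10.5, 3: 10.1, 4: 10.02, 5: 10.01}
Q = 0.31   # e.g. p ~ M_pi with Lambda_b ~ 450 MeV would give Q ~ 0.31
errors = delta_X(X, Q)
```

By construction the estimated uncertainty shrinks order by order for a convergent toy pattern, and Eq.~(\ref{ErrorOrig2}) guarantees it never falls below the observed spread of the higher-order results.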
The algorithm for uncertainty quantification specified above allows one to
circumvent some of the drawbacks of the previous
approach based on cutoff variation~\cite{Epelbaum:2004fk}, such as the relatively
narrow available range of cutoffs and the fact that
residual regulator dependence shows the impact of neglected contact
interactions, which contribute only at even orders $Q^{2n}$ of the
chiral expansion; see \cite{Epelbaum:2014efa} for a comprehensive
discussion. In addition, it provides an independent estimation of the theoretical
uncertainty for any given cutoff value.
This novel algorithm was already successfully applied in the
two-nucleon sector. In particular, the actual size of the N$^4$LO
corrections to NN phase shifts and scattering observables was shown in
Ref.~\cite{Epelbaum:2014sza} to be in a good agreement with the
estimated uncertainty at N$^3$LO \cite{Epelbaum:2014efa}.
A statistical interpretation of the theoretical error bars is discussed in Refs.~\cite{Furnstahl:2015rha,Melendez:2017phj}.
The theoretical developments outlined above open the way for
understanding and validating the details of the many-body forces and
exchange currents that constitute an important frontier in nuclear
physics. First steps along these lines were taken in
Ref.~\cite{Binder:2015mbz} by employing the SCS NN potentials of
Refs.~\cite{Epelbaum:2014efa,Epelbaum:2014sza} along with the novel algorithm for uncertainty
quantification to analyze elastic nucleon-deuteron scattering and
selected observables in $^3$H, $^4$He and $^6$Li. In order to allow for a
meaningful quantification of truncation errors in incomplete
calculations based on NN interactions only, a slightly modified
procedure for estimating the uncertainty at N$^2$LO and higher orders
was adopted, by using for $i \ge 3$
\begin{equation}
\label{ErrorMod}
\delta X^{(i)} = \max \Big( Q^{i+1} |X^{(0)}|, \; Q^{i-1} |\Delta
X^{(2)}|, \; Q^{i-2} |\Delta X^{(3)}|, \; Q \delta X^{(i-1)} \Big)
\end{equation}
instead of Eqs.~(\ref{ErrorOrig}), (\ref{ErrorOrig2}); see
Ref.~\cite{Binder:2015mbz} for more details. For many considered
observables, the results at N$^2$LO and higher orders were then found to
differ from experiment well outside the range of quantified
uncertainties, thus providing clear evidence for missing three-nucleon
forces.\footnote{We remind the reader that nuclear forces are scheme
dependent and not directly measurable. Our conclusions regarding the
expected size of three-body contributions refer to the framework we
employ.} Furthermore, the magnitude of the
deviations was found to be in agreement with the expected size of the
chiral three-nucleon force (3NF) whose first contributions appear at
N$^2$LO.
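The modified prescription of Eq.~(\ref{ErrorMod}) can be sketched analogously (again with invented toy inputs; $\delta X^{(2)}$ is taken from Eq.~(\ref{ErrorOrig}) as in the original scheme):

```python
# Toy sketch of the modified estimate Eq. (ErrorMod): for i >= 3,
#   delta X^(i) = max(Q^(i+1)|X0|, Q^(i-1)|DX2|, Q^(i-2)|DX3|,
#                     Q * delta X^(i-1)),
# with delta X^(2) from Eq. (ErrorOrig). All inputs are invented.
def delta_X_mod(X, Q):
    dX2, dX3 = X[2] - X[0], X[3] - X[2]
    err = {0: Q**2 * abs(X[0]),
           2: max(Q**3 * abs(X[0]), Q * abs(dX2))}
    for i in sorted(o for o in X if o >= 3):
        prev = max(k for k in err if k < i)
        err[i] = max(Q**(i + 1) * abs(X[0]), Q**(i - 1) * abs(dX2),
                     Q**(i - 2) * abs(dX3), Q * err[prev])
    return err

X = {0: 12.0, 2: 10.5, 3: 10.1, 4: 10.02, 5: 10.01}
Q = 0.31
errors = delta_X_mod(X, Q)
# uncertainties shrink order by order, but never faster than a factor of Q
assert all(errors[i] >= Q * errors[j] - 1e-12
           for i, j in [(3, 2), (4, 3), (5, 4)])
```

The last term in the maximum is what makes the estimate meaningful for incomplete calculations: it forbids the quoted uncertainty from collapsing faster than one power of the expansion parameter per order.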
The same modified approach to error analysis was used recently in
Ref.~\cite{Skibinski:2016dve} to analyze the cross section and selected
polarization observables for deuteron photodisintegration,
nucleon-deuteron radiative capture and three-body $^3$He
photodisintegration and to study the muon capture rates in $^2$H
and $^3$He. While these calculations are also incomplete as the 3NF was
not included and the axial (electromagnetic) currents were only taken
into account at the single-nucleon level (up to the two-nucleon level
via the Siegert theorem), most of the considered observables were
found to be in a good agreement with experimental data.
For recent applications of the approaches to error analysis outlined
above to nuclear matter properties and muonic deuterium see
Refs.~\cite{Hu:2016nkw} and \cite{Hernandez:2017mof}, respectively.
These
promising results provide important tests of the novel chiral
potentials.
In this paper we focus on the novel SCS chiral
potentials of Refs.~\cite{Epelbaum:2014efa,Epelbaum:2014sza} and
extend our earlier work \cite{Binder:2015mbz} in
various directions. First, in addition to elastic
nucleon-deuteron scattering, we also study some of the most
interesting breakup observables. We present the first applications of
the SCS chiral NN potentials to light- and medium-mass nuclei
beyond $^6$Li using a variety of \emph{ab initio} methods and discuss
in detail the corresponding convergence pattern with respect to
truncations of the model space. Last but not least, we address the
limitations and robustness of the
approach for uncertainty quantifications and consider some possible
alternatives.
Our paper is organized as follows. Section \ref{sec:Nd} is devoted to
the nucleon-deuteron
elastic and breakup scattering reactions. Our results for light nuclei
calculated by solving the Faddeev-Yakubovsky equations and/or using
the No-Core Configuration Interaction (NCCI) method are presented in section
\ref{sec:LightNuclei}, while those for medium-mass nuclei obtained
within the coupled-cluster (CC)
method and in-medium similarity
renormalization group (IM-SRG) method are given in section
\ref{sec:CoupledCluster}. Next, in section \ref{sec:Uncertainties}, we
explore some alternative approaches for uncertainty quantification.
Finally, the results of our work are summarized in section \ref{sec:Summary}.
\section{Nucleon-deuteron scattering}
\def\theequation{\arabic{section}.\arabic{equation}}
\label{sec:Nd}
\subsection{Faddeev calculations}
\label{nucl_ham}
Neutron-deuteron (nd) scattering with neutrons and protons interacting
via pairwise-interactions is
described in terms of an amplitude $T\vert \phi \rangle $ satisfying the
Faddeev-type integral equation~\cite{wit88,glo96}
\begin{eqnarray}
T\vert \phi \rangle &=& t P \vert \phi \rangle
+ t P G_0 T \vert \phi \rangle .
\label{eq1a}
\end{eqnarray}
Here $t$ represents the two-nucleon $t$-matrix, which is the solution of the
Lippmann-Schwinger equation with a given NN interaction.
The permutation operator $P=P_{12}P_{23} +
P_{13}P_{23}$ is given in terms of the transposition operators,
$P_{ij}$, which interchange nucleons $i$ and $j$. The incoming state $
\vert \phi \rangle = \vert \vec q_0 \rangle \vert \phi_d \rangle $
describes the free nd motion with relative momentum
$\vec q_0$ and the deuteron state $\vert \phi_d \rangle$.
Finally, $G_0$ is the resolvent of the three-body center-of-mass kinetic
energy.
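As a toy analogue only (plain linear algebra, with no nuclear-physics content), Eq.~(\ref{eq1a}) has the fixed-point form $T = b + KT$ with inhomogeneity $b = tP|\phi\rangle$ and kernel $K = tPG_0$; when the kernel is contractive such an equation can be solved by straightforward iteration, which the hypothetical sketch below illustrates with small random matrices.

```python
# Toy linear-algebra analogue (illustration only) of iterating the Faddeev
# equation T|phi> = t P |phi> + t P G0 T |phi>: a fixed-point problem
# T = b + K T, solved by Picard iteration (summing the Neumann series)
# when the kernel K is contractive.
import random

n = 5
random.seed(0)
K = [[0.1 * random.random() for _ in range(n)] for _ in range(n)]  # "small" kernel
b = [random.random() for _ in range(n)]

T = [0.0] * n
for _ in range(200):   # T -> b + K T; limit is (1 - K)^{-1} b
    T = [b[i] + sum(K[i][j] * T[j] for j in range(n)) for i in range(n)]

residual = max(abs(T[i] - b[i] - sum(K[i][j] * T[j] for j in range(n)))
               for i in range(n))
assert residual < 1e-10
```

In the actual scattering problem the iteration is performed in a partial-wave momentum basis and the kernel is not contractive at all energies, so resummation techniques are used in practice; the sketch only conveys the structure of the equation.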
The amplitude for elastic scattering leading to the corresponding
two-body final state $\vert \phi ' \rangle$ is then given by~\cite{glo96,hub97}
\begin{eqnarray}
\langle \phi' \vert U \vert \phi \rangle &=& \langle \phi'
\vert PG_0^{-1} \vert
\phi \rangle +
\langle \phi' \vert PT \vert \phi \rangle ,
\label{eq3}
\end{eqnarray}
while for the breakup reaction one has
\begin{eqnarray}
\langle \phi_0'\vert U_0 \vert \phi \rangle &=&\langle
\phi_0'\vert (1 + P)T\vert
\phi \rangle ,
\label{eq3_br}
\end{eqnarray}
where $\vert \phi_0' \rangle$ is the free three-body breakup channel state.
We refer to \cite{glo96,hub97,book} for a general overview of
3N scattering and for details on the practical implementation of
the Faddeev equations.
When solving the 3N Faddeev equation (\ref{eq1a}), we include the
NN force components with total two-nucleon angular momenta $j \le 5$
in 3N partial-wave states with total 3N angular momenta
$J \le 25/2$. This is sufficient to obtain converged results for incoming
neutron energies of $E_{\rm lab, \, n} \le 200$~MeV.
\subsection{Elastic nd scattering}
\label{nd_elas}
At low energies of the incoming neutron,
the elastic nd scattering analyzing
power $A_y$ with polarized neutrons has been a quantity of great interest
because predictions using standard high-precision NN
potentials (AV18~\cite{Wiringa:1994wb}, CDBonn~\cite{CDBONN},
Nijm1 and Nijm2~\cite{NIJMI})
fail to explain the experimental data for $A_y$. The data are
underestimated
by $\sim 30 \%$ in the region of the $A_y$ maximum, which occurs
at c.m.\ angles $\Theta_{c.m.} \sim 125^\circ$. Combining standard NN
potentials
with commonly used models of a 3NF, such as the Tucson-Melbourne (TM99)~\cite{TM99} or
Urbana IX~\cite{uIX} models,
removes approximately only half of the discrepancy (see left column in Fig.~\ref{fig1}).
\begin{figure}[tb]
\includegraphics[width=0.6\textwidth,keepaspectratio,angle=0,clip]{Nd_fig1.pdf}
\caption{(Color online) The nd elastic scattering analyzing power $A_y$ at
$E_{\rm lab, \, n}=5$~MeV, $10$~MeV, and $14.1$~MeV.
In the left panels the bottom (red) band covers
predictions of standard NN potentials: AV18, CD~Bonn, Nijm1 and
Nijm2. The upper (magenta) band results when these potentials
are combined with the TM99 3NF. The dashed (black) line shows prediction
of the AV18+Urbana IX combination.
In the right panel, predictions based
on the SCS chiral NN potentials of
Refs.~\cite{Epelbaum:2014efa,Epelbaum:2014sza} with the
coordinate-space cutoff parameter $R=0.9$~fm are shown.
The bands of
increasing width show estimated theoretical uncertainty at N$^4$LO (red),
N$^3$LO (blue), N$^2$LO (green) and NLO (yellow).
The filled circles are nd data from Ref.~\cite{tornow_ay} at
$6.5$~MeV, from Ref.~\cite{tornow_ay_10} at $10$~MeV, and from
Ref.~\cite{ay_14.1} at $14.1$~MeV.
}
\label{fig1}
\end{figure}
Using the old, nonlocally regularized chiral NN potentials of Refs.~\cite{Epelbaum:2004fk,epel_mod}, the
predictions for $A_y$ vary with the order of the chiral expansion.
In particular, as reported in
Ref.~\cite{epel2002}, the NLO results overestimate the $A_y$ data
while the N$^2$LO NN forces seem to be in quite a good
agreement with experiment, see Fig.~\ref{fig2}.
\begin{figure}[tb]
\hskip -1.2 true cm
\includegraphics[scale=0.6]{Nd_fig2.pdf}
\caption{
(Color online) The nd elastic scattering analyzing power $A_y$ at
$E_{\rm lab, \, n}=10$~MeV.
In the left panel, bands of predictions for five versions of
the old nonlocal chiral NN potentials of Refs.~\cite{Epelbaum:2004fk,epel_mod}
at different orders of the chiral expansion are
shown: NLO - the upper (magenta) band,
N$^2$LO - the middle (red) band, and N$^3$LO - the bottom
(green) band. These five versions correspond to different cutoff
values used for the Lippmann-Schwinger equation
and the spectral function regularizations, namely $(450,500)$~MeV,
$(450,700)$~MeV, $(550,600)$~MeV, $(600,500)$~MeV, and
$(600,700)$~MeV.
In the right panel, predictions based
on SCS chiral NN potentials of
Refs.~\cite{Epelbaum:2014efa,Epelbaum:2014sza} with the
coordinate-space cutoff parameter of $R=0.9$~fm are shown.
The bands of
increasing width show estimated theoretical uncertainty at N$^4$LO (red),
N$^3$LO (blue), N$^2$LO (green) and NLO (yellow).
The full circles are nd data from Ref.~\cite{tornow_ay_10} at $10$~MeV.
}
\label{fig2}
\end{figure}
Only when the N$^3$LO NN chiral
forces are used does
a clear discrepancy between theory and data emerge in the region of the $A_y$
maximum, which is similar to the one for standard forces. This is
visualized for $E_n=10$~MeV in the left panel of Fig.~\ref{fig2}, where
bands of predictions correspond to five versions of
the nonlocal NLO, N$^2$LO and N$^3$LO potentials of Refs.~\cite{Epelbaum:2004fk,epel_mod},
which differ from each other by the
cutoff parameters used for the Lippmann-Schwinger equation
and the spectral function
regularizations.
Such behavior of the $A_y$ predictions at different orders in the chiral
expansion can be traced back to a high sensitivity
of $A_y$ to $^3P_j$ NN force components
\cite{nijm_phase1,nijm_phase2}, which are
accurately
reproduced for the old nonlocal chiral potentials
only at N$^3$LO. This
is visualized in the left panel of Fig.~\ref{fig3}.
\begin{figure}[tb]
\includegraphics[scale=0.7]{Nd_fig3.pdf}
\caption{(Color online)
The neutron-proton $^3P_j$ phase-shifts as
a function of laboratory energy $E_{\rm lab}$.
In the left panel the solid (red), dashed (blue), and dotted (black) lines
show predictions of
the old chiral Bochum NLO, N$^2$LO, and N$^3$LO NN potentials with
the cutoff parameters of $(600,500)$~MeV.
In the right panel the dashed-dotted (indigo), solid (red), dashed (blue),
dotted (black), and dashed-double-dotted (magenta) lines are predictions of
the SCS chiral potentials with a local regulator and cutoff
$R=0.9$~fm at LO, NLO, N$^2$LO, N$^3$LO and N$^4$LO,
respectively.
The solid (brown) circles are experimental Nijmegen phase-shifts
\cite{nijm_phase1,nijm_phase2}.
}
\label{fig3}
\end{figure}
Contrary to the observed behavior of old potentials,
the predictions for $A_y$ based on the SCS NN chiral forces
turn out to be similar to those of the high-precision phenomenological
potentials already starting from NLO; see the right panel of
Fig.~\ref{fig2}.
This reflects the considerably improved convergence with the order of
the chiral
expansion of the novel semilocal potentials, as visualized in the
right panel of Fig.~\ref{fig3} for the case of the $^3P_2$ phase shift.
Only the LO values are far away from the empirical ones, while the NLO
results already turn out to be very close to those of the Nijmegen
partial wave analysis (NPWA)
at energies below $\approx 40$~MeV. The N$^2$LO, N$^3$LO, and N$^4$LO
results for the phase shifts overlap with each other and
with the NPWA values. The corresponding
$A_y$ predictions at orders above LO are very close to each other
as seen in the right panels of Figs.~\ref{fig1} and \ref{fig2}.
It is instructive to look at the estimated theoretical uncertainty from the
truncation of the chiral expansion shown in the right panels of
Figs.~\ref{fig1} and \ref{fig2}. Notice that our calculations for
three- and more-nucleon observables are incomplete starting from
N$^2$LO due to the missing 3NFs. The width of the bands calculated
using Eqs.~(\ref{ErrorOrig}), (\ref{ErrorOrig2}) at LO and NLO and
using Eq.~(\ref{ErrorMod}) starting from N$^2$LO show our
estimations of the expected theoretical uncertainties \emph{after}
inclusion of the corresponding 3NF contributions.
At the considered low energies, the theoretical uncertainty decreases
quite rapidly so that one expects precise predictions for $A_y$
starting from N$^3$LO.\footnote{We emphasize, however, that the usage
of Eq.~(\ref{ErrorMod}) in the incomplete calculations presented
here may lead to underestimation of the theoretical uncertainty at higher
orders. A more reliable estimation of the truncation error is
expected from
performing complete calculations that include 3NFs and using
Eqs.~(\ref{ErrorOrig}) and (\ref{ErrorOrig2}) at all orders. This work
is in progress.} Interestingly, our novel approach to uncertainty
quantification is capable of accounting for the already mentioned
strongly fine-tuned nature of this observable, which results in a large
theoretical uncertainty at NLO. Notice that the experimental data are correctly
described at this order within the errors.
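The max-type truncation estimate invoked here can be illustrated by a minimal numerical sketch. The function below is written in the spirit of the estimates of Eqs.~(\ref{ErrorOrig}) and (\ref{ErrorOrig2}); the expansion parameter $Q$ and the order-by-order values of the observable are hypothetical placeholders and not results of this work:

```python
# Illustrative sketch of a max-type chiral truncation-error estimate.
# The chiral orders carry powers Q^0, Q^2, Q^3, Q^4, Q^5 (no Q^1 term).

def truncation_error(X, Q):
    """X: predictions [X_LO, X_NLO, X_N2LO, ...]; Q: expansion parameter."""
    orders = [0, 2, 3, 4, 5][:len(X)]    # chiral orders nu of the entries
    k = orders[-1]                       # highest order included
    err = [Q**(k + 1) * abs(X[0])]       # scale set by the LO result
    for i in range(1, len(X)):
        # order-by-order shifts, promoted by the remaining powers of Q
        err.append(Q**(k + 1 - orders[i]) * abs(X[i] - X[i - 1]))
    return max(err)

X = [0.30, 0.12, 0.10, 0.095]            # hypothetical LO ... N3LO values
print(truncation_error(X, Q=0.31))
```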
It remains to be seen upon the inclusion of the 3NF
and performing complete calculations whether the $A_y$-puzzle will
survive at higher orders of the chiral expansion. Notice further that
at N$^4$LO, the 3NF involves purely short-range contributions with two
derivatives, which affect nucleon-deuteron (Nd) P-waves \cite{Girlanda:2011fh}. It is conceivable
that the inclusion of such terms will lead to a proper description
of $A_y$ once the corresponding LECs are tuned to reproduce
Nd scattering observables.
Apart from $A_y$ and the deuteron vector analyzing power $i
T_{11}$, which is known to show a similar behavior to $A_y$,
there is not much room for three-nucleon force effects in
elastic Nd scattering at low energies; see Ref.~\cite{Binder:2015mbz} for the
predictions of other observables at $10$~MeV. On the other hand,
significant disagreements with the data start to appear at
intermediate energies of $\sim 50$~MeV and higher.
As a representative example, we
show in Fig.~\ref{fig3a} our predictions for selected elastic scattering
observables at $135$~MeV.
\begin{figure}[tb]
\includegraphics[width=\textwidth,keepaspectratio,angle=0,clip]{Nd_fig3a.pdf}
\caption{(Color online) Predictions for the differential cross
section, nucleon and deuteron vector analyzing powers $A_y^{n}$
and $A_y^{d}$, deuteron tensor analyzing powers $A_{yy}$, $A_{xz}$ and
$A_{xx}$ and polarization-transfer coefficients $K_{xx}^{y'}$, $K_{y}^{y'}$,
and $K_{yy}^{y'}$ at the laboratory energy of $135$~MeV based on the
NN potentials of Refs.~\cite{Epelbaum:2014efa,Epelbaum:2014sza} for
$R=0.9$~fm without including the 3NF.
The bands of
increasing width show estimated theoretical uncertainty at N$^4$LO (red),
N$^3$LO (blue), N$^2$LO (green) and NLO (yellow). The dotted (dashed)
lines show the results based on the CD Bonn NN potential (CD Bonn NN
potential in combination with the Tucson-Melbourne 3NF).
Open circles are proton-deuteron data from
Refs.~\cite{Sakamoto:1996xdz,Sakai:2000mm,Sekiguchi:2002sf,Sekiguchi:2004yb}.
}
\label{fig3a}
\end{figure}
In addition to
the well-known underestimation of the differential cross section
minima, the spin observables calculated using NN interactions
alone start to show deviations from the data. These deviations tend to increase
with energy; see \cite{KalantarNayestanaki:2011wz} for a comprehensive
discussion. As shown in Ref.~\cite{Binder:2015mbz}, the
theoretical uncertainty of the chiral EFT results at N$^3$LO and N$^4$LO
is considerably smaller than the observed disagreements between the
predictions based on the NN forces and the experimental data even at
energies of the order of $200$~MeV. Our results suggest that elastic Nd
scattering in the energy range of $\sim 50$--$200$~MeV is very well
suited to study the detailed structure of the chiral 3NF.
\subsection{Nd breakup}
\label{nd_breakup}
Among the numerous kinematically complete configurations of the Nd breakup
reaction, the so-called symmetric space star (SST) and quasi-free
scattering (QFS) configurations have attracted special
attention.
The cross sections for these geometries
are very stable with respect to the underlying dynamics.
To be specific, different phenomenological potentials, alone or combined
with standard 3NFs, lead to very similar results for the
cross sections \cite{din1}, which deviate significantly from the available
SST and neutron-neutron (nn) QFS data.
At low energies, the cross sections in the SST and QFS configurations are
dominated by the S-waves. For the SST configuration, the largest
contribution to the cross section comes from the $^3S_1$ partial
wave, while for the nn QFS
the $^1S_0$ partial wave dominates.
Neglecting rescattering, the QFS configuration resembles free NN
scattering.
For elastic low-energy neutron-proton (np) scattering one expects
contributions from the $^1S_0$ np and $^3S_1$ force components. For elastic nn
scattering, only the $^1S_0$ nn channel is allowed by the Pauli principle. This suggests that
the nn QFS
is a powerful tool to study the nn interaction.
The measurements of np QFS cross sections have revealed a good agreement
between the data and theory \cite{nnqfs1}, thus confirming the knowledge of
the np force.
On the other hand, for the nn QFS, it was found that the theory underestimates the data by
$\sim 20\%$ \cite{nnqfs1,nnqfs2}. The
stability of the QFS cross sections
with respect to the underlying dynamics means that, assuming
correctness of the nn QFS data, the present-day
$^1S_0$ nn interaction is probably incorrect \cite{din1,din2,din3}.
\begin{figure}[tb]
\includegraphics[scale=0.65]{Nd_fig4.pdf}
\caption{(Color online) The SST (upper panel) and QFS (lower panel) nd breakup cross
sections at incoming neutron laboratory energy $E_{\rm lab, \, n}=13$~MeV (left graphs)
and $65$~MeV (right graphs), as a
function of the arc-length $S$ along the kinematical locus in the
$E_1-E_2$ plane.
The bands of
increasing width show estimated theoretical uncertainty at N$^4$LO (red),
N$^3$LO (blue), N$^2$LO (green) and NLO (yellow) based
on SCS chiral NN potentials of Refs.~\cite{Epelbaum:2014efa,Epelbaum:2014sza} with local
regulator and parameter $R=0.9$~fm.
(Blue) circles and (red) triangles are nd data from
Refs.~\cite{sst} and \cite{erlangen13a,erlangen13b}, respectively.
Proton-deuteron experimental data are shown as (black) squares. At $13$~MeV,
the pd data are from Ref.~\cite{koln13}.
At $65$~MeV, the pd data are from
Ref.~\cite{psi65sst} for the SST and from Ref.~\cite{psi65qfs}
for the QFS configurations.
}
\label{fig4}
\end{figure}
In the upper panel of Fig.~\ref{fig4}, we compare predictions of the SCS chiral potentials
at different orders to the SST cross section data at two incoming nucleon
energies $E=13$~MeV and $65$~MeV.
At $65$~MeV the theoretical uncertainty is large at NLO but decreases
rapidly at higher orders of the chiral expansion. One expects accurate
predictions at N$^3$LO and N$^4$LO. Given the good agreement with the
experimental data of Ref.~\cite{psi65sst} as visualized in the right panel of Fig.~\ref{fig4},
there is not much room for 3NF effects for this observable.
At $13$~MeV, the uncertainty bands are rather narrow at all considered
orders, but the nd and proton-deuteron (pd) breakup data are far away from the theory. The two nd data
sets stem from different measurements and both show a significant
disagreement with our theoretical results, although they appear
to be mutually inconsistent
for values of the kinematical locus variable $S$ in the range
of $S=5 \ldots 7$~MeV.
The pd data set shown in the upper-left panel of Fig.~\ref{fig4}
is supported by
other SST pd breakup measurements~\cite{sagarasst} in a similar energy range. The
calculations of the pd breakup with inclusion of the pp Coulomb
force \cite{deltuvabr}
revealed only very small Coulomb force effects for this configuration.
Since, at that energy, the SST configuration is practically dominated by
the S-wave NN force components, the big difference between pd and nd
data seems to indicate
a large charge-symmetry breaking in the $^1S_0$ NN partial wave. We
anticipate it to be very difficult to explain the large difference
between the nd and pd data sets by the inclusion of a 3NF without
introducing large charge symmetry breaking interactions.
Furthermore, the discrepancy between the theory and experimental pd
data is puzzling. It remains to be seen whether the inclusion of the
chiral 3NF will affect the results for the pd SST configuration at
this energy.
\begin{figure}[tb]
\includegraphics[scale=0.7]{Nd_fig5.pdf}
\caption{(Color online) The pp QFS pd breakup cross
section at incoming neutron laboratory energy $E_{\rm lab, \, n}=156$~MeV
as a function of the energy $E_1$ in the
$E_1-E_2$ plane.
The bands of
increasing width show estimated theoretical uncertainty at N$^4$LO (red),
N$^3$LO (blue), N$^2$LO (green) and NLO (yellow) based
on the SCS chiral NN potentials of Refs.~\cite{Epelbaum:2014efa,Epelbaum:2014sza} with local
regulator and parameter $R=0.9$~fm.
The (black) squares are pd data
of Ref.~\cite{pd156}.
}
\label{fig5}
\end{figure}
For the pp QFS geometry, we show in the lower panel of Fig.~\ref{fig4}
and in Fig.~\ref{fig5} the predictions based on the
SCS chiral potentials at $E=13$~MeV, $65$~MeV, and $156$~MeV,
together with the available pd breakup data. Again the theoretical
uncertainty rapidly decreases with an increasing order
of the chiral expansion, leading to very precise predictions
at N$^3$LO and N$^4$LO, which, in addition, agree well
with the pd breakup data. Assuming that the agreement will hold after
the inclusion of the corresponding 3NF, this provides, together with
the drastic $\approx 20 \%$ underestimation of the nn QFS data
found in \cite{nnqfs1,nnqfs2}, yet another indication of our poor
knowledge of the low-energy $^1S_0$ neutron-neutron force.
\begin{figure}[tb]
\includegraphics[scale=0.65]{Nd_fig6.pdf}
\caption{(Color online) The SST pd analyzing power
at incoming neutron laboratory energy $E_{\rm lab, \, n}=13$~MeV (left panel)
and $65$~MeV (right panel), as a
function of the arc-length $S$ along the kinematical locus in the
$E_1-E_2$ plane.
The bands of
increasing width show estimated theoretical uncertainty at N$^4$LO (red),
N$^3$LO (blue), N$^2$LO (green) and NLO (yellow) based
on the SCS chiral NN potentials of Refs.~\cite{Epelbaum:2014efa,Epelbaum:2014sza} with local
regulator and parameter $R=0.9$~fm.
In the left panel the (black) squares are pd data
of Ref.~\cite{koln13}.
In the right panel the (black) squares are pd data of
Ref.~\cite{psi65sst}.
}
\label{fig6}
\end{figure}
Finally, in Fig.~\ref{fig6}, the results for the nucleon analyzing power
$A_y$ in the SST and pp QFS geometries of the Nd
breakup reaction at $13$~MeV and $65$~MeV
are presented. Again, the theoretical uncertainty bands
become quite narrow with an
increasing order of the chiral expansion. There
appears to be reasonable agreement between experiment
and theory without 3NF contributions given
the large error bars of the available data.
\section{Light nuclei}
\def\theequation{\arabic{section}.\arabic{equation}}
\label{sec:LightNuclei}
\subsection{Faddeev--Yakubovsky calculations}
For the $A=3$ and $A=4$ bound state calculations, we solve Faddeev and Yakubovsky equations, respectively. These
calculations are performed in momentum space, which enables us to obtain high accuracy for binding energies and
also for the properties of the wave function.
For $A=3$, similar to the 3N scattering case, we rewrite the Schr\"odinger equation
into Faddeev equations
\begin{equation}
| \psi \rangle = G_0 \, t \, P \, | \psi \rangle \;.
\end{equation}
Here, $|\psi \rangle$ denotes the Faddeev component. The 3N wave function is related to the Faddeev
component by $| \Psi \rangle = (1+P)\, | \psi \rangle$. In contrast
to the 3N scattering problem, no singularities show up
for bound states since the energy is negative and below the binding energy of the deuteron.
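In practice, the homogeneous equation is solved as an eigenvalue problem: for a trial energy $E<0$ the kernel $G_0 \, t \, P$ has a largest eigenvalue $\lambda(E)$, and the binding energy is the energy at which $\lambda(E)=1$. The following schematic sketch illustrates the idea with an invented rank-one toy kernel, not the actual partial-wave kernel:

```python
import numpy as np

# Schematic sketch: the homogeneous equation |psi> = K(E)|psi> is solved
# by locating the trial energy E < 0 at which the largest eigenvalue
# lambda(E) of the kernel K(E) equals 1.  The separable, attractive
# "interaction" V below is an invented stand-in for t P.

n = 60
p = np.linspace(0.1, 10.0, n)              # toy momentum grid
f = np.exp(-0.5 * p**2)
V = -0.45 * np.outer(f, f)                 # toy rank-one interaction

def lam(E):
    """Largest eigenvalue of the kernel G0(E) V for trial energy E < 0."""
    G0 = 1.0 / (E - p**2)                  # schematic free propagator
    return np.linalg.eigvals(G0[:, None] * V).real.max()

# lambda(E) decreases monotonically as E moves away from threshold, so a
# simple bisection in E locates the bound state where lambda = 1.
E_lo, E_hi = -50.0, -0.01
while E_hi - E_lo > 1e-10:
    E = 0.5 * (E_lo + E_hi)
    if lam(E) > 1.0:
        E_hi = E                           # E still above the bound state
    else:
        E_lo = E                           # E below the bound state
print(f"toy binding energy: {E_lo:.6f},  lambda = {lam(E_lo):.6f}")
```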
We represent the equation using partial wave decomposed momentum eigenstates
\begin{equation}
| \, p_{12} \, p_3 \, \alpha \, \rangle = | \, p_{12} \, p_3 \, [
(l_{12} s_{12}) j_{12} (l_3 \frac{1}{2}) I_3 ] J_3 \ (t_{12}
\frac{1}{2} ) T_3\rangle
\;,
\end{equation}
where $p_{12}$, $l_{12}$, $s_{12}$, $t_{12}$ and $j_{12}$ are the
magnitude of the momentum, the orbital angular momentum, the spin, the
isospin
and the total angular momentum of the subsystem of nucleons 1 and 2.
$p_3$, $l_3$ and $I_3$ denote the magnitude of the momentum, the
orbital angular momentum and the total angular momentum
of the spectator nucleon relative to the (12) subsystem,
respectively.
The angular momenta and isospin quantum numbers are coupled together
with the spin and isospin $\frac{1}{2}$
of the third nucleon to the total angular momentum $J_3=\frac{1}{2}$
and isospin $T_3$. For the results shown below, we take angular
momenta up through $j_{12}=7$ into account and include both
$T_3=\frac{1}{2}$ and $T_3=\frac{3}{2}$ states.
We adopt $N_{12}=N_3=64$ mesh points
for the discretization of the momenta between 0 and
$p_{\max}=15$~fm$^{-1}$. We note that the solution of the
Lippmann-Schwinger equation
for $t$ requires a more extended momentum grid up to momenta of
$35$~fm$^{-1}$. We find that this choice of momenta guarantees that
our numerical
accuracy is better than 1~keV for the binding energy and, for the 3N
systems, also for the expectation values of the Hamiltonian $H$. The latter require
the calculation of
wave functions and are therefore more difficult to obtain.
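The momentum discretization can be sketched with Gauss-Legendre quadrature mapped onto $[0,p_{\max}]$. The linear map below is a generic illustration; the mapping actually employed in such calculations is typically nonlinear so as to concentrate mesh points at low momenta:

```python
import numpy as np

# Generic sketch of a momentum mesh: Gauss-Legendre nodes on [-1, 1]
# mapped linearly onto [0, p_max].  (The mapping used in actual few-body
# codes is often nonlinear; the linear map is the simplest illustration.)

def momentum_mesh(n, p_max):
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    return 0.5 * p_max * (x + 1.0), 0.5 * p_max * w

p, wp = momentum_mesh(64, 15.0)                 # 64 points up to 15 fm^-1
# sanity check: integrate p^2 exp(-p^2) over the mesh
integral = np.sum(wp * p**2 * np.exp(-p**2))
print(integral)                                 # close to sqrt(pi)/4 ~ 0.4431
```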
We take isospin breaking of the nuclear interaction
into account.
For the SCS chiral interactions, we add the point Coulomb
interaction in $pp$. The contribution
of the neutron/proton mass difference is later treated perturbatively and given in Table~\ref{tab:3n-exp}.
For the calculation of the binding energies and wave functions, we use an
averaged mass of $m_N=938.918$~MeV. More details on the computational aspects
can be found in \cite{Nogga:2001cz}. Results for the binding energies
are summarized in Table~\ref{tab:3n-faddeev}.
\begin{table}[t]
\renewcommand{\arraystretch}{1.2}
\begin{ruledtabular}
\begin{tabular}{l|lllll}
\multicolumn{1}{l}{$R$ [fm]} &
\multicolumn{1}{c}{LO} &
\multicolumn{1}{c}{NLO} &
\multicolumn{1}{c}{N$^2$LO} &
\multicolumn{1}{c}{N$^3$LO} &
\multicolumn{1}{c}{N$^4$LO}
\\ \hline
\multicolumn{6}{l}{$^3$H}
\\ \hline
0.8 & $-$12.038 & $-$8.044 & $-$8.039 & $-$7.569 & $-$7.489 \\
0.9 & $-$11.747 & $-$8.216 & $-$8.146 & $-$7.575 & $-$7.600 \\
1.0 & $-$11.295 & $-$8.380 & $-$8.282 & $-$7.534 & $-$7.642 \\
1.1 & $-$10.822 & $-$8.554 & $-$8.428 & $-$7.514 & $-$7.630 \\
1.2 & $-$10.394 & $-$8.727 & $-$8.579 & $-$7.481 & $-$7.580 \\
\hline
\multicolumn{6}{l}{$^3$He}
\\ \hline
0.8 & $-$11.151 & $-$7.312 & $-$7.303 & $-$6.867 & $-$6.794 \\
0.9 & $-$10.862 & $-$7.472 & $-$7.402 & $-$6.875 & $-$6.897 \\
1.0 & $-$10.423 & $-$7.624 & $-$7.528 & $-$6.837 & $-$6.935 \\
1.1 & $-$9.968 & $-$7.786 & $-$7.664 & $-$6.816 & $-$6.923 \\
1.2 & $-$9.561 & $-$7.948 & $-$7.806 & $-$6.783 & $-$6.876 \\
\end{tabular}
\end{ruledtabular}
\caption{\label{tab:3n-faddeev}
Calculated $^3$H and $^3$He binding energies
using chiral NN interactions at different orders of the chiral
expansion and at five different values of $R$. Energies are given
in MeV.}
\end{table}
In order to provide benchmark results, we also summarize expectation
values for the kinetic energy, the potential, the point proton and
neutron
rms radii and probabilities for $S$-, $P$- and $D$-states in Table~\ref{tab:3n-exp}.
Here, we restrict ourselves to N$^4$LO and $^3$H and compare to
results of two phenomenological interactions, AV18 and CDBonn, and
to ones based on the older series of chiral interactions of
Ref.~\cite{Epelbaum:2004fk} ($\Lambda,\tilde \Lambda=$(600, 700) MeV)
and the chiral interaction of Ref.~\cite{Entem:2003ft}.
Note that we have used, for the calculations with these forces, the EM
interaction of AV18 \cite{Wiringa:1994wb} acting in $pp$ and $nn$ in
order to be consistent
with previous calculations.
The deviation between the binding energy $E$ and the expectation value
$\langle H \rangle$ of the Hamiltonian is due to the
contribution of the mass difference of the proton and neutron to the
kinetic
energy $\langle T_{CSB} \rangle$, which we take into account for the
calculation
of $\langle H \rangle$ but which we do not consider for the solution
of the Faddeev equations.
We checked that results for $^3$He are close to the results for $^3$H
except that the sign of the contribution of the
proton/neutron mass difference is opposite and proton and neutron
radii are interchanged, as expected for mirror nuclei.
Because the Faddeev component converges faster with respect to
partial waves than the full wave function, it is advantageous to project on
Faddeev components whenever possible. Therefore, the wave function and
Faddeev component are normalized to $3 \langle \Psi | \psi \rangle=
1$.
The results for the norm $\langle \Psi | \Psi \rangle$ show that our
representation of the wave function includes 99.9\% of
the wave function. Nevertheless, we evaluate the kinetic energy by
again exploiting the faster convergence
of overlaps of the Faddeev component and the wave function via $\langle T
\rangle = 3 \langle \Psi | \, T \, | \psi \rangle$.
A similar trick for the potential operator is not possible and not
necessary since the potential operator suppresses contributions of
high angular momenta due to its finite range. Note that our choice of
normalization ensures that the relevant partial waves are properly
normalized and, therefore, the calculation of the expectation values
does not require a division by the norm of the wave function.
Compared to the results for the non-local chiral interactions of Ref.~\cite{Epelbaum:2004fk},
the kinetic energies tend to be larger now.
However, they only
become comparable to those of a standard local phenomenological interaction
like AV18 for smaller configuration-space regulators $R$.
For larger $R$, the expectation values are in better agreement with non-local interactions like CDBonn.
Generally, the observed pattern indicates
that the new interactions induce more NN short-range correlations than the chiral
interactions of Refs.~\cite{Epelbaum:2004fk,Entem:2003ft}
but, at least for larger $R$, still less than
phenomenological ones.
Notice that the kinetic energies at
N$^3$LO, which are not shown explicitly, are found to take similar
values, while those at NLO and N$^2$LO appear to be
significantly smaller. These findings are in line with the
nonperturbative nature of the SCS potentials at N$^3$LO and N$^4$LO
as found in the Weinberg eigenvalue analysis of
Ref.~\cite{Hoppe:2017lok}. As demonstrated in Ref.~\cite{Reinert:2017usi}, this feature can
be traced back to the large values of the LECs accompanying the
redundant N$^3$LO contact interactions in the $^1$S$_0$ and
$^3$S$_1$-$^3$D$_1$ channels.
The contribution of the $D$-wave component of the wave function is of the order of 6--7\%, which is comparable
to that for the non-local and older chiral interactions but smaller than the result for AV18. We note that the $D$-state
probability increases with increasing $R$. This is a feature of the higher-order interactions at N$^3$LO
and N$^4$LO. The lower-order interactions show the opposite behavior.
We found that
the proton and neutron rms radii are not strongly affected by the
regulator $R$. This is somewhat surprising given that the
kinetic energies are strongly dependent on $R$. It is also
interesting that the radii do not appear to be strictly correlated to the binding energies.
\begin{table}[t]
\renewcommand{\arraystretch}{1.2}
\begin{ruledtabular}
\begin{tabular}{l|lllllllllll}
\multicolumn{1}{c}{$R$ [fm]} &
\multicolumn{1}{c}{$E$} &
\multicolumn{1}{c}{$\langle H \rangle$} &
\multicolumn{1}{c}{$\langle T \rangle$} &
\multicolumn{1}{c}{$\langle V \rangle$} &
\multicolumn{1}{c}{$\langle T_{CSB} \rangle$} &
\multicolumn{1}{c}{$\langle \Psi | \Psi \rangle $} &
\multicolumn{1}{c}{$P(S) $} &
\multicolumn{1}{c}{$P(P) $} &
\multicolumn{1}{c}{$P(D) $} &
\multicolumn{1}{c}{$r_p $} &
\multicolumn{1}{c}{$r_n$}
\\ \hline
0.8 & $-$7.489 & $-$7.499 & 53.59 & $-$61.08 & $-$9.48 & 0.9989 & 93.95 & 0.033 & 6.02 & 1.675 & 1.849 \\
0.9 & $-$7.600 & $-$7.608 & 48.45 & $-$56.05 & $-$8.45 & 0.9993 & 93.91 & 0.034 & 6.06 & 1.669 & 1.838 \\
1.0 & $-$7.642 & $-$7.649 & 44.30 & $-$51.94 & $-$7.70 & 0.9995 & 93.70 & 0.035 & 6.27 & 1.670 & 1.838 \\
1.1 & $-$7.630 & $-$7.637 & 40.74 & $-$48.37 & $-$7.08 & 0.9996 & 93.16 & 0.040 & 6.80 & 1.678 & 1.845 \\
1.2 & $-$7.580 & $-$7.587 & 37.57 & $-$45.15 & $-$6.52 & 0.9998 & 92.58 & 0.046 & 7.37 & 1.689 & 1.858 \\
\hline
AV18 & $-$7.620 & $-$7.626 & 46.71 & $-$54.34 & $-$6.75 & 0.9988 & 91.43 & 0.066 & 8.51 & 1.653 & 1.824 \\
CDBonn & $-$7.981 & $-$7.987 & 37.59 & $-$45.57 & $-$5.85 & 0.9996 & 92.93 & 0.047 & 7.02 & 1.614 & 1.775 \\
N$^2$LO \cite{Epelbaum:2004fk} & $-$7.867 & $-$7.872 & 31.85 & $-$39.72 & $-$5.11 & 0.9995 & 93.43 & 0.039 & 6.53 & 1.624 & 1.787 \\
Idaho N$^{3}$LO \cite{Entem:2003ft} & $-$7.840 & $-$7.845 & 34.52 & $-$42.36 & $-$5.52 & 0.9998 & 93.65 & 0.037 & 6.32 & 1.653 & 1.812 \\
\end{tabular}
\end{ruledtabular}
\caption{\label{tab:3n-exp}
Binding energy $E$, expectation values of the Hamiltonian
$\langle H \rangle$, the kinetic energy $\langle T \rangle$, the
potential energy
$\langle V \rangle$ and the contribution of the
mass difference of proton and neutron to the kinetic energy
$\langle T_{CSB} \rangle$ for $^3$H at order N$^4$LO.
The calculated norm of the wave function
and probabilities for $S$-, $P$-, and $D$-states are also
shown. Finally, we also list results for the point proton and neutron
rms radii $r_p$ and $r_n$. Energies are given in MeV (except for
$\langle T_{CSB} \rangle$, which is given in keV), radii in fm
and probabilities in \%.
}
\end{table}
For $A=4$, we can rewrite the Schr\"odinger equation into Yakubovsky
equations for the two Yakubovsky components $|\psi_1\rangle $ and
$|\psi_2\rangle $. Again, we can recover the wave function by applying permutation operators,
$ | \Psi \rangle = [ 1- (1+P)P_{34}] (1+P) | \, \psi_1 \, \rangle +
(1+P)(1+ \tilde P) | \psi_2 \rangle $. In addition to the sum of
cyclic and anticyclic
permutations used in the 3N system, we also need a transposition of
nucleons 3 and 4, $P_{34}$, and the interchange of the subsystems (12)
and (34)
given by $\tilde P = P_{13} P_{24}$. The two coupled Yakubovsky equations then read
\begin{eqnarray}
| \, \psi_1 \, \rangle & = & G_0 \, t \, (1+P) \, [ \, (1-P_{34}) | \psi_1 \rangle + | \psi_2 \rangle \, ] \\
| \psi_2 \rangle & = & G_0 \, t \, \tilde P \, \, [ \, (1-P_{34}) | \psi_1 \rangle + | \psi_2 \rangle \, ] \ \ .
\end{eqnarray}
Here, $G_0$ and $t$ are, again, the free propagator and the NN t-matrix,
respectively. It is understood that they are embedded into the 4N
Hilbert space for this application.
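The action of the permutation operators can be made concrete at the level of particle labels. The short check below verifies that $\tilde P = P_{13} P_{24}$ indeed interchanges the pairs (12) and (34); it is a toy label-level illustration, not part of the actual partial-wave machinery:

```python
# Label-level check that P~ = P13 P24 interchanges the pairs (12) and (34).
# Particles are 0-indexed; a permutation is a tuple mapping slot -> label.

def compose(f, g):
    """Apply g first, then f."""
    return tuple(f[g[i]] for i in range(len(g)))

P13 = (2, 1, 0, 3)          # transposition of particles 1 and 3
P24 = (0, 3, 2, 1)          # transposition of particles 2 and 4
P_tilde = compose(P13, P24)
print(P_tilde)              # (2, 3, 0, 1): pair (12) <-> pair (34)
```

Since the two transpositions act on disjoint pairs, the order of composition does not matter, and $\tilde P$ is its own inverse.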
We again solve the equations in momentum space using a partial-wave
decomposed basis. The form of the equations
guarantees a rather fast convergence with respect to partial waves if
the two Yakubovsky components are expressed
in different basis sets. The first component is expanded in a set of
Jacobi momenta that separate the (12) subsystem ($p_{12}$), the motion
of the third nucleon relative to (12) ($p_3$) and the fourth nucleon
relative to the (123) subsystem ($q_4$)
\begin{equation}
| \, p_{12} \, p_3 \, q_4 \, \alpha \, \rangle = | \, p_{12} \, p_3 \,
q_4 \, \{ [ (l_{12} s_{12}) j_{12} (l_3 \frac{1}{2}) I_3 ] \, J_3
(l_4 \frac{1}{2}) \, I_4 \, \} J_4
\ [ (t_{12} \frac{1}{2} ) T_3 \frac{1}{2} ] T_4 \rangle \ \ .
\end{equation}
In addition to the quantities defined for the 3N system,
we require the orbital angular
momentum corresponding to the momentum of the fourth particle $l_4$,
its
total angular momentum $I_4$ and the total angular momentum and
isospin of the 4N system, $J_4$ and $T_4$, respectively. We refer to
this set of
basis states as 3+1 coordinates.
$| \psi_2 \rangle$ is expanded in states introducing relative momenta
within the subsystems (12) and (34), $p_{12}$ and $p_{34}$,
respectively, and
the relative momentum of these two subsystems $q$
\begin{equation}
| \, p_{12} \, p_{34} \, q \, \alpha \, \rangle = | \, p_{12} \,
p_{34} \, q \, \{ [ (l_{12} s_{12}) j_{12} \lambda ] \, I
(l_{34} s_{34}) j_{34} \, \} J_4
\ (t_{12} t_{34} ) T_4 \rangle \ \ .
\end{equation}
The angular momenta, spin and isospin of the (34) system are given by
$l_{34}$, $s_{34}$, $j_{34}$ and $t_{34}$.
The angular dependence of the $q$ momentum is expanded in orbital
angular momenta $\lambda$. The angular momenta
are coupled as indicated by brackets to the total 4N angular momentum
$J_4=0$ and isospin $T_4=0$. Below, we refer to these basis
states as 2+2 coordinates.
We again use 64 mesh points for the discretization of the momenta up
to 15~fm$^{-1}$. The only exception is the $q$ momentum, for which 48
mesh points were sufficient to obtain binding energies with an accuracy
better than 10~keV and the expectation value of the Hamiltonian
with an accuracy better than 50~keV. Again, the two-body angular
momentum is restricted to $j_{12}^{\max} =7$. We also restrict all
orbital
angular momenta to 8 and the sum of all angular momenta to 16. For
the four-body calculations, we assume that the $^4$He system is a pure
$T_4=0$
state. The neutron/proton mass difference does not contribute in this
case. More details on the computational aspects can again be found in
\cite{Nogga:2001cz}. Results for the binding energies of the ground
state are summarized in Table~\ref{tab:4n-yakubovsky}. Notice that we
predict
a bound excited state for the leading-order interactions. The binding
energies for these excited states vary between $-12.6$ and
$-10.4$\,MeV depending
on the regulator parameter $R$. This second bound state disappears at higher orders.
\begin{table}[t]
\renewcommand{\arraystretch}{1.2}
\begin{ruledtabular}
\begin{tabular}{l|lllll}
\multicolumn{1}{c}{$R$ [fm]} &
\multicolumn{1}{c}{LO} &
\multicolumn{1}{c}{NLO} &
\multicolumn{1}{c}{N$^2$LO} &
\multicolumn{1}{c}{N$^3$LO} &
\multicolumn{1}{c}{N$^4$LO}
\\ \hline
\multicolumn{6}{l}{$^4$He}
\\ \hline
0.8 & $-$50.14 & $-$26.50 & $-$26.68 & $-$23.93 & $-$23.43 \\
0.9 & $-$48.39 & $-$27.52 & $-$27.28 & $-$23.93 & $-$24.02 \\
1.0 & $-$45.46 & $-$28.55 & $-$28.13 & $-$23.77 & $-$24.29 \\
1.1 & $-$42.34 & $-$29.72 & $-$29.11 & $-$23.73 & $-$24.30 \\
1.2 & $-$39.43 & $-$30.92 & $-$30.16 & $-$23.64 & $-$24.13 \\
\end{tabular}
\end{ruledtabular}
\caption{\label{tab:4n-yakubovsky}
Calculated $^4$He binding energies
using chiral NN interactions at different orders of the chiral expansion and at five different values of $R$. Energies are given in MeV. }
\end{table}
For N$^4$LO, we also show expectation values of the Hamiltonian, the kinetic energy, the potential and the point proton rms
radius in Table~\ref{tab:4n-exp}. We again compare our results to AV18, CD-Bonn and the two older chiral interactions.
The Yakubovsky components, like the Faddeev component in the 3N system, converge
much faster with respect to partial waves than the wave function. Therefore, we normalize the wave function using the
relation $12 \langle \psi_1 | \Psi \rangle + 6 \langle \psi_2 | \Psi \rangle = 1$ and calculate the kinetic energy using a
corresponding overlap of the wave function and the Yakubovsky components in the coordinates natural for the
Yakubovsky component involved. The wave function itself can be expanded in 3+1 or 2+2 coordinates. We therefore
give two values for the expectation value of $H$ and $V$ in the table. The first ones are obtained using the wave function
expressed in 3+1 coordinates. The second ones are based on 2+2 coordinates. In particular, for $H$, we observe small
deviations of the results, which indicate that higher partial-wave contributions are not completely negligible when small
cutoffs $R$ are used. The deviation between the binding energy and the expectation values is partly due to the missing
angular momentum states but also due to the restriction to isospin $T=0$ states. Generally,
the wave function seems to be better represented in 3+1 coordinates. Nevertheless, even in 2+2 coordinates,
the agreement of expectation values and binding energies is excellent. This is a non-trivial confirmation of our
results. We note that the N$^4$LO results are the numerically most demanding ones since they required denser
momentum grids and more partial waves for convergence. Finally, Table~\ref{tab:4n-exp} gives results for the
point proton radii that, in our $T_4=0$ approximation, exactly agree with the point neutron radii. Again, we find
that there is no strict correlation between the radii and the binding energies. The radii are remarkably independent of the
cutoff parameter $R$. In the following section, we extend these calculations towards more complex systems
using the no-core configuration interaction (NCCI) approach.
\begin{table}[t]
\renewcommand{\arraystretch}{1.2}
\begin{ruledtabular}
\begin{tabular}{l|lllllll}
\multicolumn{1}{c}{$R$ [fm]} &
\multicolumn{1}{c}{$E$} &
\multicolumn{1}{c}{$\langle H \rangle_1$} &
\multicolumn{1}{c}{$\langle H \rangle_2$} &
\multicolumn{1}{c}{$\langle T \rangle$} &
\multicolumn{1}{c}{$\langle V \rangle_1$} &
\multicolumn{1}{c}{$\langle V \rangle_2$} &
\multicolumn{1}{c}{$r_p $}
\\ \hline
0.8 & $-$23.43 & $-$23.39 & $-$23.37 & 112.9 & $-$136.2 & $-$136.2 & 1.557 \\
0.9 & $-$24.02 & $-$24.00 & $-$23.99 & 101.4 & $-$125.3 & $-$125.3 & 1.545 \\
1.0 & $-$24.29 & $-$24.27 & $-$24.27 & 91.9 & $-$116.1 & $-$116.2 & 1.546 \\
1.1 & $-$24.30 & $-$24.28 & $-$24.29 & 83.7 & $-$108.0 & $-$108.0 & 1.554 \\
1.2 & $-$24.13 & $-$24.11 & $-$24.12 & 76.5 & $-$100.6 & $-$100.6 & 1.568 \\
\hline
AV18 & $-$24.25 & $-$24.21 & $-$24.16 & 97.7 & $-$121.9 & $-$121.9 & 1.515 \\
CDBonn & $-$26.16 & $-$26.08 & $-$26.07 & 77.6 & $-$103.6 & $-$103.6 & 1.457 \\
N$^{2}$LO \cite{Epelbaum:2004fk} & $-$25.60 & $-$25.58 & $-$25.59 & 62.58 & $-$88.16 & $-$88.16 & 1.478 \\
Idaho N$^3$LO \cite{Entem:2003ft} & $-$25.38 & $-$25.37 & $-$25.37 & 69.18 & $-$94.55 & $-$94.55 & 1.518 \\
\end{tabular}
\end{ruledtabular}
\caption{\label{tab:4n-exp}
Binding energy $E$, expectation values of the Hamiltonian in 3+1 (2+2) coordinates $\langle H \rangle_1$ ($\langle H \rangle_2$),
the kinetic energy $\langle T \rangle$, the potential energy in 3+1 (2+2) coordinates
$\langle V \rangle_1$ ($\langle V \rangle_2$) for $^4$He at order N$^4$LO. We also give results for the point proton
rms radii $r_p$. Energies are given in MeV and radii in fm. }
\end{table}
\subsection{No-Core Configuration Interaction calculations}
\label{sec:ncci}
For larger nuclei, $A > 4$, we use
NCCI methods to solve the many-body Schr\"odinger equation. These
methods have advanced rapidly in recent years and one can now
accurately solve fundamental problems in nuclear structure and
reaction physics using realistic interactions, see
e.g., Ref.~\cite{Barrett:2013nh} and references therein. In this
section we follow Refs.~\cite{Maris:2008ax,Maris:2013poa} where, for a
given interaction, we diagonalize the resulting many-body Hamiltonian
in a sequence of truncated harmonic-oscillator (HO) basis spaces. The
basis spaces are characterized by two parameters: $N_{\max}$ specifies
the maximum number of total HO quanta beyond the HO Slater determinant
with all nucleons occupying their lowest-allowed orbitals and
$\hbar\omega$ specifies the HO energy. The goal is to achieve
convergence as indicated by independence of these two basis
parameters, either directly or by extrapolation~\cite{Maris:2008ax,Coon:2012ab,Furnstahl:2012qg,More:2013rma,Wendt:2015nba}.
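A common way to extrapolate in $N_{\max}$ is the exponential ansatz $E(N_{\max}) = E_\infty + a\, e^{-b N_{\max}}$, one of several schemes discussed in the references above. With three energies at equally spaced $N_{\max}$ values, the ansatz can be solved in closed form; the numbers below are synthetic, generated from the ansatz itself rather than taken from actual calculations:

```python
import math

# Three-point exponential extrapolation E(N) = E_inf + a*exp(-b*N).
# For three energies at equally spaced N_max the ansatz is solved exactly
# by a Shanks-type formula built from successive differences.

def extrapolate(E1, E2, E3):
    """E1, E2, E3 at three equally spaced, increasing N_max values."""
    d1, d2 = E1 - E2, E2 - E3
    return E3 - d2**2 / (d1 - d2)

E_inf, a, b = -28.0, 10.0, 0.35            # synthetic "true" parameters
E = [E_inf + a * math.exp(-b * N) for N in (4, 6, 8)]
print(extrapolate(*E))                     # recovers -28.0 up to rounding
```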
In order to improve the convergence behavior of the many-body
calculations we employ a consistent unitary transformation of the
chiral Hamiltonians. Specifically, we use the Similarity
Renormalization Group
(SRG)~\cite{Glazek:1993rc,Wegner:1994,Bogner:2007rx,Bogner:2009bt}
approach that provides a straightforward and flexible framework for
consistently evolving (softening) the Hamiltonian and other operators,
including three-nucleon
interactions~\cite{Jurgenson:2009qs,Roth:2011ar,Jurgenson:2013yya,Roth:2013fqa}.
In particular, at N$^3$LO and N$^4$LO this additional ``softening'' of
the chiral NN potential is necessary in order to obtain sufficiently
converged results on current supercomputers.
In the SRG approach, the unitary transformation of an operator,
e.g., the Hamiltonian, is formulated in terms of a flow equation
\begin{eqnarray}
\frac{d}{d\alpha}H_{\alpha} &=& [\eta_{\alpha},H_{\alpha}] \,,
\label{eq:flow}
\end{eqnarray}
with a continuous flow parameter $\alpha$. The initial condition for
the solution of this flow equation is given by the `bare' chiral
Hamiltonian. The physics of the SRG evolution is governed by the
anti-hermitian generator $\eta_{\alpha}$. A specific form widely used
in nuclear physics~\cite{Bogner:2009bt} is given by
\begin{eqnarray}
\eta_{\alpha} &=& m_N^2 [T_{\text{int}},H_{\alpha}]\,,
\end{eqnarray}
where $m_N$ is the nucleon mass and $T_{\text{int}}$ is the intrinsic
kinetic-energy operator. This generator drives the Hamiltonian
towards a diagonal form in a basis of eigenstates of the intrinsic
kinetic energy, i.e., towards a diagonal form in momentum space.
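As a schematic numerical illustration of how this generator drives the Hamiltonian toward diagonal form while preserving its spectrum, consider the following toy two-level sketch (arbitrary illustrative numbers, not the production evolution codes used in this work):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy two-level illustration of the SRG flow dH/dalpha = [eta, H] with
# generator eta = [T, H]; all matrix elements are arbitrary illustrative
# numbers (units suppressed), not a realistic nuclear Hamiltonian.
T = np.diag([0.0, 2.0])                   # diagonal "kinetic energy"
H0 = np.array([[1.0, 0.5],
               [0.5, 3.0]])               # initial "Hamiltonian"

def flow(alpha, h):
    H = h.reshape(2, 2)
    eta = T @ H - H @ T                   # eta = [T, H]
    return (eta @ H - H @ eta).ravel()    # dH/dalpha = [eta, H]

sol = solve_ivp(flow, (0.0, 5.0), H0.ravel(), rtol=1e-10, atol=1e-12)
H_evolved = sol.y[:, -1].reshape(2, 2)

# The off-diagonal coupling is exponentially suppressed ...
print(abs(H_evolved[0, 1]))               # tiny compared to the initial 0.5
# ... while the spectrum is preserved, since the flow is unitary:
print(np.linalg.eigvalsh(H0))
print(np.linalg.eigvalsh(H_evolved))
```

In the realistic calculations the analogous flow, truncated at the three-body level, is solved in the HO Jacobi basis as described below.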
Along with the reduction in the coupling of low-momentum and
high-momentum components by the Hamiltonian, the SRG induces many-body
operators beyond the particle rank of the initial Hamiltonian. In
principle, all induced terms up to the $A$-body level should be
retained to ensure that the transformation is unitary and the spectrum
of the Hamiltonian is independent of the flow parameter. Here, we
truncate the evolution at the three-nucleon level, neglecting four-
and higher multi-nucleon induced interactions, which formally violates
unitarity. For consistency, we check that for $A=3$ we recover the
exact results (for a given input potential); and for $A \ge 4$ we
perform our calculations at two different values of $\alpha$ and
compare our results with calculations without SRG evolution.
The flow equation for the three-body system is solved using a HO
Jacobi-coordinate basis~\cite{Roth:2013fqa}. The intermediate sums
in the three-body Jacobi basis are truncated at $N_{\max} = 40$ for
channels with $J \leq 7/2$, $N_{\max}=38$ for $J=9/2$, and $N_{\max} = 36$
for $J \geq 11/2$. The SRG evolution and subsequent transformation to
single-particle coordinates were performed on a single node using an
efficient OpenMP parallelized code.
For the NCCI calculations we employ the code
MFDn~\cite{Maris:2010,Aktulga:2014}, which is highly optimized for
parallel computing on current high-performance computing platforms.
The size of the largest
feasible basis space is constrained by the total number of three-body
matrix elements required as input, as well as by the number of
many-body matrix elements that are computed and stored for the
iterative Lanczos diagonalization procedure.
We can perform $^4$He calculations up to $N_{\max} = 14$ with 3N
interactions, but calculations of $A=6$ and $7$ nuclei are
restricted to $N_{\max} = 12$, and for $A>10$ we can only go up to
$N_{\max} = 8$ with (induced) 3N interactions. Note that with bare NN
interactions, i.e., without the SRG evolution and the induced 3N terms,
we can go to significantly larger basis spaces, namely $N_{\max} = 20$
for $^4$He; $N_{\max} = 18$ for $A=6$; $N_{\max} = 16$ for $A=7$;
$N_{\max} = 14$ for $A=8$; $N_{\max} = 12$ for $A=9$ and $10$; and
$N_{\max} = 10$ for $A=16$. The larger basis spaces achievable with
NN-only interactions arise due to the significantly smaller memory
footprint of the input Hamiltonian matrix element files and the
smaller memory footprint of the many-body Hamiltonian itself, which is
stored completely in our calculations. For the latter, a reduction in
memory footprint by approximately a factor of 40 has been reported for
NN-only interactions compared to NN+3N
interactions~\cite{Vary:2009qp}. The many-body calculations
were performed on the Cray XC30 Edison at NERSC and the IBM BG/Q Mira
at Argonne National Laboratory.
Finally, compared to the few-body bound state calculations presented
above, we use the following simplifications in our many-body
calculations: we employ the same (average) nucleon mass for the protons
and the neutrons, $m_N=938.92$~MeV.
Also, we do include the two-body Coulomb potential
between (point-like) protons, but we do not take any higher-order
electromagnetic effects into account.
Furthermore, here and in what follows we
restrict ourselves to the intermediate values of the coordinate-space
regulator of $R=0.9$, $1.0$ and $1.1$~fm. The smallest available
cutoff choice of $R=0.8$~fm leads to highly nonperturbative NN
potentials \cite{Hoppe:2017lok,Reinert:2017usi}, which cannot be employed in many-body calculations without
SRG evolution or similar softening approaches. On the other hand, the softest regulator choice of $R=1.2$~fm is known to
lead to large finite-regulator artifacts \cite{Epelbaum:2014efa,Furnstahl:2015rha,Melendez:2017phj}, and for this reason
we do not consider it in the following calculations.
\subsection{Results for ground state energies}
\begin{figure}
\includegraphics[width=0.99\columnwidth]{res_gs_4He6Li_BCD_extrapolated.pdf}
\caption{\label{Fig:groundstate_extrapolated}
(Color online) Ground state energies of $^4$He and $^6$Li at or just above the
variational minimum in $\hbar\omega$ as a function of the basis truncation parameter
$N_{\max}$ for LO to N$^4$LO chiral NN potentials:
without SRG evolution (black dots) at $R=1.0$~fm;
and for NN potentials at $R=0.9$~fm (open squares and open diamonds),
$R=1.0$~fm (plusses and crosses),
and $R=1.1$~fm (solid squares and solid diamonds),
SRG evolved to $0.04$~fm$^4$ (red) and $0.08$~fm$^4$ (blue),
including induced 3NF.
Dotted ($R=1.1$~fm), solid ($R=1.0$~fm), and dashed ($R=0.9$~fm)
lines are exponential fits to the highest three $N_{\max}$ points for cases
where convergence is not well-established by direct calculation.}
\end{figure}
In Fig.~\ref{Fig:groundstate_extrapolated} we show our results for the
ground state energies of $^4$He and $^6$Li at LO to N$^4$LO, both
without SRG evolution (for $R=1.0$~fm only) and with SRG evolution
(for $R=0.9$, $1.0$, and $1.1$~fm) including the induced 3N terms as
mentioned above. (Note that, starting at N$^2$LO, there are also 3NFs
in the chiral expansion, which are not incorporated in the
calculations presented here.) Before examining these results in
detail, we first make several qualitative observations:
(1) The overall trends are the same for the different chiral cutoffs:
significant overbinding at LO, close to the experimental values at NLO
and N$^2$LO, and underbinding at N$^3$LO and N$^4$LO.
(2) The dependence on the chiral cutoff $R$ decreases with increasing
chiral order, as expected.
(3) The convergence rate changes dramatically with the chiral order --
in particular when going from N$^2$LO to N$^3$LO, as anticipated by
the Weinberg eigenvalue analysis of Ref.~\cite{Hoppe:2017lok}. However,
after applying the SRG evolution, convergence is reasonable, and the
dependence of the converged energies on the SRG parameter $\alpha$ is
negligible on the scale of these plots.
\begin{table}[t]
\renewcommand{\arraystretch}{1.2}
\begin{ruledtabular}
\begin{tabular}{l|lllll}
\multicolumn{1}{l}{$R$ [fm]} &
\multicolumn{1}{c}{LO} &
\multicolumn{1}{c}{NLO} &
\multicolumn{1}{c}{N$^2$LO} &
\multicolumn{1}{c}{N$^3$LO} &
\multicolumn{1}{c}{N$^4$LO}
\\ \hline
\multicolumn{6}{l}{$^4$He}
\\ \hline
$0.9$ & $-48.284\pm 0.002\pm 0.17$ & $-27.49 \pm 0.01 \pm 0.03$ & $-27.23 \pm 0.01 \pm 0.01$ & $-23.71 \pm 0.01 \pm 0.01$ & $-23.85\pm 0.01\pm 0.01$
\\
$1.0$ & $-45.407\pm 0.001\pm 0.12$ & $-28.542\pm 0.004\pm 0.03$ & $-28.113\pm 0.006\pm 0.01$ & $-23.59 \pm 0.01 \pm 0.01$ & $-24.14\pm 0.01\pm 0.01$
\\
$1.1$ & $-42.312\pm 0.001\pm 0.07$ & $-29.723\pm 0.002\pm 0.02$ & $-29.102\pm 0.003\pm 0.01$ & $-23.59 \pm 0.01 \pm 0.01$ & $-24.18\pm 0.01\pm 0.01$
\\ \hline
\multicolumn{6}{l}{$^6$Li}
\\ \hline
$0.9$ & $-48.7 \pm 0.4 \pm 0.2$ & $-30.5 \pm 0.1 \pm 0.1$ & $-30.2 \pm 0.1 \pm 0.1$ & $-26.2 \pm 0.2 \pm 0.1$ & $-26.3 \pm 0.2 \pm 0.1$
\\
$1.0$ & $-46.7 \pm 0.3 \pm 0.2$ & $-31.6 \pm 0.1 \pm 0.1$ & $-31.0 \pm 0.1 \pm 0.1$ & $-26.3 \pm 0.2 \pm 0.1$ & $-26.9 \pm 0.3 \pm 0.1$
\\
$1.1$ & $-44.4 \pm 0.3 \pm 0.1$ & $-32.8 \pm 0.1 \pm 0.1$ & $-32.0 \pm 0.1 \pm 0.1$ & $-26.4 \pm 0.2 \pm 0.1$ & $-27.1 \pm 0.3 \pm 0.1$
\end{tabular}
\end{ruledtabular}
\caption{\label{Tab:groundstate_A46_R}
Calculated $^4$He and $^6$Li ground state energies (in MeV)
using chiral NN interactions at three different values of $R$,
SRG evolved to $\alpha=0.04$~fm$^4$ (including induced
3NFs). The first theoretical error is the extrapolation
uncertainty estimate following Ref.~\cite{Maris:2013poa},
whereas the second is an estimate of the SRG error, based on
the difference between results at $\alpha=0.04$~fm$^4$ and
$\alpha=0.08$~fm$^4$, due to omitting the induced multi-nucleon
interactions at and above 4NFs.}
\end{table}
Based on the results in finite basis spaces, we can use extrapolations
to the complete (infinite-dimensional) basis. Here we use a three-parameter
fit at fixed $\hbar\omega$ at or just above the variational
minimum
\begin{eqnarray}
E(N_{\max}) &\approx& E_{\infty} + a \exp{(-b N_{\max})} \,,
\end{eqnarray}
which seems to work well for a range of interactions and
nuclei~\cite{Maris:2008ax,Maris:2013poa}. The lines in
Fig.~\ref{Fig:groundstate_extrapolated} correspond to the
extrapolating function fitted to the highest available $N_{\max}$
values.
Our estimate of the extrapolation uncertainty is based on the
difference with smaller $N_{\max}$ extrapolations, as well as the
basis $\hbar\omega$ dependence over an $8$ to $12$~MeV span in
$\hbar\omega$ values around the variational
minimum~\cite{Maris:2013poa}.
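The exponential extrapolation can be carried out with a standard least-squares fit; the following sketch uses synthetic energies (hypothetical illustrative numbers, not values from our tables) to show the procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

# Three-parameter extrapolation E(Nmax) = E_inf + a * exp(-b * Nmax),
# fitted to the highest three Nmax points at fixed hbar-omega.
# The energies below are synthetic illustrative values (in MeV).
def model(nmax, e_inf, a, b):
    return e_inf + a * np.exp(-b * nmax)

nmax = np.array([8.0, 10.0, 12.0])
energies = np.array([-28.05, -28.42, -28.55])

popt, _ = curve_fit(model, nmax, energies, p0=(-28.6, 20.0, 0.5))
e_inf, a, b = popt
print(f"extrapolated E_inf = {e_inf:.2f} MeV")  # below the lowest data point
```

The extrapolation uncertainty can then be estimated, as described above, by repeating the fit with smaller $N_{\max}$ sets and varying $\hbar\omega$ around the variational minimum.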
As a consistency check, we first performed calculations for $A=3$:
including induced 3N contributions the results with and without SRG
evolution are in agreement with each other, to within the estimated
convergence or extrapolation uncertainty. Furthermore, they also
agree with the Faddeev binding energies of Table~\ref{tab:3n-faddeev}
to within the estimated accuracy.
Our results with SRG evolution to $\alpha=0.04$~fm$^4$ for the ground
state energies of $^4$He and $^6$Li are summarized in
Table~\ref{Tab:groundstate_A46_R}. In addition to the extrapolation
uncertainty, we also give, as a second (systematic) contribution to
the uncertainties, the difference between the ground state energies at
$\alpha=0.04$~fm$^4$ and at $\alpha=0.08$~fm$^4$, which may serve as an
indication of the `error' made by neglecting induced many-body forces.
The $^4$He results of Table~\ref{Tab:groundstate_A46_R} agree within
the estimated uncertainties with the binding energies presented in
Table~\ref{tab:4n-yakubovsky}, at least at LO, NLO, and N$^2$LO.
However, at N$^3$LO and N$^4$LO there are systematic differences:
at N$^3$LO these differences are between $140$~keV and $220$~keV,
depending on $R$, and at N$^4$LO between $120$~keV and $170$~keV.
These differences are an order of magnitude larger than the estimated
numerical uncertainties, and are largest for $R=0.9$~fm and smallest
at $R=1.1$~fm. That is, these differences are smallest for the softest
interactions. A possible explanation for this discrepancy could be
induced 4N forces, which we have neglected in the SRG evolution.
This suggests that the difference between the ground state energies at
$\alpha=0.04$~fm$^4$ and at $\alpha=0.08$~fm$^4$ may indeed serve as an
indication of the `error' made by neglecting induced many-body forces
up to N$^2$LO, but is likely to underestimate the effect of neglected
many-body forces at N$^3$LO and N$^4$LO. Note that without the SRG
evolution the many-body calculation of the binding energy does agree
with the Yakubovsky calculation, though the extrapolation uncertainty
is significantly larger, see Table~\ref{Tab:groundstate_A461016_alpha}
below.
As already mentioned, at LO we find considerable overbinding for all
three values of the chiral cutoff $R$. This overbinding depends
significantly on $R$ and is strongest for $R=0.9$~fm. At NLO (and
N$^2$LO), the $R$-dependence is reduced by a factor of about two to
three. Furthermore, the pattern is reversed compared to the LO
results: At NLO and N$^2$LO, $R=0.9$~fm leads to a slight
underbinding, whereas $R=1.1$~fm gives slight overbinding for $^4$He
and $^6$Li. At N$^3$LO the $R$-dependence is further reduced by about
an order of magnitude compared to NLO and N$^2$LO, and for $^6$Li
becomes of the same order as the many-body extrapolation uncertainty.
Interestingly, at LO in the chiral expansion, $^6$Li is not actually
bound with respect to the $\alpha + d$ breakup, whereas at NLO and
N$^2$LO it is bound by about $0.7$ to $0.9$~MeV (and it appears
to remain bound at higher orders as well).
\begin{table}[t]
\renewcommand{\arraystretch}{1.2}
\begin{ruledtabular}
\begin{tabular}{l|lllll}
\multicolumn{1}{l}{$\alpha$ [fm$^4$]} &
\multicolumn{1}{c}{LO} &
\multicolumn{1}{c}{NLO} &
\multicolumn{1}{c}{N$^2$LO} &
\multicolumn{1}{c}{N$^3$LO} &
\multicolumn{1}{c}{N$^4$LO}
\\ \hline
\multicolumn{6}{l}{$^4$He, $J^P=0^+$}
\\ \hline
$ 0 $ & $-45.453 \pm 0.006$ & $-28.533 \pm 0.004$ & $-28.11 \pm 0.01 $ & $-23.7 \pm 0.1 $ & $-24.2 \pm 0.1 $
\\
$0.04$& $-45.407 \pm 0.001$ & $-28.542 \pm 0.004$ & $-28.113 \pm 0.006$ & $-23.59 \pm 0.01$ & $-24.14 \pm 0.01$
\\
$0.08$& $-45.289 \pm 0.001$ & $-28.566 \pm 0.001$ & $-28.119 \pm 0.001$ & $-23.582\pm 0.002$& $-24.145\pm 0.002$
\\ \hline
\multicolumn{6}{l}{$^6$Li, $J^P=1^+$}
\\ \hline
$ 0 $ & $-46.7 \pm 0.1$ & $-31.6 \pm 0.2$ & $-31.0 \pm 0.2$ & $-24.4 \pm 2.3$ & $-25.7 \pm 1.9$
\\
$0.04$ & $-46.7 \pm 0.3$ & $-31.6 \pm 0.1$ & $-31.0 \pm 0.1$ & $-26.3 \pm 0.2$ & $-26.9 \pm 0.3$
\\
$0.08$ & $-46.9 \pm 0.3$ & $-31.7 \pm 0.1$ & $-31.1 \pm 0.1$ & $-26.3 \pm 0.2$ & $-26.9 \pm 0.3$
\\ \hline
\multicolumn{6}{l}{$^{10}$B, $J^P=1^+$}
\\ \hline
$ 0 $ & $ -93.9 \pm 0.8 $ & $ -64.9 \pm 1.8$ & $ -63.1 \pm 1.9$ & -- & --
\\
$0.04$& $ -94.0 \pm 1.5 $ & $ -64.5 \pm 0.8$ & $ -63.1 \pm 0.8$ & $-55. \pm 2. $ & $-55. \pm 2. $
\\
$0.08$& $ -94.9 \pm 0.9 $ & $ -64.3 \pm 0.8$ & $ -63.1 \pm 0.6$ & $-52.2\pm 0.8$ & $-53.3\pm 0.7$
\\ \hline
\multicolumn{6}{l}{$^{10}$B, $J^P=3^+$}
\\ \hline
$ 0 $ & $ -88.1 \pm 1.2 $ & $ -64.6 \pm 1.5$ & $ -62.3 \pm 1.7$ & -- & --
\\
$0.04$& $ -88.2 \pm 1.6 $ & $ -64.1 \pm 0.7$ & $ -62.1 \pm 0.8$ & $-51. \pm 4. $ & $-52. \pm 3. $
\\
$0.08$& $ -88.8 \pm 1.0 $ & $ -64.1 \pm 0.6$ & $ -61.9 \pm 0.6$ & $-50.1\pm 1.0$ & $-51.2\pm 0.9$
\\ \hline
\multicolumn{6}{l}{$^{16}$O, $J^P=0^+$}
\\ \hline
$ 0 $ & $-224. \pm 2. $ & $-156. \pm 5. $ & $-149. \pm 5. $ & -- & --
\\
$0.04$& $-223.2 \pm 0.4$ & $-152.0 \pm 1.3$ & $-146.2 \pm 0.9$ & $-121. \pm 4. $ & $-121. \pm 4.$
\\
$0.08$& $-220.9 \pm 0.2$ & $-150.1 \pm 0.8$ & $-144.8 \pm 0.6$ & $-113. \pm 2. $ & $-114. \pm 2.$
\end{tabular}
\end{ruledtabular}
\caption{\label{Tab:groundstate_A461016_alpha}
Calculated $^4$He, $^6$Li, $^{10}$B, and $^{16}$O ground state
energies (in MeV) using chiral NN interactions at $R=1.0$~fm
without SRG evolution, and SRG evolved to $\alpha=0.04$~fm$^4$
and $\alpha=0.08$~fm$^4$ (including induced 3NFs).
The theoretical error is the extrapolation uncertainty estimate
following Ref.~\cite{Maris:2013poa}, adjusted to be at least
20\% of the difference with the variational minimum.}
\end{table}
In Table~\ref{Tab:groundstate_A461016_alpha} we summarize our results
with and without SRG evolution for several representative $p$-shell
nuclei at LO through N$^4$LO for $R=1.0$~fm. The errors listed in
Table~\ref{Tab:groundstate_A461016_alpha} are our estimates of the
extrapolation uncertainties, adjusted to be at least 20\% of the
difference with the variational minimum. Again, induced 3N contributions to
the SRG-evolved interaction are included, but induced 4N and higher
multi-nucleon interactions are neglected. The differences between
results without SRG evolution and at SRG values of
$\alpha=0.04$~fm$^4$ and at $\alpha=0.08$~fm$^4$ tend to be of the
same order as (or smaller than) the extrapolation uncertainties,
except at leading order. When compared with the results at
$\alpha=0.04$~fm$^4$, the results at $\alpha=0.08$~fm$^4$ generally do
have smaller extrapolation uncertainties (i.e., are more converged in
the many-body basis expansion) as one would expect, but are slightly
further away from the results without SRG renormalization, where
available.
At N$^3$LO and N$^4$LO, we have to rely on SRG evolution (or other
renormalization schemes) for $p$-shell nuclei. For $^6$Li we can do
an extrapolation of the bare interaction results, but the extrapolation
uncertainty is large, whereas the results at $\alpha=0.04$~fm$^4$ and
$\alpha=0.08$~fm$^4$ differ by less than $100$~keV.
For the upper half of the $p$-shell, SRG evolution also
becomes beneficial at NLO and N$^2$LO.
\begin{table}[t]
\renewcommand{\arraystretch}{1.2}
\begin{ruledtabular}
\begin{tabular}{cc|lllll}
\multicolumn{1}{c}{nucleus} &
\multicolumn{1}{c|}{$J^P$} &
\multicolumn{1}{c}{LO} &
\multicolumn{1}{c}{NLO} &
\multicolumn{1}{c}{N$^2$LO} &
\multicolumn{1}{c}{expt}
\\ \hline
$^3$H &$\frac{1}{2}^+$& $-11.30 \pm 0.01 $ & $ -8.38 \pm 0.01 $ & $ -8.28 \pm 0.01 $ & -8.482 \\
$^4$He & $0^+$ & $-45.453 \pm 0.006$ & $-28.533 \pm 0.004$ & $-28.11 \pm 0.01 $ & -28.296 \\
$^6$He & $0^+$ & $-43.2 \pm 0.2 $ & $-28.7 \pm 0.2 $ & $-27.9 \pm 0.2 $ & -29.27 \\
$^6$Li & $1^+$ & $-46.7 \pm 0.1 $ & $-31.6 \pm 0.2 $ & $-31.0 \pm 0.2 $ & -31.99 \\
$^7$Li &$\frac{3}{2}^-$& $-57.1 \pm 0.2$ & $-38.7 \pm 0.3 $ & $-38.0 \pm 0.4 $ & -39.24 \\
$^8$He & $0^+$ & $-39.8 \pm 0.6$ & $-29.7 \pm 0.5 $ & $-27.8 \pm 0.6 $ & -31.41 \\
$^8$Li & $2^+$ & $-55.7 \pm 0.5$ & $-40.3 \pm 0.7 $ & $-39.0 \pm 0.8 $ & -41.28 \\
$^8$Be & $0^+$ & $-87.7 \pm 0.4$ & $-56.0 \pm 0.7 $ & $-55.4 \pm 0.9 $ & -56.50 \\
$^9$Li &$\frac{3}{2}^-$& $-57.1 \pm 0.4$ & $-43.9 \pm 0.7 $ & $-41.7 \pm 0.8 $ & -45.34 \\
$^9$Be &$\frac{3}{2}^-$& $-84.7 \pm 0.7$ & $-58.0 \pm 1.4 $ & $-56.4 \pm 1.5 $ & -58.16 \\
$^{10}$B & $1^+$ & $-93.9 \pm 0.8$ & $-64.9 \pm 1.8 $ & $-63.1 \pm 1.9 $ & -64.03 \\
$^{10}$B & $3^+$ & $-88.1 \pm 1.2$ & $-64.6 \pm 1.5 $ & $-62.3 \pm 1.7 $ & -64.75 \\
$^{16}$O & $0^+$ & $-224. \pm 2.$ & $-156. \pm 5.$ & $-149.\pm 5. $ & -127.62 \\
\end{tabular}
\end{ruledtabular}
\caption{\label{Tab:groundstates}
Calculated NCCI ground state energies, in MeV, using chiral NN
interactions at $R=1.0$~fm (without SRG evolution). Results are
compared with experimental data in the last column. The quoted
theoretical errors are due to extrapolation uncertainties
following Ref.~\cite{Maris:2013poa}, adjusted to be at least 20\%
of the difference with the variational minimum. }
\end{table}
In Table~\ref{Tab:groundstates} and
Fig.~\ref{Fig:res_gs_Eb_A03A10}
we
summarize our results for the ground state energies of $A=3$ to $A=10$
nuclei, as well as for $^{16}$O in Table~\ref{Tab:groundstates},
based on extrapolations of the chiral LO, NLO, and N$^2$LO
interactions without applying any further SRG renormalization.
With the exception of $^7$Li at LO and of $^{10}$B, the calculated
ground state spins all agree with the experimental ones.
The results with SRG evolution (including induced 3NFs), over the
limited range of flow parameters that we investigate, are very similar
and fall within the quoted uncertainty estimates in all cases. Given
this similarity of results with and without SRG evolution, we do not
present the SRG-evolved results here.
The ground state energies of all nuclei in
Table~\ref{Tab:groundstates} follow similar patterns:
significantly overbound at LO, closer to the experimental values
at NLO, and then shifted towards less binding at N$^2$LO.
For example, the $J^\pi=\frac{3}{2}^-$ ground state of $^7$Li follows the same
overall pattern as that of $^4$He and $^3$H, and is actually bound
with respect to breakup into $^4$He plus $^3$H at LO, NLO and N$^2$LO.
However, at $A=8$ (and to a lesser extent also at $A=9$) we see that
the difference between LO and NLO results decreases significantly with
increasing isospin: it is much smaller for the $^8$He than it is for
$^8$Be.
Also note that the deviation from experiment at N$^2$LO is largest for
$^8$He, and smallest for $^8$Be. (Similar effects can be seen for
$^9$Li and $^9$Be.)
Neither $^8$He nor $^8$Be is bound at LO: $^8$He is about
$5.5$~MeV above $^4$He, and $^8$Be is about $3.3$~MeV above two
$\alpha$-particles, so the applicability of the HO basis is rather
questionable for these states. On the other hand, at NLO $^8$He does become bound,
whereas $^8$Be remains unbound, both in qualitative agreement with
experiment. Whether or not $^6$He (and $^8$He) are bound at
N$^2$LO (and higher orders) depends crucially on the chiral 3NFs --
without these, they are not bound.
Note that $^9$Be is also not bound at LO: despite the enormous
overbinding compared to experiment, it is not bound with respect to breakup
into two $\alpha$-particles plus a neutron, and its ground state
energy is even above that of $^8$Be. Only at NLO does $^9$Be become
bound and it may remain bound at N$^2$LO but the uncertainties do not
allow us to make a definite statement.
Finally, the level ordering of the lowest states of $^{10}$B is known
to be sensitive to the details of the interaction~\cite{Navratil:2007we},
and typically one finds a $J^\pi=1^+$ ground state with NN-only
potentials, instead of a $3^+$ ground state. With a 3NF one can
obtain the correct $J^\pi=3^+$ ground state spin for $^{10}$B, but the
convergence pattern of the lowest $1^+$ state is different from that of the
lowest $3^+$ state; furthermore, the splitting between these two
states appears to be very sensitive both to the parameters of the
interaction and to the SRG evolution~\cite{Jurgenson:2013yya}. In our
calculations, the $1^+$ is the ground state at LO, and about 6 MeV
below the $3^+$ state, but at NLO and at N$^2$LO the level splitting
between these two states is less than our estimated extrapolation
uncertainties.
\begin{figure}
\includegraphics[width=0.99\columnwidth]{res_gs_Eb_A03A10.pdf}
\caption{\label{Fig:res_gs_Eb_A03A10}
(Color online) Calculated (red dots) ground state energies in MeV
using chiral LO, NLO, and N$^2$LO NN interactions at $R=1.0$~fm
(without SRG evolution) based on the NN forces only in comparison
with experimental values (blue levels). Red error bars indicate
NCCI extrapolation uncertainty and shaded bars indicate the
estimated truncation error at each chiral order as defined in the
Introduction.}
\end{figure}
We show the chiral truncation error estimate for the ground state energies of
light nuclei up to $A=10$ using the methods reviewed in
Sec.~\ref{sec:intro} but limited to N$^2$LO in
Fig.~\ref{Fig:res_gs_Eb_A03A10}. We remind the reader that the shown
results at N$^2$LO are incomplete as the corresponding 3NF are not
included. Following this prescription, the chiral error estimate at
leading order turns out to be given by $\delta E^{(0)} = | E^{(3)} - E^{(0)} |$,
and at NLO and N$^2$LO by $Q \delta E^{(0)}$ and $Q^2 \delta E^{(0)}$, respectively,
for all 10 nuclei.
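In code, this prescription amounts to scaling a single LO error estimate by powers of $Q$. A minimal sketch, using the $^4$He energies without SRG evolution quoted above and assuming a breakdown scale $\Lambda_b = 600$~MeV for illustration:

```python
# Chiral truncation error estimate: delta E^(0) = |E^(3) - E^(0)|, scaled
# by powers of the expansion parameter Q = M_pi / Lambda_b.  The value
# Lambda_b = 600 MeV is an assumption for illustration; the energies are
# the 4He values (in MeV, without SRG evolution) quoted in the text.
m_pi, lambda_b = 138.0, 600.0          # MeV
Q = m_pi / lambda_b

E_LO, E_N2LO = -45.453, -28.11         # E^(0) and E^(3)
delta0 = abs(E_N2LO - E_LO)            # LO error estimate
errors = {"LO": delta0, "NLO": Q * delta0, "N2LO": Q**2 * delta0}
print(errors)
```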
As in Ref.~\cite{Binder:2015mbz}, the expansion parameter for these
light nuclei is estimated here
as $Q=M_\pi / \Lambda_b$ (see Section~\ref{sec:chiraltruncationerror}).
Note that if we were to include results up to
N$^4$LO without including 3NFs (and possibly 4NFs), all chiral error
estimates following this prescription would increase noticeably,
because the N$^3$LO and N$^4$LO results without consistent 3NFs lead
to a larger $\max( | E^{(i)} - E^{(0)} | )$ that appears in
Eq.~(\ref{ErrorOrig2}). Alternative chiral truncation error estimates for these
results are discussed in Section~\ref{sec:Uncertainties} below.
Looking further into the results in Fig.~\ref{Fig:res_gs_Eb_A03A10},
one notices that at N$^2$LO, where the omitted 3NFs may have an impact,
there are significant differences between the current results and
experiment that go beyond the estimated chiral truncation uncertainty.
These differences are easily visible for $^6$He, $^8$He, $^8$Li and
$^9$Li. Future work that includes the 3NFs is needed to discern their
role and to understand if they resolve these differences while not
creating significant differences in the cases where little difference
is currently found.
\subsection{Magnetic moments}
\begin{figure}
\includegraphics[width=0.99\columnwidth]{res_gs_mu_A03A10.pdf}
%
\caption{\label{Fig:res_gs_mu_A03A10} (Color online) Calculated (red
dots) ground state magnetic moments of light nuclei up to $A=10$
at LO, NLO, and N$^2$LO with $R=1.0$~fm in comparison with
experimental values (blue horizontal lines). Red error bars
indicate NCCI extrapolation uncertainty and shaded bars indicate
the estimated truncation error at each chiral order as defined in
the Introduction.}
\end{figure}
In addition to binding energies we also calculated the magnetic
moments for the ground states of $p$-shell nuclei up to $A=10$.
In contrast to long-range observables such as radii,
magnetic moments tend to converge rapidly in a HO basis.
Indeed, the magnetic moments for the ground states of $^6$Li, $^7$Li,
and $^7$Be are very well converged. Furthermore, the dependence on
the chiral order is very weak, and the results are remarkably close to
the experimental values. For $A=8$ and $9$, the convergence is not as
good, and there is a stronger dependence on the chiral order, but the
magnetic moment of the ground state of $^{10}$B is again very well
converged, and only very weakly dependent on the chiral order.
Note that here we only used the canonical one-body current operator
and we defer to a future effort the development and application of
consistent chiral current operators at each order. We expect that
with such improved current operators, including meson-exchange
currents~\cite{Kolling:2009iq,Kolling:2011mt,Krebs:2016rqz,Pastore:2008ui,Pastore:2012rp,Baroni:2015uza},
the calculated magnetic moments (as well as magnetic transition matrix
elements) will be in good agreement with experimental values -- the
deviations from the experimental magnetic moments that we find here are of the same sign and
magnitude as suggested by phenomenological meson-exchange
contributions~\cite{Pastore:2012rp}.
\section{Medium-mass nuclei}
\label{sec:CoupledCluster}
Over the past few years, several \emph{ab initio} methods have been
developed to address ground states of nuclei in the medium-mass
regime, beyond the reach of standard NCCI calculations. Already the
simplest observables, like ground-state energies and radii for
medium-mass nuclei, e.g., the doubly magic calcium isotopes, provide a
valuable testing ground for chiral interactions,
far away from the few-body domain that
was used to constrain the Hamiltonians.
For a first characterization of the new generation of chiral NN
interactions in the medium-mass regime, we employ the most advanced
coupled-cluster (CC) formulations and state-of-the-art in-medium
similarity renormalization group (IM-SRG) calculations for
ground-state observables of $^{16,24}$O and $^{40,48}$Ca. We mirror
the discussion of the previous section and analyze the order-by-order
behavior and the theoretical uncertainties. In addition we compare
to results with other, widely used chiral forces.
\subsection{Coupled-Cluster Theory}
Single-reference CC theory expresses the exact
many-body state as $| \Psi \rangle = e^T| \Phi\rangle$, where
$|\Phi\rangle$ is a single-Slater-determinant reference state based on
a Hartree-Fock calculation
\cite{Hagen:2007ew,Hagen:2010gd,Binder:2012mk,Jansen:2012ey,Binder:2013oea,Hagen:2012rq,Baardsen:2013vwa,Hagen:2012fb,Hagen:2012sh,Hagen:2013nca}.
Correlations are introduced by the action of the exponential $e^T$ of
the particle-hole excitation operator $T=T_1 + T_2 + \dots + T_A$ on
the reference state. In practical calculations, the cluster operator
$T$ is truncated at some low $n$-particle-$n$-hole ($n$p$n$h)
excitation level, such as the 2p2h excitations,
$T\approx T_1+T_2$. This constitutes the very popular CC with singles
and doubles excitations (CCSD) approach.
Due to the exponential ansatz, all powers of $T_1$, $T_2$ and
mixed products of these are present in the description of the wave
function, resulting in the ability to describe many-body correlations
of considerable complexity that may be difficult to achieve
in alternative many-body methods.
The essential ingredient in CC theory is the similarity-transformed
Hamiltonian $\bar{H} = e^{-T}H e^T$. In terms of $\bar{H}$,
one can solve for the $T$ amplitudes by projecting from the left with
particle-hole excited reference states $| \Phi^{ab\dots}_{ij\dots}\rangle$ in
order to obtain the set of equations $0=\langle
\Phi^{ab\dots}_{ij\dots} | \bar{H} | \Phi \rangle$ which
determines the cluster amplitudes. The energy is obtained from
calculating the closed diagrams of $\bar{H}$ according to
$E=\langle \Phi| \bar{H} | \Phi\rangle$~\cite{ShBa09}.
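The similarity transform $\bar{H} = e^{-T} H e^{T}$ is not unitary, since $T$ is a pure excitation operator rather than anti-hermitian, but it preserves the spectrum of $H$. A toy matrix sketch (arbitrary illustrative matrices, not a nuclear Hamiltonian) makes this explicit:

```python
import numpy as np
from scipy.linalg import expm

# Toy illustration: Hbar = e^{-T} H e^{T} with a strictly upper-triangular
# (nilpotent) "excitation" operator T.  Hbar is non-hermitian, but it is
# related to H by a similarity transform and thus has the same eigenvalues.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4))
H = H + H.T                                   # hermitian toy Hamiltonian
T = np.triu(rng.normal(size=(4, 4)), k=1)     # pure "excitation" operator

Hbar = expm(-T) @ H @ expm(T)

print(np.sort(np.linalg.eigvals(Hbar).real))  # same spectrum as H
print(np.sort(np.linalg.eigvalsh(H)))
```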
Going beyond the singles and doubles approximation in CC calculations
leads to an increased complexity of the equations to be solved
and to increased computational cost. Therefore, the current approach in
nuclear structure theory to incorporate higher-than-doubles
excitations in ground-state CC calculations is by a non-iterative
inclusion of triples excitation effects to the ground-state energy
(but not the wave function) via the CCSD(T)~\cite{RaTr89},
$\Lambda$CCSD(T)~\cite{TaBa08,TaBa08-2}, or the
CR-CC(2,3)~\cite{PiGo09} method.
Three-body interactions can be included in CC calculations using the
normal-ordering approximation at the
two-body level (NO2B)~\cite{Hagen:2007ew,Roth:2011vt}. Alternatively, the CC
method can straightforwardly be
extended to the full treatment of three-body Hamiltonians, however,
often at prohibitively large computational
cost \cite{Binder:2012mk,Binder:2013oea}. In this work, we use the CCSD approach combined with the
CR-CC(2,3) energy correction, including 3N interactions in the NO2B approximation.
\subsection{In-Medium Similarity Renormalization Group}
The IM-SRG aims at decoupling an $A$-body reference state $\ket{\Phi}$ from all particle-hole
excitations or, equivalently, at suppressing a specific
``off-diagonal'' part of the Hamiltonian
\cite{Tsukiyama:2010rj,Hergert:2012nb,Hergert:2015awm,Hergert:2016etg}. This
decoupling at the $A$-body level can be implemented using the concepts
of the similarity renormalization group, which we already exploited in
few-body spaces (cf. Section \ref{sec:ncci}).
We formulate a continuous unitary transformation of the Hamiltonian
$H(s) = U^{\dagger}(s) H(0) U(s)$ in $A$-body space, where $s$ denotes the flow parameter of the IM-SRG.
This transformation is rewritten into the following operator differential equation
\begin{equation}
\totdiff{H}{s}(s) = \comm{\eta(s)}{H(s)}
\text{ ,}
\label{eq:operator_flow_equation_hamiltonian}
\end{equation}
where $\eta(s)$ refers to the so-called generator of the transformation.
The Hamiltonian $H(s)$ and the generator $\eta(s)$ are normal-ordered
with respect to the reference state
$\ket{\Phi}$ and truncated at
the normal-ordered two-body level, e.g.,
\begin{equation}
H(s)
= E(s)
+ \sum_{pq} f^p_q(s) \{a_p^{\dagger} a_q\}
+ \frac{1}{4} \sum_{pqrs} \Gamma^{pq}_{rs}(s) \{a_p^{\dagger} a_q^{\dagger} a_r a_s\}
\text{ ,}
\end{equation}
where normal-ordered products of single-particle creation and annihilation operators appear.
Evaluating the right-hand side of
Eq.~(\ref{eq:operator_flow_equation_hamiltonian}) via Wick's theorem, one
can derive the flow equations for the
matrix elements of the normal-ordered zero-, one-, and two-body part,
i.e., $E(s)$, $f^p_q(s)$ and $\Gamma^{pq}_{rs}(s)$, respectively, of
the Hamiltonian.
As an example, the flow equation for the zero-body part, which represents
the energy expectation value in the reference state, reads
\begin{equation}
\totdiff{E(s)}{s}=
\sum_{pq}\
\left(n_p-n_q\right) \
\opme{\eta}{p}{q}(s) \
\opme{f}{q}{p}(s)
+
\frac{1}{4} \
\sum_{pqrs} \
\left(
\tpme{\eta}{p}{q}{r}{s}(s) \
\tpme{\Gamma}{r}{s}{p}{q}(s) \
n_{p}n_{q} (1-{n}_{r})(1-n_{s})
- \ichange{\eta}{\Gamma}
\right)
\text{ ,}
\end{equation}
where $n_{p}$ is the occupation number w.r.t.\ the reference state $\ket{\Phi}$.
Formally, the flow equations of the IM-SRG are a coupled system of
first-order ordinary differential equations
which can be solved numerically as an initial value problem until decoupling is reached.
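The structure of these flow equations can be made concrete with a small sketch that evaluates the zero-body right-hand side for a toy model space (random illustrative matrix elements with only schematic symmetries, not a realistic normal-ordered Hamiltonian):

```python
import numpy as np

# Zero-body IM-SRG flow: dE/ds = sum_pq (n_p - n_q) eta^p_q f^q_p
#   + (1/4) sum_pqrs [eta^{pq}_{rs} Gamma^{rs}_{pq} - Gamma^{pq}_{rs} eta^{rs}_{pq}]
#                    * n_p n_q (1 - n_r)(1 - n_s),
# evaluated for a toy 4-orbital space with 2 holes and 2 particles.
rng = np.random.default_rng(1)
n = np.array([1.0, 1.0, 0.0, 0.0])            # occupation numbers n_p

eta1 = rng.normal(size=(4, 4))
eta1 = eta1 - eta1.T                          # anti-hermitian one-body generator
f = rng.normal(size=(4, 4))
f = f + f.T                                   # hermitian one-body part f^p_q

eta2 = rng.normal(size=(4, 4, 4, 4))          # toy two-body generator
gamma = rng.normal(size=(4, 4, 4, 4))         # toy two-body part Gamma^{pq}_{rs}

# One-body term
dE = np.einsum('p,pq,qp->', n, eta1, f) - np.einsum('q,pq,qp->', n, eta1, f)

# Two-body term with the occupation factor n_p n_q (1-n_r)(1-n_s)
occ = np.einsum('p,q,r,s->pqrs', n, n, 1.0 - n, 1.0 - n)
dE += 0.25 * np.einsum('pqrs,rspq,pqrs->', eta2, gamma, occ)
dE -= 0.25 * np.einsum('pqrs,rspq,pqrs->', gamma, eta2, occ)
print(dE)
```

In an actual IM-SRG code, this right-hand side (together with its one- and two-body counterparts) is fed to an ODE solver and integrated in $s$ until decoupling is reached.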
A great advantage of the IM-SRG is the simplicity and flexibility of its basic concept.
Through different choices for the generator $\eta(s)$,
we obtain different decoupling patterns, numerical characteristics and efficiencies.
As a consequence, we can tailor the IM-SRG for specific applications,
e.g., the derivation of valence-space shell model interactions
\cite{Tsukiyama:2012sm,Bogner:2014baa,Stroberg:2016ung}.
Furthermore, it is straightforward to use the formalism of the IM-SRG for a consistent evolution of observables
since the flow equation for an observable is similar to the one given
in equation (\ref{eq:operator_flow_equation_hamiltonian}).
The IM-SRG was first applied to the study of ground-state energies of closed-shell nuclei
but can be easily extended to open-shell nuclei via multi-reference generalizations of normal
ordering and Wick's theorem \cite{Hergert:2013vag,Hergert:2014iaa,Gebrerufael:2016xih}.
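The matrix flow in equation (\ref{eq:operator_flow_equation_hamiltonian}) can be illustrated with a minimal toy model. The sketch below (a hypothetical $3\times3$ symmetric matrix, not the normal-ordered many-body implementation) integrates $\mathrm{d}H/\mathrm{d}s = [\eta, H]$ with the Wegner choice $\eta = [H_{\mathrm{d}}, H]$, where $H_{\mathrm{d}}$ is the diagonal part, and shows the generic decoupling behavior: off-diagonal couplings are suppressed while the spectrum is preserved.

```python
import numpy as np

# Hypothetical 3x3 "Hamiltonian" (arbitrary units), standing in for H(s=0).
H = np.array([[1.0, 0.5, 0.2],
              [0.5, 3.0, 0.4],
              [0.2, 0.4, 5.0]])
eigs0 = np.sort(np.linalg.eigvalsh(H))  # spectrum is invariant under the flow

def deriv(H):
    """Right-hand side of dH/ds = [eta, H] with the Wegner generator
    eta = [H_d, H], where H_d is the diagonal part of H."""
    Hd = np.diag(np.diag(H))
    eta = Hd @ H - H @ Hd
    return eta @ H - H @ eta

ds, n_steps = 1e-3, 10000
for _ in range(n_steps):  # fixed-step fourth-order Runge-Kutta
    k1 = deriv(H)
    k2 = deriv(H + 0.5 * ds * k1)
    k3 = deriv(H + 0.5 * ds * k2)
    k4 = deriv(H + ds * k3)
    H = H + (ds / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Off-diagonal couplings are driven to zero; the diagonal approaches the spectrum.
print(np.sort(np.diag(H)))
print(eigs0)
```

The same structure, applied to normal-ordered operators rather than plain matrices, underlies the coupled flow equations for $E(s)$, $f^p_q(s)$ and $\Gamma^{pq}_{rs}(s)$.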
\subsection{Chiral Truncation Error}
\label{sec:chiraltruncationerror}
In order to quantify the truncation errors in nuclear ground-state energies at various chiral orders,
we recall the approach introduced in
Ref.~\cite{Binder:2015mbz}, see the discussion in section \ref{sec:intro}, and employ Eqs.~(\ref{ErrorOrig}) and
(\ref{ErrorOrig2}) at LO and NLO and Eq.~(\ref{ErrorMod}) at N$^2$LO and higher chiral orders.
In Ref.~\cite{Binder:2015mbz}, the expansion parameter $Q$ of the chiral
expansion defined in Eq.~(\ref{ExpPar}), which enters Eqs.~(\ref{ErrorOrig}), (\ref{ErrorOrig2}) and (\ref{ErrorMod}),
was estimated for $^3$H, $^4$He and $^6$Li as $Q=M_\pi / \Lambda_b$. While
this is reasonable for very light nuclei as seen in the discussion of chiral truncation errors
in light nuclei in Sec. \ref{sec:LightNuclei}, one may expect the typical momentum
to increase in heavier systems due to the increased role of Pauli blocking.
In order to estimate these effects, we employ two different methods to evaluate a
nucleus-dependent characteristic momentum scale: the Hartree-Fock (HF) approximation
and the NCCI method. We use the resulting ground-state wave function, in each case,
to evaluate the expectation value of the relative
kinetic energy operator $\langle T_{\hbox{\scriptsize rel}} \rangle$ given by
\begin{eqnarray}
T_{\hbox{\scriptsize rel}} & \equiv & \sum_{i<j}\frac{(\vec{p}_i-\vec{p}_j)^2}{2 \, A \, m}
\; = \; \frac{2}{A} \sum_{i<j}\frac{(\vec{p}_{ij})^2}{2 \, \mu}
\end{eqnarray}
in terms of the relative momenta $\vec{p}_{ij} = (\vec{p}_i-\vec{p}_j)/2$
and the reduced two-nucleon mass $\mu = m/2$.
Based on this expectation value, we define the average relative momentum scale as follows:
\begin{eqnarray}
\label{pavg}
p_{\hbox{\scriptsize avg}} & = & \sqrt{ \frac{2\mu}{(A-1)} \langle T_{\hbox{\scriptsize rel}} \rangle }
= \sqrt{ \frac{2}{A(A-1)} \bigg\langle \sum_{i<j} (\vec{p}_{ij})^2 \bigg\rangle } \;.
\end{eqnarray}
As the last expression shows, this simply corresponds to the
root-mean-square relative momentum of all nucleon pairs, i.e., the
square root of the expectation value of the squared relative momenta
summed over all particle pairs and divided by the number of
pairs. Thus, this quantity reflects a characteristic scale for
relative two-body momenta in the nucleus, which will depend on the
nucleus under consideration and on the underlying interaction.
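As a concrete illustration, the conversion from $\langle T_{\hbox{\scriptsize rel}} \rangle$ to $p_{\hbox{\scriptsize avg}}$ in Eq.~(\ref{pavg}) amounts to a one-line formula; the nucleon-mass value and the kinetic-energy input below are illustrative assumptions, not values from Tables \ref{tab:MomScale} or \ref{tab:MomScale2}.

```python
import math

M_N = 938.918  # average nucleon mass in MeV (illustrative assumption)

def p_avg(T_rel, A):
    """Eq. (pavg): p_avg = sqrt(2*mu/(A-1) * <T_rel>) with mu = m/2,
    i.e. p_avg = sqrt(m * <T_rel> / (A-1)) in units with hbar = c = 1.
    T_rel in MeV gives p_avg in MeV."""
    return math.sqrt(M_N * T_rel / (A - 1))

# Hypothetical relative kinetic energy of 50 MeV for A = 4:
print(p_avg(50.0, 4))  # a momentum scale of roughly 125 MeV
```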
The results for $p_{\hbox{\scriptsize avg}}$ obtained in HF and NCCI are summarized in Tables \ref{tab:MomScale} and \ref{tab:MomScale2} of Appendix \ref{ExpVal}. For HF, we employ the SRG-evolved Hamiltonian with the SRG parameter $\alpha = 0.08~$fm$^4$
and evaluate the expectation value of the SRG-transformed relative kinetic energy operator for
input to the calculation of $p_{\hbox{\scriptsize avg}}$. We also employ the spherical HF approximation
and the filling fraction approximation for open-shell nuclei. For NCCI, we extrapolate the expectation
value of the relative kinetic energy to the infinite basis limit using NCCI results from
currently attainable $N_{\rm max}$ values.
The HF results are available for all chiral orders and show a systematic decrease in $p_{\hbox{\scriptsize avg}}$ with increasing order. Pronounced changes appear from LO to NLO and from N${}^2$LO to N${}^3$LO. This general trend can be explained in a simple mean-field-type picture, keeping the behavior of the ground-state charge radii in mind. With increasing chiral order the radius of a given nucleus shows a systematic increase (including the more pronounced changes, cf. Figs. \ref{fig:ccimsrg_a1} and \ref{fig:ccimsrg_a2}), which translates into a decrease of the Fermi energy and the associated momentum scale in a mean-field picture. The $p_{\hbox{\scriptsize avg}}$ scale evaluated at the HF level captures exactly this mean-field or low-momentum physics.
It is interesting to compare this to the NCCI calculations, which converge to the exact solution of the many-body problem, including all correlations beyond the mean-field level. These results are available up to N${}^2$LO for the p-shell nuclei and up to N${}^4$LO for s-shell nuclei (from both Faddeev-Yakubovsky and NCCI calculations). Up to N${}^2$LO the $p_{\hbox{\scriptsize avg}}$ scales extracted from the NCCI kinetic energies for the bare Hamiltonian agree surprisingly well with the scales extracted from HF expectation values based on SRG-evolved operators. This indicates that the SRG transformation captures the main beyond-HF correlations, such that the kinetic energy expectation values are very similar to the full NCCI values. Still, even with the SRG transformation, not all correlations are covered, and the HF ground-state energies differ significantly from the converged NCCI result.
This difference becomes apparent at N${}^3$LO and N${}^4$LO, where the
SCS NN interactions are significantly harder and much more difficult
to converge in the NCCI than at lower orders (see e.g.,
Fig. \ref{Fig:groundstate_extrapolated}). This is the reason why no
NCCI scales can be extracted for p-shell nuclei beyond N${}^2$LO. For
s-shell nuclei the $p_{\hbox{\scriptsize avg}}$ scales obtained from
NCCI at N${}^3$LO and N${}^4$LO are significantly larger than for the
lower orders, in contrast to the mean-field trend shown by the
HF-based scale estimates. At this point, short-range or high-momentum
physics explicitly affects the momentum scales extracted from NCCI
wave functions but is absent in the HF treatment. Such short-range
correlation effects are regulator-scale and scheme dependent and represent specific
high-momentum aspects of the wave function rather than a gross momentum
scale corresponding to the Fermi-momentum in a homogeneous
system.
We do not have a strong physics reason for preferring one or another
approach to estimating the nucleus-dependent momentum scale
$p_{\hbox{\scriptsize avg}}$. In the following, we adopt the HF-based
scale estimate as input for the uncertainty quantification, mainly for
reasons of convenience.
Given that the $p_{\hbox{\scriptsize avg}}$ values
show significant variations at different chiral orders, we average over the
available results from LO to N$^4$LO to arrive at a single nucleus-dependent and $R$-dependent
value for $p_{\hbox{\scriptsize avg}}$ quoted in the last column in Tables \ref{tab:MomScale} and \ref{tab:MomScale2}.
Then, for a given nucleus, the expansion parameter $Q$ is estimated as
\begin{equation}
\label{newQ}
Q = \frac{\max( p_{\hbox{\scriptsize avg}}, \, M_\pi)}{\Lambda_b},
\end{equation}
where $p_{\hbox{\scriptsize avg}}$ is the result in the last column of
Tables \ref{tab:MomScale} and \ref{tab:MomScale2}.
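Schematically, the estimate in Eq.~(\ref{newQ}) reads as follows; the pion-mass and breakdown-scale values below are illustrative placeholders for $M_\pi$ and $\Lambda_b$, not the fitted values used in the analysis.

```python
M_PI = 138.0      # pion mass scale in MeV (illustrative placeholder)
LAMBDA_B = 600.0  # breakdown scale Lambda_b in MeV (illustrative placeholder)

def expansion_parameter(p_avg):
    """Eq. (newQ): Q = max(p_avg, M_pi) / Lambda_b."""
    return max(p_avg, M_PI) / LAMBDA_B

# For light nuclei p_avg stays close to or below M_pi, so Q is set by the
# pion mass; in heavier systems the larger p_avg takes over:
print(expansion_parameter(130.0))
print(expansion_parameter(190.0))
```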
Another feature of the results in Tables \ref{tab:MomScale} and \ref{tab:MomScale2} is the increase in
$p_{\hbox{\scriptsize avg}}$ with increasing $A$. For very heavy nuclei,
the relevant momentum scale should be closer to
the Fermi momentum $p_F \sim 260$~MeV corresponding to the
saturation density of nuclear matter. The trend in the results of the last
column of Tables \ref{tab:MomScale} and \ref{tab:MomScale2} appears consistent with that
expectation. However, for light nuclei $p_{\hbox{\scriptsize avg}}$
is within a few percent of $M_\pi$, at least up to $A = 9$ for $R=1.0~$fm.
Since the chiral uncertainty estimates shown in
Figs.~\ref{Fig:res_gs_Eb_A03A10} and \ref{Fig:res_gs_mu_A03A10}
would change only
minimally by adopting $p_{\hbox{\scriptsize avg}}$ for the definition of $Q$,
we do not show them for light nuclei. However, for the following
discussion of ground-state observables of medium-mass nuclei, we will
adopt the nucleus-dependent momentum scales for the order-by-order
uncertainty quantification.
\subsection{Results}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.45\textwidth]{imsrg_EA_cutoff_B.pdf}
\includegraphics[width=0.45\textwidth]{imsrg_radii_cutoff_B.pdf}
\end{center}
\caption{(Color online)
Ground-state energies and charge radii for the
ground-state of $^{16,24}$O and $^{40,48}$Ca obtained
from CC and IM-SRG based on HF reference states. The different columns correspond
to different initial interactions, starting with the
SCS chiral NN interaction from LO to N$^4$LO with
the cutoff $R=0.9\,\text{fm}$, followed by the
N$^2$LO-SAT NN+3N interaction \cite{Ekstrom:2015rta}, and the chiral NN interaction at
N$^3$LO by Entem and Machleidt \cite{Entem:2003ft} without (EM-ind) and with (EM-full) an
additional local chiral 3N interaction at N$^2$LO \cite{Navratil:2007zn} with
reduced cutoff $\Lambda_{3N}=400\;\text{MeV}$ \cite{Roth:2011vt}. Solid symbols refer to a free-space SRG
parameter $\alpha=0.08\;\text{fm}^{4}$ whereas open
symbols refer to $\alpha=0.04\;\text{fm}^{4}$. The
grey bars indicate the estimated theoretical
uncertainties at various chiral orders.
}
\label{fig:ccimsrg_a1}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{imsrg_EA_cutoff_C.pdf}
\includegraphics[width=0.45\textwidth]{imsrg_radii_cutoff_C.pdf}
\end{center}
\caption{Same as Fig.~\ref{fig:ccimsrg_a1} but for the cutoff value of $R=1.0~$fm.}
\label{fig:ccimsrg_a2}
\end{figure}
Using CC and IM-SRG we explore the ground-state energies and charge radii of the doubly magic
nuclei $^{16,24}$O and $^{40,48}$Ca with the SCS NN
interactions from LO to N$^4$LO. The focus of these calculations is
the investigation of the order-by-order behavior of the chiral
expansion in the medium-mass regime and the theory uncertainties
derived from it.
For all calculations presented in this section we use SRG-evolved
interactions including the induced three-nucleon contributions. We use
two different SRG flow parameters, $\alpha=0.04\,\text{fm}^4$ and
$0.08\,\text{fm}^4$, to probe the contributions of higher-order
induced forces that are not explicitly included. For the specific
interaction and nucleus under consideration, we first perform a
Hartree-Fock calculation for the full Hamiltonian in a HO basis
truncated with respect to the maximum single-particle principal
quantum number $e_{\max}=(2n+l)_{\max}$. The HF solution defines the
reference state and an optimized single-particle basis, which
eliminates the dependence of the subsequent many-body solutions on the
oscillator frequency. The full Hamiltonian is normal-ordered with
respect to the reference Slater determinant and residual
normal-ordered three-body terms are discarded. We have explored the
accuracy of the normal-ordered two-body approximation in the
medium-mass regime, e.g., through direct comparisons of CC
calculations with and without the residual three-body terms and found
agreement at the $1\%$ level or better
\cite{Binder:2012mk,Roth:2011vt}.
With these inputs, we perform CC calculations at the level of CCSD and
CR-CC(2,3), which provide a direct way to quantify the residual
uncertainty due to the cluster truncation. In addition, we perform
single-reference IM-SRG(2) calculations. The results for the
ground-state energies and the charge radii of $^{16,24}$O and
$^{40,48}$Ca are summarized in Fig.~\ref{fig:ccimsrg_a1} for the
sequence of SCS NN interactions at cutoff $R=0.9\,\text{fm}$ and
in Fig.~\ref{fig:ccimsrg_a2} for $R=1.0\,\text{fm}$. For comparison
each panel also shows the corresponding results with established
chiral interactions, i.e., the N$^2$LO-SAT NN+3N interaction by
Ekstr\"om et al. \cite{Ekstrom:2015rta}, the N$^3$LO NN interaction by
Entem and Machleidt \cite{Entem:2003ft} without (EM-ind) and with
(EM-full) a supplementary local 3N interaction at N$^2$LO with cutoff
400 MeV \cite{Navratil:2007zn,Roth:2011vt}.
The numerical values for the ground-state energies and charge radii obtained
with the SCS NN interactions at cutoff values of $R=0.9\,\text{fm}$
and $R=1.0\,\text{fm}$ can be found
in Tables \ref{tab:ccimsrg_energy} and \ref{tab:ccimsrg_radii}, respectively.
The different symbol shapes and colors distinguish the three many-body
methods while solid and open symbols indicate the two SRG flow-parameters
we use. The variation within the set of six calculations for any given
chiral interaction and nucleus provides an estimate for the
uncertainties in the solution of the many-body problem, including the
free-space SRG evolution and the many-body truncations.
These many-body uncertainties can be compared to the uncertainties
inherent to the chiral interaction at any given order, which are
quantified using the protocol discussed in
Sec. \ref{sec:chiraltruncationerror}. We use the intrinsic kinetic
energy expectation value obtained in HF calculations with SRG
transformed operators to define a momentum scale. The uncertainties
for the ground-state energies and the charge radii are then determined
from Eqs.~(\ref{ErrorOrig}) and (\ref{ErrorOrig2}) for LO and NLO and
Eq.~(\ref{ErrorMod}) from N$^2$LO on. The gray bands in
Figs.~\ref{fig:ccimsrg_a1} and \ref{fig:ccimsrg_a2} indicate these
uncertainties extracted from the IM-SRG results as representatives for
the three different many-body approaches. For the neutron-rich
isotopes $^{24}$O and $^{48}$Ca, the LO interaction does not reproduce
the correct shell closures at the Hartree-Fock level and, thus, the
closed-shell formulations of CC and IM-SRG typically fail to
converge. In these cases we simply use the HF ground-state energy for
the uncertainty quantification.
Generally we find a systematic decrease of the uncertainties with
increasing chiral order, as expected. For the lower orders up to
N$^2$LO, the interaction uncertainties are significantly larger than
the many-body uncertainties. Only at N$^3$LO and N$^4$LO are the
interaction and many-body uncertainties of comparable size. We
conclude from this observation that the many-body methods and their
truncation uncertainties are sufficiently well controlled to
address nuclei in the medium-mass regime with chiral
interactions. Even at the highest available order of the chiral
expansion, the different sources of uncertainties are comparable in
size, so that a significant improvement on the total uncertainty would
require improvements on all aspects of the calculation.
The sequence of ground-state energies from LO to N$^4$LO for these
medium-mass nuclei shows the same systematic pattern observed in light
nuclei: The LO interactions for both cutoffs produce drastic
overbinding and unrealistic ground states. Going to NLO the energy
jumps and the overbinding is reduced significantly. The step to
N$^2$LO does not affect the ground-state energies for the oxygen
isotopes, but lowers the ground-state energies for the calcium
isotopes again. Going to N$^3$LO the ground-state energies exhibit another
jump leading to a moderate underbinding compared to experiment. From
N$^3$LO to N$^4$LO the energies remain stable for all nuclei.
As repeatedly emphasized,
one has to keep in mind that the 3N interactions, which appear from
N$^2$LO on, are not included in these calculations. Therefore, we
cannot draw rigorous conclusions about the convergence of the chiral
expansion at this stage. It will be very interesting to explore how
the inclusion of a consistent 3N interaction fitted in the few-body
sector for N$^2$LO and beyond will change the observed trends in
ground-state energies of medium-mass nuclei. This is the prime goal of
our ongoing research program.
The charge radii mirror the pattern observed for the ground-state
energies. As the ground-state energy increases and the binding
decreases, the charge radii increase as expected from a naive
mean-field picture. For N$^3$LO and N$^4$LO the charge radii for
$^{16}$O are close to the experimental value, however, for
$^{40,48}$Ca the radii are underestimated by about $0.4\,\text{fm}$
although the nuclei are underbound. It remains to be seen how the 3N
contributions affect the radii, but it is unlikely that the inclusion
of the consistent 3N interactions alone will resolve this discrepancy.
\begin{table}[t]
\renewcommand{\arraystretch}{1.2}
\begin{ruledtabular}
\begin{tabular}{ c | c | c | c | c | c | c | c }
Nucleus & Method & LO & NLO & N${}^2$LO & N${}^3$LO & N${}^4$LO &
Exp. \\ \hline
\multicolumn{8}{l}{$R = 0.9\mbox{ fm}$}
\\ \hline
$^{16}$O & CCSD & $-$13.92 ; $-$13.84 & $-$8.81 ; $-$8.76 & $-$8.74 ; $-$8.71 & $-$6.97 ; $-$6.90 & $-$6.74 ; $-$6.69 & $-$7.98 \\
& CR-CC(2,3) & $-$13.97 ; $-$13.86 & $-$8.93 ; $-$8.82 & $-$8.88 ; $-$8.79 & $-$7.10 ; $-$6.97 & $-$6.87 ; $-$6.77 & \\
& IM-SRG & $-$13.96 ; $-$13.86 & $-$8.96 ; $-$8.83 & $-$8.94 ; $-$8.82 & $-$7.17 ; $-$7.01 & $-$6.96 ; $-$6.81 &\\ \hline
$^{24}$O & CCSD & $-$10.11 ; $-$10.33 & $-$8.05 ; $-$7.93 & $-$8.09 ; $-$8.03 & $-$5.80 ; $-$5.68 & $-$5.48 ; $-$5.40 & $-$7.04 \\
& CR-CC(2,3) & $-$10.59 ; $-$10.89 & $-$8.17 ; $-$8.00 & $-$8.23 ; $-$8.11 & $-$5.94 ; $-$5.76 & $-$5.63 ; $-$5.48 & \\
& IM-SRG & --- ; --- & $-$8.19 ; $-$8.00 & $-$8.26 ; $-$8.12 & $-$5.99 ; $-$5.78 & $-$5.70 ; $-$5.52 & \\ \hline
$^{40}$Ca & CCSD & $-$16.78 ; $-$16.30 & $-$10.69 ; $-$10.37 & $-$12.19 ; $-$12.01 & $-$7.79 ; $-$7.42 & $-$7.27 ; $-$6.99 & $-$8.55 \\
& CR-CC(2,3) & $-$16.83 ; $-$16.33 & $-$10.84 ; $-$10.44 & $-$12.37 ; $-$12.10 & $-$7.94 ; $-$7.49 & $-$7.44 ; $-$7.07 & \\
& IM-SRG & $-$16.82 ; $-$16.32 & $-$10.86 ; $-$10.46 & $-$12.40 ; $-$12.12 & $-$7.96 ; $-$7.50 & $-$7.48 ; $-$7.09 & \\ \hline
$^{48}$Ca & CCSD & --- ; --- & $-$10.77 ; $-$10.35 & $-$13.05 ; $-$12.82 & $-$7.03 ; $-$6.52 & $-$6.59 ; $-$6.17 & $-$8.67 \\
& CR-CC(2,3) & --- ; --- & $-$10.92 ; $-$10.42 & $-$13.21 ; $-$12.89 & $-$7.19 ; $-$6.59 & $-$6.75 ; $-$6.25 & \\
& IM-SRG & --- ; --- & $-$10.93 ; $-$10.43 & $-$13.20 ; $-$12.89 & $-$7.20 ; $-$6.59 & $-$6.78 ; $-$6.26 & \\ \hline
\multicolumn{8}{l}{$R = 1.0\mbox{ fm}$}
\\ \hline
$^{16}$O & CCSD & $-$13.84 ; $-$13.75 & $-$9.36 ; $-$9.30 & $-$8.98 ; $-$8.95 & $-$7.00 ; $-$6.93 & $-$7.06 ; $-$7.02 & $-$7.98\\
& CR-CC(2,3) & $-$13.88 ; $-$13.77 & $-$9.45 ; $-$9.36 & $-$9.10 ; $-$9.02 & $-$7.14 ; $-$7.01 & $-$7.20 ; $-$7.10 & \\
& IM-SRG & $-$13.87 ; $-$13.76 & $-$9.47 ; $-$9.36 & $-$9.15 ; $-$9.05 & $-$7.25 ; $-$7.06 & $-$7.29 ; $-$7.14 & \\ \hline
$^{24}$O & CCSD & --- ; $-$10.53 & $-$8.59 ; $-$8.47 & $-$8.34 ; $-$8.27 & $-$5.72 ; $-$5.64 & $-$5.78 ; $-$5.72 & $-$7.04\\
& CR-CC(2,3) & --- ; $-$10.97 & $-$8.69 ; $-$8.53 & $-$8.45 ; $-$8.33 & $-$5.87 ; $-$5.72 & $-$5.92 ; $-$5.80 & \\
& IM-SRG & --- ; --- & $-$8.70 ; $-$8.53 & $-$8.46 ; $-$8.34 & $-$5.95 ; $-$5.76 & $-$5.99 ; $-$5.83 & \\ \hline
$^{40}$Ca & CCSD & $-$17.07 ; $-$16.68 & $-$11.81 ; $-$11.46 & $-$12.69 ; $-$12.47 & $-$7.34 ; $-$7.12 & $-$7.45 ; $-$7.25 & $-$8.55 \\
& CR-CC(2,3) & $-$17.10 ; $-$16.69 & $-$11.91 ; $-$11.52 & $-$12.81 ; $-$12.53 & $-$7.49 ; $-$7.18 & $-$7.58 ; $-$7.31 & \\
& IM-SRG & $-$17.10 ; $-$16.69 & $-$11.92 ; $-$11.52 & $-$12.83 ; $-$12.54 & $-$7.51 ; $-$7.18 & $-$7.60 ; $-$7.31 & \\ \hline
$^{48}$Ca & CCSD & --- ; $-$13.82 & $-$11.87 ; $-$11.42 & $-$13.59 ; $-$13.27 & $-$4.37 ; --- & $-$4.91 ; $-$4.51 & $-$8.67\\
& CR-CC(2,3) & --- ; $-$14.13 & $-$11.98 ; $-$11.48 & $-$13.70 ; $-$13.32 & $-$4.53 ; --- & $-$5.10 ; $-$4.58 & \\
& IM-SRG & --- ; --- & $-$11.98 ; $-$11.48 & $-$13.68 ; $-$13.31 & --- ; --- & --- ; --- & \\ \hline
\end{tabular}
\end{ruledtabular}
\caption{\label{tab:ccimsrg_energy}
Ground-state energies per nucleon, in MeV, using SRG-evolved SCS NN
interactions from LO to N${}^{4}$LO at $R=0.9$~fm and $R=1.0$~fm obtained from CCSD, CR-CC(2,3) and IM-SRG calculations. For each isotope, method and chiral order, two numbers are given, where the first corresponds to an SRG flow parameter $\alpha = 0.04\,\text{fm}^4$ and the second to $\alpha = 0.08\,\text{fm}^4$. If no result is quoted, the CC or IM-SRG equations did not provide a stable solution because the initial HF single-particle spectrum does not exhibit the correct shell closures.}
\end{table}
\begin{table}[t]
\renewcommand{\arraystretch}{1.2}
\begin{ruledtabular}
\begin{tabular}{ c | c | c | c | c | c | c | c }
Nucleus & Method & LO & NLO & N${}^2$LO & N${}^3$LO & N${}^4$LO & Exp. \\ \hline
\multicolumn{8}{l}{$R = 0.9\mbox{ fm}$}
\\ \hline
$^{16}$O & CCSD & 1.72 ; 1.78 & 2.18 ; 2.22 & 2.25 ; 2.28 & 2.50 ; 2.52 & 2.55 ; 2.57 & 2.70 \\
& IM-SRG & 1.72 ; 1.78 & 2.20 ; 2.23 & 2.26 ; 2.29 & 2.51 ; 2.54 & 2.57 ; 2.59 & \\ \hline
$^{24}$O & CCSD & 1.77 ; 1.84 & 2.15 ; 2.19 & 2.20 ; 2.22 & 2.52 ; 2.54 & 2.60 ; 2.63 & --- \\
& IM-SRG & --- ; --- & 2.16 ; 2.20 & 2.20 ; 2.23 & 2.53 ; 2.55 & 2.62 ; 2.65 & \\ \hline
$^{40}$Ca & CCSD & 2.00 ; 2.07 & 2.60 ; 2.65 & 2.53 ; 2.55 & 2.94 ; 2.97 & 3.09 ; 3.11 & 3.48 \\
& IM-SRG & 2.00 ; 2.07 & 2.63 ; 2.68 & 2.55 ; 2.57 & 2.95 ; 2.97 & 3.11 ; 3.13 & \\ \hline
$^{48}$Ca & CCSD & --- ; --- & 2.58 ; 2.64 & 2.41 ; 2.43 & 2.89 ; 2.91 & 3.06 ; 3.08 & 3.48 \\
& IM-SRG & --- ; --- & 2.61 ; 2.67 & 2.43 ; 2.44 & 2.91 ; 2.92 & 3.08 ; 3.10 & \\ \hline
\multicolumn{8}{l}{$R = 1.0\mbox{ fm}$}
\\ \hline
$^{16}$O & CCSD & 1.77 ; 1.83 & 2.12 ; 2.16 & 2.21 ; 2.24 & 2.54 ; 2.56 & 2.54 ; 2.56 & 2.70\\
& IM-SRG & 1.77 ; 1.83 & 2.13 ; 2.17 & 2.22 ; 2.25 & 2.55 ; 2.57 & 2.55 ; 2.57 & \\ \hline
$^{24}$O & CCSD & --- ; 1.88 & 2.08 ; 2.12 & 2.14 ; 2.17 & 2.56 ; 2.58 & 2.57 ; 2.59 & ---\\
& IM-SRG & --- ; --- & 2.09 ; 2.13 & 2.15 ; 2.18 & 2.57 ; 2.60 & 2.59 ; 2.61 & \\ \hline
$^{40}$Ca & CCSD & 2.07 ; 2.14 & 2.48 ; 2.54 & 2.43 ; 2.47 & 2.98 ; 3.00 & 3.03 ; 3.05 & 3.48\\
& IM-SRG & 2.07 ; 2.13 & 2.50 ; 2.55 & 2.45 ; 2.48 & 2.99 ; 3.01 & 3.04 ; 3.06 & \\ \hline
$^{48}$Ca & CCSD & --- ; 2.19 & 2.46 ; 2.51 & 2.31 ; 2.35 & 2.84 ; --- & 2.91 ; 2.93 & 3.48\\
& IM-SRG & --- ; --- & 2.48 ; 2.54 & 2.32 ; 2.36 & --- ; --- & --- ; --- & \\ \hline
\end{tabular}
\end{ruledtabular}
\caption{\label{tab:ccimsrg_radii}
Charge radii, in fm, using SRG-evolved SCS NN
interactions from LO to N${}^{4}$LO at $R=0.9$~fm and $R=1.0$~fm obtained from CCSD and IM-SRG calculations. For each isotope, method and chiral order, two numbers are given, where the first corresponds to an SRG flow parameter $\alpha = 0.04\,\text{fm}^4$ and the second to $\alpha = 0.08\,\text{fm}^4$. If no result is quoted, the CC or IM-SRG equations did not provide a stable solution because the initial HF single-particle spectrum does not exhibit the correct shell closures.}
\end{table}
\section{Alternative approaches for uncertainty quantification}
\def\theequation{\arabic{section}.\arabic{equation}}
\label{sec:Uncertainties}
As explained in the introduction, our simple and universal approach to estimating
truncation errors assumes that the chiral expansion of the nuclear forces translates into
a similar expansion for the calculated observables, see Eq.~(\ref{PCAssumption}).
While this assumption holds true for the scattering amplitude in a
perturbative regime, it is violated in the near-threshold kinematics
if the corresponding scattering lengths take large values
\cite{Epelbaum:2017byx}, as is the case for the NN $^1$S$_0$ and $^3$S$_1$
partial
waves. The large S-wave NN scattering lengths also result in strong
cancellations between the kinetic and potential energies
when calculating the spectra of light nuclei \cite{Weinberg:1991um}. Instead of
trying to account for all relevant dynamically generated fine-tuned scales in
all partial waves and for all kinematical conditions,
we use a simplistic, universal approach to uncertainty quantification by
incorporating the
information about the actual pattern of the chiral expansion for a
given observable in order to account for the above-mentioned departures from
naive dimensional analysis. In the following, we address the reliability
of the resulting error estimations, discuss the robustness of our
approach and consider two alternative formulations.
\begin{itemize}
\item
{\bf Alternative approach 1}\\
We first explore the possibility of relaxing the constraints in Eq.~(\ref{ErrorOrig2}). To
retain a realistic estimation of the truncation error especially at
low orders of the chiral expansion, we still make use of the information about
the explicit size of the order-$Q^i$ contributions to an observable of
interest for all available chiral orders. Specifically, we replace
Eqs.~(\ref{ErrorOrig}) and (\ref{ErrorOrig2}) by
\begin{equation}
\label{ErrorDickComplete}
\delta X^{(0)} = \max_{i \geq 2} \Big( Q^2 | X^{(0)} |, \; Q^{2 - i} | \Delta
X^{(i)} | \Big), \quad \quad \delta X^{(j)} = Q^{j-1} \delta X^{(0)},
\quad \text{for} \; j \geq 2
\end{equation}
for the case of complete calculations. Such an approach
may be expected to provide a more realistic estimation of
uncertainties at lower orders in the chiral expansion as compared to
the standard method.
For incomplete calculations
based on two-nucleon forces only, we rather estimate $\delta X^{(0)}$
via
\begin{equation}
\label{ErrorDickIncomplete}
\delta X^{(0)} = \max_{i \geq 3} \Big( Q^2 |X^{(0)}|, |\Delta X^{(2)}|, Q^{-1} |\Delta X^{(i)}| \Big)
, \quad \quad \delta X^{(j)} = Q^{j-1} \delta X^{(0)},
\quad \text{for} \; j \geq 2 \,.
\end{equation}
In practice, the above modifications are found to lead to very small changes in the
estimated theoretical uncertainties. For example, using
Eq.~(\ref{ErrorDickComplete}), we obtain for the neutron-proton total cross section
at $E_{\rm lab} = 143~$MeV for the cutoff of $R=0.9~$fm
\begin{equation}
52.5 \pm 11.8_{[Q^0]} \; \rightarrow \; 49.1 \pm 5.1_{[Q^2]} \; \rightarrow
\; 54.2 \pm 2.2_{[Q^3]} \; \rightarrow \; 53.7 \pm 1.0_{[Q^4]}
\; \rightarrow \; 53.9 \pm 0.4_{[Q^5]} \,,
\end{equation}
which has to be compared with the estimation based on the original
approach using Eqs.~(\ref{ErrorOrig}) and (\ref{ErrorOrig2}):
\begin{equation}
52.5 \pm 9.8_{[Q^0]} \; \rightarrow \; 49.1 \pm 5.1_{[Q^2]} \; \rightarrow
\; 54.2 \pm 2.2_{[Q^3]} \; \rightarrow \; 53.7 \pm 1.0_{[Q^4]}
\; \rightarrow \; 53.9 \pm 0.4_{[Q^5]} \,.
\end{equation}
Thus, in this particular case, the modification only amounts to a
slight increase of the theoretical uncertainty at LO. Similarly, we
find very minor changes when using Eq.~(\ref{ErrorDickIncomplete})
instead of Eq.~(\ref{ErrorMod}) to estimate truncation errors in incomplete few-body
calculations based on two-nucleon interactions only, see Fig.~\ref{fig:Nd_Errors}
for representative examples.
\begin{figure}[tb]
\includegraphics[width=\textwidth,keepaspectratio,angle=0,clip]{Nd_200MeV_Errors.pdf}
\caption{(Color online) Predictions for the differential cross
section, and deuteron tensor analyzing powers $A_{yy}$, $A_{xz}$ and
$A_{xx}$ at the laboratory energy of $200$~MeV based on the
NN potentials of Refs.~\cite{Epelbaum:2014efa,Epelbaum:2014sza} for
$R=0.9$~fm without including the 3NF.
The bands of
increasing width show the estimated theoretical uncertainties at N$^4$LO (red),
N$^3$LO (blue), N$^2$LO (green) and NLO (yellow).
The theoretical uncertainties in the upper and lower rows are estimated
using Eqs.~(\ref{ErrorMod}) and (\ref{ErrorDickIncomplete}),
respectively.
The dotted (dashed)
lines show the results based on the CD Bonn NN potential (CD Bonn NN
potential in combination with the Tucson-Melbourne 3NF).
Open circles are proton-deuteron data from
Ref.~\cite{vonPrzewoski:2003ig}.
}
\label{fig:Nd_Errors}
\end{figure}
\item
{\bf Alternative approach 2}\\
Furthermore, we consider a minimalistic approach for uncertainty
quantification of calculated ground state energies that does not rely on
the knowledge of contributions beyond the leading order by assigning
the uncertainties as
\begin{equation}
\label{approach2}
\delta E^{(0)} = Q^2 | \langle V \rangle^{(0)} |, \quad \quad \delta E^{(i
\ge 2)} = Q^{i+1} | \langle V \rangle^{(0)} |
\end{equation}
without any further constraints. Moreover, the expansion parameter $Q$ is based
on the $p_{\hbox{\scriptsize avg}}$ value calculated at leading order, given in
Table~\ref{tab:MomScale}, whereas in the original approach and
Alternative 1 we used the average of $p_{\hbox{\scriptsize avg}}$ over
all available chiral orders. Thus, the uncertainties are estimated
entirely based on the leading order information.
Notice that using the expectation value of the potential energy rather
than the binding energy as done in the original approach and
Alternative 1 is crucial in order to account for the fine-tuning
associated with the NN interaction being close to the unitary limit
(large S-wave scattering lengths). While neglecting the
fine-tuned nature of the binding energies in the other two approaches
is, to a large extent, effectively compensated by employing the
available information about the actual pattern of the chiral
expansion, an attempt to use the binding energy instead of
$ \langle V \rangle^{(0)} $ in Eq.~(\ref{approach2}) will
yield drastically underestimated truncation errors.
This simple minimalistic approach has the appealing feature that the
estimated uncertainties for the energies beyond the leading order do
not involve any information on the specific behavior at higher orders
as it only builds upon the expected suppression of higher-order
contributions of the chiral EFT expansion.
On the other hand,
this method is less universal than the other two approaches
since it is defined specifically for the bound state energy.
\end{itemize}
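For definiteness, the two alternative prescriptions can be condensed into a short sketch implementing Eqs.~(\ref{ErrorDickComplete}) and (\ref{approach2}); all numerical inputs below are invented for illustration and do not correspond to any observable quoted above.

```python
def delta_X0_alt1(Q, X0, dX):
    """Alternative 1, Eq. (ErrorDickComplete), for complete calculations:
    dX maps the chiral order i (>= 2) to the order-Q^i shift Delta X^(i)."""
    return max(Q**2 * abs(X0),
               *(Q**(2 - i) * abs(d) for i, d in dX.items()))

def delta_Xj_alt1(Q, X0, dX, j):
    """Higher-order errors: delta X^(j) = Q^(j-1) * delta X^(0) for j >= 2."""
    return Q**(j - 1) * delta_X0_alt1(Q, X0, dX)

def delta_E_alt2(Q, V0, order):
    """Alternative 2, Eq. (approach2), built on LO information only:
    delta E^(0) = Q^2 |<V>^(0)|, delta E^(i>=2) = Q^(i+1) |<V>^(0)|."""
    return Q**2 * abs(V0) if order == 0 else Q**(order + 1) * abs(V0)

# Invented inputs for illustration:
Q, X0 = 0.31, 52.5
dX = {2: -3.4, 3: 5.1, 4: -0.5, 5: 0.2}
print(delta_X0_alt1(Q, X0, dX))       # LO uncertainty
print(delta_Xj_alt1(Q, X0, dX, 4))    # order-Q^4 uncertainty
print(delta_E_alt2(Q, -120.0, 3))     # Alternative 2 at order i = 3
```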
\begin{figure}
\includegraphics[width=0.99\columnwidth]{res_gs_Eb_light_R10_originalA1A2.pdf}
%
\caption{(Color online)
Results from Fig.~\ref{Fig:res_gs_Eb_A03A10} showing chiral
uncertainties as presented in the Introduction (grey bars)
compared with the two alternative uncertainty estimates (green and
pale red bars), discussed in the text. The red error bars indicate
the many-body uncertainties. For comparison, the experimental
ground state energies are also shown as the blue bars.
\label{Fig:res_gs_Eb_light_R10_originalA1A2}}
\end{figure}
\begin{figure}
\includegraphics[width=0.99\columnwidth]{res_gs_EA_medium_R10a08_originalA1A2.pdf}
%
\caption{(Color online)
Results for ground state energies per nucleon of closed (sub)shell
nuclei, showing chiral
uncertainties as presented in the Introduction (gray bars)
compared with the two alternative uncertainty estimates (green
and pale red bars), discussed in the text. No numerical
many-body uncertainties are shown.
All results correspond to $R=1.0$~fm and SRG $\alpha=0.08$~fm$^4$ except for
$^{48}$Ca at N$^3$LO, where the
results for $\alpha=0.04$~fm$^4$ were taken due to the
unavailability of the ones for
$\alpha=0.08$~fm$^4$.
For comparison, the
experimental values are also shown as the blue bars.
\label{Fig:res_gs_EA_medium_R10a08_originalA1A2}}
\end{figure}
In Fig.~\ref{Fig:res_gs_Eb_light_R10_originalA1A2} we show the results
for light nuclei along with the uncertainty estimates presented in
Fig.~\ref{Fig:res_gs_Eb_A03A10} and the uncertainty estimates from these
alternative approaches. Overall, Alternative 1 produces
uncertainty estimates very similar to those of the original approach for these
calculations which are truncated at N$^2$LO, but there are some
significant differences in the error estimates with Alternative 2.
One of the most notable differences is for $^{10}$B, where Alternative
2 produces the largest chiral uncertainty, and in general, Alternative
2 suggests larger chiral uncertainties than the original approach or
Alternative 1 as $A$ increases. Another significant difference is for
$A = 8$ ($^8$He, $^8$Li, and $^8$Be), where the original error
estimates increase significantly as one proceeds towards $N=Z$ at
fixed $A$, but this does not happen as strongly with Alternative 2.
In Fig.~\ref{Fig:res_gs_EA_medium_R10a08_originalA1A2} we show the
results for ground state energies per nucleon of light and medium
nuclei with closed (sub)shells up to N$^4$LO with the different chiral
error estimates. Overall, Alternative 2 produces very similar
uncertainty estimates to those of the original approach for these calculations,
but there are significant differences in the error estimates with
Alternative 1. In particular, for the medium-mass nuclei $^{24}$O,
$^{40}$Ca, and $^{48}$Ca, Alternative 1 produces very large
uncertainties at LO. This can be traced back to the large differences
between the N$^2$LO results and N$^3$LO and N$^4$LO results for these
nuclei.
We emphasize that the original error estimates and Alternative 1
are significantly influenced by the missing 3N (and possibly 4N)
forces at N$^3$LO and N$^4$LO.
Clearly the role of 3N (and possibly 4N) forces becomes more
important for these medium-mass nuclei, not only for the actual ground
state energies, but also for the chiral truncation uncertainty
estimates.
We interpret the differences in the estimated truncation errors,
emerging from using the considered schemes, as an intrinsic
uncertainty of our approach to error analysis. It would be interesting to see if
it can be reduced by performing a more refined analysis using Bayesian
methods, which would also provide a statistical interpretation of the
theoretical error bars~\cite{Furnstahl:2015rha,Melendez:2017phj}.
\section{Summary and conclusions}
\def\theequation{\arabic{section}.\arabic{equation}}
\label{sec:Summary}
In this paper, we performed a comprehensive study of few- and
many-nucleon observables based on the novel SCS chiral NN potentials
of Refs.~\cite{Epelbaum:2014efa,Epelbaum:2014sza}.
The pertinent results of our calculations can be summarized as follows:
\begin{itemize}
\item
We have analyzed various
nd elastic scattering and breakup observables and estimated
truncation errors at different orders of the chiral
expansion. Similarly to other calculations, we
observe a considerable underprediction of the nd elastic scattering
analyzing power $A_y$ at low energy starting from N$^2$LO, the feature
commonly referred to as the $A_y$-puzzle. At intermediate energies,
the discrepancies between the calculated elastic scattering
observables based on NN forces only and experimental data are, in many
cases, significantly larger than the theoretical uncertainty
at N$^3$LO and N$^4$LO and agree well
with the expected size of the 3NF contributions.
This makes elastic nucleon-deuteron scattering
in this energy range a particularly promising testing ground for the
chiral 3NF. On the other hand, the considered breakup observables
are well reproduced, leaving little room for possible 3NF effects
except for the symmetric space star configuration at low energy,
where large deviations are observed. For these observables,
known to represent another low-energy puzzle, our
calculations agree with the ones based on other phenomenological and
chiral EFT nuclear forces, and the truncation errors turn out to be very
small.
\item
We have calculated various properties of $A=3,4$ nuclei in the
framework of the Faddeev-Yakubovsky equations and studied light
p-shell nuclei using the NCCI method. In the latter case, we were able
to perform calculations at all chiral orders without SRG
transformations for $A \le 6$
using the cutoffs $R=0.9$, $1.0$ and $1.1$~fm. For heavier nuclei, we
had to rely on SRG evolution starting from N$^3$LO in order to
achieve converged results. We found a qualitatively similar
convergence pattern in all considered cases, namely a significant
overbinding at LO, results close to the experimental values at NLO and
N$^2$LO and underbinding at N$^3$LO and N$^4$LO. We have also
calculated ground state magnetic moments of light nuclei based
on the single-nucleon current operator and estimated the corresponding
NCCI extrapolation and truncation errors.
\item
To obtain results for medium-mass nuclei, we have performed
state-of-the-art calculations within the coupled-cluster and in-medium
similarity renormalization group frameworks. The obtained results for
the ground state energies of $^{16,24}$O and $^{40,48}$Ca show a
similar pattern to that for light nuclei with the amount of
overbinding (underbinding) at LO, NLO and N$^2$LO (N$^3$LO and
N$^4$LO) tending to increase with the number of nucleons $A$. The slower
convergence of the chiral expansion for heavier nuclei is to be
expected and reflects the increasing sensitivity to higher-momentum
components of the interaction. The calculated charge radii of the considered
medium-mass nuclei show a systematic improvement with the chiral
order, but remain underestimated at N$^4$LO.
\item
Finally, we have addressed the reliability of our error analysis by
exploring alternative approaches for uncertainty
quantifications. We found, in general, a satisfactory agreement
between all considered methods.
\end{itemize}
Our results demonstrate that the SCS chiral NN potentials are well
suited for \emph{ab initio} few- and many-body calculations and provide a
natural reference point for systematic studies of 3NF effects and
specific details of the NN interactions such as the choice of the
basis of contact interactions and regularization schemes. It would be
interesting to perform similar calculations using the new
SMS chiral NN potentials of Ref.~\cite{Reinert:2017usi}, which
provide an outstanding description of neutron-proton and
proton-proton scattering data below
$E_{\rm lab} = 300$~MeV and are considerably
softer than the SCS potentials starting from N$^3$LO. Such a study would, in
particular, bring insights into the role of the redundant contact
interactions at N$^3$LO. Notice that the new SMS interactions
also provide the flexibility to propagate statistical uncertainties of
the NN LECs and to quantify the error from the uncertainty in the $\pi N$ LECs and the
choice of the energy range used in the determination of the NN contact
interactions. Finally and most importantly, the calculations should
be extended by the inclusion of the consistent 3NFs
\cite{vanKolck:1994yi,Epelbaum:2002vt,Bernard:2007sp,Bernard:2011zr,Krebs:2012yv,Krebs:2013kha,Epelbaum:2014sea}. Work
along these
lines is in progress by the LENPIC Collaboration \cite{Hebeler:2015wxa}.
\section*{Acknowledgments}
This work was supported by BMBF (contracts No.~05P2015 - NUSTAR R\&D and No. 05P15RDFN1 - NUSTAR.DA),
by the European
Community-Research Infrastructure Integrating Activity ``Study of
Strongly Interacting Matter'' (acronym HadronPhysics3,
Grant Agreement n. 283286) under the Seventh Framework Programme of EU,
the ERC project 259218 NUCLEAREFT, by the DFG (SFB 1245), by DFG and
NSFC (CRC 110), by the Polish National Science
Center under Grants No. 2016/22/M/ST2/00173
and 2016/21/D/ST2/01120
and by the Chinese Academy of Sciences (CAS) President's
International Fellowship Initiative (PIFI) (Grant No. 2018DM0034).
In addition, this
research was supported in part by the National Science Foundation
under Grants No.~NSF PHY11-25915 and NSF PHY16-14460, by the US Department of Energy under
Grants DE-FG02-87ER40371, DE-SC0008485, DE-SC0008533, DE-SC0018223 and DE-SC0015376.
This research used resources of the National
Energy Research Scientific Computing Center (NERSC) and
the Argonne Leadership Computing Facility (ALCF), which
are US Department of Energy Office of Science user facilities,
supported under Contracts No. DE-AC02-05CH11231 and
No. DE-AC02-06CH11357, and computing resources provided
under the INCITE award `Nuclear Structure and Nuclear Reactions' from
the US Department of Energy, Office of Advanced Scientific Computing
Research. Further computing resources were provided by the TU Darmstadt (lichtenberg) and on JUQUEEN and JURECA
of the J\"ulich Supercomputing Center, J\"ulich, Germany.
\section{Introduction}
The experimental research on Bose-Einstein condensates trapped in
ring-shaped optical lattices constitutes a promising area that
opens the possibility of studying rich emergent physics. As a
consequence, important efforts are being made towards the effective
realization of such configurations \cite{amico1,hen09,jen16}. For
instance, a lattice of tunnel junctions on a ring would enable the
creation of lattice models with periodic boundary conditions and with
the resulting ability to support piercing magnetic fluxes and the
associated topological phenomena \cite{jen16}. In particular, it
would provide an ideal environment for the study of the Kibble-Zurek
mechanism, where the buildup of winding number in the phase transition
from Mott insulator to superfluid driven by tunneling rate increase,
is expected to occur, except for very slow quench times
\cite{zurek}. On the other hand, quantum information applications of
such configurations have begun to be devised, such as the
experimentally feasible qubit system based on bosonic cold atoms
trapped in ring-shaped optical lattices proposed by Amico {\it et al.}
\cite{amico}. A practical implementation of this system could lead to
substantially lower decoherence rates, as the use of neutral atoms as
flux carriers would minimize the well-known characteristic
fluctuations in the magnetic fields of solid state Josephson qubits.
Concerning theoretical studies on this issue, the dynamics on ring
lattices with three \cite{trespozos2011} and four wells
\cite{cuatropozos06} have been previously investigated through
multimode (MM) models that utilized {\em ad-hoc} values for the
hopping and on-site energy parameters. Substantial improvements were
reported in Ref. \cite{jezek13b}, where such parameters were
calculated {\em ab initio} by constructing a set of two-dimensional
localized wave functions in terms of the stationary solutions of the
Gross-Pitaevskii (GP) equation for a ring with an arbitrary number of
wells. This procedure is similar to that used in the two-mode
(TM) model of a double-well condensate
\cite{smerzi97,ragh99,anan06,jia08,albiez05,mele11,abad11,doublewell},
where the order parameter is described as a superposition of wave
functions localized in each well with time dependent coefficients
\cite{smerzi97,ragh99}. Such localized wave functions are
straightforwardly obtained in terms of the stationary symmetric and
antisymmetric states, which in turn determine the parameters involved
in the TM equations of motion \cite{smerzi97,ragh99,anan06,jia08}. The
corresponding dynamics exhibits Josephson and self-trapping regimes
\cite{smerzi97,ragh99} which have been experimentally observed by
Albiez {\it et al.} \cite{albiez05}. The self-trapping (ST)
phenomenon, which is also present in extended optical lattices
\cite{optlat,Anker2005,Wang2006}, is a nonlinear effect where the
difference of populations between neighboring sites does not change
sign during the whole time evolution. There is nowadays active
research on the ST effect in different types of systems,
including mixtures of atomic species \cite{stlastoplat,mele11}.
In recent works it has been shown that the on-site interaction energy
dependence on the population imbalance has to be taken into account
for the TM model, in order to accurately describe the exact dynamics in
double-well systems \cite{jezek13a,jezek13b,nosEPJD}. Such an
imbalance dependence gives rise to a reduced effective on-site
interaction energy parameter when it is introduced into the equations
of motion of the model. In the Thomas-Fermi approximation it has been
shown that such a parameter is reduced by a factor of 7/10, 3/4, or
5/6 depending on the dimensionality of the system. Later, it has been
proven that the effective on-site interaction energy parameter is also
fundamental to describe the dynamics in a ring-shaped lattice, within
the frame of MM models in two-dimensional condensates as well
\cite{jezek13b}.
The phase space of a MM dynamics in a $N_c$-well system has $2N_c-2$
dimensions. Hence, the analysis of such dynamics for $N_c\ge 3$
is not a simple task. The goal of this work is to
show that useful results can still be obtained by using mathematical
tools such as symmetry criteria and special techniques developed for
non-linear differential equations \cite{Floquet}. In particular, we
will numerically treat a three-dimensional four-site ring-shaped
optical lattice. The construction of its multimode parameters will be
based on previous works \cite{jezek13b,cat11}, where a method to
obtain localized on-site Wannier-like (WL) functions in a ring-shaped
optical lattice was developed. These states are obtained as a
superposition of stationary states of the GP equation with different
winding numbers. Here we will show how to optimally localize these WL
states to finally obtain the effective on-site interaction energy
parameter for furnishing an accurate model. On the other hand, by
restricting the dynamics to a symmetric case, we will construct a
two-mode type Hamiltonian able to predict transitions to the ST
regime. With such a Hamiltonian we will calculate the orbit periods in
both Josephson and ST regimes. Next, we will show that the location
of the two-mode critical point of the Josephson to ST transition turns
out to be quite useful to determine the domains of different regimes
in an extended region compared to that of the symmetric case.
These findings will be confirmed by local TM models involving
only pairs of neighboring sites \cite{Anker2005,Wang2006}.
Finally, by calculating Floquet multipliers \cite{Floquet} we
will analyze the dynamical stability in the surroundings of the
TM solutions and we will show that in the more stable regions a
criterion for calculating characteristic times can be established.
This paper is organized as follows. In Sec. \ref{multi} we describe
the trapping potential and include the equations of motion of the MM
model. Next, we explain the procedure for obtaining the localized
states used to describe the dynamics and analyze the conditions to
achieve maximally localized WL functions. To conclude this section, we
summarize the method for calculating the effective on-site energy
parameter and analyze the corresponding results of a few
representative configurations. In Sec. \ref{sec:sym} we numerically
study the dynamics, showing that one can predict the ST and Josephson
regimes using a reduced-space Hamiltonian that describes high-symmetry
systems. This reduction allows us to extend previous analytic results
of the period of the trajectories in the TM model to these systems.
Next in Sec.\ \ref{sec:floquet}, we study the dynamics close to this
highly symmetric situation by means of a Floquet analysis. Finally in
Sec. \ref{sec:regN1N3}, we numerically obtain the MM dynamics for
non-symmetric configurations in the vicinity of the symmetric
condition, establishing useful connections to the symmetric case
results and comparing with several full GP solutions. To conclude, a
summary of our work is presented in Sec. \ref{sum}. The definitions of
the parameters employed in the equations of motion are gathered in
Appendix \ref{sec:parameters}, while in Appendix \ref{sec:appB} we
give some details on the Floquet analysis theory.
\section{ The multimode model}\label{multi}
\subsection{ The trap }\label{trap4}
We consider a three-dimensional Bose-Einstein condensate of Rubidium
atoms confined by the external trap
\begin{align}
V_{\text{trap}}({\bf r} ) =& \frac{ 1 }{2 } \, m \, \left[
\omega_{x}^2 x^2 + \omega_{y}^2 y^2
+ \omega_{z}^2 z^2 \right] \nonumber \\
&+
V_b \left[ \, \cos^2(\pi x/q_0)+
\, \cos^2(\pi y/q_0)\right],
\label{eq:trap4}
\end{align}
where $m$ is the atom mass. The harmonic frequencies are given by
$ \omega_{x}= \omega_y= 2 \pi \times 70 $ Hz, and
$ \omega_{z}= 2 \pi \times 90 $ Hz, and the lattice parameter is
$ q_0= 5.1\,\mu$m. The barrier height parameter $V_b$ and the number
of particles will take different values depending on the calculation. For
instance, in Fig. \ref{fig:rho3D} we have plotted isosurfaces of the
ground-state density and the trapping potential for $N= 10^4 $ and
$V_b/(\hbar\omega_x)=25$. Hereafter, time and energy will be given in
units of $\omega_x^{-1}$ and $\hbar\omega_x$, respectively.
\begin{figure}
\begin{center}
\includegraphics[width=0.85\columnwidth,clip=true]{fig1.eps}
\end{center}
\caption{\label{fig:rho3D} (color online) Isosurfaces of the
ground-state density and the trapping potential of the four-site
system with $N=10^4$ and $V_b/\hbar\omega_x=25$ (in arbitrary
units).}
\end{figure}
\subsection{Equations of motion}
For completeness, in this section we will sketch the procedure for
obtaining the equations of motions reported in Ref. \cite{jezek13b}.
The multimode order parameter for $N_c$ sites is expressed in terms of
$N_c$ localized WL functions $w_k$ as,
\begin{equation}
\psi_{MM} ({\mathbf r}) = \sum_{k} \, b_k(t) \, w_k ({ r, \theta,z})
\,,
\label{orderparameter}
\end{equation}
where $-[(N_c-1)/2]\le k \le [N_c/2]$. Inserting the above expression
into the time dependent GP equation, the equations of motion for the
coefficients $b_k(t)= e^{i \phi_k} \, |b_k| $ are obtained, which can
be cast into $ 2 N_c $ real equations for the populations
$ n_k =|b_{k}|^2 = N_k / N $ and the phase differences
$ \varphi_k= \phi_k - \phi_{k-1}$ of each site as,
\begin{widetext}
\begin{align}
\hbar\,\frac{dn_k}{dt} =& - 2 J [ \sqrt{n_k \, n_{k+1}} \, \sin\varphi_{k+1}
-\sqrt{n_k \, n_{k-1} } \, \sin\varphi_k ]\nonumber\\
&- 2 F [ \sqrt{n_k \, n_{k+1} } (n_k + n_{k+1} ) \, \sin\varphi_{k+1}
-\sqrt{n_k \, n_{k-1} } (n_k + n_{k-1} ) \, \sin\varphi_k]
\label{ncmode1hn}
\end{align}
\begin{align}
\hbar\,\frac{d\varphi_k}{dt} = & N (U_{k-1}n_{k-1} - U_kn_k)
- J \left[ \left(\sqrt{\frac{n_k}{ n_{k-1}}} - \sqrt{\frac{n_{k-1} }{ n_k}}\,\right) \, \cos\varphi_k
+ \sqrt{\frac{n_{k-2}}{ n_{k-1} }} \, \cos\varphi_{k-1}
- \sqrt{\frac{n_{k+1} }{ n_k}} \, \cos\varphi_{k+1}\right]\nonumber\\
&- F \left[ \left( n_k \sqrt{\frac{n_k}{ n_{k-1} }} - n_{k-1} \sqrt{\frac{n_{k-1}}{ n_k}}\,\right)
\, \cos\varphi_k
+ \left(3\, \sqrt{n_{k-2} \, n_{k-1}} + n_{k-2} \sqrt{\frac{n_{k-2}}{ n_{k-1}}}\,\right)
\, \cos\varphi_{k-1} \right.\nonumber\\
&- \left.\left(3\, \sqrt{n_{k+1} \, n_k} + n_{k+1} \sqrt{\frac{n_{k+1}} { n_k}}\,\right) \,
\cos\varphi_{k+1}\right],
\label{ncmode2hn}
\end{align}
\end{widetext}
where $U_k$ is the on-site interaction energy in the $k$-site. The
bare MM model assumes a constant $U_k=U$ value. In contrast, in the
effective MM model its dependence on the imbalance is considered
$U_k=U_k(\Delta N_k)$, which gives rise to a reduced effective
parameter $U_{\text{eff}}$ (see Sec.\ \ref{sec:Ue}). The definitions of
the bare interaction parameter $U$ and the tunneling parameters $J$
and $F$ are given in Appendix \ref{sec:parameters}. As the
populations and phase differences must fulfill $\sum_k n_k=1$ and
$\sum_k \varphi_k=0$, respectively, only $2N_c- 2$ equations are
independent. In Eq.\ (\ref{ncmode2hn}) we have excluded the terms
involving the overlap between the localized densities in each site, as
this parameter turns out to be two orders of magnitude smaller than
the tunneling parameters $J$ and $F$.
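As a consistency check of the population equations, note that the right-hand sides of Eq.\ (\ref{ncmode1hn}) telescope around the ring, so that $\sum_k dn_k/dt = 0$ and the total population is conserved. A minimal numerical sketch of this check (the populations, phases, and the values of $J$ and $F$ below are purely illustrative):

```python
import numpy as np

# Consistency check of Eq. (ncmode1hn): the hopping terms telescope
# around the ring, so the total population sum_k n_k is conserved.
# The populations, phases, and J, F values below are illustrative.
def dn_dt(n, phi, J, F, hbar=1.0):
    """Population derivatives of Eq. (ncmode1hn) on a ring of len(n) sites."""
    Nc = len(n)
    dn = np.empty(Nc)
    for k in range(Nc):
        kp, km = (k + 1) % Nc, (k - 1) % Nc
        dn[k] = (-2.0 * J * (np.sqrt(n[k] * n[kp]) * np.sin(phi[kp])
                             - np.sqrt(n[k] * n[km]) * np.sin(phi[k]))
                 - 2.0 * F * (np.sqrt(n[k] * n[kp]) * (n[k] + n[kp]) * np.sin(phi[kp])
                              - np.sqrt(n[k] * n[km]) * (n[k] + n[km]) * np.sin(phi[k]))) / hbar
    return dn

rng = np.random.default_rng(0)
n = rng.random(4)
n /= n.sum()                                 # normalized populations
phi = rng.uniform(-np.pi, np.pi, 4)          # phase differences
print(abs(dn_dt(n, phi, J=-6.6e-4, F=2.08e-3).sum()))   # ~0 (telescoping)
```

The same cancellation holds for any $N_c$, since each hopping term appears twice with opposite sign.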
\subsection{ Localized states and multimode model parameters}
In this section we summarize the procedure to obtain the localized
states \cite{cat11,jezek13b} necessary to describe the dynamics. The
choice of these states is not unique, as they inherit the freedom in
the choice of the global phase of the stationary states. In the field of
atomic and molecular physics, the concept of localized molecular
orbitals, or ``Boys orbitals'' in chemistry, has long been applied, and
later also in electronic structure calculations in periodic systems
(see, e.g., the review of Ref.\ \cite{mar12} and references therein),
in order to optimize the basis set used. We will therefore analyze how
the localization of the WL functions affects the determination of the model
parameters, especially the on-site energy parameter $U$.
The stationary states $\psi_n( r, \theta, z )$ are obtained as the
numerical solutions of the three-dimensional GP equation \cite{gros61}
with different winding numbers $n$ \cite{je11,jezek13b}. Assuming
large barrier heights \cite{cat11}, the winding numbers will be
restricted to the values $-[(N_c-1)/2]\leq n \leq [N_c/2]$
\cite{je11}. We have shown in Ref. \cite{cat11} that
stationary states of different winding number are orthogonal, and can
be used to define localized, orthogonal, WL functions on each $k$
site. These are given by
\begin{equation}
w_k({ r, \theta, z })=\frac{1}{\sqrt{N_c}} \sum_{n} \psi_n({ r, \theta, z})
\, e^{-i n\theta_k } \,,
\label{wannier}
\end{equation}
where $\theta_k=2\pi k/N_c$.
The ground state ($n=0$) and the state with maximum winding number,
$n=2$ for the four-site system, have completely uniform phases in each
well \cite{je11}. Both functions can be chosen to be real with
$\psi_0 >0 $ and $\psi_2 >0$ in the first quadrant
($0<\theta<\pi/2$). This means that we have fixed their phases to zero
in that quadrant. On the other hand, the winding numbers with
$n=\pm 1 $ correspond to vortex-like states, which have an associated
velocity field that gives rise to a non vanishing angular momentum
\cite{je11}. Although their velocity fields are very small in each
site, the phase is not absolutely uniform and to perform the sum of
Eq. (\ref{wannier}) it is important to correctly choose the global
phases of $\psi_1$ and $\psi_{-1}$ to obtain maximum localization. In
our case, without losing generality, one can set the phases of
$ \psi_1({ r, \theta, z}) $ and $\psi_{-1}(r, \theta, z)$ to zero at
the bisectrix $\theta= \pi/4$ and $z=0$. Taking into account that
$\psi_1=\psi_{-1}^*$ it is sufficient to consider a single variational
parameter $\eta$ in their phases as $e^{\pm i \eta}\psi_{\pm 1}$ to
analyze the localization of the Wannier function. The localized state
$w_k$ as a function of $ \eta $ thus acquires the form,
\begin{align}
w_k({ r, \theta,z , \eta})=&\frac{1}{2} \left[
\psi_0({ r, \theta, z}) + \psi_2({ r, \theta, z})\cos k\pi \right.
\nonumber \\
&\left. + 2 \mathrm{Re}( e^{i (\eta-k\pi/2)} \psi_1({ r, \theta,z})) \right] \,,
\label{wannier0}
\end{align}
with the conditions on each $\psi_n({ r, \theta, z}) $ given above.
The maximum localization is achieved by minimizing the spatial
dispersion of the WL wave functions in the $xy$ plane
$ \sigma^2= \langle x^2 + y^2\rangle -(\langle x\rangle ^2 + \langle
y\rangle^2)$
with respect to $\eta$. The degree of localization of the WL wave
functions strongly affects the values of the model parameters. This is
shown in Fig.\ \ref{fig:Us_eta} where we depict the calculated on-site
interaction energy $U$ as a function of $\eta$, together with the
dispersion $\sigma$. Whereas the hopping parameters turn out to be
rather independent of this phase, the parameter $U(\eta)$ turns out to
be strongly dependent. Such a variation would qualitatively alter the
dynamics predicted by multimode models, thus demonstrating the
importance of a properly localized wave function.
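The localization procedure can be illustrated with a toy one-dimensional model, in which four Gaussian envelopes on a unit ring play the role of the on-site densities and the stationary states are built as in Eq.\ (\ref{wannier}); the envelope width and grid sizes are illustrative assumptions, not GP solutions. Scanning the phase $\eta$ and minimizing the planar dispersion recovers the maximally localized state at $\eta = 0$, the convention in which all phases coincide:

```python
import numpy as np

# Toy model: four Gaussian envelopes on a unit ring stand in for the
# on-site densities; stationary states psi_n are superpositions with
# winding number n, as in Eq. (wannier).  All quantities are
# illustrative, not the GP solutions of the paper.
Nc = 4
theta = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
theta_k = 2.0 * np.pi * np.arange(Nc) / Nc

def envelope(t0, width=0.25):
    d = np.angle(np.exp(1j * (theta - t0)))        # wrapped angular distance
    return np.exp(-(d / width) ** 2)

wells = np.array([envelope(t) for t in theta_k])   # shape (Nc, Ntheta)
psi = {n: wells.T @ np.exp(1j * n * theta_k) / np.sqrt(Nc)
       for n in (-1, 0, 1, 2)}

def sigma2(eta):
    """Planar dispersion of w_0(eta) on the unit ring."""
    w0 = (psi[0] + psi[2]
          + np.exp(1j * eta) * psi[1] + np.exp(-1j * eta) * psi[-1]) / np.sqrt(Nc)
    rho = np.abs(w0) ** 2
    rho /= rho.sum()
    x, y = (rho * np.cos(theta)).sum(), (rho * np.sin(theta)).sum()
    return 1.0 - x * x - y * y                     # <x^2 + y^2> = 1 on the ring

etas = np.linspace(-np.pi / 2, np.pi / 2, 181)
best = etas[np.argmin([sigma2(e) for e in etas])]
print(best)    # minimum dispersion at eta = 0 in this phase convention
```

In the delta-envelope limit this toy gives $\sigma^2(\eta) \propto \sin^2\eta$, so the minimum at $\eta=0$ is sharp.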
\begin{figure}
\includegraphics[width=\columnwidth,clip=true]{fig2.eps}
\caption{\label{fig:Us_eta} Bare on-site interaction energy parameter
$U$ as a function of the phase $\eta$. The inset shows the
dispersion $\sigma$ of the WL functions in the $xy$ plane as a
function of $\eta$, with $l_x=l_y=\sqrt{\hbar/(m\omega_x)}$.}
\end{figure}
In Fig. \ref{fig:w0wpi4} we show the three-dimensional WL function
density $|w_0|^2$ at $z=0$ for $\eta = 0 $ and $\eta = \pi/4 $,
showing that in the first case it is clearly more localized.
\begin{figure}
\vspace*{1.5em}
\includegraphics[width=0.95\columnwidth,clip=true]{fig3.eps}
\caption{\label{fig:w0wpi4} WL wave function densities $|w_0|^2$ (in
arbitrary units) at the $z=0$ plane for $\eta=0$ (left panel) and
$\eta=\pi/4$ (right panel).}
\end{figure}
In summary, to obtain an accurate MM dynamics one should achieve the
maximum localization of the WL functions, which is found to be
fulfilled when the phases of all stationary states are chosen equal at
the bisectrix of a given site.
\subsection{\label{sec:Ue}Inclusion of effective on-site interaction effects}
In addition to the localization effects, to construct an accurate
model we need to calculate the effective on-site interaction energy
parameter $U_{\text{eff}}$. For that matter, we follow the procedure
described in Ref.\ \cite{jezek13b} valid for a ring-shaped lattice
with equal wells. We thus first numerically calculate the on-site
interaction energy $U_k(\Delta N_k)$ in the $k$ site as a function of
$\Delta N_k =N_k - N/N_c$ as \cite{jezek13b}
\begin{equation}
\frac{U_k(\Delta N_k)}{U} = \frac{\int d^3\mathbf{r}\rho_N(\mathbf{r})\rho_{N+\Delta N}(\mathbf{r})}{\int d^3\mathbf{r}\rho_N^2(\mathbf{r})},
\label{eq:Uk}
\end{equation}
with the normalized-to-unity ground-state densities $\rho_N$ and
$\rho_{N+\Delta N}$ for four-well systems of $N$ and $N+\Delta N$ total
number of particles, respectively, where $\Delta N = N_c\Delta N_k$.
The on-site interaction energy $U_{k}$ exhibits a linear dependence on
$\Delta N_k $ and it can be approximated by
\begin{equation}
\frac{U_{k}(\Delta N_k)}{U} \simeq 1 - \alpha \frac{ N_c \Delta N_k} {N} .
\label{UekN}
\end{equation}
Replacing $ U_{k-1}(\Delta N_{k-1})$ and $ U_{k}(\Delta N_k)$ given by
the above equation in Eq. (\ref{ncmode2hn}), the effective multimode
(EMM) model equations of motion read
\begin{widetext}
\begin{align}
\hbar\,\frac{dn_k}{dt} = & - 2 J [ \sqrt{n_k \, n_{k+1}} \, \sin\varphi_{k+1}
-\sqrt{n_k \, n_{k-1} } \, \sin\varphi_k ]\nonumber\\
&- 2 F [ \sqrt{n_k \, n_{k+1} } (n_k + n_{k+1} ) \, \sin\varphi_{k+1}
-\sqrt{n_k \, n_{k-1} } (n_k + n_{k-1} ) \, \sin\varphi_k]
\label{encmode1hn}
\end{align}
\begin{align}
\hbar\,\frac{d\varphi_k}{dt} = & f_{3D} ( n_{k-1} -n_{k}) N U -
\alpha ( n_{k-1} - n_{k}) N U [ N_c (n_{k-1}+ n_{k})-2 ] \nonumber\\
&- J \left[ \left(\sqrt{\frac{n_k}{ n_{k-1}}} - \sqrt{\frac{n_{k-1} }{ n_k}}\,\right) \, \cos\varphi_k
+ \sqrt{\frac{n_{k-2}}{ n_{k-1} }} \, \cos\varphi_{k-1}
- \sqrt{\frac{n_{k+1} }{ n_k}} \, \cos\varphi_{k+1}\right]\nonumber\\
&- F \left[ \left( n_k \sqrt{\frac{n_k}{ n_{k-1} }} - n_{k-1} \sqrt{\frac{n_{k-1}}{ n_k}}\,\right)
\, \cos\varphi_k
+ \left(3\, \sqrt{n_{k-2} \, n_{k-1}} + n_{k-2} \sqrt{\frac{n_{k-2}}{ n_{k-1}}}\,\right)
\, \cos\varphi_{k-1} \right.\nonumber\\
&- \left.\left(3\, \sqrt{n_{k+1} \, n_k} + n_{k+1} \sqrt{\frac{n_{k+1}} { n_k}}\,\right) \,
\cos\varphi_{k+1}\right],
\label{encmode2hn}
\end{align}
where $f_{3D}= 1-\alpha$, and hence we obtain
$U_{\text{eff}}=f_{3D}U$.
\end{widetext}
In Table \ref{tab:2} we quote the values of $\alpha$ and $f_{3D}$ for
a few configurations, and observe that for larger particle numbers and
higher barrier heights, the parameter $f_{3D}$ approaches from above
the three-dimensional Thomas-Fermi limiting value of $ 7/ 10 $,
derived for the double-well model \cite{jezek13a}. Also, it is
worth noticing that the second term on the rhs of Eq.\
(\ref{encmode2hn}) is a second order correction on the population
imbalance, which in general does not give rise to noticeable changes in the
dynamics.
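The extraction of $\alpha$ from Eq.\ (\ref{UekN}) amounts to a linear least-squares fit of the computed ratios $U_k(\Delta N_k)/U$ against $N_c \Delta N_k/N$. A minimal sketch with synthetic data (generated here with $\alpha = 0.28$, the $N=10^4$, $V_b = 25\,\hbar\omega_x$ case; in practice the ratios come from the density overlaps of Eq.\ (\ref{eq:Uk})):

```python
import numpy as np

# Illustrative recovery of the linear coefficient alpha of Eq. (UekN)
# from sampled U_k(Delta N_k)/U values.  The synthetic data use
# alpha = 0.28; real data come from the overlaps of Eq. (eq:Uk).
N, Nc, alpha_true = 1.0e4, 4, 0.28
dNk = np.linspace(-300.0, 300.0, 13)             # population imbalances
ratio = 1.0 - alpha_true * Nc * dNk / N          # U_k(Delta N_k) / U
slope, intercept = np.polyfit(Nc * dNk / N, ratio, 1)
alpha_fit = -slope
print(alpha_fit, 1.0 - alpha_fit)                # alpha and f_3D
```

For exactly linear input the fit recovers $\alpha$ and $f_{3D}=1-\alpha$ to machine precision.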
\begin{table}
\caption{\label{tab:2} Linear coefficient $\alpha$ of the
on-site interaction energy $U_k$ and factor $f_{3D}$ for $ N_c =4 $
and different number of particles and barrier heights.}
\begin{ruledtabular}
\begin{tabular}{lccc}
$ N $ & $ V_b/(\hbar\omega_x) $ & $ \alpha $ & $ f_{3D} $ \\[3pt]
\hline \\[-5pt]
$10^3$ & $ 10 $ & $ 0.20 $ & $ 0.80 $ \\[3pt]
$10^3$ & $ 15 $ & $ 0.20 $ & $ 0.80 $ \\[3pt]
$10^4$ & $ 25 $ & $ 0.28 $ & $ 0.72 $ \\[3pt]
$10^4$ & $ 35 $ & $ 0.29 $ & $ 0.71 $ \\[3pt]
\end{tabular}
\end{ruledtabular}
\end{table}
\section{\label{sec:dyn}The dynamics}
\subsection{ \label{sec:sym}TM symmetric case }
The orbits of the dynamical equations lie in a six-dimensional space,
and thus it becomes challenging to classify them according to their
possible different features. In particular, it is important to
predict the regions of self-trapped and Josephson trajectories, where
in the multi-well system we define a self-trapped site $k$ as a site
whose population difference with neighboring sites $N_k-N_{k\pm 1}$
does not change sign during the whole time evolution
\cite{cuatropozos06}. A subset of such trajectories can be found by
restricting the dynamics to a more symmetric case, which can be
described with a TM Hamiltonian.
Therefore, we will first analyze the multimode model in a symmetric
case where $n_k = n_{k+2}$ and $ \varphi_k= \varphi_{k+2}$. In this
case, the second term on the right hand side of Eq.~(\ref{encmode2hn})
vanishes, and the only difference between the MM and EMM equations of
motion is given by the use of the effective interaction parameter
$ U_{\mathrm{eff}}$ instead of the bare $ U $.
Defining the imbalance $ Z = 2 (N_0 - N_1)/N $, the phase
difference $ \varphi= \varphi_1$, and $K= 2J+ F $, the equations of
motion Eqs. (\ref{encmode1hn}) and (\ref{encmode2hn}) reduce to
\begin{equation}
\hbar \frac{dZ}{dt} = - 2 K \sqrt{1-Z^2}\sin\varphi \, ,
\label{imbe}
\end{equation}
\begin{equation}
\hbar \frac{d\varphi}{dt} = \frac{ U_{\mathrm{eff}}}{2} N Z + 2 K \frac{ Z}{\sqrt{ 1 - Z^2}} \cos\varphi \,.
\label{phasee}
\end{equation}
Then, changing for convenience the time units to $\hbar/2K$, one can
obtain a TM-type Hamiltonian for the reduced space,
\begin{equation}
H ({Z},\varphi) = \frac{1}{2} \Lambda {Z}^2 - \sqrt{1-{Z}^2}\cos\varphi ,
\label{eq:Hred}
\end{equation}
where $ \Lambda = {U_{\mathrm{eff}} N}/{(4 K)}$.
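Since Eqs.\ (\ref{imbe}) and (\ref{phasee}) are the Hamilton equations of $H(Z,\varphi)$ in the rescaled time units, any faithful integration must conserve $H$ along the orbits. A short self-contained check with a fourth-order Runge-Kutta integrator and an illustrative value of $\Lambda$:

```python
import numpy as np

# RK4 integration of the reduced TM equations (imbe)-(phasee) in the
# rescaled time units of Eq. (eq:Hred), checking that the orbits
# conserve H(Z, phi).  The value of Lambda is illustrative.
def rhs(y, lam):
    Z, phi = y
    s = np.sqrt(1.0 - Z * Z)
    return np.array([-s * np.sin(phi), lam * Z + Z * np.cos(phi) / s])

def H(y, lam):
    Z, phi = y
    return 0.5 * lam * Z * Z - np.sqrt(1.0 - Z * Z) * np.cos(phi)

def rk4(y, lam, dt, steps):
    for _ in range(steps):
        k1 = rhs(y, lam)
        k2 = rhs(y + 0.5 * dt * k1, lam)
        k3 = rhs(y + 0.5 * dt * k2, lam)
        k4 = rhs(y + dt * k3, lam)
        y = y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return y

lam = 20.0
y0 = np.array([0.1, 0.0])              # Josephson-like initial state
yT = rk4(y0, lam, dt=1e-3, steps=20000)
print(abs(H(yT, lam) - H(y0, lam)))    # ~0: H is conserved along the orbit
```

The same integrator can be used to classify orbits, since $H$ conservation bounds the imbalance excursion.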
We can thus obtain the critical imbalance between the Josephson and ST
regimes,
\begin{equation}
{Z}_c =
2 \frac{\sqrt{\Lambda-1 }}{\Lambda} ,
\label{eq:Zct}
\end{equation}
and calculate the exact time periods $T^{\mathrm{EMM}}$ using
$\Lambda$ \cite{ragh99,nosEPJD}, together with the approximations obtained in
the small-oscillation limit
\begin{equation}
T_{\mathrm{so}}^{\mathrm{EMM}}= \frac{\pi\hbar}{ K \sqrt{\Lambda + 1}} \,,
\label{eq:tpeqoscr}
\end{equation}
and in the ST regime \cite{nosEPJD}
\begin{equation}
T_{\mathrm{st}}^{\mathrm{EMM}}= \frac{ {Z}_i \pi\hbar}{ 2 K }\left( 1 - \sqrt{1- \frac{4}{\Lambda {Z}_i^2}}\right) \, ,
\label{tstbuenar}
\end{equation}
where ${Z}_i$ is the initial imbalance.
For the system with $ N= 10^4 $ particles and a barrier height
$V_b = 25 $ $ \hbar \omega_x $, we have obtained
$ J=-6.60\times 10^{-4}\hbar\omega_x $,
$F= 2.08\times 10^{-3} \hbar\omega_x$,
$U= 3.16\times 10^{-3} \hbar\omega_x$, and
$U_{\mathrm{eff}}=2.27\times10^{-3}\hbar\omega_x$ which yields a
critical imbalance $ N {Z}_c = 232$ within the EMM model. On the other
hand, using the bare value of $U$, we would have obtained a smaller
threshold $ N {Z}_c = 196 $. For the same system, the
small-oscillation period is
$ T_{\mathrm{so}}^{\mathrm{EMM}}= 47.84 \omega_x^{-1} $, which may be
compared to that obtained by means of GP simulations,
$T_{\mathrm{so}}^{\mathrm{GP}}\simeq 47.7 \omega_x^{-1}$. In Fig.
\ref {fig:joseph4} we show the time evolution for the initial
$ N {Z}_i=120 < N {Z}_c $ in the Josephson regime, within the GP, MM,
and EMM frameworks. It is worth noticing that, although the period
for this value of $Z_i$ departs from the small-oscillation limit, the
GP and exact EMM periods are in good agreement.
\begin{figure}
\includegraphics[width=0.9\columnwidth,clip=true]{fig4top.eps}
\includegraphics[width=0.9\columnwidth,clip=true]{fig4bot.eps}
\caption{\label{fig:joseph4}(color online) Condensate dynamics
arising from the GP equation, and from the EMM and the MM models for
an initial condition $N_0=2530$ and $N_1=2470$ corresponding to the
Josephson regime. Top panel: populations in each well $N_k$, bottom
panel: phase differences $\varphi_k$. }
\end{figure}
In Fig. \ref{fig:self4} we show the evolution of the populations in
each site for the initial condition $ N_0 = 2640$ and $N_1=2360$. In
this case $ N {Z}_i = 560 > N{Z}_c $, so the system
is in the ST regime. We may further calculate the period from Eq.\
(\ref{tstbuenar}), using in this case $ {Z}_i= 0.056 $, which yields
$ T_{\mathrm{st}}^{\mathrm{EMM}}= 10.391\, \omega_x^{-1} $, in good
agreement with the GP simulation and the EMM model result,
$T^{\mathrm{EMM}}=10.396\,\omega_x^{-1}$.
\begin{figure}
\includegraphics[width=0.9\columnwidth,clip=true]{fig5top.eps}
\includegraphics[width=0.9\columnwidth,clip=true]{fig5bot.eps}
\caption{\label{fig:self4}(color online) Condensate dynamics for the
initial values $N_0=2640$, and $N_1=2360$ in the ST regime. We
depict the GP simulation result, together with those arising from
the EMM and MM models. Top panel: populations in each well $N_k$,
bottom panel: phase differences $ \varphi_k$. }
\end{figure}
\subsection{\label{sec:floquet}Near the TM symmetric case}
In the TM symmetric situation of the previous section the system is
governed by the TM Hamiltonian, Eq.\ (\ref{eq:Hred}), and thus the
orbits are periodic. However, for arbitrary non-symmetric initial
conditions the dynamics become non-periodic in general. To investigate
this scenario we perform a linear analysis of the six-dimensional
dynamical system around the TM symmetric case. With this aim, we first
rewrite the time evolution equations in terms of the mean populations
and phases of non-neighboring sites $\bar{n}_{ij}=(n_{i}+n_{j})/2$,
$\bar{\varphi}_{ij}=(\varphi_{i}+\varphi_{j})/2$, and corresponding
differences $\bm\delta$ (see Appendix \ref{sec:appB}). Then,
linearizing the resulting dynamical equations in the differences we
obtain two sets of equations. On the one hand, we recover the TM
equations for ${Z}=2(\bar{n}_{02}-\bar{n}_{13})$ and
$\varphi=\bar{\varphi}_{13}$. And, on the other hand we obtain a
non-autonomous linear system for the differences, which can be cast as
\begin{equation}
\frac{d \bm{\delta}}{dt} = \mathbb{A}[{Z}(t),\varphi(t)] \bm{\delta},
\label{eq:Adt}
\end{equation}
where the vector $\bm\delta$ is defined as the following differences
\begin{multline}
\bm{\delta} = \frac{1}{2}(n_0-n_2,\varphi_0+\varphi_3-\varphi_1-\varphi_2, \\ n_1-n_3,
\varphi_0+\varphi_1-\varphi_2-\varphi_3).
\end{multline}
The periodicity of ${Z}(t)$ and $\varphi(t)$ gives rise to a Floquet
problem for $\bm{\delta}$ \cite{Floquet}. Therefore, we shall pursue
the study of the characteristic multipliers $\rho_j$ as functions of
the initial imbalance $Z(0)=Z_i$, with $\varphi(0)=0$. The multipliers
are the eigenvalues of the monodromy matrix $\mathbb{M}$ associated to
Eq.\ (\ref{eq:Adt}) \cite{Floquet} and they contain information on the
evolution of $\bm\delta$ after a period $T$, since
$\bm\delta(T)=\mathbb{M}\cdot \bm\delta(0)$. In particular, each
multiplier $\rho_j$ gives the ratio of change of a linearly
independent solution $\bm{\delta}_j$, i.e.,
$\bm{\delta}_j(T)=\rho_j \bm{\delta}_j(0)$. Each Floquet multiplier
thus falls into one of the following categories that characterize the
dynamics of the solutions:
\begin{enumerate}
\item If $|\rho_j| < 1$, there is an asymptotically stable solution $\bm\delta_j$.
\item If $|\rho_j| = 1$, we have a pseudo-periodic solution. If
$\rho_j = \pm 1$, then the solution is periodic.
\item If $|\rho_j| > 1$, there is a linearly unstable solution $\bm\delta_j(t)$.
\end{enumerate}
The entire solution is stable if all the characteristic
multipliers satisfy $|\rho_j | \le 1$. In Fig.\ \ref{fig:CharFloq} we
show the absolute value of the four Floquet multipliers for the
periodic orbits of Sec.\ \ref{sec:sym}. Since
$\Tr(\mathbb{A})=0$ and the matrix $\mathbb{A}$ decouples into
$2\times2$ blocks, the products of the eigenvalues verify
$\rho_1\rho_2=1$ and $\rho_3\rho_4=1$. As may be
seen, far from the critical imbalance in the Josephson regime the
dynamics is pseudo-periodic, whereas for the ST regime the linear
dynamics is unstable. We also observe a small region,
$0.0185 \lesssim Z \lesssim 0.0195$, where two multipliers exceed the
value 1, indicating that the effect of the instability extends towards
the Josephson regime. From this analysis one may conclude that near
symmetric initial conditions in the Josephson regime, the dynamics of
the exact model is almost always close to that of the effective TM
model, while for initial conditions around the ST regime the Floquet
analysis predicts linear instability, and thus the non-linearized
dynamics are expected to differ considerably from the effective TM
results. However, for a particular evolution one should also inspect
the involved values of the monodromy matrix elements to understand in
more detail the initial deviation from the TM orbits. This behavior
is illustrated in Fig.\ \ref{fig:FloIll}, where we plot the numerical
solutions of the EMM dynamics for several initial conditions with a
small $(n_0-n_2)/(2Z)=0.1$, $N_1=N_3$, and zero phases. Furthermore,
it can be easily shown that the choice of zero phases and $N_1=N_3$ as
initial conditions guarantees that the dynamics can be described in
terms of four variables only, namely $Z$, $\varphi$ and the
differences $N_0-N_2$ and $\varphi_0-\varphi_2$, instead of the full
six dimensions, as can be expected from the symmetry of such
configuration.
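In practice the monodromy matrix of Eq.\ (\ref{eq:Adt}) is obtained numerically by integrating $d\Phi/dt=\mathbb{A}(t)\Phi$ with $\Phi(0)=\mathbb{1}$ over one period, so that $\mathbb{M}=\Phi(T)$. Since the explicit form of $\mathbb{A}$ is not reproduced here, the sketch below uses a stand-in traceless $2\times2$ periodic block (a Mathieu-type equation) to illustrate the procedure; as noted above, $\Tr(\mathbb{A})=0$ forces the product of each block's multipliers to equal one, by Liouville's formula.

```python
import numpy as np

def monodromy(A, T, n, steps=4000):
    """Integrate dPhi/dt = A(t) Phi with Phi(0) = identity (fixed-step RK4);
    the monodromy matrix is M = Phi(T)."""
    h = T / steps
    Phi = np.eye(n)
    for k in range(steps):
        t = k * h
        k1 = A(t) @ Phi
        k2 = A(t + h / 2) @ (Phi + h / 2 * k1)
        k3 = A(t + h / 2) @ (Phi + h / 2 * k2)
        k4 = A(t + h) @ (Phi + h * k3)
        Phi = Phi + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return Phi

# Stand-in traceless periodic block: Mathieu equation
# x'' + (a - 2 q cos 2t) x = 0, written as a first-order system, period T = pi
def A_mathieu(a, q):
    return lambda t: np.array([[0.0, 1.0],
                               [-(a - 2.0 * q * np.cos(2.0 * t)), 0.0]])

for a, q in [(2.0, 0.0), (1.0, 0.2)]:  # pseudo-periodic / unstable parameters
    rho = np.linalg.eigvals(monodromy(A_mathieu(a, q), np.pi, 2))
    # Tr A = 0 implies det M = rho_1 * rho_2 = 1 (Liouville's formula)
    print(a, q, np.round(np.abs(rho), 3), round(abs(rho[0] * rho[1]), 6))
```

For the first parameter set all multipliers lie on the unit circle (pseudo-periodic case), while the second lies inside an instability tongue and one multiplier exceeds unity, mirroring the behavior found for $\bm\delta$ near $Z_c$.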
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig6.eps}
\caption{\label{fig:CharFloq} (color online) Characteristic Floquet
multipliers for the orbits of the effective TM model as functions of
the initial values of $N_0-N_1$. The dashed line marks the critical
condition $N_0-N_1=116$ for the transition between Josephson and ST
regimes, and the dotted ones indicate the initial values of
$N_0-N_1$ for the evolutions shown in Fig.\ \ref{fig:FloIll}.}
\end{figure}
As a consequence of the linear instability, it is not possible to
reliably predict the full dynamics close to $Z_c$ only from the TM
Hamiltonian and a numerical solution of the EMM model must be
performed for each initial condition. This is clearly shown in the
middle panel of Fig.\ \ref{fig:FloIll} with ${Z}_i\gtrsim {Z}_c$,
where the shape of the oscillations of $N_k$ is depicted.
Nonetheless, already for slightly larger values of ${Z}_i$ where all
the $|\rho_j|$ are closer to one, we observe that there is a
characteristic time close to the time period prediction of the TM
model, $T\simeq T^{\mathrm{EMM}}(Z_i)$. In addition to $T$, as seen
clearly in the bottom panel of Fig.\ \ref{fig:FloIll}, there is a
beat-type oscillation with a much longer characteristic time
$T_M\approx 16T$. This beating can be understood as the composition
of two ST modes for each pair of neighbors with nearby time periods
$T_1$ and $T_2$ given by Eq.\ (\ref{tstbuenar}), leading to the
modulating period $T_M=2T_1T_2/(T_1-T_2)$.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\columnwidth]{fig7top.eps} \\
\includegraphics[width=0.9\columnwidth]{fig7mid.eps} \\
\includegraphics[width=0.9\columnwidth]{fig7bot.eps} \\
\end{center}
\caption{\label{fig:FloIll}(color online) Time evolution of the
populations $N_k$ for initial conditions close to the TM symmetric
case for ${Z}_i=0.016$ (top), $0.024$ (middle) and $0.04$ (bottom),
and $(n_0-n_2)/(2Z_i)=0.1$. The vertical dashed lines mark the orbit
periods for the effective TM model at each value of $Z_i$.}
\end{figure}
The change in the population imbalance after a single period $T$ can
be read off directly from the monodromy matrix. In particular, for the
initial conditions of Fig.\ \ref{fig:FloIll} the matrix element
$\mathbb{M}_{11}$ corresponds to the ratio $\delta_1(T)/\delta_1(0)$,
shown in Fig.\ \ref{fig:Monos}. In the Josephson regime for
$Z<0.0215$, $\delta_1(t)$ changes sign and reduces its magnitude after
a period, as can be seen in the top panel of Fig.\ \ref{fig:FloIll},
whereas for $Z_i=0.04$ in the ST regime we observe that $\delta_1(t)$
does not change after a period $T$ (bottom panel of Fig.\ \ref{fig:FloIll}).
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fig8.eps}
\end{center}
\caption{\label{fig:Monos} (color online) Monodromy matrix elements
$\mathbb{M}_{ij}$ as functions of the initial imbalance
$N_0-N_1$. The empty circles mark the values corresponding to
Fig.\ \ref{fig:FloIll}. }
\end{figure}
\subsection{\label{sec:regN1N3}Regimes in the $N_1=N_3$ symmetric case}
In addition to the investigation of the characteristic times, it is
interesting to ask whether the system can remain in a Josephson
or in a ST regime in the surrounding region of a given TM symmetric
configuration. Therefore, we have studied the EMM equations of motion
for $n_i(t)$ and $\varphi_i(t)$ for several initial conditions in a
specific plane of the full phase-space containing the symmetric
configuration. In particular, we numerically integrated
Eqs. (\ref{encmode1hn}) and (\ref{encmode2hn}) from $t=0$ to
$t=500\omega_x^{-1}$ with fixed initial phase differences
$\varphi_i=0$ and $N_1=N_3$, while varying $N_0-N_1$ and $N_2-N_3$.
We classify the dynamics for each initial condition into one of the
following three regimes: Josephson (J), mixed (M), and self-trapping
(ST) depending on whether all (J), some (M), or none (ST) of the
populations of neighboring sites cross each other during the time
evolution.
Assuming the double-well condition for a ST regime
(cf. Eq. (\ref{eq:Zct})) for each pair of neighbors forming a
junction, one can obtain a first estimate of the phase-space domains
of the J, M, and ST regimes. Thus, defining the parameters
\begin{equation}
A_{ij}=(N_i-N_j)^2 \frac{U_{\text{eff}}}{2K},\;\,\text{and}\;\, B_{ij}=4 (N_i+ N_j) -\frac{8K}{U_{\mathrm{eff}}},
\label{eq:local}
\end{equation}
the populations of neighboring sites $i$ and $j$ should cross each
other when $A_{ij}<B_{ij}$. Therefore, if this condition is fulfilled
for all four junctions, we expect a J regime; if no junction satisfies
it, we expect a ST regime, and a mixed regime otherwise. Similar
local two-mode model conditions have been considered in the study of
self-trapping in extended one-dimensional optical lattices
\cite{Anker2005,Wang2006}.
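A minimal sketch of this classification counts the junctions satisfying $A_{ij}<B_{ij}$ over the ring of neighboring pairs $(0,1)$, $(1,2)$, $(2,3)$, $(3,0)$. The value $K=7.6\times10^{-4}\,\hbar\omega_x$ used below is a hypothetical choice, selected only for consistency with the critical difference $N_0-N_1=116$ quoted for the symmetric case:

```python
U_EFF = 2.27e-3   # effective on-site interaction (hbar*omega_x)
K_EFF = 7.6e-4    # hypothetical effective tunneling parameter (see lead-in)

def regime(N, U=U_EFF, K=K_EFF):
    """Classify initial populations N = (N0, N1, N2, N3) as Josephson ('J'),
    mixed ('M') or self-trapping ('ST') from the local conditions A_ij < B_ij."""
    crossings = 0
    for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
        A = (N[i] - N[j])**2 * U / (2.0 * K)
        B = 4.0 * (N[i] + N[j]) - 8.0 * K / U
        crossings += A < B            # True if populations i and j cross
    return {4: 'J', 0: 'ST'}.get(crossings, 'M')

print(regime((2530, 2470, 2530, 2470)))  # symmetric Josephson initial condition
print(regime((2640, 2360, 2640, 2360)))  # symmetric self-trapping initial condition
print(regime((2490, 2420, 2670, 2420)))  # non-symmetric, mixed
```

With these parameters the three sample initial conditions fall in the J, ST and M regions, respectively, in line with the classification discussed below.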
In Fig.\ \ref{fig:PSP} we show a phase-space diagram resulting from
the numerical integration of the EMM dynamics together with the local
predictions implied by Eqs. (\ref{eq:local}). We observe that in the
neighborhood of a symmetric configuration not immediate to the
critical point, the character of the regime does not change,
i.e., a ST regime remains ST and a J regime remains J. Only in
the close vicinity of the critical point in the symmetric
configuration, we numerically observe that the dynamics possesses
qualitative details not contained in the local analysis of Eq.\
(\ref{eq:local}). This result was expected from the Floquet analysis
of the previous section, where the absolute value of some multipliers
greatly exceeded 1 around $Z_c$. The detailed study of such
phase-space regions deserves a separate and careful numerical analysis
which lies beyond the scope of the present work.
\begin{figure}
\includegraphics[width=1.15\columnwidth,clip=true]{fig9.eps}
\caption{\label{fig:PSP} (color online) Phase-space diagram of the
four-mode model with $\varphi_i=0$ and $N_1=N_3$. The solid line
marks the symmetric case with $N_0=N_2$ and $N_1=N_3$. The dashed
lines indicate the local conditions $A_{ij}=B_{ij}$. The triangles
correspond to the critical condition $N_0-N_1=N_2-N_3=\pm 116$ extracted
from Eq.~(\ref{eq:Zct}), whereas the circles indicate the symmetric
initial conditions of Figs.\ \ref{fig:joseph4} and \ref{fig:self4},
and the crosses the non-symmetric ones in
Figs. \ref{fig:joseph4M_Mixed}--\ref{fig:pata}. }
\end{figure}
To verify our findings within the framework of the full GP equations,
we have numerically solved several non-symmetric initial conditions.
First, we slightly move from the symmetric point in the ST regime in
Fig.\ \ref{fig:self4}, and choose $ N_0 = 2650$, $N_1=2360$,
$ N_2 = 2630$, and $N_3=2360$. We show in Fig.\
\ref{fig:joseph4M_Mixed} the corresponding time evolution. We note
that the characteristic times in the surrounding region of the
symmetric case smoothly depart from the orbit periods of the two-mode
models. In particular, it can be seen that in
Fig.~\ref{fig:joseph4M_Mixed} the oscillation times remain around
$10 \omega_x^{-1}$, very close to that of Fig.\ \ref{fig:self4},
although this simulation does not correspond to a closed
orbit. Similarly in Fig.\ \ref{fig:dentro} we compare the time
evolutions in the Josephson regime within the GP and EMM approaches,
finding an excellent agreement.
\begin{figure}
\includegraphics[width=0.9\columnwidth,clip=true]{fig10top.eps}
\includegraphics[width=0.9\columnwidth,clip=true]{fig10bot.eps}
\caption{\label{fig:joseph4M_Mixed} (color online) Dynamics arising
from the GP simulation and the EMM model, for initial values
$N_0=2650, N_1=2360, N_2=2630$, and $N_3=2360$, with
$V_b/(\hbar\omega_x)=25$. Top panel: populations $N_k$, bottom
panel: phase differences $\varphi_k$.}
\end{figure}
\begin{figure}
\includegraphics[width=0.9\columnwidth,clip=true]{fig11top.eps}
\includegraphics[width=0.9\columnwidth,clip=true]{fig11bot.eps}
\caption{\label{fig:dentro}(color online) Same as Fig.\
\ref{fig:joseph4M_Mixed} for $N_0=2482$, $N_1=2482$, $N_2=2554$, and $N_3=2482$. }
\end{figure}
\begin{figure}
\includegraphics[width=0.9\columnwidth,clip=true]{fig12top.eps}
\includegraphics[width=0.9\columnwidth,clip=true]{fig12bot.eps}
\caption{\label{fig:joseph4M_Mixed3}(color online) Same as Fig.\
\ref{fig:joseph4M_Mixed} for $N_0=2490, N_1=2420, N_2=2670$, and
$N_3=2420$. }
\end{figure}
We present in Fig.\ \ref{fig:joseph4M_Mixed3} another example of a
non-symmetric initial condition within the GP equations, which this
time we expect to lie in a mixed regime. We take the
initial populations $N_0=2490, N_1=2420, N_2=2670$ and $N_3=2420$.
Note that in Fig.\ \ref{fig:joseph4M_Mixed3} some of the initial
population differences lie below the critical threshold obtained for
the symmetric case, and these conditions turned out to be useful for
predicting that this state would be in a mixed regime also in the GP
framework.
\begin{figure}
\includegraphics[width=0.9\columnwidth,clip=true]{fig13top.eps}
\includegraphics[width=0.9\columnwidth,clip=true]{fig13bot.eps}
\caption{\label{fig:pata}(color online) Same as Fig.\
\ref{fig:joseph4M_Mixed} for $N_0=2581$, $N_1=2440$, $N_2=2539$, and $N_3=2440$. }
\end{figure}
Finally, in Fig.\ \ref{fig:pata} we show the GP evolution for a
particular initial condition which, according to the simplified local
model, should have been in the mixed regime but, as predicted within
the EMM model, it is in a Josephson regime.
To conclude, from the previous analysis it can be seen that the study
of symmetric configurations is a convenient starting point for
exploring the full six-dimensional system ($n_i$, $\varphi_i$) arising
from the four-mode model. We have shown that the adoption of TM tools
allows us to predict different regimes and characteristic times for
the dynamics of four-site condensates. These results may be easily
extended to condensates with a larger even number of sites.
\section{\label{sum} Summary and concluding remarks}
We have studied the dynamics of three-dimensional four-well
Bose-Einstein condensates using a multimode model with an effective
interaction parameter, and compared it to the Gross-Pitaevskii
solutions. In order to predict orbits in four-well systems and to
establish the characteristic time scales of the dynamics, we have
first studied a highly symmetric configuration that admits a two-mode
Hamiltonian description. This allowed us to apply previous two-mode
results to this reduced-space case. Moreover, the location of the
critical point marking the transition from Josephson to self-trapping
regimes in the symmetric case has proven useful for defining zones
with different dynamical regimes in the extended phase space. For
general initial conditions close to the two-mode symmetric ones, we
performed a Floquet analysis that revealed that the linearized
dynamics is unstable around the critical imbalance points. The linear
instability in the Floquet problem explains the coexistence of
different regimes in the neighborhood of the critical imbalance in the
extended phase space.
We have characterized the possible dynamical regimes according to the
population behavior between neighboring sites. Hence, we have
performed a partition of an extended region of the phase space into
self-trapped, Josephson, and mixed regimes. We have confirmed these
findings by an extensive numerical study of non-symmetric initial
conditions within the multimode model, together with a set of time
evolutions within the three-dimensional Gross-Pitaevskii framework.
We must emphasize that the accuracy of the predictions of the
multimode model depends on the proper determination of its parameters.
On the one hand, we have obtained very good agreements due to the use
of the effective on-site interaction energy parameter
$U_{\text{eff}}$. Such a parameter amounts to a reduction by a factor
of about 0.72 with respect to the bare $U$ of Eq.\ (\ref{U0}). On the
other hand, we have also shown that $U$ strongly depends on the
localization of the underlying Wannier-like functions. Such a
localization has to be maximized in order to obtain an accurate
model. This can be achieved by minimizing its spatial dispersion with
respect to a parameter that defines the global phases of the
stationary states with $n=\pm 1$ winding numbers. This procedure
should also be applied to systems with a larger number of sites, where
more stationary states with nonzero velocity circulations exist, and
therefore another variational parameter has to be added for each new
absolute value of the winding number.
To conclude we would like to remark that the present study of
four-well systems could pave the way for an eventual generalization to
condensates with a higher number of sites.
\begin{acknowledgments}
This work was supported by CONICET and Universidad de Buenos
Aires through grants PIP 11220150100442CO and UBACyT
20020150100157BA,
respectively.
\end{acknowledgments}
\section{\label{sec:level1}Introduction}
Precision measurements of the cosmic-ray spectrum have brought an increased focus on antiparticles as a valuable tool for studying fundamental physics at very high energies. In standard models of cosmic-ray propagation, antiparticles such as $e^+$, $\bar{p}$ and $^3\overline{\text{He}}$ are produced as secondary species when primary cosmic-ray protons collide with interstellar gas in the Galaxy \cite{1475-7516-2015-12-039}. This picture is consistent with measurements of the antiproton to proton ratio between 10 GeV and 60 GeV made by several experiments, including BESS \cite{2002cosp...34E1239M}, HEAT \cite{PhysRevLett.87.271101}, CAPRICE \cite{Weber:1997zwa}, PAMELA \cite{2009PhRvL.102e1101A} and AMS-02 \cite{PhysRevLett.117.091103}. At higher energies the ratio of secondary to primary components is an important testing ground for hitherto undiscovered sources of cosmic rays.
Measurements of the fluxes of individual species up to a few hundred GeV have revealed spectral features that are at odds with the predictions of propagation models \cite{PhysRevLett.117.091103,1742-6596-718-5-052012,1992ApJ...394..174G} that take into account diffusion, energy losses and gains, and particle production and disintegration \cite{Moskalenko:2003kq,1988A&A...202....1S}. While adequately explaining some secondary to primary ratios such as B/C, the models do not seem to produce enough antiparticles to match the observations of recent high-statistics experiments \cite{2014PhRvD..89d3013C,2002ApJ...565..280M,1742-6596-384-1-012016,2017arXiv170906507B,2017ApJ...844L..26T,2016arXiv161204001L}. The latest data from the Alpha Magnetic Spectrometer (AMS-02) shows that the antiproton to proton ratio $\bar{p}/p$ is independent of rigidity (momentum divided by electric charge) between 10 GV and 450 GV \cite{PhysRevLett.117.091103}, whereas in pure secondary production the ratio is expected to decrease with increasing rigidity \cite{2002ApJ...565..280M}. The antiproton to positron flux ratio $\bar{p}/e^+$ is also constant above 30 GV \cite{PhysRevLett.117.091103}, which is inconsistent with the different energy loss rates suffered by $\bar{p}$ and $e^+$ in the interstellar medium. \\
The observed excesses in the fluxes of antiparticles could be due to unaccounted-for astrophysical sources, to the decay or annihilation of exotic particles in physics beyond the standard model, or simply a reflection of uncertainties in our knowledge of the interstellar medium (ISM) and of the interaction cross-sections used in secondary production models \cite{2017arXiv170906507B,PhysRevLett.117.091103}.
The search for new sources of antiparticles also makes antiprotons a potential target for indirect detection of dark matter. Annihilating or decaying dark matter may produce abundant $\bar{p}$ in hadronization processes that show up as an excess above the secondary-particle background in the spectrum \cite{Cirelli:2013hv}. A sharp cut-off in the fraction of antiprotons at a given energy could signal a dark matter particle in the same mass range undergoing annihilation \cite{Fornengo:2013xda}. The flattening of the $\bar{p}/p$ ratio at a few hundred GeV has led to lower limits on dark matter mass $m_{\chi}$ up to $\sim2$ TeV \cite{PhysRevD.92.055027,2015arXiv150407230L}, leaving the multi-TeV range open as testing ground for different scenarios.
Supernova remnants (SNRs) can also contribute to the $\bar{p}$ flux, resulting in a smooth increase in $\bar{p}/p$ until a maximum cut-off energy where it flattens \cite{PhysRevLett.103.081103,2041-8205-791-2-L22}. Depending on the age of the supernova remnant, the maximum acceleration energy can be $\mathcal{O}$(10 TeV), making TeV antiprotons important probes of astrophysical sources \cite{PhysRevLett.103.081103,PhysRevD.95.063021}. Characterizing the secondary antiparticle spectra across all accessible energies is therefore a well-motivated problem.
Measuring cosmic-ray antiprotons is a challenge owing to their very low flux at high energies and the difficulty of charge separation amongst hadrons in cosmic-ray detectors. While balloon and satellite experiments have good charge-sign resolution, they are limited in their exposure and their maximum energy sensitivity \cite{2007NIMPA.579.1034A, AGUILAR2002331}. AMS-02, for example, has provided direct measurements of the antiproton fraction up to 450 GeV. Ground-based air shower arrays, with their large effective areas, can probe higher energies but are limited in their capability to identify individual primary particles. One approach to circumvent this problem is to study the deficit produced by the Moon in the cosmic-ray flux. The observation of this deficit, or Moon shadow, is a common technique used by ground-based cosmic-ray detectors to calibrate their angular resolution and pointing accuracy \cite{2014ApJ...796..108A, 2013arXiv1305.6811I}. Moreover, the position of the shadow is offset from the true location of the Moon due to the deflection of cosmic rays in the geomagnetic field. As a result, observations of the shadow can be used for momentum and charge-based separation of cosmic rays \cite{URBAN1990223,2007APh....28..137T,2012PhRvD..85b2002B}. By observing the Moon shadow, the ARGO-YBJ, MILAGRO, Tibet AS-$\gamma$ and the L3 collaborations estimated upper limits on the $\bar{p}/p$ ratio above 1 TeV at the few-percent level \cite{2012PhRvD..85b2002B, 2011PhDT........70C, 2007APh....28..137T, Achard:2005az}.
The High Altitude Water Cherenkov Observatory (HAWC) is one of the very few operational ground-based experiments that can extend the $\bar{p}/p$ limits further into the very high energy regime. In this paper we use the measured Moon shadow to obtain the most constraining upper limits on the $\bar{p}/p$ ratio at energies between 1 TeV and 10 TeV. The paper is structured as follows: Section II describes the HAWC detector and the procedure of data selection. Section III discusses how the Moon shadow can be used to separate antiprotons from protons and also how to infer the experimental sensitivity of this measurement. Section IV shows the results of the search and the $95\%$ upper limits on the $\bar{p}/p$ ratio. Systematic uncertainties are also discussed in Section IV. Section V provides an outlook and concludes the paper.
\section{\label{sec:level1}High Altitude Water Cherenkov Observatory}
\subsection{\label{sec:level2}The Detector}
The HAWC Observatory, located at an altitude of 4100 m above sea-level at Sierra Negra, Mexico, is a wide field-of-view detector array for TeV gamma rays and cosmic rays. It consists of 300 water Cherenkov detectors (WCDs) laid out over 22,000 m$^2$. Each WCD is a tank 7.3 m in diameter and 4.5 m in height filled with 180,000 liters of purified water. Four upward-facing large-area photomultiplier tubes (PMTs) are anchored to the bottom of each WCD. Extensive air showers produced by incoming cosmic rays and gamma rays can trigger the PMTs as the cascade of secondary particles passes through the WCDs. \\
HAWC's nominal trigger rate is 25 kHz, with the vast majority of triggers being due to air shower events. For this analysis, we use a multiplicity condition of at least 75 channels (PMTs) to be hit within a 150 ns time window to sort events as candidate air showers. After determining the effective charge in each PMT \cite{2017ApJ...843...39A}, any incorrectly calibrated triggers are removed and the events are reconstructed to obtain a lateral fit to the distribution of charge on the array. Combining this with the hit times of the PMTs allows us to infer shower parameters such as direction, location of the shower core, energy of the primary particle and particle type (cosmic ray or gamma ray).
For estimating the energy of a cosmic-ray shower, we search a set of probability tables containing the lateral distribution of hits for a range of simulated proton energies and zenith angles. A likelihood value for each PMT is extracted from the table for a given shower with reconstructed zenith angle and core position. For each simulated bin of energy, the likelihood values are summed for all PMTs. The best estimate of energy corresponds to the bin with the maximum likelihood \cite{2017arXiv171000890H}. A complete description of the hardware and the data reconstruction methods used can be found in \cite{Abeysekara:2013qka,2017ApJ...843...39A}.
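Schematically (ignoring the additional binning of the real tables in zenith angle and core position), the table lookup reduces to an argmax over summed per-PMT log-likelihoods; the function and table layout below are illustrative, not HAWC's actual code:

```python
import numpy as np

def estimate_energy_bin(log_like, pmt_signal_bins):
    """Sum the per-PMT log-likelihoods for every candidate energy bin
    and return the index of the bin with the maximum total.

    log_like        : (n_energy_bins, n_signal_bins) table of log-likelihoods
    pmt_signal_bins : digitized observable (e.g. effective charge) per hit PMT
    """
    totals = log_like[:, pmt_signal_bins].sum(axis=1)
    return int(np.argmax(totals))

# Toy table peaked where the energy index matches the signal index
toy = np.array([[-(e - s)**2 for s in range(5)] for e in range(4)], float)
print(estimate_energy_bin(toy, np.array([2, 2, 3])))  # -> 2
```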
\subsection{\label{sec:level3}Simulations}
The event reconstruction and detector calibration make use of simulated extensive air showers generated with the CORSIKA package (v 7.40) \cite{1998cmcc.book.....H}, using the QGSJet-II-03 \cite{Ostapchenko:2004ss} model for hadronic interactions. This is followed by a GEANT4 (v4.10) simulation of secondary particles interacting with the HAWC array \cite{AGOSTINELLI2003250}. Custom software is then used to model the detector response taking into account the PMT efficiencies and noise in the readout channels \cite{Abeysekara:2013qka, 2017ApJ...843...39A}.
The cosmic-ray spectrum used in the simulations includes eight primary species (H, He, C, O, Ne, Mg, Si, Fe) with abundances based on the measurements by satellite and balloon experiments like CREAM \cite{2011ApJ...728..122Y}, PAMELA \cite{2011Sci...332...69A}, ATIC-2 \cite{2009BRASP..73..564P} and AMS \cite{PhysRevLett.114.171103}. The fluxes are parameterized using broken power law fits \cite{2017arXiv171000890H}. In addition, we also calculate the geomagnetic deflection of each cosmic-ray species. This is done by backtracing particles in the geomagnetic field from the location of HAWC to the Moon. The magnetic field is described by the most recent International Geomagnetic Reference Field (IGRF) model \cite{Thebault2015}. The detailed implementation of the particle propagation is described in \cite{2017arXiv171000890H}. By comparing the observed deflection with the expected results from the simulation, we can validate the energy scale and the pointing accuracy of the detector.
\subsection{\label{sec:level3}The Data}
This work uses data collected by HAWC between November 2014 and August 2017. To ensure optimal reconstruction and energy estimation, only the events with a zenith angle of less than $45^\circ$ are used. Additional cuts reject events with shower cores far from the array \cite{2017arXiv171000890H,2017ApJ...843...39A}. The data are divided into energy bins from 1 TeV to 100 TeV with a width of $0.2$ in $\log_{10}(E/\text{GeV})$. Over 81 billion cosmic rays survive these stringent quality cuts, as shown in Table \ref{tab:data}.
\begin{table}[]
\centering
\begin{tabular}{c|c|c}
\textbf{Bin} & \textbf{log($\mathbf{E}$/GeV)} & \textbf{Events/ $\mathbf{10^9}$} \\
\hline
0 & 3.0 - 3.2 & 3.49 \\
1 & 3.2 - 3.4 & 17.67 \\
2 & 3.4 - 3.6 & 18.98 \\
3 & 3.6 - 3.8 & 13.50 \\
4 & 3.8 - 4.0 & 11.21 \\
5 & 4.0 - 4.2 & 7.63 \\
6 & 4.2 - 4.4 & 4.45 \\
7 & $>$ 4.4 & 4.44 \\
\end{tabular}
\caption{Reconstructed energy and number of events in each bin after applying the data quality cuts.}
\label{tab:data}
\end{table}
\section{\label{sec:level1}Moon Shadow and the Search for Antiprotons}
\subsection{\label{sec:level2}Observation of the Moon by HAWC}
We analyze the cosmic-ray flux by producing a sky-map of the data. The sky is divided into a grid of pixels of equal area in equatorial coordinates using the HEALPix library \cite{2005ApJ...622..759G}. Each pixel is centered at a right ascension and declination given by $(\alpha,\delta)$ and covers an angular width of about $0.1^\circ$. The map-making procedure quantifies the excess or deficit of cosmic-ray counts in every pixel with respect to an isotropic background. We define the relative intensity $\delta I$ as the fractional excess or deficit of counts in each pixel,
\begin{equation}
\label{eq:ri}
\delta I = \frac{N(\alpha_i,\delta_i) - \langle N(\alpha_i,\delta_i)\rangle}{\langle N(\alpha_i,\delta_i)\rangle}
\end{equation}
where $N(\alpha_i,\delta_i)$ is the number of events in the data map and $\langle N(\alpha_i,\delta_i)\rangle$ is the number of counts in the isotropic reference map: the background distribution calculated using the method of Direct Integration \cite{2003ApJ...595..803A}. A significance $\sigma$ is also assigned to each pixel. The significance is a measure of the deviation of the data in each bin from the expectation of the isotropic map, and is calculated according to the techniques in Li \& Ma (1983) \cite{1983ApJ...272..317L}.
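Per pixel, Eq.\ (\ref{eq:ri}) and the Li \& Ma significance (their Eq.\ 17) take only a few lines. In the sketch below, $\alpha$ denotes the on-source to background exposure ratio, and the sign of the significance is taken from the excess so that deficits such as the Moon shadow come out negative:

```python
import math

def relative_intensity(n_obs, n_bkg):
    """Fractional excess or deficit of counts in a pixel, Eq. (eq:ri)."""
    return (n_obs - n_bkg) / n_bkg

def li_ma_significance(n_on, n_off, alpha):
    """Eq. 17 of Li & Ma (1983); alpha is the on-source to background
    exposure ratio. Signed by the excess, so deficits are negative."""
    n_tot = n_on + n_off
    s2 = 2.0 * (n_on * math.log((1.0 + alpha) / alpha * n_on / n_tot)
                + n_off * math.log((1.0 + alpha) * n_off / n_tot))
    return math.copysign(math.sqrt(s2), n_on - alpha * n_off)

print(relative_intensity(9.0e5, 1.0e6))            # -0.1 (a 10% deficit)
print(round(li_ma_significance(900.0, 1000.0, 1.0), 2))
```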
To focus on the Moon, we subtract the calculated equatorial coordinates of the Moon from the coordinates of each event so that the final map is centered on the equatorial position of the Moon $(\alpha' = \alpha - \alpha_{\text{moon}}, \delta' = \delta - \delta_{\text{moon}})$; the Moon is thus located at $(0,0)$ after the transformation. The Moon blocks the incoming cosmic rays, creating a deficit in the observed signal as shown in Fig. \ref{fig:all shadow bins}. This deficit or ``Moon shadow'' is displaced from the Moon's actual position at $(\alpha'=0,\delta'=0)$ because of the deflection of cosmic rays in the Earth's magnetic field. The expected angular deflection of a hadronic particle of charge $Z$ and energy $E$ at the location of HAWC is,
\begin{equation}
\label{eq4}
\delta \omega \simeq 1.6^{\circ} Z (E/\text{1000 GeV})^{-1},
\end{equation}
which was obtained from simulations \cite{Abeysekara:2013qka,2017arXiv171000890H}. We fit the shape of the shadow to an asymmetric 2D Gaussian as discussed in Sec.\ \ref{sec:level2} and use the centroid at $(\Delta\alpha',\Delta\delta')$ to describe the offset in position. The shape of the shadow is smeared along right ascension. This is because the reconstructed energy bins have a finite width, resulting in a broad distribution of geomagnetic deflections. The evolution of the shadow's width with energy is also a demonstration of the angular resolution of the detector, the angular width of the Moon disc being $0.5^\circ$.
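For orientation, Eq.\ (\ref{eq4}) predicts a deflection of $1.6^\circ$ for protons at 1 TeV, falling to $0.16^\circ$ at 10 TeV and doubling for helium; a one-line sketch:

```python
def deflection_deg(E_GeV, Z=1):
    """Expected geomagnetic deflection at the HAWC site, Eq. (eq4)."""
    return 1.6 * Z * (E_GeV / 1000.0)**-1

print(deflection_deg(1000.0))        # 1.6 degrees for protons at 1 TeV
print(deflection_deg(10000.0))       # about 0.16 degrees at 10 TeV
print(deflection_deg(1000.0, Z=2))   # helium is deflected twice as much
```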
\begin{figure*}[htb!]
\centering
\makebox[\textwidth][c]{\begin{tabular}{@{}ccc@{}}
\includegraphics[width=.35\textwidth]{smooth_bin0-eps-converted-to.pdf} &
\includegraphics[width=.35\textwidth]{smooth_bin1-eps-converted-to.pdf} &
\includegraphics[width=.35\textwidth]{smooth_bin2-eps-converted-to.pdf} \\
\includegraphics[width=.35\textwidth]{smooth_bin3-eps-converted-to.pdf} &
\includegraphics[width=.35\textwidth]{smooth_bin4-eps-converted-to.pdf} &
\includegraphics[width=.35\textwidth]{smooth_bin5-eps-converted-to.pdf} \\
\includegraphics[width=.35\textwidth]{smooth_bin6-eps-converted-to.pdf} &
\includegraphics[width=.35\textwidth]{smooth_bin7-eps-converted-to.pdf}
\end{tabular}}
\caption{The cosmic-ray Moon shadow at different energies in 33 months of data from HAWC. The maps have been smoothed by a $1^\circ$ top-hat function to visually enhance the shadow. The black cross indicates the actual position of the Moon in the Moon-centered coordinates. The displacement of the centroid of the shadow due to geomagnetic deflection is highest at 1 TeV: $1.9^\circ$ in R.A. and $0.3^\circ$ in declination. The offset in both directions decreases with energy, approaching $(0.21 \pm 0.01)^\circ$ in R.A. and $(0.05 \pm 0.02)^\circ$ in declination at 10 TeV.}
\label{fig:all shadow bins}
\end{figure*}
\FloatBarrier
Fig. \ref{fig:deltaRA} illustrates the fitted offset in right ascension as a function of energy. The expected offsets for $p$ and He nuclei are also shown. We fit the function from Eq.~\eqref{eq4} to the observed data and obtain $Z = 1.30 \pm 0.02$, a rough estimate of the mean charge of the spectrum. Assuming $Z$ is an average of $p$ and He charges weighted by their abundance in the data, with negligible contribution from heavier elements, we estimate that about $(70 \pm 2)\%$ of the measured primary cosmic-ray flux below 10 TeV is protons. While this fraction is only a rough estimate of the relative abundance of protons to helium, it is consistent with our detector efficiency for the assumed composition models, which are based on direct measurements above 100 GeV \cite{2017arXiv171000890H}.
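The proton fraction follows from simple arithmetic: if the fitted effective charge is the abundance-weighted mean $\bar{Z} = f_p \cdot 1 + (1-f_p) \cdot 2 = 2 - f_p$, then $f_p = 2 - \bar{Z}$. A minimal sketch (illustrative names, not the analysis code):

```python
# If the effective charge is the abundance-weighted mean of p (Z = 1) and
# He (Z = 2), then Z_eff = f_p*1 + (1 - f_p)*2 = 2 - f_p, so f_p = 2 - Z_eff.
def proton_fraction(z_eff):
    return 2.0 - z_eff

print(proton_fraction(1.30))          # central value -> 0.7
print(proton_fraction(1.30 - 0.02))   # shifting Z_eff by its uncertainty
```

The $\pm 0.02$ uncertainty on $Z$ propagates directly to the quoted $\pm 2\%$ on the proton fraction.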
\begin{figure}[h!]
\centering
\includegraphics[width=0.52\textwidth]{deltaRA_mc.pdf}
\caption{The deflection of the Moon shadow in right ascension as a function of energy in 33 months of data from HAWC. The dotted and dashed lines show the estimated deflections for pure proton and pure helium spectra, respectively. The solid line is a fit to the mean deflection obtained from simulation. The blue points show the observed HAWC data.}
\label{fig:deltaRA}
\end{figure}
\subsection{\label{sec:level2}Finding a $\bar{p}$ shadow}
The observed deflection of the Moon shadow to negative values of $\alpha'$ in Fig. \ref{fig:all shadow bins} is due to the positively charged protons and He nuclei in the cosmic-ray flux. In principle, negatively charged particles would be deflected in the opposite direction, creating a second Moon shadow on the opposite side of the origin, as shown in Fig. \ref{fig:pbarbin2}. Hence, one can search for antiparticles in the cosmic-ray flux by looking for a second deficit. Below we describe our search for a second, spatially distinct shadow in the data, whose ``depth'' or relative intensity is proportional to the flux of antiprotons blocked by the Moon.
\begin{figure}[h]
\centering
\includegraphics[width=0.52\textwidth]{pbar_contour.pdf}
\caption{The observed proton shadow at $1.6$ TeV, with 1$\sigma$ and 2$\sigma$ width contours of the fitted Gaussian overlaid. The white ellipses show the expected position of an antiproton shadow obtained by a $180^\circ$ rotation about the origin.}
\label{fig:pbarbin2}
\end{figure}
We start with a 2D Gaussian function, Eq.~\eqref{eq:gauss}, to describe the shape of the deficit in the Moon shadow. There are six free parameters in the fit: the centroids $x_0$ and $y_0$ (or $\Delta\alpha',\Delta\delta'$), the widths $\sigma_x$ and $\sigma_y$, the tilt angle $\theta$ the shadow makes with the $\alpha'$ axis, and the amplitude $A$. The value of the function at each $(\alpha',\delta')$ or $(x,y)$ corresponds to the relative intensity at the respective coordinate.
\begin{multline}
f_i(x,y) = A \exp(-a(x-x_0)^2 +2b(x-x_0)(y-y_0)\\- c(y-y_0)^2)
\label{eq:gauss}
\end{multline}
with
\begin{align*}
a &= \frac{\cos^2{\theta}}{2\sigma_{x}^2} + \frac{\sin^2{\theta}}{2\sigma_{y}^2},
&
b &= \frac{-\sin{2\theta}}{4\sigma_{x}^2} + \frac{\sin{2\theta}}{4\sigma_{y}^2},
\\
c &= \frac{\sin^2{\theta}}{2\sigma_{x}^2} + \frac{\cos^2{\theta}}{2\sigma_{y}^2}.
\end{align*}
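For concreteness, the elliptical Gaussian of Eq.~\eqref{eq:gauss} with the $a$, $b$, $c$ parametrization above can be implemented directly. This sketch (ours, not the analysis code) reduces to an axis-aligned Gaussian at $\theta = 0$:

```python
import numpy as np

# Elliptical 2D Gaussian of Eq. (gauss); parameter names mirror the text.
def elliptical_gaussian(x, y, A, x0, y0, sx, sy, theta):
    a = np.cos(theta)**2 / (2 * sx**2) + np.sin(theta)**2 / (2 * sy**2)
    b = -np.sin(2 * theta) / (4 * sx**2) + np.sin(2 * theta) / (4 * sy**2)
    c = np.sin(theta)**2 / (2 * sx**2) + np.cos(theta)**2 / (2 * sy**2)
    dx, dy = x - x0, y - y0
    return A * np.exp(-a * dx**2 + 2 * b * dx * dy - c * dy**2)

# At theta = 0 this is a product of two 1D Gaussians along x and y;
# for a shadow (a deficit in relative intensity), the amplitude A is negative.
```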
Assuming the data contain both a $p$ and $\bar{p}$ shadow, we fit a sum of two elliptical Gaussian functions to the map:
\begin{equation}
\label{eq:gaussian}
\delta I(x,y) = F_p(x,y) + F_{\bar{p}}(x,y) = F_p(x,y) + r\, F_p(-x,-y).
\end{equation}
This can be used to measure the ratio $\bar{p}/p$ (denoted by $r$) or place upper limits if no second shadow is observed. Considering that antiproton and proton spectra have similar functional behavior at high energies \cite{PhysRevLett.117.091103}, we assume that an antiproton shadow should be a symmetric counterpart of the proton shadow, with the same magnitudes of all parameters except the amplitude $A$, and reflected about the declination and the right ascension axes. This means that in principle if we know the shape and position of the proton shadow from data or Monte Carlo, then we also have that information for the antiproton shadow. For conservative limits, we assume that the shadows are almost purely due to $p$ and $\bar{p}$ with similar spectra, and a negligible fraction of heavier nuclei. The systematic uncertainty introduced by this assumption is discussed in Section \ref{sec:level3}.
To simplify the problem, we perform the fit in two steps. First, we fit the proton shadow alone to a single Gaussian and obtain the best-fit values for the six free parameters. Then we fit the antiproton shadow, fixing its width and position using the values obtained in the first step. The amplitude of the $\bar{p}$ shadow can be written as $r\cdot A$, where $r = \bar{p}/p$ is the ratio of antiprotons to protons. We then use a maximum-likelihood fit to obtain the value of $r$.
\subsubsection{Likelihood fit}
To fit the antiproton shadow, we maximize the log-likelihood function
\begin{equation}
\label{eq1}
\log \mathcal{L} = -\frac{1}{2} \sum \limits_{i}^N \frac{(\delta I_i - \delta I(x_i,y_i,r))^2}{\sigma_i^2}
\end{equation}
where $\delta I_i$ is the relative intensity in the $i^{th}$ pixel of the Moon map for a given energy bin, $\sigma_i$ is the standard deviation of the relative intensity, and $\delta I(x_i,y_i,r)$ is the superposition of two Gaussians from Eq.~\eqref{eq:gaussian}. We minimize $-\log \mathcal{L}$ with a grid search over more than $10^4$ values of $r$. The resulting curve (Fig. \ref{fig:LL}) is parabolic, corresponding to a Gaussian likelihood in $r$. Its minimum corresponds to the optimal value $\hat{r}$, and the contours of $\Delta \log \mathcal{L}(r)$ with respect to the minimum define the uncertainties in $\hat{r}$.\\
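The second step of the fit, with the proton template fixed and only $r$ scanned, can be sketched as follows. This is a toy version on a synthetic map with hypothetical names (the real fit uses the HAWC maps and the fitted shadow parameters); it recovers an injected ratio:

```python
import numpy as np

# Step 2 of the fit: with the proton template F_p fixed, scan r = pbar/p.
def fit_ratio(x, y, dI, sigma, Fp, r_grid):
    T_p, T_pbar = Fp(x, y), Fp(-x, -y)       # pbar template: 180-deg rotation
    nll = np.array([0.5 * np.sum((dI - T_p - r * T_pbar)**2 / sigma**2)
                    for r in r_grid])
    return r_grid[np.argmin(nll)], nll

# Toy map: a proton shadow (deficit) plus a faint injected pbar shadow.
rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.linspace(-5, 5, 101), np.linspace(-5, 5, 101))
Fp = lambda x, y: -0.1 * np.exp(-((x + 1.5)**2 + y**2) / (2 * 0.6**2))
r_true = 0.02
data = Fp(xx, yy) + r_true * Fp(-xx, -yy) + rng.normal(0.0, 1e-4, xx.shape)
r_hat, _ = fit_ratio(xx, yy, data, np.full_like(data, 1e-4), Fp,
                     np.linspace(-0.05, 0.10, 1501))
```

Because the only free parameter is the overall scale of a fixed template, the negative log-likelihood is exactly quadratic in $r$, which is why the scanned curve is parabolic.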
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.95\textwidth]{zoom_chisqbin2.png}
\caption{\textit{Left}: The log-likelihood distribution for bin 2 (reconstructed median energy $= 2.5$ TeV). The blue dotted line shows the $r$ interval corresponding to $2\Delta \log \mathcal{L} = 2.71$. \textit{Right}: The corresponding Feldman-Cousins interval \cite{1998PhRvD..57.3873F}. The green dashed line shows the Feldman-Cousins $95\%$ upper limit for the measurement shown on the left.}
\label{fig:LL}
\end{figure*}
As illustrated for bin 2 in the left panel of Fig. \ref{fig:LL}, the Gaussian likelihood indicates a null result, i.e., no antiproton shadow, with a negative value of $\hat{r}$ that lies outside the physically allowed interval. To account for such downward fluctuations of the data and to ensure that the reported $r$ at the $95\%$ confidence level (C.L.) is always positive, we calculate the upper limits following the Feldman-Cousins procedure \cite{1998PhRvD..57.3873F}. We use an implementation of the Feldman-Cousins confidence-interval construction for a Gaussian truncated at zero, corresponding to our likelihood function for $r>0$ (Fig. \ref{fig:LL}). The right panel of Fig. \ref{fig:LL} shows the $95\%$ upper and lower limits versus the measured values of $r$ in this scheme.\\
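The Feldman-Cousins construction for a unit-width Gaussian with a physical boundary at zero can be sketched numerically. This is our own illustrative implementation (with the measurement expressed in units of its Gaussian uncertainty), not the analysis code:

```python
import numpy as np

# Feldman-Cousins belt for x ~ N(mu, 1) with physical boundary mu >= 0.
# Candidate x values are ranked by R = L(x|mu) / L(x|mu_best), mu_best = max(0, x).
def fc_interval(x0, cl=0.95, mu_max=8.0, step=0.01):
    x = np.arange(-6.0, 10.0, step)
    ix0 = int(round((x0 - x[0]) / step))       # grid index of the observed x0
    lo = hi = None
    for mu in np.arange(0.0, mu_max, step):
        pdf = np.exp(-0.5 * (x - mu)**2) / np.sqrt(2.0 * np.pi)
        mu_best = np.maximum(0.0, x)
        ratio = np.exp(-0.5 * (x - mu)**2 + 0.5 * (x - mu_best)**2)
        order = np.argsort(-ratio)             # most favored x values first
        cum = np.cumsum(pdf[order] * step)
        accept = np.zeros(x.size, dtype=bool)
        accept[order[:np.searchsorted(cum, cl) + 1]] = True
        if accept[ix0]:                        # mu belongs to the belt for x0
            lo = mu if lo is None else lo
            hi = mu
    return (0.0 if lo is None else float(lo)), (0.0 if hi is None else float(hi))
```

Far from the boundary the interval approaches the central one (e.g., $x_0 = 3$ gives roughly $[1.1, 5.0]$ at $95\%$), while an unphysical measurement $x_0 < 0$ still yields a positive upper limit, which is the behavior exploited here.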
Applying the procedure described above to all bins up to 10 TeV, we obtain the upper limits shown in Fig. \ref{fig:limitsall}. We restrict the analysis to bin 5 (10 TeV) and below because the increased abundance of helium may bias the results at higher energies \cite{2011ApJ...728..122Y}. These issues will be addressed with improved particle-discrimination techniques in future work.
\subsubsection{Sensitivity Calculation}
To study the effect of statistical fluctuations in the data on our computed upper limits, we calculate the sensitivity of HAWC to the antiproton shadow. In this context, the sensitivity refers to the average limit HAWC would obtain in an ensemble of similar experiments with no antiproton signal \cite{1998PhRvD..57.3873F}. This provides us with an independent range of \textit{minimum} values of $r$ that could be detected with at least a $95\%$ probability. In this way we can check for anomalous fluctuations in the background that may cause the measured upper limits to be significantly lower or greater than the sensitivity.\\
In this analysis, the absence of a shadow in any sampled region (other than the Moon) indicates that, barring fluctuations, the sampled data are consistent with the background. We compute the expected limit or sensitivity by searching for the antiproton shadow in 72 different regions --- each a circle of radius $5^\circ$ --- that are not within ten degrees of the Moon's position. We follow the procedure described in Section \ref{sec:level2} to fit the proton shadow at $(\Delta \alpha',\Delta \delta')$. However, instead of $(-\Delta \alpha',-\Delta \delta')$ for the antiproton shadow, we use a random centroid at least $10^\circ$ away from the true Moon position. This ensures that we sample only off-source, background-only regions. After repeating the fit in the 72 selected regions, we obtain a distribution of upper limits (yellow band in Fig. \ref{fig:limitsall}), i.e., the expected limits from background alone. Our 95\% upper limits fall within the range defined by the sensitivity of HAWC, leaving room for improvement with more statistics in the future.
\section{\label{sec:level1}Results}
Table \ref{table:res2} lists the 95\% (90\%) upper limits from HAWC for the different energy bins. With the high statistics available, the best results are $1.1\%$ at 95\% C.L. and $0.3\%$ at 90\% C.L., an order-of-magnitude improvement on previously published limits \cite{2012PhRvD..85b2002B,2007APh....28..137T,Achard:2005az}. Figure \ref{fig:limitsall} places our results in the context of past measurements and theoretical models. These results demonstrate HAWC's capability to perform an important constraining measurement at energies currently not accessible to direct-detection experiments.
\begin{table}
\centering
\begin{tabular}{c|c|c|c}
$\mathbf{\log {(E/GeV)}}$ & $\mathbf{\sigma_x}$ & $\mathbf{\sigma_y}$ & $\mathbf{\bar{p}/p}$\textbf{[95(90) CL][\%]}\\
\hline
3.0 &$1.45\pm0.12$ &$0.90\pm0.07$ & 8.4 (6.6) \\
3.2 &$1.24\pm0.05$ &$0.74\pm 0.02$ & 3.2 (2.5) \\
3.4 &$0.93\pm0.02$ &$0.58\pm 0.01$ & 1.1 (0.3) \\
3.6 &$0.65\pm0.01$ &$0.51\pm 0.01$ & 1.1 (0.8) \\
3.8 &$0.56\pm0.01$ &$0.49\pm 0.01$ & 1.9 (1.2) \\
4.0 &$0.48\pm0.01$ &$0.47\pm 0.01$ & 1.9 (1.1) \\
\end{tabular}
\caption{Estimated median energies, shadow widths, and HAWC 95\% and 90\% upper limits on the antiproton fraction.}
\label{table:res2}
\end{table}
\begin{figure*}[ht!]
\includegraphics[width=1.0\textwidth]{UL95_final.png}
\caption{Measurements of $\bar{p}/p$ in the GeV range and upper limits at the TeV scale. The yellow and shaded bands show the HAWC sensitivity and systematic uncertainties, respectively. The solid line shows the expected ratio from purely secondary production of antiprotons \cite{PhysRevLett.102.071301}. The dotted line postulates primary antiproton production in supernovae \cite{PhysRevLett.103.081103}. Note that the other upper limits published above 1 TeV, by ARGO-YBJ, L3, and Tibet AS-$\gamma$, are $90\%$ intervals, while the HAWC limits are at the 95\% C.L.}
\label{fig:limitsall}
\end{figure*}
\subsection{\label{sec:level3}Systematic Uncertainties}
\subsubsection{Composition}
One underlying assumption of the fitting process is that the observed Moon shadow is predominantly due to incident cosmic-ray protons. However, the observed deflection and its comparison with simulations in Fig. \ref{fig:deltaRA} indicate a He component of up to $30\%$. This inflates the denominator of $\bar{p}/p$, making the limits on $r$ overly conservative. Several cosmic-ray experiments, including AMS \cite{PhysRevLett.115.211101}, CREAM \cite{2011ApJ...728..122Y} and PAMELA \cite{2011Sci...332...69A}, have measured a He fraction of around $25\%$ at 1 TeV and show a hardening of the spectrum at multi-TeV energies \cite{2011ApJ...728..122Y}. The fit parameters most sensitive to He contamination are the amplitude and the offset in right ascension. We note that the difference between the observed offset and the pure-proton offset is a small fraction of the width of the shadow at all energies explored in this study.\\
We investigated the systematic effect of varying the shadow parameters based on the proton spectrum of the composition model used in HAWC simulations. The upper limits were computed again after reducing the shadow deficit by $20\% - 30\%$ and shifting the offset to that expected from a pure proton shadow. These two factors were varied jointly, keeping all other parameters constant. Assuming no antihelium in the composition, we notice that a $20-25\%$ decrease in the shadow intensity along with a corresponding change in offset improves the limits by a factor of $2 - 8$ depending on energy. Figure \ref{fig:limitsall} shows the composition uncertainty in the shaded band, illustrating that our current results are conservative.
\subsubsection{Energy Reconstruction}
The energy binning also contains systematic errors propagated from the probability tables used to estimate the energy of an air shower. The four-dimensional tables have bins in zenith angle, the charge measured by a PMT, the distance of the PMT from the shower core, and primary energy. The finite resolution and limited statistics of the tables contribute to the uncertainty in the likelihood and hence to a bias between the true and reconstructed energies \cite{2017arXiv171000890H}. The trigger-multiplicity and strict zenith-angle cuts in this work were used to ensure the optimal performance of the energy estimator, such that the bias in $\log_{10}{E}$ is restricted to the width of each energy bin \cite{2017PhDT........59H}. Any systematic shift in the energy scale propagates directly into the estimated proton flux \cite{2017arXiv171000890H}. We studied this by varying the shadow amplitude according to the shift in flux that would result from a 10\% change in the energy scale. Figure \ref{fig:limitsall} shows that the corresponding shift in the results falls within the range of expected limits.
The event reconstruction is also affected by shower fluctuations, the quantum efficiency and charge resolution of PMTs, and the interaction models used in array simulations \cite{2017arXiv171000890H,2017ApJ...843...39A}. In addition, the approximation of the Moon-disc with a 2D Gaussian may also produce a bias in the calculated deficit in different regions of the shadow. However, the systematic contribution of these effects on the estimated flux is of the order $5\%$ \cite{2017arXiv171000890H}, leaving the He contamination and energy scaling as the dominant sources of uncertainty.
\section{\label{sec:conclusions}Conclusions}
Probing the antiproton spectrum at TeV energies is an important prelude to developing a consistent theory of the production and propagation of secondary cosmic rays. The HAWC Observatory, with its continuous operation and sensitivity to TeV cosmic rays, can constrain the $\bar{p}$ fraction. We achieve this by using the high-significance observation of the position offset of the Moon shadow as a template for an antiproton shadow. The shape of the shadow is described by a two-dimensional Gaussian with the ratio $\bar{p}/p$ as a key parameter of the fit. With no observed antiproton shadow, we place upper limits on $\bar{p}/p$ up to 10 TeV. The limits of 1.1\% at 2.5 TeV and 4 TeV, and $1.9\%$ at 10 TeV, set an experimental bound that any model predicting a rise in the $\bar{p}/p$ fraction must satisfy \cite{PhysRevD.95.063021}. While these constraints are the strongest available at multi-TeV energies, we expect they can be improved with more HAWC data in the future, shedding light on the secondary cosmic-ray background and potential signatures of new physics.
\section{Introduction}\label{sec:intro}
Severe weather events such as thunderstorms cause significant losses of property and lives. Many countries and regions suffer from storms regularly, making this a global issue. For example, severe storms kill over 20 people per year in the U.S.~\cite{avg_storm_kill}. The U.S. government has invested more than 0.5 billion dollars~\cite{noaa_budget_2016} in research to detect and forecast storms, and billions more in modern weather satellites equipped with high-definition cameras.
The fast growth of computing power and the increasingly high resolution of satellite images call for a re-examination of conventional storm-forecasting practices, such as bare-eye interpretation of satellite images~\cite{human_predict_storm}. Bare-eye image interpretation by experts requires domain knowledge of cloud development and, for a variety of reasons, may result in omissions or delays in extreme weather forecasting. Moreover, the latest satellites deliver images in real time at very high resolution, demanding fast processing. These challenges encourage us to explore how applying modern learning schemes to storm forecasting can aid meteorologists in interpreting visual clues of storms from satellite images.
Satellite images of cyclone formation in the mid-latitude area show a clear visual pattern, known as the comma-shaped cloud~\cite{carlson1980airflow}. This typical cloud distribution is strongly associated with mid-latitude cyclonic storm systems. Figure~\ref{fig:comma} shows an example comma-shaped cloud in the Northern Hemisphere, where the cloud shield has the appearance of a comma. Its ``tail'' is formed by the warm conveyor belt extending to the east, and its ``head'' lies within the range of the cold conveyor belt.
The dry-tongue jet forms a cloudless zone between the comma head and the comma tail.
The comma-shaped clouds also appear in the Southern Hemisphere, but they are oriented in the opposite direction ({\it i.e.}, an upside-down comma shape). The dry-tongue jet gets its name because the stream originates from the dry upper troposphere and has not achieved saturation before ascending over the low-pressure center. The comma-shaped cloud feature is strongly associated with many types of extratropical cyclones, including hail, thunderstorms, high winds, blizzards, and low-pressure systems. Consequently, we can observe severe events such as ice, rain, snow, and thunderstorms around this visual feature~\cite{reed1979cyclogenesis}.
\begin{figure}[htp]\centering
\includegraphics[width=0.4\textwidth,trim=0cm 5cm 3cm 1cm,clip]{images/comma.jpg}
\caption{An example satellite image with a comma-shaped cloud in the Northern Hemisphere. This image was taken at 03:15:19 UTC on Dec 15, 2011 from the fourth channel of the GOES-N weather satellite.}
\label{fig:comma}
\end{figure}
To capture the comma-shaped cloud pattern accurately, meteorologists have to read different weather data and many satellite images simultaneously, which can lead to inaccurate or untimely detection of suspected visual signals. Such manual procedures prevent meteorologists from leveraging all available weather data, which are increasingly visual in form and of high resolution. Negligence in the manual interpretation of weather data can have serious consequences. Automating this process by creating intelligent computer-aided tools can benefit the analysis of historical data and make meteorologists' forecasting efforts less labor-intensive and more timely. This philosophy is pervasive in the computer vision and multimedia community, where images in modern image retrieval and annotation systems are indexed not only by metadata, such as author and timestamp, but also by semantic annotations and contextual relatedness based on pixel content~\cite{li2003automatic,li2008real}.
We propose a machine-learning and pattern-recognition-based approach to detect comma-shaped clouds in satellite images. The comma-shaped cloud patterns, which have been manually searched and indexed by meteorologists, can be detected automatically by computerized systems using our approach. We leverage the large satellite image archive to train the model and demonstrate the effectiveness of our method on a manually annotated comma-shaped cloud dataset. Moreover, we demonstrate how this method can help meteorologists forecast storms, exploiting the strong connection between comma-shaped clouds and storm formation.
While all comma-shaped clouds resemble the shape of a comma mark to some extent, the appearance and size of one such cloud can be very different from those of another. This makes conventional object detection or pattern matching techniques developed in computer vision inappropriate because they often assume a well-defined object shape ({\it e.g.} a face) or pattern ({\it e.g.} the skin texture of a zebra).
The key visual cues that human experts use to distinguish comma-shaped clouds are {\it shape} and {\it motion}. During the formation of a cyclone, the ``head'' of the comma-shaped cloud, which is the northwest part of the cloud shield, exhibits strong rotation. The dense cloud patch forms the shape of a comma, which distinguishes it from other clouds. To emulate meteorologists, we propose two novel features that capture the shape and motion of cloud patches: the \textit{Segmented HOG} and the \textit{Motion Correlation} histogram, respectively. We detail these features in Sec.~\ref{sec:segmentation} and Sec.~\ref{sec:corr}.
Our work makes two main contributions. First, we propose novel shape and motion features of the cloud using computer vision techniques. These features enable computers to recognize the comma-shaped cloud from satellite images. Second, we develop an automatic scheme to detect the comma-shaped cloud on the satellite images. Because the comma-shaped cloud is a visual indicator of severe weather events, our scheme can help meteorologists forecast such events.
\subsection{Related Work}
Cloud segmentation is an important step in detecting storm cells. Lakshmanan {\it et al.}~\cite{lakshmanan2003multiscale} proposed a hierarchical cloud-texture segmentation method for satellite images. They later improved the method by applying the watershed transform to the segmentation and using pixel-intensity thresholding to identify storms~\cite{lakshmanan2009efficient}. However, the brightness temperature in a single satellite image is easily affected by lighting conditions, geographical location, and satellite imager quality, which thresholding-based methods do not fully consider. We therefore take these spatial and temporal factors into account and segment the high-cloud portion using a Gaussian Mixture Model (GMM).
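As a minimal illustration of GMM-based intensity segmentation, the toy below fits a 1-D, two-component mixture to synthetic pixel intensities and labels the component with the more extreme mean as ``high cloud''. This is our own sketch; the actual method also incorporates the spatial and temporal factors mentioned above:

```python
import numpy as np

# Minimal 1-D, two-component Gaussian mixture fit by EM (illustrative only).
def fit_gmm2(x, iters=200):
    mu = np.array([x.min(), x.max()], dtype=float)       # crude initialization
    var = np.array([x.var(), x.var()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel.
        pdf = w * np.exp(-0.5 * (x[:, None] - mu)**2 / var) / np.sqrt(2 * np.pi * var)
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        n = resp.sum(axis=0)
        w, mu = n / x.size, (resp * x[:, None]).sum(axis=0) / n
        var = (resp * (x[:, None] - mu)**2).sum(axis=0) / n + 1e-6
    return w, mu, var

# Toy pixel intensities: background plus a second, distinct cloud population.
rng = np.random.default_rng(1)
pix = np.concatenate([rng.normal(80, 10, 5000), rng.normal(200, 15, 2000)])
w, mu, var = fit_gmm2(pix)
post = w * np.exp(-0.5 * (pix[:, None] - mu)**2 / var) / np.sqrt(2 * np.pi * var)
high_cloud_mask = post.argmax(axis=1) == int(np.argmax(mu))
```

A production pipeline would use a full library GMM implementation and image features richer than raw intensity.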
Cloud motion estimation is also an important method for storm detection, and a common approach estimates cloud movements through cross-correlation over adjacent images.
Earlier works~\cite{leese1970determination,smith1972automated} applied the cross-correlation method to derive motion vectors from cloud textures, which was later extended to multi-channel satellite images in~\cite{evans2006cloud}. The cross-correlation method can partly characterize the airflow dynamics of the atmosphere and provide meaningful speed and direction information over large areas~\cite{johnson1998storm}. After being introduced for radar reflectivity images, the method was applied in automatic cloud-tracking systems using satellite images. A later work~\cite{carvalho2001satellite} implemented cross-correlation to predict and track Mesoscale Convective Systems (MCS, a type of storm). Their motion vectors were computed by aggregating nearby pixels in two consecutive frames; thus, they are subject to spatial smoothing and miss fine-grained details. Inspired by these ideas of motion interpretation, we define a novel correlation aimed at recognizing cloud motion patterns over a longer period. The combination of motion and shape features demonstrates high classification accuracy on our manually labeled dataset.
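The classical cross-correlation motion estimate referenced above can be sketched with an FFT-based correlation of two frames. This toy example (ours; real pipelines correlate local patches rather than whole frames) recovers the integer-pixel shift of a synthetic ``cloud'':

```python
import numpy as np

# Estimate the (dy, dx) shift of frame f1 relative to f0 from the peak of
# their circular cross-correlation, computed with FFTs.
def estimate_shift(f0, f1):
    xcorr = np.fft.ifft2(np.fft.fft2(f0).conj() * np.fft.fft2(f1)).real
    iy, ix = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    wrap = lambda i, n: i - n if i > n // 2 else i   # map to signed shifts
    return int(wrap(iy, xcorr.shape[0])), int(wrap(ix, xcorr.shape[1]))

# Toy frames: a Gaussian "cloud" that moves by (3, -2) pixels between frames.
yy, xx = np.mgrid[0:64, 0:64]
blob = lambda cy, cx: np.exp(-((yy - cy)**2 + (xx - cx)**2) / 50.0)
dy, dx = estimate_shift(blob(32, 32), blob(35, 30))
```

Applying this per patch yields a motion-vector field; aggregating it over many frames is the kind of longer-period motion summary our correlation feature targets.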
Researchers have applied pattern-recognition techniques extensively to interpret storm formation and movement. Before satellite data reached high resolution, early works in the 1970s constructed storm-formation models based on 2D radar reflectivity images. The primary techniques can be categorized into \textit{cross-correlation}~\cite{rinehart1978three} and \textit{centroid-tracking}~\cite{crane1979automatic} methods. According to the analysis, cross-correlation-based methods are more capable of accurate storm-speed estimation, while centroid-tracking-based methods are better at tracking isolated storm cells.
Taking advantage of both ideas, Dixon and Wiener developed the renowned centroid-based storm-nowcasting algorithm named Thunderstorm Identification, Tracking, Analysis and Nowcasting (TITAN)~\cite{dixon1993titan}. This method consists of two steps: identifying isolated storm cells and forecasting possible centroid locations. Compared with former methods, TITAN can model and track some storm merging and splitting cases. It can, however, produce large errors if the cloud shape changes quickly~\cite{han20093d}. Some later works attempted to model the storm-identification process mathematically. For instance, \cite{lakshmanan2009data} and~\cite{lopez2009discriminant} used statistical features of the radar reflection to classify regions into storm and storm-less classes.
Recently, Kamani {\it et al.} proposed a severe-storm detection method that matches the skeleton feature of bow echoes ({\it i.e.}, visual radar patterns associated with storms) in radar images~\cite{kamani2016shape}, with an improvement presented in~\cite{kamani2017skeleton}. The idea is inspiring, but radar reflectivity images have some weaknesses for extreme-weather precipitation~\cite{westrick1999limitations}. First, the distribution of radar stations in the contiguous United States (CONUS) is uneven, and the quality of ground-based radar reflectivity data depends to some extent on the distance to the closest radar station. Second, detection of marine events is limited because there are no ground stations in the oceans to collect reflectivity signals. Finally, severe weather conditions themselves can degrade radar accuracy. Since our focus is on severe-weather-event detection, radar information may not provide enough timeliness and accuracy for our purposes.
Compared with weather radar, multi-channel geosynchronous satellites have larger spatial coverage and are thus capable of providing more global information to meteorologists. Take the infrared spectral channel of the satellite imager as an example: the brightness of a pixel reflects the temperature and height of the cloud-top position~\cite{weinreb2011conversion}, which in turn provides the physical condition of the cloud patch at a given time. To find more information about storms, researchers have applied many pattern-recognition methods to satellite data interpretation, such as combining multiple channels of image information from a weather satellite~\cite{evans2006cloud} and combining images from multiple satellites~\cite{ho2008automated}. Image analysis methods, including cloud-patch segmentation and background extraction~\cite{lakshmanan2003multiscale,srivastava2003onboard}, cyclone identification~\cite{lee2000tropical,ho2008automated2}, cloud motion estimation~\cite{zhou2001tracking}, and vortex extraction~\cite{zhang2014locating,zhang2017severe}, have also been incorporated into severe-weather forecasting from satellite data. However, these approaches lack an attention mechanism that focuses on the areas most likely to develop destructive weather. Most of these methods do not consider high-level visual patterns ({\it i.e.}, spatially larger patterns) to describe severe weather conditions; instead, they represent extreme weather phenomena by relatively low-level image features.
\subsection{Proposed Spatiotemporal Modeling Approach}
In contrast to current technological approaches, meteorologists, who have geographical knowledge and rich experience of analyzing past weather events, typically take a top-down approach. They make sense of available weather data in a more global (in contrast to local) fashion than numerical simulation models do.
For instance, meteorologists can often make reasonable judgments about near-future weather conditions by looking at general cloud patterns and their developing trends in satellite image sequences, while existing pattern-recognition methods in weather forecasting do not capture high-level clues such as comma-shaped clouds. Unlike conventional object detection, detecting comma-shaped clouds is highly challenging. First, parts of the cloud patches can be missing from satellite images. Second, such clouds vary substantially in scale, appearance, and moving trajectory. Standard object-detection techniques and their evaluation methods are therefore inappropriate.
To address these issues, we propose a new method to detect the comma-shaped cloud in satellite images. Our framework implements computer vision techniques to design the task-dependent features, and it includes re-designed data-processing pipelines. Our proposed algorithm can effectively identify comma-shaped clouds from satellite images. In the Evaluation and Case Study sections, we show that our method contributes to storm forecasting using real-world data, and it can produce earlier and more sensitive detections than human perception in some cases.
The remainder of the paper is organized as follows. Section~\ref{sec:dataset} describes the satellite image dataset and the training labels. Sec.~\ref{sec:method} details our machine-learning framework, with the evaluation results in Sec.~\ref{sec:result}. We provide case studies in Sec.~\ref{sec:casestudy} and draw conclusions in Sec.~\ref{sec:conclude}.
\begin{figure}[tbp]\centering
\includegraphics[width=0.47\textwidth,trim=21.5cm 2.3cm 8cm 2.5cm,clip]{images/flow2.pdf}
\caption{\textit{Left:} The pipeline of the comma-shaped cloud detection process. high-cloud segmentation, region proposals, correlation with motion prior, constructions of weak classifiers, and the AdaBoost detector are described in Sections~\ref{sec:segmentation},~\ref{sec:corr},~\ref{sec:regions},~\ref{sec:weakclassifier}, and~\ref{sec:adaboost}, respectively. \textit{Right:} The detailed selection process for region proposals.}
\label{fig:framework}
\end{figure}
\section{The Dataset}\label{sec:dataset}
Our dataset consists of the GOES-M weather satellite images for the year 2008 and the GOES-N weather satellite images for the years 2011 and 2012. We select these three years because the U.S. experienced more severe thunderstorm activity than in a typical year. The GOES-M and GOES-N weather satellites are in geosynchronous orbit around Earth and provide continuous monitoring for intensive data analysis. Among the five channels of the satellite imager, we adopt the fourth channel because it operates in the infrared wavelength range of 10.20--11.20 $\mu$m and thus can capture objects of meteorological interest, including clouds and the sea surface~\cite{arking1985retrieval}. The channel has a spatial resolution of 2.5 miles, and the satellite takes pictures of the northern hemisphere at the 15th and 45th minute of each hour.
We use the satellite frames covering CONUS at 20$^{\circ}$-50$^{\circ}$ N, 60$^{\circ}$-120$^{\circ}$ W. Each satellite image has 1,024$\times$2,048 pixels, with a gray-scale intensity that positively correlates with the infrared temperature. After the raw data are converted into image data in accordance with the information in~\cite{earth1998location}, each image pixel represents a specific geospatial location.
The labeled data of this dataset consist of two parts, (1) comma-shaped clouds identified with the help of meteorologists from AccuWeather Inc., and (2) an archive of storm events for these three years in the U.S.~\cite{stormeventdatabase}.
\begin{figure*}[!tbp]\centering
\begin{minipage}{0.47\linewidth}
\includegraphics[width=\textwidth,trim=4.5cm 0.2cm 1cm 4cm,clip]{images/distribution_storm_types.pdf}
\end{minipage}
\begin{minipage}{0.47\linewidth}
\includegraphics[width=0.49\textwidth,trim=5cm 4.1cm 4.5cm 5cm,clip]{images/thunderstormwind.pdf}
\includegraphics[width=0.49\textwidth,trim=5cm 4.1cm 4.5cm 5cm,clip]{images/hail.pdf}\\
\includegraphics[width=0.49\textwidth,trim=5cm 4.1cm 4.5cm 5cm,clip]{images/heavyrain.pdf}
\includegraphics[width=0.49\textwidth,trim=5cm 4.1cm 4.5cm 5cm,clip]{images/lightning.pdf}
\end{minipage}
\caption{Proportions and geographical distributions of different severe weather events in the year 2011 and 2012. \textit{Left:} Proportions of different categories of selected storm types. \textit{Right:} State-wise geographical distributions of land-based storms.}
\label{fig:storm_record}
\end{figure*}
To create the first part of the annotated data, we manually label comma-shaped clouds using tight squared bounding boxes around each such cloud. If a comma-shaped cloud moves out of the image range, we ensure that the head and tail of the comma are in the middle part of the square. The labeled comma-shaped clouds have different visual appearances, and their coverage varies in width from 70 miles to 1,000 miles; automatically detecting them is thus nontrivial. The labeled dataset includes a total of 10,892 comma-shaped cloud frames in 9,321 images for the three years 2008, 2011, and 2012. Most of them follow the earlier description of comma-shaped clouds, with a visible rotating head part, a main body heading from southwest to northeast, and the dark dry-slot area between them.
The second part of the labeled data consists of storm observations with detailed information, including time, location, range, and type. Each storm is represented by its latitude and longitude in the record. We ignore the range differences between storms because the range is relatively small ($<$ 5 miles) compared with our bounding boxes (70 $\sim$ 1000 miles). Every event recorded in the database had sufficient severity to cause deaths, injuries, damage, or disruption to commerce. The total estimated damage from storm events for the years 2011 and 2012 surpassed two billion dollars~\cite{annual2011summary}. From the database, we choose eight types of severe weather records\footnote{Tornadoes are included in the Thunderstorm Wind type.} that are known to correlate strongly with comma-shaped clouds and happen most frequently among all types of events. The distribution of these eight types of severe weather events is shown in the left part of Fig.~\ref{fig:storm_record}. Among those eight types, thunderstorm winds, hail, and heavy rain happen most frequently ($\sim 93 \%$ of the total events). The state-wise geographical distributions of some types of storm events are in the right half of Fig.~\ref{fig:storm_record}. Because marine-based events do not have associated state information, we only visualize the geographical distributions for land-based storm events. With the exception of heavy rain, these severe weather events happen more frequently in the eastern CONUS.
In our experiments, we include only storms that lasted for more than 30 minutes, so that each overlaps with at least one satellite image in the dataset. Consequently, we have 5,412 severe storm records for the years 2011 and 2012 in the CONUS area for testing purposes, and their time spans vary from 30 minutes to 28 days.
\section{Our Proposed Detection Method}\label{sec:method}
Fig.~\ref{fig:framework} shows our proposed comma-shaped cloud detection pipeline. We first segment the cloud from the background in Sec.~\ref{sec:segmentation}, and then extract shape and motion features of clouds in Sec.~\ref{sec:corr}. Well-designed region proposals in Sec.~\ref{sec:regions} shrink the search range on satellite images. The features on the extracted region proposals are fed into weak classifiers in Sec.~\ref{sec:weakclassifier}, which we then ensemble into our comma-shaped cloud detector in Sec.~\ref{sec:adaboost}. We now detail the technical setups.
\subsection{High-Cloud Segmentation}\label{sec:segmentation}
We first segment the high cloud part from the noisy original satellite data.
Raw satellite images contain all the objects that can be seen from the geosynchronous orbit, including land, seas, and clouds. Among all the visible objects in satellite images, we focus on the dense middle and top clouds, which we refer to as ``high cloud'' in the following. The high cloud is important because the comma-shaped phase is most evident in this part, according to~\cite{carlson1980airflow}.
The prior work~\cite{otsu1979threshold} implemented the single-threshold segmentation method to separate clouds from the background. This method is based on the fact that high cloud looks brighter than other parts of the infrared satellite images~\cite{weinreb2011conversion}.
We evaluate this method and show the result in the second column of Fig.~\ref{fig:compare}.
Although this method can segment most high clouds from the background, it misses some cloud boundaries. Earth undergoes periodic temperature changes and ground-level temperature variations, which are affected by many factors including terrain, elevation, and latitude; a single threshold cannot adapt to all these cases.
The imperfection of the prior segmentation method motivates us to explore a data-driven approach. The overall idea of the new segmentation scheme is described as follows: To be aware of spatiotemporal changes of the satellite images, we divide the image pixels into tiles, and then model the samples of each unit using a GMM. Afterward, we identify the existence of a component that most likely corresponds to the variations of high cloud-sky brightness.
We build independent GMMs for each hour and each spatial region to address the challenges of periodic sunlight changes and terrain effects. As sunlight changes in a cycle of one day, we group satellite images by hour and estimate GMMs for each hour separately. Furthermore, since land conditions also affect light conditions, we divide each satellite image into non-overlapping windows according to their geolocations. Suppose all pixels are indexed by their time stamp $t$ and spatial location $\mathbf x$. We divide each day into 24 hours and each image into non-overlapping square windows of $32\times 32$ pixels. Thus, for each hour $h$ and each window $L$, {\it i.e.}, $T_h\times X_L$, we form a group of pixels $G_{h,L}=\{I(t,\mathbf x): t\in T_h, \mathbf x\in X_L\} = \{I_{h,L}(t,\mathbf x)\}$, with brightness $I(t,\mathbf x)\in$ [0, 255]. Each pixel group $G_{h,L}$ has about 150,000 samples. We model each group by a GMM whose number of components $K_{h,L}$ is 2 or 3, {\it i.e.},
\begin{equation*}
I_{h,L}(t,\mathbf x) \sim \sum_{i=1}^{K_{h,L}} \varphi^{(i)}_{h,L} \mathcal{N} \left( \mu^{(i)}_{h,L} ,\Sigma^{(i)}_{h,L} \right),
\end{equation*}
where
\begin{align*}
K_{h,L} & = \argmin_{i \in \{2,3\}} \left\{AIC\left(K_{h,L}=i \mid G_{h,L}\right)\right\}.
\end{align*}
Here AIC($\cdot$) is the Akaike information criterion function of $K_{h,L}$. $\psi_{h,L} = \left\{\varphi^{(i)}_{h,L},\mu^{(i)}_{h,L} ,\Sigma^{(i)}_{h,L}\right\}_{i = 1}^{K_{h,L}}$ are the GMM parameters, ordered so that $\mu^{(i)}_{h,L} > \mu^{(j)}_{h,L}$ for all $i > j$, and estimated by the k-means++ method~\cite{arthur2007k}. We can interpret $K_{h,L} = 2$ as the GMM peaks fitting high clouds and land, and $K_{h,L} = 3$ as the peaks fitting high clouds, low clouds, and land. Thus, for each GMM $\psi_{h,L}$, the component with the largest mean is the one modeling the high clouds. We then compute the normalized density of the point $(t, \mathbf x)$ over $\psi_{h,L}$, denote it by $\left\{p_{h,L}^{(i)}(t,\mathbf x)\right\}_{i = 1}^{K_{h,L}}$, and define the intensity value after segmentation to be
\begin{equation}
\tilde{I}(t,\mathbf x):=
\left\{\begin{array}{ll}
I_{h,L}(t,\mathbf x)\cdot p_{h,L}^{(1)}(t,\mathbf x) & \mbox{if } I_{h,L}(t,\mathbf x)\cdot p_{h,L}^{(1)}(t,\mathbf x) \ge \sigma, \\
0 & \mbox{otherwise},
\end{array}\right.
\label{eq:cut}
\end{equation}
where $\sigma$ is chosen empirically between 100 and 130, with little impact on the extracted features within this range. In our experiments, we choose $\sigma = 120$.
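For concreteness, the per-group segmentation above can be sketched as follows. This is a minimal illustration using \texttt{scikit-learn} on hypothetical toy data: the grouping by hour and window, and the k-means++ initialization of~\cite{arthur2007k}, are omitted, and the normalized density is taken here as the posterior responsibility of the brightest component.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_group(pixels, sigma=120):
    """Sketch of the per-group GMM segmentation: fit GMMs with K in {2, 3},
    pick K by AIC, and keep brightness explained by the brightest component."""
    x = pixels.reshape(-1, 1).astype(float)
    # Model selection by the Akaike information criterion
    models = [GaussianMixture(n_components=k, random_state=0).fit(x) for k in (2, 3)]
    gmm = min(models, key=lambda m: m.aic(x))
    # The component with the largest mean models the high clouds
    top = int(np.argmax(gmm.means_.ravel()))
    # Posterior responsibility of the top component, used as normalized density
    p_top = gmm.predict_proba(x)[:, top]
    out = pixels.astype(float).ravel() * p_top
    out[out < sigma] = 0.0                      # threshold as in Eq. (eq:cut)
    return out.reshape(pixels.shape)

# Toy pixel group: dark land around 60, bright high cloud around 220
rng = np.random.default_rng(0)
group = np.concatenate([rng.normal(60, 5, 500), rng.normal(220, 5, 500)])
seg = segment_group(group)
```

In this toy example, the dark (land) samples are suppressed to zero while bright (high-cloud) samples survive the threshold.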
We then apply a min-max filter between neighboring GMMs in spatiotemporal space.
Based on the assumption that cloud movement is smooth in spatiotemporal space, the GMM parameters $\psi_{h,L}$ should be a continuous function of $h$ and $L$. For most pixel groups that we have examined, the segmented cloud indeed changes smoothly. However, when the GMM component number changes, $\mu^{(1)}_{h,L}$ can jump in both $h$ and $L$, resulting in significant changes in the segmented cloud.
To smooth the cloud boundaries, we post-process $\mu^{(1)}_{h,L}$ with a min-max filter, given by
\begin{equation}
\mu_{h,L}^{(1),\mbox{new}}:=\max\left\{\mu^{(1)}_{h,L}, {\min}_{\substack{h'\in \mathcal N_h \\L'\in \mathcal M_L}}\{\mu^{(1)}_{h',L'}\}\right\},
\label{eq:minimax_filter}
\end{equation}
where $\mathcal N_h = \left[h-1, h+1 \right]$ and $\mathcal M_L= \left\{l:\left|l - L\right| \leq 1\right\}$.
The min-max filter leverages the smoothness of GMMs within spatiotemporal neighborhoods. After applying Eq.~\eqref{eq:minimax_filter}, we update the normalized densities and obtain smoother results with Eq.~\eqref{eq:cut}. Example high-cloud segmentation results are shown in the third column of Fig.~\ref{fig:compare}. At the end of this step, the high clouds are separated with detailed local information, while the shallow clouds and the land are removed.
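The min-max update of Eq.~\eqref{eq:minimax_filter} can be sketched as follows. Note one assumption in this sketch: the neighborhood minimum is taken over the neighbors \emph{excluding} the center cell (if the center were included, the update would reduce to the identity); the effect is then to lift an abnormally low $\mu^{(1)}_{h,L}$ up to the minimum of its spatiotemporal neighbors.

```python
import numpy as np

def minmax_filter(mu):
    """Sketch of the min-max filter: replace each mu^(1)_{h,L} with the max of
    itself and the min over its spatiotemporal neighbors (assumed to exclude
    the center cell, otherwise the update is a no-op)."""
    H, W = mu.shape                       # hours x window index (1-D windows here)
    out = mu.copy()
    for h in range(H):
        for l in range(W):
            nbrs = [mu[h2, l2]
                    for h2 in range(max(h - 1, 0), min(h + 2, H))
                    for l2 in range(max(l - 1, 0), min(l + 2, W))
                    if (h2, l2) != (h, l)]
            out[h, l] = max(mu[h, l], min(nbrs))
    return out

mu = np.full((3, 3), 100.0)
mu[1, 1] = 10.0                           # an abnormally low outlier
smoothed = minmax_filter(mu)
```

The outlier at the center is raised to the neighborhood level, while already-smooth entries are unchanged.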
\subsection{Correlation with Motion Prior}\label{sec:corr}
Another evident feature of comma-shaped clouds is motion. In the cyclonic system, the jet stream has a strong tendency to rotate around the low center, which makes up the head part of the comma in the satellite image~\cite{crane1979automatic}. In this section, we design a visual feature, which we call \textit{Motion Correlation}, to extract this cloud motion information. The key idea is that the \textit{same} cloud at two \textit{different} spatiotemporal points should have a strong positive correlation in appearance,
based on the reasonable assumption that clouds move at a nearly uniform speed within a small spatial and temporal span.
Thus, the cloud movement direction can be inferred from the direction of maximum correlation. This assumption was first applied to compute cross-correlation in~\cite{leese1970determination}.
We therefore define the motion correlation of location $\mathbf x$ on the time interval of $(t-T, t]$ to be:
\begin{equation}
M(t, \mathbf x) =
\mbox{corr}_{t_0 \in (t-T, t]} (I(t - t_0,\mathbf x), I(t, \mathbf x + \mathbf h))\;,
\label{eq:motion_corr}
\end{equation}
where $\mbox{corr}(\cdot,\cdot)$ denotes the Pearson correlation coefficient, and $\mathbf h$ is the cloud displacement distance in time interval $T$.
This motion correlation can be viewed as an improved version of the cross-correlation in~\cite{leese1970determination}, which we mentioned in Sec.~\ref{sec:intro}. The cross-correlation can be written in the following form:
\begin{equation}
M_0(t, \mathbf x) =
\mbox{corr}_{\left \| \mathbf x_1 - \mathbf x \right \|\leq \mathbf h_0} (I(t - T_0,\mathbf x), I(t, \mathbf x_1 + \mathbf h))\;,
\label{eq:motion_ref}
\end{equation}
where $T_0$ is the time span between two successive satellite images.
Comparing Eq.~\eqref{eq:motion_corr} and~\eqref{eq:motion_ref}, we can conclude that our motion correlation is \textit{temporally smoothed} whereas the cross-correlation is \textit{spatially smoothed}. The cross-correlation feature focuses on the differences between only two images and then averages over a spatial range. In contrast, our correlation feature with motion prior captures the accumulated movement over the entire time span.
We further re-normalize both $M(\cdot, \cdot)$ and $M_0(\cdot, \cdot)$ to [0, 255] and visualize these two motion descriptors in the fourth and fifth columns of Fig.~\ref{fig:compare}, where we fix $\mathbf h$ to be 10 pixels, $T$ to be 5 hours, and $\mathbf h_0$ to be 128 pixels. The cross-correlation feature (fourth column of Fig.~\ref{fig:compare}) is discontinuous across area boundaries. In image time series, the cross-correlation feature expresses less consistent positive/negative correlation within a neighborhood than our motion correlation does. Compared with the cross-correlation feature, our motion correlation feature (fifth column of Fig.~\ref{fig:compare}) shows texture more consistent with the cloud motion direction.
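One plausible reading of the motion correlation of Eq.~\eqref{eq:motion_corr} can be sketched as follows: the Pearson correlation is taken between the brightness time series at $\mathbf x$ and the time series at the displaced location $\mathbf x + \mathbf h$ over the last $T$ frames. This reading, and the helper \texttt{motion\_correlation} itself, are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def motion_correlation(frames, x, h, T):
    """Hedged sketch of the motion-correlation feature: Pearson correlation
    between the brightness series at pixel x and at the displaced pixel x + h
    over the last T frames."""
    a = np.array([f[x[0], x[1]] for f in frames[-T:]], dtype=float)
    b = np.array([f[x[0] + h[0], x[1] + h[1]] for f in frames[-T:]], dtype=float)
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

# Toy sequence: a brightness pattern at (2, 2) reappears at the displaced
# pixel (2, 3), mimicking a cloud moving with displacement h = (0, 1)
frames = [np.zeros((5, 5)) for _ in range(6)]
for t, f in enumerate(frames):
    f[2, 2] = t
    f[2, 3] = t
c = motion_correlation(frames, (2, 2), (0, 1), T=5)
```

For this perfectly repeated pattern, the correlation is 1; uncorrelated brightness histories would yield values near zero.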
\begin{figure}[!htbp]
\vspace{-0.16in}
\centering
\subfloat {\includegraphics[width=0.09\textwidth,trim=950 550 650 26,clip]{images/goes12_2008_001_011515_BAND_04_ori.png}}\hspace{0.01in}
\subfloat {\includegraphics[width=0.09\textwidth,trim=950 550 650 26,clip]{images/goes12_2008_001_011515_BAND_04_cutthres.png}}\hspace{0.01in}
\subfloat {\includegraphics[width=0.09\textwidth,trim=950 550 650 26,clip]{images/goes12_2008_001_011515_BAND_04_cut.png}}\hspace{0.01in}
\subfloat {\includegraphics[width=0.09\textwidth,trim=950 550 650 26,clip]{images/goes12_2008_001_011515_BAND_04_othermotion.png}}\hspace{0.01in}
\subfloat {\includegraphics[width=0.09\textwidth,trim=950 550 650 26,clip]{images/goes12_2008_001_011515_BAND_04_motion.png}}
\vspace{-0.1in}
\subfloat {\includegraphics[width=0.09\textwidth,trim=150 160 1036 0,clip]{images/goes12_2008_004_144514_BAND_04_ori.png}}\hspace{0.01in}
\subfloat {\includegraphics[width=0.09\textwidth,trim=150 160 1036 0,clip]{images/goes12_2008_004_144514_BAND_04_cutthres.png}}\hspace{0.01in}
\subfloat {\includegraphics[width=0.09\textwidth,trim=150 160 1036 0,clip]{images/goes12_2008_004_144514_BAND_04_cut.png}}\hspace{0.01in}
\subfloat {\includegraphics[width=0.09\textwidth,trim=150 160 1036 0,clip]{images/goes12_2008_004_144514_BAND_04_othermotion.png}}\hspace{0.01in}
\subfloat {\includegraphics[width=0.09\textwidth,trim=150 160 1036 0,clip]{images/goes12_2008_004_144514_BAND_04_motion.png}}
\vspace{-0.1in}
\subfloat {\includegraphics[width=0.09\textwidth,trim=250 400 1400 250,clip]{images/goes12_2008_007_181515_BAND_04_ori.png}}\hspace{0.01in}
\subfloat {\includegraphics[width=0.09\textwidth,trim=250 400 1400 250,clip]{images/goes12_2008_007_181515_BAND_04_cutthres.png}}\hspace{0.01in}
\subfloat {\includegraphics[width=0.09\textwidth,trim=250 400 1400 250,clip]{images/goes12_2008_007_181515_BAND_04_cut.png}}\hspace{0.01in}
\subfloat {\includegraphics[width=0.09\textwidth,trim=250 400 1400 250,clip]{images/goes12_2008_007_181515_BAND_04_othermotion.png}}\hspace{0.01in}
\subfloat {\includegraphics[width=0.09\textwidth,trim=250 400 1400 250,clip]{images/goes12_2008_007_181515_BAND_04_motion.png}}
\vspace{-0.1in}
\subfloat[(a)] {\includegraphics[width=0.09\textwidth,trim=150 160 1036 0,clip]{images/goes12_2008_006_074514_BAND_04_ori.png}}\hspace{0.01in}
\subfloat[(b)] {\includegraphics[width=0.09\textwidth,trim=150 160 1036 0,clip]{images/goes12_2008_006_074514_BAND_04_cutthres.png}}\hspace{0.01in}
\subfloat[(c)] {\includegraphics[width=0.09\textwidth,trim=150 160 1036 0,clip]{images/goes12_2008_006_074514_BAND_04_cut.png}}\hspace{0.01in}
\subfloat[(d)] {\includegraphics[width=0.09\textwidth,trim=150 160 1036 0,clip]{images/goes12_2008_006_074514_BAND_04_othermotion.png}}\hspace{0.01in}
\subfloat[(e)] {\includegraphics[width=0.09\textwidth,trim=150 160 1036 0,clip]{images/goes12_2008_006_074514_BAND_04_motion.png}}
\caption{Cropped satellite images. (a) The original data. (b) Segmented high clouds with single threshold. (c) Segmented high clouds with GMM. (d) Cross-correlation in~\cite{leese1970determination}. (e) Correlation with motion prior.}
\label{fig:compare}
\end{figure}
\subsection{Data Partition}\label{sec:data_par}
In this section, we use the widely used sliding-window approach of~\cite{papageorgiou2000trainable} as the first detection step. Sliding windows with an image pyramid help us capture comma-shaped clouds at various scales and locations. Because most comma-shaped clouds are in the high sky, we run our sliding windows on the segmented cloud images. We set 21 dense $L\times L$ sliding windows, where $L\in \left\{128, 128 \cdot 8^{1/20}, \cdots, 128 \cdot 8^{19/20}, 1024 \right\}$. For each sliding window size $L$, the stride of the sliding window is $\left \lfloor L/8 \right \rfloor$, where $\left \lfloor \cdot \right \rfloor$ is the floor function. Under this setting, each satellite image has more than $10^{4}$ sliding windows, which is enough to cover comma-shaped clouds at different scales.
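The window generation just described can be sketched as follows (a minimal enumeration of the 21 geometric scales and their strides on a 1,024$\times$2,048 image):

```python
def sliding_windows(img_h=1024, img_w=2048):
    """Sketch of the sliding-window generation: 21 scales from 128 to 1024
    (geometric progression with ratio 8^(1/20)), stride floor(L/8) per scale."""
    boxes = []
    for k in range(21):
        L = int(round(128 * 8 ** (k / 20)))   # window side length
        stride = L // 8                        # movement pace
        for y in range(0, img_h - L + 1, stride):
            for x in range(0, img_w - L + 1, stride):
                boxes.append((x, y, L))        # top-left corner and side length
    return boxes

boxes = sliding_windows()
```

The enumeration confirms the scale range 128--1024 and yields well over $10^{4}$ windows per image, consistent with the text.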
Before we apply machine learning techniques, it is important to define whether a given bounding box is positive or negative. Here we use the Intersection over Union (IoU) metric~\cite{pascal-voc-2012} to define the positive and negative samples, which is a common criterion in object detection. We set bounding boxes whose IoU exceeds a chosen threshold to be the positive examples, and those with IoU = 0 to be the negative samples.
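The IoU criterion can be sketched directly for axis-aligned boxes:

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# Two 128x128 boxes overlapping by half their width: IoU = 1/3
score = iou((0, 0, 128, 128), (64, 0, 192, 128))
```

A sliding window whose IoU with a labeled region exceeds the threshold counts as positive; a window with zero overlap counts as negative.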
\begin{figure}[tbp!]\centering
\includegraphics[width=0.45\textwidth,trim=1cm 2.3cm 2.5cm 3.6cm,clip]{images/recallRegionProposal.pdf}
\caption{IoU-Recall curve in the Region Proposal steps. The blue dot on the blue curve is our final IoU choice, with the corresponding recall of 0.91.}
\label{fig:recall}
\end{figure}
A suitable IoU threshold should strike a balance between high recall and high accuracy of the selected comma-shaped clouds. Several factors prevent us from achieving a perfect recall rate. First, we only choose limited sizes of sliding windows with limited strides. Second, some of the satellite images are (partially) corrupted and unsuitable for a data-driven approach. Third, some cloud patches are at a lower altitude and hence are removed in the high-cloud segmentation process in Sec.~\ref{sec:segmentation}. Fourth, we design simple classifiers to filter out most sliding windows without comma-shaped clouds (see Sec.~\ref{sec:regions}); though region proposals bring high efficiency, they inadvertently filter out a small portion of true comma-shaped clouds. We show the IoU-recall curves in Fig.~\ref{fig:recall} to analyze the effect of these factors on the recall rate. We mark our choice of IoU = 0.50 as the blue dot in the plot and explain the reasons below.
Among the three curves in Fig.~\ref{fig:recall}, the green curve, marked as \textit{Optimal Recall}, indicates the theoretical highest recall rate we can obtain as IoU changes. Because our algorithm imposes strong requirements on the sizes and locations of sliding windows while human labelers face no such restrictions, labeled regions and sliding windows cannot overlap perfectly due to human perception variations. Thus, we use the maximum IoU between each labeled region and all sliding windows as the highest theoretical IoU of this algorithm.
The red curve, marked as \textit{Recall before Region Proposals}, indicates the true recall we can obtain, accounting for missing images, image corruption, and high-cloud segmentation errors. Within our dataset, 11.26\% (5,926) of satellite images are missing from the NOAA satellite image dataset, 0.36\% (188) contain no recognizable clouds, and 3.33\% (1,751) have abnormally low contrast. Though low-contrast or dark images can be adjusted by histogram equalization, their pixel brightness values do not completely follow the GMMs estimated in the background extraction step, so some high clouds are mistakenly removed with the background. Under this experimental setting, this curve is the highest recall we can obtain before region proposals.
The blue curve, marked as \textit{Recall after Region Proposals}, indicates the true recall after region proposals; the detailed design of the region proposals is given in Sec.~\ref{sec:regions}.
The positive training samples consist of sliding windows whose IoU with labeled regions is higher than a carefully chosen threshold, in order to guarantee both a reasonably high recall and a high accuracy. Following the convention in object detection tasks, we require an IoU threshold $\geq$ 0.50 to ensure visual similarity with manually labeled comma-shaped clouds, and a reasonably high recall rate ($\geq 90\%$) for enough training samples. Accordingly, the IoU threshold is set to 0.50 for our task. The recall rate is 92.26\% before region proposals and 90.66\% after region proposals.
After establishing these boundaries, we partition the dataset into three parts: \textit{training set}, \textit{cross-validation set}, and \textit{testing set}. We use the data of the first 250 days of the year 2008 as the training set, the last 116 days of that year as the cross-validation set, and data from the years 2011 and 2012 as the testing set.
We choose the year 2008 for training and cross-validation because of its unusually large number of severe storms. The storm distribution ratio in the training, cross-validation, and testing sets is roughly 50\% : 15\% : 35\%. There are strong data dependencies between consecutive images; splitting our data by time rather than randomly breaks this type of dependency and more realistically emulates the scenarios within our system. This data partitioning scheme also applies to the region proposals described in Sec.~\ref{sec:regions}.
\subsection{Region Proposals}\label{sec:regions}
In this stage, we design simple classifiers to filter out the majority of negative sliding windows; a similar method was applied in~\cite{dalal2005histograms}. Because only a very small proportion of the sliding windows generated in Sec.~\ref{sec:data_par} contain comma-shaped clouds, reducing the number of sliding windows saves computation in the subsequent training and testing processes.
We apply three weak classifiers to decrease the number of sliding windows.
The first classifier removes candidate sliding windows whose average pixel intensity is outside the range [50, 200]. Comma-shaped clouds have a typical shape characteristic: the cloud body consists of dense clouds, while the dry-tongue part is cloudless. Hence, the average intensity of a well-cropped patch should be within a reasonable range, neither too bright nor too dark. This classifier removes most cloudless bounding boxes while keeping over 98\% of the positive samples.
The second classifier uses a linear margin to separate positive examples from negative ones. We train this linear classifier on all the positive sliding windows together with an equal number of randomly chosen negative examples, and then validate it on the cross-validation set. All the sliding windows are resized to $256 \times 256$ pixels and vectorized before training, and the response variable is positive (1) or negative (0). The resulting classifier has an accuracy of over 95\% on the training set and over 80\% on the cross-validation set. To ensure a high recall of our detectors, we output the probability of each sliding window and set a low threshold value; sliding windows whose output probability is less than this threshold are filtered out. The threshold ensures that no positive samples are filtered out. We randomly change the train-test split for ten rounds and set the final threshold to be 0.2.
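This probability-thresholded linear filter can be sketched as follows (the feature vectors below are random stand-ins for the vectorized $256\times 256$ windows, not real satellite data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative stand-ins for vectorized sliding windows: 200 positives and
# 200 negatives drawn from well-separated distributions
rng = np.random.default_rng(0)
X_pos = rng.normal(1.0, 1.0, size=(200, 64))
X_neg = rng.normal(-1.0, 1.0, size=(200, 64))
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 200 + [0] * 200)

# Train the linear margin, then keep only windows whose predicted
# probability exceeds a deliberately low threshold (0.2 in the paper)
clf = LogisticRegression(max_iter=1000).fit(X, y)
probs = clf.predict_proba(X)[:, 1]
keep = probs >= 0.2
```

The low threshold is chosen for recall: nearly all positives pass the filter, while a large fraction of negatives are discarded before the more expensive stages.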
Finally, we compute the pixel-wise correlation $\gamma$ of each sliding window $I$ with the average comma-shaped cloud $I_0$. This correlation captures the similarity to a comma shape. $\gamma$ is computed as:
\begin{equation}
\gamma = \frac{I \cdot I_0}{\left \| I \right \|_{L_2} \cdot \left \| I_0 \right \|_{L_2}}\;.
\label{eq:corr}
\end{equation}
Because there are no visual differences between different categories of storms (as shown in the last row of the table in Fig.~\ref{fig:distribute}), $I_0$ is the average of the labeled comma-shaped clouds in the training set. The computation of $I_0$ consists of the following steps. First, we take all the labeled comma-shaped cloud bounding boxes in the training set and resize them to $256 \times 256$. Next, we segment the high-cloud part from each image using the method in Sec.~\ref{sec:segmentation}. Finally, we take the average of the high-cloud parts. The resulting $I_0$ is marked as Avg. in the middle row of the table in Fig.~\ref{fig:distribute}. To be consistent in dimensions, every sliding window $I$ is also resized to $256 \times 256$ in Eq.~\eqref{eq:corr}.
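Eq.~\eqref{eq:corr} is a normalized inner product (cosine similarity) between the vectorized window and the template; a direct sketch:

```python
import numpy as np

def comma_correlation(window, template):
    """Pixel-wise correlation of Eq. (eq:corr): normalized inner product
    between a sliding window and the average comma-shaped cloud template."""
    a = window.ravel().astype(float)
    b = template.ravel().astype(float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# A window identical to the template scores 1; disjoint patterns score 0
template = np.zeros((4, 4)); template[:2, :2] = 1.0
same = comma_correlation(template.copy(), template)
disjoint = comma_correlation(np.flipud(np.fliplr(template)), template)
```

Since both inputs are non-negative brightness images, $\gamma$ lies in [0, 1] here, and the threshold $\gamma \geq 0.15$ acts as a shape-similarity gate.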
\begin{figure}[tbp!]
\begin{minipage}{0.95\linewidth}
\centering
\includegraphics[width=\textwidth,trim=0.2cm 2.3cm 0.56cm 3.2cm,clip]{images/regionProposals.pdf}
\end{minipage}
\begin{minipage}{0.95\linewidth}
\centering
\begin{threeparttable}
\begin{tabular}{ |m{0.14\linewidth}|m{0.15\linewidth}|m{0.15\linewidth}|m{0.15\linewidth}|m{0.15\linewidth}| }
\hline
Marker & $N_1$ & $N_2$ & $N_3$ & $N_4$ \\
\hline
$\gamma$ & -0.60 & -0.40 & -0.20 & 0\\
\hline
Example &
\includegraphics[scale=0.08]{images/_0_6.png} &
\includegraphics[width=\linewidth]{images/_0_4.png} &
\includegraphics[width=\linewidth]{images/_0_2.png} &
\includegraphics[width=\linewidth]{images/0.png}\\
\hline\hline
Marker & $P_1$ & $P_2$ & $P_3$ & --\\
\hline
$\gamma$ & 0.20 & 0.40 & 0.60 &Avg.\\
\hline
Example &
\includegraphics[width=\linewidth]{images/0_2.png} &
\includegraphics[width=\linewidth]{images/0_45.png} &
\includegraphics[width=\linewidth]{images/0_6.png} &
\includegraphics[width=\linewidth]{images/allCommaImg.png}\\
\hline \hline
TS$^{*}$& TS & Lightning & Hail & Marine\\
Category& Wind & & & TS Wind\\
\hline
Avg. Image&
\includegraphics[width=\linewidth]{images/ThunderstormWind.png} &
\includegraphics[width=\linewidth]{images/MarineThunderstormWind.png} &
\includegraphics[width=\linewidth]{images/Hail.png} &
\includegraphics[width=\linewidth]{images/Lightning.png}\\
\hline
\end{tabular}
\begin{tablenotes}\footnotesize
\item[*] TS: Thunderstorm.
\end{tablenotes}
\end{threeparttable}
\end{minipage}
\caption{\textit{Top}: The correlation probability distribution of all sliding windows. \textit{Middle}:
Some segmented image examples. The last example image is the average image of the manually labeled regions in the training set. The correlation score $\gamma$ is defined in Eq.~\eqref{eq:corr}, and the diagram is the normalized probability distribution of $\gamma$ of the training set.
\textit{Bottom}: Average comma-shaped clouds in different categories. }
\label{fig:distribute}
\end{figure}
A higher correlation $\gamma$ indicates that a cloud patch more closely resembles a comma-shaped cloud. Based on this observation, a simple classifier is designed to select sliding windows whose $\gamma$ is higher than a certain threshold.
Fig.~\ref{fig:distribute} serves as a reference for choosing a customized threshold of $\gamma$. The distribution of $\gamma$ and some example images are listed in the table of Fig.~\ref{fig:distribute}. In the training and cross-validation sets, less than 1\% of positive examples have a $\gamma$ value lower than 0.15. We therefore use $\gamma \geq 0.15$ as the final filter to eliminate sliding window candidates.
The region proposal process retains only about $10^{3}$ bounding boxes per image, roughly one tenth of the initial number. As shown in Fig.~\ref{fig:recall}, the region proposal process does not significantly affect the recall rate, but it saves much time in the later training process.
\subsection{Construction of Weak Classifiers}\label{sec:weakclassifier}
We design two sets of descriptive features to distinguish comma-shaped clouds. The first is the histogram of oriented gradients (HOG)~\cite{dalal2005histograms} feature based on segmented high clouds. For each region proposal, we compute the HOG feature of the bounding box. Because we compute the HOG feature on the segmented high clouds, we refer to it as the \textit{Segmented HOG} feature in the following paragraphs. The second is the histogram feature of each image crop based on the motion prior image, where the texture of the image reflects the motion information of cloud patches. We fine-tune the parameters and show the accuracy on the cross-validation set in Table~\ref{tb:HOGLBP}. We use Segmented HOG setting~\#4 and Motion Histogram setting~\#5 as the final parameter settings in our experiments because they perform best on the cross-validation set. The feature dimension is 324 for Segmented HOG and 27 for Motion Histogram.
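As a sanity check on the stated feature dimension, setting~\#4 (9 orientations, $64\times 64$ pixels per cell, $2\times 2$ cells per block) on a $256\times 256$ crop gives $4\times 4$ cells, hence $3\times 3$ overlapping blocks of $2\times 2\times 9 = 36$ values each, i.e., $9 \times 36 = 324$ dimensions. The following simplified HOG is illustrative rather than the exact implementation of~\cite{dalal2005histograms}:

```python
import numpy as np

def hog_like_feature(patch, n_bins=9, cell=64, block=2):
    """Simplified HOG sketch matching Segmented-HOG setting #4:
    9 orientation bins, 64x64-pixel cells, 2x2-cell blocks, stride one cell."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    n_cells = patch.shape[0] // cell               # 256 / 64 = 4 cells per side
    # Per-cell orientation histograms weighted by gradient magnitude
    hist = np.zeros((n_cells, n_cells, n_bins))
    for i in range(n_cells):
        for j in range(n_cells):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            hist[i, j], _ = np.histogram(a, bins=n_bins, range=(0, 180), weights=m)
    # L2-normalized overlapping blocks
    feats = []
    for i in range(n_cells - block + 1):
        for j in range(n_cells - block + 1):
            b = hist[i:i+block, j:j+block].ravel()
            feats.append(b / (np.linalg.norm(b) + 1e-12))
    return np.concatenate(feats)                   # 3*3 blocks x 36 = 324 dims

feat = hog_like_feature(np.random.rand(256, 256))
```

The resulting descriptor has exactly 324 dimensions, matching the Segmented HOG dimension reported above.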
\begin{table}[tb]
\begin{center}
\caption{Avg. Accuracy of weak classifiers for the segmented HOG and motion histogram features in different parameter settings}
\begin{threeparttable}
\begin{tabular}{c|c|c|c|c}
\hline
Seg. HOG & Orientation & Pixels/ & Cells/ & (\%)Avg. \\
Settings & Directions & Cell & Block & Accuracy \\
\hline
\#1 & 9 & $64 \times 64 $ & $1\times 1$ & $69.88 \pm 1.15 $ \\
\#2 & 18 & $64 \times 64 $ & $1\times 1$ & $ 70.65 \pm 1.25 $ \\
\#3 & 9 & $128 \times 128 $ & $1\times 1$ & $ 61.21 \pm 0.62 $ \\
\textbf{\#4} & \textbf{9} & \textbf{64} $\times$ \textbf{64} & \textbf{2} $\times$ \textbf{2} & \textbf{73.18} $\pm$ \textbf{0.98} \\
\hline
\hline
Motion Hist. & Pixels to & Time Span & Hist. & (\%)Avg. \\
Settings & the West $\mathbf h^{*}$&in hours $T^{*}$& Bins & Accuracy \\
\hline
\#1 & 10 & 2 & 18 & $58.84 \pm 0.59$ \\
\#2 & 5 & 2 & 18 & $52.97 \pm 0.20$ \\
\#3 & 10 & 2 & 9 & $58.83 \pm 0.20$ \\
\#4 & 10 & 2 & 27 & $61.05 \pm 0.63$ \\
\textbf{\#5} & \textbf{10} & \textbf{5} & \textbf{27} & \textbf{63.25} $\pm$ \textbf{0.67} \\
\hline
\end{tabular}
\begin{tablenotes}\footnotesize
\item[*] $\mathbf h$ and T have the same meaning as annotated in Eq.~\eqref{eq:motion_corr}.\\
\end{tablenotes}
\end{threeparttable}
\label{tb:HOGLBP}
\end{center}
\vspace{-0.2in}
\end{table}
Since severe weather events occur infrequently, positive examples make up only a very small proportion ($\sim 1 \%$) of the whole training set. To fully utilize the negative samples in the training set, we construct 100 linear classifiers. Each of these classifiers is trained on the whole positive training set and an equal number of negative samples. We randomly select these 100 negative batches by time stamp so that the batches do not overlap in time. We train each logistic regression model on the segmented HOG features and the motion histogram features of the training set, yielding 200 linear classifiers in total. We evaluate the accuracy of the trained linear classifiers on a subset of testing examples whose positive/negative ratio is also 1-to-1. The average accuracy of the segmented HOG feature is 73.18\% and that of the motion histogram features is 63.25\%. The accuracy distribution of these two types of weak classifiers is shown in Fig.~\ref{fig:weakclassifier}. From the statistics and the figure, we observe that the Segmented HOG feature has a higher average accuracy than the Motion Histogram feature, but also a larger variation in its accuracy distribution. As shown in Fig.~\ref{fig:weakclassifier}, about 90\% of the classifiers on the motion histogram feature have an accuracy between 63\% and 65\%, while those on the segmented HOG feature are distributed over a wider range, from 53\% to 80\%.
\begin{figure}[tb!]\centering
\includegraphics[width=0.49\textwidth,trim=0cm 0cm 0cm 0cm,clip]{images/avgAccuWeak.png}
\caption{The accuracy distribution of weak classifiers with Segmented HOG feature and Motion Histogram feature.}
\label{fig:weakclassifier}
\end{figure}
\subsection{AdaBoost Detector}\label{sec:adaboost}
We apply stacked generalization to the probability outputs of our weak classifiers~\cite{wolpert1992stacked}. For each proposed region, we use the probability outputs of the 200 weak classifiers as the input and obtain one probability $p$ as the output. We define a proposed region as positive if $p \geq p_0$ and negative otherwise, where $p_0\in (0, 1)$ is a given cutoff value.
We adopt AdaBoost~\cite{freund1995desicion} as the method for stacked generalization because it achieves the highest accuracy on the balanced cross-validation set, as shown in Table~\ref{tb:ensemble}. All these classifiers are constructed on the training set and fine-tuned on the cross-validation set. Table~\ref{tb:Adaboost} shows the accuracy of the AdaBoost classifier with different parameters.
For each set of parameters, we provide a 95\% confidence interval computed on 100 random seeds in both Table~\ref{tb:ensemble} and~\ref{tb:Adaboost}.
The classification accuracy reaches its maximum of 86.47\% with 40 leaf nodes and one tree layer. The AdaBoost classifier running on region proposals is our proposed comma-shaped cloud detector.
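The stacked generalization step can be illustrated with a from-scratch decision-stump AdaBoost over the meta-feature matrix $F$ of weak-classifier probabilities (one row per proposal, one column per weak classifier). This is a hedged sketch only: the paper's AdaBoost uses trees with tuned layer and leaf-node counts, which are not reproduced here.

```python
import numpy as np

def adaboost_stumps(F, y, n_rounds=20):
    """Discrete AdaBoost with depth-1 stumps on meta-features F; y in {0, 1}."""
    n, d = F.shape
    w = np.full(n, 1.0 / n)          # sample weights
    yy = 2 * y - 1                   # relabel to {-1, +1}
    ensemble = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):           # exhaustively pick the best stump
            for thr in np.unique(F[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(F[:, j] >= thr, 1, -1)
                    err = w[pred != yy].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        alpha = 0.5 * np.log((1 - err + 1e-10) / (err + 1e-10))
        pred = sign * np.where(F[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * yy * pred)   # reweight toward mistakes
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict_proba(ensemble, F):
    """Sigmoid-squashed ensemble margin, used as the stacked probability p."""
    score = sum(a * s * np.where(F[:, j] >= t, 1, -1)
                for a, j, t, s in ensemble)
    return 1.0 / (1.0 + np.exp(-2.0 * score))
```

Thresholding `predict_proba` at the cutoff $p_0$ then gives the positive/negative decision described above.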
\begin{table}[tb]
\begin{minipage}{.49\linewidth}
\begin{center}
\caption{Accuracy of different stacked generalization methods on the cross-validation set}
\begin{threeparttable}
\begin{tabular}{c|c}
\hline
Method$^{*}$ & Accuracy (\%) \\
\hline
LR & 85.10 $\pm$ 0.20\\
Bagging & 81.98 $\pm$ 0.46\\
RF & 82.34 $\pm$ 0.40 \\
ERF & 82.45 $\pm$ 0.34\\
GBM & 85.77 $\pm$ 0.25 \\
\textbf{AdaBoost} & \textbf{86.47} $\pm$ \textbf{0.25}\\
\hline
\end{tabular}
\label{tb:ensemble}
\begin{tablenotes}\footnotesize
\item[*] LR: Logistic Regression; RF: Random Forest; ERF: Extremely Random Forest; GBM: Gradient Boosting Machine with deviance loss.\\
\end{tablenotes}
\end{threeparttable}
\end{center}
\end{minipage}
\hfill
\begin{minipage}{.5\linewidth}
\begin{center}
\caption{Accuracy of the AdaBoost detector on the cross-validation set with different parameters}
\begin{tabular}{c|c|c}
\hline
Tree & Leaf & Accuracy (\%) \\
Layers & Nodes & \\
\hline
& 20 & 85.49 $\pm$ 0.26\\
\textbf{1} & \textbf{40} & \textbf{86.47} $\pm$ \textbf{0.25} \\
& 60 & 86.45 $\pm$ 0.22 \\
\hline
& 20 & 86.11 $\pm$ 0.24 \\
2 & 40 & 86.27 $\pm$ 0.24 \\
& 60 & 84.97 $\pm$ 0.25 \\
\hline
\end{tabular}
\label{tb:Adaboost}
\end{center}
\end{minipage}
\end{table}
We then run the AdaBoost detector on the testing set and compute the ratio of labeled comma-shaped clouds that our method detects. For each image, we choose the detection regions with the largest probability scores of containing comma-shaped clouds (abbreviated as probability in this paragraph), and we ensure that every two detection regions in one image have an IoU less than 0.30, a technique called non-maximum suppression (NMS) in object detection~\cite{rosten2006machine}. If one output region has an IoU larger than 0.30 with another output region, we remove the one with the lower probability from the AdaBoost detector's output. Finally, the detector outputs a set of sliding windows, each indicating one possible comma-shaped cloud.
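The greedy NMS step described above can be sketched as follows (boxes as (x1, y1, x2, y2) corner tuples; a minimal illustration rather than the paper's implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, probs, iou_max=0.30):
    """Greedy NMS: visit boxes by descending probability; keep a box only
    if its IoU with every already-kept box stays below iou_max."""
    order = sorted(range(len(boxes)), key=lambda i: -probs[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[k]) < iou_max for k in keep):
            keep.append(i)
    return keep
```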
In our experiment, the training set for ensembling is a combination of all 68,708 positive examples and a re-sampled subset of negative examples ten times the size of the positive set ({\it i.e.}, 687,080).
We carry out the experiments with a Python 2.7 implementation on a server with Intel\textsuperscript{\textregistered} Xeon X5550 2.67\,GHz CPUs, applying our algorithm to every satellite image in parallel.
With the cutoff threshold set to 0.50, the detection process for one image, from high-cloud segmentation to the AdaBoost detector, takes about 40.59 seconds: high-cloud segmentation takes 4.69 seconds, region proposals take 14.28 seconds, and the AdaBoost detector takes 21.62 seconds. We currently receive only two satellite images per hour, and the three stages run sequentially. If higher speed is needed, an implementation in C/C++ is expected to be substantially faster.
\section{Evaluation}\label{sec:result}
In this section, we present the evaluation results for our detection method. First, we present an ablation study. Second, we show that our method can effectively detect both comma-shaped clouds and severe thunderstorms. Finally, we compare our method with two other satellite-based storm detection schemes and show that our method outperforms both.
\subsection{Ablation Study}
\begin{table}[!tb]
\begin{center}
\caption{Accuracy of the AdaBoost classifier on the cross-validation set with different features}
\begin{threeparttable}
\begin{tabular}{c|c|c}
\hline
With high-cloud & Feature(s) & Accuracy (\%) \\
segmentation & & \\
\hline
& HOG & 70.45 $\pm$ 1.13\\
No & Motion Hist. & 55.84 $\pm$ 0.88\\
& Combination & 80.30 $\pm$ 0.41 \\
\hline
& HOG & 74.01 $\pm$ 0.90 \\
\textbf{Yes} & Motion Hist. & 65.32 $\pm$ 0.62\\
& \textbf{Combination} & \textbf{86.47} $\pm$ \textbf{0.25}\\
\hline
\end{tabular}
\label{tb:ablation}
\begin{tablenotes}\footnotesize
\item Here HOG with high-cloud segmentation = Segmented HOG feature; Motion Hist. = Motion Histogram Feature.
\end{tablenotes}
\end{threeparttable}
\end{center}
\end{table}
To examine how much each step contributes to the model, we carry out an ablation study and show the results in Table~\ref{tb:ablation}.
We enumerate all combinations of high-cloud segmentation and features. The first column indicates whether the region proposals are computed on the original satellite images or on the segmented ones. The second column distinguishes the HOG feature, the Motion Histogram feature, and their combination. The last column shows the accuracy on the cross-validation set with a 95\% confidence interval.
Without high-cloud segmentation, the combination of HOG and Motion Histogram features outperforms either feature alone.
With high-cloud segmentation, the combination again outperforms each individual feature, and it also outperforms the combination of features without high-cloud segmentation. In conclusion, the effectiveness of our detection scheme is due to \textit{both} the high-cloud segmentation process and the weak classifiers built on shape and motion features.
\subsection{Detection Result}
The evaluation in Fig.~\ref{fig:missing-detect} shows that our model can detect up to 99.39\% of the labeled comma-shaped clouds and up to 79.41\% of the storms of the years 2011 and 2012. Here we define ``detecting a comma-shaped cloud'' as follows: if our method outputs a bounding box having IoU $\geq$ 0.50 with the labeled region, we consider that bounding box to detect the comma-shaped cloud; otherwise not. We likewise define ``detecting a storm'': if a storm in the NOAA storm database is captured within one of our output bounding boxes, we consider the storm detected.
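The storm criterion above (a storm report counted as detected if it falls inside any output box) can be sketched as a point-in-box recall; this is an illustration of the stated definition, not the evaluation script itself:

```python
def storm_recall(storm_points, detection_boxes):
    """Fraction of storm reports (x, y) falling inside at least one
    detection box (x1, y1, x2, y2)."""
    def inside(p, b):
        return b[0] <= p[0] <= b[2] and b[1] <= p[1] <= b[3]
    if not storm_points:
        return 0.0
    hits = sum(any(inside(p, b) for b in detection_boxes)
               for p in storm_points)
    return hits / len(storm_points)
```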
The comma-shaped cloud detector outputs the probability $p$ of each bounding box from the AdaBoost classifier. If $p \geq p_0$, the bounding box is reported as containing a comma-shaped cloud. We recommend setting $p_0$ in [0.50, 0.52] and provide three reference values, $p_0$ = 0.50, 0.51, and 0.52. The number of detections per image, as well as the missing rates of comma-shaped clouds and storms corresponding to each $p_0$, are given in the right part of Fig.~\ref{fig:missing-detect}. For a user who desires a high recall rate, {\it e.g.}, a meteorologist, we recommend setting $p_0$ = 0.50: the recall rate is 99\% for comma-shaped clouds and 64\% for storms, and our method outputs an average of 7.76 bounding boxes per satellite image. Other environmental data, such as wind speed and pressure, would need to be incorporated into the system to filter the bounding boxes. For a user who desires accurate detections, we recommend setting $p_0$ = 0.52: the recall rate of comma-shaped clouds is 80\%, and our detector outputs an average of 1.09 bounding boxes per satellite image. The recall rate remains reasonable, and the user will not receive many incorrectly reported comma-shaped clouds.
The setting $p_0\in$ [0.50, 0.52] gives the best performance for several reasons. When $p_0$ drops below 0.50, the missing rate of comma-shaped clouds remains almost unchanged ($\sim$1\%), yet more than 8 bounding boxes per image must be checked to find the missing comma-shaped clouds, which consumes too much human effort. When $p_0$ exceeds 0.52, the missing rate of comma-shaped clouds rises above 20\% and the missing rate of storms rises above 77\%. Since missing a storm can cause severe loss, $p_0 > 0.52$ cannot provide a recall rate high enough for storm detection purposes.
\begin{figure}[!tb]
\begin{minipage}{0.6\linewidth}
\includegraphics[width=\textwidth,trim=0.3cm 0cm 0.3cm 0cm,clip]{images/missingPerDetection.pdf}
\end{minipage}
\begin{minipage}{0.39\linewidth}
\begin{center}
\footnotesize
\begin{tabular}{p{4.1em}|p{0.7em}p{0.8em}p{0.8em}}
\multicolumn{4}{c}{REFERENCE POINTS}\\
\hline
Marker & $C_1$ & $C_2$ & $C_3$ \\
\hline
Cutoff & 0.52 & 0.51 & 0.50\\
\hline
Detections & && \\
Per Image & 0.04 & 0.59 & 0.89\\
(log 10) & && \\
\hline
Missing & && \\
Rate (Clouds) & 0.20 & 0.03 & 0.01\\
\hline
Missing & && \\
Rate (Storms) & 0.77 & 0.52 & 0.36\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\caption{Evaluation curves of our comma-shaped clouds detection method. \textit{Left}: Missing rate curve with Detections. \textit{Right}: Some reference cutoff values on the curve.}
\label{fig:missing-detect}
\end{figure}
Though our comma-shaped cloud detector can effectively cover most labeled comma-shaped clouds, it still misses at least 20\% of the storms in the record. Among the different types of storms, severe weather events on the ocean\footnote{Here severe weather events on the ocean include marine thunderstorm wind, marine high wind, and marine strong wind.} are more likely to be detected by the algorithm than other types of severe weather events. At the point of the largest recall, our method detects approximately 85\% of severe weather events on the ocean versus 75\% on land. Our detector misses such events because severe weather does not always happen near the low center of the comma-shaped cloud. According to~\cite{carlson1980airflow} and~\cite{stewart1989winter}, the exact cold-front and warm-front streamlines cannot be accurately measured from satellite images. Hence, comma-shaped clouds are simply an indicator of storms, and further investigation of their geographical relationship is necessary to improve our method.
\subsection{Storm Detection Ability}
We compare the storm identification ability of our algorithm with other baseline methods that use satellite images.
The first baseline method comes from~\cite{morel2002climatology} and~\cite{fiolleau2013algorithm}, and the second baseline improves the first in~\cite{lakshmanan2003multiscale}. We call them~\textit{Intensity Threshold Detection} and~\textit{Spatial-Intensity Threshold Detection} hereafter.
The Intensity Threshold Detection uses a fixed reflectivity level of radar or infrared satellite data to identify a storm: a continuous region with a reflectivity level larger than a certain threshold $I_0$ is defined as a storm-attacked area. Spatial-Intensity Threshold Detection improves on this by changing the cost function to a weighted sum:
\begin{equation*}
E = \sum_{i = 1}^{n} \lambda d_m\left(x_i\right) + \left(1 - \lambda\right) d_c\left(x_i\right),
\end{equation*}
where $X = \left\{x_i\right\}_{i = 1}^{n}$ is the point set representing a cloud patch, $d_m$ is the spatial distance within the cluster, and $d_c$ is the difference between the pixel brightness $I\left(x_i\right)$ and the average brightness of the cloud $X$.
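A minimal numerical sketch of this cost is given below; our reading of $d_m$ as the distance of $x_i$ to the patch centroid, and of $d_c$ as the absolute deviation of $I(x_i)$ from the patch mean, is an assumption about the baseline's exact definitions:

```python
import numpy as np

def spatial_intensity_cost(points, intensities, lam):
    """E = sum_i [lam * d_m(x_i) + (1 - lam) * d_c(x_i)] for one cloud
    patch X: d_m taken as distance to the patch centroid, d_c as the
    absolute deviation of the pixel brightness from the patch mean."""
    d_m = np.linalg.norm(points - points.mean(axis=0), axis=1)
    d_c = np.abs(intensities - intensities.mean())
    return float(np.sum(lam * d_m + (1.0 - lam) * d_c))
```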
We make some necessary changes to the baselines so that the methods are comparable. First, we explore different values of $I_0$, because we use different channels and satellites from those of the baselines. In addition, the brightness distribution of our images is changed through histogram equalization in the preprocessing stage, so we cannot simply adopt the $I_0$ values used in the baselines.
Second, we convert the irregular detected regions to square bounding boxes and use the same criteria to define positive and negative detections.
We adopt the idea in~\cite{lakshmanan2009efficient} and view these pixel distributions as a 2-D Gaussian mixture model (GMM). We use the Gaussian means and the larger eigenvalue of each Gaussian covariance matrix to approximate a bounding box center and a bounding box size, respectively. The number of Gaussian components and other GMM parameters are estimated using the mean Silhouette Coefficient~\cite{rousseeuw1987silhouettes} and the k-means++ method.
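The box approximation from a fitted Gaussian component can be sketched as below; the scale factor relating the eigenvalue to the box half-size is our own illustrative choice, not a value given in the text:

```python
import numpy as np

def gmm_component_to_box(mean, cov, scale=2.0):
    """Approximate a square bounding box from one 2-D Gaussian component:
    centered on the mean, half-size = scale * sqrt(largest eigenvalue of
    the covariance matrix). The scale factor is an illustrative choice."""
    half = scale * np.sqrt(np.linalg.eigvalsh(cov).max())
    cx, cy = mean
    return (cx - half, cy - half, cx + half, cy + half)
```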
\begin{figure}[tbp!]
\begin{minipage}{0.63\linewidth}
\centering
\includegraphics[width=\textwidth,trim=0cm 0.25cm 0cm 0cm,clip]{images/baseline.pdf}
\end{minipage}
\begin{minipage}{0.36\linewidth}
\begin{center}
\footnotesize
\begin{tabular}{c|c}
\multicolumn{2}{c}{MAXIMUM RECALL}\\
\multicolumn{2}{c}{OF STORMS}\\
\hline
Method & Recall\\
\hline
Intensity & 0.41 \\
\hline
Spatial- & 0.44 \\
Intensity & \\
\hline
\textbf{Our} & \textbf{0.79} \\
\textbf{Method} & \\
\hline
\end{tabular}
\end{center}
\end{minipage}
\caption{Comparison of the baseline methods. \textit{Left}:
Part of Recall-Precision curve of the two baseline storm detection methods and our method. \textit{Right}: The maximum recall rate they can reach. Here Intensity = Intensity Threshold Detection and Spatial-Intensity = Spatial-Intensity Threshold Detection.}
\label{fig:baseline}
\end{figure}
The partial recall-precision curves in Fig.~\ref{fig:baseline} show that our method outperforms both Intensity Threshold Detection and Spatial-Intensity Threshold Detection when the recall is less than 0.40. We provide only partial recall-precision curves because of the limited range of $I_0$ we could explore within the available time and computational resources. In our experiment, we vary the parameter $I_0$ in Intensity Threshold Detection from 210 to 230. When $I_0$ exceeds 230, very few pixels are selected, so the method cannot achieve a high recall rate. When $I_0$ falls below 210, many pixels representing low clouds are also included in the computation, which slows it down considerably ($\sim$\,5 minutes per image); consequently, we do not explore those values. For Spatial-Intensity Threshold Detection, $I_0$ is fixed at 225, and $\lambda$ is a weight between 0 and 1. As $\lambda$ increases from 0 to 1, the recall first rises and then falls, while the precision changes very little. The curve for Spatial-Intensity Threshold Detection reaches its highest recall of 43.66\% when $\lambda$ approaches 0.7.
Compared with the two baselines that detect storm events directly, our proposed method has the following strengths: (1) Our method reaches a maximum recall of 79.41\%, almost twice that of the baseline methods. Due to computational speed issues, we could not raise the recall rate of the two baseline methods above 45\%, which limits their use in practical storm detection. Our method, however, reaches a high recall rate without heavy computational cost.
(2) Our method outperforms these two baseline methods in the precision rate of Fig.~\ref{fig:baseline}. Compared with these two methods that mostly rely on pixel-wise intensity, our method comprehensively combines the shape and motion information of clouds in the system, leading to a better performance in storm detection.
None of the three curves in Fig.~\ref{fig:baseline} has a high precision in detecting storm events, because this task is difficult, especially without the help of other environmental data.
In addition, our method aims to detect comma-shaped clouds, rather than to forecast storm locations directly. Sometimes severe storms are reported later than the appearance of comma-shaped clouds. Such cases are not counted in the precision rate of Fig.~\ref{fig:baseline}. In those cases, our method can provide useful and timely information or alerts to meteorologists who can make the determination.
Fig.~\ref{fig:baseline} also points out the importance of exploring the spatial relationship between comma-shaped clouds and storm observations, as the Spatial-Intensity Threshold Detection method slightly outperforms our method when the recall rate is higher than 0.40. According to the trend of the green curve, adding spatial information to the detection system can improve the performance to some extent. We will consider incorporating spatial information into our detection framework in the future.
\begin{figure*}[!bthp]\centering
\includegraphics[width=.32\textwidth]{images/case1_goes13_2012_159_004519.png}
\includegraphics[width=.32\textwidth]{images/case1_goes13_2012_159_051518.png}
\includegraphics[width=.32\textwidth]{images/case1_goes13_2012_159_194519.png}\\
\hfill ($a_1$) 00:45:19, June-07-2012 UTC \hfill ($a_2$) 05:15:18, June-07-2012 UTC \hfill ($a_3$) 19:45:19, June-07-2012 UTC \hfill\mbox{}\\
\includegraphics[width=.32\textwidth]{images/case2_goes13_2011_055_161517.png}
\includegraphics[width=.32\textwidth]{images/case2_goes13_2011_055_181519.png}
\includegraphics[width=.32\textwidth]{images/case2_goes13_2011_055_214519.png}\\
\hfill ($b_1$) 16:15:17, Feb-24-2011 UTC \hfill ($b_2$) 18:15:19, Feb-24-2011 UTC \hfill ($b_3$) 21:45:19, Feb-24-2011 UTC \hfill\mbox{}\\
\includegraphics[width=.32\textwidth]{images/case4_goes13_2011_002_211519.png}
\includegraphics[width=.32\textwidth]{images/case4_goes13_2011_003_021519.png}
\includegraphics[width=.32\textwidth]{images/case4_goes13_2011_003_054518.png}\\
\hfill ($c_1$) 21:15:19, Jan-02-2011 UTC \hfill ($c_2$) 02:15:19, Jan-03-2011 UTC \hfill ($c_3$) 05:45:18, Jan-03-2011 UTC \hfill\mbox{}\\
\caption{(a-c) Three detection cases. Green frames: our detection windows; blue frames: human-labeled windows; red dots: storms. Some images are blank in the bottom-left corner because that area is outside the satellite's coverage.}
\label{fig:casestudy}
\end{figure*}
\section{Case Studies}\label{sec:casestudy}
We present three case studies (a-c) in Fig.~\ref{fig:casestudy} to show the effectiveness, as well as some imperfections, of our proposed detection algorithm. The green bounding boxes are our detection outputs, the blue bounding boxes are comma-shaped clouds identified by meteorologists, and the red dots indicate severe weather events in the database~\cite{stormeventdatabase}. The detection threshold is set to 0.52 to ensure the precision of each output bounding box. The descriptions of these storms are summarized from~\cite{annual2011summary}.
In the first case (row 1), strong wind, hail, and thunderstorm wind developed in the central and northeastern parts of Colorado, western Nebraska, and eastern Wyoming late on June 6, 2012. The green bounding box in the top-left corner of Fig.~\ref{fig:casestudy} - ($a_1$) indicates this region. A dense cloud patch then moved eastward and covered eastern Wyoming, western South Dakota, and western Nebraska early on June 7, 2012. At that time, these states reported property damage of varying degrees. Later on June 7, 2012, the cloud patch became thinner as it moved northward into Montana and North Dakota, as shown in ($a_3$). Our method tracked that cloud patch well throughout, even though the cloud shape did not look like a typical comma. In comparison, human eyes did not recognize it as a typical comma shape because it had lost its head part.
Another region detected to contain a comma-shaped cloud in ($a_1$) was around North Texas and Oklahoma. At that time, hail and thunderstorm winds were reported, but the comma shape of the cloud had begun to disappear. Another comma-shaped cloud began to form over the Gulf of Mexico, as seen in the center of ($a_1$); at that point, the comma shape was too vague to be discovered either by our computer detector or by human eyes. As time passed ($a_2$), the comma shape emerged and was detected by both our detector and human eyes. The clouds gathered as severe weather events occurred in northern Florida in ($a_2$); according to the record, a person was injured and Florida experienced severe property damage at that time. Later that day, the large comma-shaped cloud split into two parts. The cloud patch in the west had an incomplete shape, which is difficult for human eyes to discover, as shown in ($a_3$); however, our method successfully detected this change. In addition, our method detected all the recorded severe weather events. This example indicates that our method is able to detect incomplete or atypical comma-shaped clouds, including when one comma-shaped cloud splits into two parts.
In the second case (row 2), a comma-shaped cloud appeared over Oklahoma, Kansas, and Missouri on Feb 24, 2011, when these areas were hit by winter weather, flooding, and thunderstorm winds. Our method detected the comma-shaped cloud half an hour earlier than human eyes were able to capture it, as shown in ($b_1$). Soon after ($b_2$), a clear comma-shaped cloud formed in the middle of the image, which was detected by both our method and human eyes. Red dots in ($b_2$) show the locations of some severe weather events that happened in Tennessee and Kentucky at that time. Since the cloud patch was large, it was difficult to include the whole patch in one bounding box. In that case, human annotators could correctly pick out the middle part of the wide cloud to label the comma-shaped cloud, whereas our detector used two bounding boxes to cover the cloud patch, as shown in ($b_2$) and ($b_3$). Because there was only one comma-shaped cloud, our method produced a false negative in that case.
In the third case (row 3), there were two comma-shaped cloud patches from late Jan 2, 2011 to early the next day, located in the left and right parts of the image, respectively. Our method detected the comma-shaped cloud over southern California one hour ({\it i.e.}, two consecutive satellite images) later than the human eye detected it. Importantly, however, our method detected the right-hand comma-shaped cloud over the North Atlantic Ocean one hour earlier than human eyes did. As indicated in the left part of ($c_2$) and ($c_3$), our output is highly overlapped with the labeled regions. Our method was able to recognize the comma-shaped cloud when the cloud had just begun to form in ($c_2$). At that early stage, human eyes could not recognize its shape, but our method was able to capture the vague shape and motion information to make a correct detection.
To summarize these case studies, our method can capture most human-labeled comma-shaped clouds. Moreover, it can detect some comma-shaped clouds even before they are fully formed, and its detections are sometimes earlier than human recognition. These properties indicate that using our method to complement human detection in practical weather forecasting may be beneficial. On the other hand, our detection scheme has a weakness, as indicated in case (b): it has difficulty outputting the correct position of spatially wide comma-shaped clouds.
\section{Conclusions}\label{sec:conclude}
We propose a new computational framework to extract the shape-aware cloud movements that relate to storms. Our algorithm automatically selects areas of interest at suitable scales and then tracks the evolution of these selected areas. Compared with human annotators' performance, the computational algorithm provides an objective (yet agnostic) standard for defining the comma shape. The system can assist meteorologists in their daily case-by-case forecasting tasks.
Shape and motion are two visual cues frequently used by meteorologists in interpreting comma-shaped clouds. Our framework includes both shape and motion features, based on the cloud segmentation map and the correlation with a motion-prior map. Our experiments also validate the usefulness of these two visual features in detecting comma-shaped clouds. Further, considering the high variability of cloud appearance in satellite images due to seasonal, geographical, and temporal factors, we take a learning-based approach to enhance robustness, which may also benefit from additional data.
Finally, the detection algorithm provides a top-down approach to exploring how severe weather events happen. Our future work will integrate this framework with other data sources and models to improve the reliability and timeliness of storm forecasting.
\section*{Acknowledgment}
We thank the anonymous reviewers and the associate editor for their constructive comments.
We thank the National Oceanic and Atmospheric Administration (NOAA) for providing the data. Yu Zhang, Yizhi Huang, and Jianyu Mao assisted with data collection, labeling, and visualization, respectively.
Haibo Zhang also provided feedback on the paper.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
A key limitation of using uranium dioxide as a nuclear reactor fuel is its intrinsically low thermal conductivity, which is known to decay significantly as a function of irradiation damage \cite{Ronchi2004,Ronchi2004a}. As the thermal conductivity falls, the radial temperature gradient across the fuel pin becomes more substantial, leading to enhanced cracking and deformation. Consequently, the decay in thermal conductivity not only reduces the reactor efficiency but also contributes to the degradation in structural integrity of the fuel; together, these effects ultimately act to limit the fuel lifetime.
The microscopic thermodynamic variables that drive the reduction in thermal conductivity have not yet been identified. Past work suggests that the thermal conductivity is dominated by phonons \cite{Ronchi1999} for temperatures below half the melting temperature (3120\,K), where the quasiharmonic approximation is valid and contributions from polarons are small \cite{Ronchi1999,Harding1989}. The phonon-dispersion curves for undamaged UO$_2$ were first obtained by the pioneering measurements of Dolling \textit{et al}. in 1965 \cite{Dolling1965}. More recently, and of great significance to the present study, Pang \textit{et al}. \cite{Pang2013, Pang2014} have revisited the phonons of UO$_2$ at 295 and 1200\,K and measured the phonon linewidths as well as the phonon energies. From these measurements they extracted the thermal conductivity of each phonon branch and showed that the totals are in excellent agreement with the thermal conductivity measured by conventional techniques in UO$_2$ \cite{Ronchi2004}. This proves that, at these lower temperatures, the phonons are indeed the important transporters of heat, as expected. Furthermore, Pang \textit{et al}. showed that at room temperature the branch-specific thermal conductivities are roughly divided into four (almost equal) contributions from the transverse acoustic (TA), longitudinal acoustic (LA), transverse optic (TO$_1$), and longitudinal optic (LO$_1$) modes. The strong involvement of the optic modes is unexpected and not predicted by theory \cite{Pang2013}.
Given this good agreement, it would seem an obvious next step to perform the same experiments on an irradiated single crystal of UO$_2$, and large (up to 100\,g) single crystals of UO$_2$ exist. However, irradiation of a crystal in a reactor results in a dose rate of $>$\,100\,R/hr (mostly from short-lived fission products), which, because of the danger to personnel, would make the crystal impossible to examine with any instrument at a neutron user facility. An alternative is to damage the crystal with charged particles from an accelerator, but such radiation does not penetrate more than several $\mu$m into a bulk crystal, so the damage would be inhomogeneous. We have overcome these difficulties by uniformly damaging thin epitaxial films of UO$_2$ with accelerated charged particles and then examining the phonons with inelastic X-ray scattering (IXS) in grazing incidence. There are clearly at least two significant challenges to be faced: the first is to choose a suitable amount of damage so that some effect may be observed, and the second is to develop the technology for measuring phonons from thin films with sufficient precision to determine the linewidths.
\section{Experimental Details and Results}
\subsection{Production and characterisation of UO$_2$ epitaxial films}
A few examples of partially epitaxial UO$_2$ films can be found in the literature from before 2000, but the first major effort was undertaken at Los Alamos National Laboratory using a polymer-assisted deposition method \cite{Scott2014}. The use of DC magnetron sputtering to produce epitaxial films was first reported by Strehle \textit{et al}. \cite{Strehle2012}, and such epitaxial films were fabricated by Bao \textit{et al}. at both Karlsruhe and Oxford/Bristol at about the same time \cite{Bao2013}. More details of the growth and characterization of these films can be found in Ref.~\cite{Rennie2017}. Much thinner epitaxial films are used for so-called dissolution studies \cite{Springell2015}.
The epitaxial films of (001) UO$_2$ were produced via DC magnetron sputtering at Bristol University on substrates of (001) SrTiO$_3$ obtained commercially from MTI Corp. This system has a $\sqrt{2}$ epitaxial match, achieved through a 45$^{\circ}$ in-plane rotation, giving a lattice mismatch of 0.97\%, as shown in Fig.~\ref{fig:fig1}. An argon pressure of 7\,$\times$\,10$^{-3}$\,mbar and an oxygen partial pressure of 2\,$\times$\,10$^{-5}$\,mbar were used to sputter the epitaxial films at 1000$^{\circ}$C, giving a deposition rate of 0.2\,nm/sec.
\begin{figure}
\centering
\includegraphics[height=0.25\textheight]{latticematch.png}
\caption{Epitaxial (001) UO$_2$ thin films were deposited on [001] SrTiO$_3$. As depicted, the UO$_2$:SrTiO$_3$ system has a $\sqrt{2}$ epitaxial relation with a lattice mismatch of 0.97\%. Figure created using the VESTA software \cite{Momma2011}. \label{fig:fig1}}
\end{figure}
\subsection{Radiation damage in thin epitaxial UO$_2$ films}
One of the most difficult parameters to determine was the amount and type of radiation damage to produce in the films. If the damage is too extensive and the lattice itself is partially destroyed, then clearly we are unable to measure phonon spectra along specific crystal directions; on the other hand, too little damage risks observing only small changes, or none at all, in the phonons.
An important aspect is the uniformity of the damage, both along the growth direction and across the surface of the film, since the grazing-incidence IXS beam casts a sizeable footprint of several mm on the film. In Fig.~\ref{fig:fig2} we show the calculated damage profile for irradiation with light He$^{2+}$ ions, which clearly shows two aspects: (1) the ``Bragg peak'', i.e. the most damaged region where the high-energy ions eventually stop, lies deep in the substrate (Fig.~\ref{fig:fig2}), and (2) over a film thickness of 500\,nm the damage distribution is homogeneous (Fig.~\ref{fig:fig2}, inset). During the irradiation, the ion beam was rastered so that the entire sample was damaged uniformly in the xy plane. Light-ion (He$^{2+}$) irradiation is less likely to cause significant displacements of the heavier uranium atoms than of the lighter oxygen atoms, owing to the smaller momentum transfer.
\begin{figure*}
\centering
\includegraphics[height=0.24\textheight]{SRIMnew.png}
\caption{The irradiation damage profiles calculated using the monolayer method in SRIM \cite{Ziegler2008} for the irradiation of a 0.5\,$\mu$m UO$_2$ layer and bulk UO$_2$ sample with 2.1\,MeV He$^{2+}$ ions, using displacement energies of 20 eV and 40 eV for oxygen and uranium respectively. The dashed yellow lines represent the peak of the damage, located at 3.76\,$\mu$m, i.e. in the substrate. \label{fig:fig2}}
\end{figure*}
Irradiation experiments were conducted at the Dalton Cumbrian Facility (DCF) using a 5\,MV Tandem Pelletron Ion Accelerator \cite{Wady2015}. Samples were damaged by 2.1\,MeV He$^{2+}$ ions generated by the TORVIS source; the SRIM-calculated damage was 0.15\,dpa (displacements per atom). The flux was 1.8\,$\times$\,10$^{12}$ He$^{2+}$/cm$^2$/sec, and the accumulated dose was 6.7\,$\times$\,10$^{16}$ He$^{2+}$/cm$^2$.
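As a consistency check on the irradiation parameters quoted above, the implied total irradiation time follows directly from dividing the accumulated dose by the flux (a minimal sketch; the time itself is our inference and is not stated in the text):

```python
# Consistency check: irradiation time implied by the quoted flux and dose.
flux = 1.8e12          # He2+ ions / cm^2 / s
dose = 6.7e16          # accumulated He2+ ions / cm^2

time_s = dose / flux   # total irradiation time in seconds
time_h = time_s / 3600.0

print(f"Irradiation time: {time_s:.3g} s = {time_h:.1f} h")
```

This corresponds to roughly ten hours of continuous irradiation at the stated flux.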
Two identical samples were made, each with a cross section of 5\,$\times$\,5\,mm$^2$ and a film thickness of 300\,nm. One of these (the pristine sample) was not irradiated, and throughout the study a comparison was made between the phonons deduced from the pristine and irradiated samples.
The thin films were characterized through measurements of the (002) UO$_2$ XRD peak. These data are shown in Fig. \ref{fig:fig3}. There is a sizeable change in the lattice parameter corresponding to an expansion of $\Delta$a/a\,=\,+\,0.56(2)\,\%. Since the full-widths at half maximum (FWHM) are almost the same for the two films in both the longitudinal and transverse directions, we can conclude that the damage is uniform across the 300\,nm of the film, and the crystallinity remains almost intact. Other tests were performed on off-specular reflections, and, as expected, the UO$_2$ films were fully epitaxial.
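The measured lattice expansion maps onto the shift of the (002) peak through Bragg's law, $\lambda = 2d\sin\theta$. The sketch below illustrates this mapping; the X-ray wavelength (Cu K$\alpha$) and the room-temperature UO$_2$ lattice parameter are assumed values for illustration, not taken from the text:

```python
from math import asin, degrees

# Hypothetical sketch: relate the measured lattice expansion to the (002)
# peak shift via Bragg's law, lambda = 2 d sin(theta).
# Assumed values (not given in the text): Cu K-alpha wavelength and the
# room-temperature UO2 lattice parameter.
wavelength = 1.5406          # Angstrom, Cu K-alpha (assumed)
a_pristine = 5.471           # Angstrom, UO2 lattice parameter (assumed)
da_over_a = 0.0056           # measured expansion, +0.56 %

def two_theta_002(a):
    d = a / 2.0              # d-spacing of the (002) planes
    return 2.0 * degrees(asin(wavelength / (2.0 * d)))

tt_pris = two_theta_002(a_pristine)
tt_irr = two_theta_002(a_pristine * (1.0 + da_over_a))
shift = tt_irr - tt_pris
print(f"(002) shift: {shift:+.3f} deg (peak moves to lower angle)")
```

Under these assumptions the +0.56\,\% expansion corresponds to a shift of the (002) reflection to lower 2$\theta$ by roughly 0.2$^{\circ}$.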
\begin{figure*}
\centering
\includegraphics[height=0.27\textheight]{SN794_XRD_updated2.png}
\caption{Comparative longitudinal (i.e. $\theta$-2$\theta$) (left) and transverse ($\theta$ only) (right) diffraction profiles of the (002) reflection from the pristine (open blue circles) and irradiated films (open green circles). The shift in the longitudinal scans corresponds to an increased lattice parameter $\Delta$a/a\,=\,+\,0.56(2)\,\% for the irradiated film. There is also a small broadening of the FWHM in the transverse scans for the irradiated films. \label{fig:fig3}}
\end{figure*}
\subsection{Measuring phonons by grazing-incidence inelastic X-ray scattering}
The area of momentum space and energy covered in our experiments is shown in Fig. \ref{fig:fig4} as a yellow box superimposed on the results of Ref. \cite{Pang2013} from a bulk stoichiometric sample of UO$_2$ measured by inelastic neutron scattering (INS). This region omits: (1) any modes in the [$\zeta \zeta \zeta$] direction, and (2) any modes with energies above 30\,meV. The first omission is related to the use of a SrTiO$_3$ substrate, which allows for the deposition of (001)-oriented UO$_2$. Exploration of the [$\zeta \zeta \zeta$] direction would be possible with a (110)-oriented UO$_2$ film; however, while such films can be grown on (110) YSZ (yttria-stabilized zirconia) substrates \cite{Strehle2012}, defects within YSZ are known to give rise to significant diffuse scattering. The second omission is related to the challenge of seeing optic modes with IXS in any heavy-metal oxide. This is demonstrated by recent work \cite{Maldonado2016} on NpO$_2$, where a small single crystal of 1.2\,mg was successfully used (in conventional reflection geometry, rather than the grazing-incidence geometry of the present work) to determine the phonons at room temperature. In that study it was not possible to measure optic modes, as their intensity (arising mainly from oxygen displacements) is at least a factor of 100 weaker than that of the acoustic modes. Furthermore, the important LO$_1$ mode, which Pang \textit{et al}. \cite{Pang2013} show carries $\sim$\,1/3 of the heat in UO$_2$, could not be observed with IXS in NpO$_2$ (see Fig. 3 of Ref. \cite{Maldonado2016} and surrounding discussion); this mode is known to have no contribution from the metal atoms \cite{Elliott1967}.
\begin{figure*}
\centering
\includegraphics[height=0.4\textheight]{pang.png}
\caption{The yellow box highlights the region of the phonon dispersion explored during the present study of thin films. Data of the full dispersion curves are taken from recent neutron work on bulk unirradiated UO$_2$, as measured by Pang \textit{et al}. \cite{Pang2013}. Measurements were taken at 295 K (blue open circles) and at 1200 K (red solid symbols), where the circles and triangles represent the transverse and longitudinal phonon modes, respectively. The solid and dashed lines are theory – see Ref. \cite{Pang2013}. \label{fig:fig4}}
\end{figure*}
The experiments to measure the phonons from the thin films at room temperature were performed on the ID28 spectrometer at the European Synchrotron Radiation Facility \cite{ESRF}. Grazing-incidence IXS was conducted with a Kirkpatrick-Baez mirror, together with a Be focusing lens, to produce a focused beam of 15\,$\mu$m\,(vertical)\,$\times$\,30\,$\mu$m\,(horizontal) with an inclination angle of 0.2\,$^{\circ}$ out of the horizontal plane. The incident energy was 17.794\,keV, with an instrumental resolution of 3\,meV. This energy, determined by the Si(999) reflections in the analyzers, is just above the \textit{U\,L$_3$} resonant energy of 17.166\,keV. This increases the absorption of the incident beam, and the 1/e penetration of a photon beam of this energy in UO$_2$ is 10.4\,$\mu$m. The vertical spot size of 15\,$\mu$m implies, at an incident angle of 0.2\,$^{\circ}$, a footprint of $\sim$\,2.5\,mm. Much of the beam intensity is lost to absorption. The critical angle of UO$_2$ for X-rays of this energy is 0.18\,$^{\circ}$, reducing the interaction further. Given the absorption, the penetration depth of the X-ray beam will be $\sim$\,15\,nm at this angle.
Two experimental efforts were made on ID28. In the first, a 450\,nm film was used, and in the second a 300\,nm film. To increase the strength of the signal in the second attempt, the film was tilted an additional 0.5\,$^{\circ}$ around an axis in the horizontal plane. This results in a small L component in the observed phonon modes, i.e. they do not lie completely in the horizontal (HK0) plane; the L component is indicated by $\delta$, with 0.03\,$<$\,$\delta$\,$<$\,0.15. However, this small tilt allows a deeper penetration of $\sim$\,150\,nm, and a concomitant order-of-magnitude increase in the phonon signals. Given the small penetration depths, no evidence of the substrate was seen. In Fig. \ref{fig:fig5} we show a selection of data.
\begin{figure*}
\centering
\includegraphics[height=0.65\textheight]{Fig_6.png}
\caption{We show (upper) phonons measured from the TA (100) branch at positions (4\,0.8\,$\delta$) and (4\,1.0\,$\delta$) and (lower) those from the LA (110) branch at positions (2.8\,2.8\,$\delta$) and (3.0\,3.0\,$\delta$). In each case blue denotes the pristine sample and green the irradiated sample. The fits use a Gaussian resolution of 3\,meV convoluted with a central (resolution-limited) Lorentzian peak together with a Damped Harmonic Oscillator (DHO) representing both the Stokes and anti-Stokes phonons, weighted by the Bose factors, to reproduce the experimental curves. The width of the DHO function then gives the experimentally deduced phonon linewidth. The data have been normalized differently so that they do not overlap in the figures; the higher-intensity data are from the pristine film. \label{fig:fig5}}
\end{figure*}
Figure \ref{fig:fig5} shows both the measured Stokes and anti-Stokes phonons; measuring both gives a better determination of the absolute energy of the excitation. The central (elastic) line arises from thermal diffuse scattering and defects, and it is noticeably stronger (compared to the phonons) in the irradiated samples (green) than in the pristine samples (blue), as would be anticipated.
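The lineshape model used in the fits can be sketched as follows. This is our reconstruction of the model as described above (a DHO with detailed-balance weighting, plus a resolution-limited elastic line, convoluted with the 3\,meV Gaussian resolution), not the actual fitting code, and the parameter values are illustrative:

```python
import numpy as np

kB_T = 25.43  # meV, k_B * T at T = 295 K

def dho_bose(E, E0, gamma, amp):
    """Damped-harmonic-oscillator response times the detailed-balance factor.

    chi'' is odd in E, so multiplying by n(E)+1 produces both the Stokes
    (E > 0) peak and the weaker anti-Stokes (E < 0) peak."""
    chi2 = gamma * E / ((E**2 - E0**2)**2 + (gamma * E)**2)
    return amp * chi2 / (1.0 - np.exp(-E / kB_T))

def model(E, E0, gamma, amp, elastic, res_fwhm=3.0):
    """DHO plus resolution-limited elastic line, convoluted with resolution."""
    dE = E[1] - E[0]
    sigma = res_fwhm / 2.3548                      # Gaussian FWHM -> sigma
    x = (np.arange(len(E)) - len(E) // 2) * dE
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                         # unit-area resolution kernel
    spec = dho_bose(E, E0, gamma, amp) + elastic * (np.abs(E) < dE)
    return np.convolve(spec, kernel, mode="same")

# Energy grid chosen to avoid E = 0 exactly (Bose-factor singularity);
# E0 and gamma here are illustrative TA-phonon values.
E = np.linspace(-25.0, 25.0, 1000)
y = model(E, E0=13.2, gamma=1.7, amp=1.0, elastic=0.5)
```

The Bose weighting makes the Stokes peak stronger than the anti-Stokes peak, as visible in the measured spectra.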
From these analyses we determine the energy and linewidth of the phonons. Figure \ref{fig:fig6} shows the phonon energies that we have measured with the thin films at 295\,K and compares them to those reported by Pang\,\textit{et\,al}. \cite{Pang2013}.
\begin{figure*}
\centering
\includegraphics[height=0.35\textheight]{Fig_5.png}
\caption{The energies of the transverse (open circles) and longitudinal (open triangles) phonons of UO$_2$ along the (00$\zeta$) and ($\zeta\zeta$0) directions as measured via IXS for pristine (blue) and irradiated (green) thin films, in comparison with bulk data from Ref. \cite{Pang2013}, where these data (obtained by INS) have been fitted to give smooth curves represented by the solid (TA) and dashed (LA) lines. The TA and LA phonon energies at the X-point should be degenerate; the observed differences in the thin-film data are a consequence of the small L component introduced by tilting the film out of the horizontal plane. In the ($\zeta\zeta$0) direction a different TA phonon was measured compared to the INS work, due to different orientations; see text. \label{fig:fig6}}
\end{figure*}
A comparison between the data shown in Figs.\,\ref{fig:fig4} and \ref{fig:fig6} shows that in the ($\zeta\zeta$0) direction a different TA phonon has been measured in the films compared with the bulk spectra shown in Fig.\,\ref{fig:fig4}. This is because the standard orientation, as used to produce Fig.\,\ref{fig:fig4}\,\cite{Pang2013, Pang2014}, is with [1$\bar{1}$0] vertical, whereas our film (because of the epitaxy with the SrTiO$_3$ substrate) has [001] vertical. The TA (110) phonons in Fig.\,\ref{fig:fig4} have polarization [00u]; in our case this mode cannot be observed, and we instead measure a TA (110) phonon with polarization [u, -u, 0] (where u is the small atomic displacement from the equilibrium atomic position). In our geometry all the atomic displacements in the measured phonons lie in the plane of the film. This is because in grazing incidence the scattering vector \textbf{Q} lies very close to the plane (within 2\,$^{\circ}$), and phonons are observed only when the product \textbf{Q\,$\cdot$\,u} is non-zero. This makes the measurements insensitive to atomic vibrations along the film growth axis. The LA modes are the same in both our work and that of Pang\,\textit{et\,al}. \cite{Pang2013, Pang2014}.
Figure\,\ref{fig:fig6} shows no significant differences in the phonon energies between the pristine and the irradiated films, and both results agree within statistics with the energies determined by Pang\,\textit{et\,al}. \cite{Pang2013}, measured by INS.
The lack of any difference in the phonon energies between the pristine and irradiated films is not surprising, since the lattice parameter has been changed by only 0.6\,\% and the crystallinity of the sample is still preserved. Based on the well-known thermal expansion of unirradiated UO$_2$ \cite{Martin1987} the lattice expands by 0.85\,\% between 295 and 1200\,K, and we can see in Fig. \ref{fig:fig4} that this expansion makes little difference to the energies of the acoustic modes.
However, the more likely change would come in the linewidths; an increase of the linewidths would translate to a decrease in the phonon lifetimes and a concomitant decrease of thermal conductivity \cite{Pang2013, Pang2014}.
\subsection{Analysis of the phonon linewidths}
The effects of radiation damage on the phonon lifetimes appear in the FWHM: the lifetime is the inverse of the linewidth, and so relates directly to the thermal conductivity. It is important in this respect to compare our values with those deduced from bulk UO$_2$ in Tables A1 and A2 of Ref. \cite{Pang2014}. We tabulate all our measured energies and deduced linewidths in the Appendix. For graphical representation we show only the TA (100) mode; see Fig.\,\ref{fig:fig7}. The TA (110) mode is not the same as that measured by Pang\,\textit{et\,al}. \cite{Pang2013,Pang2014}, and the LA modes are weaker than the TA, which reduces the statistics for the LA modes; however, similar trends are seen in all acoustic modes. As Fig.\,\ref{fig:fig5} shows, the FWHM of the low-energy phonons is nontrivial to fit, as there is an appreciable contribution to their intensity from the central elastic line. Therefore the FWHMs of the lower-energy phonons ($<$\,$\sim$\,5\,meV) are omitted from Fig.\,\ref{fig:fig7}.
\begin{figure*}
\centering
\includegraphics[height=0.38\textheight]{Fig_7.png}
\caption{Values of the FWHM deduced from analysis of the phonons measured in the TA [100] direction. The values tabulated in Ref. \cite{Pang2014} by INS are shown as blue (295\,K) and red (1200\,K) filled circles. Our values using IXS are shown as open blue (pristine) and open green (irradiated) circles. Values determined from a small bulk UO$_2$ crystal at room temperature determined on the same X-ray instrument are shown as open black squares \cite{Paolasini}. \label{fig:fig7}}
\end{figure*}
\section{Discussion}
\subsection{Extent of radiation damage}
Previous work on radiation damage is broadly separated into two different aspects: damage in the reactor and damage as a product of self-irradiation of spent fuel. The first relates to damage of the fuel, and the formation of the ``high burn-up structure'' in nuclear fuels \cite{Rondinella2010}. The second concerns the long-term storage of irradiated nuclear fuel, and what happens to the fuel as a function of time \cite{Wiss2014}. In the storage case (Fig. 1 of Ref. \cite{Wiss2014}), damage of $\sim$\,0.15\,dpa corresponds approximately to the activity of moderately irradiated 60\,GWd/t (gigawatt-days per ton) fuel when it is removed from the reactor. This would correspond to $\sim$\,5\,$\times$\,10$^{17}$ alpha-decay events/g, and a swelling of 0.7\,\% (Table 1 of Ref. \cite{Wiss2014}), compared with the swelling of $\sim$\,0.6\,\% (i.e. $\Delta$a/a\,=\,0.56\,\%) that we produced. The thermal conductivity of such material would also have decreased by $\sim$\,50\,\% \cite{Lucuta1996}. The precise relationship between the lattice swelling, the concentration of defects, and the drop in thermal conductivity is still open for discussion \cite{Staicu2010}.
With alpha particles (He$^{2+}$ ions), the damage is not as extensive as that from the recoil of fission products when the fuel is inside the reactor; the alpha particles result primarily in displacements and interstitials associated with the oxygen sublattice. Additional inhomogeneity is caused by the implantation of He in the lattice. Simulating the effect of fission-product damage requires the use of heavier ions, such as Zn, Mo, Cd, Sn, Xe, I, Pb, Au and U \cite{Matzke1996, Matzke2000}. This suggests that dpa is not the only variable that should be considered. On the other hand, as Fig. \ref{fig:fig7} shows, we do observe a substantial increase in the phonon linewidths with the He$^{2+}$ irradiation we have performed, even for the acoustic modes. Presumably, the effects on the optic modes would be even greater if the main accommodation of damage is in the oxygen sublattice. Future experiments should look to observe the LO$_1$ mode, as it carries a fair proportion of the heat; however, measuring such optic modes represents a significant experimental challenge.
The swelling of $\sim$\,0.6\,\% observed along the growth direction in this radiation-damage experiment, at a dose of 0.15\,dpa, is in good agreement with that produced in the top layer of bulk UO$_2$ by Debelle \textit{et al}. \cite{Debelle2011}, who used 20\,keV alpha particles, i.e. an energy 100 times lower than used in our experiments. With lower-energy particles, more He atoms are implanted in the first few microns of the bulk UO$_2$ sample. Debelle \textit{et al}. show (Fig. 2 of Ref. \cite{Debelle2011}) that when the damage is increased to 3.3\,dpa the lattice loses definition and an average lattice swelling cannot be determined. At this high level of irradiation, small grain growth and polygonization can be induced. This is associated with the `high burn-up' structure, in which the grain sizes are reduced from microns to 100's of nm \cite{Rondinella2010}. Previous work suggests that the lattice becomes smaller when the `high burn-up' structure appears \cite{Spino2000}. At this stage of damage, where the microstructure has been significantly changed, the measurement of phonons by IXS would not be possible.
Further supporting evidence that the thermal conductivity should change in our films is provided by the study of Weisensee \textit{et al}. \cite{Weisensee2013}, in which 360\,nm epitaxial films of UO$_2$ were irradiated with 2\,MeV Ar$^+$ ions at room temperature, and the decrease in thermal conductivity (of about 50\,\%) was measured directly with a time-domain thermal reflectance technique. These UO$_2$ films were grown on YSZ substrates \cite{Strehle2012}, where it is known that the UO$_2$ is under 6.4\,\% compressive strain. The authors of Ref. \cite{Weisensee2013} do not report a lattice swelling, but from their Fig. 1 it may be estimated at no more than $\sim$\,0.28\,\%. Of course, the growth direction should not be directly affected by the lattice mismatch, but this aspect does cast doubt on whether this swelling is a meaningful measure, as indeed the authors themselves note. Weisensee \textit{et al}. have shown that with an irradiation dose of 10$^{15}$\,Ar$^+$/cm$^2$ the thermal conductivity drops by a factor of $\sim$\,2.5, and that it does not change for a further increase of the dose by a factor of 10, i.e. the decrease in thermal conductivity is already saturating by $\sim$\,10$^{15}$\,Ar$^+$/cm$^2$ (see Fig. 4 of Ref. \cite{Weisensee2013}). The dose used in the current experiment, 6.7\,$\times$\,10$^{16}$ He$^{2+}$/cm$^2$, is a factor of $\sim$\,7 more than that used by Weisensee \textit{et al}. \cite{Weisensee2013}, with ions (He$^{2+}$) lighter by a factor of 10 than Ar$^+$. A direct relationship between these two experiments is difficult to establish quantitatively, although qualitatively the comparison with Ref. \cite{Weisensee2013} suggests the thermal conductivity of our sample should drop by about 50\,\%.
In summary, as shown in Fig. 2 of Ref. \cite{Wiss2014}, our value of radiation damage (0.15\,dpa), coupled with a swelling of 0.6\,\%, does appear consistent with many current studies, and does suggest a reduction would be observed in the thermal conductivity. This agrees with the observations we have made in changes of the linewidths of the acoustic phonons.
Future irradiations with heavier particles, Xe for example \cite{Matzke2000, Usov2013, Popel2016}, causing displacements in the uranium sublattice and thus simulating the fission-recoil damage that occurs in actual irradiated fuels, might show interesting changes in the phonon spectra.
\subsection{Phonons and their linewidths}
These experiments have been able to measure the acoustic phonons from a 300\,nm epitaxial film of UO$_2$. The grazing-incidence technique has been refined so that the penetration depth into the sample is $\sim$\,150\,nm. This is a small volume of homogeneous damage, whose mass may be roughly estimated from the cross section of the beam multiplied by the attenuation length of 10\,$\mu$m; for UO$_2$ (density $\sim$\,10\,g\,cm$^{-3}$) this gives $\sim$\,100\,ng. For an inelastic neutron experiment, samples would have to be at least 50\,mm$^3$, i.e. $\sim$\,0.5\,g; the X-ray experiment thus offers an enormous increase in sensitivity over neutron inelastic scattering. The intensity is increased by the large photon cross section from the 92 electrons around the U nucleus, as well as by the greater X-ray flux. The optic modes, which primarily consist of motions of the oxygen atoms \cite{Pang2014, Maldonado2016}, could not be observed with IXS. This has been shown to be a limitation of the technique, as in the study of NpO$_2$, where a larger bulk sample was in the X-ray beam \cite{Maldonado2016}.
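The $\sim$\,100\,ng order of magnitude follows from simple geometry. A back-of-envelope sketch, here using the $\sim$\,2.5\,mm footprint and $\sim$\,150\,nm penetration depth quoted earlier (one of several roughly equivalent ways of arriving at the same order of magnitude):

```python
# Rough estimate of the mass of UO2 contributing to the IXS signal
# (our back-of-envelope version of the estimate in the text; the footprint
# and penetration values are taken from the beam-geometry discussion above).
footprint = 2.5e-1      # cm, beam footprint along the surface (~2.5 mm)
width = 30e-4           # cm, horizontal beam size (30 um)
depth = 150e-7          # cm, penetration depth (~150 nm)
density = 10.0          # g/cm^3, approximate density of UO2

mass_g = footprint * width * depth * density
print(f"Probed mass: {mass_g * 1e9:.0f} ng")   # on the order of 100 ng
```

Either geometric estimate (footprint times penetration depth, or beam cross section times attenuation length) gives a probed mass on the order of 100\,ng.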
To our knowledge there have been only two studies published using inelastic scattering to address the phonons of surfaces or in thin films.
(1) The work on NbSe$_2$, where the authors \cite{Murphy2005} used grazing-incidence X-ray scattering with the angle of incidence set either below or above the critical angle to observe the soft mode associated with the charge-density wave in this material. Their interest was primarily in the energy of these soft modes, and in their discussion of complicated inelastic spectra they make no comments on the phonon linewidths, although the probing distance for settings below the critical angle was only $\sim$\,4\,nm.
(2) Experiments on InN films \cite{Serrano2011} were also performed in grazing-incidence geometry, but a film of thickness 6.2\,$\mu$m was used, some 20 times thicker than our films, and the penetration length of the X-rays was $\sim$\,50\,$\mu$m, some 5 times more than the 1/e attenuation length in our case. These larger parameters may be the reason they observe resolution-limited phonon linewidths.
As shown in Fig. \ref{fig:fig7} and Table A1, the measured linewidths at 295\,K in the pristine sample of a 300\,nm UO$_2$ film are significantly larger (especially at higher energies) than those reported at 295\,K for the bulk by Pang \textit{et al}. \cite{Pang2013, Pang2014}. We can be more certain of this increase over the bulk values, as experiments have recently been performed \cite{Paolasini} on a small (bulk) single crystal of UO$_2$ on the same instrument (ID28), with the same experimental set-up giving the same resolution of 3\,meV. The linewidths (shown in Fig.\,\ref{fig:fig7}) deduced from these experiments \cite{Paolasini} are in good agreement with (or even smaller than) those deduced by Pang \textit{et al}. \cite{Pang2013, Pang2014}. Initially, it might be thought that these differences can be attributed to finite-size effects; however, this seems unlikely given that the penetration distance is 150\,nm, whereas the chemical unit cell is 0.547\,nm. We suggest instead that the difference is due to intrinsic strain in the pristine film causing a decrease in the phonon lifetime when the phonon wavelengths become short, i.e. at higher energies near the zone boundaries. This accounts for the slope of the linewidth-vs-energy curve (Fig. \ref{fig:fig7}) for the film data. The effect is strongly enhanced in the irradiated film, owing to greater strain and the presence of inhomogeneity caused by the He particles in the lattice. This aspect, as well as the changes in phonon lifetime due to near-surface effects, would be interesting to consider theoretically. Measurements on unirradiated bulk samples \cite{Pang2013, Pang2014} show a slope in the linewidth-vs-energy curves for most phonon branches (see Fig.\,2 of Ref. \cite{Pang2013} and Fig.\,7 of Ref. \cite{Pang2014}).
These experiments on the irradiated film (as shown in our Fig.\,\ref{fig:fig7} and Table A1) suggest that the slopes for irradiated samples may be greater than those for bulk samples at higher temperature, and it would be interesting to explore theoretically the effects of defects on the linewidth as a function of energy and momentum.
As discussed in Refs. \cite{Pang2013, Pang2014}, the thermal conductivity $\kappa_{\textbf{q}j}$ for phonons of wave vector \textbf{q} and branch $j$ is given by
\begin{equation}
\kappa_{\textbf{q}j} = (1/3)C_{\textbf{q}j}v^2_{\textbf{q}j}/\Gamma_{\textbf{q}j}
\end{equation}
where $C_{\textbf{q}j}$ is the phonon heat capacity and $v_{\textbf{q}j} = \partial E_{\textbf{q}j}/\partial \textbf{q}$ is the group velocity determined by the local dispersion gradients. The phonon mean free paths $\lambda_{\textbf{q}j} = v_{\textbf{q}j}\tau_{\textbf{q}j}$ depend on the measured phonon linewidths through the relaxation time $\tau_{\textbf{q}j} = 1/\Gamma_{\textbf{q}j}$.
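Since the phonon energies (and hence $C_{\textbf{q}j}$ and $v_{\textbf{q}j}$) are unchanged by the irradiation, the conductivity of each mode scales simply as $1/\Gamma_{\textbf{q}j}$. A minimal numerical sketch, using the TA (100) linewidth pairs from Table A1 (higher-energy points only), illustrates the resulting drop:

```python
# Since phonon energies (and hence C and v in the equation above) are
# unchanged by the irradiation, the per-mode thermal conductivity scales
# as 1/Gamma. Linewidth pairs (pristine, irradiated), in meV (2*Gamma),
# taken from Table A1 for the TA (100) branch.
linewidths = [
    (0.9, 1.3), (1.3, 2.2), (1.7, 3.0), (1.5, 3.1),
]

ratios = [irr / pris for pris, irr in linewidths]
avg_broadening = sum(ratios) / len(ratios)
kappa_ratio = 1.0 / avg_broadening  # per-mode kappa_irradiated / kappa_pristine

print(f"Average linewidth increase: x{avg_broadening:.2f}")
print(f"Implied per-mode conductivity ratio: {kappa_ratio:.2f}")
```

The average broadening of roughly a factor of two thus implies an approximate halving of the per-mode acoustic contribution to the conductivity.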
In our case, since the energies of the phonons have not changed between the pristine and irradiated samples, the change in thermal conductivity depends only on the change in the linewidths. A complete calculation of the thermal conductivity of the damaged films is not possible without a measure of the optic phonon linewidths. However, given that the acoustic modes contribute $\sim$\,50\,\% to the thermal conductivity \cite{Pang2013}, the doubling of the average linewidths (see Fig. \ref{fig:fig7}) halves the acoustic contribution; if the optic modes are similarly affected, this translates into a factor-of-two drop in the thermal conductivity of the damaged films. This is consistent with that found by Weisensee \textit{et al.} \cite{Weisensee2013} for similar films irradiated by Ar ions.
\subsection{Alternative methods for phonon measurements}
Raman scattering is an alternative method for obtaining phonon measurements, and such studies have observed the LO modes from a number of different samples \cite{Livneh2006, Livneh2008}. If we confine our attention to the low-energy modes, then Refs. \cite{Desgranges2012, Manara2003} show that the only really strong line is that at 445\,cm$^{-1}$ = 55.2\,meV, which corresponds to the TO$_2$ phonon line at the zone center ($\Gamma$) in Fig.\,\ref{fig:fig4}. As shown by Pang \textit{et al.} \cite{Pang2013} (see Fig. 4 of Ref. \cite{Pang2013}), this TO$_2$ mode contributes very little to the thermal conductivity. Therefore the Raman technique will not elucidate the role of damage in changes of thermal conductivity. Owing to the penetration depth of the laser light, Raman studies performed on irradiated UO$_2$ samples \cite{Manara2003, Guimbretiere2012,Guimbretiere2013,Desgranges2014} require $\sim$\,100\,$\mu$m of UO$_2$, far more than the 300\,nm film used in this experiment. Any backscattering Raman technique on a 300\,nm thin film will be sensitive to the substrate excitations, so a grazing-incidence geometry would be required.
Desgranges and colleagues have irradiated UO$_2$ samples with high-energy ($>$\,20\,MeV) He$^{2+}$ ions and are able to probe the Raman signal \textit{in situ} with a spatial resolution of about 2\,$\mu$m over the irradiated depth profile of 150\,$\mu$m. They observe extra peaks in the Raman signal, which they associate \cite{Desgranges2014} with defects due to a charge-separated state, in which U$^{3+}$ and U$^{5+}$ ions coexist in the irradiated material. They also observe \cite{Guimbretiere2012} the TO$_2$ phonon (T$_{2g}$ mode) dropping in frequency near the Bragg peak of the damage at 130\,$\mu$m, but this drop in energy is only 0.5\,cm$^{-1}$, i.e. $<$\,0.1\,meV, far smaller than our IXS resolution of 3\,meV. Comparison with these Raman papers \cite{Guimbretiere2012, Guimbretiere2013} is difficult owing to the lack of other characterisation information, such as a value for the lattice swelling $\Delta$a/a.
\section{Conclusions}
These experiments shed further light on the reasons for the drop in the thermal conductivity of UO$_2$ when it is irradiated in a reactor, which is a technical problem when using UO$_2$ fuel. We have shown that irradiation with He$^{2+}$ ions of a thin epitaxial film produces uniform damage (Fig. \ref{fig:fig2}) over the whole film thickness; characterisation shows the (homogeneous) damage is equivalent to 0.15\,dpa, with a swelling of the damaged UO$_2$ of $\Delta$a/a\,$\sim$\,0.6\,\% (Fig. \ref{fig:fig3}), i.e. a volume swelling of $\sim$\,2\,\%. (This assumes that the clamping by the SrTiO$_3$ substrate still allows the film to expand equally in all three directions.) There will also be inhomogeneous damage caused by the presence of He particles in the lattice, as well as by displaced oxygen atoms.
We have succeeded in measuring the acoustic phonons (Figs. \ref{fig:fig5} and \ref{fig:fig6}) by grazing-incidence X-ray scattering from thin films, where the estimated amount of material giving the phonons is $\sim$ 100 ng. The optic phonons, some of which are known to be important in carrying the heat in UO$_2$ \cite{Pang2013, Pang2014}, could not be measured as they are about 100 or more times weaker than the acoustic modes \cite{Maldonado2016}. The acoustic modes, both transverse and longitudinal, were measured with enough precision to analyse their respective widths (Fig. \ref{fig:fig7} and Table A1).
For both the pristine and irradiated films (Fig. \ref{fig:fig6}), the energies of the acoustic phonons are, within experimental error, consistent with those of the bulk. This is not surprising given that the UO$_2$ phonon spectrum changes only by a small amount on heating from 295 to 1200\,K \cite{Pang2013, Pang2014}, for which the volume expansion is comparable to that caused by the damage induced in this study through He$^{2+}$ irradiation.
A definite increase in the phonon linewidths, i.e. a decrease in the phonon lifetimes, is observed in the pristine film as compared to bulk values measured by both neutron and X-ray inelastic scattering. We attribute these changes to strain in the pristine films.
Changes in the linewidths between the pristine and damaged films are shown in a plot of deconvoluted FWHM vs energy for the TA (100) modes; see Fig. \ref{fig:fig7} and Table A1. All acoustic modes show significant effects, with the average effect being an increase in FWHM of about 50-100\,\%, depending on the energy. As the phonon energies do not change, and the group velocity of the phonons is the same for the pristine and damaged samples, this can be translated directly into a decrease in the contribution to the thermal conductivity from the low-energy acoustic modes for the damaged UO$_2$ thin film \cite{Pang2013, Weisensee2013}. The total thermal conductivity cannot be deduced from these experiments without measuring the higher-energy optic modes, and especially the LO$_1$ mode that carries so much of the heat in UO$_2$ \cite{Pang2013}. The measurement of the acoustic modes suggests that a significant decrease in the thermal conductivity of irradiated UO$_2$ is caused by the damage affecting the lifetime of the phonons, and not by other possible mechanisms such as increased grain boundaries and defects.
We hope this work prompts more careful theoretical analysis of the thermal conductivity of UO$_2$ in the future, as well as further experiments as the intensity of synchrotron X-ray sources increases. It would be interesting, for example, to irradiate films with heavier ions to see the additional effects on the phonon spectra.
Developing grazing-incidence Raman scattering capable of examining irradiated films of $<$\,1\,$\mu$m would also lead to further progress, and would be a great help in monitoring already-damaged films.
\begin{table}[]
\centering
\caption{Results of the analysis of the energies and full widths at half maximum (given here as 2$\Gamma$ of the DHO function used for the fitting, as in Ref. \cite{Pang2014}) of the acoustic phonons in the pristine and irradiated films. Notice that because of the 0.5$^{\circ}$ tilt of the film, the value of the L component is not strictly zero, so it is indicated here as $\delta$, which varies between 0.05 and 0.15 depending on the phonon.}
\label{my-label}
\begin{tabular}{p{1.6cm} p{1.6cm} p{1.6cm} p{1.6cm} p{1.6cm}}
\hline\hline
\multicolumn{5}{c}{TA} \\
\hline
\begin{tabular}[c]{@{}c@{}}Wave\\ vector\end{tabular} & \begin{tabular}[c]{@{}c@{}}E$_{PRIS}$\\ (meV)\end{tabular} & \begin{tabular}[c]{@{}c@{}}2$\Gamma_{PRIS}$\\ (meV)\end{tabular} & \begin{tabular}[c]{@{}c@{}}E$_{IRRAD}$\\ (meV)\end{tabular} & \begin{tabular}[c]{@{}c@{}}2$\Gamma_{IRRAD}$\\ (meV)\end{tabular} \\
\hline
(4 0.2 $\delta$) & 5.0(2) & - & 3.7(2) & - \\
(4 0.4 $\delta$) & 6.5(1) & 0.9(1) & 6.9(1) & 1.3(2) \\
(4 0.5 $\delta$) & - & - & 8.6(1) & 1.6(2) \\
(4 0.6 $\delta$) & 11.0(1) & 1.3(2) & 10.1(1) & 2.2(2) \\
(4 0.8 $\delta$) & 13.2(1) & 1.7(4) & 12.5(1) & 3.0(4) \\
(4 1.0 $\delta$) & 14.0(1) & 1.5(2) & 13.1(1) & 3.1(4) \\
& & & & \\
(2.2 1.8 $\delta$) & 6.7(1) & - & 6.5(1) & - \\
(2.4 1.6 $\delta$) & 10.3(1) & 0.8(1) & 10.1(1) & 1.0(3) \\
(2.6 1.4 $\delta$) & 11.9(1) & 1.0(1) & - & - \\
(2.8 1.2 $\delta$) & 12.6(1) & 1.5(1) & 12.9(1) & 1.9(4) \\
(3.0 1.0 $\delta$) & 12.7(1) & 1.1(1) & 13.3(2) & 2.8(5) \\
& & & & \\
\hline\hline
\multicolumn{5}{c}{LA} \\
\hline
\begin{tabular}[c]{@{}c@{}}Wave\\ vectors\end{tabular} & \begin{tabular}[c]{@{}c@{}}E$_{PRIS}$\\ (meV)\end{tabular} & \begin{tabular}[c]{@{}c@{}}2$\Gamma_{PRIS}$\\ (meV)\end{tabular} & \begin{tabular}[c]{@{}c@{}}E$_{IRRAD}$\\ (meV)\end{tabular} & \begin{tabular}[c]{@{}c@{}}2$\Gamma_{IRRAD}$\\ (meV)\end{tabular} \\
\hline
(4.2 0 $\delta$) & 9.7(2) & - & 8.7(1) & - \\
(4.4 0 $\delta$) & 15.6(1) & 1.1(2) & 15.8(1) & 1.9(4) \\
(4.6 0 $\delta$) & 20.4(1) & 1.3(2) & 20.9(2) & 1.9(4) \\
(4.8 0 $\delta$) & 23.3(1) & 2.0(3) & 24.0(2) & 3.0(6) \\
(5.0 0 $\delta$) & 24.3(1) & 2.5(3) & 25.0(2) & 3.4(8) \\
& & & & \\
(2.2 2.2 $\delta$) & 11.5(1) & - & 11.5(2) & 2.8(8) \\
(2.4 2.4 $\delta$) & 18.0(2) & 0.7(2) & 17.1(6) & 3.1(8) \\
(2.6 2.6 $\delta$) & 20.1(1) & 1.0(4) & 20.0(6) & - \\
(2.8 2.8 $\delta$) & 15.7(1) & 1.1(3) & 15.4(2) & 2.2(6) \\
(3.0 3.0 $\delta$) & 12.9(2) & 1.3(4) & 13.4(2) & 2.4(3)
\end{tabular}
\end{table}
\section{Acknowledgements}
We thank Simon Pimblott and Tom Scott for their support and encouragement. SR would like to thank the AWE and EPSRC for funding. GHL thanks Vincenzo Rondinella, Thierry Weiss, Roberto Caciuffo, and Nicola Magnani at JRC, Karlsruhe for discussions about the damage in UO$_2$ and the response of the phonons. Discussions with Boris Dorado and Johann Bouchet of the CEA on the effects of defects on the phonon dispersion relations are much appreciated.
\section{Introduction}
A key limitation of using uranium dioxide as a nuclear reactor fuel is its intrinsically low thermal conductivity, which is known to decay significantly as a function of irradiation damage \cite{Ronchi2004,Ronchi2004a}. As the thermal conductivity falls, the radial temperature gradient across the fuel pin becomes more substantial, leading to enhanced cracking and deformation. Consequently, the decay in thermal conductivity not only reduces the reactor efficiency but also contributes to the degradation of the structural integrity of the fuel; together these effects ultimately act to limit the fuel lifetime.
The microscopic thermodynamic variables that drive the reduction in thermal conductivity have not yet been identified. Past work suggests that the thermal conductivity is dominated by phonons \cite{Ronchi1999} for temperatures below half the melting temperature (3120 K), where the quasiharmonic approximation is valid and contributions from polarons are small \cite{Ronchi1999,Harding1989}. The phonon-dispersion curves for undamaged UO$_2$ were first obtained by the pioneering measurements of Dolling \textit{et al}. in 1965 \cite{Dolling1965}. More recently, and of great significance to the present study, Pang \textit{et al}. \cite{Pang2013, Pang2014} have revisited the phonons of UO$_2$ at 295 and 1200 K and measured the phonon linewidths as well as the phonon energies. From these measurements they extracted the thermal conductivity for each phonon branch, and showed that the totals are in excellent agreement with the thermal conductivity measured by conventional techniques in UO$_2$ \cite{Ronchi2004}. This proves that, at these lower temperatures, the phonons are indeed the important transporters of heat, as expected. Furthermore, Pang \textit{et al}. showed that at room temperature the branch-specific thermal conductivities divide into four roughly equal contributions from the transverse acoustic (TA), longitudinal acoustic (LA), transverse optic (TO$_1$), and longitudinal optic (LO$_1$) modes. The strong involvement of the optic modes is unexpected, and not predicted by theory \cite{Pang2013}.
Given this good agreement, an obvious next step would be to perform the same experiments on an irradiated single crystal of UO$_2$; large (up to 100 g) single crystals of UO$_2$ exist. However, irradiation of a crystal in a reactor results in a dose rate of $>$\,100 R/hr (mostly from short-lived fission products) that, because of the danger to personnel, would be impossible to examine with any instrument at a neutron user facility. An alternative is to damage the crystal with charged particles from an accelerator, but such radiation does not penetrate more than several $\mu$m into a bulk crystal, so the damage would be inhomogeneous. We have overcome these difficulties by uniformly damaging thin epitaxial films of UO$_2$ with accelerated charged particles and then examining the phonons with inelastic X-ray scattering (IXS) in grazing incidence. There are at least two significant challenges to be faced: (1) to choose a suitable amount of damage so that some effect may be observed, and (2) to develop the technology of measuring phonons from thin films with sufficient precision to determine the linewidths.
\section{Experimental Details and Results}
\subsection{Production and characterisation of UO$_2$ epitaxial films}
A few examples of partially epitaxial UO$_2$ films can be found in the literature dating from before 2000, but the first major effort was undertaken at Los Alamos National Laboratory with a polymer-assisted deposition method \cite{Scott2014}. The use of DC magnetron sputtering to produce epitaxial films was first reported by Strehle \textit{et al}. \cite{Strehle2012}, and such epitaxial films were fabricated by Bao \textit{et al}. at both Karlsruhe and Oxford/Bristol at about the same time \cite{Bao2013}. More details of the growth and characterization of these films can be found in Ref. \cite{Rennie2017}. Much thinner epitaxial films are used for so-called dissolution studies \cite{Springell2015}.
The epitaxial films of (001) UO$_2$ were produced via DC magnetron sputtering at Bristol University on substrates of (001) SrTiO$_3$ obtained commercially from MTI Corp. These systems have a $\sqrt{2}$ epitaxial match, achieved through a 45$^{\circ}$ in-plane rotation, giving a lattice mismatch of 0.97$\%$, as shown in Fig. \ref{fig:fig1}. An argon pressure of 7\,$\times$\,10$^{-3}$\,mbar and an oxygen partial pressure of 2\,$\times$\,10$^{-5}$\,mbar were used to sputter epitaxial films at 1000$^{\circ}$C, giving a deposition rate of 0.2\,nm/sec.
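The quoted 0.97\% mismatch can be checked against the bulk lattice parameters. The following sketch uses nominal room-temperature values for $a$(UO$_2$) and $a$(SrTiO$_3$); these numbers are literature values assumed for illustration, not taken from the present text.

```python
# Sketch: sqrt(2) epitaxial match of (001) UO2 on (001) SrTiO3.
# Nominal bulk lattice parameters (assumed literature values):
a_uo2 = 5.471   # Angstrom, UO2 fluorite cell
a_sto = 3.905   # Angstrom, SrTiO3 perovskite cell

# After the 45-degree in-plane rotation, the UO2 cell matches sqrt(2) x a_STO.
effective = 2 ** 0.5 * a_sto
mismatch_pct = (effective - a_uo2) / a_uo2 * 100
print(f"mismatch = {mismatch_pct:.2f} %")  # close to the quoted 0.97 %
```

The small residual difference from 0.97\% simply reflects the choice of nominal lattice constants.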
\begin{figure}
\centering
\includegraphics[height=0.25\textheight]{latticematch.png}
\caption{Epitaxial (001) UO$_2$ thin films were deposited on (001) SrTiO$_3$. As depicted, the UO$_2$:SrTiO$_3$ system has a $\sqrt{2}$ epitaxial relation with a lattice mismatch of 0.97\%. Figure created using the VESTA software \cite{Momma2011}. \label{fig:fig1}}
\end{figure}
\subsection{Radiation damage in thin epitaxial UO$_2$ films}
One of the most difficult parameters to determine was the amount and type of radiation damage to produce in the films. If the damage is too extensive and the lattice itself is partially destroyed, then clearly we are unable to measure phonon spectra related to crystal directions; on the other hand, too little damage risks observing only small, or no, changes in the phonons.
An important aspect is the uniformity of the damage both in the growth direction and across the surface of the film, since the grazing-incidence IXS casts a sizeable footprint of several mm on the film. In Fig. \ref{fig:fig2} we show the calculated damage profile for irradiation with light He$^{2+}$ ions, which clearly shows two aspects: (1) the ``Bragg peak'', i.e. the most damaged region where the high-energy ions eventually stop, lies deep in the substrate (Fig. \ref{fig:fig2}), and (2) over a film thickness of 500\,nm the damage distribution is homogeneous (Fig. \ref{fig:fig2}, inset). During the irradiation the ion beam was rastered, so that the entire sample was damaged uniformly in the xy plane. Light-ion (He$^{2+}$) irradiation is less likely to cause significant displacements of the heavier uranium atoms than of the lighter oxygen atoms, because of the small momentum transfer.
\begin{figure*}
\centering
\includegraphics[height=0.24\textheight]{SRIMnew.png}
\caption{The irradiation damage profiles calculated using the monolayer method in SRIM \cite{Ziegler2008} for the irradiation of a 0.5\,$\mu$m UO$_2$ layer and a bulk UO$_2$ sample with 2.1\,MeV He$^{2+}$ ions, using displacement energies of 20 eV and 40 eV for oxygen and uranium, respectively. The dashed yellow lines represent the peak of the damage, located at 3.76\,$\mu$m, i.e. in the substrate. \label{fig:fig2}}
\end{figure*}
Irradiation experiments were conducted at the Dalton Cumbrian Facility (DCF) using a 5 MV Tandem Pelletron Ion Accelerator \cite{Wady2015}. Samples were damaged by 2.1\,MeV He$^{2+}$ ions generated by the TORVIS source; the SRIM-calculated damage was 0.15 displacements per atom (dpa). The flux was 1.8\,$\times$\,10$^{12}$ He$^{2+}$/cm$^2$/sec, and the accumulated dose was 6.7\,$\times$\,10$^{16}$ He$^{2+}$/cm$^2$.
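As a simple consistency check (the irradiation time itself is not quoted in the text), the stated flux and accumulated dose imply the following exposure time:

```python
# Consistency check: irradiation time implied by the quoted flux and dose.
flux = 1.8e12      # He2+ ions / cm^2 / s
fluence = 6.7e16   # He2+ ions / cm^2 (accumulated dose)

t_s = fluence / flux
print(f"implied irradiation time ~ {t_s:.0f} s = {t_s / 3600:.1f} h")
```

This gives an irradiation of roughly ten hours, a plausible single-shift exposure.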
Two identical samples were made, of cross section 5\,$\times$\,5\,mm$^2$ and thickness 300\,nm. One of these (the pristine sample) was not irradiated, and throughout the study a comparison was made between the phonons deduced from the pristine and irradiated samples.
The thin films were characterized through measurements of the (002) UO$_2$ XRD peak. These data are shown in Fig. \ref{fig:fig3}. There is a sizeable change in the lattice parameter corresponding to an expansion of $\Delta$a/a\,=\,+\,0.56(2)\,\%. Since the full-widths at half maximum (FWHM) are almost the same for the two films in both the longitudinal and transverse directions, we can conclude that the damage is uniform across the 300\,nm of the film, and the crystallinity remains almost intact. Other tests were performed on off-specular reflections, and, as expected, the UO$_2$ films were fully epitaxial.
\begin{figure*}
\centering
\includegraphics[height=0.27\textheight]{SN794_XRD_updated2.png}
\caption{Comparative longitudinal (i.e. $\theta$-2$\theta$) (left) and transverse ($\theta$ only) (right) diffraction profiles of the (002) reflection from the pristine (open blue circles) and irradiated films (open green circles). The shift in the longitudinal scans corresponds to an increased lattice parameter $\Delta$a/a\,=\,+\,0.56(2)\,\% for the irradiated film. There is also a small broadening of the FWHM in the transverse scans for the irradiated films. \label{fig:fig3}}
\end{figure*}
\subsection{Measuring phonons by grazing-incidence inelastic X-ray scattering}
The area of momentum space and energy covered in our experiments is shown in Fig. \ref{fig:fig4} as a yellow box superimposed on the results of Ref. \cite{Pang2013} from a bulk stoichiometric sample of UO$_2$ measured by inelastic neutron scattering (INS). This region omits (1) any modes in the [$\zeta \zeta \zeta$] direction, and (2) any modes with energies above 30 meV. The first restriction is related to the use of a SrTiO$_3$ substrate, which allows the deposition of (001)-oriented UO$_2$. Exploration of the [$\zeta \zeta \zeta$] direction is possible with a (110)-oriented UO$_2$ film; however, while such films can be grown on (110) YSZ (yttria-stabilized zirconia) substrates \cite{Strehle2012}, defects within YSZ are known to give rise to significant diffuse scattering. The second restriction is related to the challenge of seeing optic modes with IXS in any heavy-metal oxide. This is demonstrated by recent work \cite{Maldonado2016} on NpO$_2$, where a small single crystal of 1.2\,mg was successfully used (in conventional reflection geometry, rather than the grazing incidence of the present work) to determine the phonons at room temperature. In that study it was not possible to measure optic modes, as their intensity (arising mainly from oxygen displacements) is at least a factor of 100 weaker than that of the acoustic modes. Furthermore, the important LO$_1$ mode, which Pang \textit{et al}. \cite{Pang2013} show carries $\sim$\,1/3 of the heat in UO$_2$, could not be observed with IXS in NpO$_2$ (see Fig. 3 in Ref. \cite{Maldonado2016} and the surrounding discussion); this mode is known to have no contribution from the metal atoms \cite{Elliott1967}.
\begin{figure*}
\centering
\includegraphics[height=0.4\textheight]{pang.png}
\caption{The yellow box highlights the region of the phonon dispersion explored during the present study of thin films. Data of the full dispersion curves are taken from recent neutron work on bulk unirradiated UO$_2$, as measured by Pang \textit{et al}. \cite{Pang2013}. Measurements were taken at 295 K (blue open circles) and at 1200 K (red solid symbols), where the circles and triangles represent the transverse and longitudinal phonon modes, respectively. The solid and dashed lines are theory – see Ref. \cite{Pang2013}. \label{fig:fig4}}
\end{figure*}
The experiments to measure the phonons from the thin films at room temperature were performed on the ID28 spectrometer at the European Synchrotron Radiation Facility \cite{ESRF}. Grazing-incidence IXS was conducted with a Kirkpatrick-Baez mirror, together with a Be focusing lens, to produce a focused beam of 15\,$\mu$m\,(vertical)\,$\times$\,30\,$\mu$m\,(horizontal) with an inclination angle of 0.2\,$^{\circ}$ out of the horizontal plane. The incident energy was 17.794\,keV, with an instrumental resolution of 3\,meV. This energy, determined by the Si(999) reflections in the analyzers, is just above the \textit{U\,L$_3$} resonant energy of 17.166\,keV. This increases the absorption of the incident beam; the 1/e penetration of photons of this energy in UO$_2$ is 10.4\,$\mu$m. The vertical spot size of 15\,$\mu$m implies, at an incident angle of 0.2\,$^{\circ}$, a footprint of $\sim$\,2.5 mm. Much of the beam intensity is lost to absorption. The critical angle of UO$_2$ for X-rays of this energy is 0.18\,$^{\circ}$, reducing the interaction further. Given the absorption, the penetration depth of the X-ray beam at this angle is $\sim$\,15\,nm.
Two experimental efforts were made on ID28. In the first a 450\,nm film was used, and in the second a 300\,nm film. To increase the strength of the signal in the second attempt, the film was tilted an additional 0.5\,$^{\circ}$ around an axis in the horizontal plane. This introduces a small L component into the observed phonon modes, i.e. the scattering vector is not completely in the horizontal (HK0) plane; the L component is indicated by $\delta$, where 0.03\,$<$\,$\delta$\,$<$\,0.15. However, this small tilt allows a deeper penetration of $\sim$\,150\,nm, and a concomitant order-of-magnitude increase in the phonon signals. With these small penetration depths no evidence of the substrate was seen. In Fig. \ref{fig:fig5} we show a selection of data.
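The increase in penetration with the extra tilt can be illustrated by a purely geometric estimate: a sketch, assuming the probing depth is simply the 1/e attenuation length projected onto the surface normal, and neglecting refraction and evanescent-wave effects near the critical angle (which is why the untilted 0.2$^{\circ}$ case is not reproduced by this formula).

```python
import math

# Geometric estimate of the 1/e probing depth at grazing incidence.
# Neglects refraction near the critical angle, so it is only indicative.
attenuation_um = 10.4   # 1/e path length of 17.794 keV photons in UO2
theta_deg = 0.2 + 0.5   # grazing angle plus the additional 0.5 deg tilt

depth_nm = attenuation_um * 1e3 * math.sin(math.radians(theta_deg))
print(f"probing depth ~ {depth_nm:.0f} nm")  # same order as the ~150 nm quoted
```

The result is of the same order as the $\sim$\,150\,nm quoted above, supporting the geometric picture once the angle is well above the 0.18$^{\circ}$ critical angle.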
\begin{figure*}
\centering
\includegraphics[height=0.65\textheight]{Fig_6.png}
\caption{We show (upper) phonons measured from the TA (100) branch at positions (4\,0.8\,$\delta$) and (4\,1.0\,$\delta$) and (lower) those from the LA (110) branch at positions (2.8\,2.8\,$\delta$) and (3.0\,3.0\,$\delta$). In each case the blue data are from the pristine and the green data from the irradiated sample. The fits use a Gaussian resolution of 3\,meV convoluted with a Lorentzian lineshape consisting of a central (resolution-limited) peak together with a Damped Harmonic Oscillator (DHO) representing both the Stokes and anti-Stokes phonons, weighted by the Bose factors, to reproduce the experimental curves. The width of the DHO function then gives the experimentally deduced phonon linewidth. The data have been normalized differently so that they do not overlap in the figures; the higher-intensity data are from the pristine film. \label{fig:fig5}}
\end{figure*}
Figure \ref{fig:fig5} shows both the measured Stokes and anti-Stokes phonons, which gives a better determination of the absolute energy of the excitation. The central (elastic) line arises from thermal diffuse scattering and defects; it is noticeable that it is stronger (relative to the phonons) in the irradiated sample (green) than in the pristine sample (blue), as would be anticipated.
From these analyses we determine the energy and linewidth of the phonons. Figure \ref{fig:fig6} shows the phonon energies that we have measured with the thin films at 295\,K and compares them to those reported by Pang\,\textit{et\,al}. \cite{Pang2013}.
\begin{figure*}
\centering
\includegraphics[height=0.35\textheight]{Fig_5.png}
\caption{The energies of the transverse (open circles) and longitudinal (open triangles) phonons of UO$_2$ along the (00$\zeta$) and ($\zeta\zeta$0) directions as measured via IXS for the pristine (blue) and irradiated (green) thin films, in comparison with bulk data from Ref. \cite{Pang2013}, where these data (obtained by INS) have been fitted to give smooth curves represented by the solid (TA) and dashed (LA) lines. The TA and LA phonon energies at the X-point should be degenerate; the observed differences in the thin-film data are a consequence of the small L component introduced by tilting the film out of the horizontal plane. In the ($\zeta\zeta$0) direction a different TA phonon was measured compared to the INS work, due to different orientations; see text. \label{fig:fig6}}
\end{figure*}
A comparison between the data shown in Figs.\,\ref{fig:fig4} and \ref{fig:fig6} shows that in the ($\zeta\zeta$0) direction a different TA phonon has been measured in the films compared with the bulk spectra shown in Fig.\,\ref{fig:fig4}. This is because the standard orientation, as used to obtain Fig.\,\ref{fig:fig4}\,\cite{Pang2013, Pang2014}, has [1$\bar{1}$0] vertical, whereas our film (because of the epitaxy with the SrTiO$_3$ substrate) has [001] vertical. The TA (110) phonons in Fig.\,\ref{fig:fig4} have polarization [00u]; in our geometry this mode cannot be observed, and we instead measure a TA (110) phonon with polarization [u, -u, 0] (where u is the small atomic displacement from the equilibrium atomic position). In our case all the atomic displacements of the measured phonons lie in the plane of the film. This is because in grazing incidence the scattering vector \textbf{Q} lies very close to the plane (within 2\,$^{\circ}$), and phonons are observed only when the product \textbf{Q}\,$\cdot$\,\textbf{u} is non-zero. This makes the measurements insensitive to atomic vibrations along the film growth axis. The LA modes are the same in both our work and that of Pang\,\textit{et\,al}. \cite{Pang2013, Pang2014}.
Figure\,\ref{fig:fig6} shows no significant differences in the phonon energies between the pristine and the irradiated films, and both results agree within statistics with the energies determined by Pang\,\textit{et\,al}. \cite{Pang2013}, measured by INS.
The lack of any difference in the phonon energies between the pristine and irradiated films is not surprising, since the lattice parameter has been changed by only 0.6\,\% and the crystallinity of the sample is still preserved. Based on the well-known thermal expansion of unirradiated UO$_2$ \cite{Martin1987} the lattice expands by 0.85\,\% between 295 and 1200\,K, and we can see in Fig. \ref{fig:fig4} that this expansion makes little difference to the energies of the acoustic modes.
However, the more likely change would come in the linewidths; an increase of the linewidths would translate to a decrease in the phonon lifetimes and a concomitant decrease of thermal conductivity \cite{Pang2013, Pang2014}.
\subsection{Analysis of the phonon linewidths}
The effects on the phonon lifetime appear in the FWHM, whose inverse is related directly to the thermal conductivity. It is important in this respect to compare our values with those deduced for bulk UO$_2$ in Tables A1 and A2 of Ref. \cite{Pang2014}. We tabulate all our measured energies and deduced linewidths in the Appendix. For graphical representation we show only the TA (100) mode; see Fig.\,\ref{fig:fig7}. The TA (110) mode is not the same as that measured by Pang\,\textit{et\,al}. \cite{Pang2013,Pang2014}, and the LA modes are weaker than the TA, which reduces the statistics for the LA modes; however, similar trends are seen in all the acoustic modes. As Fig.\,\ref{fig:fig5} shows, the FWHM of the low-energy phonons is nontrivial to fit, as there is an appreciable contribution to their intensity from the central elastic line. The FWHMs of the lower-energy phonons ($<$\,$\sim$\,5\,meV) are therefore omitted from Fig.\,\ref{fig:fig7}.
\begin{figure*}
\centering
\includegraphics[height=0.38\textheight]{Fig_7.png}
\caption{Values of the FWHM deduced from analysis of the phonons measured in the TA [100] direction. The values tabulated in Ref. \cite{Pang2014} by INS are shown as blue (295\,K) and red (1200\,K) filled circles. Our values using IXS are shown as open blue (pristine) and open green (irradiated) circles. Values determined from a small bulk UO$_2$ crystal at room temperature determined on the same X-ray instrument are shown as open black squares \cite{Paolasini}. \label{fig:fig7}}
\end{figure*}
\section{Discussion}
\subsection{Extent of radiation damage}
Previous work on radiation damage is broadly separated into two different aspects: damage in the reactor, and damage as a product of self-irradiation of the spent fuel. The first relates to damage of the fuel and the formation of the ``high burn-up structure'' in nuclear fuels \cite{Rondinella2010}. The second concerns the long-term storage of irradiated nuclear fuel, and what happens to the fuel as a function of time \cite{Wiss2014}. In the storage case (Fig. 1 of Ref. \cite{Wiss2014}), damage of $\sim$\,0.15\,dpa corresponds approximately to the activity of moderately irradiated 60\,GWd/t (gigawatt-days per ton) fuel when it is removed from the reactor. This would correspond to $\sim$\,5\,$\times$\,10$^{17}$ alpha-decay events/g, with a swelling of 0.7\,\% (Table 1 of Ref. \cite{Wiss2014}), compared with the swelling we produced of $\sim$\,0.6\,\% (i.e.\,$\Delta$a/a\,=\,0.56\,\%). The thermal conductivity of such material would also have decreased by $\sim$\,50\,\% \cite{Lucuta1996}. The precise relationship between the lattice swelling, the concentration of defects, and the drop in thermal conductivity is still open for discussion \cite{Staicu2010}.
With alpha particles (He$^{2+}$ ions) the damage is not as extensive as that from the recoil of fission products when the fuel is inside the reactor; the alpha particles will result primarily in displacements and interstitials associated with the oxygen sublattice. Additional inhomogeneity is caused by the implantation of He in the lattice. Simulating the effect of fission-product damage requires the use of heavier ions, such as Zn, Mo, Cd, Sn, Xe, I, Pb, Au and U \cite{Matzke1996, Matzke2000}. This suggests that dpa is not the only variable that should be considered. On the other hand, as Fig. \ref{fig:fig7} shows, we do observe a substantial increase in the phonon linewidths with the He$^{2+}$ irradiation we have performed, even of the acoustic modes. Presumably the effects on the optic modes would be even greater if the main accommodation of damage is in the oxygen sublattice. Future experiments should aim to observe the LO$_1$ mode, as it carries a fair proportion of the heat; however, measuring such optic modes represents a significant experimental challenge.
The swelling of $\sim$\,0.6\,\% in the growth direction observed in this radiation-damage experiment, at 0.15\,dpa, is in good agreement with that produced in the top layer of bulk UO$_2$ by Debelle \textit{et al}. \cite{Debelle2011}, who used 20\,keV alpha particles, i.e. an energy 100 times smaller than used in our experiments. With lower-energy particles, more He atoms will be implanted in the first few microns of the bulk UO$_2$ sample. Debelle \textit{et al}. show (Fig. 2 of Ref. \cite{Debelle2011}) that when the damage is increased to 3.3\,dpa the lattice loses definition and an average lattice swelling cannot be determined. At this high level of irradiation, small grain growth and polygonization can be induced. This is associated with the `high burn-up' structure, in which the grain sizes are reduced from microns to hundreds of nm \cite{Rondinella2010}. Previous work suggests that the lattice becomes smaller when the `high burn-up' structure appears \cite{Spino2000}. At this stage of damage, where the microstructure has been significantly changed, the measurement of phonons by IXS would not be possible.
Further supporting evidence that the thermal conductivity should change in our films is provided by the study of Weisensee \textit{et al}. \cite{Weisensee2013}, in which 360\,nm epitaxial films of UO$_2$ were irradiated with 2\,MeV\,Ar$^+$ ions at room temperature, and the decrease in thermal conductivity (of about 50\,\%) was measured directly with a time-domain thermal reflectance technique. These UO$_2$ films were grown on YSZ substrates \cite{Strehle2012}, where it is known that the UO$_2$ is under 6.4\,\% compressive strain. The authors of Ref. \cite{Weisensee2013} do not report a lattice swelling, but from their Fig. 1 this may be estimated at no more than $\sim$\,0.28\,\%. Of course, the growth direction should not be directly affected by the lattice mismatch, but this aspect does cast doubt on whether this swelling is a meaningful measure, as indeed the authors themselves note. Weisensee \textit{et al}. showed that with an irradiation dose of 10$^{15}$\,Ar$^+$/cm$^2$ the thermal conductivity drops by a factor of $\sim$\,2.5, and that it does not change for a further increase of the dose by a factor of 10; i.e. the decrease in thermal conductivity is already saturating by $\sim$\,10$^{15}$\,Ar$^+$/cm$^2$ (see Fig. 4 of Ref. \cite{Weisensee2013}). The dose used in the current experiment, 6.7\,$\times$\,10$^{16}$ He$^{2+}$/cm$^2$, is a factor of $\sim$\,7 more than that used by Weisensee \textit{et al}. \cite{Weisensee2013}, although the He$^{2+}$ ions used are lighter (by a factor of 10) than Ar$^+$. A direct relationship between the two experiments is difficult to establish quantitatively, although qualitatively the comparison with Ref. \cite{Weisensee2013} suggests the thermal conductivity of our sample should drop by about 50\,\%.
In summary, as shown in Fig. 2 of Ref. \cite{Wiss2014}, our value of radiation damage (0.15\,dpa), coupled with a swelling of 0.6\,\%, does appear consistent with many current studies, and does suggest a reduction would be observed in the thermal conductivity. This agrees with the observations we have made in changes of the linewidths of the acoustic phonons.
Future irradiations with heavier particles, Xe for example \cite{Matzke2000, Usov2013, Popel2016}, causing displacements in the uranium sublattice and thus simulating the fission-recoil damage that occurs in actual irradiated fuels, might show interesting changes to the phonon spectra.
\subsection{Phonons and their linewidths}
These experiments have been able to measure the acoustic phonons from a 300\,nm epitaxial film of UO$_2$. The grazing-incidence technique has been refined so that the penetration depth into the sample is $\sim$\,150\,nm. The volume of homogeneously damaged material probed may be roughly estimated as the cross section of the beam multiplied by the attenuation length of 10\,$\mu$m, giving a mass of UO$_2$ (density $\sim$\,10\,g\,cm$^{-3}$) of $\sim$\,100\,ng. For an inelastic neutron experiment, samples would have to be at least 50\,mm$^3$, i.e. $\sim$\,0.5\,g; this represents an enormous increase in sensitivity for the X-ray experiments compared with neutron inelastic scattering. The intensity is increased by the large photon cross section from the 92 electrons around the U nucleus, as well as by the greater X-ray flux. The optic modes, which primarily consist of motions of the oxygen atoms \cite{Pang2014, Maldonado2016}, could not be observed with IXS. This has been shown to be a limitation of the technique, as in the study of NpO$_2$, for which a larger bulk sample was in the X-ray beam \cite{Maldonado2016}.
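The $\sim$\,100\,ng figure follows from the rough recipe described above (beam cross section times attenuation length, at the quoted density); a sketch of this order-of-magnitude estimate:

```python
# Order-of-magnitude estimate of the UO2 mass probed, following the recipe
# in the text: beam cross section times the ~10 um attenuation length.
width_cm = 30e-4    # horizontal beam size, 30 um
height_cm = 15e-4   # vertical beam size, 15 um
length_cm = 10e-4   # 1/e attenuation length along the beam, ~10 um
density = 10.0      # g/cm^3, approximate density of UO2

mass_ng = width_cm * height_cm * length_cm * density * 1e9
print(f"probed mass ~ {mass_ng:.0f} ng")  # tens of ng, of order the ~100 ng quoted
```

The estimate lands within a factor of two of the $\sim$\,100\,ng quoted, which is all such a geometric argument can claim.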
To our knowledge, only two studies using inelastic scattering to address the phonons of surfaces or of thin films have been published.
(1) The work on NbSe$_2$, where the authors \cite{Murphy2005} used grazing-incidence X-ray scattering with the angle of incidence set either below or above the critical angle to observe the soft mode associated with the charge-density wave in this material. Their interest was primarily in the energy of these soft modes, and in their discussion of the complicated inelastic spectra they make no comment on the phonon linewidths, although the probing depth for settings below the critical angle was only $\sim$\,4\,nm.
(2) Experiments on InN films \cite{Serrano2011}, which were also performed in grazing-incidence geometry, but with a film of thickness 6.2\,$\mu$m, some 20 times thicker than the films we have used; the penetration length of the X-rays was $\sim$\,50\,$\mu$m, about 5 times more than in our case. These larger parameters may be the reason they observe resolution-limited phonon linewidths.
As shown in Fig. \ref{fig:fig7} and Table A1, the linewidths measured at 295\,K in the pristine 300\,nm UO$_2$ film are significantly larger (especially at higher energies) than those reported at 295\,K for the bulk by Pang \textit{et al}. \cite{Pang2013, Pang2014}. We can be more certain of this increase over the bulk values, as experiments have recently been performed \cite{Paolasini} on a small (bulk) single crystal of UO$_2$ on the same instrument (ID28) and with the same experimental set-up, giving the resolution of 3\,meV. The linewidths deduced from these experiments \cite{Paolasini} (shown in Fig.\,\ref{fig:fig7}) are in good agreement with (or even smaller than) those deduced by Pang \textit{et al}. \cite{Pang2013, Pang2014}. Initially it might be thought that these differences can be attributed to finite-size effects, but this seems unlikely given that the penetration depth is 150\,nm while the chemical unit cell is 0.547\,nm. We suggest instead that this difference is due to intrinsic strain in the pristine film causing a decrease in the phonon lifetime when the phonon wavelengths become short, i.e. at higher energies near the zone boundaries. This accounts for the slope of the linewidth vs energy curve (Fig. \ref{fig:fig7}) for the film data. The effect is strongly enhanced in the irradiated film, owing to the greater strain and to the inhomogeneity caused by the He particles in the lattice. This aspect, as well as the changes in phonon lifetime due to near-surface effects, would be interesting to consider theoretically. Measurements on unirradiated bulk samples \cite{Pang2013, Pang2014} show a slope in the linewidth vs energy curves for most phonon branches (see Fig.\,2 of Ref. \cite{Pang2013} and Fig.\,7 of Ref. \cite{Pang2014}).
The present experiments on the irradiated film (Fig.\,\ref{fig:fig7} and Table A1) suggest that the slopes for irradiated samples may be greater than those for an unirradiated sample at higher temperature, and it would be interesting to explore theoretically the effect of defects on the dependence of the linewidth on energy and momentum.
As discussed in Refs. \cite{Pang2013, Pang2014}, the thermal conductivity $\kappa_{\textbf{q}j}$ for phonons of wave vector \textbf{q} and branch $j$ is given by
\begin{equation}
\kappa_{\textbf{q}j} = (1/3)C_{\textbf{q}j}v^2_{\textbf{q}j}/\Gamma_{\textbf{q}j}
\end{equation}
where $C_{\textbf{q}j}$ is the phonon heat capacity and $v_{\textbf{q}j} = \partial E_{\textbf{q}j}/\partial \textbf{q}$ is the group velocity determined by the local dispersion gradients. The phonon mean free paths $\lambda_{\textbf{q}j} = v_{\textbf{q}j}\tau_{\textbf{q}j}$ depend on the measured phonon linewidths through the relaxation time $\tau_{\textbf{q}j} = 1/\Gamma_{\textbf{q}j}$.
In our case, since the energies of the phonons have not changed between the pristine and irradiated samples, the change in thermal conductivity depends only on the change in the linewidths. A complete calculation of the thermal conductivity of the damaged films is not possible without a measure of the optic phonon linewidths. However, given that the acoustic modes contribute $\sim$\,50\,\% to the thermal conductivity \cite{Pang2013}, the doubling of the average linewidths (see Fig. \ref{fig:fig7}) translates to a factor of two drop in the thermal conductivity for the damaged films. This is consistent with that found by Weisensee \textit{et al}. \cite{Weisensee2013} for similar films irradiated by Ar ions.
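Since $C_{\textbf{q}j}$ and $v_{\textbf{q}j}$ are unchanged within error, the per-mode conductivity ratio reduces to the inverse ratio of the linewidths. A sketch using the TA (100) pairs from the appended table (the entries above $\sim$\,5\,meV with both values available) illustrates the factor-of-two estimate:

```python
# kappa_irr / kappa_pris = Gamma_pris / Gamma_irr mode-by-mode, since the
# phonon energies (hence C and v) are unchanged within error.
# TA (100) linewidth pairs (2*Gamma, meV) from the Appendix table:
pairs = [(0.9, 1.3), (1.3, 2.2), (1.7, 3.0), (1.5, 3.1)]  # (pristine, irradiated)

broadening = [irr / pris for pris, irr in pairs]
mean_broadening = sum(broadening) / len(broadening)
kappa_ratio = 1.0 / mean_broadening
print(f"mean linewidth increase ~ x{mean_broadening:.1f}")
print(f"implied acoustic-mode kappa ratio ~ {kappa_ratio:.2f}")
```

The mean broadening is close to a factor of two, so the acoustic contribution to the conductivity roughly halves, consistent with the argument above.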
\subsection{Alternative methods for phonon measurements}
Raman scattering is an alternative method of obtaining phonon measurements, and such studies have observed the LO modes from a number of different samples \cite{Livneh2006, Livneh2008}. If we confine our attention to the low-energy modes, then Refs. \cite{Desgranges2012, Manara2003} show that the only really strong line is that at 445\,cm$^{-1}$ = 55.2\,meV, which corresponds to the TO$_2$ phonon line at the zone center ($\Gamma$) in Fig.\,\ref{fig:fig4}. As shown by Pang \textit{et al}. \cite{Pang2013} (see Fig. 4 of Ref. \cite{Pang2013}), this TO$_2$ mode contributes very little to the thermal conductivity. The Raman technique will therefore not elucidate the role of damage in changes of the thermal conductivity. Furthermore, owing to the penetration depth of the laser light, Raman studies performed on irradiated UO$_2$ samples \cite{Manara2003, Guimbretiere2012,Guimbretiere2013,Desgranges2014} require $\sim$\,100\,$\mu$m of UO$_2$, far more than the 300\,nm film used in this experiment. Any backscattering Raman measurement on a 300\,nm thin film would be sensitive to the substrate excitations, so a grazing-incidence geometry would be required.
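The quoted correspondence between the Raman shift and the phonon energy is a simple unit conversion (1 cm$^{-1}$ $\approx$ 0.124 meV):

```python
# Convert the 445 cm^-1 Raman line (TO2 mode at the zone center) to meV.
MEV_PER_INV_CM = 0.123984  # 1 cm^-1 expressed in meV

raman_shift = 445.0        # cm^-1
energy_mev = raman_shift * MEV_PER_INV_CM
print(f"{raman_shift:.0f} cm^-1 = {energy_mev:.1f} meV")  # 55.2 meV, as quoted
```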
Desgranges and colleagues have irradiated UO$_2$ samples with high-energy ($>$\,20\,MeV) He$^{2+}$ ions and are able to probe the Raman signal \textit{in situ} with a spatial resolution of about 2\,$\mu$m over the irradiated depth profile of 150\,$\mu$m. They observe extra peaks in the Raman signal, which they associate \cite{Desgranges2014} with defects due to a charge-separated state in which U$^{3+}$ and U$^{5+}$ ions coexist in the irradiated material. They also observe \cite{Guimbretiere2012} the TO$_2$ phonon (T$_{2g}$ mode) dropping in frequency near the Bragg peak of the damage at 130\,$\mu$m, but this drop in energy is only 0.5\,cm$^{-1}$, i.e. $<$\,0.1\,meV, far smaller than our IXS resolution of 3\,meV. Comparison with these Raman papers \cite{Guimbretiere2012, Guimbretiere2013} is difficult owing to the lack of other characterisation information, such as a value for the lattice swelling $\Delta$a/a.
\section{Conclusions}
These experiments shed further light on the reasons for the drop of the thermal conductivity in UO$_2$ when it is irradiated in a reactor, which is a technical problem when using UO$_2$ fuel. We have shown that irradiation with He$^{2+}$ ions on a thin epitaxial film produces uniform damage (Fig. \ref{fig:fig2}) over the whole film thickness, and characterisation of the (homogeneous) damage is equivalent to 0.15\,dpa, with a swelling of the damaged UO$_2$ by $\Delta$a/a $\sim$ 0.6\,\%, (Fig. \ref{fig:fig3}), i.e. a volume swelling of $\sim$\,2\,\%. (This assumes that the clamping by the SrTiO$_3$ substrate allows the film to expand equally in all three directions.) There will also be inhomogeneous damage caused by the presence of He particles in the lattice, as well as displaced oxygen atoms.
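The lattice-to-volume swelling conversion quoted above is simple arithmetic: an isotropic expansion of $\Delta$a/a gives a volume change of $(1+\Delta a/a)^{3}-1\approx 3\,\Delta a/a$. A one-line check with the measured value:

```python
# Check of the swelling arithmetic: isotropic expansion da/a -> volume change
# (1 + da/a)**3 - 1, which is approximately 3 * da/a for small da/a.
da_over_a = 0.006                    # measured lattice swelling, ~0.6% (Fig. 3)
dV_over_V = (1 + da_over_a) ** 3 - 1
print(round(100 * dV_over_V, 2))     # 1.81 (%), i.e. the ~2% volume swelling quoted
```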
We have succeeded in measuring the acoustic phonons (Figs. \ref{fig:fig5} and \ref{fig:fig6}) by grazing-incidence X-ray scattering from thin films, where the estimated amount of material giving the phonons is $\sim$ 100 ng. The optic phonons, some of which are known to be important in carrying the heat in UO$_2$ \cite{Pang2013, Pang2014}, could not be measured as they are about 100 or more times weaker than the acoustic modes \cite{Maldonado2016}. The acoustic modes, both transverse and longitudinal, were measured with enough precision to analyse their respective widths (Fig. \ref{fig:fig7} and Table A1).
For both the pristine and irradiated films (Fig. \ref{fig:fig6}), the energies of the acoustic phonons are, within experimental error, consistent with those of the bulk. This is not surprising, given that the UO$_2$ phonon spectrum changes only a small amount on heating from 295 to 1200\,K \cite{Pang2013, Pang2014}, in which case the volume expansion is comparable to that caused by the damage induced in this study through He$^{2+}$ irradiation.
A definite increase in the phonon lifetimes is observed for phonons in the pristine sample as compared to the bulk values, as measured by both neutron and X-ray inelastic scattering. We attribute these changes to strain in the pristine films.
Changes in the linewidths between the pristine and damaged films are shown in a plot of deconvoluted FWHM vs energy for the TA(100) modes, see Fig. \ref{fig:fig7} and Table A1. All acoustic modes show significant effects, with the average effect being an increase in FWHM of about 50-100\,\%, depending on the energy. As the phonon energies do not change, and the group velocity of the phonons is the same for the pristine and damaged samples, this can be translated directly into a decrease in the contribution to the thermal conductivity from the low-energy acoustic modes for the damaged UO$_2$ thin film \cite{Pang2013, Weisensee2013}. The total thermal conductivity cannot be deduced from these experiments without measuring the higher-energy optic modes, and especially the LO$_1$ mode that carries so much of the heat \cite{Pang2013} in UO$_2$. The measurements of the acoustic modes suggest that the significant decrease in thermal conductivity of irradiated UO$_2$ is caused by the damage reducing the lifetimes of the phonons, and not by other possible mechanisms such as increased grain boundaries and defects.
We hope this work prompts more careful theoretical analysis of the thermal conductivity of UO$_2$ in the future, as well as further experiments as the intensity of synchrotron X-ray sources increases. It will be interesting, for example, to irradiate films with heavier ions to see the additional effects on the phonon spectra.
Developing grazing-incidence Raman scattering capable of examining irradiated films of $<$\,1\,$\mu$m thickness would also lead to further progress, and would be a great help in monitoring films already damaged.
\begin{table}[]
\centering
\caption{Results of the analysis of the energies and full widths at half maximum (given here as 2$\Gamma$ of the DHO function used for the fitting, similarly to Ref. \cite{Pang2014}) of the acoustic phonons in the pristine and irradiated films. Notice that, because of the 0.5$^{\circ}$ tilt of the film, the value of the L component is not strictly zero, so it is indicated here as $\delta$, which varies between 0.05 and 0.15 depending on the phonon.}
\label{my-label}
\begin{tabular}{p{1.6cm} p{1.6cm} p{1.6cm} p{1.6cm} p{1.6cm}}
\hline\hline
\multicolumn{5}{c}{TA} \\
\hline
\begin{tabular}[c]{@{}c@{}}Wave\\ vector\end{tabular} & \begin{tabular}[c]{@{}c@{}}E$_{PRIS}$\\ (meV)\end{tabular} & \begin{tabular}[c]{@{}c@{}}2$\Gamma_{PRIS}$\\ (meV)\end{tabular} & \begin{tabular}[c]{@{}c@{}}E$_{IRRAD}$\\ (meV)\end{tabular} & \begin{tabular}[c]{@{}c@{}}2$\Gamma_{IRRAD}$\\ (meV)\end{tabular} \\
\hline
(4 0.2 $\delta$) & 5.0(2) & - & 3.7(2) & - \\
(4 0.4 $\delta$) & 6.5(1) & 0.9(1) & 6.9(1) & 1.3(2) \\
(4 0.5 $\delta$) & - & - & 8.6(1) & 1.6(2) \\
(4 0.6 $\delta$) & 11.0(1) & 1.3(2) & 10.1(1) & 2.2(2) \\
(4 0.8 $\delta$) & 13.2(1) & 1.7(4) & 12.5(1) & 3.0(4) \\
(4 1.0 $\delta$) & 14.0(1) & 1.5(2) & 13.1(1) & 3.1(4) \\
& & & & \\
(2.2 1.8 $\delta$) & 6.7(1) & - & 6.5(1) & - \\
(2.4 1.6 $\delta$) & 10.3(1) & 0.8(1) & 10.1(1) & 1.0(3) \\
(2.6 1.4 $\delta$) & 11.9(1) & 1.0(1) & - & - \\
(2.8 1.2 $\delta$) & 12.6(1) & 1.5(1) & 12.9(1) & 1.9(4) \\
(3.0 1.0 $\delta$) & 12.7(1) & 1.1(1) & 13.3(2) & 2.8(5) \\
& & & & \\
\hline\hline
\multicolumn{5}{c}{LA} \\
\hline
\begin{tabular}[c]{@{}c@{}}Wave\\ vectors\end{tabular} & \begin{tabular}[c]{@{}c@{}}E$_{PRIS}$\\ (meV)\end{tabular} & \begin{tabular}[c]{@{}c@{}}2$\Gamma_{PRIS}$\\ (meV)\end{tabular} & \begin{tabular}[c]{@{}c@{}}E$_{IRRAD}$\\ (meV)\end{tabular} & \begin{tabular}[c]{@{}c@{}}2$\Gamma_{IRRAD}$\\ (meV)\end{tabular} \\
\hline
(4.2 0 $\delta$) & 9.7(2) & - & 8.7(1) & - \\
(4.4 0 $\delta$) & 15.6(1) & 1.1(2) & 15.8(1) & 1.9(4) \\
(4.6 0 $\delta$) & 20.4(1) & 1.3(2) & 20.9(2) & 1.9(4) \\
(4.8 0 $\delta$) & 23.3(1) & 2.0(3) & 24.0(2) & 3.0(6) \\
(5.0 0 $\delta$) & 24.3(1) & 2.5(3) & 25.0(2) & 3.4(8) \\
& & & & \\
(2.2 2.2 $\delta$) & 11.5(1) & - & 11.5(2) & 2.8(8) \\
(2.4 2.4 $\delta$) & 18.0(2) & 0.7(2) & 17.1(6) & 3.1(8) \\
(2.6 2.6 $\delta$) & 20.1(1) & 1.0(4) & 20.0(6) & - \\
(2.8 2.8 $\delta$) & 15.7(1) & 1.1(3) & 15.4(2) & 2.2(6) \\
(3.0 3.0 $\delta$) & 12.9(2) & 1.3(4) & 13.4(2) & 2.4(3)
\end{tabular}
\end{table}
\section{Acknowledgements}
We thank Simon Pimplott and Tom Scott for their support and encouragement. SR would like to thank the AWE and EPSRC for funding. GHL thanks Vincenzo Rondinella, Thierry Weiss, Roberto Cacuiffo, and Nicola Magnani at JRC, Karlsruhe for discussions about the damage in UO$_2$ and the response of the phonons. Discussions with Boris Dorado and Johann Bouchet of the CEA on the effects of defects on the phonon dispersion relations are much appreciated.
\section{Introduction\label{Sec1}}
The consistent consideration of quantum processes in vacuum-violating
backgrounds has to be carried out in the framework of nonperturbative
calculations in quantum field theory, in particular QED. Different analytical
and numerical methods have been applied to study the effect of
electron-positron pair creation from the vacuum; see the recent reviews
\cite{Ruffini,Gelis}. Among these methods there are ones based on the
existence of exact solutions of the Dirac (or Klein-Gordon) equation in the
corresponding background fields; e.g., see Refs. \cite{FGS,GavGit16}. They
provide exactly solvable models of QED that are useful for studying the
characteristic features of the theory and can be used to check approximations
and numerical calculations.
Recently, we presented a review of particle-creation effects in
time-dependent uniform external electric fields that contains the three most
important exactly solvable cases: the Sauter-like electric field, the
$T$-constant electric field, and exponentially growing and decaying electric
fields \cite{AdoGavGit17}. These electric fields are switched on and off at
the initial and final time instants, respectively. We refer to such external
fields as $t$-electric potential steps.
Choosing the parameters of the exponentially varying electric fields, one can
consider both fields in the slowly varying regime and fields that exist only
for a short time in the vicinity of the switching-on and -off times. The case
of the $T$-constant electric field is distinct: the electric field is
constant within the time interval $T$ and zero outside of it, that is, it is
switched on and off ``abruptly'' at definite instants. The model with the
$T$-constant electric field is important for studying particle-creation
effects; see Ref. \cite{AdoGavGit17} for a review. The details of the
switching on and off of the $T$-constant electric field are therefore of
interest. To estimate the role of the switching-on and -off effects in pair
creation by the $T$-constant electric field, we consider a composite electric
field that grows exponentially in the first interval $t\in \mathrm{I}=\left(
-\infty ,t_{1}\right) $, remains constant in the second interval $t\in
\mathrm{II}=\left[ t_{1},t_{2}\right] $, and decreases exponentially in the
last interval $t\in \mathrm{III}=\left( t_{2},+\infty \right) $. We
essentially use the notation and final formulas of Ref. \cite{AdoGavGit17}.
The article is organized as follows: In Sec. \ref{Sec2}, we introduce the
composite field and summarize details concerning the exact solutions of the
Dirac equation with such a field. We find exact formulas for the
differential mean number of particles created from the vacuum, the total
number of particles created from the vacuum and the vacuum-to-vacuum
transition probability. In Sec. \ref{Sec3} we consider the general
properties of the differential mean numbers of pairs created. We visualize
how these mean numbers are distributed over the quantum numbers, especially
in cases where asymptotic approximations involved are not applicable. In
Sec. \ref{Sec4} we compute differential and total quantities in some special
field configurations of interest. We show that the results for slowly
varying fields are completely predictable using a recently developed version
of a locally constant field approximation. We study configurations that
simulate finite switching-on and -off processes within and beyond the slowly
varying regime. Final comments are placed in Sec. \ref{conclusions}.
\section{IN and OUT solutions in a composite electric field\label{Sec2}}
In this section we summarize general aspects of the exact solutions of the
Dirac equation with the field under consideration and briefly discuss the
calculation of the differential and total numbers of pairs created.
The composite electric field in a $d=D+1$ dimensional Minkowski space-time is
homogeneous and positively oriented along a single direction, $\mathbf{E}\left(
t\right) =\left( E^{i}\left( t\right) =\delta _{1}^{i}E\left( t\right) \,,\
\ i=1,...,D\right) $, and is described by a vector potential along the same
direction, $A^{\mu }=\left( A^{0}=0,\mathbf{A}\left( t\right) \right) $, $\mathbf{A}%
\left( t\right) =\left( A^{i}\left( t\right) =\delta _{1}^{i}A_{x}\left(
t\right) \right) $, whose explicit forms are%
\begin{eqnarray}
&&E\left( t\right) =E\left\{
\begin{array}{ll}
e^{k_{1}\left( t-t_{1}\right) }\,, & t\in \mathrm{I}\,, \\
1\,, & t\in \mathrm{II\,}, \\
e^{-k_{2}\left( t-t_{2}\right) }\,, & t\in \mathrm{III\,},%
\end{array}%
\right. \ \ \left( E,k_{1},k_{2}\right) >0\,, \label{s2.0} \\
&&A_{x}\left( t\right) =E\left\{
\begin{array}{ll}
k_{1}^{-1}\left( -e^{k_{1}\left( t-t_{1}\right) }+1-k_{1}t_{1}\right) \,, &
t\in \mathrm{I}\,, \\
-t\,, & t\in \mathrm{II\,}, \\
k_{2}^{-1}\left( e^{-k_{2}\left( t-t_{2}\right) }-1-k_{2}t_{2}\right) \,, &
t\in \mathrm{III}\,,%
\end{array}%
\right. \label{s2.1}
\end{eqnarray}%
where $t_{1}<0$ and $t_{2}>0$ are fixed time instants. Throughout the text,
we refer to \textrm{I} as the switching-on interval, \textrm{III} as the
switching-off interval and \textrm{II} as the constant field interval.
This field configuration encompasses the $T$-constant field \cite{GavGit96}, characterized by the
absence of the exponential parts, and the peak field \cite{AdoGavGit16}.
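As a sanity check, the piecewise definitions (\ref{s2.0}) and (\ref{s2.1}) can be verified numerically: $A_{x}$ is continuous at $t_{1}$ and $t_{2}$, and $E(t)=-\dot{A}_{x}(t)$ holds in each interval. A short sketch with illustrative parameter values (units $\hslash =c=m=1$):

```python
import math

# Numerical consistency check of the composite field defined above: A_x must
# be continuous at t1, t2 and obey E(t) = -dA_x/dt in every interval.
# Parameter values are illustrative.
E, k1, k2, t1, t2 = 1.0, 0.5, 0.3, -2.0, 3.0

def E_field(t):
    if t < t1:
        return E * math.exp(k1 * (t - t1))      # interval I (switching on)
    if t <= t2:
        return E                                # interval II (constant)
    return E * math.exp(-k2 * (t - t2))         # interval III (switching off)

def A_x(t):
    if t < t1:
        return (E / k1) * (-math.exp(k1 * (t - t1)) + 1 - k1 * t1)
    if t <= t2:
        return -E * t
    return (E / k2) * (math.exp(-k2 * (t - t2)) - 1 - k2 * t2)

# continuity of the potential at the matching instants
assert abs(A_x(t1 - 1e-9) - A_x(t1)) < 1e-6
assert abs(A_x(t2) - A_x(t2 + 1e-9)) < 1e-6

# E = -dA_x/dt, checked by a central difference inside each interval
h = 1e-6
for t in (-4.0, 0.0, 5.0):
    dA = (A_x(t + h) - A_x(t - h)) / (2 * h)
    assert abs(E_field(t) + dA) < 1e-5
print("E(t) = -dA_x/dt and A_x is continuous at t1, t2")
```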
The Dirac equation\footnote{%
The subscript \textquotedblleft $\perp $\textquotedblright\ denotes spatial
components perpendicular to the electric field (e.g. $\mathbf{x}_{\perp
}=\left\{ x^{2},...,x^{D}\right\} $) and $\psi (x)$ is a $2^{[d/2]}$%
-component spinor ($[d/2]$ stands for the integer part of the ratio $d/2$).
As usual, $m$ denotes the electron mass, $\gamma ^{\mu }$ are $\gamma $%
-matrices in $d$ dimensions and $U\left( t\right) $ denotes the potential
energy of a particle with algebraic charge $q$. We select the electron as
the main particle, $q=-e$ with $e$ representing the absolute value of the
electron charge. Hereafter we use the relativistic system of units ($\hslash
=c=1$), except when indicated otherwise.}%
\begin{eqnarray}
&&i\partial _{t}\psi \left( x\right) =H\left( t\right) \psi \left( x\right)
\,,\ \ H\left( t\right) =\gamma ^{0}\left( \boldsymbol{\gamma }\mathbf{P}%
+m\right) \,, \notag \\
&&\,P_{x}=-i\partial _{x}-U\left( t\right) ,\ \ \mathbf{P}_{\bot }=-i%
\boldsymbol{\nabla }_{\perp },\ \ U\left( t\right) =qA_{x}\left( t\right) \,,
\label{s3}
\end{eqnarray}%
can be solved exactly in each one of the intervals above. Since the
corresponding exact solutions are known (see, e.g., the review
\cite{AdoGavGit17}), we present only a few details on obtaining them. First,
we represent the Dirac spinors $\psi _{n}\left( x\right) $ in terms of new
time-dependent spinors $\phi _{n}(t)$ as%
\begin{eqnarray}
&&\psi _{n}\left( x\right) =\exp \left( i\mathbf{pr}\right) \psi _{n}\left(
t\right) \,,\ \ n=(\mathbf{p},\sigma )\,, \notag \\
&&\psi _{n}\left( t\right) =\left\{ \gamma ^{0}i\partial _{t}-\left[
p_{x}-U\left( t\right) \right] -\boldsymbol{\gamma }\mathbf{p}+m\right\}
\phi _{n}(t)\,, \label{2.10}
\end{eqnarray}%
and separate the spinning degrees of freedom via the substitution $\phi
_{n}(t)=\varphi _{n}\left( t\right) v_{\chi ,\sigma }$, in which $v_{\chi
,\sigma }$ and $\varphi _{n}\left( t\right) $ denote constant
orthonormalized spinors and scalar functions, respectively. The constant
spinors satisfy
\begin{equation}
\gamma ^{0}\gamma ^{1}v_{\chi ,\sigma }=\chi v_{\chi ,\sigma }\,,\ \ v_{\chi
,\sigma }^{\dag }v_{\chi ^{\prime },\sigma ^{\prime }}=\delta _{\chi ,\chi
^{\prime }}\delta _{\sigma ,\sigma ^{\prime }\,}, \label{e2a}
\end{equation}%
where $\chi =\pm 1$ are eigenvalues of $\gamma ^{0}\gamma ^{1}$ and $\sigma
=(\sigma _{1},\sigma _{2},\dots ,\sigma _{\lbrack d/2]-1})$ represents a set
of additional eigenvalues, corresponding to spin operators compatible with $%
\gamma ^{0}\gamma ^{1}$. The constant spinors are subjected to additional
conditions depending on the space-time dimensions, whose details can be
found in Ref. \cite{AdoGavGit17}. After these substitutions, the Dirac
spinor can be obtained through the solutions of the second-order ordinary
differential equation\footnote{%
For scalar particles, the exact solutions for the Klein-Gordon equation $%
\phi _{n}\left( x\right) $ are connected with the scalar functions as $\phi
_{n}\left( x\right) =\exp \left( i\mathbf{pr}\right) \varphi _{n}\left(
t\right) $. Since spinning degrees-of-freedom are absent in this case, $n=%
\mathbf{p}$ and $\chi =0$ in Eq. (\ref{s2}) as well as in all subsequent
formulas.}%
\begin{equation}
\left\{ \frac{d^{2}}{dt^{2}}+\left[ p_{x}-U\left( t\right) \right] ^{2}+\pi
_{\perp }^{2}-i\chi \dot{U}\left( t\right) \right\} \varphi _{n}\left(
t\right) =0\,,\ \ \pi _{\perp }=\sqrt{\mathbf{p}_{\perp }^{2}+m^{2}}\,.
\label{s2}
\end{equation}%
In the switching-on (\textrm{I}) and switching-off (\textrm{III}) intervals,
the solutions are expressed in terms of confluent hypergeometric functions
(CHFs),%
\begin{align}
& \varphi _{n}^{j}\left( t\right) =b_{2}^{j}y_{1}^{j}\left( \eta _{j}\right)
+b_{1}^{j}y_{2}^{j}\left( \eta _{j}\right) \,, \notag \\
& y_{1}^{j}\left( \eta _{j}\right) =e^{-\eta _{j}/2}\eta _{j}^{\nu _{j}}\Phi
\left( a_{j},c_{j};\eta _{j}\right) \,, \notag \\
& y_{2}^{j}\left( \eta _{j}\right) =e^{\eta _{j}/2}\eta _{j}^{-\nu _{j}}\Phi
\left( 1-a_{j},2-c_{j};-\eta _{j}\right) \,, \label{i.3.3}
\end{align}%
while in the constant-field interval \textrm{II} the solutions are expressed
in terms of Weber parabolic cylinder functions (WPCFs),%
\begin{eqnarray}
\varphi _{n}\left( z\right) &=&b^{+}u_{+}\left( z\right) +b^{-}u_{-}\left(
z\right) \,, \notag \\
u_{+}\left( z\right) &=&D_{\beta +\left( \chi -1\right) /2}\left( z\right)
\,,\ \ u_{-}\left( z\right) =D_{-\beta -\left( \chi +1\right) /2}\left(
iz\right) \,. \label{ii.5}
\end{eqnarray}%
In these equations, $a_{j}$, $c_{j}$, $\nu _{j}$ and $\beta $ are the parameters%
\begin{eqnarray}
&&a_{1}=\frac{1}{2}\left( 1+\chi \right) +i\Xi _{1}^{-}\,,\ \ a_{2}=\frac{1}{%
2}\left( 1+\chi \right) +i\Xi _{2}^{+}\,, \notag \\
&&\Xi _{j}^{\pm }=\frac{\omega _{j}\pm \Pi _{j}}{k_{j}}\,,\ \ c_{j}=1+2\nu
_{j}\,,\ \ \nu _{j}=\frac{i\omega _{j}}{k_{j}}\,,\ \ \beta =\frac{i\lambda }{%
2}\,, \notag \\
&&\omega _{j}=\sqrt{\Pi _{j}^{2}+\pi _{\perp }^{2}}\,,\ \ \Pi _{j}=p_{x}-%
\frac{eE}{k_{j}}\left[ \left( -1\right) ^{j}+k_{j}t_{j}\right] \,,\ \
\lambda =\frac{\pi _{\perp }^{2}}{eE}\,, \label{i.3}
\end{eqnarray}%
$z$ and $\eta _{j}$ are time-dependent functions%
\begin{eqnarray}
&&\eta _{1}\left( t\right) =ih_{1}e^{k_{1}\left( t-t_{1}\right) }\,,\ \ \eta
_{2}\left( t\right) =ih_{2}e^{-k_{2}\left( t-t_{2}\right) }\,,\ \ h_{j}=%
\frac{2eE}{k_{j}^{2}}\,, \label{i.0} \\
&&z\left( t\right) =\left( 1-i\right) \xi \left( t\right) \,,\ \ \xi \left(
t\right) =\frac{eEt-p_{x}}{\sqrt{eE}}\,, \label{ii.3}
\end{eqnarray}%
and $b_{1,2}^{j}$, $b^{\pm }$ are constants fixed by the initial conditions.
In addition, the index $j$ in Eqs. (\ref{i.3.3}), (\ref{i.3}) and (\ref{i.0})
distinguishes quantities associated with the switching-on $\left( j=1\right) $
interval from those associated with the switching-off $\left( j=2\right) $
interval.
By virtue of the asymptotic properties of the CHFs at $t\rightarrow \pm \infty $%
, the solutions given by Eq. (\ref{i.3.3}) can be classified as
particle/antiparticle states%
\begin{eqnarray}
\ _{+}\varphi _{n}\left( t\right) &=&\ _{+}\mathcal{N}\exp \left( i\pi \nu
_{1}/2\right) y_{2}^{1}\left( \eta _{1}\right) \,,\,\ _{-}\varphi _{n}\left(
t\right) =\ _{-}\mathcal{N}\exp \left( -i\pi \nu _{1}/2\right)
y_{1}^{1}\left( \eta _{1}\right) \,,\ \ t\in \mathrm{I}\,, \notag \\
\ ^{+}\varphi _{n}\left( t\right) &=&\ ^{+}\mathcal{N}\exp \left( -i\pi \nu
_{2}/2\right) y_{1}^{2}\left( \eta _{2}\right) \,,\,\ ^{-}\varphi _{n}\left(
t\right) =\ ^{-}\mathcal{N}\exp \left( i\pi \nu _{2}/2\right)
y_{2}^{2}\left( \eta _{2}\right) \,,\ \ t\in \mathrm{III}\,, \label{i.4.1}
\end{eqnarray}%
since, at the infinitely remote past $t\rightarrow -\infty $ and future $%
t\rightarrow +\infty $, the set above behaves as plane waves,%
\begin{equation}
\ _{\zeta }\varphi _{n}\left( t\right) =\ _{\zeta }\mathcal{N}e^{-i\zeta
\omega _{1}t}\,,\ \ t\rightarrow -\infty \,,\ \ ^{\zeta }\varphi _{n}\left(
t\right) =\ ^{\zeta }\mathcal{N}e^{-i\zeta \omega _{2}t}\,,\ \ t\rightarrow
+\infty \,, \label{i.4.0}
\end{equation}%
where $\omega _{1}$ denotes the energy of initial particles at $t\rightarrow
-\infty $, $\omega _{2}$ denotes the energy of final particles at $%
t\rightarrow +\infty $ and $\zeta $ labels electron $\left( \zeta =+\right) $
and positron $\left( \zeta =-\right) $ states. With the help of such
solutions, one may construct IN $\left\{ \ _{\zeta }\psi \left( x\right)
\right\} $ and OUT $\left\{ \ ^{\zeta }\psi \left( x\right) \right\} $ sets
of Dirac spinors. The normalization constants $\;_{\zeta }\mathcal{N}=\ _{\zeta }CV_{\left( d-1\right) }^{-1/2}$ and $\;^{\zeta }\mathcal{N}=\ ^{\zeta }CV_{\left( d-1\right) }^{-1/2}$ are calculated with respect to the usual inner product for Fermions and Bosons, with $\ _{\zeta }C$ and $\ ^{\zeta }C$ given by%
\begin{equation}
\ _{\zeta }C=\left\{
\begin{array}{ll}
\left( 2\omega _{1}q_{1}^{\zeta }\right) ^{-1/2}\,, & \mathrm{Fermi\,,} \\
\left( 2\omega _{1}\right) ^{-1/2}\,, & \mathrm{Bose\,,}%
\end{array}%
\right. \,,\ ^{\zeta }C=\left\{
\begin{array}{ll}
\left( 2\omega _{2}q_{2}^{\zeta }\right) ^{-1/2}\,, & \mathrm{Fermi\,,} \\
\left( 2\omega _{2}\right) ^{-1/2}\,, & \mathrm{Bose\,,}%
\end{array}%
\right. \,,\ q_{j}^{\zeta }=\omega _{j}-\chi \zeta \Pi _{j}\,. \label{i.4.2}
\end{equation}%
For further details, e.g., see Ref. \cite{AdoGavGit17}.
With the exact solutions discussed above, one can write complete sets of
solutions for the whole time interval $t\in \left( -\infty ,+\infty \right) $%
. Using the classification (\ref{i.4.1}) and the solutions given by Eq. (\ref%
{ii.5}), the Dirac spinors (\ref{2.10}) (or Klein-Gordon solutions) for the
whole time duration can be constructed from the following set of solutions,%
\begin{eqnarray}
\ ^{+}\varphi _{n}\left( t\right) &=&\left\{
\begin{array}{ll}
\kappa g\left( _{-}|^{+}\right) \ _{-}\varphi _{n}\left( t\right) +g\left(
_{+}|^{+}\right) \ _{+}\varphi _{n}\left( t\right) \,, & t\in \mathrm{I}\,
\\
b_{1}^{+}u_{+}\left( t\right) +b_{1}^{-}u_{-}\left( t\right) \,, & t\in
\mathrm{II\,} \\
\ ^{+}\mathcal{N}\exp \left( -i\pi \nu _{2}/2\right) y_{1}^{2}\left( \eta
_{2}\right) \,, & t\in \mathrm{III\,}%
\end{array}%
\right. ; \label{v1} \\
\ _{-}\varphi _{n}\left( t\right) &=&\left\{
\begin{array}{ll}
\;_{-}\mathcal{N}\exp \left( -i\pi \nu _{1}/2\right) y_{1}^{1}\left( \eta
_{1}\right) \,, & t\in \mathrm{I\,} \\
b_{2}^{+}u_{+}\left( t\right) +b_{2}^{-}u_{-}\left( t\right) \,, & t\in
\mathrm{II\,} \\
g\left( ^{+}|_{-}\right) \ ^{+}\varphi _{n}\left( t\right) +\kappa g\left(
^{-}|_{-}\right) \ ^{-}\varphi _{n}\left( t\right) \,, & t\in \mathrm{III\,}%
\end{array}%
\right. , \label{v4}
\end{eqnarray}%
where $b_{1,2}^{\pm }$, $g\left( _{\pm }|^{+}\right) $, and $g\left( ^{\pm
}|_{-}\right) $ are coefficients satisfying $g\left( ^{\zeta ^{\prime }}|_{\zeta
}\right) =g\left( _{\zeta ^{\prime }}|^{\zeta }\right) ^{\ast }$. Here $%
\kappa $ is an auxiliary constant that allows us to present solutions of the
Klein-Gordon $\left( \kappa =-1\right) $ and Dirac $\left( \kappa =+1\right) $
equations simultaneously. For the solutions of the Dirac equation, the $g$-coefficients
satisfy unitarity relations%
\begin{equation}
\sum_{\varkappa }g\left( ^{\zeta }|_{\varkappa }\right) g\left( _{\varkappa
}|^{\zeta ^{\prime }}\right) =\sum_{\varkappa }g\left( _{\zeta }|^{\varkappa
}\right) g\left( ^{\varkappa }|_{\zeta ^{\prime }}\right) =\delta _{\zeta
,\zeta ^{\prime }}\, \label{v4.2}
\end{equation}%
while for the solutions of the Klein-Gordon equation the $g$-coefficients
satisfy the relations
\begin{equation}
\sum_{\varkappa }\varkappa g\left( ^{\zeta }|_{\varkappa }\right) g\left(
_{\varkappa }|^{\zeta ^{\prime }}\right) =\sum_{\varkappa }\varkappa g\left(
_{\zeta }|^{\varkappa }\right) g\left( ^{\varkappa }|_{\zeta ^{\prime
}}\right) =\zeta \delta _{\zeta ,\zeta ^{\prime }}\,. \label{v4.4}
\end{equation}
To obtain the $g$-coefficients, it is convenient to consider the continuity
conditions at the instants $t_{1}$, $t_{2}$,%
\begin{equation*}
\ _{-}^{+}\varphi _{n}\left( t_{1,2}-0\right) =\ _{-}^{+}\varphi _{n}\left(
t_{1,2}+0\right) \,,\ \ \partial _{t}\ _{-}^{+}\varphi _{n}\left(
t_{1,2}-0\right) =\partial _{t}\ _{-}^{+}\varphi _{n}\left( t_{1,2}+0\right)
\,,
\end{equation*}%
substitute the appropriate normalization constants for each case, given by Eq. (%
\ref{i.4.2}), and use the Wronskian determinants of the CHFs and WPCFs. After
these manipulations, one can readily verify that $g\left( _{-}|^{+}\right) $
and $g\left( ^{+}|_{-}\right) $ for the Dirac case read%
\begin{eqnarray}
g\left( _{-}|^{+}\right) &=&\sqrt{\frac{q_{1}^{-}}{8eE\omega
_{1}q_{2}^{+}\omega _{2}}}\exp \left[ \frac{i\pi }{2}\left( \nu _{1}-\nu
_{2}+\beta +\frac{\chi }{2}\right) \right] \left[ f_{1}^{-}\left(
t_{2}\right) f_{2}^{+}\left( t_{1}\right) -f_{1}^{+}\left( t_{2}\right)
f_{2}^{-}\left( t_{1}\right) \right] \,, \notag \\
g\left( ^{+}|_{-}\right) &=&\sqrt{\frac{q_{2}^{+}}{8eE\omega
_{2}q_{1}^{-}\omega _{1}}}\exp \left[ \frac{i\pi }{2}\left( \nu _{2}-\nu
_{1}+\beta +\frac{\chi }{2}\right) \right] \left[ f_{1}^{+}\left(
t_{1}\right) f_{2}^{-}\left( t_{2}\right) -f_{1}^{-}\left( t_{1}\right)
f_{2}^{+}\left( t_{2}\right) \right] \,, \notag \\
f_{k}^{\pm }\left( t_{j}\right) &=&\left. \left[ (-1)^{j}k_{j}\eta _{j}\frac{%
dy_{k}^{j}\left( \eta _{j}\right) }{d\eta _{j}}+y_{k}^{j}\left( \eta
_{j}\right) \partial _{t}\right] u_{\pm }\left( z\right) \right\vert
_{t=t_{j}}\,, \label{r4}
\end{eqnarray}%
while for the Klein-Gordon case they have the form%
\begin{eqnarray}
g\left( _{-}|^{+}\right) &=&-\frac{1}{\sqrt{8eE\omega _{1}\omega _{2}}}\exp %
\left[ \frac{i\pi }{2}\left( \nu _{1}-\nu _{2}+\beta \right) \right] \left. %
\left[ f_{1}^{-}\left( t_{2}\right) f_{2}^{+}\left( t_{1}\right)
-f_{1}^{+}\left( t_{2}\right) f_{2}^{-}\left( t_{1}\right) \right]
\right\vert _{\chi =0}\,, \notag \\
g\left( ^{+}|_{-}\right) &=&\frac{1}{\sqrt{8eE\omega _{1}\omega _{2}}}\exp %
\left[ \frac{i\pi }{2}\left( \nu _{2}-\nu _{1}+\beta \right) \right] \left. %
\left[ f_{1}^{+}\left( t_{1}\right) f_{2}^{-}\left( t_{2}\right)
-f_{1}^{-}\left( t_{1}\right) f_{2}^{+}\left( t_{2}\right) \right]
\right\vert _{\chi =0}\,. \label{r7}
\end{eqnarray}
Taking into account that the $g$-coefficients establish the Bogoliubov
transformations, one may compute the fundamental quantities characterizing the
vacuum instability for Fermions (the Dirac case) and Bosons (the Klein-Gordon
case), for example, the differential mean number of pairs created from the
vacuum $N_{n}^{\mathrm{cr}}$, the total number of created pairs $N^{\mathrm{cr}%
}$, and the vacuum-to-vacuum transition probability $P_{v}$ as%
\begin{eqnarray}
&&N_{n}^{\mathrm{cr}}=\left\vert g\left( _{-}|^{+}\right) \right\vert
^{2}\,,\ \ N^{\mathrm{cr}}=\sum_{n}N_{n}^{\mathrm{cr}}\,, \notag \\
&&P_{v}=\exp \left[ \kappa \sum_{n}\ln \left( 1-\kappa N_{n}^{\mathrm{cr}%
}\right) \right] \,. \label{NP}
\end{eqnarray}
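The structure of Eq. (\ref{NP}) can be illustrated numerically: for Fermions ($\kappa =+1$) it reduces to $P_{v}=\prod_{n}(1-N_{n}^{\mathrm{cr}})$, while for Bosons ($\kappa =-1$) it gives $P_{v}=\prod_{n}(1+N_{n}^{\mathrm{cr}})^{-1}$. A minimal sketch with hypothetical occupation numbers:

```python
import math

# Sketch of P_v = exp(kappa * sum_n ln(1 - kappa * N_n)), kappa = +1 (Dirac)
# and kappa = -1 (Klein-Gordon). The N_n values below are hypothetical.

def vacuum_persistence(N_list, kappa):
    return math.exp(kappa * sum(math.log(1.0 - kappa * N) for N in N_list))

N = [0.10, 0.04, 0.01]  # hypothetical differential mean numbers N_n^cr

P_fermi = vacuum_persistence(N, +1)   # reduces to prod_n (1 - N_n)
P_bose = vacuum_persistence(N, -1)    # reduces to prod_n (1 + N_n)**(-1)
assert abs(P_fermi - 0.90 * 0.96 * 0.99) < 1e-12
assert abs(P_bose - 1.0 / (1.10 * 1.04 * 1.01)) < 1e-12
print(round(P_fermi, 4), round(P_bose, 4))
```

Note that Pauli blocking appears automatically: the fermionic $P_{v}$ is well defined only for $N_{n}^{\mathrm{cr}}\leq 1$.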
\section{General properties of the differential mean numbers of pairs
created \label{Sec3}}
The $g$-coefficients (\ref{r4}) and (\ref{r7}) enjoy certain properties
under time/momentum reversal that result in symmetries for differential
quantities. More precisely, the simultaneous change%
\begin{equation}
k_{1}\leftrightarrows k_{2}\,,\ \ t_{1}\leftrightarrows -t_{2}\,,\ \
p_{x}\leftrightarrows -p_{x}\,, \label{sym1}
\end{equation}%
yields a number of identities, for instance, $\Pi _{1}\leftrightarrows
-\Pi _{2}$, $\omega _{1}\leftrightarrows \omega _{2}$, $a_{1}%
\leftrightarrows a_{2}$, $c_{1}\leftrightarrows c_{2}$ so that $g\left(
_{-}|^{+}\right) $ and $g\left( ^{+}|_{-}\right) $ are related by%
\begin{equation}
g\left( _{-}|^{+}\right) \leftrightarrows \kappa g\left( ^{+}|_{-}\right) \,,
\label{sym2}
\end{equation}%
implying, in particular, that $N_{n}^{\mathrm{cr}}$ (and therefore the total
quantities) is even with respect to the exchanges (\ref{sym1}). Moreover, (%
\ref{r4}) and (\ref{r7}) are even with respect to $\mathbf{p}_{\perp }$, so
that all quantities in Eq. (\ref{NP}) are symmetric with respect to
the momenta $\mathbf{p}$ (for Fermions, these quantities do not depend on the
spin polarizations either). Such properties are helpful in computing
asymptotic estimates in several regimes, some of which are discussed in the
subsequent section.
Aside from these properties, it is useful to visualize how the differential mean
numbers $N_{n}^{\mathrm{cr}}$ are distributed over the quantum numbers (for
instance $p_{x}$) in order to outline some preliminary remarks concerning pair
creation, especially in cases where the asymptotic approximations of the WPCFs
and CHFs involved in the $g$-coefficients are not applicable\footnote{%
For example, when the argument $z$ of the WPCFs or the arguments $\eta _{j}$
of the CHFs are finite quantities, or when the parameters $a_{j}$, $c_{j}$ are finite.%
}. To this end, we present below some plots of the mean number of particles
created from the vacuum $N_{n}^{\mathrm{cr}}$ (\ref{NP}) as a function of $%
p_{x}$ for different values of $k_{1}$, $k_{2}$ and $T$ (Figs. \ref{Fig1a}, %
\ref{Fig1b} for Fermions and \ref{Fig2a}, \ref{Fig2b} for Bosons) for a
fixed amplitude $E$ of the composite field. For the sake of simplicity, we
set $\mathbf{p}_{\perp }=0$ and select a convenient system of units, in
which besides $\hslash =c=1$ the electron mass is also set equal to the
unity, $m=1$.
In this system, the Compton wavelength corresponds to one unit
of length, ${\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda} _{e}=\hslash /mc=1 \approx
3.8614\times 10^{-14}\,\mathrm{m}$; the Compton time corresponds to one unit of time, $%
{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda} _{e}/c=1 \approx 1.3\times 10^{-21}\,%
\mathrm{s}$; and the electron rest energy corresponds to one unit of energy, $%
mc^{2}=1 \approx 0.511\,\mathrm{MeV}$. In all plots
below, the longitudinal momentum $p_{x}$, the time duration $T$, and the parameters $k_{j}$
are given relative to the electron mass $m$, i.e., as the dimensionless
quantities $p_{x}/m$, $mT$ and $k_{j}/m$, respectively.
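In these units the uniform asymptotic level $e^{-\pi \lambda }$, shown as a dashed line in the figures, is immediate to evaluate for $\mathbf{p}_{\perp }=0$ and $E=E_{\mathrm{c}}$, where $\lambda =1$:

```python
import math

# The dashed-line level in the figures: for p_perp = 0, m = 1 and E = E_c
# (so that eE = m**2 and lambda = pi_perp**2 / (eE) = 1), the uniform
# distribution exp(-pi * lambda) is just exp(-pi). The charge is absorbed
# into the product eE here.
m, eE = 1.0, 1.0
p_perp = 0.0
lam = (p_perp**2 + m**2) / eE
level = math.exp(-math.pi * lam)
print(round(level, 4))  # 0.0432
```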
\begin{figure}[th]
\begin{center}
\includegraphics[scale=0.48]{Fig1F.pdf}
\end{center}
\caption{(color online) Differential mean number of Fermions created from
the vacuum $N_{n}^{\mathrm{cr}}$ (solid lines) by a symmetrical composite
field, with $k_{1}=k_{2}=k$ and amplitude $E=E_{\mathrm{c}}=m^{2}/e=1$
fixed. Graph (A) shows distributions with $k/m=1$ fixed,
while Graph (B) shows distributions with $mT=5$ fixed. In (A), the solid lines labeled $(\mathrm{i})$, $(\mathrm{ii})$ and $(\mathrm{iii})$ refer to $mT=5$, $mT=10$ and $mT=50$, respectively. In (B), $(\mathrm{i})$, $(\mathrm{ii})$ and $(\mathrm{iii})$ refer to $k/m=0.1$, $k/m=0.05$ and $k/m=0.01$, respectively. The horizontal dashed line corresponds to the uniform distribution $e^{-\protect
\pi \protect\lambda }$ which, in this system of units and for $\mathbf{p}%
_{\perp }=0$, is $e^{-\protect\pi }$.}
\label{Fig1a}
\end{figure}
\begin{figure}[th]
\begin{center}
\includegraphics[scale=0.48]{Fig2F.pdf}
\end{center}
\caption{(color online) Differential mean number of Fermions created from
the vacuum $N_{n}^{\mathrm{cr}}$ (solid lines) by asymmetrical composite
fields, with $k_{1}\neq k_{2}$ and amplitude $E=E_{\mathrm{c}}=m^{2}/e=1$
fixed. In both graphs $mT$ is fixed: Graph
(C) shows $mT=10$ with $k_{1}/m=0.5$, while Graph (D) shows $mT=5$.
In (C), the solid lines labeled $(\mathrm{i})$, $(\mathrm{ii})$ and $(\mathrm{iii})$ refer to $k_2/m=1$, $k_2/m=5$ and $k_2/m=10$, respectively. In (D), $(\mathrm{i})$ denotes $k_1/m=0.5,\,k_2/m=0.3$, $(\mathrm{ii})$ denotes $k_1/m=0.1,\,k_2/m=0.07$ and $(\mathrm{iii})$ denotes $k_1/m=0.01,\,k_2/m=0.008$.
The horizontal dashed line corresponds to the uniform distribution $e^{-%
\protect\pi \protect\lambda }$ which, in this system of units and $%
\mathbf{p}_{\perp }=0$, is $e^{-\protect\pi }$.}
\label{Fig1b}
\end{figure}
\begin{figure}[th]
\begin{center}
\includegraphics[scale=0.48]{Fig3F.pdf}
\end{center}
\caption{(color online) Differential mean number of Bosons created from the
vacuum $N_{n}^{\mathrm{cr}}$ (solid lines) by a symmetrical composite field,
with $k_{1}=k_{2}=k$ and amplitude $E=E_{\mathrm{c}}=m^{2}/e=1$ fixed.
Graph (A) shows distributions with $k/m=1$ fixed, while in Graph (B) $mT=5$ is fixed. In (A), the solid lines labeled with $(\mathrm{i})$, $(\mathrm{ii})$ and $(\mathrm{iii})$ refer to $mT=5$, $mT=10$ and $mT=50$, respectively. In (B), $(\mathrm{i})$, $(\mathrm{ii})$ and $(\mathrm{iii})$ refer to $k/m=0.1$, $k/m=0.05$ and $k/m=0.01$, respectively. The horizontal dashed line corresponds to the uniform distribution $e^{-\protect\pi \protect%
\lambda }$ which, in this system of units and $\mathbf{p}_{\perp }=0$%
, is $e^{-\protect\pi }$.}
\label{Fig2a}
\end{figure}
\begin{figure}[th]
\begin{center}
\includegraphics[scale=0.48]{Fig4F.pdf}
\end{center}
\caption{(color online) Differential mean number of Bosons created from the
vacuum $N_{n}^{\mathrm{cr}}$ (solid lines) by asymmetrical composite fields,
with $k_{1}\neq k_{2}$ and amplitude $E=E_{\mathrm{c}}=m^{2}/e=1$ fixed.
In both graphs $mT$ is fixed: Graph (C) corresponds to $mT=10$ and $k_{1}/m=0.5$, while Graph (D) corresponds to $mT=5$.
In (C), the solid lines labeled with $(\mathrm{i})$, $(\mathrm{ii})$ and $(\mathrm{iii})$ refer to $k_2/m=1$, $k_2/m=5$ and $k_2/m=10$, respectively. In (D), $(\mathrm{i})$ denotes $k_1/m=0.5,\,k_2/m=0.3$, $(\mathrm{ii})$ denotes $k_1/m=0.1,\,k_2/m=0.07$ and $(\mathrm{iii})$ denotes $k_1/m=0.01,\,k_2/m=0.008$. The horizontal dashed line corresponds to the uniform distribution $e^{-\protect\pi \protect
\lambda }$ which, in this system of units and $\mathbf{p}_{\perp }=0$ is $e^{-\protect\pi }$.}
\label{Fig2b}
\end{figure}
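The reference level quoted in the captions can be reproduced directly. As a quick numerical sketch of our own (not from the original text; we work in dimensionless units $m=e=1$, so $E=1$ corresponds to $E_{\mathrm{c}}$, assume the standard definition $\lambda =(m^{2}+\mathbf{p}_{\perp }^{2})/eE$, and the function name is a hypothetical choice):

```python
import math

def uniform_level(E, p_perp=0.0, m=1.0, e=1.0):
    """Uniform plateau e^{-pi*lambda} with lambda = (m^2 + p_perp^2)/(eE),
    in dimensionless units m = e = 1 (E = 1 then corresponds to E = E_c)."""
    lam = (m**2 + p_perp**2) / (e * E)
    return math.exp(-math.pi * lam)

level_c = uniform_level(E=1.0)     # dashed line of the figures above: e^{-pi}
level_10c = uniform_level(E=10.0)  # e^{-pi/10}, the level for E = 10 E_c
```

For $\mathbf{p}_{\perp }=0$ and $E=E_{\mathrm{c}}$ this gives $e^{-\pi }\approx 0.043$; larger $\mathbf{p}_{\perp }$ only suppresses the plateau further.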
The results displayed in the figures above reveal wider distributions
for composite electric fields with larger $T$ (curves (iii) in graphs (A) of
Figs. \ref{Fig1a}, \ref{Fig2a}) or smaller $k_{j}$ (curves (iii) in graphs (B) of
Figs. \ref{Fig1a}, \ref{Fig2a}), and narrower distributions for the
opposite configurations, associated with smaller $T$ (curves (i) in graphs (A))
or larger $k_{j}$ (curves (i) in graphs (B) of Figs. \ref{Fig1a}, \ref{Fig2a}).
Since the time duration is set by $T$ and $k_{j}^{-1}$ (the latter
representing the time scales of the increasing and decreasing phases),
these results are consistent with the fact that the larger the duration of
an electric field, the longer it has to accelerate pairs. Therefore larger
values of $p_{x}/m$ are expected to occur for electric fields with larger
time duration. Moreover, it
should be noted that the distributions above tend to the uniform
distribution $N_{n}^{\mathrm{cr}}=e^{-\pi \lambda }$ (horizontal dashed
lines) for $T$ and $k_{j}^{-1}$ sufficiently large. This is not unexpected
since the composite field approaches a constant field as $T$ and $%
k_{j}^{-1}$ increase, becoming exactly constant in the limit $%
T\rightarrow \infty $ and $k_{j}^{-1}\rightarrow \infty $. Last, but not
least, observing Figs. \ref{Fig1b}, \ref{Fig2b} we find that asymmetrical
configurations $\left( k_{1}\neq k_{2}\right) $ yield asymmetrical
distributions. This is associated with the fact that different phases $%
k_{1},k_{2}$ imply, in general, different times to accelerate pairs during the
switching-on and -off processes. An interpretation of these
results follows from the semiclassical analysis: Electrons created from the
vacuum have quantum numbers $p_{x}$ within the range $-eE\left(
T/2+k_{1}^{-1}\right) \leq p_{x}\leq eE\left( T/2+k_{2}^{-1}\right) $,
corresponding to longitudinal kinetic momenta $\Pi _{x}\left( t\right)
=p_{x}+eA_{x}\left( t\right) $ which, at $t\rightarrow +\infty $, varies
according to $-eE\left( T+k_{1}^{-1}+k_{2}^{-1}\right) \leq \Pi _{x}\left(
+\infty \right) \leq 0$. Assuming that pairs are materialized from the
vacuum with zero longitudinal momentum $\Pi _{x}\left( t\right) =0$, it
follows from the classical equations of motion that the kinetic longitudinal
momentum at $t\rightarrow +\infty $ has the form $\Pi _{x}\left( +\infty
\right) =-e\int_{t}^{+\infty }dt^{\prime }E\left( t^{\prime }\right) $,
where $t$ is the time of creation. Thus, if an electron is created at $%
t\rightarrow -\infty $, its longitudinal kinetic momentum at $t\rightarrow
+\infty $ is maximal (in absolute value) $\Pi _{x}\left( +\infty \right)
=-eE\left( T+k_{1}^{-1}+k_{2}^{-1}\right) $. At the same time, its
longitudinal kinetic momentum is expressed in terms of $p_{x}$ as $\Pi
_{x}\left( +\infty \right) =p_{x}-eE\left( T/2+k_{2}^{-1}\right) $, which means
that such an electron has the minimal value of $p_{x}$, namely
$p_{x}\rightarrow p_{x}^{\min }=-eE\left( T/2+k_{1}^{-1}\right) $. On the
other hand, if the electron is created at $t\rightarrow +\infty $, then its
longitudinal kinetic momentum tends to zero, $\Pi _{x}\left( +\infty \right)
\rightarrow 0$, which means that the corresponding quantum number $p_{x}$
tends to its maximum, $p_{x}\rightarrow p_{x}^{\max }=eE\left(
T/2+k_{2}^{-1}\right) $. According to this interpretation, asymmetric
configurations result in asymmetric distributions. This explains asymmetric
distributions in graphs (C) and (D), Figs. \ref{Fig1b}, \ref{Fig2b}, for
instance.
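The semiclassical support of the spectrum can be checked with elementary arithmetic. The following sketch is ours (hypothetical function name, dimensionless units $m=e=1$, illustrative parameter values taken from the figure captions):

```python
# Dimensionless units m = e = 1, so E = 1 corresponds to the critical field E_c.
E = 1.0

def px_bounds(T, k1, k2):
    """Semiclassical support of the spectrum of created electrons:
    -eE(T/2 + 1/k1) <= p_x <= eE(T/2 + 1/k2)."""
    return -E * (T / 2 + 1 / k1), E * (T / 2 + 1 / k2)

# Symmetric switching (k1 = k2) gives a symmetric support.
lo, hi = px_bounds(T=5.0, k1=1.0, k2=1.0)
# Asymmetric switching (k1 != k2) skews the support, as in graphs (C), (D).
lo_a, hi_a = px_bounds(T=5.0, k1=0.5, k2=0.3)
```

For $k_{1}=k_{2}$ the bounds are mirror images, while $k_{1}\neq k_{2}$ shifts the edge on one side, in line with the asymmetric distributions of Figs. \ref{Fig1b}, \ref{Fig2b}.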
\section{Differential and total quantities in some special configurations
\label{Sec4}}
Irrespective of the $t$-electric potential step under consideration, it is
known that the most favorable conditions for pair creation from the vacuum
are associated with strong fields acting over a sufficiently large period of
time, in which differential and total quantities are significant. For the
composite electric field (\ref{s2.0}), the time duration is encoded in two
sets, namely, $\left( k_{1}^{-1},k_{2}^{-1}\right) $ and $\left(
t_{1},t_{2}\right) $. The former represent scales of time duration for the
increasing and decreasing phases of the electric field, defined at intervals
$\mathrm{I}$ and $\mathrm{III}$, while the latter corresponds to the time
duration in which the field is constant, defined at interval $\mathrm{II}$.
If the period $T$ is relatively short
(see, e.g., the cases with $mT=5$ and $k/m<0.5$ on the right side of Figs. \ref{Fig1a} - \ref{Fig2b}),
the effects of pair creation tend
to those obtained for the peak field \cite{AdoGavGit17,AdoGavGit16}. The
latter field corresponds to the limit of the composite field in which the
intermediate interval $T$ is absent. From the results above, we observe that
the existence of a finite interval $T$, between \textquotedblleft
slow\textquotedblright\ switching-on and -off processes, has no significant
influence on the distribution of the differential mean numbers $N_{n}^{%
\mathrm{cr}}$ over the quantum numbers (see appropriate asymptotic formulae
in Ref. \cite{AdoGavGit17}). The influence of the $T$-constant interval
appears only in the next-to-leading order.
A composite electric field of large duration corresponds to small values of
the switching-on parameter $k_{1}$ and switching-off parameter $k_{2}$, and a large $%
T=t_{2}-t_{1}$,\footnote{%
Without loss of generality, we select from now on a symmetrical interval
\textrm{II}, in which $t_{1}=-T/2=-t_{2}$.} satisfying the following
condition%
\begin{equation}
\min \left( \sqrt{eE}T,eEk_{1}^{-2},eEk_{2}^{-2}\right) \gg \max \left( 1,%
\frac{m^{2}}{eE}\right) \,. \label{s3.1}
\end{equation}%
The condition (\ref{s3.1}) defines a configuration in which the field takes
a sufficiently large time to reach the constant regime (slow switching-on
process, $k_{1}^{-1}$ large), remains constant over a sufficiently large
interval $T$ and finally takes another sufficiently large time to switch-off
completely (slow switching-off process, $k_{2}^{-1}$ large).
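Condition (\ref{s3.1}) is a simple comparison of time scales and can be tested numerically. The sketch below is our own (the function name and the factor of $10$ used to model \textquotedblleft $\gg $\textquotedblright\ are arbitrary assumptions, not from the text):

```python
import math

def slowly_varying(E, T, k1, k2, m=1.0, e=1.0, big=10.0):
    """Checks condition (s3.1):
    min(sqrt(eE) T, eE/k1^2, eE/k2^2) >> max(1, m^2/(eE)).
    The factor `big` modeling '>>' is an arbitrary choice (assumption)."""
    lhs = min(math.sqrt(e * E) * T, e * E / k1**2, e * E / k2**2)
    rhs = max(1.0, m**2 / (e * E))
    return lhs >= big * rhs

slowly_varying(E=1.0, T=50.0, k1=0.1, k2=0.1)  # slow switching, long pulse
slowly_varying(E=1.0, T=5.0, k1=1.0, k2=1.0)   # short pulse: condition fails
```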
The most important objects in vacuum instability by external fields are the
total number of particles created from the vacuum $N^{\mathrm{cr}}$ and the
vacuum-to-vacuum transition probability $P_{v}$, both given by Eq. (\ref{NP}%
). The first quantity corresponds to the summation of the differential mean
numbers $N_{n}^{\mathrm{cr}}$ over the momenta $\mathbf{p}$, and spin
degrees-of-freedom%
\begin{equation}
N^{\mathrm{cr}}=V_{\left( d-1\right) }n^{\mathrm{cr}}\,,\ \ n^{\mathrm{cr}}=%
\frac{J_{\left( d\right) }}{\left( 2\pi \right) ^{d-1}}\int d\mathbf{p}%
N_{n}^{\mathrm{cr}}\,, \label{st1}
\end{equation}%
which, in fact, is reduced to the calculation of the density of pairs
created from the vacuum $n^{\mathrm{cr}}$. Here the summation over $\mathbf{p%
}$ was transformed into an integral and $J_{\left( d\right) }=2^{\left[ d/2%
\right] -1}$ denotes the total number of spin projections in a $d$-dimensional
space. These are factored out since the numbers $N_{n}^{\mathrm{cr}}$ are
independent of spin polarization. The dominant contribution of the densities
$n^{\mathrm{cr}}$ in the slowly varying regime is proportional to the total
increment of the longitudinal kinetic momentum, $\Delta U=\left\vert \Pi
_{2}-\Pi _{1}\right\vert =e\left\vert A_{x}\left( +\infty \right)
-A_{x}\left( -\infty \right) \right\vert $, which is the largest parameter
in the problem \cite{GavGit17}. Hence it is meaningful to approximate the
total density $n^{\mathrm{cr}}$ by its dominant contribution $\tilde{n}^{%
\mathrm{cr}}$, corresponding to an integral over a specific domain $\Omega $%
\begin{equation}
n^{\mathrm{cr}}\approx \tilde{n}^{\mathrm{cr}}=\frac{J_{\left(
d\right) }}{\left( 2\pi \right) ^{d-1}}\int_{\mathbf{p}\in \Omega }d\mathbf{p%
}N_{n}^{\mathrm{cr}}\,, \label{st1b}
\end{equation}%
whose result is proportional to $\Delta U$. As is typical for $t$%
-electric potential steps, such a domain $\Omega $ is defined by a specific
range of values of the longitudinal momentum $p_{x}$ and restricted values
of the perpendicular momentum $\mathbf{p}_{\perp }$ which, under the
condition (\ref{s3.1}), is%
\begin{equation}
\Omega :\left\{ \frac{|p_{x}|}{\sqrt{eE}}\leq\sqrt{eE}\frac{T}{2}+\frac{3}{2}\sqrt{\frac{h_1}{2}}\,,\,\, \sqrt{\lambda}<K_{\perp}\,,\,\,K_{\perp}^{2}\gg\max \left(1,\frac{m^2}{eE}\right)\right\}\,. \label{st1c}
\end{equation}
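For concreteness, the spin factor $J_{\left( d\right) }=2^{\left[ d/2\right] -1}$ entering Eqs. (\ref{st1}) and (\ref{st1b}) can be tabulated directly (a trivial sketch of our own; the function name is a hypothetical choice):

```python
def J(d):
    """Spin factor J_(d) = 2^([d/2] - 1) of Eq. (st1),
    with [.] denoting the integer part."""
    return 2 ** (d // 2 - 1)

spin_factors = [J(d) for d in (2, 3, 4, 5)]
```

In $d=2$ and $d=3$ there is a single spin degree of freedom factored out, while in $d=4$ (i.e. $3+1$ dimensions) there are two, as expected for Dirac fermions.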
In this case, using asymptotic formulas given in Ref. \cite{AdoGavGit17}, one
can see that the differential mean numbers are practically uniform over a
wide range of values of the kinetic momenta within the domain $\Omega $, while
they decrease exponentially beyond these ranges. In the leading-order approximation,
the mean numbers are%
\begin{equation}
N_{n}^{\mathrm{cr}}\sim \left\{
\begin{array}{ll}
\exp \left( -2\pi \Xi _{1}^{-}\right) \,, & \mathrm{for}\ \ p_{x}/\sqrt{eE}<-%
\sqrt{eE}T/2\,, \\
e^{-\pi \lambda }\,, & \mathrm{for}\ \ \left\vert p_{x}\right\vert /\sqrt{eE}%
\leq \sqrt{eE}T/2\,, \\
\exp \left( -2\pi \Xi _{2}^{+}\right) \,, & \mathrm{for}\ \ p_{x}/\sqrt{eE}>+%
\sqrt{eE}T/2\,.%
\end{array}%
\right. \label{fas17}
\end{equation}%
It is clear that the asymptotic forms
(\ref{fas17}), specified in each range above, coincide with the asymptotic forms
for the $T$-constant and exponential electric fields; see, e.g., Ref. \cite%
{AdoGavGit17}. Thus, in each subdomain of $\Omega $ associated with a particular
part of the field, the principal terms in the distribution $N_{n}^{\mathrm{cr}}$ do
not depend on the form of the field in neighboring intervals; only terms of
subsequent orders acquire such a dependence. It follows that the
dominant contribution to the density of pairs created by the composite
dominant contribution for the density of pairs created by the composite
field is expressed as a sum of the dominant contribution for the $T$%
-constant and exponential electric fields,%
\begin{equation}
\tilde{n}^{\mathrm{cr}}\approx \sum_{j}\tilde{n}_{j}^{\mathrm{cr}},\ \
\tilde{n}_{j}^{\mathrm{cr}}=\frac{J_{\left( d\right) }}{\left( 2\pi \right)
^{d-1}}\int_{t\in D_{j}}dt\left[ eE_{j}\left( t\right) \right] ^{d/2}\exp %
\left[ -\pi \frac{m^{2}}{eE_{j}\left( t\right) }\right] \,, \label{st13c}
\end{equation}%
where the index $j=1,2,3$ denotes each interval of the composite field, $%
D_{1,2,3}=\mathrm{I,II,III}$. It is known \cite{AdoGavGit17} that%
\begin{eqnarray}
\tilde{n}^{\mathrm{cr}}_{1,3} &=&\frac{J_{\left( d\right) }}{\left( 2\pi
\right) ^{d-1}}\frac{\left( eE\right) ^{d/2}}{k_{1,2}}e^{-\pi
m^{2}/eE}G\left( \frac{d}{2},\frac{\pi m^{2}}{eE}\right) \,, \notag \\
\tilde{n}^{\mathrm{cr}}_{2} &=&\frac{J_{\left( d\right) }\left( eE\right)
^{d/2}T}{\left( 2\pi \right) ^{d-1}}\exp \left[ -\frac{\pi m^{2}}{eE}\right]
\,, \label{st13d}
\end{eqnarray}%
where $G\left( \alpha ,z\right) $ is expressed in terms of the incomplete
gamma function $\Gamma \left( \alpha ,z\right) $ \cite{DLMF} as%
\begin{equation}
G\left( \alpha ,z\right) =\int_{1}^{\infty }\frac{ds}{s^{\alpha +1}}%
e^{-z\left( s-1\right) }=e^{z}z^{\alpha }\Gamma \left( -\alpha ,z\right) \,.
\label{st7}
\end{equation}%
Calculating the vacuum-to-vacuum transition probability for the composite field, we
find that it is the product of the partial probabilities $P_{v}^{j}$ for the $T$-constant
and exponential electric fields, $\ln P_{v}=\sum_{j}\ln
P_{v}^{j}$; see Ref. \cite{AdoGavGit17}. It is important to point out
that the result above may be reproduced from the universal form for the
total density of pairs created by $t$-electric potential steps in the slowly
varying regime \cite{GavGit17}. Such a form does not demand knowledge of the
exact solutions of the Dirac/Klein-Gordon equations. This is a consequence
of the fact that in the approximation by leading terms, the distribution $%
N_{n}^{\mathrm{cr}}$ in each region of $\Omega $ is formed independently of
neighboring regions.
While the results for slowly varying fields are completely predictable,
configurations in which the field takes a relatively short time to reach the
constant regime (fast switching-on process, $k_{1}^{-1}$ small), remains
constant over a sufficiently large interval $T$, and takes a short time
to switch off completely (fast switching-off process, $k_{2}^{-1}$ small)
have to be studied in more detail. These configurations simulate finite
switching-on and -off processes, and are discussed below.
To study such configurations, one has to compare parameters involving
momenta with ones involving time scales, such as $\sqrt{eE}T$, $eEk_{1}^{-2}$
and $eEk_{2}^{-2}$. Regarding the dependence on the perpendicular momenta $%
\mathbf{p}_{\perp }$ for instance, it is well known that a $t$-electric
potential step of large time duration does not create a significant number
of pairs with large $\mathbf{p}_{\perp } $. This is meaningful as long as
charged pairs are accelerated along the direction of the electric field,
having thereby a wider range of values of $p_{x}$ than of $\mathbf{p}_{\perp
}$. By virtue of that, one may simplify the calculation of differential
quantities and consider restricted values of $\mathbf{p}_{\perp }$, ranging
from zero up to a finite number, so that the inequality%
\begin{equation}
\sqrt{\lambda }<K_{\perp }\,,\ \ K_{\perp }^{2}\gg \max \left( 1,\frac{m^{2}%
}{eE}\right) \,, \label{s3.2}
\end{equation}%
is fulfilled. Here $K_{\perp }$ is a moderately large number that sets an
upper bound to the perpendicular momenta of pairs created. Thus, taking into
account the inequality above, we assume that
\begin{equation}
\sqrt{eE}T\gg K_{\perp }^{2}\,,\ \ \max \left(
eEk_{1}^{-2},eEk_{2}^{-2}\right) \leq \max \left( 1,\frac{m^{2}}{eE}\right)
\,, \label{s3.3}
\end{equation}%
As a consequence, the field satisfies the following inequalities%
\begin{equation}
\sqrt{eE}T/2\gg \max \left( \sqrt{eE}k_{1}^{-1},\sqrt{eE}k_{2}^{-1}\right)
\leftrightarrow \max \left( k_{1}T/2,k_{2}T/2\right) \gg 1\,. \label{s3.3.1}
\end{equation}
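Conditions (\ref{s3.2}) and (\ref{s3.3}) are again elementary comparisons of scales and can be tested numerically. The sketch below is ours (hypothetical function name; the value of $K_{\perp }$ and the factor of $10$ modeling \textquotedblleft $\gg $\textquotedblright\ are arbitrary assumptions):

```python
import math

def fast_switching(E, T, k1, k2, m=1.0, e=1.0, K_perp=3.0, big=10.0):
    """Checks condition (s3.3): sqrt(eE) T >> K_perp^2 together with
    max(eE/k1^2, eE/k2^2) <= max(1, m^2/(eE)).
    K_perp and the factor `big` modeling '>>' are ad hoc choices."""
    c1 = math.sqrt(e * E) * T >= big * K_perp**2
    c2 = max(e * E / k1**2, e * E / k2**2) <= max(1.0, m**2 / (e * E))
    return c1 and c2

fast_switching(E=1.0, T=100.0, k1=1.0, k2=1.0)  # long plateau, fast switching
fast_switching(E=1.0, T=100.0, k1=0.5, k2=0.5)  # switching too slow here
```

Whenever this test passes, the inequality (\ref{s3.3.1}), $k_{j}T/2\gg 1$, follows automatically for the same parameters.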
To study differential quantities in this case we select a definite sign for $%
p_{x}$, chosen negative for convenience, $-\infty <p_{x}\leq 0$%
. Next we use the symmetry properties discussed in Eqs. (\ref{sym1}) and (%
\ref{sym2}) to generalize the results to positive $p_{x}$. Here $\xi _{1}$
varies from large negative to large positive values while $\xi _{2}$ is
always large and positive; $\Pi _{1}/\sqrt{eE}$ changes from large positive
to large negative values while $\Pi _{2}/\sqrt{eE}$ is always large and
negative. However, once $h_{1}$, $h_{2}$ are finite, we find that the
asymptotic behavior of $N_{n}^{\mathrm{cr}}$ is classified according to
three main ranges%
\begin{eqnarray}
&&\left( \mathrm{a}\right) \ \ -\sqrt{eE}\frac{T}{2}\leq \xi _{1}\leq -%
\tilde{K}_{1}\leftrightarrow \sqrt{eE}\frac{T}{2}+\sqrt{\frac{h_{1}}{2}}\geq
\frac{\Pi _{1}}{\sqrt{eE}}\geq \tilde{K}_{1}+\sqrt{\frac{h_{1}}{2}}\,,
\notag \\
&&\left( \mathrm{b}\right) \ \ -\tilde{K}_{1}<\xi _{1}<\tilde{K}%
_{1}\leftrightarrow \tilde{K}_{1}+\sqrt{\frac{h_{1}}{2}}>\frac{\Pi _{1}}{%
\sqrt{eE}}>-\tilde{K}_{1}+\sqrt{\frac{h_{1}}{2}}\,, \notag \\
&&\left( \mathrm{c}\right) \ \ \xi _{1}\geq \tilde{K}_{1}\leftrightarrow
\frac{\Pi _{1}}{\sqrt{eE}}\leq -\tilde{K}_{1}+\sqrt{\frac{h_{1}}{2}}\,,
\label{fas19}
\end{eqnarray}%
where $\tilde{K}_{1}$ is a sufficiently large number satisfying $\sqrt{eE}T>%
\tilde{K}_{1}\gg K_{\perp }^{2}$. Moreover, as long as $\xi _{2}$ is large
and positive, $c_{2}$ is also large so that one can use the asymptotic
approximation (9.246.1) in Ref. \cite{Gradshteyn} for the WPCfs. $u_{\pm
}\left( z_{2}\right) $ and Eq. (13.8.2) in Ref. \cite{DLMF} for the CHF $%
y_{1}^{2}\left( \eta _{2}\right) $, throughout all ranges above.
In the range $\left( \mathrm{a}\right) $, $\xi _{1}$ is large and negative
and $c_{1}$ is large as well. Then using the asymptotic expansions
(9.246.2), (9.246.3) in Ref. \cite{Gradshteyn} for the WPCfs. $u_{\pm
}\left( z_{1}\right) $ and Eq. (13.8.2) in \cite{DLMF} for the CHF $%
y_{2}^{1}\left( \eta _{1}\right) $, one finds that the mean number of
particles created, in the leading-order approximation, admits the following
form%
\begin{eqnarray}
N_{n}^{\mathrm{cr}} &\sim &\frac{\exp \left[ -\pi \left( \lambda +\Xi
_{1}^{-}-\Xi _{2}^{+}\right) \right] }{\sinh \left( 2\pi \omega
_{2}/k_{2}\right) \sinh \left( 2\pi \omega _{1}/k_{1}\right) } \notag \\
&\times &\left\{
\begin{array}{ll}
\sinh \left( \pi \Xi _{2}^{-}\right) \sinh \left( \pi \Xi _{1}^{+}\right) \,,
& \mathrm{Fermi} \\
\cosh \left( \pi \Xi _{2}^{-}\right) \cosh \left( \pi \Xi _{1}^{+}\right) \,,
& \mathrm{Bose}%
\end{array}%
\right. \,, \label{fas20}
\end{eqnarray}%
as $T\rightarrow \infty $. The combination of hyperbolic functions above
tends to unity since, in this range, the frequencies $\omega _{1}$, $%
\omega _{2}$ and the parameters $\Xi _{1}^{+}$, $\Xi _{2}^{-}$ are large
quantities, namely $\omega _{1}\simeq \sqrt{eE}\left\vert \xi
_{1}\right\vert $, $\omega _{2}\simeq \sqrt{eE}\xi _{2}$, $\Xi _{1}^{+}\sim
\sqrt{2h_{1}}\left\vert \xi _{1}\right\vert $, $\Xi _{2}^{-}\simeq \sqrt{%
2h_{2}}\xi _{2}$. By virtue of that, the dominant contribution of Eq. (\ref%
{fas20}) has the form%
\begin{equation}
N_{n}^{\mathrm{cr}}\sim \exp \left[ -\pi \left( \lambda +2\Xi
_{1}^{-}\right) \right] \,, \label{fas21}
\end{equation}%
as $T\rightarrow \infty $, being valid both for Fermions and Bosons. In this
last result, the parameter $\Xi _{1}^{-}$ is a small quantity, $\Xi
_{1}^{-}\sim \sqrt{h_{1}/2}\left( \lambda /2\left\vert \xi _{1}\right\vert
\right) $, so that its contribution to $N_{n}^{\mathrm{cr}}$ is negligible
in comparison to $\lambda $. As a result, the differential mean numbers are
practically uniform over the range $\left( \mathrm{a}\right) $, $N_{n}^{%
\mathrm{cr}}\sim e^{-\pi \lambda }$.
In the range $\left( \mathrm{c}\right) $, $\xi _{1}$ is large and positive
and $c_{1}$ is also large. Hence one may use the asymptotic expansions
(9.246.1) in Ref. \cite{Gradshteyn} for the WPCfs. $u_{\pm }\left(
z_{1}\right) $ and Kummer transformations for the CHF $y_{2}^{1}\left( \eta
_{1}\right) $ to prove that the mean number of particles created is
significantly small%
\begin{equation}
N_{n}^{\mathrm{cr}}\sim \mathcal{F}_{1}\left[ O\left( \xi _{1}^{-6}\right)
+O\left( \xi _{2}^{-6}\right) +O\left( \xi _{1}^{-3}\xi _{2}^{-3}\right) %
\right] \,, \label{fas22}
\end{equation}%
as $T\rightarrow \infty $, in which $\mathcal{F}_{1}$ is a combination of
hyperbolic functions similar to Eq. (\ref{fas20}),%
\begin{eqnarray}
\mathcal{F}_{1} &=&\frac{\exp \left[ \pi \left( \Xi _{2}^{+}+\Xi
_{1}^{+}\right) \right] }{\sinh \left( 2\pi \omega _{2}/k_{2}\right) \sinh
\left( 2\pi \omega _{1}/k_{1}\right) } \notag \\
&\times &\left\{
\begin{array}{ll}
\sinh \left( \pi \Xi _{2}^{-}\right) \sinh \left( \pi \Xi _{1}^{-}\right) \,,
& \mathrm{Fermi\,,} \\
\cosh \left( \pi \Xi _{2}^{-}\right) \cosh \left( \pi \Xi _{1}^{-}\right) \,,
& \mathrm{Bose\,.}%
\end{array}%
\right. \label{fas23}
\end{eqnarray}%
In this range, the frequencies $\omega _{j}$ and the parameters $\Xi
_{j}^{-} $ are large quantities $\omega _{j}\simeq \sqrt{eE}\xi _{j}$, $\Xi
_{j}^{-}\sim \sqrt{2h_{j}}\xi _{j}$ so that, as in the range $\left( \mathrm{%
a}\right) $, $\mathcal{F}_{1}$ can be approximated by $\mathcal{F}_{1}\sim 1$%
. Therefore the differential mean numbers are significantly small in this
range.
In the range $\left( \mathrm{b}\right) $, $\xi _{1}$ varies from large
negative to large positive values while $c_{1}$ varies from large to finite
values. For this reason, it is not possible to use any asymptotic
approximations for the special functions $u_{\pm }\left( z_{1}\right) $ and $%
y_{2}^{1}\left( \eta _{1}\right) $, although one can still consider the same
approximations (9.246.1) in Ref. \cite{Gradshteyn} and (13.8.2) in Ref. \cite%
{DLMF} for the WPCfs. $u_{\pm }\left( z_{2}\right) $ and CHF $%
y_{1}^{2}\left( \eta _{2}\right) $, respectively. The resulting expression
then depends explicitly on the exact forms of $u_{\pm }\left( z_{1}\right) $
and $y_{2}^{1}\left( \eta _{1}\right) $.
The most significant contribution to the differential mean numbers for $%
p_{x}$ positive, $0\leq p_{x}<+\infty $, can be obtained by a similar
analysis, taking into account the symmetry properties (\ref{sym1}) and (%
\ref{sym2}). We finally find the domain of dominant contribution to the mean
number of particles created. In this domain, in the leading-order
approximation, it has the form
\begin{equation}
N_{n}^{\mathrm{cr}}\sim e^{-\pi \lambda }\times \left\{
\begin{array}{ll}
\exp \left( -2\pi \Xi _{1}^{-}\right) \,, & \mathrm{for}\ \ -\sqrt{eE}T/2+%
\tilde{K}_{1}<p_{x}/\sqrt{eE}\leq 0\,, \\
\exp \left( -2\pi \Xi _{2}^{+}\right) \,, & \mathrm{for}\ \ 0<p_{x}/\sqrt{eE}%
\leq \sqrt{eE}T/2-\tilde{K}_{2}\,,%
\end{array}%
\right. \label{fas27}
\end{equation}%
as $T\rightarrow \infty $, valid for Fermions and Bosons. This approximation
is almost uniform over this wide range of values to the longitudinal
momentum since the parameters $\Xi _{1}^{-}$ and $\Xi _{2}^{+}$ are
negligible in comparison with $\lambda $. By virtue of that, the switching-on
and -off effects on the differential mean numbers, in the present
configuration, manifest themselves as next-to-leading corrections to the
uniform distribution $e^{-\pi \lambda }$. This means that the influence of
the switching-on and -off processes on differential quantities is
negligible for $T$ sufficiently large. From these results, the present
configuration can be referred to as a \textquotedblleft fast\textquotedblright\
switching-on and -off configuration, by virtue of Eq. (%
\ref{s3.3.1}) and from the fact that the mean number of particles created
is mainly characterized by the uniform distribution $e^{-\pi \lambda }$. In
this case, the leading contribution to the number density $\tilde{n}^{%
\mathrm{cr}}$, given by Eq.~(\ref{st1b}),\ is proportional to the total
increment of the longitudinal kinetic momentum, $\Delta U=eET$, and thus to
the time duration $T$. We see that both the $T$-constant field
itself and the composite field under condition (\ref{s3.3.1}) can be
considered as regularizations of a constant field. The present
discussion encompasses the $T$-constant limit, characterized by the
absence of the exponential parts and defined by the limit $k\rightarrow \infty $.
We know that the possibility of describing particle creation by the $T$%
-constant field in the slowly varying approximation depends on the value of
the dimensionless parameter $\sqrt{eE}T>1$. According to condition (\ref{s3.3}),
the magnitude of the lower boundary $\vartheta =\min \sqrt{eE}T$ is
proportional to $m^{2}/eE$ if $m^{2}/eE>1$. Accordingly, the contribution of
switching-on and -off processes to the particle creation effect becomes more
pronounced for not too strong fields. It is useful to compare switching-on
and -off effects for the $T$-constant field and for the composite field in
the case when the parameter $\sqrt{eE}T$ approaches the above-mentioned
threshold values.
From the plots on the left side of Figs. \ref{Fig1a} - \ref{Fig2b} one can see that $\sqrt{eE}T=10$ is near the threshold value.
To this end we compute exact plots of the mean
differential number of Fermions (\ref{r4}) and Bosons (\ref{r7}) created as
a function of $p_x/m$ for two typical cases of critical field and very
strong field, respectively. In the case of the $T$-constant field, we
calculate the $p_{x}/m$ dependence using exact Eqs. (4.9) and (4.11) given
in Ref. \cite{AdoGavGit17}. Results of these computations are presented in
Figs. \ref{Fermi} and \ref{Bose}. We see that differential mean numbers of
pairs created by the composite electric field (solid lines) and the $T$%
-constant field (dashed lines) oscillate around the uniform distribution $%
e^{-\pi \lambda }$. It can be seen that for fields with a critical
magnitude, $E=E_{\mathrm{c}}$ and $\sqrt{eE}T=10$ (panels (A)), the
oscillations around the uniform distribution are greater than for fields
with overcritical magnitude, $E=10E_{\mathrm{c}}$ and $\sqrt{eE}T=10\sqrt{10%
}$ (panels (B)), both for the composite field and the $T$-constant field.
\begin{figure}[th]
\begin{center}
\includegraphics[scale=0.48]{FermiF.pdf}
\end{center}
\caption{(color online) Differential mean number of electron/positron pairs
created from the vacuum by a symmetric composite field (solid red lines, labeled with (i))
with $k_{1}/m=k_{2}/m=1$ and by a $T$-constant field (dashed light red
lines, labeled with (ii)). In panel (A), $E=E_{\mathrm{c}}$ while in panel (B), $E=10E_{\mathrm{c}}$. In both cases, $mT=10$ and $\mathbf{p}_{\perp }=0$%
. The horizontal dashed black line denotes the uniform distributions, being $%
e^{-\protect\pi } $ in (A) and $e^{-\protect\pi /10}$ in (B).}
\label{Fermi}
\end{figure}
\begin{figure}[th]
\begin{center}
\includegraphics[scale=0.48]{BoseF.pdf}
\end{center}
\caption{(color online) Differential mean number of scalar particles created
from the vacuum by a symmetric composite field (solid blue lines, labeled with (i)) with $%
k_{1}/m=k_{2}/m=1$ and by a $T$-constant field (dashed light blue lines, labeled with (ii)). In
panel (A), $E=E_{\mathrm{c}}$ while in panel (B), $%
E=10E_{\mathrm{c}}$. In both cases, $mT=10$ and $\mathbf{p}_{\perp }=0$. The
horizontal dashed black line denotes the uniform distributions, $e^{-\protect%
\pi }$ in (A) and $e^{-\protect\pi /10}$ in (B).}
\label{Bose}
\end{figure}
We see that the distributions $N_{n}^{\mathrm{cr}}$ for the $T$-constant field
always oscillate more strongly around the uniform distribution than those for the
composite field, and in the case of bosons these deviations from the
uniform distribution are more significant. On the other hand, the plot of $%
N_{n}^{\mathrm{cr}}$ for the $T$-constant field is more \textquotedblleft
rectangular\textquotedblright\ than for the composite field (for
overcritical magnitudes). Such wide distributions arise because of
contributions of the exponential tails, $\left\vert p_{x}\right\vert
/m<\left( \frac{eE}{m^{2}}\right) \left( \frac{mT}{2}+\frac{m}{k}\right) $.
Note also that for $\left\vert p_{x}\right\vert /m>\left( \frac{eE}{%
m^{2}}\right) \left( \frac{mT}{2}+\frac{m}{k}\right) $, the mean numbers for the
composite field are negligible for both magnitudes, whereas for the $T$%
-constant field this is not always true: in fact, for critical magnitudes,
the mean numbers for $\left\vert p_{x}\right\vert /m$ slightly larger than $%
\left( \frac{eE}{m^{2}}\right) \left( \frac{mT}{2}\right) $ are not
negligible, although they are small. The characteristic behavior
in the case of the slowly varying regime, when $\tilde{n}^{\mathrm{cr}}\sim
T$, is quite noticeable for fermions already at the value of
the dimensionless parameter $\sqrt{eE}T=10$ and is pronounced for larger
values of this parameter. It can be concluded that for fermions the
value $\sqrt{eE}T=10$ is close to the threshold. However, for
bosons at $\sqrt{eE}T=10$ the approximation of the slowly varying
regime does not work yet. To be applicable, this approximation
requires larger values of the parameter $\sqrt{eE}T$. The slowly varying
regime works for both fermions and bosons at $\sqrt{eE}T=10\sqrt{10}$.
Comparing these two cases, we see that the regularization by switching-on
and -off exponential fields is less disturbing than that by
the $T$-constant field, which entails considerable oscillations in the
distributions and can even lead to sharp bursts of $N_{n}^{\mathrm{cr}}$ in
narrow regions of $p_{x}$. The latter circumstance, however, is not
essential for estimating the dominant contributions to the density of
created pairs due to the very strong $T$-constant field. Nevertheless, the above
calculation method, which uses the composite field, is more realistic and
preferable for the analysis of next-to-leading terms.
\section{Concluding remarks\label{conclusions}}
We find exact formulas for the differential mean numbers of fermions and bosons
created from the vacuum due to composite electric fields of special
configurations that simulate finite switching-on and -off processes within
and beyond the slowly varying regime. We show that the results for slowly
varying fields are completely predictable using a recently developed version
of a locally constant field approximation. Using exact results beyond the
slowly varying regime, we find that the leading contribution to the number
density of created pairs is independent of fast switching-on and -off if the
time duration $T$ of a slowly varying field is sufficiently large. This means
that composite fields of such configurations can be used as regularizations
of a slowly varying field, in particular, of a constant field. We
have studied effects of fast switching-on and -off in a number of cases,
when the value of the total increment of the longitudinal kinetic momentum,
characterized by the dimensionless parameter $\sqrt{eE}T>1$, approaches the
threshold that determines the transition from a regime that is sensitive to
the parameters of the on-off switching to the slowly varying regime. It is shown
that for bosons this threshold value is much higher. We see that
the regularization by faster switching-on and -off is more disturbing,
which entails considerable oscillations in the distributions and can
even lead to sharp bursts of $N_{n}^{\mathrm{cr}}$ in narrow regions of $p_{x}$%
. The latter circumstance, however, is not essential for estimating
the dominant contributions to the density of created pairs due to the very
strong field. Nevertheless, the above calculation method, which uses the
composite field, is more realistic and preferable for the analysis of
next-to-leading terms. Thus, details of switching-on and -off may be
important for a more complete description of the vacuum instability in some
physical situations, for example, in the physics of low-dimensional systems,
such as graphene and similar nanostructures, whose transport properties may
be interpreted as pair creation effects under low-energy approximations.
\section*{Acknowledgements}
The reported study was partially funded by RFBR according to the research
project No. 18-02-00149. The authors acknowledge support from Tomsk State
University Competitiveness Improvement Program. D.M.G. is also supported by
Grant No. 2016/03319-6, Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado
de S\~{a}o Paulo (FAPESP), and permanently by Conselho Nacional de
Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico (CNPq), Brazil.
|
{
"timestamp": "2018-05-01T02:01:00",
"yymm": "1802",
"arxiv_id": "1802.08850",
"language": "en",
"url": "https://arxiv.org/abs/1802.08850"
}
|
\section{Introduction}
\label{intro}
The study of periodically driven quantum systems has gained interest in recent years.
The most important tool in this context is the \textit{Floquet theorem}.
One of its most interesting consequences is that it allows one to write the evolution over multiples of the driving period in terms of a time-independent effective Hamiltonian \cite{floshi}, \cite{flosam}.
The existence of such an effective Hamiltonian can open the way to the so-called \textit{Floquet engineering}, that is, the possibility to realise nontrivial time-independent models by periodically modulating a quantum system with a suitable protocol. This concept has been employed very successfully in various experiments with ultracold atoms in driven optical lattices \cite{eckrev}. This includes dynamic localisation (\cite{dundyn}, \cite{holdyn}, \cite{gridyn}, \cite{ligdyn}), “photon”-assisted tunneling (\cite{zakphot}, \cite{eckphot}), control of the bosonic superfluid-to-Mott-insulator transition (\cite{ecksup}, \cite{zensup}) and the realisation of artificial magnetic fields (\cite{haldane}, \cite{aidelsburgerstrong}, \cite{tungauge}, \cite{Daliba}).
A recent seminal experiment \cite{bloch} has focused instead on the case of a driven many-body localised system, described by the Aubry-André model \cite{auband}, showing that a properly tuned periodic driving can lead the system across the localisation transition.
The interplay between a many-body localised quantum system and a periodic driving has drawn a lot of attention and recent theoretical works have put forward the possibility that this combination can generate symmetry protected topological phases which have no equilibrium analogues \cite{topological1} \cite{topological2} \cite{topological3} \cite{topological4} \cite{topological5}.
Motivated by the particular case of the experiment reported in \cite{bloch}, we consider its single-particle counterpart, i.e. the periodically driven Aubry-André model, in order to understand which effects can be disentangled from a strict many-body description.
The same model was recently studied in \cite{Sinha}, with which our results show good agreement, particularly in estimating the critical amplitude.
\section{The model}
\label{sec:model}
We consider the Aubry-André Hamiltonian $H_0$, with periodically modulated potential $V(t)$:
\begin{equation}
H(t)=H_0+V(t)
\end{equation}
where
\begin{equation}
\label{H_0}
H_{0}=J\sum_{i=1}^{N} \left(\ket{i}\bra{i+1}+\ket{i+1}\bra{i}\right)+\lambda \sum_{i=1}^{N}\cos(2\pi \beta i +\phi)\ket{i}\bra{i}
\end{equation}
\begin{equation}
V(t)=A\cos (\omega t) \sum_i \cos(2\pi \beta i+\phi)\ket{i}\bra{i}
\end{equation}
Here $\beta$ is an irrational number, $\ket{i}$ is the Wannier state on site $i$ and $\lambda$ is the disorder strength. We choose periodic boundary conditions for the lattice.
The term $V(t)$ satisfies $V(t+T)=V(t)$, where $T=2 \pi/\omega$.
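As a concrete illustration, the model can be assembled numerically. The sketch below (Python with placeholder default parameters, not the code used for the paper's simulations) builds $H_0$ and $V(t)$ as dense matrices with periodic boundary conditions:

```python
import numpy as np

def aubry_andre(N=50, J=1.0, lam=3.0, beta=532 / 738.2, phi=0.0):
    """Static Aubry-Andre Hamiltonian H0 with periodic boundary conditions."""
    sites = np.arange(N)
    # hopping term J * (|i><i+1| + |i+1><i|), wrapped around for PBC
    H0 = J * (np.eye(N, k=1) + np.eye(N, k=-1))
    H0[0, N - 1] = H0[N - 1, 0] = J
    # quasiperiodic on-site potential of strength lam
    H0 += np.diag(lam * np.cos(2 * np.pi * beta * sites + phi))
    return H0

def drive(t, N=50, A=3.0, omega=1.0, beta=532 / 738.2, phi=0.0):
    """Periodic modulation V(t) of the on-site potential, V(t + T) = V(t)."""
    sites = np.arange(N)
    return np.diag(A * np.cos(omega * t) * np.cos(2 * np.pi * beta * sites + phi))
```

The full Hamiltonian at time $t$ is then the sum of the two matrices, `aubry_andre(...) + drive(t, ...)`.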
The qualitative features of the undriven system in the many-body and the noninteracting cases are quite similar \cite{praund}.
In the noninteracting case the Hamiltonian $H_0$ is that of a particle moving in a lattice with two lengths $L_1$ and $L_2$ which are incommensurate, with $\beta=L_1/L_2$.
This produces a pseudorandom potential, and the different realisations of this randomness are obtained by changing the value of the phase $\phi$.
Such a Hamiltonian is the simplest realisation of a quasiperiodic crystal, or quasicrystal (see \cite{revQC} for a review on the topic).
When $\beta$ is an irrational number the time-independent model is proven to undergo a metal-to-insulator transition: above $\lambda=\lambda_c=2J$ the eigenstates of $H_0$ are localised, whereas below they are delocalised (\cite{auband}, \cite{Azbel}, \cite{Aulbach}).
In the many-body case the transition is controlled by the parameters $J$, $\lambda$ and $U$, where $U$ is the intensity of the repulsive on-site interaction.
There is a critical disorder strength, which depends on $J$ and $U$, above which the system becomes localised \cite{timeindependent}. More precisely, up to $U \approx 2\lambda$ the interaction decreases the degree of localisation, while for large $|U|$, increasing $U$ helps to make the system more localised.
This is understood as a consequence of the formation of stable repulsively bound atom pairs in optical lattices described by a Hubbard Hamiltonian \cite{Mattis}. For the first realization of this effect with cold atoms see \cite{Hubbard}. These pairs have a reduced effective tunneling rate $J_{\mathrm{eff}} \approx J^2/|U|$, which thus increases the degree of localisation.
Both above and below $U \approx 2\lambda$, for each value of $U$ there is a definite value of $\lambda$ at which the transition occurs.
It is thus interesting to see whether these analogies are retained in the presence of the time-periodic modulation.
\section{Setup}
\label{sec:setup}
\subsection{Imbalance}
As a first step to explore the phase diagram of the model we mimic as closely as possible the procedure described in \cite{bloch} but in a single particle context.
The initial state there is chosen as a density-wave pattern in which fermions occupy the even sites of the lattice.
The parameter which discerns between a localised and a non-localised phase is the asymptotic imbalance:
\begin{equation}
\label{imbalance}
I=\lim_{t \rightarrow \infty} \frac{1}{t} \int_0^t dt' \dfrac{N_e(t')-N_o(t')}{N_e(t')+N_o(t')}
\end{equation}
where $N_e$ and $N_o$ are the number of particles in the even and odd sites respectively.
A persistent imbalance indicates a localised phase, while it obviously drops to 0 in the absence of localisation, indicating that the system is ergodic, as it does not retain the memory of its initial conditions.
To properly imitate the experiment we consider different realisations of the system each initially localised on a single even site and let them evolve separately under the Hamiltonian $H(t)$. The initial state in the Wannier states basis for each realisation $m=1,...,N/2$ reads:
\begin{equation}
\psi^{(2m)}(i,t=0)=\braket{i|\psi^{(2m)}(t=0)}=
\delta_{i,2m}
\end{equation}
After a long evolution time we sum the modulus squared amplitudes of all the realisations to obtain the final density, namely:
\begin{equation}
n(i,t)=\sum_{m=1}^{N/2} |\psi^{(2m)}(i,t)|^2
\end{equation}
The above definition is justified by the fact that the one-body density of a noninteracting many-body system is the sum of the densities of the occupied single particle states \cite{manybodybook}.
The analogues to the occupation numbers $N_e$ and $N_o$ are then calculated by simply using this density function as a weight in the following sum:
\begin{equation}
N_e(t)=\sum_{i=1}^{N/2} n(2i,t)
\end{equation}
and similarly
\begin{equation}
N_o(t)=\sum_{i=0}^{N/2-1} n(2i+1,t)
\end{equation}
With these definitions we can simply calculate the imbalance as defined in equation (\ref{imbalance}).
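For the undriven case these steps can be carried out in a few lines. The sketch below is a simplified stand-in for the actual simulations (which integrate the driven Schrödinger equation): it evolves the even-site initial states under a time-independent $H$ by exact diagonalization and returns the time-averaged imbalance, with $\hbar = 1$:

```python
import numpy as np

def imbalance(H, N, times):
    """Time-averaged even/odd imbalance for initial states localised on
    the even sites, evolved under a time-independent Hamiltonian H."""
    E, W = np.linalg.eigh(H)  # H = W diag(E) W^T
    vals = []
    for t in times:
        U = W @ np.diag(np.exp(-1j * E * t)) @ W.conj().T
        # column U[:, j] is the evolved state initially on site j + 1;
        # even sites 2, 4, ..., N correspond to 0-based indices 1, 3, ...
        n = np.sum(np.abs(U[:, 1::2]) ** 2, axis=1)  # density n(i, t)
        Ne, No = n[1::2].sum(), n[0::2].sum()
        vals.append((Ne - No) / (Ne + No))
    return float(np.mean(vals))
```

As a sanity check: with only the on-site potential (no hopping) every state stays put and the imbalance is pinned at 1, while with strong hopping and weak disorder it decays towards 0.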
\begin{figure}
\includegraphics[width=8.8cm]{transitiontimeindependent200pts1.pdf}
\caption{Imbalance in the time independent case as a function of the disorder strength $\lambda$, for lattice size $N=50$, with periodic boundary conditions.
We see a clear critical value at $\lambda_c=2J$ indicating the localisation transition. The time of integration is $\tau=1000\,\hbar/J$, at which the imbalance has reached its asymptotic value.}
\label{timeindependent}
\end{figure}
Before moving to the results for the driven lattice we show how the imbalance behaves around the phase transition for the time-independent model.
Figure \ref{timeindependent} was obtained considering a lattice made of $N=50$ sites and calculating the asymptotic imbalance for different values of the disorder strength $\lambda$.
It shows how the transition is marked by a nonzero value of the imbalance as a function of $\lambda$ (all energies are in units of $J$), at the critical value $\lambda_c=2J$.
The imbalance is thus able to signal in a clear way the transition from the localised to the delocalised phase.
\subsection{Inverse Participation Ratio}
In this subsection we link the localisation properties of the model to the localisation of its Floquet modes.
The time periodicity of the full Hamiltonian $H(t)$ allows us to make use of the Floquet theorem, which states that we can write the time evolution of an arbitrary initial state as \cite{floshi}, \cite{flosam}:
\begin{equation}
\label{eq:floquet}
\ket{\psi(t)}=\sum_n c_n e^{-i\epsilon_n t/\hbar}\ket{u_n(t)}
\end{equation}
where $c_n=\braket{u_n(0)|\psi(0)}$ and the $\epsilon_n$ are the so-called quasienergies. We emphasize the fact that these coefficients do not depend on time.
The states $\ket{u_n(t)}$ are called Floquet modes.
They have the same time periodicity as the original Hamiltonian and can be found as eigenstates of the propagator over one period, which reads:
\begin{equation}
U(T,0):=\mathcal{T} \exp\Bigl(-\frac{i}{\hbar} \int_{0}^T H(\tau)\, d\tau \Bigr)
\end{equation}
where $\mathcal{T}$ denotes the time ordering operator.
We expand the Floquet modes at $t=0$ in the Wannier state basis yielding:
\begin{equation}
\ket{u_n(0)}=\sum_{i} b_i^{(n)} \ket{i}.
\end{equation}
We define the averaged Inverse Participation Ratio (IPR) as the average of the IPRs of all the Floquet eigenmodes on the Wannier states, namely:
\begin{equation}
\text{IPR}=\frac{1}{N} \sum_{i,n} |{b_i^{(n)}}|^4
\end{equation}
where $N$ represents the number of Floquet modes which coincides with the number of sites of the lattice.
If each one of the Floquet eigenmodes is localised on a single Wannier state then for any $n$ there exists an $i$ such that $|{b_i^{(n)}}|\approx 1$ and the sum approaches $1$. If instead the eigenmodes are distributed among many Wannier states then $|{b_i^{(n)}}| \approx 1/\sqrt{N}$ for all $n$, $i$ and the averaged IPR goes to $0$ as $1/N$.
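Putting the pieces together, the averaged IPR can be computed by approximating $U(T,0)$ as a product of short piecewise-constant steps and diagonalizing it. This is a minimal sketch standing in for the exact propagator diagonalization used for the paper's results; the step count is a placeholder and $\hbar = 1$:

```python
import numpy as np

def averaged_ipr(H_of_t, N, T, steps=200):
    """Averaged IPR of the Floquet modes, from the one-period propagator
    U(T, 0) approximated by a product of short piecewise-constant steps."""
    dt = T / steps
    U = np.eye(N, dtype=complex)
    for k in range(steps):
        E, W = np.linalg.eigh(H_of_t((k + 0.5) * dt))  # midpoint Hamiltonian
        U = W @ np.diag(np.exp(-1j * E * dt)) @ W.conj().T @ U
    # Floquet modes at t = 0 are the eigenvectors of U(T, 0)
    _, modes = np.linalg.eig(U)
    b2 = np.abs(modes) ** 2  # |b_i^(n)|^2, one normalized column per mode
    return float(np.sum(b2 ** 2) / N)
```

For modes each pinned to a single Wannier state the result approaches 1; for modes spread uniformly over all $N$ sites it drops to $1/N$.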
Thanks to the form of equation (\ref{eq:floquet}) we can expect a localised dynamics when very few Floquet modes participate in the time evolution of an initial Wannier state.
This would resemble, in the context of time-periodic systems, the phenomenon of Anderson localisation, where the localisation of the eigenstates of the Hamiltonian implies non-ergodic dynamics \cite{andloc}.
This is not, however, the only mechanism in play, as localisation can occur as a consequence of the degeneracy of energy levels, e.g. when the renormalized hopping parameter $J$ becomes very small due to the driving. This particular mechanism is often referred to as dynamic localisation or band collapse (\cite{DreseHolthaus}, \cite{eckdyn}). In \cite{DreseHolthaus}, in particular, the authors propose to observe a dynamic localisation effect in a realization of the Aubry-André Hamiltonian by tuning the amplitude and frequency to a value for which the renormalized hopping would vanish. The periodic modulation that we are considering here is however different from theirs and does not allow the same tunability of the renormalized hopping.
Comparing the Inverse Participation Ratio with the imbalance allows us to discern which effects are due to the collapse of the bandwidth of quasienergies and which are due to an Anderson localisation transition.
\section{Results}
In what follows we will indicate the disorder strength, $\lambda$, and the amplitude of the modulation, $A$, in units of $J$ and times in units of $1/J$.
The calculations below were done considering a lattice made of $N=50$ sites, averaging over $20$ different realisations of the disorder, which are obtained by varying the value of the phase $\phi$ in equation (\ref{H_0}).
In choosing $\beta$ we decided to follow as closely as possible the choice of the experiment in \cite{bloch}, so we chose $\beta=532/738.2$.
The simulations were made using the standard Matlab toolbox, solving the time evolution with the ode45 function in order to compute the imbalance and exactly diagonalizing the propagator over one period to find the Floquet modes.
As a first step to outline the behaviour of this model we calculated the imbalance for strong driving, i.e. $A=\lambda$, for a broad range of frequencies, keeping the disorder strength at a fixed value above the critical one $\lambda=3=A$.
The results are shown in figure \ref{Ivsfreq}, which highlights a delocalised regime for low frequencies while for high frequencies the imbalance approaches that of the model in absence of driving.
The similarity in the main features between this figure and the ones in \cite{bloch} is striking. In particular the dip appearing after the imbalance has started to rise, around $\hbar \omega= 2\lambda$, is present also in the many-body experiment.
\begin{figure}
\includegraphics[width=\linewidth]{Ivsfreq.pdf}
\caption{Imbalance as a function of frequency for $A=\lambda$, normalized to its value in the absence of driving. While for low frequencies the imbalance is vanishing, it approaches the value for $A=0$ for high frequencies. This plot can be compared with the ones present in \cite{bloch} and confirms the fact that the single particle picture is able to capture the qualitative features of the many-body experiment.}
\label{Ivsfreq}
\end{figure}
To explain the origin of the above mentioned dip in the value of the imbalance we first have to consider the response of the model to various values of frequency and disorder.
To this end we computed the time-averaged imbalance for different values of the disorder strength $\lambda$ and the angular frequency $\omega$, setting the amplitude of the modulation in the strong driving regime, i.e. $A=\lambda$.
The evolution time is chosen to be $100$ times the period of the modulation.
In figures \ref{frequencyImbalancelog} and \ref{frequencyIPRlog} the vertical axis shows $\frac{\lambda}{\hbar \omega}$ to allow comparison with the experiment in \cite{bloch}.
In figure \ref{frequencyIPR} the Inverse Participation Ratio is displayed as a function of $\lambda$ and $\hbar \omega$ to more clearly show the relation between the frequency response and the spectrum.
\begin{figure}
\includegraphics[width=\linewidth]{imbalancelog.pdf}
\caption{Imbalance as a function of frequency and disorder strength. On the vertical axis $\frac{\lambda}{\hbar \omega}$ is displayed in logarithmic scale to allow comparison with the experiment in \cite{bloch}. The dashed line is for $\hbar \omega = 2\lambda$, which is the approximate critical value for the frequency.}
\label{frequencyImbalancelog}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{iprlog.pdf}
\caption{Inverse Participation Ratio as a function of frequency and disorder strength. On the vertical axis $\frac{\lambda}{\hbar \omega}$ is displayed in logarithmic scale to allow comparison with the experiment in \cite{bloch}. The dashed line is for $\hbar \omega = 2\lambda$, which is the approximate critical value for the frequency.}
\label{frequencyIPRlog}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{frequencyIPR.pdf}
\caption{Inverse Participation Ratio as a function of frequency and disorder strength, for $A=\lambda$. The white dashed line is for $\hbar \omega = 2\lambda$, dividing the localised phase (yellow) from the delocalised one (blue).}
\label{frequencyIPR}
\end{figure}
Figure \ref{frequencyImbalancelog} confirms that for very low frequencies the system is brought to a delocalised phase (marked by a vanishing imbalance), while for high frequencies the driving is no longer able to bring the system to delocalisation. Moreover, it displays a distinct analogy with the corresponding phase diagram in \cite{bloch}.
As anticipated in the previous section we computed the Inverse Participation Ratio for various values of $\omega$ and $\lambda$. Figures \ref{frequencyIPR} and \ref{frequencyIPRlog} distinctly show the separation between the two phases. This also shows that the localisation properties of the Floquet eigenmodes at initial time allow us to discern in a broad sense the different localisation properties of the system.
This happens despite the fact that the initial Floquet modes carry no information on the structure of the quasienergy spectrum which can contain accidental crossings of energy levels, causing the system to be partially localised.
The relatively small size of the system implies that the Inverse Participation Ratio will display finite-size effects in the delocalised phase, where it vanishes as $1/N$. We ran a simulation which computes the IPR as a function of frequency for different system sizes, going from $N=50$ to $N=500$. For each system size the IPR goes to $0$ with the correct scaling with respect to $N$, while in the localised phase its behaviour is largely unchanged.
There is a transition line (white dashed line in figures \ref{frequencyImbalancelog}, \ref{frequencyIPRlog} and \ref{frequencyIPR}) above which the system remains localised. This line appears at $\hbar \omega_c=2\lambda$, which can be understood from the spectral properties of the Hamiltonian $H_0$ of equation (\ref{H_0}).
To better illustrate this, in figure \ref{fig:spectrum} we show the spectrum of the Aubry-André model as a function of the disorder strength for $N=50$ lattice sites.
\begin{figure}
\includegraphics[width=\linewidth]{spectrum.pdf}
\caption{Spectrum of the Aubry-André model as a function of $\lambda$ for $N=50$ lattice sites.}
\label{fig:spectrum}
\end{figure}
The bandwidth of the Aubry-André Model is $\approx 2\lambda$ for any disorder strength above the transition point $\lambda_c=2J$.
Thus the transition line in the time periodic case appears when the quanta of energy that the driving pumps into the system are too big for the system to absorb.
Above the transition line the system's behaviour becomes that of the time independent model.
This is because the period of the driving $T=\frac{2 \pi}{\omega}$ is now smaller than the fastest time scale present in the Aubry-André Hamiltonian, making the system unable to respond to the driving.
Below the transition line there are other smaller revivals of the localised phase. We attribute this intricate structure again to the spectrum of the Aubry-André model, which is divided into smaller subbands separated by spectral gaps. In the intermediate range of frequencies, where $\hbar \omega$ is comparable to the energy gaps present in the spectrum, the presence of a localised phase has a non-monotonic dependence on the frequency of the modulation.
The relevance of the single-particle spectrum and the density of spectral lines for the frequency response of the model was studied in the thesis that led to this paper \cite{thesis}.
All these results confirm the qualitative picture outlined in \cite{bloch} and are well understood in terms of the single-particle spectrum. This suggests that in this context the time-averaged imbalance, while providing a precise characterization of the phase diagram of the model, does not seem able to disentangle the role of the interactions.
For very low frequencies the system is brought to delocalisation. We can define the following parameter:
\begin{equation}
\tilde{\lambda}(t) \stackrel{\text{def}}{=} \lambda+A\cos(\omega t)
\end{equation}
If the frequency is very low the global parameter $\tilde{\lambda}(t)$ is changed adiabatically and sweeps through the transition point $\lambda_c=2J$, bringing the system to delocalisation.
This intuitive picture helps us understand the role of the amplitude of the driving $A$: even for arbitrarily low frequencies the system does not delocalise if the amplitude is not large enough to make $\tilde{\lambda}(t)$ sweep through the critical point $\lambda_c=2J$.
Following this reasoning we define the critical value for $A$ to be such that $\min_t \left\{ \tilde{\lambda}(t) \right\} =\lambda_c=2J$, namely:
\begin{equation}
\label{eq:Acrit}
A_c=\lambda-\lambda_c=\lambda-2J
\end{equation}
\begin{figure}
\includegraphics[width=\linewidth]{AcriticalI.pdf}
\caption{Imbalance as a function of amplitude and disorder strength, for $\nu=0.005(1/J)$. There is a clear line at $A=\lambda-2$ separating the localised phase (yellow) from the delocalised one (blue).}
\label{fig:Acrit(I)}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{AcriticalIPR.pdf}
\caption{Inverse Participation Ratio of the Floquet eigenmodes as a function of amplitude and disorder strength, for $\nu=0.005(1/J)$. The result is consistent with the one in fig. \ref{fig:Acrit(I)}, confirming that the localisation properties of the model are due to the localisation of the Floquet eigenmodes.}
\label{fig:Acrit(IPR)}
\end{figure}
This picture is clearly confirmed by the contour plots of the imbalance and the Inverse Participation Ratio as functions of the disorder strength and the amplitude, shown in figures \ref{fig:Acrit(I)} and \ref{fig:Acrit(IPR)}. For these plots we considered a frequency $\nu=\omega/2\pi=0.005\,(1/J)$.
The same value for the critical amplitude was found in \cite{Sinha}.
We stress that the very existence of a critical value of $A$ as determined here is valid only in the case of a modulation of the form considered in this work, which corresponds to a modulation of the disorder strength.
It is often stated in the literature (see \cite{abanin}, \cite{regimes}, \cite{rehn}) that a modulation of arbitrarily small amplitude will always delocalise a many-body localised quantum system provided that the driving frequency is small enough.
This statement however is not in contrast with our result as it refers to a driving of the form $A\cos(\omega t) \sum_i i \ket{i} \bra{i}$.
Lastly, we want to briefly comment on the role of the interaction. In the presence of the interaction $U$, we expect to retain most of the qualitative results displayed, as the comparison with the many-body experiment seems to suggest. In particular, regarding the frequency response of the system, we expect the description in terms of the spectrum to remain relevant, with the critical value $\omega_c$ shifted to equal the bandwidth of the interacting model. According to \cite{mastro} the many-body interaction will change the size of the infinite number of spectral gaps of the noninteracting spectrum, without closing any of them. This makes the interacting model very similar to the noninteracting one, except for the intermediate frequency regime where the effect of the interaction in coupling the energy levels is crucial, possibly explaining the less sharp peaks displayed in the experiment in \cite{bloch}.
\section{Conclusions}
Our work shows that the driven noninteracting Aubry-André model qualitatively reproduces many of the localisation phenomena which are found in the experiment \cite{bloch} such as the presence of a delocalised phase for low frequency, the persistence of localisation for a high frequency driving, and the existence of a critical value of driving amplitude for the onset of the localisation transition.
We were able to determine the critical values for the frequency and the amplitude of the driving, and provide a theoretical explanation for their existence.
Moreover we related the phases of the model to the localisation of its Floquet eigenmodes.
Future theoretical studies should focus on the role of interactions and the new qualitative aspects they bring.
\section{Acknowledgements}
This project was supported by the University of Southampton as host of the Vice-Chancellor Fellowship scheme.
D.R. would like to thank the Ludwig Maximilian University (Munich) for the hospitality during the writing of the thesis that led to this work.
\section{Author contribution}
Starting from an idea by A. R., D. R. has carried out the computations and produced the data in this work, as well as working towards their interpretation with A. R.\\
C. L. has contributed to detailing such interpretations and helped D.R. in writing the paper.\\
D. R., A. R., C. L. have contributed to the discussion of the results and the revision of the paper.
\bibliographystyle{unsrt}
|
{
"timestamp": "2018-07-31T02:12:21",
"yymm": "1802",
"arxiv_id": "1802.08859",
"language": "en",
"url": "https://arxiv.org/abs/1802.08859"
}
|
\section*{Abstract}
In an increasingly data-driven world, facility with statistics is more important than ever for our students. At institutions without a statistician, it often falls to the mathematics faculty to teach statistics courses. This paper presents a model that a mathematician asked to teach statistics can follow. This model entails connecting with faculty from numerous departments on campus to develop a list of topics, building a repository of real-world datasets from these faculty, and creating projects where students interface with these datasets to write lab reports aimed at consumers of statistics in other disciplines. The end result is students who are well prepared for interdisciplinary research, who are accustomed to coping with the idiosyncrasies of real data, and who have sharpened their technical writing and speaking skills.
\section{Introduction} \label{sec:intro}
The ubiquity of computers and computational methods has provided massive amounts of data regarding the world we live in, and computers also provide a powerful way to analyze such data. Data science is an emerging field at the intersection of statistics, computer science, and disciplinary applications. Thus, mathematics departments can expect increasing demand for courses in statistics and data science, and increasing enrollments if such courses are offered. Similarly, individuals with PhDs in statistics and data science will be in short supply, as they will be in demand from both high paying industry jobs, and colleges and universities of all types. As a consequence, it is important that mathematicians develop the ability to teach applied statistics and data science courses. This paper will describe the process I went through, as a mathematician with no formal training in statistics, to develop a two-semester applied statistics sequence for our mathematics majors.
The process I used to develop these courses involved reaching out to colleagues in other departments and leveraging their applied statistics experience. I will describe how I did this, so that my approach can be used at other institutions.
There are many different ways to teach statistics courses. The simplest division is between focusing on theory and focusing on applications. On the theory side, the traditional ``mathematical statistics'' course focuses on probability theory and the mathematical properties of statistical distributions. If any datasets appear in the course, they have usually been pre-cleaned and guaranteed to satisfy the hypotheses of the statistical models (e.g. the hypotheses of the Gauss-Markov Theorem \cite{wooldridge}).
Real-world datasets effectively never look this nice, and students who have only taken a mathematical statistics course do not know what to do with dirty data, or data that does not satisfy the required hypotheses. Unfortunately, the mathematical statistics course just described is the only exposure to statistics that most mathematics faculty had as undergraduates, because curricular reform in statistics has only taken off relatively recently. This paper will focus on applied statistics courses, designed to follow the GAISE Guidelines \cite{gaise-report}, Amstat Guidelines \cite{ams-guidelines}, and ASA Curriculum Guidelines \cite{asa-guidelines}, where the goal is for students to learn how to work with real data, how to build and test statistical models for the data, how to check the conditions for these models, and what to do when the conditions fail. As has been pointed out by the MAA's CUPM Guidelines \cite{cupm} (Content Recommendation 3), mathematics majors programs should include such applied data analysis content. The CUPM guidelines do not recommend a mathematical statistics course for all mathematics majors, so mathematics faculty are left with a dilemma: we can either ignore the CUPM guidelines and teach a course similar to what we took as undergraduates, a course that does not work for the majority of students \cite{moore-math-stat}, or we can ``take the plunge" and learn how to teach a truly applied statistics course, even if we have no experience with applied statistics. By describing the process I went through to develop my applied statistics sequence, I hope to convince the reader that the latter option is achievable.
I teach at a small liberal arts college where class sizes are capped at 24. My department had a mathematical statistics course on the books, but the person who developed that course left the university ten years ago. Pure mathematicians without any formal background in statistics were teaching this course more-or-less in the way described in the previous paragraph. The course never had a large enrollment, and was usually populated entirely by mathematics majors, all of whom had at least taken linear algebra. My colleagues asked me to revamp the course, to make it truly applied, and to reduce the prerequisite to Calculus 1. The new course was to be called Applied Statistics. They also asked me to create a sequel to the course, Statistical Modeling. My position in the department involves teaching courses in both mathematics and computer science, so I decided to make computation a fundamental part of the course, following recommendations in \cite{ASA2}. The applied statistics sequence I developed had almost no overlap with my department's previous mathematical statistics course. I de-emphasized probability theory, and tried to give students just enough theory to appreciate the importance of the hypothesis behind the various statistical tests and models the course covered. The course revolved around weekly labs where students analyzed real-world datasets, using the statistical computing software R and an interface called RStudio, and then wrote lab reports summarizing these analyses for a general audience. The course culminated in a final project where students brought together all the tools they had learned to analyze a dataset of their choosing. This project acted as a kind of capstone, along the lines discussed in \cite{capstone}. I will discuss how I assessed the efficacy of my approach towards achieving the desired student learning outcomes in Section \ref{sec:methodology}.
To make space to teach students R, and to teach about the various real-world datasets, I cut out almost all discussion of deriving formulas for standard errors or properties of probability distributions. I did not show students complicated formulas unless it was necessary (e.g. to see how confidence intervals decrease in size as sample size increases). Students never had to compute regression coefficients or standard errors by hand. Instead, our software computed these quantities, and students learned how to interpret the output of the software, how to verify that the software was being used correctly (i.e. checking the hypotheses of the model), and how to check that what they were observing was a real effect and not statistical noise (e.g. via cross-validation). This is not to say I taught a ``cook book" course: the course remained as rigorous as the mathematical statistics course, but the rigor was shifted away from algebraic manipulations (rendered unnecessary by the software) to carefully thinking about potential bias in the datasets, how to write code to clean the data, how to transform the data to satisfy the hypotheses the statistical models required, and how to write technical lab reports. Students completed several labs over the course of the semester, culminating in a final project. The collection of labs, the instructions I gave students for writing the labs, and the rubrics I used to grade the labs and final project are all hosted on the course webpage. For a mathematics department just beginning to teach statistics, I believe applications should be emphasized to provide students with a mental framework for understanding as many data situations as possible. An upside of this approach is that the prerequisites can be reduced, so that the applied statistics sequence can be made available to students from across campus. 
In addition, our present enrollments in the applied statistics sequence are much higher than our enrollments in mathematical statistics, so students are ``voting with their feet'' in favor of the applied approach.
My process for developing the applied statistics sequence revolved around outreach to colleagues in partner disciplines. I leveraged the applied statistical expertise of these colleagues to determine what statistics content would be most appropriate for a year-long applied statistics sequence. Research papers written by these colleagues provide excellent material for projects: the specifics of each dataset have already been cataloged, the conclusions students should reach are already known, and the faculty member can serve as a resource if students have questions. In addition, such projects naturally set up students to do research with faculty members who contribute their data and analyses. In order to make this course repeatable, and to share resources with others on campus who teach statistics courses (e.g. research methods in social science, biostatistics, experimental physics), I created a data repository, hosted on the university server, to which all contributing faculty had access. The main section of this paper, Section \ref{sec:outreach}, describes the process I used for contacting colleagues in other departments, soliciting suggestions for content (especially for the second course in applied statistics), and building the data repository. I also give examples of labs I created using these datasets.
Based on these discussions with partner disciplines, I designed the applied statistics sequence to cover content that working data scientists would need. A discussion of the topics for these two courses is provided in Section \ref{sec:applied-topics}. A selection of projects to support these topics, based on real-world datasets, is provided in Section \ref{sec:projects}. Section \ref{sec:methodology} provides a discussion of how I assessed the resulting courses. Lastly, the Appendix contains specific details about the implementation of these courses, how I structured the day-to-day classes to maximize the educational impact of the projects, and details about the course content. This Appendix is designed to help mathematicians develop an applied statistics course, if they have never taught one before.
\section{Outreach to Partner Disciplines} \label{sec:outreach}
At a small liberal arts college, ``partner disciplines'' often means the entire campus. While attempting to design the applied statistics sequence, I was fortunate to have access to helpful colleagues in a wide variety of departments. It was easy enough to find which colleagues in other departments used quantitative methods in their research. These colleagues were often the same ones who taught the ``research methods'' courses in their departments. Mathematically, these courses are equivalent to the lowest level statistics course in the mathematics curriculum, i.e. our service course for non-majors. When I learned I would be developing the statistics sequence, I reached out to statistically oriented faculty in psychology, neuroscience, biology, chemistry, physics, geoscience, economics, political science, education, sociology, and classics (computational linguistics). I also received datasets from our athletics department, from our institutional research staff (anonymized data on students from previous years), and from our investment office. Lastly, I received materials on machine learning from a computer science professor. All of these materials were put into a data repository, discussed in Section \ref{subsec:data-repository}. I found all these colleagues were willing to meet with me.
\begin{question} Here are questions I asked when meeting colleagues in other departments.
\begin{enumerate}
\item What statistical topics do you cover in your teaching?
\item What topics do you wish you could cover if you had more time?
\item What topics do you often find yourself teaching your summer research students to equip them to do research in your field?
\item What topics do you often need for your research, or that you remember from statistics you took in graduate school?
\item What are some commonly used sources of data in your field?
\item I'm in the process of creating a repository full of datasets, metadata, projects based on the data, and teaching materials. All contributors get access to the whole repository. Would you be willing to contribute the data from your research, or teaching materials from your course?
\end{enumerate}
\end{question}
Question (1) gave me a sense of which topics were most important to users of statistics, and hence belonged in the first course, Applied Statistics. It also gave me a wide range of examples to use when covering those topics, and gave me valuable information about which courses on campus to allow as prerequisites for the Statistical Modeling course, in case students who didn't take the Applied Statistics course wanted to enroll. Question (6) was a great way to get materials to flesh out the course, e.g. Excel spreadsheets and handouts working out examples. Questions (5) and (6) gave me access to a huge range of trustworthy data. I used some of this data in the projects I created, and also made it available to students for their final projects. Even colleagues who did not teach statistics in their curriculum (e.g. in chemistry, education, and classics) were able to give me access to datasets and online repositories.
Questions (2)-(4) were geared more towards the Statistical Modeling course. Through these conversations I learned about the need for principal component analysis, non-parametric statistics, time series analysis, and the generalized linear model (especially Poisson regression) in our curriculum. These conversations also served as a valuable way for me to learn about these topics and to get references I could share with my students. For example, colleagues in physics and geoscience highlighted the importance of modeling measurement error, since their data comes from physical devices, and precision is a key concern. Based on an example from the experimental physics course, I developed a short unit on measurement error for Statistical Modeling. As another example, colleagues in economics, political science, and sociology highlighted the importance of more advanced linear regression techniques for when the conditions of regression (e.g. homoscedasticity, no autocorrelation, normally distributed errors) are not met. I devoted a large chunk of Statistical Modeling to such situations, and used the examples my colleagues cited as in-class examples and labs. Finally, colleagues in geoscience, biology, and psychology all agreed that principal component analysis is the most common tool in their research, and one they consistently teach to summer research students. Now that this topic is contained in my Statistical Modeling course, and now that the prerequisites have been arranged to allow entry for non-math majors, these professors plan to send future summer research students to take Statistical Modeling.
These discussions with colleagues led to several pleasant side-effects. First, everyone I met with agreed to answer student questions if students selected their dataset for use with a final project. This meant that I did not need to become an expert on every dataset in the data repository. Secondly, I was able to act as an intermediary between different departments, to raise the overall level of statistical knowledge at the university. For example, I taught a colleague in chemistry about Q-Q plots, a tool for assessing whether a dataset is normally distributed, which I learned from a colleague in psychology. Similarly, I taught a colleague in neuroscience about some non-parametric methods I learned from a colleague in geoscience, to get around a lack of normality in his dataset. Thirdly, once faculty saw the wealth of material in the data repository, several expressed interest in setting up a reading group to discuss how best to teach statistical concepts across the curriculum. In the 2016-2017 school year, I worked with the director of our Center for Teaching and Learning to run such a reading group, with a regular attendance of more than 12 faculty. I created a syllabus of statistical pedagogy papers, and associated discussion questions. I later learned that various faculty groups over the past twenty years had called for such a gathering of faculty from diverse disciplines, to improve quantitative offerings across campus.
I hope to write a paper in the near future discussing this experience and making the reading group materials public.
The last side effect relates to the new Data Analytics (DA) major at my university. This major was proposed in 2015, and adopted in 2016. The faculty in the reading group have been strongly supportive of the new major and have created applied tracks through their majors for these data analytics students. In the spring of 2017, Data Analytics 101 was offered for the first time, similar in spirit to Applied Statistics (using R, de-emphasizing theory, and analyzing real-world datasets from the data repository), developed and team-taught by a psychologist, a political scientist, a biologist, and a mathematician. All these faculty were members of the reading group and contributors to the Data Repository. These faculty created four large-scale projects for DA 101, three of which used datasets from the Data Repository. In the near future, these materials will be made public along with the materials from my statistics sequence, and I will host a link at \href{http://personal.denison.edu/~whiteda/index}{my webpage} \cite{webpage}. The statistics sequence will serve DA majors, following the curricular recommendations of \cite{pcmi}.
\subsection{Data Repository} \label{subsec:data-repository}
Most universities have an internal server system where each faculty member can request storage space. As I gathered datasets and teaching materials from colleagues, I put them into a folder on our university server so that everyone teaching statistics could use the materials and could contribute to them. At a campus without a statistician, it seemed wise to leverage the abilities of all statistically-oriented faculty on campus. In this way, each of us could draw examples from a variety of fields, each of us could use labs developed by the others, and each of us could store the data we planned to use in a safe place in case it ceased to be available at the original source.
When I asked my colleagues Questions (3) and (4), they suggested statistical topics based on their research experiences. With a follow-up question, I was able to obtain the datasets on which they had used these statistical methods. As a result, I had real data I could use to demonstrate each topic I wanted to teach, and to create projects for students. I found that students enjoyed replicating results published by their professors and by other students who did summer research. Additionally, my colleagues often had a deep understanding of the datasets they gave me, and together we could create metadata about the datasets. In this way, anyone using the data (either future faculty or students) would know what was in the data, what results to expect from students working on the data, and any issues with the cleanliness of the data. In some fields, this type of metadata is called a \textit{codebook}. I decided early on not to clean the data for my students, since the GAISE Guidelines are clear that students should learn to work with real data, which is often messy. However, I still wanted to know the ways in which the data was messy, e.g. outliers, missing data, saved in a bad file format, etc., so that I could walk the students through the exploratory data analysis and cleaning stage.
In some fields the culture is to keep datasets private and only publish the findings from the data. The data repository allowed colleagues to make their data available for teaching purposes and only on our campus, so that the data would not find its way into the hands of a competing research group. I think the majority of my colleagues had some reticence about sharing data, but the friendly nature of our liberal arts college convinced them. Originally, I think they would have resisted making the datasets public, even for teaching purposes. However, there is currently a push on my campus to make the data repository (and associated teaching materials) publicly available, as part of my university's new major in Data Analytics. This new major received a great deal of support from the faculty I contacted in other departments, and most have agreed to allow their datasets to be made public in the coming months.
\subsection{Sample applications}
As a result of these conversations with partner disciplines, I created a list of datasets my students could draw from when choosing their final projects, and I created several labs, discussed in Section \ref{subsec:labs-repository}. An abbreviated list of datasets is provided below, to help faculty who are new to teaching statistics, but want to use real data. A complete list, with links to data sources, can be found on the course webpage for Statistical Modeling \cite{stat-modeling-webpage}. Most of the data for these projects can be found online, e.g. UCI data archive, ICPSR archive, Kaggle, Data Science Central, deeplearning.net, and KDnuggets.com.
\begin{itemize}
\item Sociology - city of Chicago data, American Time Use Survey, police data on traffic stops, Chinese Census, UK government data, Bureau of Labor Statistics, general social survey.
\item Data Mining - from Google, Facebook, Twitter, Uber, Netflix, Wikipedia, Pandora, Pew Research Center, bike-share programs, traffic data.
\item Biology - genomics, ecological forecasting, healthcare data (NESARC, ADHEALTH), NIST.
\item Astronomy - Radio jet data, FITS image data.
\item Physics - protein folding, human movement data, data from the Large Hadron Collider.
\item Geoscience - ice core data for global temperatures, Paleobiology Database.
\item Neuroscience - field of vision, connectomics, classifying personality types via relevant factors.
\item Chemistry - energy in reactions, pharmacology data.
\item Psychology - PsycINFO repository, data on brain disorders and genetic correlations.
\item Political science - polarization in Congress, religious congregations, voting patterns.
\item Economics - Conference Board, World Bank, Yahoo Finance, salary data.
\item Computer science - social networks, data on passwords.
\item Education - NAEP database, CUPP report, Common Core data, Department of Education.
\item History - trans-Atlantic slave trade database, CIA Factbook, Global Terrorism Database.
\item Sports - baseball (MLB), football (NFL), soccer (FIFA), and basketball (NBA).
\item Computational linguistics - authorship verification, natural language processing, Google Ngram.
\end{itemize}
I provided a list much like this to prospective students at course registration time (to pique their interest), at the start of the semester (to motivate them), and when I handed out the prompt for the final project. About 50\% of the students chose from one of the datasets on the list. The others found their own datasets online. When a dataset a student found online is suitable for a final project, I add it to the data repository and use the student's final project to create the metadata. In this way, the data repository is continually updated for future classes, whenever students find more usable data sources.
\subsection{Labs based on these datasets} \label{subsec:labs-repository}
As we will see in Section \ref{sec:projects} (and the Appendix), the labs in my statistics sequence were a mixture of labs drawn from statistics courses at other universities, projects taken from textbooks, and labs of my own design. In this section, we focus on labs created as a result of contacts with partner disciplines.
An early homework assignment in Applied Statistics (the first course in the sequence) involves studying several misleading graphics used in the news media (that I compiled with the aid of professors in psychology and political science). For each graphic, students write a short description of what the graphic represents, why it is misleading, and what human biases might cause a desire to mislead. Students are then directed to find another example of a misleading graphic. This assignment reinforces early content about data visualization, helps students to develop skepticism, and demonstrates the real-world utility of studying statistics. This homework takes place before students have learned R, so there is no actual data analysis in the lab.
Another pair of early assignments in Applied Statistics, while students are learning R, leads students through a data analysis of polling data. This unit is particularly popular in election years. I created this unit based on discussions with a political science professor, who pointed me to relevant datasets, methodology from Nate Silver's blog, and published sources on a few subtler points. In the first polling assignment, students analyze different graphics and write about which are most useful. Then, students conduct a rudimentary meta-analysis, whose purpose is to reveal radical differences between polling aggregation websites. Students write about bias, and how to model the trustworthiness of a poll, then compute a simple confidence interval. In the second polling assignment, students work through increasingly sophisticated models for forecasting election results, beginning with simple linear regression and progressing to regression with interaction terms and weighted regression. Students write about what makes forecasting difficult, and must carefully list the assumptions they are making for their forecast to be valid (e.g. assuming that no large events occur between now and election day to change public sentiment, assuming the trend in poll data retains the same shape at future moments in time, etc.). Students then read Nate Silver's methodology and critique it, based on what they learned.
A late RStudio lab in Applied Statistics, once students are already comfortable with R and with writing lab reports, instructs them to comb through the World Bank dataset to identify a question that they are interested in, then attempt to answer this question using multiple linear regression. Students will find that the conditions for regression are rarely satisfied: transformations to linearity are often required, scatterplots often display heteroscedasticity, and the data is spread across several years of time (leading to a likely violation of the no-autocorrelation condition). I also give the students an expository reading by an econometrician suggesting there is reason to believe that the World Bank data exhibits severe bias, since countries report their data knowing full well that the amount of funding they receive from the World Bank will be based on this data. In the end, a good lab report conducts an analysis along the lines of those we have covered in the course, with several caveats that the results should be viewed with some skepticism due to the failure of so many regression conditions. This lab can be frustrating for students, but often motivates them to continue on to Statistical Modeling (where we learn how to cope with autocorrelation, and where we learn more powerful tools for coping with heteroscedasticity). This lab also leads nicely into the final project, by giving students lots of flexibility while also holding them accountable for exercising caution throughout their analysis. Some students even choose World Bank data for their final projects, and try to learn some of the content from Statistical Modeling to fix the issues identified in this lab.
An early RStudio lab in Statistical Modeling involves conducting an exploratory data analysis of human field of vision data gathered at my university by a professor of neuroscience and his students over several years. My students re-create the figures (histograms and boxplots) from a published paper, and then improve on the analysis. First, students recreate the two-sample t-tests and p-values in the published paper. Then, students are directed to use what they have learned from our in-class examples, about checking the conditions of a test before carrying it out, and using tests appropriate to the question at hand. Ideally, students will use Q-Q plots to discover that the normality conditions are not satisfied, then use non-parametric tests for a difference in means to see if this failure affects the main results of the paper (thankfully, it does not). Lastly, strong students will realize that ANOVA should be used for a question involving multiple comparisons, and will carry out a non-parametric version of ANOVA, the Kruskal-Wallis test, to minimize the chance of a Type I error.
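The normality check at the heart of this lab (in R, typically via \texttt{qqnorm}) can be summarized numerically: sort the data, pair it with theoretical normal quantiles, and see how close the points come to a straight line. The sketch below is a hypothetical illustration in Python rather than course code, condensing a Q-Q plot into a single correlation number and using simulated data (not the actual field-of-vision data):

```python
import random
from statistics import NormalDist, fmean

def qq_correlation(sample):
    """Correlation between sorted data and theoretical normal quantiles.
    Values near 1 mean the points of a normal Q-Q plot lie close to a line."""
    n = len(sample)
    xs = sorted(sample)
    # Theoretical quantiles at plotting positions (i + 0.5) / n.
    qs = [NormalDist().inv_cdf((i + 0.5) / n) for i in range(n)]
    mx, mq = fmean(xs), fmean(qs)
    cov = sum((x - mx) * (q - mq) for x, q in zip(xs, qs))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sq = sum((q - mq) ** 2 for q in qs) ** 0.5
    return cov / (sx * sq)

rng = random.Random(0)
normal_data = [rng.gauss(0, 1) for _ in range(200)]    # plausibly normal
skewed_data = [rng.expovariate(1) for _ in range(200)] # clearly not normal
```

A skewed sample scores visibly lower than a normal one, which is exactly the pattern students are asked to spot on the plot before choosing between parametric and non-parametric tests.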
A midsemester RStudio lab in Statistical Modeling analyzes a dataset about various species of birds that I obtained from a colleague in biology. Students have flexibility in choosing the response variable, but are instructed to build the best multivariate regression model possible. This dataset is rife with missing data and impossible values. The dataset also requires transformations of data types to properly read it into R. Students must clean the data while building the model, because different choices of explanatory variables will have different missing data values to remove. The lab comes after a lengthy in-class discussion of p-hacking, so that students will realize the danger of trying numerous models and only reporting the one with the best $R^2$. For this reason, the lab requires students to use cross-validation, and to write in the Methods section about whether the way in which data is missing could lead to bias. In a more advanced class, this would be a good dataset to demonstrate multiple imputation and other methods for filling in missing data, or for testing whether the data is missing at random.
Lastly, several students chose datasets from the Data Repository for their final projects. Many of these projects could be adapted into future labs. For example, one student analyzed an internal university dataset with anonymized student data on merit-based financial aid, major, GPA, gender, race, and ranking by the admissions office upon acceptance. The analysis revealed that admissions ranking and merit-based financial aid do not predict GPA, raising serious questions about the efficacy of current administrative practice. Another student analyzed a dataset on automobile insurance from an alumnus who works in the area. This student presented her analysis during interviews and is now an actuary. Another student carried out an analysis on human movement data provided by a professor in the physics department, and later became her research assistant. Finally, as part of a reading course with another professor in my department, a student used the tools from my course to analyze a criminology dataset provided by an alumnus (now part of the data repository), and later won an internship in his company to continue her analysis over the summer.
\section{Applied Statistics Topics} \label{sec:applied-topics}
This section discusses the content I emphasized in the two-semester statistics sequence I developed. My topic selection was informed by discussions with colleagues in partner disciplines, about what topics came up most often in their research, and by discussions with statistics professors at several liberal arts colleges.
\subsection{Emphasizing real world applications}
The goal of the two-semester statistics sequence I developed, following the CUPM Guidelines \cite{cupm}, was to provide students with applied data analysis skills. For this reason, I structured the statistics sequence around weekly labs where students analyzed real-world datasets, as suggested by the GAISE Guidelines \cite{gaise-report}, culminating in a final project where students analyzed a dataset of their choosing (in 2016, this was called a ``Semester Long Project'' to encourage students to start it early). As a byproduct of this approach, students became familiar with datasets in numerous fields, and in particular with certain datasets used by my colleagues in their research. Several students went on to carry out research with those faculty members. Furthermore, when students applied for jobs, they had a portfolio of data analyses to show their skills. Many students reported that this helped them get a job, and that what they learned in the statistics sequence helps them in their jobs. The lab reports and final presentations also provide students with experience in interpreting their results in context, communicating results to non-technical audiences, and technical writing. On a daily basis, active learning techniques are used (see \cite{freeman} for the efficacy of this approach), and students are given hands-on opportunities to work with data.
While the idea of teaching via applied labs is not new in statistics (\cite{discovery-projects, halvorsen-projects, nolan-explorations, nolan-speed, nolan-book, beyond-normal, wild-pfannkuch}), this style of teaching is less popular in mathematics. When mathematicians teach statistics, theory is often emphasized over applications, and pre-cleaned data (usually, the data that comes with the textbook) is often used instead of real-world data. Emphasizing real-world data, messy data, and exploratory data analysis is much more useful for our students, and my experience suggests improved student retention when content is delivered in a hands-on, applied way, rather than in a way that emphasizes theory. By de-emphasizing theory, the course is free to include a wide variety of models, as well as an emphasis on the consequences of failures of model conditions. In particular, the course features a heavy discussion of randomization-based inference, a computer-reliant non-parametric method that is becoming increasingly important in the data analysis world. More details can be found in the Appendix.
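Part of what makes randomization-based inference so well suited to a computing-centered course is that the entire procedure fits in a few lines of code. Below is a minimal sketch of a two-sample randomization test, written in Python for illustration (the course itself uses R, and the data here is invented):

```python
import random
from statistics import fmean

def randomization_test(group_a, group_b, reps=5000, seed=0):
    """Two-sided randomization test for a difference in group means.
    Returns the proportion of label shuffles whose mean difference is at
    least as extreme as the observed one (an approximate p-value)."""
    observed = abs(fmean(group_a) - fmean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    rng = random.Random(seed)
    extreme = 0
    for _ in range(reps):
        rng.shuffle(pooled)  # re-deal the group labels at random
        diff = abs(fmean(pooled[:n_a]) - fmean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / reps

control = [5.1, 4.8, 5.6, 5.0, 4.9, 5.2, 5.3, 4.7]
treated = [6.0, 5.9, 6.4, 5.8, 6.1, 6.3, 5.7, 6.2]
```

No distributional assumptions are needed: the null hypothesis is simulated directly by shuffling, which is precisely why this method pairs so naturally with statistical software.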
I found the Isostat email listserv \cite{isostat} an extremely valuable resource. This listserv is populated by statisticians discussing statistics pedagogy. Many listserv members are at small liberal arts colleges. People on the listserv were willing to share textbook recommendations, projects they had used, and exam questions. They also helpfully explained concepts to me when I was first learning the material in these courses. I strongly encourage any mathematician assigned to teach statistics to consult this email list. It is free to join. For more about teaching statistics at liberal arts colleges, see \cite{stats-lib-arts, stats-lib-arts2}.
\subsection{First Course in Statistics}
Leveraging the mathematical maturity guaranteed by a calculus prerequisite, the Applied Statistics course I developed contains all the usual topics that a first course in statistics generally covers, plus a bit more: students learn how to visualize data, how to explore data, how to fit multivariate linear models to data (including ANOVA models), correlation vs. causation, basic tools from probability theory (taught via simulation methods as much as possible), sampling distributions, common statistical distributions (e.g. normal distribution), how to compute confidence intervals, and how to do inference regarding data and regarding regression models. This content usually takes about 80\% of the course, leaving roughly 2-3 weeks to cover advanced topics such as multi-way ANOVA, rank-based non-parametric tests, or logistic regression (or to simply slow down the first part of the course). The flexibility also allows for a deeper focus on R at the beginning of the semester, if a particular group of students needs it.
A detailed schedule, created with help from professors of research methods courses in other disciplines, is available on the course webpage \cite{white-242-2016}. Briefly: the course begins with an overview of the statistical programming language R, daily homework from a website called DataCamp helps students build familiarity with R, and class sessions proceed in an interactive way, as I use a prepared RStudio workbook to analyze a dataset related to the night's reading, following suggestions from the class. As the students develop proficiency in R and basic statistical modeling techniques, via the OpenIntro labs \cite{open-intro} in DataCamp, we transition into spending more class time with students working in RStudio, and I begin to assign highly structured labs in RStudio (rather than DataCamp) following the model developed by Wagaman \cite{wagaman}, including several labs based on the datasets in the data repository. These labs feature increasingly dirty data, and increasingly more freedom for students to decide how to carry out their analysis (i.e. less and less structure in the skeleton RStudio file I provide to students). The labs culminate in the semester-long project.
As a liberal arts professor, I've found that a first course in statistics can be an excellent place to teach students about skepticism and fallacies. For the unit on data visualization, the class covered numerous examples of media and politicians using such visualizations to mislead. For the unit on correlation, we discussed numerous logical fallacies (led by ``correlation does not imply causation'', of course, and supplemented by examples from the website \href{http://www.tylervigen.com/spurious-correlations}{http://www.tylervigen.com/spurious-correlations} \cite{spurious}). Lastly, when covering probability theory, we covered Bayes' Theorem and discussed applications to medical testing. The basic idea is that, if students ever get a positive test for some terrible disease, their probability of actually having the disease is lower than they might expect, and thus they should consider getting a re-test. The course devotes a substantial amount of time throughout to the dangers of ``p-hacking,'' where a researcher runs many tests until finding something interesting. Students read case studies and news articles, and write short responses to identify whether or not a study should be viewed with suspicion and why. These sorts of real-world applications of the concepts of the course keep students engaged. I also presented this section of the course to the faculty reading group, and many faculty in other departments expressed interest in including a similar module in their own courses.
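The medical-testing example can be made concrete with a one-line application of Bayes' Theorem. The prevalence and accuracy figures below are illustrative assumptions, not data from the course:

```python
def posterior_given_positive(prevalence, sensitivity, specificity):
    """P(disease | positive test), by Bayes' Theorem."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# A rare disease (1 in 1000) with a seemingly accurate test (99% sensitivity,
# 99% specificity): most positive results are still false positives.
p = posterior_given_positive(prevalence=0.001, sensitivity=0.99, specificity=0.99)
# p is roughly 0.09: under a 1 in 10 chance of actually having the disease.
```

Students find the result counterintuitive, which is exactly the point of the exercise, and it motivates the advice to seek a re-test.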
I have taught the Applied Statistics course twice. After switching from a mathematical/theoretical approach to an applied approach, enrollments boomed, so that now we offer multiple sections each semester, rather than one section per year. Additionally, as word spread of the applied nature of the course, the proportion of non-math majors taking the class has drastically increased. Members of the faculty reading group, and contributors to the data repository, now encourage their students to take this class, and some have even sat in on the class themselves. The average GPA has not changed with the switch from my colleagues' mathematical statistics course to my Applied Statistics course, and students report on evaluations that the class is very challenging. Thus, the increased enrollment suggests students really prefer the applied focus of the course, even if it means learning R and statistics simultaneously.
\subsection{Second Course in Statistics}
There are many options for a second course in statistics (\cite{ams-guidelines, 2nd-design, cobb-2nd-mathy, wagaman}). The most common focuses on regression - multivariate linear regression, ANOVA, and logistic regression - with the punchline that ``everything is regression'' via the generalized linear model. This is the second course I chose to teach, and I used the Stat2 book \cite{stat2}. This book is extremely easy to read, and only requires that students have exposure to basic statistics and a bit of R. Hence, the prerequisites can be kept very low. Just like my Applied Statistics course, I emphasized real-world projects, building up to a final project. Each project contained a written report, in which students summarized their findings to an audience I gave them, e.g. to a senator, to a dean, to a CEO, etc. I devoted particular attention to building models, choosing between different models, the conditions required of the various models, and what remains true when these conditions fail to be met. A daily course plan including readings, weekly projects, daily R practice in DataCamp, and videos showing students how to program in R, can be found on the course webpage \cite{stat-modeling-webpage}: \href{http://personal.denison.edu/~whiteda/math401spring2016.html}{http://personal.denison.edu/$\sim$whiteda/math401spring2016.html}. Many of these materials were not developed by me. I found them with help from the isostat listserv \cite{isostat} and thus was able to focus my attention on creating the new labs discussed in Section \ref{subsec:labs-repository}, and included in the data repository. More details on the day-to-day structure of this course can be found in the Appendix.
One benefit of statistics over mathematics is the ability for students to take such a wide variety of classes after completing their first course, as opposed to the sequential nature of a mathematics major. At the end of the semester, all of my non-graduating students (and two who graduated but still lived in the area) requested more statistics, so I offered a seminar in the fall of 2016 on Bayesian statistics and survival analysis. Then, in the spring of 2017, I offered a seminar on data mining and time series analysis for the same students, joined by others from my Applied Statistics course from the fall of 2016. In the future, I hope to turn these seminars into real upper level electives, now that sufficient interest in statistics has been established.
\section{Data-driven projects} \label{sec:projects}
In this section, I describe projects created based on my conversations with partner disciplines, datasets and repositories shared by my colleagues, and teaching materials I received from statisticians via the isostat listserv. More details can be found in the Appendix.
\subsection{Applied Statistics Course} \label{subsec:242-projects}
My goal for Applied Statistics was that students would be capable of conducting a self-driven analysis of a dataset of their choosing. I wanted students to be able to detect issues of bias in a dataset, to clean a given dataset, to choose the correct procedures to analyze the dataset, and to write a report summarizing the results of their findings to a lay readership. Following the principles of backwards design, I decided that the course should culminate in a final project where students do precisely these tasks. I then developed labs to build students up to the final project, and I structured quizzes and exams to emphasize the skills needed to complete the final project (as discussed in Section \ref{sec:applied-topics}). I began the course with basic exercises in R, to supplement the unit on data visualization. I quickly transitioned into the Open Intro labs \cite{open-intro}, and in some weeks I was able to assign two such labs, leveraging the fact that my students all had more mathematical maturity thanks to the calculus prerequisite. Working through these labs, and the basic exercises in R, occupied the first half of the course (the first 7 weeks). During this time, students were learning more about RStudio in class, and at times even asked if they could use RStudio instead of DataCamp. Hence, in the second half of the semester, students were ready for the switch to RStudio.
At this time, students are given access to the data repository, and instructed to find a dataset either from the repository or from the internet. Students then write a two-page proposal for their final project. The proposal summarizes the dataset, identifies a response variable to study, suggests how the various topics from the course will be used in the analysis, and reflects on potential bias in the data. The prompt for this writing assignment is hosted at \cite{white-242-2016}. Students are not allowed to gather novel data, because at this point we have not discussed experimental design or issues related to the Institutional Review Board (IRB).
For the second half of the semester, students complete weekly projects in RStudio, starting with highly structured projects very similar to those in DataCamp, and culminating with free form data analyses. These projects are based on those of Wagaman \cite{wagaman}, but some use datasets from the data repository, as detailed in Section \ref{sec:outreach}. During all labs in the latter half of the course, students are directed to make steady progress in their final projects. Each week, they are supposed to carry out tests and build models for their own dataset, based on
the content we are learning. In the final two weeks of the course, students are free to write their final papers, turning all the analyses they have done into a coherent story. In the last week of the semester, students give presentations of their results, and hand in their final papers. In the past, student projects have replicated results from professors at my university (re-analyzing datasets from the data repository), have extended analyses from the published papers connected to the datasets in the data repository, and have conducted entirely new analyses. Once, a student project found a mistake in a published paper, and was able to conduct a correct analysis using the tools from the course (thankfully, yielding the same result as the published paper).
On several occasions, I have felt student final projects were of publishable quality, and this is even more true of projects in Statistical Modeling. I am currently working with four different students to get their final papers ready to submit to undergraduate research journals.
\subsection{Statistical Modeling Course} \label{subsec:401-projects}
The learning goals for the Statistical Modeling course were similar to those for Applied Statistics: I wanted students to be able to carry out a detailed final project using the tools from the course. Hence, the Statistical Modeling course was also project-driven, now with both a midsemester project and a final project. I de-emphasized exams and quizzes to add weight to the projects, and each project had both a written part and an R part. In the written part, students wrote an Introduction, a Results and Conclusions section, and a Methods section. The introduction was always written to a non-technical audience. The results section needed to contain just the punchlines and the two best graphics produced. The methods section discussed potential biases, any decisions the students had to make about outliers, justification of the conditions for tests and models, and the potential impact of their decisions on the final results. The written part was required to remain below three pages. Separately, the R part would walk students through an analysis, so that they could better understand the week's topic and how to implement it in R, and then would give them the dataset to analyze for their project. Students could include as many supporting tests and graphics as they wanted in this document (viewed as an appendix to the main paper). Some of the materials for these labs were adapted from \cite{kuiper-book}, some from Amy Wagaman of Amherst College, and some from colleagues in other departments (replicating their research papers, as described in Section \ref{subsec:labs-repository}). In the future, I will also design labs based on the midsemester and final projects of these students.
Early projects were extremely structured, so that students really only needed to interpret the output of code I gave them, or write basic commands of their own modifying that code. Later projects were not structured at all, instead just giving students space to write R code and conclusions. In addition to these projects, students completed midsemester and final projects on topics of their choosing. I required that the midsemester project involve an in-depth data analysis using both multivariate linear regression and logistic regression. The final project could be another data-driven project (required to use ANOVA, principal component analysis, or time series analysis), a theory project (e.g. for students interested in graduate school), or an R programming project. Several of these projects were excellent, and I encouraged students to turn them into publishable papers. For example, one data-driven project \cite{trevor-genome} models which of five learning disabilities a patient has, based on their genome. One theory project \cite{tybl} is a comprehensive overview of the field of spatial econometrics, where correlations based on geographic location are taken into account. One programming project \cite{trevor-R-code} conducts an exhaustive search over pairs of exponents $(p,q)$ in an attempt to make $y^p$ and $x^q$ linearly related. This code is now available on GitHub and can be imported into RStudio.
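The idea behind that exhaustive search can be sketched as follows. This is a minimal reimplementation of my own, not the student's published code; the grid of candidate exponents and the use of absolute Pearson correlation as the linearity criterion are assumptions for the sketch.

```python
import numpy as np

def best_power_pair(x, y, exponents=(0.5, 1.0, 1.5, 2.0, 2.5, 3.0)):
    """Exhaustively search exponent pairs (p, q), returning the pair that
    maximizes the absolute correlation between y**p and x**q."""
    best_p, best_q, best_r = None, None, -1.0
    for p in exponents:
        for q in exponents:
            # Pearson correlation of the transformed variables.
            r = abs(np.corrcoef(np.asarray(y) ** p, np.asarray(x) ** q)[0, 1])
            if r > best_r:
                best_p, best_q, best_r = p, q, r
    return best_p, best_q, best_r

# Sanity check: if y is exactly x**2, a winning pair should satisfy q = 2p,
# since y**p = x**(2p) is linearly related to x**q exactly when q = 2p.
x = np.linspace(1.0, 10.0, 50)
y = x ** 2
p, q, r = best_power_pair(x, y)
```

In practice one would transform the data with the winning exponents and then fit an ordinary linear regression.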
\section{Assessing the efficacy of this approach} \label{sec:methodology}
As discussed in Sections \ref{sec:intro} and \ref{sec:applied-topics}, when I began teaching statistics, my department did not have any applied statistics courses. Now, we have the two-semester sequence described in this paper, the new course DA 101 mentioned in Section \ref{sec:outreach}, and material for several more applied electives from the seminars I ran in 2016 and 2017 (see Section \ref{sec:applied-topics}). Enrollments in statistics increased in each of the past three years, leading to an additional section of Applied Statistics being offered in the fall of 2017. Notably, enrollments from non-math majors increased drastically.
On the formal end-of-semester evaluations, students are asked to rate how challenging they found the course, whether their interest increased, and whether their knowledge increased (among other questions). In Applied Statistics in 2016, 85\% of students reported that the course was challenging, 80\% reported that their interest increased, and 85\% reported that their knowledge increased. In Statistical Modeling, 100\% reported that the course was challenging, 84\% reported that their interest increased, and 100\% reported that their knowledge increased. I view these results as justification that these courses are not watered-down mathematics courses. They are challenging and thought-provoking in different ways than traditional mathematics courses, but lead to more interest from students, and higher enrollments.
I asked students to fill out an optional supplemental evaluation at the end of each semester. In Applied Statistics, there was a 100\% response rate, and in Statistical Modeling it was 92\%. In Applied Statistics, 85\% reported that they would like to take another statistics course. All reported that they believed the course content will be important in their careers. In Statistical Modeling, all students reported that they wish they had taken statistics earlier in their college careers. All reported that they believed the course content will be important in their careers. 82\% reported that they would take more statistics after Statistical Modeling, if it were offered. In surveys from both courses, 63\% reported that they would have considered graduate work in statistics (this percentage is significantly higher than the percentage of students my university sends to graduate school in any department).
In terms of my learning objectives for the course, I can report that I am consistently blown away by the high quality of student final projects. I greatly enjoy watching students grow from their early days of being completely unable to use R, up through proficiency in the labs, and into mastery of R, data analysis, and technical writing for the projects. Several projects have been of publishable quality, as discussed in Section \ref{subsec:242-projects}.
Lastly, the development of the two-semester applied statistics sequence, in consultation with faculty from other departments and designed around the data repository, has had several beneficial consequences around campus. Since most departments that have contributed to the data repository already require calculus, the calculus prerequisite for Applied Statistics is no obstacle for their students. Faculty in physics, biology, chemistry, neuroscience, and economics have all encouraged their students to take Applied Statistics, and the current enrollment composition reflects this. As discussed in Section \ref{sec:outreach}, one positive consequence (that I hope to explore in a future paper) was the formation of a faculty reading group, discussing commonalities across the various quantitative methods courses on campus, sharing teaching materials, and updating course content accordingly. Multiple departments now use R, teach randomization-based inference, and emphasize the importance of reproducible analyses and the dangers of p-hacking. The faculty who shared datasets with the data repository have been happy to serve as project supporters when my students choose to analyze their data. When student analyses have gone beyond what the faculty member originally did, it has on several occasions led to the student conducting research with the faculty who contributed the dataset. Now more and more faculty are contributing to the data repository, and students from the new data analytics major will also be able to analyze these datasets, as part of their required self-driven data analysis projects in the major.
\section{Conclusion}
In this paper, I describe my experiences as a pure mathematician tasked with creating a two-semester statistics sequence. I achieved this goal by reaching out to partner disciplines with a series of questions designed to help me create courses that could best serve their majors. I received course materials and datasets from these colleagues and housed them in a Data Repository. I also convinced these colleagues to serve as project supporters for projects related to the datasets they shared. I synthesized the materials I received from colleagues with materials given to me by statisticians on the Isostat listserv \cite{isostat}, and added several labs and homework exercises of my own design (Section \ref{subsec:labs-repository}).
This process resulted in two applied courses, built around daily real-world examples, weekly labs, and a final project. In the end, students learned how to cope with dirty data, how to wrangle data in R, how to build statistical models and run statistical tests, how to use statistics to debunk or support claims about the world, and how to summarize their findings to a non-technical audience. Students strengthened their liberal arts skills: critical thinking, an interdisciplinary perspective on the world, attention to the ethics of certain types of data and experimental design, and healthy skepticism when presented with statistical arguments. Through their individual projects, students also learned how to teach themselves to do things in R, what to do when their data situation didn't match anything we'd learned, and how to check their conclusions against what they know about the world in order to have confidence that they'd done the analysis correctly.
After developing these courses, I shared my course materials with the same colleagues in partner disciplines. As a result, these colleagues requested a reading group to discuss statistical pedagogy across the campus, and several of them went on to develop a data analysis course using materials from the Data Repository. I have made my course materials publicly available on the course webpage, and am in the process of making the Data Repository and all associated labs freely and publicly available through the university's new Data Analytics major.
I believe the model outlined in this paper can be replicated at other liberal arts colleges. I recommend starting with a well-respected colleague in another department, who is friendly towards the mathematics department. In my case, I began with a neuroscience professor who was also chair of the faculty at the time. Once this other professor is on board with contributing to the data repository, I recommend reaching out to other friendly professors until a critical mass is formed. After enough professors have joined, the incentives are higher for reluctant professors - after all, every professor who joined the data repository received access to the course materials of all others who had joined. If relationships between the mathematics department and other departments are strained, then some mending of fences may be required in order for the approach discussed in this paper to work, but new faculty members might still have a chance to make inroads with other departments.
When I began this process, I did not know whether statistics courses had a canonical selection of topics - the research methods courses across campus had some overlap but in general emphasized different statistical modeling frameworks (e.g. econometrics focused on regression, psychology focused on experimental design, and physics focused on measurement error). In the Appendix, I have provided details about the content I settled on, based on conversations with partner disciplines and other statisticians. I believe this content, and the emphasis on labs and final projects, provides enough depth so that students can conduct meaningful statistical analysis, and is broad enough so that nothing essential is missing. However, it is worth noting that other selections of content are possible, and might be preferable based on the types of statistical modeling conducted by colleagues in other departments. Unlike mathematics majors, statistics students are often not expected to have all seen the same core content. It is more important that they know how to look for issues of bias, how to wrangle real-world data, and how to learn new modeling frameworks as needed. Thus, a different selection of topics might be perfectly fine. In any event, the community of statistics professors on the Isostat listserv \cite{isostat} is extremely friendly, and I strongly recommend contacting the listserv often as any new statistics course is being developed.
Lastly, I wish to include a word about developing applied statistics courses within traditional mathematics departments. I am fortunate to have a group of senior colleagues who strongly encourage junior faculty to experiment, and who value mathematical modeling. In a department less open to change, and with no statistician on the faculty, it may be dangerous for a junior faculty member to attempt such a drastic course overhaul. In this situation, I recommend reaching out to the statistics community via Isostat \cite{isostat} and trying to invite senior statistics professors to visit (e.g. as part of a colloquium). When the mathematics department sees that statisticians are teaching statistics in the applied way described in this paper, they may be more open to allowing a junior faculty member to mimic this approach. If possible, having a statistician involved in an external departmental review would also provide an impetus to the mathematics department to be more open to applied statistics courses. Finally, there is a copious literature, including curricular recommendations, on teaching applied statistics courses, which I have tried to make accessible to the reader via the bibliography (notably, \cite{gaise-report, ASA2, ams-guidelines, pcmi, asa-guidelines}, and \cite{cupm} all contain curricular recommendations for including more applied statistics), and sharing some of these sources with senior faculty might make them more open to applied statistics content within the mathematics major.
\section{Introduction}
Roe-type algebras are $C^\ast$-algebras associated to discrete metric spaces. They encode the coarse (or large-scale) geometry of the underlying metric spaces and have been well studied, providing a link between the coarse geometry of metric spaces and operator algebra theory (e.g., \cite{ALLW17, MR1876896, MR1739727, MR3158244, LL, LW18, MR1763912, MR2873171, Scarparo:2016kl, MR1905840, MR2800923, WZ10}). They feature in the (uniform) coarse Baum-Connes conjecture (e.g., \cite{MR1905840, MR2523336, Yu95, MR1451759, Yu00}), and have also recently been found useful in the study of topological phases of matter (e.g., \cite{Kub,EwertMeyer}). A natural question concerns the rigidity of these algebras: how faithfully does the $C^\ast$-algebra encode the coarse geometry of the metric space, i.e., can the coarse geometry of a metric space be recovered from its Roe algebra? This question was studied in depth in \cite{MR3116573, WW}.
We will consider metric spaces with bounded geometry defined as follows:
\begin{defn}
Let $X$ be a metric space. Then $X$ is said to have \emph{bounded geometry} if for all $R\geq 0$ there exists $N_R\in\mathbb{N}$ such that for all $x\in X$, the ball of radius $R$ about $x$ has at most $N_R$ elements.
\end{defn}
Note that every metric space with bounded geometry is necessarily countable and discrete. The simplest examples of such metric spaces are finitely generated discrete groups equipped with word metrics. Other interesting examples are box spaces of finitely generated residually finite groups (see e.g. \cite[Definition~6.3.2]{MR2562146}). We are particularly interested in (bijective) coarse equivalence classes of such metric spaces in the following sense:
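For instance, a finitely generated group has bounded geometry: if $G$ is generated by a finite symmetric set $S$, then every element of the ball $B_R(e)$ in the word metric is a product of at most $R$ generators, so (a standard counting estimate, included here for concreteness)

```latex
|B_R(x)| \;=\; |B_R(e)| \;\le\; \sum_{k=0}^{R} |S|^k \;\le\; (|S|+1)^R,
```

where the last inequality follows from $\binom{R}{k}\geq 1$ in the binomial expansion of $(|S|+1)^R$. Hence one may take $N_R=(|S|+1)^R$ in the definition above.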
\begin{defn}
Let $X$ and $Y$ be metric spaces.
\begin{itemize}
\item A (not necessarily continuous) map $f:X\rightarrow Y$ is said to be \emph{uniformly expansive} if for all $R>0$ there exists $S>0$ such that if $x_1,x_2\in X$ satisfy $d(x_1,x_2)\leq R$, then $d(f(x_1),f(x_2))\leq S$.
\item Two maps $f,g:X\rightarrow Y$ are said to be \emph{close} if there exists $C>0$ such that $d(f(x),g(x))\leq C$ for all $x\in X$.
\item Two metric spaces $X$ and $Y$ are said to be \emph{coarsely equivalent} if there exist uniformly expansive maps $f:X\rightarrow Y$ and $g:Y\rightarrow X$ such that $f\circ g$ and $g\circ f$ are close to the identity maps, respectively. In this case, we say both $f$ and $g$ are \emph{coarse equivalences} between $X$ and $Y$.
\item We say a map $f:X\to Y$ is a \emph{bijective coarse equivalence} if $f$ is both a coarse equivalence and a bijection. In this case, we say $X$ and $Y$ are \emph{bijectively coarsely equivalent}.
\end{itemize}
\end{defn}
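As a simple illustration (a standard example, not taken from the references above), $\mathbb{Z}$ and $2\mathbb{Z}$, each with the metric inherited from $\mathbb{R}$, are bijectively coarsely equivalent via

```latex
f:\mathbb{Z}\to 2\mathbb{Z},\qquad f(n)=2n,\qquad d\big(f(m),f(n)\big)=2\,d(m,n).
```

Indeed, $f$ is a bijection, and both $f$ and $f^{-1}$ are uniformly expansive: given $R>0$, one may take $S=2R$ for $f$ and $S=R$ for $f^{-1}$ in the definition above.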
It was shown in \cite[Theorem 4]{BNW} that if $X$ and $Y$ are coarsely equivalent metric spaces with bounded geometry, then their uniform Roe $C^\ast$-algebras are Morita equivalent\footnote{Note that for $\sigma$-unital (in particular unital) $C^\ast$-algebras, being Morita equivalent is the same as being stably $*$-isomorphic \cite[Theorem 1.2]{BGR}.}. A partial converse was obtained in \cite{MR3116573} under the assumption that the metric spaces have Yu's property A (see \cite[Definition 2.1]{Yu00}), which can be regarded as a coarse variant of amenability. In fact, it was shown in \cite{MR3116573} that under the assumption of property A, $*$-isomorphisms between Roe-type $C^\ast$-algebras yield coarse equivalences:
\begin{thm}\cite[Theorem 4.1, Theorem 6.1 and Corollary 6.2]{MR3116573}
Let $X$ and $Y$ be metric spaces with bounded geometry and property A.
\begin{enumerate}
\item $X$ and $Y$ are coarsely equivalent if and only if their uniform Roe $C^\ast$-algebras are stably $*$-isomorphic.
\item If $A^\ast(X)$ and $A^\ast(Y)$ are Roe-type $C^\ast$-algebras associated to $X$ and $Y$ respectively, and there is a $*$-isomorphism between $A^\ast(X)$ and $A^\ast(Y)$, then $X$ and $Y$ are coarsely equivalent. The same conclusion holds for the stable uniform Roe $C^\ast$-algebra.
The converse is true for $UC^\ast(X)$, $C^\ast(X)$, and $C^\ast_s(X)$.
\end{enumerate}
\end{thm}
If there is a bijective coarse equivalence between two metric spaces with bounded geometry, then the proof of \cite[Theorem 4]{BNW} or \cite[Proposition~2.3]{LL} shows that their uniform Roe $C^\ast$-algebras are $*$-isomorphic. A partial converse to this statement was obtained in \cite{WW} under the assumption that the metric spaces have finite decomposition complexity as defined in \cite{GTY}.
\begin{thm}\cite[Corollary 1.16]{WW}
Let $X$ and $Y$ be metric spaces with bounded geometry. Suppose $X$ and $Y$ have finite decomposition complexity. Then $X$ and $Y$ are bijectively coarsely equivalent if and only if their uniform Roe $C^\ast$-algebras are $*$-isomorphic.
\end{thm}
In the purely algebraic setting, it was shown in \cite[Theorem~5.1 and Theorem~6.1]{MR3116573}, \cite{WW} and \cite[Theorem~8.1]{BF18} that the results above hold without requiring property A or finite decomposition complexity as follows:
\begin{thm}
Let $X$ and $Y$ be metric spaces with bounded geometry.
\begin{enumerate}
\item $X$ and $Y$ are bijectively coarsely equivalent if and only if their algebraic uniform Roe algebras are $*$-isomorphic.
\item If $A[X]$ and $A[Y]$ are algebraic Roe-type algebras associated to $X$ and $Y$ respectively, and there is a $*$-isomorphism between $A[X]$ and $A[Y]$, then $X$ and $Y$ are coarsely equivalent. The same conclusion holds for the algebraic stable uniform Roe algebra.
The converse is true for $U\mathbb{C}[X]$, $\mathbb{C}[X]$, and $\mathbb{C}_s[X]$.
\end{enumerate}
\end{thm}
In this paper, we will study the rigidity problem for the $\ell^p$ analog of uniform Roe algebras and other Roe-type algebras for $1\leq p<\infty$.
\begin{defn} \label{pRoedef}
Let $(X,d)$ be a metric space with bounded geometry and $1\leq p<\infty$. For an operator $T=(T_{xy})_{x,y\in X}\in B(\ell^p(X))$, where $T_{xy}=(T\delta_y)(x)$, we define the \emph{propagation} of $T$ to be
\[ \mathop{\rm prop}(T)=\sup\{ d(x,y):x,y\in X,T_{xy}\neq 0 \}\in[0,\infty]. \]
We denote by $\mathbb{C}_u^p[X]$ the unital algebra of all bounded operators on $\ell^p(X)$ with finite propagation. The \emph{$\ell^p$ uniform Roe algebra}, denoted by $B^p_u(X)$, is defined to be the operator norm closure of $\mathbb{C}_u^p[X]$ in $B(\ell^p(X))$.
\end{defn}
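As a finite-dimensional toy model (our own illustration, not from the text: take $X=\{0,\dots,n-1\}$ with $d(i,j)=|i-j|$, so that operators on $\ell^p(X)$ are $n\times n$ matrices), the propagation of Definition \ref{pRoedef} is simply the bandwidth of the matrix:

```python
import numpy as np

def propagation(T):
    """Propagation of a matrix T over X = {0, ..., n-1} with d(i, j) = |i - j|:
    the largest |i - j| such that T[i, j] != 0 (by convention 0 for T = 0)."""
    rows, cols = np.nonzero(T)
    if len(rows) == 0:
        return 0
    return int(np.max(np.abs(rows - cols)))

# A tridiagonal matrix, e.g. the discrete Laplacian, has propagation 1;
# a diagonal (multiplication) operator has propagation 0.
n = 6
lap = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
```

On an infinite space the same idea applies entrywise; the uniform Roe algebra then closes up the finite-propagation (banded) operators in operator norm.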
The $\ell^p$ uniform Roe algebra belongs to a class of algebras that we may call $\ell^p$ Roe-type algebras (see Definition \ref{Roetype}). Such algebras were considered in \cite{MR3557774} in connection with criteria for Fredholmness. Other examples of $\ell^p$ Roe-type algebras that may be of particular interest are the following:
\begin{enumerate} \label{egRoetype}
\item The $\ell^p$ uniform algebra of $X$, denoted $UB^p(X)$, is the operator norm closure of the algebra $U\mathbb{C}^p[X]$ of all finite propagation bounded operators $T$ on $\ell^p(X,\ell^p)$ such that there exists $N\in\mathbb{N}$ such that for all $x,y\in X$, we have that $T_{xy}$ is an operator on $\ell^p$ of rank at most $N$.
\item The $\ell^p$ Roe algebra of $X$, denoted $B^p(X)$, is the operator norm closure of the algebra $\mathbb{C}^p[X]$ of all finite propagation bounded operators $T$ on $\ell^p(X,\ell^p)$ such that for all $x,y\in X$, we have $T_{xy}\in\overline{M}^p_\infty$, where $\overline{M}^p_\infty=\overline{\bigcup_{n\in\mathbb{N}}M_n(\mathbb{C})}\subset B(\ell^p(\mathbb{N}))$.
\end{enumerate}
An alternative definition of the $\ell^p$ Roe algebra is to require $T_{xy}\in K(\ell^p(\mathbb{N}))$ for all $x,y\in X$. When $p\in(1,\infty)$, we have $\overline{M}^p_\infty=K(\ell^p(\mathbb{N}))$ so we get the same algebra as above. However, when $p=1$, $\overline{M}^1_\infty$ is strictly contained in $K(\ell^1(\mathbb{N}))$. Nevertheless, we still get an $\ell^p$ Roe-type algebra that is a coarse invariant, and our rigidity result still holds for this algebra.
Another related example that is not quite a Roe-type algebra but exhibits similar behavior is the stable $\ell^p$ uniform Roe algebra of $X$, denoted $B^p_s(X)$, and is the operator norm closure of the algebra $\mathbb{C}^p_s[X]$ of all finite propagation bounded operators $T$ on $\ell^p(X,\ell^p)$ such that there exists a finite-dimensional subspace $E_T\subset\ell^p$ such that for all $x,y\in X$, we have $T_{xy}\in B(E_T)$.
One can easily check that it is isomorphic to $B^p_u(X)\otimes K(\ell^p(\mathbb{N}))$ for $p\in[1,\infty)$.
We can also give an alternative definition of the stable $\ell^p$ uniform Roe algebra by requiring the existence of some $k\in \mathbb{N}$ such that $T_{xy}\in M_k(\mathbb{C})$ for all $x,y\in X$. This algebra will then be isomorphic to $B^p_u(X)\otimes\overline{M}^p_\infty$ for $p\in[1,\infty)$, and is different from the algebra defined above only when $p=1$.
It may be worth noting that $\ell^1$ Roe-type algebras are structurally different from $\ell^p$ Roe-type algebras for $p\in(1,\infty)$: an $\ell^1$ Roe-type algebra may fail to contain some finite rank operators, while $\ell^p$ Roe-type algebras contain all finite rank operators when $p\in(1,\infty)$. An example is given in Remark \ref{rankone} for the $\ell^1$ uniform Roe algebra.
The following is a summary of the main results of this paper:
\begin{thm} (see Theorem \ref{thm1}, Theorem \ref{thm2}, and Theorem \ref{thm:Roetype})
Let $X$ and $Y$ be metric spaces with bounded geometry, and let $p\in [1,\infty)\setminus\{2\}$.
\begin{enumerate}
\item $X$ and $Y$ are bijectively coarsely equivalent if and only if their $\ell^p$ uniform Roe algebras are isometrically isomorphic.
\item $X$ and $Y$ are coarsely equivalent if and only if their $\ell^p$ uniform Roe algebras are stably isometrically isomorphic.
\item If $A^p(X)$ and $A^p(Y)$ are $\ell^p$ Roe-type algebras associated to $X$ and $Y$ respectively, and there is an isometric isomorphism between $A^p(X)$ and $A^p(Y)$, then $X$ and $Y$ are coarsely equivalent. The same conclusion holds if $B^p_s(X)$ and $B^p_s(Y)$ are isometrically isomorphic.
The converse is true for $UB^p(X)$, $B^p(X)$, and $B^p_s(X)$.
\end{enumerate}
\end{thm}
\begin{cor} (see Corollary \ref{cor2} and Corollary \ref{cor:Roetype})
Let $X$ and $Y$ be metric spaces with bounded geometry, and let $p\in [1,\infty)$.
\begin{enumerate}
\item $X$ and $Y$ are bijectively coarsely equivalent if and only if $\mathbb{C}^p_u[X]$ and $\mathbb{C}^p_u[Y]$ are isometrically isomorphic.
\item If $A^p[X]$ and $A^p[Y]$ are algebraic $\ell^p$ Roe-type algebras associated to $X$ and $Y$ respectively, and there is an isometric isomorphism between $A^p[X]$ and $A^p[Y]$, then $X$ and $Y$ are coarsely equivalent. The same conclusion holds if $\mathbb{C}^p_s[X]$ and $\mathbb{C}^p_s[Y]$ are isometrically isomorphic.
The converse is true for $U\mathbb{C}^p[X]$, $\mathbb{C}^p[X]$, and $\mathbb{C}^p_s[X]$.
\end{enumerate}
\end{cor}
Note that we do not need to assume property A or finite decomposition complexity in the theorem, as long as we exclude the case $p=2$.
Isometric isomorphisms between $\ell^p$ uniform Roe algebras are spatially implemented by invertible isometries between the underlying $\ell^p$ spaces (see Lemma~\ref{spatially implemented}). Moreover, we have Lamperti's theorem from \cite{Lamp} that describes such invertible isometries when $p\in[1,\infty)\setminus\{2\}$. In particular, Lamperti's theorem provides a bijective map between the underlying metric spaces, and this map turns out to be a coarse equivalence. Another consequence of Lamperti's theorem is that such isometric isomorphisms map matrix units to matrix units up to multiplying by a scalar with absolute value one, which makes the arguments in the $p\neq 2$ case slightly simpler than in the $p=2$ case (e.g. in the proofs of Lemma \ref{Lem2} and Lemma \ref{Lem2s}).
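To make the weighted-permutation form concrete: in the counting-measure setting relevant here, Lamperti's theorem says an invertible isometry acts as $U\delta_x=\lambda_x\,\delta_{\varphi(x)}$ for a bijection $\varphi$ and scalars with $|\lambda_x|=1$. A short numerical sanity check (illustrative only; the matrices and vectors below are our own choices) shows that such operators preserve every $\ell^p$ norm simultaneously, whereas a generic $\ell^2$ isometry, such as a rotation, does not:

```python
import numpy as np

rng = np.random.default_rng(0)

def lp_norm(v, p):
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

# A weighted permutation: U e_x = lambda_x e_{sigma(x)} with |lambda_x| = 1.
n = 5
sigma = rng.permutation(n)
lam = np.exp(2j * np.pi * rng.random(n))
U = np.zeros((n, n), dtype=complex)
U[sigma, np.arange(n)] = lam

v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
# U permutes the absolute values of the coordinates, so it preserves
# every l^p norm at once.
ok = all(np.isclose(lp_norm(U @ v, p), lp_norm(v, p)) for p in (1, 1.5, 2, 3))

# By contrast, a 45-degree rotation is an l^2 isometry but not an l^1 isometry:
# it sends e_1 to a vector of l^1 norm sqrt(2).
c = 1 / np.sqrt(2)
R = np.array([[c, -c], [c, c]])
e1 = np.array([1.0, 0.0])
```

This coordinate-permuting structure is exactly what makes the $p\neq 2$ rigidity arguments more direct than their $C^\ast$-algebraic counterparts.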
The same is true for isometric isomorphisms between general $\ell^p$ Roe-type algebras, except that Lamperti's theorem may provide a bijective map between $X\times\mathbb{N}$ and $Y\times\mathbb{N}$, from which one can obtain a coarse equivalence between $X$ and $Y$.
In the $C^\ast$-algebraic setting as in \cite{MR3116573, WW}, Lamperti's theorem is inapplicable, and property A and finite decomposition complexity played an essential role in producing a (bijective) coarse equivalence between the metric spaces.
Let us briefly describe how this paper is organized. In section 2, we consider isometric isomorphisms between $\ell^p$ uniform Roe algebras, while in section 3, we consider stable isometric isomorphisms between $\ell^p$ uniform Roe algebras and isometric isomorphisms between other $\ell^p$ Roe-type algebras. In fact, the arguments in section 3 are very similar to those in section 2, and require just a little more work, mainly because Lamperti's theorem gives us a map from $X\times\mathbb{N}$ to $Y\times\mathbb{N}$ in that case instead of a map from $X$ to $Y$.
\section{Isometrically isomorphic $\ell^p$ uniform Roe algebras}
In this section, we consider isometric isomorphisms between $\ell^p$ uniform Roe algebras associated to metric spaces with bounded geometry, and show that when $p\in[1,\infty)\setminus\{2\}$, these isometric isomorphisms give rise to bijective coarse equivalences between the underlying metric spaces.
We begin by noting that any isometric isomorphism between $\ell^p$ uniform Roe algebras must be spatially implemented by an invertible isometry.
\begin{lem}\label{spatially implemented}
Let $X$ and $Y$ be metric spaces with bounded geometry, and let $p\in[1,\infty)$. If $\phi:B^p_u(X)\rightarrow B^p_u(Y)$ is an algebra isomorphism between $\ell^p$ uniform Roe algebras, then there exists a bounded linear bijection $U:\ell^p(X)\rightarrow\ell^p(Y)$ such that $\phi(T)=UTU^{-1}$ for all $T\in B^p_u(X)$. In particular, $\phi$ is continuous.
Moreover, if $\phi$ is also isometric, then $U$ is an invertible isometry.
\end{lem}
\begin{proof}
Fix $x_0\in X$, and consider the rank one idempotent operator $\delta_{x_0}\otimes\delta_{x_0}\in\ell^p(X)\otimes\ell^p(X)^\ast$. Note that this operator has propagation zero, so it belongs to $B^p_u(X)$. Now $e:=\phi(\delta_{x_0}\otimes\delta_{x_0})$ is an idempotent satisfying $eB^p_u(Y)e=\mathbb{C}e$ (since the corresponding relation holds for $\delta_{x_0}\otimes\delta_{x_0}$ in $B^p_u(X)$ and $\phi$ is an isomorphism), which, since $B^p_u(Y)$ contains all operators $\eta\otimes\delta_y$ with $\eta\in\ell^p(Y)$ and $y\in Y$, forces $e$ to have rank one. Thus $\phi(\delta_{x_0}\otimes\delta_{x_0})=f\otimes\sigma$ for some unit vector $f\in\ell^p(Y)$ and $\sigma\in\ell^p(Y)^\ast$ with $\sigma(f)=1$. If $\phi$ is isometric, then we also have $||\sigma||=1$.
Now note that $\xi\otimes\delta_{x_0}\in B^p_u(X)$ for all $\xi\in\ell^p(X)$; indeed, it is the norm limit of the finite propagation operators $\xi_n\otimes\delta_{x_0}$, where each $\xi_n$ is a finitely supported truncation of $\xi$. Moreover, we have
\[ \phi(\xi\otimes\delta_{x_0}) = \phi(\xi\otimes\delta_{x_0})\phi(\delta_{x_0}\otimes\delta_{x_0}) = \phi(\xi\otimes\delta_{x_0})(f\otimes\sigma). \]
Define $U:\ell^p(X)\rightarrow\ell^p(Y)$ by $U\xi=\phi(\xi\otimes\delta_{x_0})f$. One checks that $U$ is a linear bijection with $U^{-1}\eta=\phi^{-1}(\eta\otimes\sigma)\delta_{x_0}$ for $\eta\in\ell^p(Y)$. Moreover, if $\phi$ is an isometry, then $U$ is an isometry. For any $T\in B^p_u(X)$ and $\xi\in\ell^p(X)$, since $U\xi\otimes\sigma=(\phi(\xi\otimes\delta_{x_0})f)\otimes\sigma=\phi(\xi\otimes\delta_{x_0})(f\otimes\sigma)$, we have
\[ \phi(T)(U\xi\otimes\sigma) = \phi(T(\xi\otimes\delta_{x_0}))(f\otimes\sigma) = \phi(T\xi\otimes\delta_{x_0})(f\otimes\sigma) = (UT\xi)\otimes\sigma, \]
and since $\sigma\neq 0$, this shows that $\phi(T)U\xi=UT\xi$, i.e., $\phi(T)=UTU^{-1}$.
To see that $U$ is bounded, first note that $\delta_y\circ U$ is bounded for all $y\in Y$. Indeed, since $\eta\otimes\delta_y\in B^p_u(Y)$ for any $\eta\in\ell^p(Y)$ and $y\in Y$, there exists $T\in B^p_u(X)$ such that $UTU^{-1}=\phi(T)=\eta\otimes\delta_y$, and
\[ T=U^{-1}(\eta\otimes\delta_y)U=U^{-1}\eta\otimes(\delta_y\circ U). \]
Since $T$ is bounded, it follows that $\delta_y\circ U$ is bounded. Now suppose that $\xi_n\rightarrow 0$ in $\ell^p(X)$ and $U\xi_n\rightarrow\eta$ for some $\eta\in\ell^p(Y)$. Fix $y_0\in Y$ and let $y\in Y$ be arbitrary. Then $(\delta_{y_0}\otimes\delta_y)U\xi_n\rightarrow(\delta_{y_0}\otimes\delta_y)\eta$ in $\ell^p(Y)$. On the other hand,
\[\delta_{y_0}\otimes(\delta_y\circ U)=(\delta_{y_0}\otimes\delta_y)U=US\] for some $S\in B^p_u(X)$, since $\delta_{y_0}\otimes\delta_y\in B^p_u(Y)$ and $\phi$ is surjective.
Since $\delta_y\circ U$ is bounded, so is $US$, and thus $(\delta_{y_0}\otimes\delta_y)U\xi_n=US\xi_n\rightarrow 0$. Hence $(\delta_{y_0}\otimes\delta_y)\eta=0$ for all $y\in Y$, so $\eta=0$. By the Closed Graph Theorem, $U$ is bounded.
\end{proof}
\begin{rem} \label{rankone} \leavevmode
\begin{enumerate}
\item When $p\in(1,\infty)$, Lemma \ref{spatially implemented} is a special case of \cite[Corollary 3.2]{Cher} (or the theorem in \cite[Section 1.7.15]{Pal}) as $B^p_u(X)$ contains all finite rank operators on $\ell^p(X)$.
However, when $p=1$ and $X$ is infinite, $B^1_u(X)$ does not necessarily contain all finite rank operators on $\ell^1(X)$. For example, consider $X=\mathbb{N}$ with the natural metric, and consider the rank one idempotent operator on $\ell^1(\mathbb{N})$ given by $T\xi=(\sum_n\xi_n)\delta_0$. Suppose $S\in\mathbb{C}_u^1[\mathbb{N}]$ has propagation $R$. Then $S_{0,n}=0$ for all $n>R$ so $||(T-S)\delta_n||_{\ell^1(\mathbb{N})}\geq 1$ for all $n>R$, and $T$ is not a norm-limit of operators with finite propagation, i.e., $T\notin B^1_u(\mathbb{N})$. The proof we present above is a slight modification of that in \cite[Section 1.7.15]{Pal} so as to include the case $p=1$.
\item If $p,q\in(1,\infty)$ satisfy $\frac{1}{p}+\frac{1}{q}=1$, then the map $(T_{xy})\mapsto(\overline{T_{yx}})$ defines an isometric anti-isomorphism between $B^p_u(X)$ and $B^q_u(X)$. However, if $X$ is infinite and $1\leq p<q<\infty$, then there is no algebra isomorphism between $B^p_u(X)$ and $B^q_u(X)$. Indeed, by Lemma \ref{spatially implemented}, such an isomorphism would be implemented by a bounded linear bijection between $\ell^p(X)$ and $\ell^q(X)$, but no such bijection exists: a classical theorem of Pitt \cite{Pitt} (see also \cite[Theorem 2.1.4]{AlbKal}) tells us that any bounded linear operator from $\ell^q(X)$ to $\ell^p(X)$ must be compact.
\end{enumerate}
\end{rem}
A key ingredient for us is Lamperti's theorem in \cite{Lamp}, which we state in the following form.
\begin{prop} \label{Lam}
Let $X$ and $Y$ be metric spaces with bounded geometry, and let $p\in[1,\infty)\setminus\{2\}$. If $U:\ell^p(X)\rightarrow\ell^p(Y)$ is an invertible isometry, then there exist a function $h:Y\rightarrow\mathbb{T}$ and a bijection $g:X\rightarrow Y$ such that $(U\xi)(y)=h(y)\xi(g^{-1}(y))$ for all $\xi\in\ell^p(X)$ and $y\in Y$.
\end{prop}
Note that for $x\in X$ and $y\in Y$, we have \[ U\delta_x=h(g(x))\delta_{g(x)} \; \text{and} \; U^{-1}\delta_y=\overline{h(y)}\delta_{g^{-1}(y)}. \]
In particular, $|\langle U\delta_x,\delta_{g(x)} \rangle|=1=|\langle \delta_{g^{-1}(y)},U^{-1}\delta_y \rangle|$ for $x\in X$ and $y\in Y$.
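These formulas follow directly from the form of $U$ in Proposition \ref{Lam}: taking $\xi=\delta_x$ gives
\[ (U\delta_x)(y')=h(y')\delta_x(g^{-1}(y'))=\begin{cases} h(g(x)), & y'=g(x),\\ 0, & \text{otherwise}, \end{cases} \]
so $U\delta_x=h(g(x))\delta_{g(x)}$, while applying $U$ to $\overline{h(y)}\delta_{g^{-1}(y)}$ yields $h(g(g^{-1}(y)))\overline{h(y)}\delta_y=\delta_y$ since $|h(y)|=1$.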
Also note that if $\phi:B^p_u(X)\rightarrow B^p_u(Y)$ is given by $\phi(T)=UTU^{-1}$, and $e_{x_1,x_2}$ is a matrix unit, then
\[ \phi(e_{x_1,x_2})=Ue_{x_1,x_2}U^{-1}=h(g(x_1))\overline{h(g(x_2))}e_{g(x_1),g(x_2)}. \]
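To verify this formula, write $e_{x_1,x_2}=\delta_{x_1}\otimes\delta_{x_2}$ and compute on basis vectors:
\[ Ue_{x_1,x_2}U^{-1}\delta_{g(x_2)}=\overline{h(g(x_2))}\,Ue_{x_1,x_2}\delta_{x_2}=\overline{h(g(x_2))}\,U\delta_{x_1}=h(g(x_1))\overline{h(g(x_2))}\,\delta_{g(x_1)}, \]
while for $y\neq g(x_2)$, the vector $U^{-1}\delta_y$ is a multiple of $\delta_{g^{-1}(y)}$ with $g^{-1}(y)\neq x_2$, so $Ue_{x_1,x_2}U^{-1}\delta_y=0$.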
\begin{rem}
In the $C^\ast$-algebraic setting considered in \cite{MR3116573}, Lamperti's theorem does not apply, so instead property A is used in the form of the metric sparsification property, which in turn allows one to show that if $U:\ell^2(X)\rightarrow\ell^2(Y)$ is the unitary operator implementing the isomorphism $\phi$, then there exists $c>0$ such that for each $x\in X$ there exists $f(x)\in Y$ with $|\langle U\delta_x,\delta_{f(x)} \rangle|\geq c$, and similarly for $U^\ast$.
\end{rem}
The next lemma shows that the bijection $g$ in Proposition \ref{Lam} and its inverse are both uniformly expansive, so that $g$ is a bijective coarse equivalence.
\begin{lem} \label{Lem2}
Let $g$ be as in Proposition \ref{Lam}.
For all $R\geq 0$ there exists $S\geq 0$ such that if $x_1,x_2\in X$ are such that $d(x_1,x_2)\leq R$, then $d(g(x_1),g(x_2))\leq S$. Also, if $y_1,y_2\in Y$ are such that $d(y_1,y_2)\leq R$, then $d(g^{-1}(y_1),g^{-1}(y_2))\leq S$.
\end{lem}
\begin{proof}
We prove only the first statement as the same argument holds with the roles of $X$ and $Y$ reversed and with $g^{-1}$ instead of $g$.
Suppose for contradiction that the first statement fails. Then there exist $R\geq 0$ and sequences $(x_1^n),(x_2^n)$ in $X$ such that $d(x_1^n,x_2^n)\leq R$ for all $n$, and $d(g(x_1^n),g(x_2^n))\rightarrow\infty$ as $n\rightarrow\infty$.
Since $X$ has bounded geometry, no point can occur infinitely often among the $x_1^n$: otherwise the corresponding points $x_2^n$ would lie in a fixed $R$-ball, which is finite, so $d(g(x_1^n),g(x_2^n))$ would be bounded along a subsequence. The same holds for the $x_2^n$, so after passing to a subsequence we may assume that the $x_1^n$ are pairwise distinct and the $x_2^n$ are pairwise distinct.
Thus $\sum_{n\in\mathbb{N}}e_{x_1^n,x_2^n}$ converges strongly to a bounded operator on $\ell^p(X)$ that is moreover in $\mathbb{C}_u^p[X]\subseteq B^p_u(X)$. Hence the sum \[\sum_{n\in\mathbb{N}}\phi(e_{x_1^n,x_2^n})=\sum_{n\in\mathbb{N}}h(g(x_1^n))\overline{h(g(x_2^n))}e_{g(x_1^n),g(x_2^n)}\] converges strongly to an operator in $B^p_u(Y)$.
But this contradicts the fact that $d(g(x_1^n),g(x_2^n))\rightarrow\infty$ as $n\rightarrow\infty$: the limit operator has matrix entries of absolute value one at the pairs $(g(x_1^n),g(x_2^n))$, whereas any operator in $B^p_u(Y)$ lies within distance $\frac{1}{2}$ of some finite propagation operator, whose entries vanish at all but finitely many of these pairs.
\end{proof}
The next theorem generalizes the non-$K$-theoretic part of \cite[Theorem 4.10]{LL}. The implication (4) $\Rightarrow$ (1) is the $p\neq 2$ analog of \cite[Corollary~1.16]{WW}, which together with \cite[Lemma~8]{MR0043392} tells us that in the $p=2$ case, an isometric isomorphism between the uniform Roe algebras yields a bijective coarse equivalence if the metric spaces have finite decomposition complexity.
\begin{thm} \label{thm1}
Let $X$ and $Y$ be metric spaces with bounded geometry. The following are equivalent:
\begin{enumerate}
\item $X$ and $Y$ are bijectively coarsely equivalent.
\item For every $p\in[1,\infty)$, there is an isometric isomorphism $\phi:B^p_u(X)\rightarrow B^p_u(Y)$ such that $\phi(\ell^\infty(X))=\ell^\infty(Y)$.
\item $B^p_u(X)$ and $B^p_u(Y)$ are isometrically isomorphic for every $p\in[1,\infty)$.
\item $B^p_u(X)$ and $B^p_u(Y)$ are isometrically isomorphic for some $p\in[1,\infty)\setminus\{2\}$.
\item For some $p\in[1,\infty)$, there is an isometric isomorphism $\phi:B^p_u(X)\rightarrow B^p_u(Y)$ such that $\phi(\ell^\infty(X))=\ell^\infty(Y)$.
\end{enumerate}
\end{thm}
\begin{proof}
(1) $\Rightarrow$ (2):
The proof we present here is based on the proof of \cite[Theorem 4]{BNW} and \cite[Proposition~2.3]{LL}.
Let $f:X\rightarrow Y$ be a bijective coarse equivalence, and consider the invertible isometry $U:\ell^p(X)\rightarrow\ell^p(Y)$ satisfying $U\delta_x=\delta_{f(x)}$ for all $x\in X$. Define $\phi:B(\ell^p(X))\rightarrow B(\ell^p(Y))$ by $\phi(T)=UTU^{-1}$. We show that $\phi$ maps $B^p_u(X)$ into $B^p_u(Y)$.
Note that for all $x_1,x_2\in X$, we have
\[ \langle \phi(T)\delta_{f(x_2)},\delta_{f(x_1)} \rangle=\langle T\delta_{x_2},\delta_{x_1} \rangle. \]
Suppose that $T$ has propagation at most $R$, i.e., $\langle T\delta_{x'},\delta_x \rangle=0$ whenever $d(x',x)>R$. Since $f$ is a coarse equivalence, there exists $S>0$ such that $d(f(x_1),f(x_2))\leq S$ whenever $d(x_1,x_2)\leq R$. Thus $\langle \phi(T)\delta_{f(x')},\delta_{f(x)} \rangle=0$ whenever $d(f(x'),f(x))>S$; that is, $\phi(T)$ has propagation at most $S$.
Since $\phi$ is continuous, it maps $B^p_u(X)$ into $B^p_u(Y)$.
Reversing the roles of $X$ and $Y$, one sees that the homomorphism given by $\psi(T)=U^{-1}TU$ maps $B^p_u(Y)$ into $B^p_u(X)$, and is the inverse of $\phi$. Hence $\phi:B^p_u(X)\rightarrow B^p_u(Y)$ is an isometric isomorphism. Moreover, the definition of $\phi$ shows that it maps $\ell^\infty(X)$ onto $\ell^\infty(Y)$.
(2) $\Rightarrow$ (3) $\Rightarrow$ (4) is trivial.
(4) $\Rightarrow$ (1): If $B^p_u(X)$ and $B^p_u(Y)$ are isometrically isomorphic for some $p\in[1,\infty)\setminus\{2\}$, then by Lemma \ref{spatially implemented}, we have an invertible isometry $U:\ell^p(X)\rightarrow\ell^p(Y)$.
Let $g:X\rightarrow Y$ be as in Proposition \ref{Lam}. Then Lemma \ref{Lem2} implies that both $g$ and $g^{-1}$ are uniformly expansive, so $g:X\rightarrow Y$ is a bijective coarse equivalence.
(2) $\Rightarrow$ (5): It is clear.
(5) $\Rightarrow$ (1): If $p=2$, we are done by \cite[Lemma~8]{MR0043392} and \cite[Corollary~1.16]{WW}. If $p \neq 2$, then (5) $\Rightarrow$ (4) is clear.
\end{proof}
Recall from \cite[Definition~1.11]{WW} that a bounded geometry metric space $X$ is \emph{bijectively rigid} if whenever there is a coarse equivalence $f:X\rightarrow Y$ to another bounded geometry metric space $Y$, then there is a bijective coarse equivalence $f':X\rightarrow Y$. It can be deduced from the proof of \cite[Theorem 1.1]{Whyte} that every uniformly discrete, non-amenable bounded geometry metric space is bijectively rigid. It is elementary to see that $\mathbb{Z}$ is also bijectively rigid. On the other hand, locally finite groups and certain lamplighter groups are not (see \cite{LL} and \cite{MR2730576}).
\begin{cor} \label{cor1}
Let $X$ and $Y$ be bijectively rigid metric spaces with bounded geometry. The following are equivalent:
\begin{enumerate}
\item $X$ and $Y$ are coarsely equivalent.
\item For every $p\in[1,\infty)$, there is an isometric isomorphism $\phi:B^p_u(X)\rightarrow B^p_u(Y)$ such that $\phi(\ell^\infty(X))=\ell^\infty(Y)$.
\item $B^p_u(X)$ and $B^p_u(Y)$ are isometrically isomorphic for every $p\in[1,\infty)$.
\item $B^p_u(X)$ and $B^p_u(Y)$ are isometrically isomorphic for some $p\in[1,\infty)\setminus\{2\}$.
\item For some $p\in[1,\infty)$, there is an isometric isomorphism $\phi:B^p_u(X)\rightarrow B^p_u(Y)$ such that $\phi(\ell^\infty(X))=\ell^\infty(Y)$.
\end{enumerate}
\end{cor}
For general metric spaces with bounded geometry, coarse equivalence only corresponds to stable isometric isomorphism of their $\ell^p$ uniform Roe algebras, as we shall see in the next section.
\begin{rem} We make a few remarks about the $p=2$ case.
\begin{enumerate}
\item The proof of \cite[Theorem 4]{BNW} or \cite[Proposition~2.3]{LL} shows that a bijective coarse equivalence between $X$ and $Y$ yields a $*$-isomorphism $\phi$ between the uniform Roe $C^\ast$-algebras $C^\ast_u(X)$ and $C^\ast_u(Y)$ such that $\phi(\ell^\infty(X))=\ell^\infty(Y)$.
\item For the converse, \cite[Theorem 4.1]{MR3116573} says that if $X$ and $Y$ are assumed to have property A, then a $*$-isomorphism between their uniform Roe $C^\ast$-algebras yields a coarse equivalence between $X$ and $Y$.
\item If $X$ and $Y$ are countable locally finite groups equipped with proper left-invariant metrics, then a $*$-isomorphism between their uniform Roe $C^\ast$-algebras yields a bijective coarse equivalence between $X$ and $Y$ by \cite[Theorem 4.10]{LL}.
\end{enumerate}
\end{rem}
For the uncompleted algebras $\mathbb{C}_u^p[X]$, we have the following result.
\begin{cor} \label{cor2}
Let $X$ and $Y$ be metric spaces with bounded geometry, and let $p\in[1,\infty)$. Then $\mathbb{C}_u^p[X]$ and $\mathbb{C}_u^p[Y]$ are isometrically isomorphic if and only if $X$ and $Y$ are bijectively coarsely equivalent.
\end{cor}
\begin{proof}
If $X$ and $Y$ are bijectively coarsely equivalent, then the proof of (1) $\Rightarrow$ (2) in Theorem~\ref{thm1} actually implies that $\mathbb{C}_u^p[X]$ and $\mathbb{C}_u^p[Y]$ are isometrically isomorphic for every $p\in[1,\infty)$.
Now we assume that $\mathbb{C}_u^p[X]$ and $\mathbb{C}_u^p[Y]$ are isometrically isomorphic for some $p\in[1,\infty)$. If $p\neq 2$, we can apply Theorem~\ref{thm1}. If $p=2$, then \cite[Lemma~8]{MR0043392} and the proof in \cite{WW} or \cite[Theorem~8.1]{BF18} actually imply that $X$ and $Y$ are bijectively coarsely equivalent without assuming finite decomposition complexity.\footnote{Compare \cite[Theorem~1.4]{MR3116573}.}
\end{proof}
\section{Stably isometrically isomorphic $\ell^p$ uniform Roe algebras}
In this section, we consider stable isometric isomorphisms between $\ell^p$ uniform Roe algebras associated to metric spaces with bounded geometry, and show that when $p\in [1,\infty)\setminus\{2\}$, these stable isometric isomorphisms give rise to (not necessarily bijective) coarse equivalences between the underlying metric spaces.
We also consider general $\ell^p$ Roe-type algebras.
The ingredients are essentially the same as in the previous section.
We will need to consider tensor products of $L^p$ operator algebras, so we begin by making this notion precise.
Details can be found in \cite[Chapter 7]{DF} and \cite[Theorem 2.16]{Phil12}.
For $p\in[1,\infty)$, there is a tensor product of $L^p$ spaces with $\sigma$-finite measures such that we have a canonical isometric isomorphism $L^p(X,\mu)\otimes L^p(Y,\nu)\cong L^p(X\times Y,\mu\times\nu)$, which identifies, for every $\xi\in L^p(X,\mu)$ and $\eta\in L^p(Y,\nu)$, the element $\xi\otimes\eta$ with the function $(x,y)\mapsto\xi(x)\eta(y)$ on $X\times Y$. Moreover, we have the following properties:
\begin{itemize}
\item Under the identification above, the linear span of all $\xi\otimes\eta$ is dense in $L^p(X\times Y,\mu\times\nu)$.
\item $||\xi\otimes\eta||_p=||\xi||_p||\eta||_p$ for all $\xi\in L^p(X,\mu)$ and $\eta\in L^p(Y,\nu)$.
\item The tensor product is commutative and associative.
\item If $a\in B(L^p(X_1,\mu_1),L^p(X_2,\mu_2))$ and $b\in B(L^p(Y_1,\nu_1),L^p(Y_2,\nu_2))$, then there exists a unique \[c\in B(L^p(X_1\times Y_1,\mu_1\times\nu_1),L^p(X_2\times Y_2,\mu_2\times\nu_2))\] such that under the identification above, $c(\xi\otimes\eta)=a(\xi)\otimes b(\eta)$ for all $\xi\in L^p(X_1,\mu_1)$ and $\eta\in L^p(Y_1,\nu_1)$. We will denote this operator by $a\otimes b$. Moreover, $||a\otimes b||=||a|| ||b||$.
\item The tensor product of operators is associative, bilinear, and satisfies $(a_1\otimes b_1)(a_2\otimes b_2)=a_1a_2\otimes b_1b_2$.
\end{itemize}
If $A\subseteq B(L^p(X,\mu))$ and $B\subseteq B(L^p(Y,\nu))$ are norm-closed subalgebras, we then define $A\otimes B\subseteq B(L^p(X\times Y,\mu\times\nu))$ to be the closed linear span of all $a\otimes b$ with $a\in A$ and $b\in B$.
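In the situations considered below, all measures are counting measures, so the identification reads $\ell^p(X)\otimes\ell^p(\mathbb{N})\cong\ell^p(X\times\mathbb{N})$ with $\delta_x\otimes\delta_n$ corresponding to $\delta_{(x,n)}$. In particular, applying $a\otimes b$ to $\delta_{x'}\otimes\delta_{n'}$ shows that the matrix entries of an elementary tensor are
\[ (a\otimes b)_{(x,n),(x',n')}=a_{x,x'}\,b_{n,n'} \]
for $a\in B(\ell^p(X))$ and $b\in B(\ell^p(\mathbb{N}))$.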
We will write $\overline{M}^p_\infty$ for $\overline{\bigcup_{n\in\mathbb{N}}M_n(\mathbb{C})}\subset B(\ell^p(\mathbb{N}))$. When $p\in(1,\infty)$, $\overline{M}^p_\infty$ is equal to the algebra $K(\ell^p(\mathbb{N}))$ of compact operators on $\ell^p(\mathbb{N})$ \cite[Corollary~1.9]{Phil13}, but when $p=1$, there is a rank one operator (in fact, the operator in Remark \ref{rankone}(1)) that is not in $\overline{M}^1_\infty$ \cite[Example 1.10]{Phil13}, so $\overline{M}^1_\infty$ is strictly contained in $K(\ell^1(\mathbb{N}))$.
We will regard elements of $B^p_u(X)\otimes \overline{M}^p_\infty$ as bounded operators on $\ell^p(X\times\mathbb{N})$.
The following lemma is analogous to Lemma \ref{spatially implemented}, and is proved in the same way with the obvious modifications.
\begin{lem} \label{spatially implemented 2}
Let $X$ and $Y$ be metric spaces with bounded geometry, and let $p\in[1,\infty)$. If $\phi:B^p_u(X)\otimes \overline{M}^p_\infty\rightarrow B^p_u(Y)\otimes \overline{M}^p_\infty$ is an algebra isomorphism, then there exists a bounded linear bijection $U:\ell^p(X\times\mathbb{N})\rightarrow\ell^p(Y\times\mathbb{N})$ such that $\phi(T)=UTU^{-1}$ for all $T\in B^p_u(X)\otimes \overline{M}^p_\infty$. In particular, $\phi$ is continuous.
Moreover, if $\phi$ is also isometric, then $U$ is an invertible isometry.
\end{lem}
We now use Lamperti's theorem in the following form.
\begin{prop} \label{Lam2}
Let $X$ and $Y$ be metric spaces with bounded geometry, and let $p\in[1,\infty)\setminus\{2\}$. If $U:\ell^p(X\times\mathbb{N})\rightarrow\ell^p(Y\times\mathbb{N})$ is an invertible isometry, then there exist a function $h:Y\times\mathbb{N}\rightarrow\mathbb{T}$ and a bijection $g:X\times\mathbb{N}\rightarrow Y\times\mathbb{N}$ such that $(U\xi)(y,m)=h(y,m)\xi(g^{-1}(y,m))$ for all $\xi\in\ell^p(X\times\mathbb{N})$, $y\in Y$, and $m\in\mathbb{N}$.
\end{prop}
Note that for $x\in X$, $y\in Y$, and $n,m\in\mathbb{N}$, we have \[ U\delta_{x,n}=h(g(x,n))\delta_{g(x,n)} \; \text{and} \; U^{-1}\delta_{y,m}=\overline{h(y,m)}\delta_{g^{-1}(y,m)}. \]
Let $\pi_X:X\times\mathbb{N}\rightarrow X$ and $\pi_Y:Y\times\mathbb{N}\rightarrow Y$ denote the respective coordinate projections. Then consider the maps $f:X\rightarrow Y$ and $f':Y\rightarrow X$ given by $f(x)=\pi_Y(g(x,0))$ and $f'(y)=\pi_X(g^{-1}(y,0))$.
The next lemma shows that $f$ and $f'$ are uniformly expansive, and will also be used in the proof of Theorem \ref{thm2} to show that $f\circ f'$ and $f'\circ f$ are close to the identity.
The proof is essentially the same as that of Lemma \ref{Lem2} and is modeled after \cite[Lemma 4.5]{MR3116573}.
\begin{lem} \label{Lem2s}
Let $g$ be as in Proposition \ref{Lam2}. For all $R\geq 0$, there exists $S\geq 0$ such that if $x,x'\in X$ satisfy $d(x,x')\leq R$ and $y,y'\in Y$ satisfy $|\langle U\delta_{x,n},\delta_{y,m} \rangle|=1=|\langle U\delta_{x',n'},\delta_{y',m'} \rangle|$ for some $m,n,m',n'\in\mathbb{N}$, then $d(y,y')\leq S$.
The same properties hold with the roles of $X$ and $Y$ reversed, and with $U$ replaced by $U^{-1}$.
In particular, if $f(x)=\pi_Y(g(x,0))$ and $f'(y)=\pi_X(g^{-1}(y,0))$, then
\begin{enumerate}
\item for all $R\geq 0$, there exists $S\geq 0$ such that if $x,x'\in X$ are such that $d(x,x')\leq R$, then $d(f(x),f(x'))\leq S$;
\item for all $R\geq 0$, there exists $S\geq 0$ such that if $y,y'\in Y$ are such that $d(y,y')\leq R$, then $d(f'(y),f'(y'))\leq S$.
\end{enumerate}
In other words, $f$ and $f'$ are uniformly expansive.
\end{lem}
\begin{proof}
Suppose for contradiction that the first statement fails. Then there exist $R\geq 0$ and sequences $(x_k),(x'_k),(y_k),(y'_k),(m_k),(m'_k),(n_k),(n'_k)$ such that $d(x_k,x'_k)\leq R$ for each $k$, \[|\langle U\delta_{x_k,n_k},\delta_{y_k,m_k} \rangle|=1=|\langle U\delta_{x'_k,n'_k},\delta_{y'_k,m'_k} \rangle|,\] and $d(y_k,y'_k)\rightarrow\infty$ as $k\rightarrow\infty$.
Now at least one of the sequences $(y_k)$ or $(y'_k)$ must have a subsequence tending to infinity in $Y$, so without loss of generality we may assume that $(y_k)$ itself tends to infinity in $Y$. It follows that the sequence $(\delta_{y_k,m_k})$ of unit vectors in $\ell^p(Y\times\mathbb{N})$ tends weakly to zero. Thus the sequence $(\delta_{x_k,n_k})$ must eventually leave any norm-compact subset of $\ell^p(X\times\mathbb{N})$, and $(x_k)$ must tend to infinity in $X$. Passing to another subsequence, we may assume that $d(x_k,x_l)>2R$ whenever $k\neq l$. Since $d(x_k,x'_k)\leq R$, the points $x_k$ are pairwise distinct, as are the points $x'_k$, so the sum $\sum_{k\in\mathbb{N}}e_{(x_k,n_k),(x'_k,n'_k)}$ converges strongly to a bounded operator on $\ell^p(X\times\mathbb{N})$ that is moreover in $B^p_u(X)\otimes \overline{M}^p_\infty$. Hence the sum \[\sum_{k\in\mathbb{N}}\phi(e_{(x_k,n_k),(x'_k,n'_k)})=\sum_{k\in\mathbb{N}}h(y_k,m_k)\overline{h(y'_k,m'_k)}e_{(y_k,m_k),(y'_k,m'_k)}\] converges strongly to a bounded operator in $B^p_u(Y)\otimes \overline{M}^p_\infty$.
In particular, the sum $\sum_{k\in\mathbb{N}}e_{y_k,y'_k}$ converges strongly to an operator in $B^p_u(Y)$.
But this contradicts the property that $d(y_k,y'_k)\rightarrow\infty$ as $k\rightarrow\infty$.
The same argument works with the roles of $X$ and $Y$ reversed, and with $U$ replaced by $U^{-1}$.
Statements (1) and (2) follow from the observation that
\[|\langle U\delta_{x,0},\delta_{f(x),m} \rangle|=1=|\langle U\delta_{x',0},\delta_{f(x'),n} \rangle|\] and
\[|\langle U^{-1}\delta_{y,0},\delta_{f'(y),m'} \rangle|=1=|\langle U^{-1}\delta_{y',0},\delta_{f'(y'),n'} \rangle|\] for some $n,m,n',m'\in\mathbb{N}$.
\end{proof}
The implication (1) $\Rightarrow$ (2) in the following theorem generalizes \cite[Theorem 4]{BNW} to all $p\in[1,\infty)$, while the implication (4) $\Rightarrow$ (1) is the $p\neq 2$ analog of \cite[Corollary 6.2]{MR3116573}, which together with \cite[Lemma~8]{MR0043392} tells us that in the $p=2$ case, stable isometric isomorphisms between uniform Roe algebras yield coarse equivalences if the metric spaces have property A.
\begin{thm} \label{thm2}
Let $X$ and $Y$ be metric spaces with bounded geometry. The following are equivalent:
\begin{enumerate}
\item $X$ and $Y$ are coarsely equivalent.
\item For every $p\in[1,\infty)$, there is an isometric isomorphism $\phi:B^p_u(X)\otimes \overline{M}^p_\infty\rightarrow B^p_u(Y)\otimes \overline{M}^p_\infty$ such that $\phi(\ell^\infty(X)\otimes C_0(\mathbb{N}))=\ell^\infty(Y)\otimes C_0(\mathbb{N})$.
\item $B^p_u(X)\otimes \overline{M}^p_\infty$ and $B^p_u(Y)\otimes \overline{M}^p_\infty$ are isometrically isomorphic for every $p\in[1,\infty)$.
\item $B^p_u(X)\otimes \overline{M}^p_\infty$ and $B^p_u(Y)\otimes \overline{M}^p_\infty$ are isometrically isomorphic for some $p\in[1,\infty)\setminus\{2\}$.
\item For some $p\in[1,\infty)$, there is an isometric isomorphism $\phi:B^p_u(X)\otimes \overline{M}^p_\infty\rightarrow B^p_u(Y)\otimes \overline{M}^p_\infty$ such that $\phi(\ell^\infty(X)\otimes C_0(\mathbb{N}))=\ell^\infty(Y)\otimes C_0(\mathbb{N})$.
\end{enumerate}
\end{thm}
\begin{proof}
(1) $\Rightarrow$ (2):
The proof we present here is essentially the same as the proof of \cite[Theorem 4]{BNW}.
We first assume that there is a surjective coarse equivalence $f:X\rightarrow Y$. Since $f$ is a coarse equivalence, there exists $R>0$ such that for each $y\in Y$, the preimage $f^{-1}(y)$ lies in some $R$-ball in $X$. Then since $X$ has bounded geometry, there exists $N$ such that the cardinality of $f^{-1}(y)$ is at most $N$ for each $y\in Y$. Define $N(y)$ to be the cardinality of $f^{-1}(y)$. For each $y\in Y$, enumerating the points of $f^{-1}(y)$ gives a bijection between $f^{-1}(y)$ and $\{1,\ldots,N(y)\}\subseteq\{1,\ldots,N\}$. We therefore obtain an identification of $X$ with a subset of $Y\times\{1,\ldots,N\}$. Let $\pi$ denote the corresponding projection from $X$ to $\{1,\ldots,N\}$, so that the identification is given by $x\mapsto(f(x),\pi(x))$.
Define a map $\phi:X\times\mathbb{N}\rightarrow Y\times\mathbb{N}$ by $\phi(x,j)=(f(x),\pi(x)+jN(f(x)))$. Since for each $y\in Y$ there is exactly one $x\in X$ satisfying $f(x)=y$ and $\pi(x)=i$ for $i=1,\ldots,N(y)$, the map $\phi$ is a bijection, which gives rise to an invertible isometry from $\ell^p(X\times\mathbb{N})$ to $\ell^p(Y\times\mathbb{N})$, and thus an isometric isomorphism $\Phi$ from $B(\ell^p(X\times\mathbb{N}))$ to $B(\ell^p(Y\times\mathbb{N}))$. We will show that $\Phi$ maps $B^p_u(X)\otimes \overline{M}^p_\infty$ into $B^p_u(Y)\otimes \overline{M}^p_\infty$, while $\Phi^{-1}$ maps $B^p_u(Y)\otimes \overline{M}^p_\infty$ into $B^p_u(X)\otimes \overline{M}^p_\infty$, and hence $\Phi$ restricts to an isometric isomorphism between $B^p_u(X)\otimes \overline{M}^p_\infty$ and $B^p_u(Y)\otimes \overline{M}^p_\infty$.
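To illustrate the map $\phi$ with the enumeration conventions fixed above, suppose $y\in Y$ has $N(y)=2$, say $f^{-1}(y)=\{x_1,x_2\}$ with $\pi(x_1)=1$ and $\pi(x_2)=2$. Then
\[ \phi(x_1,j)=(y,1+2j) \quad\text{and}\quad \phi(x_2,j)=(y,2+2j) \quad\text{for } j\in\mathbb{N}, \]
so the two copies of $\mathbb{N}$ sitting over $x_1$ and $x_2$ are interleaved into the single copy of $\mathbb{N}$ sitting over $y$.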
We consider the dense subalgebra of $B^p_u(X)\otimes \overline{M}^p_\infty$ consisting of sums of elementary tensors of the form $T\otimes e_{j,j'}$, where $T$ is a finite propagation operator on $\ell^p(X)$ and $e_{j,j'}$ is a matrix unit. It suffices to show that such elementary tensors are mapped by $\Phi$ into $B^p_u(Y)\otimes \overline{M}^p_\infty$.
Partition $X$ as $X=\bigcup_{n=1,\ldots,N,i=1,\ldots,n}X_{n,i}$, where \[ X_{n,i}=\{ x\in X:N(f(x))=n,\pi(x)=i \}. \]
We can write $T$ as \[ T=\sum_{\substack{n,n'\leq N,\\ i\leq n,i'\leq n'}}P_{n,i}TP_{n',i'}, \] where $P_{n,i}$ denotes the projection of $\ell^p(X)$ onto $\ell^p(X_{n,i})$.
Note that for each $i$, the restriction of $f$ to $X_i=\bigcup_{n\geq i}X_{n,i}$ is injective. Let $V_i:\ell^p(X_i)\rightarrow\ell^p(Y)$ denote the corresponding isometry, and let $V_i^\dagger:\ell^p(Y)\rightarrow\ell^p(X_i)$ denote its reverse (in the terminology of Phillips \cite[Definition 6.13]{Phil12}).
Now fix $n,n',i,i'$, and let $S=P_{n,i}TP_{n',i'}$. Then $\Phi(S\otimes e_{j,j'})=V_iSV_{i'}^\dagger\otimes e_{i+nj,i'+n'j'}$. Since $f$ is a coarse equivalence, the operator $V_iSV_{i'}^\dagger$ has finite propagation, and hence $V_iSV_{i'}^\dagger\otimes e_{i+nj,i'+n'j'}$ lies in $B^p_u(Y)\otimes \overline{M}^p_\infty$. Since this holds for each $n,n',i,i'$, we conclude that $\Phi(T\otimes e_{j,j'})\in B^p_u(Y)\otimes \overline{M}^p_\infty$.
Showing that $\Phi^{-1}$ maps $B^p_u(Y)\otimes \overline{M}^p_\infty$ into $B^p_u(X)\otimes \overline{M}^p_\infty$ is done in a similar manner, and we omit the details, referring the reader to the proof of \cite[Theorem 4]{BNW}.
Finally, to remove the assumption that the coarse equivalence is surjective, observe that given any coarse equivalence $f:X\rightarrow Y$, there are surjective coarse equivalences from both $X$ and $Y$ to the image $f(X)$. Hence we have isometric isomorphisms \[ B^p_u(X)\otimes \overline{M}^p_\infty\cong B^p_u(f(X))\otimes \overline{M}^p_\infty\cong B^p_u(Y)\otimes \overline{M}^p_\infty. \]
From the definition of $\Phi$, one sees that this isometric isomorphism maps $\ell^\infty(X)\otimes C_0(\mathbb{N})$ onto $\ell^\infty(Y)\otimes C_0(\mathbb{N})$.
(2) $\Rightarrow$ (3) $\Rightarrow$ (4) is trivial.
(4) $\Rightarrow$ (1):
If $B^p_u(X)\otimes \overline{M}^p_\infty$ and $B^p_u(Y)\otimes \overline{M}^p_\infty$ are isometrically isomorphic for some $p\in[1,\infty)\setminus\{2\}$, then by Lemma \ref{spatially implemented 2}, we have an invertible isometry $U:\ell^p(X\times\mathbb{N})\rightarrow\ell^p(Y\times\mathbb{N})$.
Let $g$ be as in Proposition \ref{Lam2}. Then Lemma \ref{Lem2s} implies that $f=\pi_Y(g(-,0))$ and $f'=\pi_X(g^{-1}(-,0))$ are uniformly expansive. It remains to be shown that $f\circ f'$ and $f'\circ f$ are close to the identity.
For each $y\in Y$ we have
\[ |\langle U\delta_{f'(y),0},\delta_{f(f'(y)),m} \rangle|=1=|\langle U\delta_{f'(y),n},\delta_{y,0} \rangle| \]
for some $n,m\in\mathbb{N}$ so Lemma \ref{Lem2s} implies the existence of $S\geq 0$ (independently of $y$) such that $d(f(f'(y)),y)\leq S$, i.e., $f\circ f'$ is close to the identity. Similarly, for each $x\in X$ we have
\[ |\langle U^{-1}\delta_{f(x),0},\delta_{f'(f(x)),m} \rangle|=1=|\langle U^{-1}\delta_{f(x),n},\delta_{x,0} \rangle| \]
for some $n,m\in\mathbb{N}$ so $f'\circ f$ is close to the identity.
(2) $\Rightarrow$ (5): It is clear.
(5) $\Rightarrow$ (1): If $p=2$, we are done by \cite[Lemma~8]{MR0043392} and Proposition~\ref{p=2 case}, which is independent of Theorem~\ref{thm2}. If $p \neq 2$, it is clear that (5) $\Rightarrow$ (4).
\end{proof}
Given a countable group $\Gamma$ and $p\in[1,\infty)$, we may represent elements of $\ell^\infty(\Gamma)$ as multiplication operators on $\ell^p(\Gamma)$, and consider the left translation action of $\Gamma$ on $\ell^\infty(\Gamma)$. Then one can define an $L^p$ reduced crossed product $\ell^\infty(\Gamma)\rtimes_{\lambda,p}\Gamma$ just as one defines the reduced crossed product $C^\ast$-algebra (cf. \cite[Definition 4.1.4]{BO}). Moreover, the proof of \cite[Proposition 5.1.3]{BO} shows that $\ell^\infty(\Gamma)\rtimes_{\lambda,p}\Gamma$ is isometrically isomorphic to $B^p_u(\Gamma)$.
For finitely generated groups, we may replace coarse equivalence by quasi-isometry in Theorem \ref{thm2}. Moreover, if $\Gamma$ and $\Lambda$ are non-amenable finitely generated groups, then Corollary \ref{cor1} applies, so if they are quasi-isometric, then we get isometric isomorphisms between their $\ell^p$ uniform Roe algebras instead of just stable isometric isomorphisms.
\begin{cor} \label{cor:grps}
Let $\Gamma$ and $\Lambda$ be finitely generated groups. The following are equivalent:
\begin{enumerate}
\item $\Gamma$ and $\Lambda$ are quasi-isometric.
\item For every $p\in[1,\infty)$, there is an isometric isomorphism \[\phi:(\ell^\infty(\Gamma)\rtimes_{\lambda,p}\Gamma)\otimes \overline{M}^p_\infty\rightarrow (\ell^\infty(\Lambda)\rtimes_{\lambda,p}\Lambda)\otimes \overline{M}^p_\infty\] such that $\phi(\ell^\infty(\Gamma)\otimes C_0(\mathbb{N}))=\ell^\infty(\Lambda)\otimes C_0(\mathbb{N})$.
\item $(\ell^\infty(\Gamma)\rtimes_{\lambda,p}\Gamma)\otimes \overline{M}^p_\infty$ and $(\ell^\infty(\Lambda)\rtimes_{\lambda,p}\Lambda)\otimes \overline{M}^p_\infty$ are isometrically isomorphic for every $p\in[1,\infty)$.
\item $(\ell^\infty(\Gamma)\rtimes_{\lambda,p}\Gamma)\otimes \overline{M}^p_\infty$ and $(\ell^\infty(\Lambda)\rtimes_{\lambda,p}\Lambda)\otimes \overline{M}^p_\infty$ are isometrically isomorphic for some $p\in[1,\infty)\setminus\{2\}$.
\item For some $p\in[1,\infty)$, there is an isometric isomorphism \[\phi:(\ell^\infty(\Gamma)\rtimes_{\lambda,p}\Gamma)\otimes \overline{M}^p_\infty\rightarrow (\ell^\infty(\Lambda)\rtimes_{\lambda,p}\Lambda)\otimes \overline{M}^p_\infty\] such that $\phi(\ell^\infty(\Gamma)\otimes C_0(\mathbb{N}))=\ell^\infty(\Lambda)\otimes C_0(\mathbb{N})$.
\end{enumerate}
If moreover $\Gamma$ and $\Lambda$ are non-amenable, then these are also equivalent to:
\begin{enumerate}[resume]
\item $\ell^\infty(\Gamma)\rtimes_{\lambda,p}\Gamma$ and $\ell^\infty(\Lambda)\rtimes_{\lambda,p}\Lambda$ are isometrically isomorphic for every $p\in[1,\infty)$.
\item $\ell^\infty(\Gamma)\rtimes_{\lambda,p}\Gamma$ and $\ell^\infty(\Lambda)\rtimes_{\lambda,p}\Lambda$ are isometrically isomorphic for some $p\in[1,\infty)\setminus\{2\}$.
\end{enumerate}
\end{cor}
In the $p=2$ case, isometric isomorphisms between the algebras yield quasi-isometries between the groups if the groups have property A (or equivalently, if they are $C^\ast$-exact) by \cite[Corollary 6.2]{MR3116573}, \cite[Corollary 6.3]{MR3116573}, and \cite[Lemma~8]{MR0043392}.
Instead of $B^p_u(X)\otimes \overline{M}^p_\infty$, one may consider $\ell^p$ Roe-type algebras, as was done in the $C^\ast$-algebraic setting in \cite{MR3116573}, and also $B^p_u(X)\otimes K(\ell^p(\mathbb{N}))$, which is not quite an $\ell^p$ Roe-type algebra and is different from $B^p_u(X)\otimes \overline{M}^p_\infty$ when $p=1$.
\begin{defn} \label{Roetype}
Let $X$ be a metric space with bounded geometry, let $p\in[1,\infty)$, and let $S$ be either $\mathbb{N}$ or $\{1,\ldots,n\}$ for some $n\in\mathbb{N}$. An operator $T=(T_{xy})_{x,y\in X}$ in $B(\ell^p(X,\ell^p(S)))$ is said to be locally compact if $T_{xy}\in K(\ell^p(S))$ for all $x,y\in X$.
A subalgebra $A^p[X]$ of $B(\ell^p(X,\ell^p(S)))$ is called an algebraic $\ell^p$ Roe-type algebra if
\begin{itemize}
\item it consists only of locally compact, finite propagation operators,
\item it contains all finite propagation operators $T=(T_{xy})_{x,y\in X}$ with the property that there exists $N\in\mathbb{N}$ such that the rank of $T_{xy}$ is at most $N$ for all $x,y\in X$.
\end{itemize}
An $\ell^p$ Roe-type algebra $A^p(X)$ is the operator norm closure of some algebraic $\ell^p$ Roe-type algebra.
\end{defn}
\begin{rem}
In the definition of locally compact operators, we could have required all $T_{xy}$ to belong to $\overline{M}^p_S:=\overline{\bigcup_{n\in S}M_n(\mathbb{C})}\subset B(\ell^p(S))$, which makes a difference only when $p=1$. This condition is not satisfied by the $\ell^1$ uniform algebra $UB^1(X)$. Nevertheless, with this alternative definition, the results below (Theorem \ref{thm:Roetype} and Corollary \ref{cor:Roetype}) still hold.
\end{rem}
The $\ell^p$ uniform Roe algebra $B^p_u(X)$ is an example of an $\ell^p$ Roe-type algebra.
Recall the algebras $UB^p(X)$, $B^p(X)$, and $B^p_s(X)$ defined on page \pageref{egRoetype} after Definition \ref{pRoedef}, and that $B^p_s(X)\cong B^p_u(X)\otimes K(\ell^p(\mathbb{N}))$ for all $p\in[1,\infty)$. The algebras $UB^p(X)$ and $B^p(X)$ are also $\ell^p$ Roe-type algebras but $B^p_s(X)$ is not. All three algebras are coarse invariants while $B^p_u(X)$ is not. Indeed, if $X$ and $Y$ are coarsely equivalent, then the isometric isomorphism $\Phi:B(\ell^p(X,\ell^p))\rightarrow B(\ell^p(Y,\ell^p))$ in the proof of (1) $\Rightarrow$ (2) in Theorem \ref{thm2} restricts to isometric isomorphisms $B^p_s(X)\cong B^p_s(Y)$, $UB^p(X)\cong UB^p(Y)$, and $B^p(X)\cong B^p(Y)$.
On the other hand, note that all operators of the form $\xi\otimes\delta_{x_0,n_0}$, where $\xi\in\ell^p(X,\ell^p(S))$, $x_0\in X$, and $n_0\in S$, are contained in any $\ell^p$ Roe-type algebra, which enables one to show that Lemma \ref{spatially implemented 2} holds for $\ell^p$ Roe-type algebras by following the proof of Lemma \ref{spatially implemented}. One can then show that the implication (4) $\Rightarrow$ (1) in Theorem \ref{thm2} holds with $B^p_u(X)\otimes\overline{M}^p_\infty$ replaced by any $\ell^p$ Roe-type algebra or $B^p_s(X)$.
Thus, we obtain the following analogs of \cite[Theorem~4.1, Theorem~5.1 and Theorem~6.1]{MR3116573}, which deal with the $p=2$ case. It is worth noting that the metric spaces are assumed to have property A in \cite[Theorem~4.1 and Theorem~6.1]{MR3116573}.
\begin{thm} \label{thm:Roetype}
Let $X$ and $Y$ be metric spaces with bounded geometry, let $p\in[1,\infty)\setminus\{2\}$, and let $A^p(X)$ and $A^p(Y)$ be $\ell^p$ Roe-type algebras associated to $X$ and $Y$ respectively. If $A^p(X)$ and $A^p(Y)$ are isometrically isomorphic, then $X$ and $Y$ are coarsely equivalent. The same conclusion holds if $B^p_s(X)$ and $B^p_s(Y)$ are isometrically isomorphic.
The converse holds for the algebras $UB^p(X)$, $B^p(X)$, and $B^p_s(X)$. In particular, since $B^p_s(X)\cong B^p_u(X)\otimes K(\ell^p(\mathbb{N}))$ for $p\in[1,\infty)$, Theorem \ref{thm2} and Corollary \ref{cor:grps} also hold with $\overline{M}^p_\infty$ replaced by $K(\ell^p(\mathbb{N}))$.
\end{thm}
\begin{cor} \label{cor:Roetype}
Let $X$ and $Y$ be metric spaces with bounded geometry, let $p\in[1,\infty)$, and let $A^p[X]$ and $A^p[Y]$ be algebraic $\ell^p$ Roe-type algebras associated to $X$ and $Y$ respectively. If $A^p[X]$ and $A^p[Y]$ are isometrically isomorphic, then $X$ and $Y$ are coarsely equivalent. The same conclusion holds if $\mathbb{C}^p_s[X]$ and $\mathbb{C}^p_s[Y]$ are isometrically isomorphic.
The converse holds for the algebras $U\mathbb{C}^p[X]$, $\mathbb{C}^p[X]$, and $\mathbb{C}^p_s[X]$.
\end{cor}
Finally, let us return to the case $p=2$. We will denote the $\ell^2$ uniform Roe algebra of a metric space $X$ by $C_u^*(X)$ as in the literature.
Let $(X,d)$ be a (discrete) metric space with bounded geometry. For every $R>0$, we consider the $R$-neighbourhood of the diagonal in $X\times X$: $\Delta_R:=\{(x,y)\in X\times X: d(x,y)\leq R\}$. Define
\begin{align*}
G(X):=\bigcup_{R>0}\overline{\Delta}_R\subseteq \beta(X\times X).
\end{align*}
It turns out that the domain, range, inversion and multiplication maps on the pair groupoid $X\times X$ have unique continuous extensions to $G(X)$. With respect to these extensions, $G(X)$ becomes a principal, étale, locally compact $\sigma$-compact Hausdorff topological groupoid with unit space $\beta X$ (see \cite[Proposition~3.2]{MR1905840} or \cite[Theorem~10.20]{Roe03}). Since $G(X)^{(0)}=\beta X$ is totally disconnected, $G(X)$ is also ample (see \cite[Proposition 4.1]{E10}). Moreover, the uniform Roe algebra $C_u^*(X)$ of $X$ is naturally isomorphic to the reduced groupoid $C^*$-algebra of $G(X)$, and the isomorphism maps $\ell^\infty(X)$ onto $C(G(X)^{(0)})$ (see \cite[Proposition~10.29]{Roe03} for a proof). The following proposition may be compared with \cite[Corollary~1.16]{WW}.
\begin{prop}\label{p=2 case}
Let $X$ and $Y$ be metric spaces with bounded geometry. Then the following are equivalent:
\begin{itemize}
\item[(1)] $X$ and $Y$ are coarsely equivalent.
\item[(2)] $G(X)$ and $G(Y)$ are equivalent as topological groupoids.
\item[(3)] $G(X)\times \mathcal{R}$ and $G(Y)\times \mathcal{R}$ are isomorphic, where $\mathcal{R}$ denotes the full countable equivalence relation on $\mathbb{N}$.
\item[(4)] $(\ell^{\infty}(X)\otimes C_0(\mathbb{N}),C_u^*(X)\otimes K(\ell^2(\mathbb{N})))\cong (\ell^{\infty}(Y)\otimes C_0(\mathbb{N}),C_u^*(Y)\otimes K(\ell^2(\mathbb{N})))$.
\end{itemize}
\end{prop}
\begin{proof}
$(1) \Rightarrow (2)$: It follows from \cite[Corollary~3.6]{MR1905840}.
$(2) \Leftrightarrow (3)$: It follows from \cite[Theorem~2.1]{MR3601549}.
$(3) \Rightarrow (4)$: This is clear.
$(4) \Rightarrow (3)$: It follows from \cite[Proposition~4.13]{MR2460017}, which is also true without any change for locally compact $\sigma$-compact Hausdorff topologically principal \'etale groupoids (see also \cite[Remark~5.1]{XL18}).
$(3) \Rightarrow (1)$: In particular, $C_c(G(X)\times \mathcal{R})$ and $C_c(G(Y)\times \mathcal{R})$ are $*$-isomorphic. Since these algebras are nothing but $\mathbb{C}_s[X]$ and $\mathbb{C}_s[Y]$, respectively, as in \cite[Example~2.2]{MR3116573}, the proof is completed by \cite[Theorem~6.1]{MR3116573}.
\end{proof}
{\bf Acknowledgments}. The second-named author would like to thank Christian B\"onicke, Hannes Thiel and Jiawen Zhang for helpful and enlightening discussions.
\bibliographystyle{plain}
\section{Introduction}
The volume of sets of quantum states is an issue of the utmost importance. It can help in finding separable states within all quantum states. The former are states of a composite system that can be written as convex combinations of subsystem states, in contrast to entangled states \cite{Horo}. The notion of volume can also be useful when defining ``typical'' properties of a set of quantum states. In fact, in such a case, one uses the random generation of states according to a measure stemming from their volume \cite{Lupo}.
Clearly, the first step in determining the volume of a set of quantum states is to equip the set with a metric. For pure quantum states, it is natural to consider the Fubini--Study metric, which turns out to be proportional to the classical Fisher--Rao metric \cite{Marmo1}. However, the situation becomes ambiguous for quantum mixed states, where there is no single preferred metric \cite{Petz}.
There, several measures have been investigated, each arising from different motivations \cite{GQ}. For instance, the Positive Partial Transpose criterion \cite{Horo} was employed to determine an upper bound for the volume of separable quantum states and to show that it is non-zero \cite{Zycz}.
However, this criterion becomes less and less precise as the dimension increases \cite{Ye09}.
An upper bound for the volume of separable states is also provided in \cite{Szarek} by combining techniques of geometry and random matrix theory; there, separable states are approximated by an ellipsoid with respect to a Euclidean product.
Following up, in Reference \cite{Aubrun}, a measure relying on the convex and Euclidean structure of states has been proposed. Such a volume measure turned out to be very effective in detecting separability for large systems. Nevertheless, for two-qubit states, the Euclidean structure is not given by the Hilbert--Schmidt scalar product.
The Hilbert--Schmidt scalar product is nonetheless regarded as a very natural measure on finite dimensional Hilbert spaces. For this reason, it has been employed as a natural measure on the space of density operators \cite{Zycz}. Such a measure revealed that the set of separable states has a non-zero volume and, in some cases, analytical lower and upper bounds on it have been found.
Still based on the Hilbert--Schmidt product, a measure has recently been introduced to evaluate the volume of Gaussian states in infinite dimensional quantum systems \cite{LS15}.
However, it does not have a classical counterpart, as an approach based on the probabilistic structure of quantum states would have.
Such an approach has been taken for infinite dimensional quantum systems by using information geometry \cite{FQM17}, that is, the application of differential geometric techniques to the study of families of probabilities \cite{Amari}. Thus, it can be employed whenever quantum states are represented as probability distribution functions instead of density operators.
This happens in phase space, where Wigner functions representing Gaussian states were used in such a way that each class of states is associated with a statistical model, which turns out to be a Riemannian manifold endowed with the well-known Fisher--Rao metric \cite{FQM17}.
This approach could be extended to the entire set of quantum states by considering the Husimi $Q$-function \cite{Husimi} instead of the Wigner function, since the former is a true probability distribution function.
Additionally, it could be used in finite dimensional systems by using coherent states of compact groups $SU(d)$.
Here, we take this avenue for two-qubit systems.
Above all, we address the question of whether such an approach gives results similar to other approaches based on quantum versions of the Fisher metric, such as the Helstrom quantum Fisher metric \cite{Helstrom} and the Wigner--Yanase-like quantum Fisher metric \cite{Luo}.
We focus on states with maximally disordered subsystems and analyze the behavior of the volume of sub-manifolds of separable and entangled states with fixed purity. We show that all of the above mentioned approaches give the same qualitative results.
The layout of the paper is as follows. In Section \ref{sec2}, we recall the structure of a set of two-qubit states.
Then, in Section \ref{sec3}, we present different metrics on such a space inspired by the classical Fisher--Rao metric. Section \ref{sec4} is devoted to the evaluation of the volume of states with maximally disordered subsystems.
Finally, we draw our conclusions in Section \ref{sec5}.
\section{Structure of a Set of Two-Qubit States}\label{sec2}
Consider two-qubit states (labeled by 1 and 2) with associated Hilbert space
${\cal H}=\mathbb{C}^2\otimes\mathbb{C}^2$.
The space ${\cal L}({\cal H})$ of linear operators acting on ${\cal H}$ can be supplied with a scalar product $\langle A,B\rangle={\rm Tr}\left( A^\dag B\right)$ to have a Hilbert--Schmidt space.
Then, in such a space, an arbitrary density operator (positive and trace class operator)
can be represented as follows
\begin{equation}
\rho=\frac{1}{4}\left(I\otimes I+{\boldsymbol r}\cdot{\boldsymbol\sigma}\otimes I+I\otimes
{\boldsymbol s}\cdot {\boldsymbol\sigma}+
\sum_{m,n=1}^3t_{mn}\sigma_{m}\otimes\sigma_{n}\right),
\label{T}
\end{equation}
where $I$ is the identity operator on $\mathbb{C}^2$, $\boldsymbol{r,s}\in\mathbb{R}^3$, and
${\boldsymbol\sigma}:=(\sigma_1,\sigma_2,\sigma_3)$ is the vector of Pauli matrices. Furthermore,
the coefficients $t_{mn}:={\rm Tr}\left( \rho \, \sigma_m \otimes \sigma_n\right)$ form a real matrix denoted by $T$.
Note that $\boldsymbol{r, s}$ are local parameters as they determine the reduced states
\begin{eqnarray}
\rho_1&:=&{\rm Tr}_{2} \rho= \frac{1}{2} \left(I+ {\boldsymbol r} \cdot {\boldsymbol\sigma} \right), \\
\rho_2&:=&{\rm Tr}_{1} \rho= \frac{1}{2} \left(I+ {\boldsymbol s} \cdot {\boldsymbol\sigma} \right),
\end{eqnarray}
while the $T$ matrix is responsible for correlations.
The number of (real) parameters characterizing $\rho$ is 15, but they can be reduced
with the following argument. Entanglement (in fact, any quantifier of it) is invariant under
local unitary transformations, i.e., $U_1\otimes U_2$ \cite{Horo}.
Then, without loss of generality,
we can restrict our attention to states with a diagonal matrix $T$.
To show that this class of states is representative, we can use the following fact:
if a state is subjected to a $U_1\otimes U_2$ transformation,
the parameters $\boldsymbol{r,s}$ and $T$ transform as
\begin{eqnarray}
&&\boldsymbol{r}'=O_1\boldsymbol{r},\\
&&\boldsymbol{s}'=O_2\boldsymbol{s},\\
&&T'=O_1 T O_2^\top,\label{TOTO}
\end{eqnarray}
where $O_i$s correspond to $U_i$s via
\begin{equation}
U_i\boldsymbol{ \hat n}\cdot\boldsymbol{\sigma}U_i^\dagger
=(O_i\boldsymbol{\hat n})\cdot\boldsymbol{\sigma},
\label{UnsU}
\end{equation}
where $\boldsymbol{ \hat n}$ is a unit vector of $\mathbb{R}^3$.
Thus, given an arbitrary state, we can always choose unitaries $U_1, U_2$ such that the
corresponding rotations will diagonalize its matrix $T$.
As we consider the states with diagonal $T$, we can identify $T$
with the vector $\boldsymbol{t}:=(t_{11},t_{22},t_{33})\in \mathbb{R}^3$.
Then, the following necessary conditions are known \cite{Horodecki}:
\begin{enumerate}
\item[i)]
\emph{For any $\rho$, the matrix $T$ belongs
to the tetrahedron $\cal T$ with vertices $(-1,-1,-1)$,
$(-1,1,1)$, $(1,-1,1)$, $(1,1,-1)$.}
\item[ii)]
\emph{For any separable state $\rho$, the matrix $T$ belongs to the octahedron
$\cal O$ with vertices $(\pm1,0,0)$, $(0,\pm1,0)$,
$(0,0,\pm1)$.}
\end{enumerate}
Still, the number of (real) parameters characterizing $\rho$ is quite large.
Thus, to simplify the treatment, from now on, we focus on the states with maximally disordered subsystems, namely the states
with $\boldsymbol{r,s}=0$.
They are solely characterized by the $T$ matrix.
For these states, the following necessary and sufficient conditions are known \cite{Horodecki}:
\begin{enumerate}
\item[i)]
\emph{Any operator \eqref{T} with $\boldsymbol{r,s}=0$ and diagonal
$T$ is a state (density operator) iff $T$ belongs to the tetrahedron $\cal T$.}
\item[ii)]
\emph{Any state $\rho$ with maximally disordered subsystems and diagonal
$T$ is separable iff $T$ belongs to the octahedron $\cal O$.}
\end{enumerate}
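As an illustrative aside (not part of the original analysis), these characterizations are easy to probe numerically with the Positive Partial Transpose criterion, which for two qubits is necessary and sufficient for separability \cite{Horo}. The sketch below, which assumes NumPy is available, builds $\rho$ with diagonal $T$ and tests one point inside the octahedron $\cal O$ and one vertex of the tetrahedron $\cal T$:

```python
import numpy as np

# Pauli matrices; rho = (I⊗I + sum_n t_nn sigma_n⊗sigma_n)/4 for diagonal T
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rho_t(t):
    r = np.eye(4, dtype=complex)
    for tn, s in zip(t, [sx, sy, sz]):
        r += tn * np.kron(s, s)
    return r / 4

def ppt_min_eig(rho):
    # partial transpose on the second qubit, then smallest eigenvalue
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt).min()

# (0.3, 0.3, 0.3) lies in the octahedron (|t1|+|t2|+|t3| <= 1): PPT holds
print(ppt_min_eig(rho_t((0.3, 0.3, 0.3))) >= 0)   # True
# the vertex (1, -1, 1) of the tetrahedron is a Bell state: PPT fails
print(ppt_min_eig(rho_t((1.0, -1.0, 1.0))))       # ≈ -0.5
```

The vertex $(1,-1,1)$ corresponds to a maximally entangled Bell state, for which the partial transpose has minimal eigenvalue $-1/2$.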
\section{Fisher Metrics}\label{sec3}
The generic state with maximally disordered subsystems parametrized by ${\boldsymbol{t}}$ reads
\begin{equation}
\label{rhot}
\rho_{\boldsymbol{t}}=\frac{1}{4}\left(I\otimes I+\sum_{n=1}^3t_{nn}\, \sigma_n\otimes\sigma_n\right),
\end{equation}
and has a matrix representation (in the canonical basis):
\begin{equation}
\label{rhotmatrix}
\rho_{\boldsymbol t}=\left(
\begin{array}{cccc}
\frac{t_{33}+1}{4} & 0 & 0 & \frac{t_{11}-t_{22}}{4} \\
0 & \frac{1-t_{33}}{4} & \frac{t_{11}+t_{22}}{4} & 0 \\
0 & \frac{t_{11}+t_{22}}{4} & \frac{1-t_{33}}{4} & 0 \\
\frac{t_{11}-t_{22}}{4} & 0 & 0 & \frac{t_{33}+1}{4} \\
\end{array}
\right).
\end{equation}
We shall derive different Fisher metrics for the set of such states.
\subsection{Classical Fisher Metric in Phase Space}\label{subsec31}
The state \eqref{rhot} can also be represented in the phase space by means of the Husimi $Q$-function \cite{Husimi},
\begin{equation}
Q_{\boldsymbol t}\left( {\boldsymbol x}\right):=\frac{1}{4\pi^2}\langle \Omega_1| \langle \Omega_2| \rho_{\boldsymbol t} | \Omega_2\rangle |\Omega_1\rangle,
\label{p}
\end{equation}
where
\begin{eqnarray}
|\Omega_1\rangle&=&\cos {\frac{\theta_1}{2}} | 0 \rangle+ e^{-i \phi_1} \sin{\frac{\theta_1}{2}}|1\rangle,
\\
|\Omega_2\rangle&=&\cos {\frac{\theta_2}{2}} | 0 \rangle + e^{-i \phi_2} \sin{\frac{\theta_2}{2}}|1\rangle,
\end{eqnarray}
are $SU(2)$ coherent states \cite{KS}. Here, ${\boldsymbol x}:=(\theta_1,\theta_2,\phi_1,\phi_2)$ is the vector of random variables $\theta_{1,2}\in[0,\pi)$, $\phi_{1,2}\in[0,2\pi]$, with probability measure
$d{\boldsymbol x}=\frac{1}{16\pi^2}\sin\theta_1\sin\theta_2 d\theta_1d\theta_2 d\phi_1d\phi_2$.
Recall that the classical Fisher--Rao information metric for probability distribution functions $Q_{\boldsymbol t}$ is given by:
\begin{equation}
g^{FR}_{ij}:=\int Q_{\boldsymbol t}\left( {\boldsymbol x}\right) \partial_i \log Q_{\boldsymbol t}\left( {\boldsymbol x}\right) \partial_j \log Q_{\boldsymbol t}\left( {\boldsymbol x}\right) d {\boldsymbol x},
\label{gFisher}
\end{equation}
where $\partial_i :=\frac{\partial}{\partial t_{ii}}$.
Considering \eqref{rhot} and \eqref{p}, we explicitly get
\begin{eqnarray}
Q_{\boldsymbol t}\left(\theta_1, \theta_2, \phi_1, \phi_2 \right)
&=&\frac{2\sin\theta_1 \sin \theta_2 }{64 \pi ^2} \Big[t_{11} \left(\cos(\phi_1+\phi_2)+\cos(\phi_1-\phi_2)\right)
-t_{22} \left(\cos(\phi_1+\phi_2)-\cos(\phi_1-\phi_2)\right) \Big] \notag \\
&+&\frac{4}{64 \pi ^2}\left[t_{33} \cos \theta_1 \cos \theta_2+1 \right].
\end{eqnarray}
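As a consistency check (illustrative only; it assumes NumPy), the explicit expression above can be compared with a direct evaluation of the definition \eqref{p} at random parameters and angles:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rho_t(t):
    r = np.eye(4, dtype=complex)
    for tn, s in zip(t, [sx, sy, sz]):
        r += tn * np.kron(s, s)
    return r / 4

def coherent(theta, phi):
    # SU(2) coherent state |Omega> = cos(theta/2)|0> + e^{-i phi} sin(theta/2)|1>
    return np.array([np.cos(theta / 2), np.exp(-1j * phi) * np.sin(theta / 2)])

def Q_braket(t, th1, th2, ph1, ph2):
    omega = np.kron(coherent(th1, ph1), coherent(th2, ph2))
    return (omega.conj() @ rho_t(t) @ omega).real / (4 * np.pi**2)

def Q_explicit(t, th1, th2, ph1, ph2):
    t11, t22, t33 = t
    s = np.sin(th1) * np.sin(th2)
    val = 2 * s * (t11 * (np.cos(ph1 + ph2) + np.cos(ph1 - ph2))
                   - t22 * (np.cos(ph1 + ph2) - np.cos(ph1 - ph2)))
    val += 4 * (t33 * np.cos(th1) * np.cos(th2) + 1)
    return val / (64 * np.pi**2)

rng = np.random.default_rng(0)
t = (0.2, -0.4, 0.3)
for _ in range(5):
    th1, th2 = rng.uniform(0, np.pi, 2)
    ph1, ph2 = rng.uniform(0, 2 * np.pi, 2)
    assert np.isclose(Q_braket(t, th1, th2, ph1, ph2),
                      Q_explicit(t, th1, th2, ph1, ph2))
print("explicit and bra-ket expressions agree")
```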
Unfortunately, it is not possible to arrive at an analytical expression for $g^{FR}_{ij}$;
the integral \eqref{gFisher} can only be evaluated numerically.
Let us, however, analyze an important property of this metric.
\begin{prop}\label{prop1}
The metric \eqref{gFisher} is invariant under local rotations.
\end{prop}
\begin{proof}
From Section \ref{sec2}, we know that the equivalence relation $\rho\sim \left(U_1\otimes U_2\right)\rho
\left(U_1^\dag\otimes U_2^\dag\right)$
leads to the transformation \eqref{TOTO} for the parameter matrix,
where the correspondence between $O_i$s and $U_i$s, given by Equation \eqref{UnsU}, can be read as follows
\begin{equation}
\label{Correspondence}
(O_i)_{\mu\nu}=\frac{1}{2} {\rm Tr}\left[\sigma_\mu U_i\sigma_\nu U_i^\dagger\right].
\end{equation}
Therefore, the effect of the equivalence relation acts on $Q_{\boldsymbol t}\left( {\boldsymbol x}\right)$ of Equation \eqref{p} by changing the parameter vector ${\boldsymbol{t}}=(t_{11},t_{22},t_{33})$ via an orthogonal transformation given by \eqref{Correspondence}. That is
\begin{equation}
Q_{\boldsymbol t}( {\boldsymbol x})\mapsto Q_{{\boldsymbol t}^{\prime}}\left( {\boldsymbol x}\right),
\end{equation}
where ${\boldsymbol t}^{\prime}=O {\boldsymbol t}$, or explicitly
\begin{eqnarray}\label{transformation}
&&t^{\prime}_{11}=O_{11}t_{11}+O_{12}t_{22}+O_{13}t_{33}, \nonumber\\
&&t^{\prime}_{22}=O_{21}t_{11}+O_{22}t_{22}+O_{23}t_{33}, \\
&&t^{\prime}_{33}=O_{31}t_{11}+O_{32}t_{22}+O_{33}t_{33}\nonumber .
\end{eqnarray}
Let us now see how the entries of the Fisher--Rao metric \eqref{gFisher} change under an orthogonal transformation of $T$ given in terms of \eqref{Correspondence}. To this end, consider \eqref{gFisher}
written for ${\boldsymbol t}^{\prime}$ as follows
\begin{eqnarray}\label{gFishervary}
\widetilde{g}^{FR}_{ij}&=&
\int Q_{{\boldsymbol t}^\prime}\left( {\boldsymbol x}\right) \partial^\prime_i \log Q_{{\boldsymbol t}^\prime}\left( {\boldsymbol x}\right) \partial^\prime_j \log Q_{{\boldsymbol t}^\prime}\left( {\boldsymbol x}\right) d {\boldsymbol x}\notag\\
&=&\int \frac{1}{Q_{ {\boldsymbol t}^{\prime}} \left( {\boldsymbol x}\right)} \partial^{\prime}_i Q_{ {\boldsymbol t}^{\prime}} \left( {\boldsymbol x}\right) \partial^{\prime}_j
Q_{ {\boldsymbol t}^{\prime}} \left( {\boldsymbol x}\right) d{\boldsymbol x},
\end{eqnarray}
where $\partial^{\prime}_i:=\frac{\partial}{\partial t^{\prime}_{ii}}$. From \eqref{transformation}, we recognize the functional relation $t^{\prime}_{ii}=t^{\prime}_{ii}(\boldsymbol{t})$; therefore, we have
\begin{align}\label{transfderiv}
\partial_{i}Q_{ {\boldsymbol t}^{\prime}} \left( {\boldsymbol x}\right)
= \partial_1^{\prime} Q_{ {\boldsymbol t}^{\prime}} \left( {\boldsymbol x}\right)\partial_{i}t^{\prime}_{11}
+\partial_2^{\prime}Q_{ {\boldsymbol t}^{\prime}} \left( {\boldsymbol x}\right)\partial_{i}t^{\prime}_{22}
+\partial_3^{\prime}Q_{ {\boldsymbol t}^{\prime}} \left( {\boldsymbol x}\right)\partial_{i}t^{\prime}_{33}.
\end{align}
Finally, from \eqref{gFishervary} and \eqref{transfderiv}, we arrive at
\begin{equation}
g^{FR}_{ij}=\sum_{m,n=1}^3 \widetilde{g}^{FR}_{mn}O_{i n}O_{j m},
\end{equation}
and as a consequence we immediately obtain $g^{FR}=O\ \widetilde{g}^{FR}\ O^\top$.
\end{proof}
\subsection{Quantum Fisher Metrics}\label{subsec32}
It is remarkable that \eqref{gFisher} is the unique (up to a constant factor) monotone Riemannian
metric (that is, the metric contracting under any stochastic map) in the class of probability
distribution functions (classical or commutative case) \cite{Chentsov}.
This is not the case in an operator setting (quantum case), where the notion
of Fisher information has many natural generalizations due to non-commutativity \cite{Petz}.
Among the various generalizations, two are distinguished.
The first natural generalization of the classical Fisher information arises when
one formally generalizes the expression \eqref{gFisher}. This was, in fact, first done in a quantum estimation setting \cite{Helstrom}.
To see how this happens, note that in a symmetric form, the derivative of $Q_{\boldsymbol t}$ reads
\begin{equation}
\partial_i Q_{\boldsymbol t}=\frac{1}{2}\left(
\partial_i \log Q_{\boldsymbol t} \cdot Q_{\boldsymbol t}
+Q_{\boldsymbol t} \cdot \partial_i \log Q_{\boldsymbol t}
\right).
\end{equation}
In Equation \eqref{gFisher}, replacing the integration by trace,
$Q_{\boldsymbol t}$ by $\rho_{\boldsymbol t}$, and the logarithmic derivative
$\partial_i \log Q_{\boldsymbol t}$
by the symmetric logarithmic derivative $L_i$ determined by
\begin{equation}
\label{eqSLD}
\partial_i \rho_{\boldsymbol t}=\frac{1}{2}\left(L_i\rho_{\boldsymbol t}+\rho_{\boldsymbol t} L_i\right),
\end{equation}
we come to the quantum Fisher information (derived via the symmetric logarithmic
derivative)
\begin{equation}
\label{gH}
g^{H}_{ij}:=\frac{1}{2}{\rm Tr}\left(L_i L_j \rho_{\boldsymbol t}+L_jL_i\rho_{\boldsymbol t}\right).
\end{equation}
Using \eqref{rhotmatrix} to solve \eqref{eqSLD} yields
\begin{eqnarray}
\label{L1}
L_{1}&=&\left(
\begin{array}{cccc}
\frac{t_{11}-t_{22}}{\left ( t_{11}-t_{22} \right)^2-(t_{33}+1)^2} & 0 & 0 & \frac{t_{33}+1}{-\left ( t_{11}-t_{22} \right)^2+(t_{33}+1)^2} \\
0 & \frac{t_{11}+t_{22}}{\left ( t_{11}+t_{22} \right)^2-(t_{33}-1)^2} & \frac{t_{33}-1}{\left ( t_{11}+t_{22} \right)^2-(t_{33}-1)^2} & 0 \\
0 & \frac{t_{33}-1}{\left ( t_{11}+t_{22} \right)^2-(t_{33}-1)^2} & \frac{t_{11}+t_{22}}{\left ( t_{11}+t_{22} \right)^2-(t_{33}-1)^2} & 0 \\
\frac{t_{33}+1}{-\left ( t_{11}-t_{22} \right)^2+(t_{33}+1)^2} & 0 & 0 & \frac{t_{11}-t_{22}}{\left ( t_{11}-t_{22} \right)^2-(t_{33}+1)^2} \\
\end{array}\right),
\\ \nonumber \\
\label{L2}
L_{2}&=& \left(
\begin{array}{cccc}
\frac{t_{22}-t_{11}}{\left ( t_{11}-t_{22} \right)^2-(t_{33}+1)^2} & 0 & 0 & \frac{t_{33}+1}{\left ( t_{11}-t_{22} \right)^2-(t_{33}+1)^2} \\
0 & \frac{t_{11}+t_{22}}{\left ( t_{11}+t_{22} \right)^2-(t_{33}-1)^2} & \frac{t_{33}-1}{\left ( t_{11}+t_{22} \right)^2-(t_{33}-1)^2} & 0 \\
0 & \frac{t_{33}-1}{\left ( t_{11}+t_{22} \right)^2-(t_{33}-1)^2} & \frac{t_{11}+t_{22}}{\left ( t_{11}+t_{22} \right)^2-(t_{33}-1)^2} & 0 \\
\frac{t_{33}+1}{\left ( t_{11}-t_{22} \right)^2-(t_{33}+1)^2} & 0 & 0 & \frac{t_{22}-t_{11}}{\left ( t_{11}-t_{22} \right)^2-(t_{33}+1)^2} \\
\end{array}
\right),
\\ \nonumber \\
\label{L3}
L_{3}&=& \left(
\begin{array}{cccc}
\frac{t_{33}+1}{-\left ( t_{11}-t_{22} \right)^2+(t_{33}+1)^2} & 0 & 0 & \frac{t_{11}-t_{22}}{\left ( t_{11}-t_{22} \right)^2-(t_{33}+1)^2} \\
0 & \frac{1-t_{33}}{\left ( t_{11}+t_{22} \right)^2-(t_{33}-1)^2} & -\frac{t_{11}+t_{22}}{\left ( t_{11}+t_{22} \right)^2-(t_{33}-1)^2} & 0 \\
0 & -\frac{t_{11}+t_{22}}{\left ( t_{11}+t_{22} \right)^2-(t_{33}-1)^2} & \frac{1-t_{33}}{\left ( t_{11}+t_{22} \right)^2-(t_{33}-1)^2} & 0 \\
\frac{t_{11}-t_{22}}{\left ( t_{11}-t_{22} \right)^2-(t_{33}+1)^2} & 0 & 0 & \frac{t_{33}+1}{-\left ( t_{11}-t_{22} \right)^2+(t_{33}+1)^2} \\
\end{array}
\right). \;\;
\end{eqnarray}
Then, from Equation \eqref{gH}, we obtain
{\small
\begin{equation}
\label{gHmatrix}
g^{H}=\frac{1}{\Delta}\left(
\begin{array}{ccc}
1- \|{\boldsymbol t}\|^2 -2 t_{11} t_{22} t_{33}
& (1+ \| {\boldsymbol t}\|^2 -2 t_{33}^2) t_{33}+2 t_{11} t_{22}
& \left(1+ \| {\boldsymbol t} \|^2- 2t_{22}^2\right) t_{22}+2 t_{11} t_{33} \\ \\
(1+ \| {\boldsymbol t}\|^2 -2 t_{33}^2) t_{33}+2 t_{11} t_{22} &
1- \|{\boldsymbol t}\|^2 -2 t_{11} t_{22} t_{33}
&\left(1+ \| {\boldsymbol t} \|^2-2 t_{11}^2\right) t_{11}+2 t_{22} t_{33} \\ \\
\left(1+ \| {\boldsymbol t} \|^2- 2t_{22}^2\right) t_{22}+2 t_{11} t_{33}
&\left(1+ \| {\boldsymbol t} \|^2-2 t_{11}^2\right) t_{11}+2 t_{22} t_{33}
& 1- \|{\boldsymbol t}\|^2 -2 t_{11} t_{22} t_{33} \\
\end{array}
\right),
\end{equation}
}
where
\begin{equation}
\Delta:=\left(\left ( t_{11}+t_{22} \right)^2-(1- t_{33})^2\right) \left(\left ( t_{11}-t_{22} \right)^2-(1+ t_{33})^2\right).
\end{equation}
It follows that
\begin{equation}
\det g^{H}=\frac{1}{\Delta}.
\end{equation}
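The identity $\det g^{H}=1/\Delta$ can be verified numerically without using the explicit matrices \eqref{L1}--\eqref{L3}: Equation \eqref{eqSLD} is a Sylvester equation for $L_i$ and can be solved with standard linear-algebra routines. The following sketch (an illustration, not part of the paper; it assumes NumPy and SciPy) does this at one interior point of the tetrahedron:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def rho(t):
    # the matrix representation of rho_t in the canonical basis
    t11, t22, t33 = t
    a, b = (1 + t33) / 4, (1 - t33) / 4
    c, d = (t11 - t22) / 4, (t11 + t22) / 4
    return np.array([[a, 0, 0, c],
                     [0, b, d, 0],
                     [0, d, b, 0],
                     [c, 0, 0, a]])

# exact derivatives of rho with respect to t11, t22, t33
drho = [np.array([[0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0]]) / 4,
        np.array([[0, 0, 0, -1], [0, 0, 1, 0], [0, 1, 0, 0], [-1, 0, 0, 0]]) / 4,
        np.diag([1.0, -1.0, -1.0, 1.0]) / 4]

t = (0.1, -0.2, 0.15)
r = rho(t)
# solve (1/2)(L r + r L) = drho_i for the symmetric logarithmic derivatives
L = [solve_sylvester(r / 2, r / 2, dr) for dr in drho]
g = np.array([[0.5 * np.trace(r @ (L[i] @ L[j] + L[j] @ L[i]))
               for j in range(3)] for i in range(3)])

t11, t22, t33 = t
Delta = ((t11 + t22)**2 - (1 - t33)**2) * ((t11 - t22)**2 - (1 + t33)**2)
print(np.isclose(np.linalg.det(g), 1 / Delta))  # True
```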
\bigskip
The second generalization of the classical Fisher information arises
by noticing that \eqref{gFisher} can be equivalently expressed as
\begin{equation}
g^{FR}_{ij}=4\int \partial_i \sqrt{Q_{\boldsymbol t}\left( {\boldsymbol x}\right)}
\partial_j \sqrt{ Q_{\boldsymbol t}\left( {\boldsymbol x}\right)} \, d {\boldsymbol x}.
\end{equation}
Replacing the integration by trace, and the parameterized probabilities $Q$
by a parameterized density operator $\rho$, we can arrive at \cite{Luo}
\begin{equation}\label{gWY}
g^{WY}_{ij}:=4 {\rm Tr} \left[ \left(\partial_i \sqrt{\rho_{\boldsymbol t}}\right)
\left(\partial_j \sqrt{\rho_{\boldsymbol t}}\right)\right].
\end{equation}
The superscript indicates the names of Wigner and Yanase since Equation \eqref{gWY} is motivated by their work on skew information \cite{WigYan}.
The following proposition relates $g^{WY}$ and $g^{H}$.
\begin{prop}\label{prop2}
If $\left[\rho_{\boldsymbol t},L_i\right]=0$, $\forall i$, then $g^{WY}=g^{H}$.
\end{prop}
\begin{proof}
If $\left[\rho_{\boldsymbol t},L_i\right]=0$ for $i=1,2,3$, then from \eqref{eqSLD} it follows that
$\partial_i \rho_{\boldsymbol t}=\rho_{\boldsymbol t} L_i$ and $g_{ij}^{H}={\rm Tr}
\left[\rho_{\boldsymbol t} L_iL_j\right]$. Moreover,
\begin{eqnarray}
g_{ij}^{ WY}&=&{\rm Tr} \left[ {\rho_{\boldsymbol t}}^{-1/2} \left(\partial _i {\rho_{\boldsymbol t}}\right)
{\rho_{\boldsymbol t}}^{-1/2}\left(\partial _j {\rho_{\boldsymbol t}}\right)
\right]\label{line1}\\
&=&{\rm Tr} \left[ {\rho_{\boldsymbol t}}^{-1/2} {\rho_{\boldsymbol t}} L_i
{\rho_{\boldsymbol t}}^{-1/2} {\rho_{\boldsymbol t}} L_j
\right]\label{line2}\\
&=&{\rm Tr} \left[ {\rho_{\boldsymbol t}}^{1/2} L_i
{\rho_{\boldsymbol t}}^{1/2} L_j
\right]\label{line3}\\
&=&{\rm Tr} \left[ {\rho_{\boldsymbol t}} L_i L_j
\right]=g_{ij}^{H},
\label{line4}
\end{eqnarray}
where in \eqref{line1} we have used the definition \eqref{gWY} together with $\partial_i \sqrt{\rho_{\boldsymbol t}}=\frac{1}{2}{\rho_{\boldsymbol t}}^{-1/2}\,\partial_i \rho_{\boldsymbol t}$ (valid since $\rho_{\boldsymbol t}$ commutes with $\partial_i \rho_{\boldsymbol t}=\rho_{\boldsymbol t} L_i$); in \eqref{line2} we have used
$\partial_i \rho_{\boldsymbol t}=\rho_{\boldsymbol t} L_i$; and in \eqref{line3} and \eqref{line4} the commutativity of $\rho_{\boldsymbol t}$ and the $L_i$s.
\end{proof}
\medskip
By referring to \eqref{rhotmatrix} and \eqref{L1}--\eqref{L3}, we can see that
the conditions of Proposition \ref{prop2} are satisfied, hence hereafter we shall consider
$g^{WY}=g^{H}$.
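This equality can also be checked numerically straight from the definition \eqref{gWY}, computing $\partial_i\sqrt{\rho_{\boldsymbol t}}$ by central finite differences of the matrix square root (an illustration, not part of the paper; it assumes NumPy and SciPy, and the step size $h$ is a numerical choice):

```python
import numpy as np
from scipy.linalg import sqrtm

def rho(t):
    t11, t22, t33 = t
    a, b = (1 + t33) / 4, (1 - t33) / 4
    c, d = (t11 - t22) / 4, (t11 + t22) / 4
    return np.array([[a, 0, 0, c],
                     [0, b, d, 0],
                     [0, d, b, 0],
                     [c, 0, 0, a]])

def d_sqrt_rho(t, i, h=1e-6):
    # central finite difference of sqrt(rho) with respect to t_ii
    e = np.zeros(3)
    e[i] = h
    return (sqrtm(rho(np.asarray(t) + e)) - sqrtm(rho(np.asarray(t) - e))) / (2 * h)

t = (0.25, -0.1, 0.2)
D = [d_sqrt_rho(t, i) for i in range(3)]
gWY = np.array([[4 * np.trace(D[i] @ D[j]).real for j in range(3)]
                for i in range(3)])

t11, t22, t33 = t
Delta = ((t11 + t22)**2 - (1 - t33)**2) * ((t11 - t22)**2 - (1 + t33)**2)
print(np.isclose(np.linalg.det(gWY), 1 / Delta, rtol=1e-4))  # True
```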
\section{Volume of States with Maximally Disordered Subsystems}\label{sec4}
The volume of two-qubit states with maximally disordered subsystems will be given by
\begin{equation}
\label{V}
V=\int_{\cal T} \sqrt{ \det g} \; d{\boldsymbol t},
\end{equation}
where the tetrahedron $\cal T$ is characterized by equations:
\begin{eqnarray}
1-t_{11}-t_{22}-t_{33} &\geq& 0, \notag \\
1-t_{11}+t_{22}+t_{33} &\geq& 0,\notag \\
1+t_{11}-t_{22}+t_{33} &\geq& 0,\notag \\
1+t_{11}+t_{22}-t_{33} &\geq& 0.
\end{eqnarray}
(If the Fisher metric is degenerate, then the volume \eqref{V} is meaningless, since $\det g = 0$. This reflects a redundancy in the parameters describing the states: at least one of them should be regarded as a function of the others, and one should restrict attention to a proper submanifold.)
Analogously, the volume of two-qubit separable states with maximally disordered subsystems
will be given by
\begin{equation}
\label{Vs}
V_s=\int_{\cal O} \sqrt{ \det g} \; d{\boldsymbol t},
\end{equation}
where the octahedron $\cal O$ is characterized by equations:
\begin{eqnarray}
1-t_{11}-t_{22}-t_{33} &\geq& 0, \notag\\
1+t_{11}-t_{22}-t_{33} &\geq& 0,\notag \\
1+t_{11}+t_{22}-t_{33} &\geq& 0,\notag \\
1-t_{11}+t_{22}-t_{33} &\geq& 0, \notag\\
1-t_{11}-t_{22}+t_{33} &\geq& 0, \notag \\
1+t_{11}-t_{22}+t_{33} &\geq& 0,\notag \\
1+t_{11}+t_{22}+t_{33} &\geq& 0,\notag \\
1-t_{11}+t_{22}+t_{33} &\geq& 0.
\end{eqnarray}
Concerning the classical Fisher metric,
as a consequence of the result $g^{FR}=O\ \widetilde{g}^{FR} O^\top$ (Proposition \ref{prop1}), we have that the volume computed as $\int \sqrt{\det g^{FR}}\ d{\boldsymbol{t}}$ is invariant under orthogonal transformations of the parameters matrix $T$.
However, the integral \eqref{gFisher} can only be performed numerically.
To this end, we have generated $10^3$ points randomly distributed inside the tetrahedron
${\cal T}$. At each of these points, the integral \eqref{gFisher} has been evaluated numerically, thus determining the value of $\sqrt{\det g^{FR}}$ on a discrete set of points.
The data have then been interpolated to obtain a smooth function $\sqrt{\det g^{FR}}(t_{11},t_{22},t_{33})$.
This has been used to compute the quantities \eqref{V} and \eqref{Vs} and their ratio, yielding
$V=0.168$, $V_s=0.055$, and $V_s/V=0.327$.
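The per-point evaluation of \eqref{gFisher} can be sketched as a Monte Carlo average over the probability measure $d{\boldsymbol x}$ (an illustration of the procedure with the paper's normalization conventions, not the code used for the quoted numbers; it assumes NumPy, and the sample size is arbitrary):

```python
import numpy as np

def Q_and_grad(t, th1, th2, ph1, ph2):
    # Husimi function of the text and its gradient in (t11, t22, t33)
    t11, t22, t33 = t
    s = np.sin(th1) * np.sin(th2)
    cp, cm = np.cos(ph1 + ph2), np.cos(ph1 - ph2)
    cc = np.cos(th1) * np.cos(th2)
    Q = (2 * s * (t11 * (cp + cm) - t22 * (cp - cm))
         + 4 * (t33 * cc + 1)) / (64 * np.pi**2)
    dQ = np.array([2 * s * (cp + cm), -2 * s * (cp - cm), 4 * cc]) / (64 * np.pi**2)
    return Q, dQ

# sample x from dx: cos(theta_i) uniform on [-1, 1], phi_i uniform on [0, 2pi]
rng = np.random.default_rng(1)
N = 200_000
th1, th2 = np.arccos(rng.uniform(-1, 1, (2, N)))
ph1, ph2 = rng.uniform(0, 2 * np.pi, (2, N))

# g_ij = E[ (d_i Q)(d_j Q) / Q ] under dx, estimated at one point t
Q, dQ = Q_and_grad((0.2, -0.1, 0.3), th1, th2, ph1, ph2)
gFR = (dQ[:, None, :] * dQ[None, :, :] / Q).mean(axis=2)
print(np.sqrt(np.linalg.det(gFR)))  # value of sqrt(det g^FR) at this t
```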
The same volumes computed by the quantum Fisher metric
result in $V=\pi^2$, $V_s=(4-\pi)\pi$ and their ratio is $V_s/V=(4-\pi)/\pi\approx 0.27$.
\bigskip
It would also be instructive to see how the ratio of the volume of separable states and the volume of total states varies versus the purity.
This latter quantity turns out to be
\begin{equation}
P= {\rm Tr} \left(\rho^2\right)=\frac{1}{4}\left( 1+ \|{\boldsymbol t}\|^2 \right).
\end{equation}
Thus, fixing a value of $P\in[\frac{1}{4},1]$ amounts to fixing a sphere $\cal S$ centered at the origin of
$\mathbb{R}^3$ with radius $\sqrt{4P-1}$.
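The purity formula is immediate to confirm from the matrix representation \eqref{rhotmatrix} (an illustrative check, not part of the paper; it assumes NumPy):

```python
import numpy as np

def rho(t):
    t11, t22, t33 = t
    a, b = (1 + t33) / 4, (1 - t33) / 4
    c, d = (t11 - t22) / 4, (t11 + t22) / 4
    return np.array([[a, 0, 0, c],
                     [0, b, d, 0],
                     [0, d, b, 0],
                     [c, 0, 0, a]])

t = (0.3, -0.2, 0.5)
P = np.trace(rho(t) @ rho(t))
print(np.isclose(P, (1 + np.dot(t, t)) / 4))  # True
```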
Hence, the volume of states with fixed purity will be given by
\begin{equation}
V(P)=\int_{{\cal T}\cap{\cal S}} \sqrt{ \det g} \; d{\boldsymbol t},
\label{Vpurity}
\end{equation}
while the volume of separable states with fixed purity will be given by
\begin{equation}
V_s(P)=\int_{{\cal O}\cap{\cal S}} \sqrt{ \det g} \; d{\boldsymbol t}.
\label{Vspurity}
\end{equation}
As a consequence, we can obtain the ratio
\begin{equation}
R(P)=\frac{V_{s}(P)}{V(P)},
\label{RP}
\end{equation}
as a function of purity $P$.
Such a ratio is plotted in Figure \ref{fig1} as a solid (resp. dashed) line for the classical (resp. quantum) Fisher metric. There, it is shown that, by increasing the purity, the volume of separable states diminishes until it becomes of measure zero; in fact, this already happens at purity $P=1/2$.
However, when the purity is low enough (below $1/3$), all states are separable ($R=1$).
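Both thresholds have a direct geometric origin, assuming the standard picture in which ${\cal T}$ is the Bell-state tetrahedron with vertices $(-1,-1,-1)$, $(-1,1,1)$, $(1,-1,1)$, $(1,1,-1)$ and ${\cal O}=\{\|{\boldsymbol t}\|_1\le 1\}$ is the separable octahedron: the sphere of radius $\sqrt{4P-1}$ lies entirely inside ${\cal O}$ up to $P=1/3$ (the insphere radius of ${\cal O}$ is $1/\sqrt3$) and leaves ${\cal O}$ entirely beyond $P=1/2$ (the circumradius of ${\cal O}$ is $1$). A quick check:

```python
import numpy as np

def sphere_fractions(P, n=50_000, seed=1):
    """Fractions of the sphere of radius sqrt(4P-1) lying in O and in T."""
    r = np.sqrt(4 * P - 1)
    rng = np.random.default_rng(seed)
    u = rng.standard_normal((n, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # uniform directions
    t = r * u
    x, y, z = t.T
    in_T = (x + y + z <= 1) & (x - y - z <= 1) & \
           (-x + y - z <= 1) & (-x - y + z <= 1)
    in_O = np.abs(t).sum(axis=1) <= 1
    return in_O.mean(), in_T.mean()

print(sphere_fractions(0.30))   # below P = 1/3: all sampled states separable
print(sphere_fractions(0.60))   # above P = 1/2: no separable states left
```
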
Most importantly, the two curves show the same qualitative behavior.
This means that the use of classical Fisher information on the phase space representation of quantum states (as a true probability distribution function) is able to capture the main features of the geometry of quantum states.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{figRatio}
\caption{Ratio of volumes \eqref{RP} vs purity $P$. Solid line refers to the classical Fisher metric; dashed line refers to the quantum Fisher metric.}
\label{fig1}
\end{figure}
\section{Conclusion}\label{sec5}
In conclusion, we have investigated the volume of the set of two-qubit states with maximally disordered subsystems by considering their phase space representation in terms of probability distribution functions and by applying the classical Fisher information metric to them.
The results have been contrasted with those obtained by using quantum versions of the Fisher metric.
Although the absolute values of volumes of separable and entangled states turn out to be different in the two approaches, their ratios are comparable.
Most importantly, the behavior of the volume of sub-manifolds of separable and entangled states with fixed purity is shown to be almost the same in the two approaches.
Thus, we can conclude that classical Fisher information in phase space is able to capture the features of the volume of quantum states. The question then arises as to which aspects such an approach is unable to single out with respect to the purely quantum one. Besides that, our work points out other interesting issues
concerning quantum metrics. In fact, in the considered class of two-qubit states, the Helstrom and the
Wigner--Yanase-like quantum Fisher metrics coincide. This leads us to ask the following questions: For which more general class of states in finite dimensional systems are the two equal? When they are not equal,
what is their order relation? These investigations are left for future work.
Additionally, it is worth noting that
our approach to the volume of states by information geometry offers the possibility of characterizing quantum logical gates.
In fact, given a logical gate (unitary) $G$ on two-qubit states,
the standard entangling power is defined as \cite{ZZF00}:
$$
\mathfrak{E}(G):=\int E\left( G |\psi \rangle \right)
d\mu\left( |\psi\rangle\right),
$$
where $E$ is an entanglement quantifier \cite{Horo} and the average is taken over all
product states $|\psi\rangle=|\psi_1\rangle \otimes |\psi_2\rangle$
according to a suitable measure $\mu\left( |\psi\rangle\right)$.
It is then quite natural to take the measure induced by the Haar measure on
$SU(2)\otimes SU(2)$ \cite{NTM16}.
However, $\mathfrak{E}(G)$ cannot be used on the subset of states with maximally disordered subsystems, as it is restricted to pure states.
Here our measure comes into play, leading to the following definition:
$$
\mathfrak{E}(G):=\int_{\cal O} E\left( G \rho_{\boldsymbol t} \, G^\dag \right) \sqrt{\det g} \,
d\boldsymbol{t}.
$$
In turn, this paves the way to a general formulation that involves the average over all separable states by also including the parameters $\boldsymbol{r},\boldsymbol{s}$. Clearly, this would provide the most accurate characterization of the entangling potential of $G$.
Finally, our approach can be scaled up to three or more qubits, but since analytical calculations quickly become involved, one should consider families of states with a low number of parameters, e.g., those proposed in \cite{Altafini}. Nevertheless, such families can provide geometrical insights for more general cases.
\acknowledgments{The work of M.R. is supported by China Scholarship Council.}
\section{Introduction}
\label{sec:intro}
\vspace{-1mm}
Specialized skills are required to create a commercially available image. One way to obtain a required image at low cost is by finding existing images through an image search. However, it is difficult to obtain what was exactly imagined since the desired image may not exist on the Web. An automatic image-generation system using natural language has the potential to generate what was actually imagined without requiring any special skill or cost.
The solution to this challenging task would provide not only practical benefits but also contribute to bridging natural language understanding with image processing.
The task of generating images from natural language has been investigated as ``cap2image,'' and several deep neural network (DNN)-based generative models have been successful \cite{mansimov2016,reed2016b}.
Although it is difficult to define the relationships between language and images clearly, DNN-based models make it possible to align these relationships in a latent space. The network is composed of a {\em language-encoder} and an {\em image-decoder}. Long short-term memory (LSTM) \cite{hochreiter1997long} is generally used as the {\em language-encoder}, and several network structures, such as the variational auto-encoder \cite{kingma2014auto}, the generative adversarial network (GAN) \cite{goodfellow2014generative}, and pixelCNN \cite{van2016conditional}, are used as the {\em image-decoder}.
To the best of our knowledge, Reed et al.~\cite{reed2016b} were the first to successfully construct a discriminable image generator conditioned on a caption, based on a deep convolutional generative adversarial network (DCGAN) \cite{radford2015unsupervised}. We start from this work with a different viewpoint; we focus on a practical problem in this task:
such models may generate an image slightly different from what the user actually wanted. Our motivation is to tackle this problem by introducing an interactive manipulation framework that makes a generated image modifiable with natural language.
Figure~\ref{fig:frameworks} shows the difference between the cap2image framework and the proposed framework.
In contrast to cap2image models, the proposed model generates a new image from a source image and an instruction that represents the difference between the source and the target image.
The advantage of the proposed framework is that it allows users to modify a source image that has already been generated. Furthermore, users only have to focus on the difference and express it in natural language. This is not only easier for users but also easier for the {\em language-encoder} to learn, because an instruction describing only the difference will be much shorter than a caption describing all the information of the desired image.
We define a latent space composed of image feature vectors and set a problem of manipulation as a transformation of a vector in latent space. The manipulated image is generated from the latent vector that is transformed from the latent vector of the original image by the embedded natural language instruction.
Kiros et al.~\cite{kiros2014unifying} reported that it is possible to learn a DNN model whose shared latent space between language (captions) and images has an additive structure. Reed et al.~\cite{reed2015} reported that it is possible to generate a target image using image analogy.
Building on these properties, we realize the image-manipulation system by bridging the analogy in the latent space of images and natural language instructions, as $\{source\:image\}+``instruction"=\{target\:image\}$.
We note that there are many related works on flexible image editing using user hand-drawing \cite{brock2016neural,zhu2016generative}. However, manipulating images with natural language can be a useful way to obtain a desired image easily, if we can bridge natural language and modifications that would otherwise require many drawing operations.
\begin{figure}[t]
\vspace{-5mm}
\centering
\includegraphics[width=1.0\textwidth]{fig1_2.pdf}
\vspace{-5mm}
\caption{\label{fig:frameworks}Comparison of natural language conditioned image generation framework between cap2image (left) and the proposed framework (right).}
\vspace{-4mm}
\end{figure}
\section{Network architecture of proposed framework}
\vspace{-1mm}
The network architecture of the proposed framework, which generates an image from a source image and a language instruction, is shown in Figure~\ref{fig:fine-tune}. The framework is composed of an encoder and a decoder, like existing image generators.
Details of the encoder and decoder models are described in this section.
\vspace{-1mm}
\subsection{Encoder model}
\vspace{-1mm}
The encoder model consists of two parts: an image encoder $ImEnc$ and an instruction encoder $IEnc$. For the image encoder, we use the same architecture as the discriminator of a DCGAN; for the instruction encoder, we use a plain LSTM \cite{hochreiter1997long} without peephole connections.
We assume that the source image is $x_{im}$ and the instruction text sequence is $S=[s_1,s_2,\cdots,s_T]$. Then each encoder transformation is defined as
\vspace{-5mm}
\begin{eqnarray}
\phi_{im} &=& CNN_{ImEnc}(x_{im}) \\
\phi_{i}^{t} &=& LSTM_{IEnc}(s_{t},\phi_{i}^{t-1}) \quad (\mbox{where } \phi_{i}^{0}=\mathbf{0})\\
\phi_{fc} &=& FC(\phi_{im},\phi_{i}^T)
\end{eqnarray}
$\phi_{im}$ represents the source image vector. $\phi_{i}^{t}$ is the hidden vector of $IEnc$ at time step $t$, so $\phi_{i}^T$ represents the instruction vector. $\phi_{im}$ and $\phi_{i}^T$ are both fed into a single fully connected ({\sf FC}) layer,
and the output $\phi_{fc}$ is trained to be the latent variable of the target image. If $\phi_{fc}$ can become the latent variable of the target image through learning, the learned model can generate target images without modifying the DCGAN proposed by Reed et al.~\cite{reed2016b}. We used a single {\sf FC} layer because we assumed that images in the latent space can be transformed linearly, as reported by Kiros et al.~\cite{kiros2014unifying}, and we would like to align the language instruction to the linear transformation of images with a single non-linear transformation.
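A minimal numpy sketch of this encoder computation follows; the dimensions, random initializations, stub CNN, and toy LSTM cell are all illustrative placeholders rather than the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
d_im, d_emb, d_h, d_lat = 16, 8, 8, 12   # hypothetical sizes

def image_encoder(x_im):
    """Stand-in for the DCGAN-discriminator CNN: flatten + one layer."""
    W = rng.standard_normal((d_im, x_im.size)) * 0.1
    return np.tanh(W @ x_im.ravel())

# plain LSTM cell (no peephole connections)
Wx = rng.standard_normal((4 * d_h, d_emb)) * 0.1
Wh = rng.standard_normal((4 * d_h, d_h)) * 0.1
b = np.zeros(4 * d_h)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(s_t, h, c):
    i, f, o, g = np.split(Wx @ s_t + Wh @ h + b, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def instruction_encoder(S):
    h, c = np.zeros(d_h), np.zeros(d_h)   # phi_i^0 = 0
    for s_t in S:                          # word embeddings s_1..s_T
        h, c = lstm_step(s_t, h, c)
    return h                               # phi_i^T

W_fc = rng.standard_normal((d_lat, d_im + d_h)) * 0.1

def encode(x_im, S):
    phi_im = image_encoder(x_im)
    phi_iT = instruction_encoder(S)
    # single fully connected layer on the concatenation
    return W_fc @ np.concatenate([phi_im, phi_iT])

x_im = rng.uniform(0, 1, (8, 8))                     # toy source image
S = [rng.standard_normal(d_emb) for _ in range(5)]   # toy instruction
print(encode(x_im, S).shape)                         # (12,)
```
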
\vspace{-1mm}
\subsection{Decoder model}
\vspace{-1mm}
In the decoder model, we basically use the same DCGAN as Reed et al.~\cite{reed2016b} (Figure~\ref{fig:fine-tune}); however, the final-layer activation function of our generator is linear, because this was necessary for model training to succeed in our trials. In this setting, the range of generated pixel values is unbounded. We clipped the values to the range $[0,1]$, because some pixel values fall outside the range of the training data, which is $[0,1]$. We used class labels, object positions, and size labels, following Odena et al.~\cite{odena2016conditional}, to stabilize the training instead of using the latent-space interpolation proposed by Reed et al.~\cite{reed2016b}, because the label information was essential for training in our experience. During training, we used feature matching \cite{salimans2016improved} to stabilize the GAN training.
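Feature matching replaces part of the generator objective by the distance between discriminator-feature statistics of real and generated batches. A minimal sketch (the feature batches here are random placeholders for intermediate discriminator activations):

```python
import numpy as np

def feature_matching_loss(f_real, f_fake):
    """Squared L2 distance between mean discriminator features of two batches."""
    return np.sum((f_real.mean(axis=0) - f_fake.mean(axis=0)) ** 2)

rng = np.random.default_rng(0)
f_real = rng.standard_normal((32, 64))   # batch of intermediate features
f_fake = rng.standard_normal((32, 64))
print(feature_matching_loss(f_real, f_fake) >= 0.0)   # True
print(feature_matching_loss(f_real, f_real))          # 0.0
```
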
\begin{figure}[t]
\vspace{-5mm}
\begin{minipage}{0.5\hsize}
\centering
\includegraphics[width=1.0\textwidth]{fig2v2.pdf}
\vspace{-5mm}
\caption{\label{fig:fine-tune}Architecture of proposed framework}
\vspace{-5mm}
\label{fig:one}
\end{minipage}
\begin{minipage}{0.5\hsize}
\centering
\vspace{3mm}
\includegraphics[width=0.9\textwidth]{fig4.pdf}
\vspace{-1mm}
\caption{\label{fig:amnist}Artificial MNIST}
\label{fig:two}
\vspace{-5mm}
\end{minipage}
\end{figure}
\section{Experiments}
\subsection{Dataset}
\label{Sec:Dataset}
For the experiment, we constructed a dataset of images manipulated through natural language instructions by using the MNIST \cite{lecun1998gradient} dataset and manually created modifications.
The main reason for using artificial data is that we want to analyze the learned model.
This setting also makes it easy to collect many examples.
Figure~\ref{fig:amnist} shows an example of data in the corpus. To construct the data, we prepared a canvas that was three times larger than that of the original MNIST data. We also prepared an instruction verb set, \{``put",``remove",``expand",``compress",\\``move"\}, a position set, \{``top",``left",``right",``bottom",``middle"\}, and a direction set, \{``top",``left",``right",``bottom",``top left",``top right",``bottom left",``bottom right"\}, to create instructions. The simulator determined a triplet of instruction elements, ``verb'', ``digit class'', and ``position'', as shown in Figure~\ref{fig:amnist}, and created a transformed image according to this triplet. Instructions were also automatically generated using the triplet. Each canvas had image, digit-class (11-class), position $(x,y)$ (\{3,3\}-class), and size $(width,height)$ (\{4,4\}-class) information. A canvas containing no digit object was assigned the extra class with $(x,y)=(0,0)$ and $(width,height)=(0,0)$. Thirty-one unique images were generated for each one-digit image in MNIST. In total, there were 369 triplets of source image, target image, and instruction for each train/test sample.
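The triplet-sampling procedure can be sketched as follows; the template wording is our guess at the corpus format, not the exact generated text:

```python
import random

VERBS = ["put", "remove", "expand", "compress", "move"]
POSITIONS = ["top", "left", "right", "bottom", "middle"]
DIRECTIONS = ["top", "left", "right", "bottom",
              "top left", "top right", "bottom left", "bottom right"]
DIGITS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]

def make_instruction(rng):
    """Sample a (verb, digit, position/direction) triplet and render it as text."""
    verb = rng.choice(VERBS)
    digit = rng.choice(DIGITS)
    if verb == "move":
        where = rng.choice(DIRECTIONS)
        text = f"move {digit} to the {where}"
    elif verb in ("put", "remove"):
        where = rng.choice(POSITIONS)
        text = f"{verb} {digit} at the {where}"
    else:  # "expand" / "compress" act on the digit in place
        where = None
        text = f"{verb} {digit}"
    return (verb, digit, where), text

triplet, text = make_instruction(random.Random(0))
print(triplet, "->", text)
```
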
\subsection{Experimental setting}
We used 1000 samples from each class $0 \sim 9$ of the original MNIST training set of 60,000 samples, obtaining 10,000 samples for the dataset. We prepared 3,690,000 triplets following the data preparation described in Section~\ref{Sec:Dataset}. We divided them into {training: 90\% and validation: 10\%}. For the test, we used another 100 samples from each class $0 \sim 9$ of the original MNIST, obtaining 1,000 samples for testing. We used Chainer\footnote{http://chainer.org/}\cite{chainer2015} for the implementation. We used the following conditions: images are resized to $64\times64$, latent-space dimension $=128$, optimization $=Adam$ \cite{kingma2014adam} (initialized with $\alpha = 2.0 \times 10^{-4},\beta = 0.5$), and number of training epochs = 20.
We evaluated the generated images by comparing them to the target images. We used structural similarity (SSIM; higher is better) \cite{wang2004image}, as in Mansimov et al.~\cite{mansimov2016}, to measure the similarities between the target and generated images.
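For reference, a simplified single-window (global) SSIM can be written in a few lines; the windowed variant used in practice averages this quantity over local patches:

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """Single-window SSIM between two images with pixel range [0, L]."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))

rng = np.random.default_rng(0)
img = rng.uniform(0, 1, (64, 64))
print(ssim_global(img, img))                              # ~1.0 (identical images)
print(ssim_global(img, np.clip(img + 0.1, 0, 1)) < 1.0)   # True
```
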
\section{Results}
Figure~\ref{fig:result_example} shows examples generated with the proposed framework. The source images and instructions (first and second columns) were given to the model. The generated images, the target (gold) images, and the SSIMs are shown in the third to fifth columns. From these examples, we confirmed that our framework can generate images similar to the target images, especially regarding positions and sizes.
We conducted a subjective evaluation with three subjects. Each subject rated the similarity between generated images and target (gold) images on a 5-point scale (5 = very similar, 1 = very different).
The subjective evaluation consisted of 100 examples following the format of Figure~\ref{fig:result_example} without SSIM. The distribution of scores was \{1:~9.00\%, 2:~10.7\%, 3:~18.3\%, 4:~16.7\%, 5:~45.3\%\}.
Figure~\ref{fig:insvector} shows a visualization of the cosine similarities of the instruction vectors for ``move,'' ``expand'' and ``compress''. They are sorted by verb-direction-number. The map is clearly separated into blocks of a certain size.
The large black block of ``expand"-``compress" in the left figure indicates that ``expand" and ``compress" are learned as inverses. Furthermore, the block of ``move"-``move" (the right figure), the enlarged part of the red square in the left figure, is also clearly separated into small blocks, and the cosine similarities follow the direction similarity as well. These results indicate that the instruction vectors learned the concepts of verb and direction. However, the concept of number is not significant in the instruction vectors. We conjecture that this is because we used only single-digit operations in the experiment.
We also tried visualizing ``put" and ``remove," but clear blocks did not appear.
We conjecture that this is because the concept of position or number is learned independently.
We also tried inputting an unseen operation, e.g., ``move zero to the right,'' to a source image that already has a digit zero at the right position, to investigate the limitations of our model.
If our model had learned the concept of the instruction ideally, the zero should move off the canvas; however, the digit did not leave the canvas.
This is probably because there are no instructions in our dataset that take a digit off the canvas other than ``remove.''
\begin{figure}[t]
\vspace{-5mm}
\centering
\includegraphics[width=1.0\textwidth]{example_64_2.pdf}
\vspace{-4mm}
\caption{\label{fig:result_example} Examples generated with our framework. Examples are randomly sampled from top 10\%, 10\%-20\% and 20\%-30\% (left) and bottom 30\%-20\%, 20\%-10\% and 10\% (right) groups in SSIM.}
\end{figure}
\begin{figure}[t]
\centering
\vspace{-4mm}
\includegraphics[width=1.0\textwidth]{fig6analysis.pdf}
\vspace{-6mm}
\caption{\label{fig:insvector} A visualization of instruction vector for ``move,'' ``expand'' and ``compress''. The left image shows a cosine similarity map. Each element shows the cosine similarity between two of instructions vector. The order is sorted by verb-direction-number. The right image shows the enlarged part (red squared) of ``move" instructions of the left image.}
\vspace{-6mm}
\end{figure}
\section{Discussion}
We proposed an image-manipulation framework using natural language instructions to make image-generation systems more controllable.
The experimental results indicate that the embedded instructions clearly capture the concepts of the operations, except for the digit information.
Our framework worked well for a limited set of instruction types.
The results also indicate the potential of our framework to bridge embedded natural language instructions and analogies between two images.
Future work includes applying this framework to data that have a variety of images and manipulations.
\bibliographystyle{plain}
\section{Introduction}
Let $X$ be a smooth projective variety and let $L$ be an ample line bundle
on $X$. The concept of the \emph{local positivity} of $L$ was coined
by Demailly, who introduced in \cite{Dem90} the following invariants,
which in effect measure this local positivity.
\begin{definition}[Seshadri constant]
Let $X$ be a smooth projective variety and let $L$ be an ample line bundle
on $X$. Let $P\in X$ be a fixed point and let $f:\Bl_PX\to X$ be the blow
up of $X$ at $P$ with the exceptional divisor $E$. The real number
$$\eps(X;L,P)=\sup\left\{t\in\R:\;f^*L-tE\;\mbox{ is nef}\right\}$$
is the \emph{Seshadri constant} of $L$ at $P$.
\end{definition}
Thus $\eps(X;L,P)$ is the value of $t$ for which the ray $f^*L-tE$
hits the boundary of the nef cone on $\Bl_P X$.
It is natural to introduce a similar invariant, which gives the value
of $t$, where the ray $f^*L-tE$ hits the boundary of the pseudo-effective cone on $\Bl_P X$.
We consider this invariant (more precisely its reciprocal introduced in Definition \ref{def:Waldschmidt constant})
as a way to measure the \emph{local effectivity}
of $L$.
\begin{definition}[The $\mu$-invariant]
Let $X$ be a smooth projective variety and let $L$ be an ample line bundle
on $X$. Let $P\in X$ be a fixed point and let $f:\Bl_PX\to X$ be the blow
up of $X$ at $P$ with the exceptional divisor $E$. The real number
$$\mu(X;L,P)=\sup\left\{t\in\R:\;f^*L-tE\;\mbox{ is effective}\right\}$$
is the \emph{$\mu$-invariant} of $L$ at $P$.
\end{definition}
Both notions can be easily generalized replacing the point $P$ by an
arbitrary subscheme $Z\subset X$ and taking $f:\Bl_ZX\to X$ to be the blow up
of $X$ along the ideal sheaf $\cali_Z\subset\calo_X$. We denote the exceptional
divisor of $f$ again by $E$.
Whereas the $\mu$-invariant $\mu(X;L,Z)$ is not much present in the literature,
its reciprocal is the well-known Waldschmidt constant of $Z$.
We define first the \emph{initial degree of $Z$ with respect to $L$} as
$$\alpha(X;L,Z)=\min\left\{d:\; df^*L-E\;\mbox{ is effective}\right\}.$$
For an integer $m\geq 1$, let $mZ$ denote the subscheme defined by the symbolic power
$\cali_Z^{(m)}$ of $\cali_Z$, see \cite[Definition 9.3.4]{PAG}.
Then the asymptotic version of the initial degree is the following.
\begin{definition}[Waldschmidt constant]\label{def:Waldschmidt constant}
Let $X$ be a smooth projective variety and let $L$ be an ample line bundle on $X$.
Let $Z\subset X$ be a subscheme. The real number
$$\alphahat(X;L,Z)=\inf_{m\geq 1}\frac{\alpha(X;L,mZ)}{m}$$
is the \emph{Waldschmidt constant} of $Z$ with respect to $L$.
\end{definition}
\begin{remark}\rm
Since the numbers $\alpha(X;L,mZ)$ for $m\geq 1$ form a subadditive sequence, i.e.,
$$\alpha(X;L,(k+\ell)Z)\leq \alpha(X;L,kZ) + \alpha(X;L,\ell Z)$$
for all $k$ and $\ell$, by Fekete's lemma the infimum in Definition \ref{def:Waldschmidt constant}
is in fact a limit:
$$\alphahat(X;L,Z)=\lim_{m\to\infty}\frac{\alpha(X;L,mZ)}{m}.$$
\end{remark}
Waldschmidt constants appear in different guises in various branches
of mathematics. Apparently, they were first considered in complex analysis
in connection with estimates on the growth order of holomorphic functions,
see \cite{Wal77}. In this setup $X$ is simply $\C^n$ or $\P^n$. We prefer
the homogeneous approach here. Then the polarization $L$ is just the
hyperplane bundle $\calo_{\P^N}(1)$. Let $I$ be a non-zero, proper homogeneous ideal
in the polynomial ring $\C[x_0,\ldots,x_N]$. The \emph{initial degree}
of $I$ is
$$\alpha(\P^N;I)=\min\left\{d:\; (I)_d\neq 0\right\},$$
where $(I)_d$ denotes the degree $d$ part of $I$.
The Waldschmidt constant of $I \subset \C[x_0,\ldots,x_N]$ is then
$$\alphahat(\P^N;I)=\inf_{m\geq 1}\frac{\alpha(\P^N;I^{(m)})}{m},$$
which of course agrees with Definition \ref{def:Waldschmidt constant}.
In recent years there has been considerable interest in Waldschmidt constants in general,
see e.g. \cite{DHST14}, \cite{BCGHJNSvTT16}, \cite{MosHag16}, \cite{FGHLMS17}.
Special attention has been given to the following
conjecture, stated originally by Demailly in \cite[p. 101]{Dem82}.
It has recently been formulated by Harbourne and Huneke in \cite[Question 4.2.1]{HaHu13};
apparently, these authors were not aware of Demailly's work. We again use the projective version.
\begin{conjecture}[Demailly]\label{conj:Demailly}
Let $Z\subset\P^N$ be a finite set of points and let $I$ be the homogeneous
saturated ideal defining $Z$. Then for all $m\geq 1$
\begin{equation}\label{eq:Demailly Conjecture}
\alphahat(\P^N;I)\geq \frac{\alpha(\P^N;I^{(m)})+N-1}{m+N-1}.
\end{equation}
\end{conjecture}
For $m=1$ the Conjecture of Demailly reduces to the statement, best known as the Conjecture of Chudnovsky
(see \cite[Problem 1]{Chu81}), that the inequality
\begin{equation}\label{eq:Chudnovsky Conjecture}
\alphahat(\P^N;I)\geq \frac{\alpha(\P^N;I)+N-1}{N}
\end{equation}
holds for all ideals defining finite sets of points in $\P^N$.
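As a simple sanity example, let $Z$ be a single point, so that after a linear change of coordinates $I=(x_1,\ldots,x_N)$. Then symbolic and ordinary powers coincide and $\alpha(\P^N;I^{(m)})=m$ for all $m\geq 1$, hence $\alphahat(\P^N;I)=1$. Both \eqref{eq:Chudnovsky Conjecture} and \eqref{eq:Demailly Conjecture} then hold with equality:
$$1=\frac{1+N-1}{N}=\frac{m+N-1}{m+N-1}.$$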
Demailly's Conjecture for $\P^2$ has been proved by Esnault and Viehweg
using methods of complex projective geometry, see \cite[In\'egalit\'e A]{EsnVie83}.
In the present note, we provide lower bounds on Waldschmidt constants
of sets of general points in projective spaces and obtain as a corollary
a proof of Demailly's Conjecture in certain cases, see Theorem \ref{thm:DC ok for geq mN}.
The new tool developed in this note is the concept of Waldschmidt decomposition
introduced in Section \ref{sec:WD}. Our main results are
Theorem \ref{thm:main} which gives an iterative way to control
Waldschmidt constants of very general points and Proposition \ref{prop:distribution k s-k}
which is an effective criterion derived from Theorem \ref{thm:main}.
\paragraph{Convention and notation.}
We work throughout over the field $\C$ of complex numbers.
\section{Waldschmidt decomposition}\label{sec:WD}
The numerical meaning of the Waldschmidt constant $\alphahat(X;L,Z)$
is that if $D\in|kL|$ is an effective divisor vanishing along $Z$
with multiplicity $m$, then
$$\frac{k}{m}\geq \alphahat(X;L,Z).$$
This condition extends easily to effective $\R$-divisors. Indeed, let
$D=\sum\delta_iD_i$ be an effective $\R$-divisor with $D\equiv\delta L$
for some $\delta>0$. Then $\mult_ZD=\sum\delta_i\mult_ZD_i$ and
$$\frac{\delta}{\mult_ZD}\geq \alphahat(X;L,Z).$$
In this section we introduce a certain decomposition of a divisor,
depending on its numerical properties. We call it the Waldschmidt
decomposition as it is governed by Waldschmidt constants.
This decomposition can be viewed as a higher dimensional version
of the Bezout decomposition defined in \cite[Section 2.1]{DST16}.
Whereas it is possible to define it on arbitrary varieties, we restrict
our approach here to $\P^N$ and its linear subspaces. In this setting
the definition is most transparent.
\begin{definition}[Waldschmidt decomposition in $\P^N$]\label{def:Waldschmidt decomposition}
Let $H\cong \P^{N-1}$ be a hyperplane in $\P^N$ and let $Z$
be a subscheme in $H$. Let $D$ be a divisor of degree $d$ in $\P^N$.
The \emph{Waldschmidt decomposition of $D$ with respect to $H$ and $Z$} is the sum
of $\R$-divisors
$$D=D'+\lambda\cdot H$$
such that $\deg(D')=d-\lambda,$
\begin{equation}\label{eq:Waldschmidt decomposition cond}
\frac{d-\lambda}{\mult_ZD'}\geq\alphahat(H;\calo_H(1),Z)
\end{equation}
and $\lambda$ is the least non-negative real number such that \eqref{eq:Waldschmidt decomposition cond}
is satisfied.
\end{definition}
Of course, it may happen that $\lambda=0$ in Definition \ref{def:Waldschmidt decomposition}.
This number is positive if the restriction of $D$ to $H$ would produce a divisor on $H$
violating the inequality \eqref{eq:Waldschmidt decomposition cond}. Thus $\lambda$ is the least multiplicity with which
$H$ is numerically forced to be contained in $D$. It may well happen
that the divisor $D'$ still contains $H$ as a component.
\begin{remark}
The definition of the Waldschmidt decomposition with respect to $H$ can be extended to a finite number of hyperplanes $H_1,\ldots,H_s$.
\end{remark}
\section{The main result}
In this section we state our main result. The statement is motivated by the proof of the following lower
bound on Waldschmidt constants presented in \cite[Theorem 3]{DT16}.
\begin{theorem}[Lower bound on Waldschmidt constants]\label{thm:lower bound}
Let $I$ be the saturated ideal of a set of $r$ very general points in $\P^N$. Then
$$ \alphahat(\P^N;I)\geq\lfloor \sqrt[N]{r}\rfloor.$$
\end{theorem}
It is expected that for $r$ sufficiently large one actually has the equality $\alphahat(\P^N;I)=\sqrt[N]{r}$,
but this statement seems out of reach with present methods.
\begin{theorem}\label{thm:main}
Let $H_1,\ldots,H_s$ be $s\geq 2$ mutually distinct hyperplanes in $\P^N$.
Let $a_1,\ldots,a_s \geq 1$ be real numbers such that
\begin{equation}\label{eq:inequalities on a_i}
1 - \sum_{j=1}^{s-1} \frac{1}{a_j} > 0
\end{equation}
and
\begin{equation}\label{eq:inequality on a_s}
1 - \sum_{j=1}^{s} \frac{1}{a_j} \leq 0.\\
\end{equation}
Let
$$Z_i=\left\{P_{i,1},\ldots,P_{i,r_i}\right\}\subset H_i\setminus \bigcup_{j\neq i}H_j$$
be a set of $r_i$ points
such that
\begin{equation}\label{eq:bound on alpha H_i}
\alphahat(H_i;Z_i)\geq a_i
\end{equation}
and let $Z=\bigcup_{i=1}^s Z_i$.
Finally, let
\begin{equation}\label{eq:q}
q:=\left(1-\sum\limits_{j=1}^{s-1} \frac{1}{a_j}\right) \cdot a_s+s-1.
\end{equation}
Then
$$\alphahat(\P^N;Z)\geq q.$$
\end{theorem}
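The role of $q$ as a threshold can be illustrated numerically (this is not part of the proof): imposing equality in all $s$ conditions of the system \eqref{eq:Bezout condition for p} appearing in the proof below, with $p=q$, forces $\lambda_s=0$. The values of $a_i$ below are arbitrary admissible choices:

```python
import numpy as np

# hypothetical a_i with 1 - 1/3 - 1/3 > 0 and 1 - 1/3 - 1/3 - 1/2 <= 0
a = np.array([3.0, 3.0, 2.0])
s = len(a)
q = (1.0 - np.sum(1.0 / a[:-1])) * a[-1] + s - 1   # the quantity (eq:q)

# impose equality in all s conditions p - sum_i lambda_i = a_j (1 - lambda_j)
# with p = q; row j of the linear system: a_j*lambda_j - sum_i lambda_i = a_j - p
M = np.diag(a) - np.ones((s, s))
lam = np.linalg.solve(M, a - q)
print(q, lam)   # q = 8/3, lam = (1/3, 1/3, 0): lambda_s vanishes exactly at p = q
```
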
\proof
First observe that, for any $t=1,\dots,s-1$, by \eqref{eq:inequalities on a_i} we have
$$1 - \sum_{j=1}^{t} \frac{1}{a_j} > 0.$$
Multiplying by $a_t$, moving $a_t/a_t=1$ to the right hand side and rearranging, we get
$$a_t - \sum_{j=1}^{t-1} \frac{a_t}{a_j} > \left(1 - \sum_{j=1}^{t-1} \frac{1}{a_j} \right) + \sum_{j=1}^{t-1} \frac{1}{a_j}.$$
Dividing by $1-\sum\limits_{j=1}^{t-1} \frac{1}{a_j}$ we get
\begin{equation}\label{eq:incmore}
a_t > 1 + \frac{\sum\limits_{j=1}^{t-1} \frac{1}{a_j}}{1 - \sum\limits_{j=1}^{t-1}\frac{1}{a_j}}
\end{equation}
for $t \leq s-1$. Similarly, starting with \eqref{eq:inequality on a_s}, we get
\begin{equation}\label{eq:incmore2}
a_s \leq 1 + \frac{\sum\limits_{j=1}^{s-1} \frac{1}{a_j}}{1 - \sum\limits_{j=1}^{s-1}\frac{1}{a_j}}.
\end{equation}
We assume to the contrary that there is a divisor $D$ of degree $d$ in $\P^N$ vanishing
to order at least $m$ at all points of $Z$ such that
\begin{equation}\label{eq:p lower than q}
p:=\frac{d}{m} < q.
\end{equation}
It is convenient to work with the $\Q$-divisor $\Gamma=\frac1m D$, which is of degree $p$ and has multiplicities
at least $1$ at every point of $Z$.
\medskip
\noindent
\textbf{Step 0.}\\
Let $\Gamma=\Gamma'+\sum_{i=1}^s\lambda_i H_i$ be the Waldschmidt decomposition of $\Gamma$ with respect
to $H_1,\ldots,H_s$ and $Z_1,\ldots,Z_s$ respectively. The conditions \eqref{eq:Waldschmidt decomposition cond} and \eqref{eq:bound on alpha H_i}
imply then that
\begin{equation}\label{eq:Bezout condition for p}
\left\{\begin{array}{ccrcl}
(\ref{eq:Bezout condition for p}.1) && p-\sum\limits_{i=1}^s\lambda_i & \geq & a_1(1-\lambda_1)\\
(\ref{eq:Bezout condition for p}.2) && p-\sum\limits_{i=1}^s\lambda_i & \geq & a_2(1-\lambda_2)\\
\vdots && \vdots &&\\
(\ref{eq:Bezout condition for p}.s) && p-\sum\limits_{i=1}^s\lambda_i & \geq & a_s(1-\lambda_s)\\
\end{array}\right.
\end{equation}
We will show that the conditions in \eqref{eq:inequalities on a_i}, \eqref{eq:inequality on a_s}, \eqref{eq:p lower than q}
and \eqref{eq:Bezout condition for p} cannot hold simultaneously. This will provide the desired
contradiction to the existence of $D$. The idea is first to achieve equalities in \eqref{eq:Bezout condition for p}.
\medskip
\noindent
\textbf{Step 1.}\\
Our first claim is that there exists $\lambda_1'\leq \lambda_1$ such that
\begin{equation}\label{eq:Bezout condition for p 1}
\left\{\begin{array}{ccrcl}
(\ref{eq:Bezout condition for p 1}.1) && p-\lambda_1'-\sum\limits_{i=2}^s\lambda_i & = & a_1(1-\lambda_1')\\
(\ref{eq:Bezout condition for p 1}.2) && p-\lambda_1'-\sum\limits_{i=2}^s\lambda_i & \geq & a_2(1-\lambda_2)\\
\vdots &&\\
(\ref{eq:Bezout condition for p 1}.s) && p-\lambda_1'-\sum\limits_{i=2}^s\lambda_i & \geq & a_s(1-\lambda_s)\\
\end{array}\right.
\end{equation}
Indeed, we have
$$p-\lambda_1-\sum_{i=2}^s\lambda_i \geq a_1(1-\lambda_1)$$
from (\ref{eq:Bezout condition for p}.1).
Decreasing $\lambda_1$ by $\eps$, the left hand side increases
by $\eps$ as well, whereas the right hand side increases
by $a_1\cdot\eps$. Since $a_1>1$ by \eqref{eq:inequalities on a_i},
there must exist $\eps\geq 0$ such that
$$p-(\lambda_1-\eps)-\sum\limits_{i=2}^s\lambda_i \;=\; a_1(1-(\lambda_1-\eps)).$$
We put $\lambda_1'=\lambda_1-\eps$. Note also that decreasing $\lambda_1$
preserves the inequalities with indices $j=2,\ldots,s$ in \eqref{eq:Bezout condition for p}
because the left hand sides of all these inequalities increase, while
the right hand sides remain unaltered.
To simplify the notation, we drop the prime and denote the new value again by $\lambda_1$.
\medskip
\noindent
\textbf{Step t (the induction step).}\\
We now assume that we have found new $\lambda_1,\dots,\lambda_{t-1}$ such that the following holds:
\begin{equation}\label{eq:step2a}
\left\{\begin{array}{rcl}
p-\sum\limits_{i=1}^{t-1} \lambda_i-\lambda_t-\sum\limits_{i=t+1}^s\lambda_i & = & a_1(1-\lambda_1)\\
\vdots &&\\
p-\sum\limits_{i=1}^{t-1} \lambda_i-\lambda_t-\sum\limits_{i=t+1}^s\lambda_i & = & a_{t-1}(1-\lambda_{t-1})\\
p-\sum\limits_{i=1}^{t-1} \lambda_i-\lambda_t-\sum\limits_{i=t+1}^s\lambda_i & \geq & a_{t}(1-\lambda_{t})\\
\vdots &&\\
p-\sum\limits_{i=1}^{t-1} \lambda_i-\lambda_t-\sum\limits_{i=t+1}^s\lambda_i & \geq & a_{s}(1-\lambda_{s})\\
\end{array}\right.
\end{equation}
Our aim is to push this one step further, to the situation where (for new $\lambda_1,\dots,\lambda_t$) we have
at least $t$ equalities.
Let
$$C := p - \lambda_t - \sum_{i=t+1}^{s} \lambda_i.$$
We solve the following system of equations with respect to $\lambda_1,\dots,\lambda_{t-1}$,
treating $\lambda_t$ as a parameter.
\begin{equation}\label{eq:step2b}
\left\{\begin{array}{rcl}
C-\sum\limits_{i=1}^{t-1} \lambda_i & = & a_1(1-\lambda_1)\\
\vdots &&\\
C-\sum\limits_{i=1}^{t-1} \lambda_i & = & a_{t-1}(1-\lambda_{t-1})\\
\end{array}\right.
\end{equation}
Let $\lambda_1',\ldots,\lambda_{t-1}'$ be the unique (by Lemma \ref{solutionlem}) solution to that system.
Again, by Lemma \ref{solutionlem},
\begin{equation}\label{eq:step2c}
\sum\limits_{i=1}^{t-1} \lambda_{i}' = \frac{ C\left(\sum\limits_{j=1}^{t-1} \frac{1}{a_j} \right) - (t-1) }{ \left(\sum\limits_{j=1}^{t-1} \frac{1}{a_j} \right) - 1}.
\end{equation}
Since $\lambda_t$ is hidden in $C$ (as $-\lambda_t$), decreasing $\lambda_t$ by $\varepsilon$ changes
$\sum\limits_{i=1}^{t-1} \lambda_i'$ by
$$ \varepsilon \left( \frac{\sum\limits_{j=1}^{t-1} \frac{1}{a_j}}{\sum\limits_{j=1}^{t-1} \frac{1}{a_j}-1} \right),$$
which is negative, as $\sum\limits_{j=1}^{t-1} \frac{1}{a_j}<1$ by \eqref{eq:inequalities on a_i}.
Thus the left hand side of the inequality \eqref{eq:step2a}.t increases by
$$\varepsilon \left(1 + \frac{\sum\limits_{j=1}^{t-1} \frac{1}{a_j}}{1 - \sum\limits_{j=1}^{t-1}\frac{1}{a_j}}\right),$$
which by \eqref{eq:incmore} is strictly less than $\varepsilon a_t$, the corresponding increase of the right hand side. In effect, decreasing $\lambda_t$ and solving \eqref{eq:step2b} for $\lambda_1,\dots,\lambda_{t-1}$ gives a new sequence $\lambda_1',\dots,\lambda_t'$, with
\begin{itemize}
\item
preserved equalities $\eqref{eq:step2a}.1$ --- $\eqref{eq:step2a}.(t-1)$,
\item
left hand side of $\eqref{eq:step2a}.t$ increasing more slowly than the right hand side, so that equality can eventually be reached,
\item
left hand sides of $\eqref{eq:step2a}.(t+1)$ --- $\eqref{eq:step2a}.s$ increasing, while right hand sides remain unaltered.
\end{itemize}
As in Step 1, this suffices to obtain new $\lambda_1,\dots,\lambda_t$ with one more equality in \eqref{eq:step2a}.
\medskip
\noindent
\textbf{Step s (the final step).}\\
Assume that we have now $s-1$ equalities in \eqref{eq:step2a}, with the last inequality not necessarily being an equality. We begin exactly as in the previous step. The only difference is that, by \eqref{eq:incmore2}, decreasing $\lambda_s$
forces the left hand side of the last inequality in \eqref{eq:step2a} to increase \emph{faster} than the right hand side.
Thus we may decrease $\lambda_s$ (altering $\lambda_1,\dots,\lambda_{s-1}$ to preserve the equalities) all the way to zero to obtain
\begin{equation}\label{eq:Bezout condition for p s}
\left\{\begin{array}{ccrcl}
(\ref{eq:Bezout condition for p s}.1) && p-\sum\limits_{i=1}^{s-1}\lambda_i & = & a_1(1-\lambda_1)\\
(\ref{eq:Bezout condition for p s}.2) && p-\sum\limits_{i=1}^{s-1}\lambda_i & = & a_2(1-\lambda_2)\\
\vdots &&\vdots &&\\
(\ref{eq:Bezout condition for p s}.(s-1)) && p-\sum\limits_{i=1}^{s-1}\lambda_i & = & a_{s-1}(1-\lambda_{s-1})\\
(\ref{eq:Bezout condition for p s}.s) && p-\sum\limits_{i=1}^{s-1}\lambda_i & \geq & a_s\\
\end{array}\right.
\end{equation}
It follows from Lemma \ref{solutionlem} that now
\begin{equation}\label{eq:sum of lambda solutions}
\sum\limits_{i=1}^{s-1}\lambda_i=\frac{pR-(s-1)}{R-1},
\end{equation}
where $R=\sum\limits_{j=1}^{s-1} 1/a_j$.
From \eqref{eq:q} we have
\begin{equation}\label{eq:loc3}
q=(1-R)a_s+(s-1).
\end{equation}
Taking (\ref{eq:Bezout condition for p s}.s) into account we get
$$q\leq (1-R)\left(p-\frac{pR-(s-1)}{R-1}\right)+(s-1)=p-Rp+pR-(s-1)+(s-1)=p.$$
This, however, clearly contradicts \eqref{eq:p lower than q}, and we are done.
\hspace*{\fill}\frame{\rule[0pt]{0pt}{6pt}\rule[0pt]{6pt}{0pt}}\endtrivlist
\section{Applications}
We will focus on Waldschmidt constants of sets of very general points in $\P^N$. The notation
$$\alphahat(\P^N;r)$$
denotes the Waldschmidt constant $\alphahat(\P^N;I)$ of a radical ideal $I$ of $r$ very general points in $\P^N$.
\begin{theorem}
\label{stepback}
Let $N \geq 2$ and let $k \geq 1$ be integers. Assume that for some integers
$r_1,\dots,r_{k+1}$ and rational numbers $a_1,\dots,a_{k+1}$ we have
$$\alphahat(\P^{N-1};r_j) \geq a_j \text{ for } j=1,\dots,k+1,$$
$$k \leq a_j \leq k+1 \text{ for } j=1,\dots,k, \quad a_1 > k, \quad a_{k+1} \leq k+1.$$
Then
$$\alphahat(\P^N;r_1+\ldots+r_{k+1}) \geq \left( 1-\sum\limits_{j=1}^{k} \frac{1}{a_j} \right) a_{k+1}+k.$$
\end{theorem}
\proof
We combine Theorem \ref{thm:main} and the specialization. We take hyperplanes $H_1,\dots,H_{k+1}$ and specialize
$r_j$ points to a set $Z_j \subset H_j$ for $j=1,\dots,k+1$, so that the points in $Z_j$ are in very general position on $H_j$. Hence
$$\alphahat(H_j;Z_j) = \alphahat(\P^{N-1};r_j).$$
To check that (\ref{eq:inequalities on a_i}) is satisfied, we compute
$$\sum_{j=1}^{k} \frac{1}{a_j} < \sum_{j=1}^{k} \frac{1}{k} = 1$$
since $a_j \geq k$ and $a_1 > k$.
Similarly we check that (\ref{eq:inequality on a_s}) holds,
$$\sum_{j=1}^{k+1} \frac{1}{a_j} \geq \sum_{j=1}^{k+1} \frac{1}{k+1} = 1.$$
The inequalities (\ref{eq:bound on alpha H_i}) are satisfied by assumption. Thus the Waldschmidt constant of the
specialized points is bounded as desired, hence for points in very general position the bound also holds.
\hspace*{\fill}\frame{\rule[0pt]{0pt}{6pt}\rule[0pt]{6pt}{0pt}}\endtrivlist
\begin{example}
We bound from below $\alphahat(\P^3;20)$. Let $k=2$ (in fact, it is very easy to find the suitable $k$ in general; it must satisfy
$k^N < r < (k+1)^N$, where $r$ is the number of points in $\P^N$). Then we look for integers $r_1$, $r_2$ and $r_3$ and rational numbers
$a_1$, $a_2$, $a_3$ satisfying the assumptions of Theorem \ref{stepback}. Since we want to bound $\alphahat(\P^3;20)$, we must have
$$r_1 + r_2 + r_3 \leq 20.$$
Since $\alphahat(\P^2;r_1) \geq a_1 > 2$, we see that $r_1 > 4$. Similarly $r_2 \geq 4$, $r_3 \geq 1$.
Moreover, from $a_j \leq 3$ we see that we may restrict ourselves to the case when $r_j \leq 9$. Since $\alphahat(\P^2;r)$
is known for $r \leq 9$, it suffices to search through all the possibilities $(r_1,r_2,r_3)$, compute $(a_1,a_2,a_3)$ for each of them
and get the bound. This can be done by hand in principle. We have used a simple computer program to do the tedious
calculations for us. As a result, for
$$r_1=8, \qquad r_2=8, \qquad r_3=4$$
we get
$$a_1 = 48/17, \qquad a_2 = 48/17, \qquad a_3=2.$$
Thus, from the formula, $\alphahat(\P^3;20) \geq 31/12 \simeq 2.583$. Note that the upper bound is $\sqrt[3]{20} \simeq 2.714$.
\end{example}
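The arithmetic in the example above can be double-checked with exact rational numbers; the following sketch (plain Python, a stand-in for the authors' Singular script, which is not reproduced here) evaluates the bound of Theorem \ref{stepback} for the data $a_1=a_2=48/17$, $a_3=2$:

```python
from fractions import Fraction

# Data from the example: k = 2, a_1 = a_2 = 48/17, a_3 = 2
k = 2
a = [Fraction(48, 17), Fraction(48, 17), Fraction(2)]

# Bound of Theorem "stepback": (1 - sum_{j=1}^{k} 1/a_j) * a_{k+1} + k
bound = (1 - sum(1 / a_j for a_j in a[:k])) * a[k] + k
print(bound)  # 31/12
```

Exact arithmetic avoids the rounding issues a floating-point evaluation could introduce.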
\subsection{A recursive approach}
Now we study a much harder example which allows us to discuss some algorithmic issues.
\begin{example}
We want to bound $\alphahat(\P^4;180)$. Since now $N=4$, we get immediately $k=3$, since then $k^N < 180 < (k+1)^N$.
We are interested in sequences of integers
$$(r_1,r_2,r_3,r_4) \text{ with } r_1+r_2+r_3+r_4 \leq 180.$$
As before, we have additional constraints. Since $\alphahat(\P^{N-1};r_j) \geq a_j \geq k$, we get (in general) that
$r_j \geq k^{N-1}$. In our situation this gives $r_2,r_3 \geq 27$, $r_1 \geq 28$, $r_4 \geq 1$. It is reasonable to restrict
to $r_j \leq (k+1)^{N-1}$, so in our case, $r_j \leq 64$.
The first problem we encounter here is the number of sequences $(r_1,r_2,r_3,r_4)$ with above properties. But this can be
(in the case studied here, $N=4$, $r=180$) easily managed by a suitable computer program.
What requires much more attention is
coming up with good bounds $a_j$ for $\alphahat(\P^3;r_j)$. These constants are not known, except for several cases: $2$ for $\alphahat(\P^3;8)$,
$3$ for $\alphahat(\P^3;27)$ and $4$ for $\alphahat(\P^3;64)$. So the first approach is to use only numbers $r_j$
of the form $\ell^{N-1}$, which is weak, but manageable (we will address this later, in Proposition \ref{prop:distribution k s-k}).
Taking
$$r_1 = 64, \qquad r_2 = 64, \qquad r_3 = 27, \qquad r_4 = 8$$
we get
$$a_1 = 4, \quad a_2 = 4, \quad a_3 = 3, \quad a_4 = 2, \quad \text{thus} \quad \alphahat(\P^4;180) \geq \frac{10}{3} \simeq 3.333.$$
Using again a computer program we can find, as in the previous example, all necessary bounds for $\alphahat(\P^{N-1};\widetilde{r})$
for $\widetilde{r} = 1,\dots,(k+1)^{N-1}$. In our case it requires $64$ computations to find a bound in $\P^3$. Each of them requires
again looking for sequences satisfying certain properties and then going down to $\P^2$. In effect, the run time grows exponentially
when $N$ is increased. For $\alphahat(\P^2;\widetilde{r})$, however, a much better idea is to use known best bounds, e.g.,
\cite[Theorem 2.2 and discussion thereafter]{HarRoe09}.
Coming back to our case, with the help of a computer program, which ran for several minutes, all possibilities were scanned
and the best results were found taking
$$r_1 = 52, \qquad r_2 = 52, \qquad r_3 = 49, \qquad r_4 = 27.$$
Again with a computer we obtain
$$a_1 = a_2 = \frac{17457}{4816}, \quad a_3 = \frac{63495}{17974}, \quad a_4 = 3, \quad \text{thus} \quad \alphahat(\P^4;180) \geq 3.495.$$
In fact, the last number is exactly $430502824/123159135$. Observe that the upper bound is $\sqrt[4]{180} \simeq 3.663$.
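The exact value quoted above can be verified with rational arithmetic; this small check (plain Python, not the authors' Singular code) plugs the stated $a_j$ into the formula of Theorem \ref{stepback}:

```python
from fractions import Fraction

k = 3
a = [Fraction(17457, 4816), Fraction(17457, 4816),
     Fraction(63495, 17974), Fraction(3)]

# (1 - sum_{j=1}^{k} 1/a_j) * a_{k+1} + k
bound = (1 - sum(1 / x for x in a[:k])) * a[k] + k
print(bound)         # 430502824/123159135
print(float(bound))  # approximately 3.4955
```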
From the above considerations we conclude that checking all partitions of $r$ into $k+1$ numbers would
take too much time for bigger $N$. To make this faster and manageable even in the case, e.g., $N \geq 100$ we must drastically reduce
the number of subcases. The radical idea is to consider only one distribution, and go down to $\P^{N-1}$ with only one case.
Observe that we look for the numbers $a_1,\dots,a_{k+1}$ such that
$$\left( 1 - \sum_{j=1}^{k} \frac{1}{a_j} \right) a_{k+1}$$
is as big as possible. The numbers $a_j$ are good bounds for $\alphahat(\P^{N-1};r_j)$, so we may as well assume that
they are close to $\sqrt[N-1]{r_j}$, or even pretend that they are equal.
We consider first the expression
\begin{equation}\label{fracterm}
\sum_{j=1}^{k} \frac{1}{\sqrt[N-1]{r_j}}.
\end{equation}
For all partitions $r_1+\dots+r_k = const$, we want \eqref{fracterm} to be as small as possible. Without
going into details, this forces all numbers $r_j$ to be nearly equal. Therefore we want to maximize
$$\left(1-\frac{k}{\sqrt[N-1]{r_1}}\right) \sqrt[N-1]{r_{k+1}}$$
under the condition
$$kr_1 + r_{k+1} = r,$$
or, which is much nicer to compute, to maximize
$$\left(1-\frac{k}{a_1}\right) a_{k+1}$$
under the condition
$$ka_1^{N-1}+a_{k+1}^{N-1} \leq r.$$
Since we want to go down with only one case, we force $a_{k+1}$ to be an integer. Now the problem is to distribute
the points between $r_1$ and $r_{k+1}$. It is a matter of an easy calculation to check which integer $a_{k+1}$, with $r_1=\lfloor (r-a_{k+1}^{N-1})/k \rfloor$, gives the best result.
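The heuristic just described can be sketched as follows (plain Python; a simplified stand-in for the procedure \verb"bound", and the search range $1\leq a_{k+1}\leq k+1$ is an assumption based on $a_{k+1}\leq k+1$):

```python
# Heuristic: pick an integer a_{k+1}, put r_{k+1} = a_{k+1}^(N-1) and
# r_1 = floor((r - r_{k+1})/k), then maximize (1 - k/a_1) a_{k+1}
# with a_1 approximated by r_1^(1/(N-1)).
N, r, k = 4, 180, 3

def score(a_last):
    r_last = a_last ** (N - 1)
    r1 = (r - r_last) // k
    if r1 <= 0:
        return float("-inf"), 0
    a1 = r1 ** (1.0 / (N - 1))
    return (1 - k / a1) * a_last, r1

best = max(range(1, k + 2), key=lambda a_last: score(a_last)[0])
print(best, score(best)[1])  # 3 51, i.e. r_1 = r_2 = r_3 = 51, r_4 = 27
```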
In our case, $N=4$ and $r=180$, the following distribution was found:
$$r_1 = 51, \qquad r_2 = 51, \qquad r_3 = 51, \qquad r_4 = 27.$$
Thus we need a lower bound for $\alphahat(\P^3;51)$. Again, we use the above heuristic method to find the distribution
$$r_1' = 14, \qquad r_2' = 14, \qquad r_3' = 14, \qquad r_4' = 9.$$
We take the known bound $\alphahat(\P^2;14) \geq 86/23$. Thus
$$\alphahat(\P^3;51) \geq \frac{309}{86}, \qquad \alphahat(\P^4;180) \geq \frac{360}{103} \simeq 3.495.$$
Our previous best bound is better only by $\simeq 0.0003549$ but the run time of the algorithm outlined here is considerably shorter.
A less radical, but better approach is to consider all distributions $kr_1+r_{k+1} \leq r$ with $r_{k+1}$ being a pure $(N-1)$st power. The implementation of these two approaches in Singular \cite{Singular} can be found in the file \verb"boundforWC", \cite{boundforWC}.
Running \verb"bound" works faster (for big $N$), but \verb"boundmore" gives better bounds.
\end{example}
\subsection{An easy way to distribute points on hyperplanes}
We pass now to some general effective lower bounds.
\begin{proposition}\label{prop:distribution k s-k}
Let $k$ be a positive integer and let $s$ be an integer in the range $1\leq s\leq k$. Let
$$r \geq s(k+1)^{N-1}+(k+1-s)k^{N-1}.$$
Then
$$\alphahat(\P^N;r) \geq k+\frac{s}{k+1}.$$
\end{proposition}
\proof
This is an easy consequence of Theorem \ref{stepback}. Namely,
taking
$$r_1 = \ldots = r_s = (k+1)^{N-1}, \qquad r_{s+1}= \ldots = r_{k+1} = k^{N-1},$$
we get by Theorem \ref{thm:lower bound}
$$a_1 = \ldots = a_s = k+1, \qquad a_{s+1} = \ldots = a_{k+1} = k.$$
Consequently,
$$\alphahat(\P^N;r) \geq \left(1-\frac{s}{k+1}-\frac{k-s}{k}\right)k+k = s-\frac{sk}{k+1}+k=k+\frac{s}{k+1}.$$
\hspace*{\fill}\frame{\rule[0pt]{0pt}{6pt}\rule[0pt]{6pt}{0pt}}\endtrivlist
\begin{example}
Without the above proposition, the generally available lower bound for $\alphahat(\P^5;1024)$ is $4$.
One needs at least $r \geq 3125$ points to pass to the better bound $\alphahat(\P^5;r) \geq 5$. But
with Proposition \ref{prop:distribution k s-k} we can take
$s=1$, $k=4$ to get
$$\alphahat(\P^5;1649) \geq 4 + \frac{1}{5}.$$
Similarly, we need only $2018$ points to get $4 + \frac{2}{5}$, only $2387$ to get $4 + \frac{3}{5}$ and only $2756$
to get $4 + \frac{4}{5}$.
\end{example}
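The point counts in the example follow directly from the threshold in Proposition \ref{prop:distribution k s-k}; a quick check (plain Python, for illustration only):

```python
# Thresholds s*(k+1)^(N-1) + (k+1-s)*k^(N-1) for N = 5, k = 4,
# each giving the lower bound k + s/(k+1) = 4 + s/5.
N, k = 5, 4
thresholds = [s * (k + 1) ** (N - 1) + (k + 1 - s) * k ** (N - 1)
              for s in range(1, k + 1)]
print(thresholds)  # [1649, 2018, 2387, 2756]
```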
\begin{proposition}\label{quotb}
Let $r \leq (k+1)^N$. Then
$$\alphahat(\P^N;r) \geq \frac{r}{(k+1)^{N-1}}.$$
\end{proposition}
\proof
We will use induction. Consider two cases.
\medskip
\textbf{Case $r \leq k(k+1)^{N-1}$.}\\
If $r \leq k^N$, then by induction on $k$ we get
$$\alphahat(\P^N;r) \geq \frac{r}{k^{N-1}} \geq \frac{r}{(k+1)^{N-1}}.$$
If $k^N < r \leq k(k+1)^{N-1}$, then
$$\alphahat(\P^N;r) \geq k \geq \frac{r}{(k+1)^{N-1}},$$
where the first inequality follows from Theorem \ref{thm:lower bound}.
\medskip
\textbf{Case $r > k(k+1)^{N-1}$.}\\
Take
$$r_1 = \ldots = r_k = (k+1)^{N-1}, \qquad r_{k+1} = r-k(k+1)^{N-1}.$$
Observe that
$$r-k(k+1)^{N-1} \leq (k+1)^N-k(k+1)^{N-1}=(k+1)^{N-1},$$
thus by Theorem \ref{thm:lower bound} and induction (on $N$) we get
$$a_1 = \ldots = a_k = k+1, \qquad a_{k+1} = \frac{r-k(k+1)^{N-1}}{(k+1)^{N-2}}.$$
By Theorem \ref{stepback} we get
$$\alphahat(\P^N;r) \geq \left(1-\frac{k}{k+1}\right)\frac{r-k(k+1)^{N-1}}{(k+1)^{N-2}}+k = \frac{r}{(k+1)^{N-1}}.$$
\hspace*{\fill}\frame{\rule[0pt]{0pt}{6pt}\rule[0pt]{6pt}{0pt}}\endtrivlist
\begin{example}
We can now give very accurate bounds for $\alphahat(\P^5;r)$ for $r$ close to $3125$.
Since $3125 = 5^5$, we have
$$\alphahat(\P^5;3124) \geq 5 - \frac{1}{625}, \qquad \alphahat(\P^5;3123) \geq 5 - \frac{2}{625}$$
and so on.
\end{example}
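These bounds are immediate from Proposition \ref{quotb}; a one-line check in exact arithmetic (plain Python, for illustration):

```python
from fractions import Fraction

# Proposition "quotb": for r <= (k+1)^N, alphahat(P^N; r) >= r/(k+1)^(N-1).
N, k = 5, 4
for r in (3124, 3123):
    print(Fraction(r, (k + 1) ** (N - 1)))  # 3124/625, then 3123/625
```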
\subsection{Discussion on the accuracy}
By Theorem \ref{thm:lower bound} it is obvious that we can locate every $\alphahat(\P^N;r)$ in an interval of
length at most $1$. It is interesting to know what the difference is between the upper bound (which is conjectured
to be the actual value for $r \geq 2^N$) and the lower bound obtained by our algorithm. In Figure \ref{figp3}
we present the upper and lower bounds for $r=1,\dots,125$ points in $\P^3$.
\begin{figure}[h]
\caption{Upper and lower bounds for $\alphahat(\P^3;r)$}
\label{figp3}
\centering
\includegraphics[scale=0.3]{wykres.png}
\end{figure}
In Table \ref{tab:accuracy} we present the maximal difference $\delta$ between the lower and upper bound.
\renewcommand*{\arraystretch}{1.2}
\begin{table}[h]
$$
\begin{array}{c|cc|cc|cc|c|c}
N & 3 & 3 & 4 & 4 & 5 & 5 & 6 & 7 \\ \hline
r & 8-125 & 125-1000 & 16-256 & 256-1296 & 32-243 & 243-1024 & 64-729 & 128-2187 \\
\delta & 0.289 & 0.186 & 0.295 & 0.259 & 0.305 & 0.277 & 0.305 & 0.301 \\
\end{array}
$$
\caption{Maximal differences for lower and upper bounds in various intervals of the number of points in projective spaces of low dimensions}
\label{tab:accuracy}
\end{table}
\subsection{Towards Demailly's Conjecture}
As an important consequence of Theorem \ref{thm:main} we obtain the following result.
\begin{theorem}\label{thm:DC ok for geq mN}
Demailly's Conjecture \ref{conj:Demailly} holds for $r\geq m^N$ very general points in $\P^N$.
\end{theorem}
\proof
The Main Theorem in \cite{MSS17} states that Conjecture \ref{conj:Demailly}
holds for $r\geq (m+1)^N$ very general points in $\P^N$. Hence it is enough
to deal with sets $Z$ containing $r$ very general points with $r$ in the range $m^N\leq r<(m+1)^N$.
The general yoga of our proof is the following: We use lower bounds on the Waldschmidt constant of $Z$
provided either by Theorem \ref{thm:lower bound} or by Proposition \ref{prop:distribution k s-k}
and check, by naive conditions count, that $\alpha(mZ)$ is small enough for the inequality
\eqref{eq:Chudnovsky Conjecture} to be satisfied.\\
\textbf{Case 1.} For $N\geq 4$ and $m\geq 3$, it follows from Lemma \ref{lem:inequality}
that there exists a hypersurface in $\P^N$ of degree $m(m+N-1)-N+1$
vanishing to order at least $m$ at all points of $Z$. Since in any case it is $\alphahat(Z)\geq m$
by Theorem \ref{thm:lower bound}, it follows that
$$\alphahat(Z)\geq m\geq \frac{\alpha(mZ)+N-1}{m+N-1}$$
and we are done in this case.\\
\textbf{Case 2.} Let $N=3$ and let $2\leq m=2n+\eps$ with $\eps\in\left\{0,1\right\}$.
Assume that
$$m^3\leq r\leq (n+1+\eps)(m+1)^2+(n-1)m^2.$$
It follows again from the naive conditions count that there exists a surface in $\P^3$
of degree $m^2+2m-2$ passing with multiplicity at least $m$ through all points in $Z$.
Hence $\alpha(mZ)\leq m(m+2)-2$ and thus
$$\frac{\alpha(mZ)+2}{m+2}\leq m\leq \alphahat(Z),$$
which is exactly \eqref{eq:Demailly Conjecture}.\\
If the number of points is in the range
$$(n+1+\eps)(m+1)^2+(n-1)m^2\leq r <(m+1)^3,$$
then Proposition \ref{prop:distribution k s-k} implies that
$$\alphahat(Z)\geq m+1-\frac{m+1-n-1-\eps}{m+1}\geq m+\frac12.$$
If $m=2n$ is even, then there exists a surface of degree $4n^2+5n-1$
vanishing at all points of $Z$ to order at least $m$. Indeed, this follows
from the inequality
$$\binom{4n^2+5n+2}{3}\geq (m+1)^3\binom{m+2}{3},$$
which is equivalent to
$$8n^5+(62/3)n^4+(39/2)n^3+(47/6)n^2+n\geq 0.$$
Hence $\alpha(mZ)\leq (m+\frac12)(m+2)-2$, which gives
$$\alphahat(Z)\geq m+\frac{1}{2}\geq \frac{\alpha(mZ)+2}{m+2},$$
hence \eqref{eq:Demailly Conjecture} holds.\\
The case of $m=2n+1$ odd is similar and we leave it as a simple exercise.\\
\textbf{Case 3.} Let $m=2$ and let $Z$ be a set of $r$ very general points
in $\P^N$ with $2^N\leq r<3^N$. In any case it is $\alphahat(Z)\geq 2$
by Theorem \ref{thm:lower bound}. For $N\geq 7$ this bound suffices
to conclude Conjecture \ref{conj:Demailly}. Indeed, since
$$\binom{2N+3}{N}\geq 3^N(N+1)\;\;\mbox{ holds for }N\geq 7,$$
there is a hypersurface of degree $N+3$ singular in points of $Z$.
Hence $\alpha(2Z)\leq N+3$ and this implies
$$\alphahat(Z)\geq 2\geq\frac{\alpha(2Z)+N-1}{N+1}.$$
For $4\leq N\leq 6$ we split the argument in two cases:
\begin{itemize}
\item[a)] $r\leq 2\cdot 3^{N-1}+2^{N-1}$ and
\item[b)] $r\geq 2\cdot 3^{N-1}+2^{N-1}$.
\end{itemize}
In case a) the previous argument works. There is a hypersurface
of degree $N+3$ in $\P^N$ singular in points of $Z$.
In case b) we
apply Proposition \ref{prop:distribution k s-k}
with $s=2$ and $k=2$. It follows then that $\alphahat(Z)\geq 8/3$.
By elementary conditions count, there is a hypersurface of degree $2N+1$
singular at $Z$, so that $\alpha(2Z)\leq 2N+1$. Hence
$$\alphahat(Z)\geq \frac83\geq\frac{2N+1+N-1}{N+1}$$
holds as $N\leq 7$. \\
\textbf{Case 4.} Finally we are left with $m=1$ but this has been proved
for all $N$ in \cite{DT16} and independently in \cite{FMX16}.
\hspace*{\fill}\frame{\rule[0pt]{0pt}{6pt}\rule[0pt]{6pt}{0pt}}\endtrivlist
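The condition counts invoked in Case 3 can be verified mechanically; the following sketch (plain Python, an illustration rather than part of the proof) checks the two binomial inequalities used there:

```python
from fractions import Fraction
from math import comb

# N >= 7: binom(2N+3, N) >= 3^N (N+1) gives a hypersurface of degree
# N+3 singular at fewer than 3^N very general points.
assert all(comb(2 * N + 3, N) >= 3 ** N * (N + 1) for N in range(7, 16))

# 4 <= N <= 6, case b): a hypersurface of degree 2N+1 singular at Z
# exists, and 8/3 >= (2N+1+N-1)/(N+1) = 3N/(N+1) closes the argument.
for N in (4, 5, 6):
    assert comb(3 * N + 1, N) >= 3 ** N * (N + 1)
    assert Fraction(8, 3) >= Fraction(3 * N, N + 1)
print("condition counts verified")
```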
\begin{remark}
Using similar methods one can easily check if the bound for $\alphahat(\P^N;r)$ is
sufficient to prove the Demailly Conjecture for a given $N$, $m$ and $r$. We wrote
an appropriate procedure (\verb"Demailly" in \verb"boundforWC") and checked that, for example,
the Conjecture holds for all $N \leq 3$, $m \leq 3$ and any number of very general points.
\end{remark}
\section{Auxiliary results}
\begin{lemma}\label{solutionlem}
Assume that positive real numbers $a_1,\dots,a_{t-1}$ are given, satisfying
$$1-\sum\limits_{j=1}^{t-1} \frac{1}{a_j} \neq 0.$$
Let $C$ be a real number. Consider the following system of linear equations:
$$\left\{
\begin{array}{rcl}
C - \sum\limits_{i=1}^{t-1} x_i & = & a_k(1-x_k) \text{ for } k=1,\dots,t-1 \\
y & = & \sum\limits_{i=1}^{t-1} x_i.
\end{array}
\right.$$
Then this system has a unique solution $(x_1,\dots,x_{t-1},y)$. In particular
$$y = \frac{C(\sum\limits_{j=1}^{t-1} \frac{1}{a_j})-(t-1)}{\sum\limits_{j=1}^{t-1} \frac{1}{a_j} - 1}.$$
\end{lemma}
\proof
We look for the (unique) solution for $y$, thus we use Cramer's rule. The matrix of this system (after some reorganisation:
the variable $y$ is placed in the first column, then $x_1,\dots,x_{t-1}$, then the column of constant terms) is equal to
$$M := \left[\begin{array}{ccccccc}
0 & 1-a_1 & 1 & 1 & \dots & 1 & C-a_1 \\
0 & 1 & 1-a_2 & 1 & \dots & 1 & C-a_2 \\
0 & 1 & 1 & 1-a_3 & \dots & 1 & C-a_3 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & 1 & 1 & 1 & \dots & 1-a_{t-1} & C-a_{t-1} \\
-1 & 1 & 1 & 1 & \dots & 1 & 0.
\end{array}
\right].$$
We denote the columns of $M$ by $[A_0 A_1 \dots A_{t-1} B]$.
To compute the determinant of the main matrix $[A_0 A_1 \dots A_{t-1}]$ we subtract the last row from the others, obtaining the matrix
with the first column and the last row filled with $1$ (except $-1$ in the bottom left corner), and with $-a_1,-a_2,\dots$ just above the diagonal.
Applying the Laplace expansion we compute this determinant to be equal to
$$D_1 = \left( \sum_{i=1}^{t-1} a_1\ldots \widehat{a_i} \ldots a_{t-1} \right) - a_1 \ldots a_{t-1} = a_1 \ldots a_{t-1} \cdot \left(\sum\limits_{j=1}^{t-1} \frac{1}{a_j} - 1 \right)$$
which is non-zero (by the assumption). Hence the solution is unique.
To compute the determinant of the matrix $[B A_1 A_2 \dots A_{t-1}]$ we ``kill'' all $1$'s using the last row, and then ``kill''
all the $a_i$'s in the first column using the other columns, obtaining the matrix
$$\left[\begin{array}{cccc}
C & -a_1 & & \\
\vdots & & \ddots & \\
C & & & -a_{t-1} \\
-(t-1) & 1 & \ldots & 1
\end{array}
\right].$$
By the Laplace rule, the determinant
$$D_2 = C \left( \sum\limits_{i=1}^{t-1} a_1 \ldots \widehat{a_i} \ldots a_{t-1} \right) - (t-1) a_1 \dots a_{t-1} =
a_1 \dots a_{t-1} \left( C \left( \sum_{j=1}^{t-1} \frac{1}{a_j} \right) - (t-1) \right).$$
By Cramer's rule, the claim follows.
\hspace*{\fill}\frame{\rule[0pt]{0pt}{6pt}\rule[0pt]{6pt}{0pt}}\endtrivlist
\begin{lemma}\label{lem:inequality}
For all $N\geq 4$, $m\geq 3$ there is
\begin{equation}\label{eq: combinatorial}
\binom{m(m+N-1)+1}{N}>\binom{m+N-1}{N}(m+1)^N.
\end{equation}
\end{lemma}
\proof
With $m\geq 3$ fixed, the proof goes by induction on $N$. In the initial case $N=4$ it is elementary
to check that the claim is equivalent to the inequality
$$m^2(2m^5+11m^4-89m^2-146m-42)>0,$$
which is fulfilled for all $m\geq 3$. \\
For the induction step, we assume that \eqref{eq: combinatorial} holds and we want to
show that
\begin{equation}\label{eq:loc4}
\binom{m(m+N)+1}{N+1}> \binom{m+N}{N+1}(m+1)^{N+1}
\end{equation}
holds as well. It is convenient to abbreviate $A=m(m+N)$. Using the induction assumption
and after elementary operations we get
\begin{align*}
\binom{m(m+N)+1}{N+1}&> \binom{m+N}{N+1}(m+1)^{N+1}\cdot\\
&\cdot\frac{(A+1)A(A-1)\ldots(A-m)}{(m+1)(m+N)(A-N)(A-N-1)\ldots(A-N-m+2)},
\end{align*}
so that in order to get \eqref{eq:loc4}, it suffices to show
$$(A+1)A(A-1)\ldots(A-m)\geq (m+1)(m+N)(A-N)(A-N-1)\ldots(A-N-m+2),$$
which follows by comparing both sides term by term (there are $(m+1)$ terms
on both sides of the inequality).
\hspace*{\fill}\frame{\rule[0pt]{0pt}{6pt}\rule[0pt]{6pt}{0pt}}\endtrivlist
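The inequality of Lemma \ref{lem:inequality} can also be confirmed directly for small parameters (plain Python, a sanity check rather than a proof):

```python
from math import comb

# binom(m(m+N-1)+1, N) > binom(m+N-1, N) * (m+1)^N for small N, m.
for N in range(4, 9):
    for m in range(3, 8):
        lhs = comb(m * (m + N - 1) + 1, N)
        rhs = comb(m + N - 1, N) * (m + 1) ** N
        assert lhs > rhs, (N, m)
print("inequality verified for 4 <= N <= 8 and 3 <= m <= 7")
```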
\paragraph*{Acknowledgement.}
Our research was partially supported by National Science Centre, Poland, grant
2014/15/B/ST1/02197.
\section{Physical motivations and problem statement}
\label{sec:physics}
The standard approach to heat conduction in a medium is based on the continuity relation
linking the heat density $u$ with the heat flux $v$ by means of the identity
\begin{equation}
\label{continuity}
\partial_t u + \partial_x v = 0.
\end{equation}
Such an equation can be considered as a localized version of the global balance
\begin{equation*}
\frac{d}{dt}\int_{C} u(t,x)\,dx+v(b)-v(a)=0,
\end{equation*}
where $C=(a,b)$ is an arbitrarily chosen control interval and $dx$ describes the length element.
Equation~\eqref{continuity} has to be coupled with a second equation relating again the density $u$ and the flux $v$.
\subsection{Parabolic diffusion modeling and traveling waves}
Among others, the most common choice is {\it Fourier's law}, which is considered a good description of heat conduction,
\begin{equation}
\label{fourier}
v=-\mu\,\partial_x u
\end{equation}
for some non-negative proportionality parameter $\mu$.
The same equation is also called {\it Fick's law} when considered in biomathematical settings, {\it Ohm's law} in electromagnetism,
{\it Darcy's law} in porous media.
In general, the coefficient $\mu$ may depend on space and time (in case of heterogeneous media) and also on the density
variable itself $u$ (and/or on its derivatives).
Here, we concentrate on the easiest case where $\mu$ is a strictly positive constant.
The coupling of~\eqref{continuity} with~\eqref{fourier} gives rise to the standard {\it parabolic diffusion equation}
\begin{equation}
\label{parabolicdiffusion}
\partial_t u=\mu\,\partial_{xx} u
\end{equation}
which can be considered as a reliable description of many diffusive behaviors, such as heat conduction.
The same equation can be obtained as an appropriate limit of a Brownian random walk.
Adding a reactive term $f$, which may, in the first instance, depend only on the state variable $u$, amounts to
modifying the continuity equation~\eqref{continuity} into a balance law of the form
\begin{equation}
\label{balance}
\partial_t u + \partial_x v=f(u).
\end{equation}
Then, coupling with Fourier's law~\eqref{fourier}, we end up with the standard scalar parabolic
reaction--diffusion equation
\begin{equation}
\label{reactiondiffusion}
\partial_t u =\mu\,\partial_{xx} u+f(u).
\end{equation}
Two basic examples of nonlinear smooth functions $f$ are usually considered:
\begin{itemize}
\item[\bf i.] {\sl monostable type:} the function $f$ is strictly positive in some fixed interval, say $(U_0,U_1)$ for some $U_0<U_1$,
negative in $(-\infty,U_0)\cup(U_1,+\infty)$, and with simple zeros, i.e. $f'(U_1)<0<f'(U_0)$;
\item[\bf ii.] {\sl bistable type:} the function $f$ is strictly positive in $(-\infty,U_0)\cup(U_\alpha,U_1)$
for some $U_0<U_\alpha<U_1$, negative in $(U_0,U_\alpha)\cup(U_1,+\infty)$, and with simple zeros, i.e. $f'(U_0)$, $f'(U_1)$
strictly negative and $f'(U_\alpha)$ strictly positive.
\end{itemize}
The former case, whose prototype is $f(u)\propto u(1-u)$, corresponds to a logistic-type reaction term and it is usually
referred to as {\it Fisher--KPP equation} (using the initials of the names Kolmogorov, Petrovskii and Piscounov);
the latter, roughly given by the third order polynomial $f(u)\propto u(u-\alpha)(1-u)$ with $\alpha\in(0,1)$,
reflects the presence of an {\it Allee-type effect} (see~\cite{CourBereGasc08}), and it is called the {\it Allen--Cahn equation}
(sometimes also named the {\it Nagumo} and/or {\it Ginzburg--Landau equation}).
In both cases, the equations support existence of {\it traveling wave solutions}, namely functions with the form
$u(t,x):=\phi(\xi)$ with $\xi:=x-ct$.
Hence, the {\it profile of the wave} $\phi$ is such that
\begin{equation*}
\mu\,\phi''+c\,\phi'+f(\phi)=0,
\end{equation*}
for some {\it speed} $c\in\mathbb{R}$.
Due to the fact that equation~\eqref{reactiondiffusion} is autonomous, the profile is determined up to a space translation.
In addition, traveling waves are called \\
{\bf i.} {\it traveling pulses}, if they are homoclinic orbits connecting one equilibrium with itself, that is
\begin{equation*}
\lim_{\xi\to-\infty} \phi(\xi)=\lim_{\xi\to+\infty} \phi(\xi)=:\phi_0,
\end{equation*}
for some non-constant wave profile $\phi$;\\
{\bf ii.} {\it traveling fronts} (or {\it propagating fronts}), if they are heteroclinic orbits connecting two distinct equilibria,
that is
\begin{equation*}
\phi_\pm:=\lim_{\xi\to\pm\infty} \phi(\xi),\qquad \phi_-\neq\phi_+.
\end{equation*}
To fix ideas, let us concentrate on the case $\phi_+$ stable.
Monotonicity of the front is a necessary condition for stability.
In fact, when dealing with partial differential equations for which a maximum principle holds, such as for the scalar
parabolic case~\eqref{reactiondiffusion}, the first eigenfunction has one sign.
Thus the first order derivative with respect to the variable $\xi$, which can be verified to be an eigenfunction of the linearized
operator at the wave itself relative to the eigenvalue $\lambda=0$, is the first eigenfunction, since it has one sign.
Therefore, when the maximum principle holds, all monotone waves, in case of existence, are (weakly) stable.
Analogously, non-monotone waves, again in case of existence, are unstable.
In terms of existence of traveling waves, there is a significant difference between the two cases
(Fisher--KPP and Allen--Cahn), a consequence of the different nature of stability of the critical points of the
associated ODE for the traveling wave profile.
Specifically, in the case of the Fisher--KPP equation, the heteroclinic orbit is a saddle/node connection;
while, in the case of the Allen--Cahn equation, it is a saddle/saddle connection.
This translates into the fact that, for the Fisher--KPP equation, there exists a (strictly negative) maximal speed $c_0$
such that traveling wave solutions exist if and only if $c\leq c_0$ (remember that we have chosen $\phi_+$ stable).
On the contrary, for the Allen--Cahn equation there exists
a unique value of the speed $c_\ast$ which corresponds to a traveling profile $\phi_\ast$.
For the Allen--Cahn equation, an explicit formula for both the profile $\phi$ and the speed $c$ can be found
in the specific case of the third order polynomial $f(u)=\kappa\,u(u-\alpha)(1-u)$.
In this case, the equation for the traveling wave solutions can be rewritten as
\begin{equation}
\label{someq}
\mu\,\phi''+c\,\phi'+\kappa\,\phi(\phi-\alpha)(1-\phi)=0,
\end{equation}
and thus, making the ansatz $\phi'=-A\phi(1-\phi)$ with $A>0$ to be determined, since
\begin{equation*}
\phi''=\frac{d\phi'}{d\phi}\phi'=-A(1-2\phi)\phi'
\end{equation*}
equation~\eqref{someq} reduces to
\begin{equation*}
\mu\,A^2(1-2\phi)-c\,A+\kappa(\phi-\alpha)=0.
\end{equation*}
Such a relation can be further rewritten as a first order polynomial in $\phi$
\begin{equation*}
(\kappa-2\mu A^2)\phi+\mu A^2-cA-\kappa\alpha=0.
\end{equation*}
In order to satisfy the identity for every $\phi$, we need to impose the conditions
\begin{equation*}
A=\sqrt{\frac{\kappa}{2\mu}},\qquad c=c_\ast:=\sqrt{2\mu\kappa}\left(\frac{1}{2}-\alpha\right),
\end{equation*}
so that the unique traveling front for the Allen--Cahn equation has speed $c_\ast$
and profile $\phi$ given by the solution to
\begin{equation*}
\phi'=-\sqrt{\frac{\kappa}{2\mu}}\phi(1-\phi)
=-\sqrt{\frac{\kappa}{2\mu}}\phi+\sqrt{\frac{\kappa}{2\mu}}\phi^2,
\end{equation*}
which has an explicit solution given by
\begin{equation}
\label{explicitfront}
\phi(\xi)=\frac{1}{1+e^{\sqrt{\frac{\kappa}{2\mu}}(\xi-\xi_0)}}
=\frac{1}{2}\left\{1-\tanh(C_{\kappa,\mu}\xi)\right\},
\end{equation}
where $C_{\kappa,\mu}=\sqrt{{\kappa}/{8\mu}}$.
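The explicit front \eqref{explicitfront} can be checked numerically; the following sketch (plain Python, with arbitrarily chosen parameter values) evaluates the residual of the traveling wave equation by finite differences:

```python
import math

mu, kappa, alpha = 1.3, 0.7, 0.25              # arbitrary positive parameters
A = math.sqrt(kappa / (2 * mu))
c = math.sqrt(2 * mu * kappa) * (0.5 - alpha)  # the speed c_*

def phi(xi):
    # explicit front: phi = 1/(1 + exp(A xi)) = (1 - tanh(A xi / 2))/2
    return 1.0 / (1.0 + math.exp(A * xi))

def residual(xi, h=1e-4):
    # central finite differences for phi' and phi''
    d1 = (phi(xi + h) - phi(xi - h)) / (2 * h)
    d2 = (phi(xi + h) - 2 * phi(xi) + phi(xi - h)) / h ** 2
    p = phi(xi)
    return mu * d2 + c * d1 + kappa * p * (p - alpha) * (1 - p)

print(max(abs(residual(x)) for x in (-3.0, -1.0, 0.0, 0.5, 2.0)))  # close to zero
```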
\subsection{Extended models}
While both the continuity equation~\eqref{continuity} and the balance law~\eqref{balance} can be considered reliable
in general contexts, the Fourier law~\eqref{fourier} should be regarded as just one possible choice among many others.
In the very words of Onsager (cf. \cite{Onsag31a}), {\it Fourier's law is only an approximate description of the process of conduction,
neglecting the time needed for acceleration of the heat flow; for practical purposes the time-lag can be neglected in all cases
of heat conduction that are likely to be studied}.
Nevertheless, in many applications, considering extensions of Fourier's law is required.
The first possible modification is the so-called {\it Maxwell--Cattaneo law} (or {\it Maxwell--Cattaneo--Vernotte law})
\begin{equation}
\label{maxwellcattaneo}
\tau\partial_t v+v=-\mu\,\partial_x u,
\end{equation}
where $\tau>0$ is a relaxation parameter describing the time needed by the flux $v$ to align
with the (negative) gradient of the density unknown $u$.
Different alternatives to Fourier's law can be considered.
Among others, let us quote here the so-called {\it Guyer--Krumhansl's law}.
In the one-dimensional setting, this consists in adding a further term
at the righthand side, namely
\begin{equation}
\label{guyerkrumhansl}
\tau\partial_t v+v=-\mu\,\partial_x u+\nu\,\partial_{xx} v
\end{equation}
where $\nu>0$ is related to the mean free path of (heat) carriers.
Both the Maxwell--Cattaneo and the Guyer--Krumhansl laws can be regarded as ways of incorporating into the diffusion
modeling some additional physical terms, in the framework of Extended Irreversible Thermodynamics~\cite{CimmJouRuggVan14}.
In such a context, an appropriate modification of the entropy law has to be taken into account for each one of the
corresponding modified flux laws.
Coupling~\eqref{maxwellcattaneo} with~\eqref{continuity} gives rise to the classical {\it telegraph equation}
\begin{equation}
\label{telegraph}
\tau\,\partial_{tt} u+\partial_t u=\mu\,\partial_{xx} u.
\end{equation}
The principal part of equation~\eqref{telegraph} coincides with that of the wave equation, so the equation is
of hyperbolic type.
Therefore, for $\tau$ sufficiently small, this new equation amends a number of drawbacks inherent in~\eqref{parabolicdiffusion}
such as {\it infinite speed of propagation}, {\it ill-posedness of boundary value problems} and {\it lack of inertia}.
Here, we take into particular consideration the amendment of the latter drawback.
Similarly, coupling~\eqref{guyerkrumhansl} with~\eqref{continuity} furnishes the third order equation
\begin{equation}
\label{pseudopar}
\tau\partial_{tt} u+\partial_t u=(\mu+\nu\,\partial_t)\partial_{xx} u,
\end{equation}
which is usually classified as a pseudo-parabolic regularization of the standard telegraph equation,
the latter being formally recovered in the singular limit $\nu\to 0^+$.
The variable $v$ can be eliminated from the coupled system given by the balance law~\eqref{balance}
and the Maxwell--Cattaneo equation~\eqref{maxwellcattaneo} by using the so-called \textit{Kac's trick} (see~\cite{HaMu01,Kac74}),
which consists in differentiating equation~\eqref{balance} with respect to time $t$ and the relation~\eqref{maxwellcattaneo}
with respect to space $x$ and merging them together, giving rise to the \textit{one-field equation}
\begin{equation}
\label{onefield}
\tau\partial_{tt} u+\bigl(1-\tau f'(u)\bigr)\partial_t u-\mu\,\partial_{xx} u=f(u).
\end{equation}
Let us stress that the specific form for the hyperbolic reaction-diffusion equation~\eqref{onefield} depends only on the
coupling of the balance law~\eqref{balance} with the Maxwell--Cattaneo's law~\eqref{maxwellcattaneo} and not on the
specific dependency of $f$ with respect to $u$.
In particular, the same form holds for both monostable and bistable cases.
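The elimination of $v$ via Kac's trick can also be checked symbolically. A minimal \texttt{sympy} sketch, taking the bistable cubic as a concrete choice of $f$ (the computation is independent of this choice):

```python
import sympy as sp

t, x, tau, mu, kappa, alpha = sp.symbols('t x tau mu kappa alpha', positive=True)
u = sp.Function('u')(t, x)
w = sp.Symbol('w')
fexpr = kappa*w*(w - alpha)*(1 - w)           # bistable cubic as a concrete example
fu = fexpr.subs(w, u)                         # f(u)
fpu = fexpr.diff(w).subs(w, u)                # f'(u)

# balance law gives v_x; the x-derivative of Maxwell-Cattaneo gives v_tx
vx = fu - u.diff(t)                           # from  u_t + v_x = f(u)
vtx = (-mu*u.diff(x, 2) - vx)/tau             # from  tau*v_tx + v_x = -mu*u_xx

# t-derivative of the balance law:  u_tt + v_xt = f'(u)*u_t ; multiply by tau
derived = sp.expand(tau*(u.diff(t, 2) + vtx - fpu*u.diff(t)))
onefield = sp.expand(tau*u.diff(t, 2) + (1 - tau*fpu)*u.diff(t)
                     - mu*u.diff(x, 2) - fu)
gap = sp.simplify(derived - onefield)         # should be identically zero
```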
A similar, but more complicated, equation can in principle be obtained by coupling with the Guyer--Krumhansl law,
namely
\begin{equation}
\label{guyerkrumRD}
\tau\partial_{tt} u+\bigl(1-\tau f'(u)\bigr)\partial_t u=\partial_{xx}\bigl(\mu\,u-\nu f(u)+\nu\,\partial_t u\bigr)+f(u),
\end{equation}
which is a further pseudo-parabolic variation of~\eqref{reactiondiffusion}.
In all three models presented above, it is possible to introduce a convenient
rescaling of the dependent variables.
To start with, let us consider the standard reaction-diffusion equation~\eqref{reactiondiffusion}.
Next, let us introduce a rescaled variable $\tilde u$ defined by
\begin{equation*}
\tilde u=\frac{u-U_0}{U-U_0}
\end{equation*}
for some significant value $U$.
A natural choice is $U=U_1$, so that $f(U_1)=0$; we adopt it in what follows.
Plugging into~\eqref{reactiondiffusion}, we obtain an equation for $\tilde u$
\begin{equation*}
\partial_t \tilde u =\mu\,\partial_{xx} \tilde u+\tilde f(\tilde u)
\end{equation*}
where
\begin{equation*}
\tilde f(\tilde u):=\frac{f\bigl(U_0+(U_1-U_0)\tilde u\bigr)}{U_1-U_0},
\end{equation*}
with the advantage of having $\tilde f(1)=f(U_1)/(U_1-U_0)=0$.
Similarly, since both the Maxwell--Cattaneo~\eqref{maxwellcattaneo} and the Guyer--Krumhansl~\eqref{guyerkrumhansl} relations
are linear in $u$ and $v$, applying the same scaling to $u$ and $v$,
\begin{equation*}
\tilde u=\frac{u-U_0}{U-U_0},\qquad \tilde v:=\frac{v}{U-U_0},
\end{equation*}
gives an analogous reduction to the corresponding one-field equation.
As an example, in the case of Allen--Cahn equation with relaxation, we obtain
\begin{equation*}
\tau\partial_{tt} \tilde u+\bigl(1-\tau \tilde f'(\tilde u)\bigr)\partial_t \tilde u-\mu\,\partial_{xx} \tilde u=\tilde f(\tilde u),
\end{equation*}
with the same definition of $\tilde f$ reported above.
In particular, the assumption $f(1)=0$ is not restrictive.
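The normalization $\tilde f(0)=\tilde f(1)=0$ can be verified symbolically, e.g. for a hypothetical bistable cubic with zeros $U_0<U_\alpha<U_1$:

```python
import sympy as sp

u, U0, U1, Ua, kappa = sp.symbols('u U_0 U_1 U_alpha kappa')
# hypothetical bistable cubic with zeros U0 < Ua < U1
f = kappa*(u - U0)*(u - Ua)*(U1 - u)
# rescaled reaction term, as in the text (with the choice U = U1)
ftilde = f.subs(u, U0 + (U1 - U0)*u)/(U1 - U0)
at_zero = sp.simplify(ftilde.subs(u, 0))     # = f(U0)/(U1-U0)
at_one = sp.simplify(ftilde.subs(u, 1))      # = f(U1)/(U1-U0)
```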
A comprehensive theory of traveling waves for the Allen--Cahn model with relaxation is presented in~\cite{LMPS}, and an extension to the case of the Guyer--Krumhansl variation is in progress.
\subsection{Diagonalization and kinetic representation}
From now on, we will focus on the case of Allen--Cahn equation with relaxation,
that is the semilinear hyperbolic system
\begin{equation}
\label{mainsystem}
\partial_t u + \partial_x v = f(u)\,, \qquad \partial_t v + \frac{\mu}{\tau}\,\partial_x u = -\frac1{\tau}\,v\,,
\end{equation}
for $t\in \mathbb{R}^{+}$, $x \in \mathbb{R}$, relaxation parameter $\tau > 0$ and viscosity $\mu >0\,$, with the assumption that $f$ is of bistable type with $U_0=0$, $U_\alpha \in(0,1)$ and $U_1=1$ (refer to Section~\ref{sec:physics}). Specifically, we are interested in studying numerically the dynamics of solutions to~\eqref{mainsystem} for $f(u)=\kappa\,u(u-\alpha)(1-u)$, $\kappa>0$ and $\alpha\in(0,1)$. The corresponding Cauchy problem is determined by the initial conditions
\begin{equation}
\label{mainitialdata}
u(0,x) = u_0(x)\,, \qquad v(0,x) = v_0(x)\,,
\end{equation}
whereas the initial conditions for~\eqref{onefield} should be assigned by deducing them from~\eqref{mainitialdata} through system~\eqref{mainsystem} as
\begin{equation*}
u(0,x) = u_0(x)\,, \qquad \partial_t u(0,x) = f(u_0(x))-v_0^{\prime}(x)\,.
\end{equation*}
Setting $W=(u,v)$, together with $\mathcal{A}(W)=\left(v,\frac{\mu}{\tau} u\right)$ and $\mathcal{S}(W)=\left(f(u),-\frac1{\tau} v\right)$ in~\eqref{mainsystem}, we recognize the following hyperbolic system of balance laws
\begin{equation*}
\partial_t W + \partial_x \mathcal{A}(W) = \mathcal{S}(W)\,,
\end{equation*}
where the Jacobian of the flux $\mathcal{A}$ is given by the $2\! \times \!2$ constant coefficients matrix
\begin{equation*}
\mathcal{A}^{\prime} = \begin{pmatrix} 0 & 1 \\ \mu / \tau & 0\end{pmatrix},
\end{equation*}
thus leading to the nonconservative form
\begin{equation*}
\partial_t W + \mathcal{A}^{\prime} \partial_x W = \mathcal{S}(W)\,.
\end{equation*}
This system can be directly diagonalized for numerical purposes, with eigenvalues $\lambda_{\pm}=\pm \sqrt{\mu / \tau}$ and diagonalization matrix $\mathcal{D}$, with its inverse $\mathcal{D}^{-1}$, given by
\begin{equation*}
\mathcal{D} = \begin{pmatrix} 1 & 1 \\ -\sqrt{\mu / \tau} & \sqrt{\mu / \tau}\end{pmatrix}, \qquad
\mathcal{D}^{-1} = \dfrac12 \!\begin{pmatrix} 1 & -\sqrt{\tau / \mu} \\ 1 & \sqrt{\tau / \mu}\end{pmatrix},
\end{equation*}
so that $\mathcal{D}^{-1} \mathcal{A}^{\prime}\,\mathcal{D} = {\rm diag}\left(\lambda_{-},\lambda_{+}\right)$. Therefore, the diagonal variables $Z=\mathcal{D}^{-1} W$, corresponding to the \textit{Riemann invariants} for the homogeneous part of~\eqref{mainsystem}, namely
\begin{equation*}
\partial_t u + \partial_x v = 0\,, \qquad \partial_t v + \frac{\mu}{\tau}\,\partial_x u = 0\,,
\end{equation*}
have components
\begin{equation*}
z_{-} = \dfrac12 \left( u - \sqrt{\frac{\tau}{\mu}}\,v \right), \qquad z_{+} = \dfrac12 \left( u + \sqrt{\frac{\tau}{\mu}}\,v \right),
\end{equation*}
so that
\begin{equation}
\label{antidiag}
u = z_{-} + z_{+}\,, \qquad v = \sqrt{\frac{\mu}{\tau}} \left(z_{+}-z_{-}\right).
\end{equation}
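A quick numerical check of the diagonalization (with hypothetical values of $\mu$ and $\tau$):

```python
import numpy as np

mu, tau = 0.5, 2.0                      # hypothetical positive parameters
rho = np.sqrt(mu/tau)
Aprime = np.array([[0.0, 1.0], [mu/tau, 0.0]])
D = np.array([[1.0, 1.0], [-rho, rho]])
Dinv = 0.5*np.array([[1.0, -np.sqrt(tau/mu)], [1.0, np.sqrt(tau/mu)]])
diagonalized = Dinv @ Aprime @ D        # should equal diag(-rho, rho)
```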
The source term is transformed into
\begin{equation*}
\mathcal{D}^{-1} \mathcal{S}(W) = \dfrac12 \!\begin{pmatrix} f(u)+\frac1{\sqrt{\tau \mu}}\,v \\ f(u)-\frac1{\sqrt{\tau \mu}}\,v \end{pmatrix},
\end{equation*}
that is
\begin{equation*}
\mathcal{D}^{-1} \mathcal{S}(\mathcal{D} Z) = \dfrac12 \!\begin{pmatrix} f\!\left(z_{-}+z_{+}\right)+\frac1{\tau}\left(z_{+}-z_{-}\right) \\ f\!\left(z_{-}+z_{+}\right)-\frac1{\tau} \left(z_{+}-z_{-}\right) \end{pmatrix}.
\end{equation*}
Finally, for $\varrho=\sqrt{\mu / \tau}$, the diagonal system reads
\begin{equation}
\label{diagsystem}
\left\{ \begin{aligned}
\partial_t z_{-} - \varrho\,\partial_x z_{-} = \frac12\,f\!\left(z_{-}+z_{+}\right) + \frac1{2\tau}\left(z_{+}-z_{-}\right)\\
\partial_t z_{+} + \varrho\,\partial_x z_{+} = \frac12\,f\!\left(z_{-}+z_{+}\right) - \frac1{2\tau}\left(z_{+}-z_{-}\right)
\end{aligned} \right.
\end{equation}
meaning that the diagonal variables satisfy the so-called weakly coupled semilinear \textit{Goldstein--Taylor model}. Such a system admits an important physical interpretation: it is the reactive version of the hyperbolic \textit{Goldstein--Kac model}~\cite{Kac74} for the (simplest possible) correlated random walk. In view of its numerical approximation, this representation is intrinsically \textit{upwind}, in the sense that $z_{-}$ represents the contribution to the density $u$ of the particles moving to the left with negative velocity $-\varrho\,$, while $z_{+}$ corresponds to the particles moving to the right with positive velocity $\varrho\,$, according to the uniform jump process with equally distributed transition probability.
\section{Formulation of the numerical method}
\label{sec:numerics}
We adopt \textit{finite volume schemes} because they can be implemented for models with low regularity of the solutions, for which an integral formulation is suitable. Moreover, nonuniform discretizations of the physical space are especially appropriate, taking into account the typical inhomogeneity of the dynamics over different regions. This is important for computational efficiency as well, when nonuniform time-grids are used to improve the CPU performance.
\subsection{First order scheme and nonuniform grids}
We set up a nonuniform mesh on the one-dimensional space (see Figure~\ref{fig:firstmesh}) and we denote by $C_i\!=\![{\rm x}_{i-\frac12},{\rm x}_{i+\frac12})$ the finite volume (cell) centered at point ${\rm x}_i\!=\!\tfrac{1}{2}({\rm x}_{i-\frac12}+{\rm x}_{i+\frac12}),\,i\!\in\!\mathbb{Z}\,$, where ${\rm x}_{i-\frac12}$ and ${\rm x}_{i+\frac12}$ are the cell's interfaces and ${\rm dx}_i\!=\!\text{length}(C_i)$, therefore the characteristic space-step is given by ${\rm dx}\!=\!\sup_{i\in\mathbb{Z}} {\rm dx}_i\,$. We build a piecewise constant approximation of any (sufficiently smooth) function by means of its \textit{integral cell-averages}, namely
\begin{equation}
\label{effenum}
{\rm w}_i = \frac1{{\rm dx}_i} \int_{C_i}\!w(x)\,dx = w({\rm x}_i) + {\mathcal O}({\rm dx}^2)\,,
\end{equation}
thanks to the symmetry relation $\int_{C_i}(x-{\rm x}_i)\,dx=0$ ensured by the cell-centered structure of the grid; for smooth $w$, the resulting piecewise constant approximation converges uniformly to~$w(x)$ as ${\rm dx} \rightarrow 0\,$. Moreover, a straightforward computation leads to the approximation
\begin{equation}
\label{interfacial}
{\rm w}_{i+1} - {\rm w}_i \,=\, w^{\prime}({\rm x}_i)\!\left( \!\frac{{\rm dx}_{i+1}}2 + \frac{{\rm dx}_i}2 \!\right)
+ {\mathcal O}({\rm dx}^2)\,,
\end{equation}
that is defined over the interfacial interval $[{\rm x}_i,{\rm x}_{i+1}]$ and, for example, reproduces the correct \textit{upwind interfacial quadrature} for advection with negative speed if we observe that
\begin{equation}
\label{effederiv}
\frac1{{\rm dx}_i} \int_{C_i}\!w^{\prime}(x)\,dx = \frac1{{\rm dx}_i} \left( w({\rm x}_{i+\frac12}) - w({\rm x}_{i-\frac12}) \right).
\end{equation}
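The second-order accuracy of the cell-average approximation~\eqref{effenum} on a nonuniform mesh can be illustrated as follows; the test function and the graded mesh are hypothetical choices:

```python
import numpy as np

# Cell averages of w(x) = sin(x), computed exactly from the antiderivative
# W(x) = -cos(x), on a smoothly graded (nonuniform) mesh over [0, pi].
def max_error(n):
    edges = np.pi*np.linspace(0.0, 1.0, n + 1)**1.5   # graded cell interfaces
    centers = 0.5*(edges[:-1] + edges[1:])
    averages = (-np.cos(edges[1:]) + np.cos(edges[:-1]))/np.diff(edges)
    return np.max(np.abs(averages - np.sin(centers)))

# halving dx should reduce the error by roughly a factor 4 (second order)
e1, e2 = max_error(50), max_error(100)
```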
\begin{figure*}[tb]
\blankbox{.97\columnwidth}{7.5pc
\put(42,18){\line(0,1){63}}
\put(13,22){\line(1,0){295}}
\put(75,22){\circle*{4}} \put(68,12){${\rm x}_{i-1}$}
\put(75,33){\vector(-1,0){31}}
\put(75,33){\vector(1,0){31}}
\put(63,37){${\rm dx}_{i-1}$} \put(101,11){${\rm x}_{i-\frac12}$}
\put(108,18){\line(0,1){63}}
\put(148,22){\circle*{4}} \put(144,12){${\rm x}_i$}
\put(148,33){\vector(-1,0){38}}
\put(148,33){\vector(1,0){38}}
\put(141,37){${\rm dx}_i$}
\put(188,18){\line(0,1){63}}
\put(243,22){\circle*{4}} \put(236,12){${\rm x}_{i+1}$}
\put(298,18){\line(0,1){63}}
\put(243,33){\vector(-1,0){53}}
\put(243,33){\vector(1,0){53}}
\put(234,37){${\rm dx}_{i+1}$} \put(181,11){${\rm x}_{i+\frac12}$}
\linethickness{0.4mm}
\put(42,58){\line(1,0){66}} \put(18,58){${\rm w}_{i-1}$}
\put(108,65){\line(1,0){80}} \put(192,65){${\rm w}_i$}
\put(188,53){\line(1,0){110}} \put(302,52){${\rm w}_{i+1}$}}
\caption{piecewise constant reconstruction on a nonuniform grid, cf.~\eqref{interfacial}}
\label{fig:firstmesh}
\end{figure*}
In that framework, a \textit{semi-discrete finite volume scheme} applied to the system~\eqref{diagsystem} produces a numerical solution in the form of a (discrete valued) vector whose in-cell values are interpreted as approximations of the cell-averages, i.e.
\begin{equation}
\label{averages}
{\rm r}_i(t) \approx \frac1{{\rm dx}_i} \int_{C_i}\!z_{-}(t,x)\,dx \,, \quad {\rm s}_i(t) \approx \frac1{{\rm dx}_i} \int_{C_i}\!z_{+}(t,x)\,dx \,,
\end{equation}
and which satisfy the upwind three-points scheme
\begin{equation}
\label{semidiag}
\begin{split}
\frac{d {\rm r}_i}{dt} &= \frac{\varrho}{{\rm dx}_i} \left({\rm r}_{i+1} - {\rm r}_i \right) + \frac12\,f\!\left({\rm r}_i+{\rm s}_i\right)
+ \frac1{2\tau} \left({\rm s}_i - {\rm r}_i\right)\\
\frac{d {\rm s}_i}{dt} &= -\frac{\varrho}{{\rm dx}_i} \left({\rm s}_i - {\rm s}_{i-1} \right) + \frac12\,f\!\left({\rm r}_i+{\rm s}_i\right)
- \frac1{2\tau} \left({\rm s}_i - {\rm r}_i\right)
\end{split}
\end{equation}
when considering~\eqref{effederiv} for the diagonal variables in~\eqref{diagsystem} which are advected with constant speed. By setting ${\rm u}_i = {\rm r}_i+{\rm s}_i$ and ${\rm v}_i = \varrho \left({\rm s}_i - {\rm r}_i \right)$ according to~\eqref{antidiag}, and recalling that $\varrho=\sqrt{\mu/\tau}\,$, we obtain through a straightforward computation a semi-discrete version of~\eqref{mainsystem} that is
\begin{equation}
\label{semimain}
\begin{split}
\frac{d {\rm u}_i}{dt} &= - \frac{{\rm v}_{i+1} - {\rm v}_{i-1}}{2 {\rm dx}_i} + f({\rm u}_i)
+ \frac12 \varrho\,{\rm dx}_i\,\frac{{\rm u}_{i+1}-2{\rm u}_i+{\rm u}_{i-1}}{{\rm dx}_i^2}\\
\frac{d {\rm v}_i}{dt} &= - \varrho^2 \frac{{\rm u}_{i+1} - {\rm u}_{i-1}}{2 {\rm dx}_i} - \frac1{\tau}\,{\rm v}_i
+ \frac12 \varrho\,{\rm dx}_i\,\frac{{\rm v}_{i+1}-2{\rm v}_i+{\rm v}_{i-1}}{{\rm dx}_i^2}
\end{split}
\end{equation}
with initial data corresponding to~\eqref{mainitialdata} by means of an approximate condition
\begin{equation*}
{\rm u}_i(0) = \frac1{{\rm dx}_i} \int_{C_i}\!u_0(x)\,dx \,, \quad {\rm v}_i(0) = \frac1{{\rm dx}_i} \int_{C_i}\!v_0(x)\,dx \,, \qquad i \in \mathbb{Z}\,.
\end{equation*}
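A minimal sketch of the semi-discrete upwind scheme~\eqref{semidiag} on a uniform grid, with copy (outflow) boundary cells as a hypothetical closure. As a basic consistency check, the spatially constant stable state $u\equiv 1$, $v\equiv 0$ (i.e. ${\rm r}_i={\rm s}_i=1/2$) is an equilibrium of the discrete dynamics:

```python
import numpy as np

def semidiscrete_rhs(r, s, dx, rho, tau, f):
    """Upwind three-point RHS for the diagonal variables on a uniform grid,
    with copy (outflow) boundary cells; a minimal sketch of the scheme."""
    r_plus = np.append(r[1:], r[-1])          # r_{i+1}
    s_minus = np.insert(s[:-1], 0, s[0])      # s_{i-1}
    reaction = 0.5*f(r + s)
    drdt = rho/dx*(r_plus - r) + reaction + (s - r)/(2*tau)
    dsdt = -rho/dx*(s - s_minus) + reaction - (s - r)/(2*tau)
    return drdt, dsdt

# constant stable state u = 1, v = 0  =>  r = s = 1/2 and f(1) = 0
f = lambda u: u*(u - 0.3)*(1 - u)
r0 = 0.5*np.ones(8)
drdt, dsdt = semidiscrete_rhs(r0, 0.5*np.ones(8), dx=0.1, rho=0.7, tau=2.0, f=f)
```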
\noindent It is worthwhile noticing that, in the case of uniform grids, i.e. ${\rm dx}_i={\rm dx}\,$, for any $i\in \mathbb{Z}\,$, a standard Taylor expansion based on~\eqref{effenum}--\eqref{interfacial} shows that~\eqref{semimain} formally corresponds to
\begin{equation*}
\partial_t u + \partial_x v = f(u) + \frac12 \varrho\,{\rm dx}\,\partial_{xx} u\,, \qquad \partial_t v + \varrho^2 \partial_x u = - \frac1{\tau}\,v + \frac12 \varrho\,{\rm dx}\,\partial_{xx} v\,,
\end{equation*}
so that the scheme is consistent in the usual sense of the \textit{modified equation}~\cite{LV}, although we expect the appearance of a numerical viscosity with strength measured through the physical and numerical parameters $\varrho$ and ${\rm dx}\,$.
However, the use of unstructured spatial grids is required for problems incorporating composite geometries, also in view of the recent theoretical advances on adaptive mesh-refinement techniques for the resolution of multi-scale complex systems. For a nonuniform mesh, the approximation~\eqref{interfacial} seems to reveal a lack of consistency of the numerical scheme~\eqref{semimain} with the underlying continuous equations, as the space-step ${\rm dx}_i$ could be very different from the length of the interfacial interval $\bigl|{\rm x}_{i+1}-{\rm x}_i\bigr|=\tfrac{1}{2}{\rm dx}_i +\tfrac{1}{2}{{\rm dx}_{i+1}}$. Nevertheless, an error analysis with optimal rates can be pursued, by virtue of the results concerning the \textit{supra-convergence phenomenon} for numerical approximations of hyperbolic conservation laws. In fact, although a deterioration of the pointwise consistency is observed as a consequence of the non-uniformity of the mesh, the formal accuracy is actually maintained, since the global error behaves better than the (local) truncation error would indicate. This enhancement of the numerical error has been widely explored; the case of (finite volume) upwind schemes for conservation laws and balance equations is addressed in~\cite{BGP}, \cite{KS} and~\cite{CHS}, with proofs of convergence at optimal rates for smooth solutions.
\subsection{Time discretization}
We introduce a variable time-step ${\rm dt}_n\!=\!{\rm t}_{n+1}\!-\!{\rm t}_n$, $n\!\in\!\mathbb{N}$, and we set ${\rm dt}\!=\!\sup_{n\in\mathbb{N}} {\rm dt}_n\,$, therefore we have to consider a {\sl CFL-condition}~\cite{LV} on the ratio $\frac{{\rm dt}_n}{{\rm dx}_i}$ for the numerical stability. We discretize the time operator in~\eqref{semidiag} by means of a mixed explicit-implicit approach, as follows
\begin{equation*}
\begin{split}
\frac{{\rm r}_i^{n+1}-{\rm r}_i^{n}}{{\rm dt}_n} &= \frac{\varrho}{{\rm dx}_i} \bigl({\rm r}_{i+1}^{n+1}-{\rm r}_i^{n+1}\bigr)
+ \frac12\,f({\rm r}_i^{n}+{\rm s}_i^{n}) + \frac1{2\tau} \bigl({\rm s}_i^{n+1}-{\rm r}_i^{n+1}\bigr)\\
\frac{{\rm s}_i^{n+1}-{\rm s}_i^{n}}{{\rm dt}_n} &= -\frac{\varrho}{{\rm dx}_i} \bigl({\rm s}_i^{n+1}-{\rm s}_{i-1}^{n+1}\bigr)
+ \frac12\,f({\rm r}_i^{n}+{\rm s}_i^{n}) - \frac1{2\tau} \bigl({\rm s}_i^{n+1}-{\rm r}_i^{n+1}\bigr)
\end{split}
\end{equation*}
Fully implicit schemes have also been tested, with no appreciable advantage in the quality of the approximation, but with a significant increase of the computational time.
At this point, an important simplification in terms of the actual implementation of the above algorithm arises when considering uniform time and space stepping, i.e. ${\rm dt}_n={\rm dt}\,$, for any $n\in \mathbb{N}\,$, and ${\rm dx}_i={\rm dx}\,$, for any $i\in \mathbb{Z}\,$. Indeed, by setting
\begin{equation*}
\alpha = \varrho\frac{\rm dt}{\rm dx}\,, \qquad \beta = \frac{\rm dt}{2\tau}\,, \qquad {\rm f}_i^n = f({\rm r}_i^n+{\rm s}_i^n)\,,
\end{equation*}
the above algorithm can be rewritten in compact form (here $\alpha$ denotes the Courant number and is not to be confused with the reaction parameter in $f$) as
\begin{equation}
\label{vectorscheme}
\begin{pmatrix}
(1+\beta)\mathbb{I}-\alpha\,\mathbb{D}_{+} & - \beta\,\mathbb{I} \\
- \beta\,\mathbb{I} & (1+\beta)\mathbb{I}+\alpha\,\mathbb{D}_{-} \\
\end{pmatrix}
\!\begin{pmatrix} {\rm r}^{n+1} \\ {\rm s}^{n+1} \end{pmatrix} =
\begin{pmatrix} {\rm r}^n + \frac{\rm dt}2{\rm f}^n \\ {\rm s}^n + \frac{\rm dt}2{\rm f}^n \end{pmatrix}
\end{equation}
where the matrices $\mathbb{I}\,$, $\mathbb{D}_{-}$ and $\mathbb{D}_{+}$ are given by
\begin{equation*}
\mathbb{I}=(\delta_{i,j})\,, \qquad
\mathbb{D}_{-} = \bigl( \delta_{i,j} - \delta_{i,j+1} \bigr)\,, \quad
\mathbb{D}_{+} = \bigl( \delta_{i+1,j} - \delta_{i,j} \bigr)\,,
\end{equation*}
and $\delta_{i,j}$ is the standard \textit{Kronecker symbol}\,. The block-matrix in~\eqref{vectorscheme} is invertible, since its spectrum is contained in the complex half plane $\bigl\{ \lambda \in \mathbb{C}\,:\,{\rm Re}(\lambda) \geq 1 \bigr\}$ as a consequence of the \textit{Ger\v sgorin criterion}~\cite{QSS}.\\
A direct manipulation of~\eqref{vectorscheme} gives
\begin{equation}
\label{finalscheme}
\begin{aligned}
{\rm r}^{n+1} &= \bigl( \mathbb{S} - \alpha^2\,\mathbb{D}_{-} \mathbb{D}_{+} \bigr)^{-1}
\Bigl\{ \bigl[ (1+\beta)\mathbb{I}+\alpha\,\mathbb{D}_{-} \bigr] {\rm r}^n + \beta\,{\rm s}^n\\
& \hskip4.5cm + \frac{\rm dt}2 \bigl[ (1+2\beta)\mathbb{I} + \alpha\,\mathbb{D}_{-} \bigr] {\rm f}^n \Bigr\}\\
{\rm s}^{n+1} &= \bigl( \mathbb{S} - \alpha^2\,\mathbb{D}_{+} \mathbb{D}_{-} \bigr)^{-1}
\Bigl\{ \beta\,{\rm r}^n + \bigl[ (1+\beta)\mathbb{I} - \alpha\,\mathbb{D}_{+} \bigr] {\rm s}^n\\
& \hskip4.5cm + \frac{\rm dt}2 \bigl[ (1+2\beta)\mathbb{I} - \alpha\,\mathbb{D}_{+} \bigr] {\rm f}^n \Bigr\}
\end{aligned}
\end{equation}
where $\mathbb{S}$ is the symmetric matrix
\begin{equation*}
\mathbb{S} = (1+2\beta)\mathbb{I}+\alpha\,(1+\beta)\bigl(\mathbb{D}_{-} - \mathbb{D}_{+}\bigr)\,.
\end{equation*}
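The invertibility claim based on the Ger\v sgorin criterion can be checked numerically on a small block-matrix; the values of $N$, $\alpha$ and $\beta$ below are hypothetical (renamed \texttt{a} and \texttt{b} in the code):

```python
import numpy as np

N, a, b = 16, 0.4, 0.25          # a = rho*dt/dx, b = dt/(2*tau): hypothetical values
I = np.eye(N)
Dp = np.diag(np.ones(N - 1), 1) - I       # D_+ : superdiagonal minus diagonal
Dm = I - np.diag(np.ones(N - 1), -1)      # D_- : diagonal minus subdiagonal

# block-matrix of the compact scheme; Gershgorin discs are centered at
# 1 + a + b with radius at most a + b, hence Re(lambda) >= 1
M = np.block([[(1 + b)*I - a*Dp, -b*I],
              [-b*I, (1 + b)*I + a*Dm]])
min_real_part = np.min(np.linalg.eigvals(M).real)
```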
However, one of the most important features of the models described in Section~\ref{sec:physics} is that they can produce strikingly nontrivial patterns. Therefore, the use of nonuniform meshes is somehow mandatory, and the numerical solution often requires very long computational times, because of the large amount of data to be processed in order to accurately capture the details of the physical phenomena. Moreover, especially for applied scientists involved in setting up realistic experiments, the possibility of running fast comparative simulations using simple algorithms implemented on affordable processors is of primary interest. In this context, parallel computing based on modern graphics processing units (GPUs) enjoys the advantages of a high performance system with relatively low cost, allowing for software development on general-purpose microprocessors even in personal computers. As a matter of fact, GPUs are revolutionizing scientific simulation by providing several orders of magnitude of increased computing capability inside a mass-market product, making these facilities economically attractive across subsets of industry domains~\cite{HHH,TNL,MIM,HWW}. Simple approximation schemes like~\eqref{semimain} are often acceptable even for real problems, so that proper numerical modeling becomes accessible to practitioners from various scientific fields.
\subsection{Second order scheme}
The basic idea to develop second order schemes is to replace the piecewise constant reconstruction~\eqref{effenum} by piecewise linear approximations (see Figure~\ref{fig:secondmesh}), which provide more accurate values at the cell's interfaces.
\begin{figure*}[tb]
\blankbox{.97\columnwidth}{8.3pc
\put(13,22){\line(1,0){295}}
\put(33,17){\line(0,1){77}}
\put(295,17){\line(0,1){77}}
\put(74,22){\circle*{4}} \put(66,12){${\rm x}_{i-1}$}
\put(108,17){\line(0,1){77}} \put(100,11){${\rm x}_{i-\frac12}$}
\put(148,22){\circle*{4}} \put(144,12){${\rm x}_i$}
\put(188,17){\line(0,1){77}} \put(180,11){${\rm x}_{i+\frac12}$}
\put(248,22){\circle*{4}} \put(240,12){${\rm x}_{i+1}$}
\put(75,66){\circle*{3}} \put(59,72){${\rm w}_{i-1}$}
\put(148,54){\circle*{3}} \put(148,57){${\rm w}_i$}
\put(248,45){\circle*{3}} \put(237,51){${\rm w}_{i+1}$}
\thicklines
\put(33,45){\line(2,1){75}}
\put(108,64){\line(4,-1){80}}
\put(188,33){\line(5,1){107}}
\put(93,62){${\rm w}_i^{-}$}
\put(191,42){${\rm w}_i^{+}$}
\put(110,81){${\rm w}_{i-1}^{+}$}
\put(166,33){${\rm w}_{i+1}^{-}$}}
\caption{piecewise linear reconstruction on nonuniform mesh}
\label{fig:secondmesh}
\end{figure*}
On that account, starting from the cell-averages~\eqref{averages}, we build piecewise linear reconstructions, for all $i\!\in\! \mathbb{Z}\,$, $x\!\in\! C_i\,$, given by
\begin{equation}
\label{effenum2}
{\rm r}_i(t,x) = {\rm r}_i(t)+(x-{\rm x}_i)\,{\rm r}_i^{\prime}\,, \qquad {\rm s}_i(t,x) = {\rm s}_i(t)+(x-{\rm x}_i)\,{\rm s}_i^{\prime}\,,
\end{equation}
where ${\rm r}^{\prime}_i$ and ${\rm s}^{\prime}_i$ denote the numerical derivatives, defined as appropriate combinations of the discrete increments between neighboring cells, for example,
\begin{equation}
\label{slopelimiter1}
{\rm r}_i^{\prime} = {\rm lmtr} \left\{ \frac{{\rm r}_{i+1}-{\rm r}_i}{{\rm x}_{i+1}-{\rm x}_i} \,, \frac{{\rm r}_i-{\rm r}_{i-1}}{{\rm x}_i-{\rm x}_{i-1}} \right\}, \quad i \in \mathbb{Z}\,.
\end{equation}
Since higher-order reconstructions are, in general, discontinuous at the cell interfaces, possible oscillations are suppressed by applying suitable \textit{slope-limiter} techniques (see~\cite{HO,HH} for instance).
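One common concrete choice for the limiter ${\rm lmtr}$ (the specific limiter is not fixed in the text) is the minmod function, sketched below on a hypothetical nonuniform grid:

```python
import numpy as np

def minmod(a, b):
    """One standard slope limiter: returns the slope of smaller magnitude
    when the two one-sided slopes agree in sign, and zero at local extrema."""
    return np.where(a*b > 0, np.sign(a)*np.minimum(np.abs(a), np.abs(b)), 0.0)

# one-sided increments on a (possibly nonuniform) grid, as in the text
xc = np.array([0.0, 0.5, 1.2, 2.0, 2.5])   # hypothetical cell centers
rc = np.array([0.0, 1.0, 1.1, 3.0, 2.0])   # hypothetical cell averages
ratios = np.diff(rc)/np.diff(xc)           # (r_{i+1}-r_i)/(x_{i+1}-x_i)
slopes = minmod(ratios[:-1], ratios[1:])   # limited slopes, interior cells only
```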
Therefore, second order interpolations are computed from~\eqref{effenum2} to define the interfacial values at ${\rm x}_{i-\frac12}$ and ${\rm x}_{i+\frac12}$ as follows
\begin{align*}
{\rm r}_i^{-}(t) & = {\rm r}_i(t) - \frac{{\rm dx}_i}2\,{\rm r}_i^{\prime}\,, \quad {\rm r}_i^{+}(t) = {\rm r}_i(t) + \frac{{\rm dx}_i}2\,{\rm r}_i^{\prime}\,, \\
{\rm s}_i^{-}(t) & = {\rm s}_i(t) - \frac{{\rm dx}_i}2\,{\rm s}_i^{\prime}\,, \quad {\rm s}_i^{+}(t) = {\rm s}_i(t) + \frac{{\rm dx}_i}2\,{\rm s}_i^{\prime}\,,
\end{align*}
which are then substituted inside~\eqref{semidiag} to obtain more accurate numerical jumps at the interfaces, namely
\begin{equation}
\label{semidiag2}
\begin{split}
\frac{d {\rm r}_i}{dt} &= \frac{\varrho}{{\rm dx}_i} \left({\rm r}_{i+1}^{-} - {\rm r}_i^{+} \right)
+ \frac12\,f\!\left({\rm r}_i+{\rm s}_i\right) + \frac1{2\tau} \left({\rm s}_i - {\rm r}_i\right)\\
\frac{d {\rm s}_i}{dt} &= -\frac{\varrho}{{\rm dx}_i} \left({\rm s}_i^{-} - {\rm s}_{i-1}^{+} \right)
+ \frac12\,f\!\left({\rm r}_i+{\rm s}_i\right) - \frac1{2\tau} \left({\rm s}_i - {\rm r}_i\right)
\end{split}
\end{equation}
We notice that, the equation being linear in the principal hyperbolic part, the second order scheme with \textit{flux limiter} in~\cite{BCN} is precisely of the type above, since the flux is trivially given by the conservation variables.
For the sake of simplicity, we have considered in Section~\ref{sec:experiments} only the first order discretization in time, but it is easy to recover higher order accuracy by applying Runge--Kutta methods (refer to~\cite{GS} for an overall introduction), which appears to be essential for practical computations.
\section{Numerical simulations}
\label{sec:experiments}
We start by briefly reviewing some of the numerical results in~\cite{LMPS}, in order to assess the reliability of the numerical method presented in Section~\ref{sec:numerics} for determining the behavior of the solutions to the reaction-diffusion models with relaxation introduced in Section~\ref{sec:physics}.
We use the algorithm~\eqref{finalscheme} to analyse the wave speed $c_\ast$ of the traveling front connecting the stable states $0$ and $1$. Following~\cite{LeVYee90}, we introduce an \textit{average speed} of the numerical solution
at time ${\rm t}^n$ defined by
\begin{equation}
\label{numerAve}
c^n = \frac{{\rm dx}}{{\rm dt}}\, \mathbf{1} \cdot ({\rm u}^{n} - {\rm u}^{n+1})
= \frac{{\rm dx}}{{\rm dt}} \sum_i ( {\rm u}^{n}_i - {\rm u}^{n+1}_i ),
\end{equation}
where $\mathbf{1}=(1,\dots,1)$ and ${\rm u}^n={\rm r}^n+{\rm s}^n$, $n\in \mathbb{N}\,$. We consider the bistable function $f(u)=u(u-\alpha)(1-u)$ with $\alpha\in(0,1)$, aiming at comparing the values of the propagation speed $c_\ast$ obtained by means
of the shooting argument in~\cite{LMPS} and the ones given by~\eqref{numerAve}.
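As a sanity check, the average-speed diagnostic should detect an exactly translating front profile with its true speed; note that, on a uniform grid, the sum of cell values is weighted by ${\rm dx}$ so that it approximates the mass integral $\int u\,dx$. The profile and parameters below are hypothetical:

```python
import numpy as np

c, dt, dx = 0.37, 1e-2, 1e-3              # hypothetical speed and grid parameters
x = np.arange(-20.0, 20.0, dx)
U = lambda xi: 0.5*(1 + np.tanh(xi))      # increasing profile from 0 to 1
u_n = U(x)                                # profile at time t^n
u_np1 = U(x - c*dt)                       # same profile shifted by c*dt
c_num = (dx/dt)*np.sum(u_n - u_np1)       # note the cell-size weight dx
```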
The solution to the Cauchy problem is computed from an increasing
datum connecting $0$ and $1\,$, and $c^n$ is then evaluated at a time so large that stabilization
of the propagation speed of the numerical solution is reached. We have tested three choices of the couple $(\tau, \alpha)$ for different values
of ${\rm dt}$ and ${\rm dx}$, where the range of variation of $\tau$ is chosen so that the condition
$\tau\,f'(u)<1$ is satisfied for all values of the unstable zero $\alpha$ (see Table~\ref{tab:numerspeed}). Requiring that the speed be detected with an error always below
5\% of the effective value, we heuristically determine ${\rm dx}=2^{-3}$ and
${\rm dt}=10^{-2}$, which will be used in the subsequent numerical experiments.
For such a choice, we record in Table~\ref{tab:firstorder} the results of the first order scheme
for various values of $\alpha$ and $\tau=1$ or $\tau=4$ (together with
the corresponding relative error) and in Table~\ref{tab:secondorder} those of a second order scheme.
\begin{table}
\centering
\caption{Relative error for the numerical velocity of the Riemann problem with jump at $\ell/2$, $\ell=25$ ($T$ final time and $N$ number of grid points):
A. $\tau=1$, $\alpha=0.9$, $c_\ast=0.5646$, $T=40$;
B. $\tau=2$, $\alpha=0.6$, $c_\ast=0.1737$, $T=30$;
C. $\tau=4$, $\alpha=0.7$, $c_\ast=0.3682$, $T=35$.}
\vskip6pt
{\begin{tabular}{@{}r|c|r|r|r|r|r@{}}
&${\rm dx}$ &$2^0$ &$2^{-1}$ &$2^{-2}$ &$2^{-3}$ &$2^{-4}$ \\ \hline
&A &0.1664 &0.0787 &0.0325 &0.0091 &0.0018 \\
${\rm dt}=10^{-1}$ &B &0.0383 &0.0306 &0.0241 &0.0198 &0.0175 \\
&C &0.1527 &0.1144 &0.0818 &0.0581 &0.0442 \\
\hline
&A &0.1751 &0.0876 &0.0417 &0.0186 &0.0079 \\
${\rm dt}=10^{-2}$ &B &0.0275 &0.0196 &0.0128 &0.0084 &0.0061 \\
&C &0.1420 &0.1018 &0.0684 &0.0457 &0.0339 \\
\hline
&A &0.1760 &0.0885 &0.0427 &0.0196 &0.0089 \\
${\rm dt}=10^{-3}$ &B &0.0265 &0.0184 &0.0117 &0.0072 &0.0049 \\
&C &0.1411 &0.1006 &0.0670 &0.0441 &0.0321
\end{tabular}}
\label{tab:numerspeed}
\end{table}
\begin{table}
\centering
\caption{First order in space: final average speed~\eqref{numerAve} and relative error
with respect to $c_\ast$ given in~\cite{LMPS}
($N=400$, ${\rm dx}=0.125$, ${\rm dt}=0.01$, $\ell=25$, $T=40$)}
\vskip6pt
{\begin{tabular}{@{}r|c|c|c|c@{}}
&$\alpha=0.6$ &$\alpha=0.7$ &$\alpha=0.8$ &$\alpha=0.9$ \\ \hline
$\tau=1$ &0.1580 &0.3096 &0.4497 &0.5751 \\
&{\scriptsize 0.0101} &{\scriptsize 0.0118} &{\scriptsize 0.0145} &{\scriptsize 0.0186} \\
\hline
$\tau=4$ &0.2102 &0.3533 &0.4337 &0.4825 \\
&{\scriptsize 0.0396} &{\scriptsize 0.0404} &{\scriptsize 0.0365} &{\scriptsize 0.0118}
\end{tabular}}
\label{tab:firstorder}
\end{table}
\begin{table}
\centering
\caption{Second order in space: final average speed~\eqref{numerAve} and relative error
with respect to $c_\ast$ given in~\cite{LMPS}
($N=400$, ${\rm dx}=0.125$, ${\rm dt}=0.01$, $\ell=25$, $T=40$)}
\vskip6pt
{\begin{tabular}{@{}r|c|c|c|c@{}}
&$\alpha=0.6$ &$\alpha=0.7$ &$\alpha=0.8$ &$\alpha=0.9$ \\ \hline
$\tau=1$ &0.1560 &0.3052 &0.4421 &0.5630 \\
&{\scriptsize 0.0025} &{\scriptsize 0.0025} &{\scriptsize 0.0026} &{\scriptsize 0.0029} \\
\hline
$\tau=4$ &0.2184 &0.3672 &0.4485 &0.4885 \\
&{\scriptsize 0.0022} &{\scriptsize 0.0025} &{\scriptsize 0.0034} &{\scriptsize 0.0004}
\end{tabular}}
\label{tab:secondorder}
\end{table}
\subsection{Riemann problem as a large perturbation}
For these applications, we restrict to the first order discretization, since we are interested in considering initial data
with sharp transitions. In such cases, higher order approximations of the derivatives typically
introduce spurious oscillations which, even though transient and possibly cured by employing suitable \textit{slope limiters}\,, may nevertheless lead to catastrophic
consequences because of the bistable nature of the reaction term.
The main achievement is
that we are able to show that the actual domain of attraction of the
front is much larger than guaranteed by the nonlinear stability analysis performed in~\cite{LMPS}.
Indeed, the analytical results state that small perturbations
of the propagating fronts are dissipated at an exponential rate. Nevertheless, we expect that
the front possesses a larger domain of attraction (as already known for the parabolic Allen--Cahn equation~\cite{FifeMcLe77}) and, specifically, that any bounded initial data $u_0$ such that
\begin{equation}
\label{decay}
\limsup_{x\to-\infty} \,u_0(x) < \alpha < \liminf_{x\to+\infty} \,u_0(x)
\end{equation}
gives rise to a solution that is asymptotically convergent to some traveling
front connecting $u=0$ with $u=1$.
To support such a conjecture, we perform numerical experiments with
\begin{equation*}
\tau=4\,, \qquad \ell=25\,, \qquad {\rm dx}=0.125\,, \qquad {\rm dt}=0.01\,.
\end{equation*}
We consider the case $\alpha=1/2$, motivated by the fact that the profile of the traveling front for the hyperbolic Allen--Cahn equation is stationary and coincides
with that of the corresponding parabolic equation, explicitly given by~\eqref{explicitfront} and normalized by the condition $U(0)=1/2$. Numerical simulations confirm the decay of the solution to the equilibrium profile (see Figure~\ref{fig:Riemann2}, left). When compared with the standard Allen--Cahn equation, it appears evident that the dissipation mechanism of the hyperbolic equation is weaker than in the parabolic case (see Figure~\ref{fig:Riemann2}, right).
\begin{figure}[hbt]
\begin{center}
{\includegraphics[width=6.75cm]{Riemann2a.pdf}}
{\includegraphics[width=6.75cm]{Riemann2b.pdf}}
\end{center}
\caption{Riemann problem with initial datum $\chi_{{}_{(0,\ell)}}$ in $(-\ell,\ell)$, $\ell=25$.
Left: solution profiles zoomed in the interval $(-5,5)$ at time $t=1$ (dash-dot), $t=5$ (dash), $t=15$ (continuous); for comparison, the solution to the parabolic Allen--Cahn equation at time $t=1$ (dot).
Right: Decay of the $L^2$ distance to the exact equilibrium solution for the hyperbolic
(continuous) and parabolic (dot) Allen--Cahn equations.}
\label{fig:Riemann2}
\end{figure}
\subsection{Randomly perturbed initial data}
The genuine novelty of the numerical simulations illustrated in this section consists in suggesting that the stability of the traveling waves actually extends beyond the regime where $1-\tau f'(u)$ is positive, which is required in the theoretical statements proven in~\cite{LMPS}.
We consider initial data that resemble only very roughly the transition from 0 to 1.
More precisely, we divide the interval $(0,\ell)$ into three parts of equal length and we choose a
random value in each of these sub-intervals coherently with the requirement~\eqref{decay}.
We assign to $u_0(x)$ an independent random value in $(0,0.5)$ for each $x\in(0,\ell/3)$,
in $(0,1)$ for each $x\in(\ell/3,2\ell/3)$ and in $(0.5,1)$ for each $x\in(2\ell/3,\ell)$.
Such a choice is consistent with the hypothesis~\eqref{decay}, and the results of the computation are shown in Figure~\ref{fig:Random1tau}.
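The piecewise-random initial datum described above can be sketched as follows (a minimal Python sketch; the extension by the pure states $u=0$ and $u=1$ outside $(0,\ell)$ is our assumption, chosen to be consistent with requirement~\eqref{decay}):

```python
import numpy as np

def random_initial_datum(ell=25.0, dx=0.125, seed=0):
    # Grid on (-ell, ell), matching the spacing used in the experiments.
    rng = np.random.default_rng(seed)
    x = np.arange(-ell, ell + dx / 2, dx)
    # Pure states outside (0, ell): u = 0 on the left, u = 1 on the right
    # (an assumption, consistent with the limsup/liminf requirement).
    u0 = np.where(x <= 0.0, 0.0, 1.0)
    inside = (x > 0.0) & (x < ell)
    # Sub-interval index 0, 1, 2 and the corresponding random ranges:
    # (0, 0.5) on (0, ell/3), (0, 1) on (ell/3, 2 ell/3), (0.5, 1) on (2 ell/3, ell).
    third = np.clip((x / (ell / 3.0)).astype(int), 0, 2)
    lo = np.array([0.0, 0.0, 0.5])[third]
    hi = np.array([0.5, 1.0, 1.0])[third]
    return x, np.where(inside, rng.uniform(lo, hi), u0)
```

The datum of the second experiment is obtained by simply widening the random ranges on the first and last thirds to $(0,0.7)$ and $(0.3,1)$.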
\begin{figure}[hbt]
\begin{center}
{\includegraphics[width=6.75cm]{plot2_u_t10_B.pdf}}
{\includegraphics[width=6.75cm]{plot2_u_t20_B.pdf}}
\end{center}
\caption{Random initial datum ($\square$).
Solution profiles for the hyperbolic Allen--Cahn equation with relaxation at time $t=10$ (left)
and time $t=20$ (right) for $\tau=1$ (continuous line), $\tau=5$ (dashed) and $\tau=10$ (dotted).
}\label{fig:Random1tau}
\end{figure}
The transition is even more robust than the previous computation shows, since initial data
that do not satisfy the requirement~\eqref{decay} still exhibit convergence.
As an example, let us consider a randomly chosen initial datum $u_0(x)$ given by
an independent random value in $(0,0.7)$ for each $x\in(0,\ell/3)$, in $(0,1)$ for each $x\in(\ell/3,2\ell/3)$ and in $(0.3,1)$ for each $x\in(2\ell/3,\ell)$.
Also in such a case, we clearly observe the appearance and formation of a stable front, as shown in Figure~\ref{fig:Random2tau}.
\begin{figure}[hbt]
\begin{center}
{\includegraphics[width=6.75cm]{plot2_u_t10_D.pdf}}
{\includegraphics[width=6.75cm]{plot2_u_t20_D.pdf}}
\end{center}
\caption{Random initial datum ($\square$) in $(0,\ell)$.
Solution profiles for the hyperbolic Allen--Cahn equation with relaxation
at times $t=10$ (left) and $t=20$ (right),
for $\tau=1$ (continuous line), $\tau=5$ (dashed) and $\tau=10$ (dotted).
}\label{fig:Random2tau}
\end{figure}
The convergence is manifest also in the case where the stability condition
$g(u):=1-\tau f'(u)>0$ fails in some region.
At least for the cubic (bistable) nonlinear reaction term $f$, such a region is typically centered at $u=\alpha$.
In particular, since $u=\alpha$ is an unstable equilibrium, $f'(\alpha)$ is positive, and thus $g(\alpha)$
is negative when $\tau$ is sufficiently large.
The values of the function $g$ are plotted in Figure~\ref{fig:Random1g} and Figure~\ref{fig:Random2g}, respectively,
for two different times, namely $t=10$ and $t=20$, and different values of $\tau$, namely $\tau=1$, $\tau=5$ and $\tau=10$. Of course, the function $g$ is asymptotically positive, since $0$ and $1$ are stable equilibria, where the first-order derivative $f'$ is negative.
The numerical results show that, for sufficiently large values of $\tau\,$, a region appears at the center of the wave profile where $\tau>1/f'(u)$ for some $u\in(0,1)\,$, and it contains the value $u=\alpha$ (at least in the cubic case).
\begin{figure}[hbt]
\begin{center}
{\includegraphics[width=6.75cm]{plot2_primef_t10_B.pdf}}
{\includegraphics[width=6.75cm]{plot2_primef_t20_B.pdf}}
\end{center}
\caption{Profile of the function $g(u):=1-\tau f'(u)$ for time $t=10$ (left) and $t=20$ (right)
corresponding to the initial datum shown in Figure~\ref{fig:Random1tau}.
The legend for the lines is the same as in the previous figures.}
\label{fig:Random1g}
\end{figure}
\begin{figure}[hbt]
\begin{center}
{\includegraphics[width=6.75cm]{plot2_primef_t10_D.pdf}}
{\includegraphics[width=6.75cm]{plot2_primef_t20_D.pdf}}
\end{center}
\caption{Profile of the function $g(u):=1-\tau f'(u)$ for time $t=10$ (left) and $t=20$ (right)
corresponding to the initial datum shown in Figure~\ref{fig:Random2tau}.
The legend for the lines is the same as in the previous figures.
}\label{fig:Random2g}
\end{figure}
\subsection{Pseudo-kinetic scheme for the Guyer--Krumhansl law}
The diagonalization procedure performed in Section~\ref{sec:physics} to deduce a kinetic interpretation of the reaction-diffusion equation with relaxation, starting from the \textit{Maxwell--Cattaneo law}~\eqref{maxwellcattaneo}, cannot be straightforwardly extended to the case of the \textit{Guyer--Krumhansl law} because of the presence of a higher-order (conservative) operator in the model~\eqref{guyerkrumhansl}. Although such an issue is rigorously pursued in a work in progress, here we attempt to present a hybrid version of the kinetic scheme~\eqref{semimain} adapted to the present case, thus providing an easy-to-implement algorithm for the pseudo-parabolic equation~\eqref{guyerkrumRD}.
Starting from~\eqref{semimain}, we consider the following variation,
\begin{equation}
\label{semimain2}
\frac{d {\rm v}_i}{dt} = - \varrho^2 \frac{{\rm u}_{i+1} - {\rm u}_{i-1}}{2 {\rm dx}_i} - \frac1{\tau}\,{\rm v}_i
+ \bigl( \nu + \frac12 \varrho\,{\rm dx}_i \bigr) \frac{{\rm v}_{i+1}-2{\rm v}_i+{\rm v}_{i-1}}{{\rm dx}_i^2}\,,
\end{equation}
that enjoys the same consistency properties as the original scheme, since the order of magnitude of the physical parameter $\nu$ is clearly larger than that of the correction by the numerical viscosity $\frac12 \varrho\,{\rm dx}\,$.
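For concreteness, the right-hand side of~\eqref{semimain2} on a uniform mesh can be sketched as follows (a minimal Python sketch; the companion update for ${\rm u}_i$, coming from the original kinetic scheme~\eqref{semimain}, is not reproduced here):

```python
import numpy as np

def rhs_v_semimain2(u, v, dx, rho, tau, nu):
    # Right-hand side of the modified velocity equation at the interior
    # nodes of a uniform mesh of spacing dx (varrho -> rho in the code).
    dudx = (u[2:] - u[:-2]) / (2.0 * dx)             # centered first derivative
    d2v = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2   # discrete Laplacian of v
    return -rho**2 * dudx - v[1:-1] / tau + (nu + 0.5 * rho * dx) * d2v
```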
Although it deserves to be rigorously justified and further confirmed by extensive numerical simulations, this approach is clearly more convenient than the usual way of recasting higher-order hyperbolic equations like~\eqref{onefield} and~\eqref{guyerkrumRD} as lower-order systems for numerical purposes, namely
\begin{equation*}
\partial_t u = w\,, \qquad \tau \partial_t w + \bigl(1-\tau f^{\prime}(u)\bigr) w - \mu\,\partial_{xx} u + \nu\,\partial_{xx} w = f(u) + \nu\,\partial_{xx} f(u)\,,
\end{equation*}
for which a direct semi-discrete approximation provides $\frac{d {\rm u}_i}{dt} = {\rm w}_i\,$, together with
\begin{equation}
\label{prova1}
\begin{split}
\tau \frac{d {\rm w}_i}{dt} = \,& f({\rm u}_i) - \bigl(1-\tau f^{\prime}({\rm u}_i)\bigr) {\rm w}_i + \mu\,\frac{{\rm u}_{i+1}-2\,{\rm u}_i+{\rm u}_{i-1}}{{\rm dx}^2} \\
& - \nu\,\frac{{\rm w}_{i+1}-2\,{\rm w}_i+{\rm w}_{i-1}}{{\rm dx}^2} + \nu\,\frac{{\rm f(u)}_{i+1}-2\,{\rm f(u)}_i+{\rm f(u)}_{i-1}}{{\rm dx}^2}\,.
\end{split}
\end{equation}
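As an illustration, the semi-discrete right-hand sides of~\eqref{prova1} on a uniform mesh can be sketched in Python as follows; the explicit cubic form $f(u)=u(1-u)(u-\alpha)$ is our assumption for the bistable reaction term:

```python
import numpy as np

def f(u, alpha=0.5):
    # Cubic bistable reaction term; this explicit form is an assumption.
    return u * (1.0 - u) * (u - alpha)

def fprime(u, alpha=0.5):
    # Derivative of f(u) = -u^3 + (1+alpha) u^2 - alpha u.
    return -3.0 * u**2 + 2.0 * (1.0 + alpha) * u - alpha

def rhs_prova1(u, w, dx, tau, mu, nu, alpha=0.5):
    # Semi-discrete right-hand sides (du/dt, dw/dt) at the interior nodes
    # of a uniform mesh of spacing dx.
    lap = lambda a: (a[2:] - 2.0 * a[1:-1] + a[:-2]) / dx**2
    ui, wi = u[1:-1], w[1:-1]
    dudt = wi
    dwdt = (f(ui, alpha) - (1.0 - tau * fprime(ui, alpha)) * wi
            + mu * lap(u) - nu * lap(w) + nu * lap(f(u, alpha))) / tau
    return dudt, dwdt
```

The pure states $u\equiv 0$ and $u\equiv 1$ (with $w\equiv 0$) are steady states of the scheme, since $f$ vanishes there.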
Another way of dealing with higher order one-field equations can be the following: we rewrite~\eqref{guyerkrumRD} as
\begin{equation*}
\partial_t \bigl( \tau \partial_t u + u - \tau f(u) - \nu\,\partial_{xx} u \bigr) - \mu\,\partial_{xx} u = f(u) - \nu\,\partial_{xx} f(u)\,,
\end{equation*}
for which an alternative representation as second order system is given by
\begin{equation*}
\tau \partial_t u + u - \tau f(u) - \nu\,\partial_{xx} u = w\,, \qquad \partial_t w - \mu\,\partial_{xx} u = f(u) - \nu\,\partial_{xx} f(u)\,,
\end{equation*}
thus generalizing~\eqref{reactiondiffusion}, with corresponding semi-discrete approximation
\begin{equation}
\label{prova2}
\begin{split}
\tau \frac{d {\rm u}_i}{dt} &= {\rm w}_i - {\rm u}_i + \tau f({\rm u}_i) + \nu\,\frac{{\rm u}_{i+1}-2\,{\rm u}_i+{\rm u}_{i-1}}{{\rm dx}^2}\,, \\
\frac{d {\rm w}_i}{dt} &= f({\rm u}_i) + \mu\,\frac{{\rm u}_{i+1}-2\,{\rm u}_i+{\rm u}_{i-1}}{{\rm dx}^2} - \nu\,\frac{{\rm f(u)}_{i+1}-2\,{\rm f(u)}_i+{\rm f(u)}_{i-1}}{{\rm dx}^2}\,.
\end{split}
\end{equation}
Both schemes~\eqref{prova1} and~\eqref{prova2} formally converge to the standard discretization of~\eqref{reactiondiffusion} for $\tau\to 0^+$ and $\nu \to 0\,$, but they exhibit the well-known critical issue of defining the correct reconstruction of the external field $f(u)$ on the (possibly nonuniform) spatial mesh. Therefore, the pseudo-kinetic scheme~\eqref{semimain2} maintains a wider interest in view of its underlying physical interpretation.
We conclude by remarking that such a peculiar feature is not shared by other more general forms of relaxation systems, for instance
\begin{equation}
\label{onefield2}
\tau \partial_{tt} u + g(t,x,u\,;\tau)\,\partial_t u - \mu\,\partial_{xx} u = f(u)\,,
\end{equation}
that is considered in~\cite{FLM}, for example. Unless specific expressions for the external field $g$ are taken into account for physical reasons, the only approach to the numerical approximation of~\eqref{onefield2} seems to be its transcription into a first-order system by putting
\begin{equation*}
\partial_t u = w\,, \qquad \tau \partial_t w + g(t,x,u\,;\tau)\,w - \mu\,\partial_{xx} u = f(u)\,.
\end{equation*}
On the other hand, under the hypothesis that $g$ does not depend explicitly on the independent variables, one can consider
\begin{equation*}
g(u\,;\tau)\,\partial_t u = \partial_t \bigl(g(u\,;\tau) u\bigr) - \partial_{u} g(u\,;\tau)\,u\,\partial_t u
\end{equation*}
and then equation~\eqref{onefield2} reads
\begin{equation*}
\partial_t \bigl( \tau\,\partial_t u + g(u\,;\tau) u \bigr) - \partial_{u} g(u\,;\tau)\,u\,\partial_t u - \mu\,\partial_{xx} u = f(u)\,,
\end{equation*}
so that we can define
\begin{equation*}
\tau \partial_t u + g(u\,;\tau) u = w\,, \qquad \partial_t w - \partial_{u} g(u\,;\tau)\,u\,\partial_t u - \mu\,\partial_{xx} u = f(u)\,,
\end{equation*}
with the second equation rewritten like
\begin{equation*}
\partial_t w - \frac1{\tau}\partial_{u} g(u\,;\tau)\,u \left(w - g(u\,;\tau)\,u \right) - \mu\,\partial_{xx} u = f(u)
\end{equation*}
that is even different from all the previous versions, thus revealing the great advantage of a physical justification for the models at hand, as already suggested in Section~\ref{sec:physics}.
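The chain-rule manipulation above can be verified symbolically; the following sketch (using \texttt{sympy}) checks the identity for the concrete choice $g = 1-\tau f'(u)$ with a cubic $f$ (our assumption for the explicit form of $f$), although the identity holds for any smooth $g(u\,;\tau)$:

```python
import sympy as sp

t, x, tau, mu, alpha = sp.symbols('t x tau mu alpha')
u = sp.Function('u')(t, x)

# Concrete smooth g(u; tau) from the stability condition; the cubic
# form of f is an assumption.
f = u * (1 - u) * (u - alpha)
g = 1 - tau * sp.diff(f, u)

ut = sp.diff(u, t)
# Rewritten form:  d/dt(tau*u_t + g*u) - (dg/du)*u*u_t - mu*u_xx
lhs = sp.diff(tau * ut + g * u, t) - sp.diff(g, u) * u * ut - mu * sp.diff(u, x, 2)
# Original form:   tau*u_tt + g*u_t - mu*u_xx
rhs = tau * sp.diff(u, t, 2) + g * ut - mu * sp.diff(u, x, 2)
identity_holds = sp.simplify(sp.expand(lhs - rhs)) == 0
```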
\section*{Acknowledgements}
This work has been partially supported by CONACyT (Mexico) and MIUR (Italy), through the MAE Program for Bilateral Research, grant no. 146529. The work of RGP was partially supported by DGAPA-UNAM, grant IN100318.
\section{Introduction}
Quantum entanglement has fascinated many physicists because of its counterintuitive nature. Quantum entanglement makes it possible to know everything about a system composed of two subsystems (in a pure state) while knowing nothing at all about the subsystems (in the case of maximal entanglement).
After Aspect et al. succeeded in showing experimental evidence of the quantum nature of entanglement by measuring correlations of linear polarizations of pairs of photons~\cite{Aspect:1981zz, Aspect:1982fx}, much attention has been paid to this genuine quantum property in various research areas including quantum information theory, quantum communication, quantum cryptography, quantum teleportation and quantum computation.
Quantum entanglement should play an important role in cosmology. In de Sitter space, where the universe expands exponentially, any two mutually separated regions eventually become causally disconnected. This is most conveniently described by spanning open universe coordinates on two open charts in de Sitter space. The positive frequency mode functions of a free massive scalar field for the Euclidean vacuum (the Bunch-Davies vacuum) that have support on both regions were derived in~\cite{Sasaki:1994yt}. Using them, quantum entanglement between two causally disconnected regions in de Sitter space was first studied by Maldacena and Pimentel~\cite{Maldacena:2012xp}. They showed that the entanglement entropy, which is a measure of quantum entanglement, of a free massive scalar field between two disconnected open charts is non-vanishing. Motivated by this, the entanglement entropy of $\alpha$-vacua~\cite{Kanno:2014lma, Iizuka:2014rua}, that of the Dirac field~\cite{Kanno:2016qcc}, and that of the axion field~\cite{Choudhury:2017bou, Choudhury:2017qyl} were examined. The spectrum of cosmological fluctuations was also studied in~\cite{Kanno:2014ifa, Dimitrakopoulos:2015yva}. Quantum entanglement is also of considerable interest in the context of the proposed ``entanglement--geometry correspondence'' (e.g.~\cite{Ryu:2006bv, Hubeny:2007xt}).
One of the cornerstones of inflationary cosmology is that primordial density fluctuations have a quantum mechanical origin. Inflation leaves the universe in a highly entangled ``initial state'' at the end of inflation. This invites the question of whether compelling observational evidence for the entangled nature of the initial density fluctuations can be found. Several studies have been made on quantifying the initial state entanglement by using some measure of entanglement such as the Bell inequality~\cite{Campo:2005sv, Campo:2005qn, Maldacena:2015bha, Martin:2016tbd, Choudhury:2016cso, Kanno:2017dci, Martin:2017zxs}, entanglement negativity~\cite{Nambu:2008my, Kanno:2014bma, Matsumura:2017swh} and quantum discord~\cite{Martin:2015qta, Kanno:2016gas}. There have also been several attempts to find observational signatures on the CMB when the initial state is a non-Bunch-Davies vacuum due to entanglement between two scalar fields~\cite{Albrecht:2014aga, Kanno:2015ewa}, between two universes~\cite{Kanno:2015lja}, and due to scalar-tensor entanglement~\cite{Collins:2016ahj, Bolis:2016vas}.
In this paper we extend the calculation of Maldacena and Pimentel~\cite{Maldacena:2012xp} to the case where a bubble wall is present between the two open charts. The modes of the scalar field are changed by the presence of the wall, which in turn changes the entanglement entropy between the two regions. We find that for sufficiently large walls, the entanglement entropy approaches zero. Our technical results may prove useful in several of the areas discussed above. Here we focus on the possible implications for the decoherence of bubble universes.
The paper is organized as follows. In section 2, we review the method developed by Maldacena and Pimentel with some comments relevant to the calculation of the entanglement entropy with a bubble wall. In section 3, we introduce the bubble wall in the system and construct the positive frequency mode functions for the Bunch-Davies vacuum. We then compute the entanglement entropy and logarithmic negativity. Finally we summarize our result and discuss the implications in section 4.
\section{Entanglement entropy in de Sitter space}
\noindent
Recently, Maldacena and Pimentel studied quantum entanglement between two causally disconnected regions in de Sitter space in~\cite{Maldacena:2012xp}. They showed that the entanglement entropy of a free massive scalar field between two disconnected open charts is non-vanishing. In this section, we review their result.
\subsection{Mode functions in the open chart}
\begin{figure}[t]
\vspace{-2cm}
\includegraphics[height=8cm]{open.pdf}\centering
\vspace{-1.2cm}
\caption{The Penrose diagram of the de Sitter space is shown.
$L$ and $R$ are the two causally disconnected regions
described by the open charts. A late-time spatial hypersurface
in each region is depicted.}
\label{fig1}
\end{figure}
We consider a free scalar field $\phi$ with mass $m$ in de Sitter space
represented by the metric $g_{\mu\nu}$. The action is given by
\begin{eqnarray}
S=\int d^4 x\sqrt{-g}\left[\,-\frac{1}{2}\,g^{\mu\nu}
\partial_\mu\phi\,\partial_\nu \phi
-\frac{m^2}{2}\phi^2\,\right]\,.
\label{action}
\end{eqnarray}
The metric in each $R$ and $L$ region of open charts in de Sitter space
(see Figure~\,\ref{fig1}) can be obtained by analytic continuation from
the Euclidean metric,
\begin{align}
ds^2_E=H^{-2}\left[d\tau^2+\cos^2\tau\left(d\rho^2+\sin^2\rho\,d\Omega^2\right)
\right]\,,
\label{Emetric}
\end{align}
and expressed, respectively, as
\begin{eqnarray}
ds^2_R&=&H^{-2}\left[-dt^2_R+\sinh^2t_R\left(dr^2_R+\sinh^2r_R\,d\Omega^2\right)
\right]\,,\nonumber\\
ds^2_L&=&H^{-2}\left[-dt^2_L+\sinh^2t_L\left(dr^2_L+\sinh^2r_L\,d\Omega^2\right)
\right]\,,
\end{eqnarray}
where $H^{-1}$ is the Hubble radius and $d\Omega^2$ is the metric on the two-sphere.
Note that the regions $R$ and $L$, covered by the coordinates $(t_R, r_R)$ and $(t_L, r_L)$ respectively, are the two causally disconnected open charts of de Sitter
space\footnote{The point between the $R$ and $L$ regions is a part of the timelike
infinity where infinite volume exists.}.
The solutions of the Klein-Gordon equation are expressed as
\begin{eqnarray}
u_{\sigma p\ell m}(t,r,\Omega)\sim\frac{H}{\sinh t}\,
\chi_{p,\sigma}(t)\,Y_{p\ell m} (r,\Omega)\,,\qquad
-{\rm\bf L^2}Y_{p\ell m}=\left(1+p^2\right)Y_{p\ell m}\,,
\end{eqnarray}
where $(t,r)=(t_R,r_R)$ or $(t_L,r_L)$ and $Y_{p\ell m}$ are harmonic functions on the three-dimensional hyperbolic space. The eigenvalues $p$ normalized by $H$ take positive real values. The positive frequency mode functions corresponding to the Euclidean vacuum (the Bunch-Davies vacuum) that are supported on both the $R$ and $L$ regions were derived by Sasaki, Tanaka and Yamamoto in~\cite{Sasaki:1994yt}:
\begin{eqnarray}
\chi_{p,\sigma}(t)=\left\{
\begin{array}{l}
\frac{e^{\pi p}-i\sigma e^{-i\pi\nu}}{\Gamma(\nu+ip+\frac{1}{2})}P_{\nu-\frac{1}{2}}^{ip}(\cosh t_R)
-\frac{e^{-\pi p}-i\sigma e^{-i\pi\nu}}{\Gamma(\nu-ip+\frac{1}{2})}P_{\nu-\frac{1}{2}}^{-ip}(\cosh t_R)
\,,\\
\\
\frac{\sigma e^{\pi p}-i\,e^{-i\pi\nu}}{\Gamma(\nu+ip+\frac{1}{2})}P_{\nu-\frac{1}{2}}^{ip}(\cosh t_L)
-\frac{\sigma e^{-\pi p}-i\,e^{-i\pi\nu}}{\Gamma(\nu-ip+\frac{1}{2})}P_{\nu-\frac{1}{2}}^{-ip}(\cosh t_L)
\,,
\label{solutions}
\end{array}
\right.
\end{eqnarray}
where $P^{\pm ip}_{\nu-\frac{1}{2}}$ are the associated Legendre functions
and the index $\sigma$ takes the values $\pm 1$ which distinguishes two
independent solutions for each region, and $\nu$ is a mass parameter
\begin{eqnarray}
\nu=\sqrt{\frac{9}{4}-\frac{m^2}{H^2}}\,.
\end{eqnarray}
Here and below in the text, we focus on the case $m^2/H^2<9/4$
to save space and make the discussion clear.
The extension to the case $m^2/H^2>9/4$ is straightforward,
and the result we present will include both mass ranges.
Note that $\nu=1/2$ ($m^2=2H^2$) is equivalent to a conformally coupled
massless scalar. The minimally coupled massless limit is $\nu=3/2$.
For $1/2<\nu<3/2$, it is known that there exists a supercurvature mode
$p=ik$ where $0<k<1$, which may be regarded as a bound-state
mode. The role of supercurvature modes in the quantum entanglement is not clear.
In~\cite{Maldacena:2012xp}, it is conjectured that they do not contribute.
In the body of this paper we simply ignore them. An analysis in the case of
a conformal scalar in the presence of a bubble wall is given in the
Appendix~\ref{app:a}. It turns out that a bubble wall can make the effective potential
deep and allow a supercurvature mode to exist. In fact,
we find that the eigenvalue $k$ can exceed unity
and become arbitrarily large as the effective potential becomes deeper,
and as a result the contribution of the supercurvature mode
in the vacuum spectrum in each open chart is more important\footnote{See Eq.~(3.10) in~\cite{Yamamoto:1996qq}}.
Going back to the solutions in Eq.~(\ref{solutions}),
the Klein-Gordon normalization fixes the normalization factor as
\begin{eqnarray}
N_{p}=\frac{4\sinh\pi p\,\sqrt{\cosh\pi p-\sigma\sin\pi\nu}}{\sqrt{\pi}\,|\Gamma(\nu+ip+\frac{1}{2})|}\,.
\label{norm}
\end{eqnarray}
Since they form a complete orthonormal set of modes, the field can be
expanded in terms of the creation and annihilation operators,
\begin{eqnarray}
\hat\phi(t,r,\Omega) &=& \frac{H}{\sinh t}\int dp \sum_{\sigma,\ell,m}
\left[\,a_{\sigma p\ell m}\,\chi_{p,\sigma}(t)
+a_{\sigma p\ell -m}^\dagger\,\chi^*_{p,\sigma}(t)\,\right]Y_{p\ell m}(r,\Omega)
\nonumber\\
&=&\frac{H}{\sinh t}\int dp \sum_{\ell,m}\phi_{p\ell m}(t)Y_{p\ell m}(r,\Omega)
\,,
\end{eqnarray}
where $Y_{p\ell m}^*=Y_{p\ell -m}$,
$[a_{\sigma p\ell m},a_{\sigma' p'\ell' m'}^\dag]=\delta(p-p')\delta_{\sigma,\sigma'}\delta_{\ell,\ell'}\delta_{m,m'}$, and
$a_{\sigma p\ell m}$ annihilates the Bunch-Davies vacuum,
$a_{\sigma p\ell m}|0\rangle_{\rm BD}=0$,
and we introduced a Fourier mode field operator,
\begin{eqnarray}
\phi_{p\ell m}(t)\equiv
\sum_\sigma\left[\,a_{\sigma p\ell m}\,\chi_{p,\sigma}(t)
+a_{\sigma p\ell -m}^\dagger\,\chi^*_{p,\sigma}(t)\right]\,.
\label{phi1}
\end{eqnarray}
For convenience, we write the mode functions and the associated Legendre
functions of the $R$ and $L$ regions in a simple form
$\chi_{p,\sigma}(t)\equiv\chi^{\sigma}$,
$P_{\nu-1/2}^{ip}(\cosh t_{R,L})\equiv P^{R, L}$,
$P_{\nu-1/2}^{-ip}(\cosh t_{R,L})\equiv P^{R*, L*}$.
Also below we omit the indices $p$, $\ell$, $m$ of $\phi_{p\ell m}$,
$a_{\sigma p\ell m}$ and $a_{\sigma p\ell -m}^\dag$ for simplicity.
For example, $a_\sigma=a_{\sigma p\ell m}$ and $a_\sigma^\dag=a_{\sigma p\ell -m}^\dag$
when no confusion can arise.\footnote{It may be noted that this
abbreviation implies
$(a_\sigma)^\dag=a_{\sigma p\ell m}^\dag\neq a_\sigma^\dag=a_{\sigma p\ell -m}^\dag$.
But since this is a small technical problem that can be easily solved by
doubling the degrees of freedom, below we assume
$(a_\sigma)^\dag=a_\sigma^\dag$.}
\subsection{Bogoliubov transformations and entangled states}
Next we consider the positive frequency
mode functions for the $R$ or $L$ vacuum that
have support only on the $R$ or $L$ region, respectively.
They are given by
\begin{eqnarray}
\varphi^q=\left\{
\begin{array}{ll}
\tilde{N}_p^{-1}P^q~&\mbox{in region}~q\,,
\\
0~ &\mbox{in the opposite region}\,,
\end{array}
\right.
\quad\tilde{N}_p=\frac{\sqrt{2p}}{|\Gamma(1+ip)|}\,,
\label{varphi}
\end{eqnarray}
where $q=(R, L)$. As the Fourier mode field operator (\ref{phi1}) should
be the same under this change of mode functions, we have
\begin{eqnarray}
\phi(t)=a_\sigma\,\chi^\sigma+a_\sigma^\dag\,\chi^\sigma{}^*
=b_q\,\varphi^q+b_q^\dag\,\varphi^q{}^*\,,
\label{fo}
\end{eqnarray}
where we have introduced the new creation and annihilation operators ($b_q,b_q^\dag$)
such that $b_q|0\rangle_{q}=0$.
The operators $(a_\sigma,a_\sigma^\dag)$ and $(b_q,b_q^\dag)$
are related by a Bogoliubov transformation.
The Bunch-Davies vacuum may be constructed over the product
state $|0\rangle_R|0\rangle_L$ as
\begin{eqnarray}
|0\rangle_{\rm BD}\propto\exp\left(\frac{1}{2}\sum_{i,j=R,L}
m_{ij}\,b_i^\dagger\, b_j^\dagger\right) |0\rangle_R|0\rangle_L\,,
\label{bogoliubov1}
\end{eqnarray}
where $m_{ij}$ is a symmetric matrix.
The condition $a_\sigma|0\rangle_{\rm BD}=0$ determines $m_{ij}$:
\begin{eqnarray}
m_{ij}=e^{i\theta}\frac{\sqrt{2}\,e^{-p\pi}}{\sqrt{\cosh 2\pi p+\cos 2\pi\nu}}
\left(
\begin{array}{cc}
\cos \pi\nu & i\sinh p\pi \vspace{1mm}\\
i\sinh p\pi & \cos \pi\nu \\
\end{array}
\right)\,,
\label{mij}
\end{eqnarray}
where $e^{i\theta}$ contains all unimportant phase factors for $\nu^2>0$.
This is an entangled state of the ${\cal H}_R\otimes{\cal H}_L$ Hilbert space.
The density matrix $\rho=|0\rangle_{\rm BD}\,{}_{\rm BD}\langle0|$ is not diagonal
in the $|0\rangle_R|0\rangle_L$ basis unless $\nu= 1/2$ or $3/2$.
To make it easier to trace out the degrees of freedom in, say,
the $L$ space later, we perform a further Bogoliubov transformation
in each of the $R$ and $L$ regions. By construction, this Bogoliubov transformation does
not mix the operators in the ${\cal H}_R$ space with those in the ${\cal H}_L$ space.
We introduce new operators $c_q=(c_R,c_L)$ that satisfy
\begin{eqnarray}
c_R = u\,b_R + v\,b_R^\dagger \,,\qquad
c_L = u^*\,b_L + v^*\,b_L^\dagger\,,
\label{bc}
\end{eqnarray}
to obtain
\begin{eqnarray}
|0\rangle_{\rm BD} = N_{\gamma_p}^{-1}
\exp\left(\gamma_p\,c_R^\dagger\,c_L^\dagger\,\right)|0\rangle_{R'}|0\rangle_{L'}\,.
\label{bogoliubov2}
\end{eqnarray}
Note that the condition $|u|^2-|v|^2=1$ is assumed
so that the new operators satisfy the commutation relation
$[c_i,(c_j)^\dagger]=\delta_{ij}$.
The normalization factor $N_{\gamma_p}$ is given by
\begin{eqnarray}
N_{\gamma_p}^2
=\left|\exp\left(\gamma_p\,c_R^\dagger\,c_L^\dagger\,\right)|0\rangle_{R'}|0\rangle_{L'}
\right|^2
=\frac{1}{1-|\gamma_p|^2}\,,
\label{norm2}
\end{eqnarray}
where $|\gamma_p|<1$ should be satisfied. The consistency relations from
Eq.~(\ref{bogoliubov2})
($c_R|0\rangle_{\rm BD}=\gamma_p\,c_L^\dag|0\rangle_{\rm BD}$,
$c_L|0\rangle_{\rm BD}=\gamma_p\,c_R^\dag|0\rangle_{\rm BD}$) give
\begin{eqnarray}
\gamma_p=\frac{1}{2\zeta}
\left[-\omega^2+\zeta^2+1-\sqrt{\left(\omega^2-\zeta^2-1\right)^2-4\zeta^2}\,\right]\,,
\label{gammap}
\end{eqnarray}
where we defined $\omega\equiv m_{RR} = m_{LL}$ and $\zeta\equiv m_{RL}=m_{LR}$ in Eq.~(\ref{mij}). Note that the minus sign in front of the square root is chosen to ensure $|\gamma_p|<1$. Putting the $\omega$ and $\zeta$ defined in Eq.~(\ref{mij})
into Eq.~(\ref{gammap}), we obtain
\begin{eqnarray}
\gamma_p = i\frac{\sqrt{2}}{\sqrt{\cosh 2\pi p + \cos 2\pi \nu}
+ \sqrt{\cosh 2\pi p + \cos 2\pi \nu +2 }}\,.
\label{gammap2}
\end{eqnarray}
Note that $\gamma_p$ simplifies to $|\gamma_p|=e^{-\pi p}$
for $\nu=1/2$ (conformal) and $\nu=3/2$ (massless).
The $u$ and $v$ may be determined by inserting the above $\gamma_p$ into
the consistency conditions.
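The limiting behavior of $\gamma_p$ can be checked numerically; the following Python sketch evaluates only the modulus of Eq.~(\ref{gammap2}), since the overall phase does not enter the reduced density matrix:

```python
import numpy as np

def gamma_abs(p, nu):
    # Modulus of gamma_p from Eq. (gammap2); the overall factor i is dropped,
    # since only |gamma_p| enters the reduced density matrix and the entropy.
    c = np.cosh(2.0 * np.pi * p) + np.cos(2.0 * np.pi * nu)
    return np.sqrt(2.0) / (np.sqrt(c) + np.sqrt(c + 2.0))
```

In both the conformal ($\nu=1/2$) and massless ($\nu=3/2$) cases one has $\cos 2\pi\nu=-1$, and the expression collapses to $e^{-\pi p}$.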
\subsection{Reduced density matrix and entanglement entropy}
Given the density matrix in the diagonalized form,
it is straightforward to obtain the reduced density matrix.
From Eqs.~(\ref{bogoliubov2}) and (\ref{norm2}), we obtain
the density matrix for each mode labeled by $p,\ell, m$ as
\begin{eqnarray}
\rho_R ={\rm Tr}_{L}\,|0\rangle_{\rm BD}\,{}_{\rm BD}\langle 0|
=\left(1-|\gamma_p|^2\,\right)\sum_{n=0}^\infty
|\gamma_p |^{2n}\,|n;p\ell m\rangle\langle n;p\ell m|\,,
\label{dm}
\end{eqnarray}
where we defined $|n;p\ell m\rangle=1/\sqrt{n!}\,(c_R^\dagger)^n\,|0\rangle_{R'}$.
In the conformal ($\nu=1/2$) and massless ($\nu=3/2$) cases,
the reduced density matrix reduces to a thermal state with temperature $T=H/(2\pi)$.
The entanglement entropy for each mode is given by
\begin{eqnarray}
S(p,\nu)=-{\rm Tr}\,\rho_R(p)\log_2\rho_R(p)
=-\log_2\left(1-|\gamma_p|^2\right)
-\frac{|\gamma_p|^2}{1-|\gamma_p|^2}\log_2|\gamma_p|^2\,.
\label{s}
\end{eqnarray}
Then the total entanglement entropy between two causally disconnected
open regions is obtained by integrating over $p$ and a volume integral
over the hyperboloid,
\begin{eqnarray}
S(\nu)=V_{H^3}^{\rm{reg}}\int_0^\infty\frac{dp\,p^2}{2\pi^2}S(p,\nu)
=\frac{1}{\pi}\int_0^\infty dp\,p^2S(p,\nu)\,,
\label{ints}
\end{eqnarray}
where $V_{H^3}^{\rm{reg}}=2\pi$ is the regularized volume of the
hyperboloid~\cite{Maldacena:2012xp}.
The result is plotted in Figure~\ref{fig2}. We see that the entanglement
is largest for small mass (positive $\nu^2$) and decays exponentially
for large mass (negative $\nu^2$). The two peaks correspond
to the massless ($\nu=3/2$) and conformal ($\nu=1/2$) cases.
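The curve in Figure~\ref{fig2} can be reproduced numerically; the following Python sketch evaluates Eqs.~(\ref{s}) and~(\ref{ints}) by simple quadrature. For $\nu^2<0$ we use the fact that $\nu$ is purely imaginary, so $\cos 2\pi\nu=\cosh 2\pi|\nu|$; the truncation of the $p$-integral is our numerical choice, justified by the exponential decay of the integrand.

```python
import numpy as np

def gamma_mod(p, nu2):
    # |gamma_p| from Eq. (gammap2) as a function of nu2 = nu^2, which may
    # be negative (large mass): then cos(2 pi nu) = cosh(2 pi |nu|).
    if nu2 >= 0.0:
        c = np.cosh(2 * np.pi * p) + np.cos(2 * np.pi * np.sqrt(nu2))
    else:
        c = np.cosh(2 * np.pi * p) + np.cosh(2 * np.pi * np.sqrt(-nu2))
    return np.sqrt(2.0) / (np.sqrt(c) + np.sqrt(c + 2.0))

def entropy_per_mode(p, nu2):
    # S(p, nu) of Eq. (s).
    g2 = gamma_mod(p, nu2) ** 2
    return -np.log2(1.0 - g2) - g2 / (1.0 - g2) * np.log2(g2)

def total_entropy(nu2, pmax=8.0, n=4000):
    # S(nu) of Eq. (ints): (1/pi) * integral of p^2 S(p, nu) over p,
    # truncated at pmax and computed with the trapezoidal rule.
    p = np.linspace(1e-6, pmax, n)
    y = p ** 2 * entropy_per_mode(p, nu2)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(p)) / np.pi
```

Since $|\gamma_p|=e^{-\pi p}$ in both the conformal and massless cases, the two peaks of Figure~\ref{fig2} have equal height, and the entropy decreases as $\nu^2$ becomes negative.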
\begin{figure}[t]
\vspace{-2cm}
\includegraphics[height=7.8cm]{ee.pdf}\centering
\vspace{-0.8cm}
\caption{Plot of the entanglement entropy normalized by the conformal
scalar case ($\nu=1/2$) as a function of $\nu^2$.}
\label{fig2}
\end{figure}
\section{Effects of a bubble wall on the entanglement}
Now we study the effect of a bubble wall on the entanglement.
The Penrose diagram of our setup is depicted in Figure~\ref{fig3}.
We consider the same action as Eq.~(\ref{action}) but now with
$m^2$ as a function of the background geometry which contains
a wall. When the background geometry is given by
an instanton solution, with $\sigma(\tau)$ being the scalar field
configuration and $\phi$ its fluctuation, $m^2$ is given by
\begin{align}
m^2(\tau)=\frac{d^2V(\sigma)}{d\sigma^2}\,,
\end{align}
where $V$ is the potential of the $\sigma$ field,
and the $\tau$-dependence of $m^2$ is through its
$\sigma$-dependence. In a realistic situation, $m^2$ would
be a smooth function of $\tau$, and is positive on both sides
of the wall, but negative at the wall where the potential has a peak.
For simplicity, however, here we model the wall with a delta-function.
\subsection{Setup}
We consider the same action as Eq.~(\ref{action}) but now with
a delta-functional wall in region $C$ parameterized by $\Lambda$ according to,
\begin{eqnarray}
S=\int d^4 x\sqrt{-g}\left[\,-\frac{1}{2}\,g^{\mu\nu}
\partial_\mu\phi\,\partial_\nu \phi
-\frac{m^2-\Lambda\delta(t_C)}{2}\,\phi^2\,\right]\,,
\label{action2}
\end{eqnarray}
where the metric is expressed as
\begin{eqnarray}
ds^2_C&=&H^{-2}\left[dt_C^2+\cos^2t_C\left(-dr_C^2+\cosh^2r_C\,d\Omega^2\right)\right]\,.
\end{eqnarray}
Note that the radial coordinate $t_C$ in the region $C$
coincides with $\tau$ of the instanton solution (see Eq.~(\ref{region:C}) below).
Note also that if we denote the width of the wall by $\Delta\tau_w$,
we have $\Lambda=|d^2V/d\sigma^2|H\Delta\tau_w$.
Setting the field $\phi$ as
\begin{eqnarray}
\phi = \frac{H}{\cos t_C}\chi_{p}(t_C)Y_{p\ell m}(r_C,\Omega)\,,
\end{eqnarray}
the solution of the mode function $\chi_p$ in the $C$ region is given
by the associated Legendre function,
$\chi_p\propto P^{\pm ip}_{\nu-\frac{1}{2}}(\sin t_C)$.
\begin{figure}[t]
\begin{center}
\vspace{-2cm}
\hspace{-1.5cm}
\begin{minipage}{8.0cm}
\includegraphics[height=8cm]{open2.pdf}\centering
\end{minipage}
\begin{minipage}{8.0cm}
\includegraphics[height=8cm]{open3.pdf}
\end{minipage}
\vspace{-0.8cm}
\caption{The Penrose diagrams of de Sitter space with
and without a delta function wall.
We assume pair creation of identical vacuum bubbles
through false vacuum decay, with the bubbles
separated by an infinitesimally thin wall in region $C$.}
\label{fig3}
\end{center}
\end{figure}
\subsection{Mode functions in the presence of a wall}
Now we want to pick up the positive frequency mode functions
which are relevant for the pair creation of bubble universes
through false vacuum decay. Namely, those mode functions
that describe the Euclidean vacuum in the presence of a wall in region $C$.
They are obtained by requiring regularity in
the lower hemisphere of the Euclidean de Sitter space with the wall
when they are analytically continued to that
region~\cite{Sasaki:1994yt,Yamamoto:1996qq}.
\subsubsection{The relation between the Lorentzian and the Euclidean coordinates}
The open chart is obtained by analytic continuation of the Euclidean sphere $S^4$.
The Lorentzian coordinates of the regions $L$, $R$ and $C$ are
related to the Euclidean coordinates given in Eq.~(\ref{Emetric}) as
\begin{eqnarray}
\left\{
\begin{array}{l}
t_R=i\left(\tau-\frac{\pi}{2}\right)\hspace{2.2cm}\,,t_R\geq 0\\
r_R=i\rho\hspace{3.5cm}\,,r_R\geq 0
\end{array}
\right.
\label{region:R}
\end{eqnarray}
\begin{eqnarray}
\left\{
\begin{array}{l}
t_C=\tau\hspace{2.5cm}\,,-\frac{\pi}{2}\leq t_C\leq \frac{\pi}{2}\\
r_C=i\left(\rho-\frac{\pi}{2}\right)\hspace{1.0cm}\,,0\leq r_C\leq \infty
\end{array}
\right.
\label{region:C}
\end{eqnarray}
\begin{eqnarray}
\left\{
\begin{array}{l}
t_L=i\left(-\tau-\frac{\pi}{2}\right)\hspace{2cm}\,,t_L\geq 0\\
r_L=i\rho\hspace{3.6cm}\,,r_L\geq 0
\end{array}
\right.
\end{eqnarray}
For simplicity, we write $\sin t_C\equiv z_C$,
$\cosh t_R\equiv z_R$, and $\cosh t_L\equiv -z_L$ below.
Then, the above relations give $z_C=z_R=-z_L$.
\subsubsection{Analytic continuation in the presence of the wall}
Let $\chi_p^R(z_R)=P^{ip}_{\nu-\frac{1}{2}}(z_R)$
and $\chi_p^L(z_L)=P^{ip}_{\nu-\frac{1}{2}}(z_L)$ ($z_L=-z_R$)
where
\begin{eqnarray}
P^\mu_\nu(z)=\frac{1}{\Gamma(1-\mu)}
\left(\frac{z+1}{z-1}\right)^{\frac{\mu}{2}}
F\left(-\nu,\nu+1,1-\mu;\frac{1-z}{2}\right)\,;
\quad z>1~\mbox{or}~z<-1\,.
\label{legendre1}
\end{eqnarray}
\vspace{6mm}
\noindent
$\bullet$ From $R$ ($R=\{z_R>1\}$) to $C^+$ ($C^+=\{0<z_C<1\}$):
\\
Analytic continuation is through $\Im z_R<0$.
This means that the argument of $z_R-1=z_C-1$ is $-\pi$.
Hence $z_R-1=z_C-1=e^{-i\pi}(1-z_C)$.
Thus
\begin{eqnarray}
(1+z_C)=(1+z_R)\,,
\quad
(1-z_C)=|1-z_R|e^{i\pi}=(z_R-1)e^{i\pi}\,,
\end{eqnarray}
which gives
\begin{eqnarray}
\left(\frac{1+z_R}{z_R-1}\right)^{i\frac{p}{2}}
=e^{-\frac{\pi}{2}p}\left(\frac{1+z_C}{1-z_C}\right)^{i\frac{p}{2}}\,,
\end{eqnarray}
when analytically continued from $z_R>1$ to $z_R=z_C<1$.
This means
\begin{eqnarray}
\chi_p^R(z_C)=e^{-\frac{\pi}{2}p}\,\tilde{P}^{ip}_{\nu-\frac{1}{2}}(z_C)
\,,
\label{RtoC}
\end{eqnarray}
for $z_R=z_C<1$, where $\tilde{P}^\mu_\nu(x)$ for
$-1<x<1$ is defined as
\begin{eqnarray}
\tilde{P}^\mu_\nu(x)=\frac{1}{\Gamma(1-\mu)}
\left(\frac{1+x}{1-x}\right)^{\frac{\mu}{2}}
F\left(-\nu,\nu+1,1-\mu;\frac{1-x}{2}\right)\,.
\label{legendre2}
\end{eqnarray}
\vspace{6mm}
\noindent
$\bullet$ From $C^+$ to $C^{-}$ ($C^-=\{-1<z_C<0\}$):
Assuming there is a delta-functional wall of height $\Lambda$
at $z_C=0$, $\chi_p^R(z_C)$ is deformed to
\begin{eqnarray}
\chi_p^R(z_C)=e^{\frac{\pi}{2}p}
\left(A_p e^{-\pi p}\tilde{P}^{ip}_{\nu-\frac{1}{2}}(z_C)
+B_p e^{\pi p}\tilde{P}^{-ip}_{\nu-\frac{1}{2}}(z_C)\right)\,,
\label{chiCminus}
\end{eqnarray}
in $C^-$, where $A_p$ and $B_p$ are given by
\begin{eqnarray}
A_p&=&1+\frac{\pi}{2i\sinh\pi p}\frac{\Lambda}{H^2}
|\tilde{P}^{ip}_{\nu-\frac{1}{2}}(0)|^2\,,
\\
B_p&=&-\frac{\pi}{2i\sinh\pi p}\frac{\Lambda}{H^2}
e^{-2\pi p}\left(\tilde{P}^{ip}_{\nu-\frac{1}{2}}(0)\right)^2\,.
\end{eqnarray}
Note that in the absence of a wall ($\Lambda=0$), we have $A_p=1$ and $B_p=0$.
\vspace{6mm}
\noindent
$\bullet$ From $L$ ($L=\{z_L<-1\}$) to $C^-$ ($C^-=\{-1<z_C<0\}$):
Now we express the above solution in terms of $\chi^L_p$.
To do this, we first introduce $\hat{z}_C=-z_C$ and analytically
continue $\chi^L_p$ from $L$ to $C^-$ where $0<\hat{z}_C<1$.
In exactly the same way as the analytic continuation from $R$ to $C^+$,
we obtain
\begin{eqnarray}
\chi_p^L(\hat{z}_C)=e^{-\frac{\pi}{2}p}\tilde{P}^{ip}_{\nu-\frac{1}{2}}(\hat{z}_C)
\,,\label{LtoC}
\end{eqnarray}
for $z_L=\hat{z}_C<1$.
\vspace{6mm}
\noindent
$\bullet$ Matching $\chi^R$ with $\chi^L$:
We now express $\chi^R_p$ in terms of $\chi^L_p$
and $\chi^L_{-p}$. To do this, we express $P^{\pm ip}_{\nu-\frac{1}{2}}(z_C)
=P^{\pm ip}_{\nu-\frac{1}{2}}(-\hat{z}_C)$
in terms of $P^{\pm ip}_{\nu-\frac{1}{2}}(\hat{z}_C)$,
which can be achieved by using the transformation formulas for the
hypergeometric functions in Appendix~\ref{app:b}. We find
\begin{eqnarray}
P^{ip}_{\nu-\frac{1}{2}}(-\hat{z}_C)
=C_p\,P^{ip}_{\nu-\frac{1}{2}}(\hat{z}_C)+D_p\, P^{-ip}_{\nu-\frac{1}{2}}(\hat{z}_C)\,,
\label{Ptrans}
\end{eqnarray}
where
\begin{eqnarray}
C_p=\frac{\cos\pi\nu}{i\sinh\pi p}\,,
\qquad
D_p=-e^{-2\pi p}\,
\frac{\cos\left(\nu+ip\right)\pi}{i\sinh\pi p}
\frac{\Gamma\left(\frac{1}{2}+\nu+ip\right)}{\Gamma\left(\frac{1}{2}+\nu-ip\right)}\,.
\end{eqnarray}
Using (\ref{Ptrans}), $\chi^R_p$ is expressed as\footnote{In the language
of~\cite{Yamamoto:1996qq}, we have
\begin{eqnarray}
\alpha_p=e^{\pi p}\left(A_pD_{p}+B_pC_{-p}\right)\,,
\quad
\beta_p=A_pC_{p}+B_pD_{-p}\,.
\label{alphabeta}
\end{eqnarray}
We can check the symmetry,
$\alpha^*_p=e^{-2\pi p}\alpha_{-p}\,,~
\beta^*_p=\beta_{-p}\,.$}
\begin{eqnarray}
\chi^R_p(z_C)
&=&e^{\frac{\pi}{2}p}\left[A_p\,P^{ip}_{\nu-\frac{1}{2}}(-\hat{z}_C)
+B_p\,P^{-ip}_{\nu-\frac{1}{2}}(-\hat{z}_C)\right]
\nonumber\\
&=&\left(A_pC_p+B_pD_{-p}\right)
\chi^L_{p}(\hat{z}_C)
+e^{\pi p}\left(A_pD_p+B_pC_{-p}\right)\chi^L_{-p}(\hat{z}_C)\,.
\end{eqnarray}
Notice that $P^{-ip}_{\nu-\frac{1}{2}}(-\hat{z}_C)$ is not the complex conjugate of
$P^{ip}_{\nu-\frac{1}{2}}(-\hat{z}_C)$.
\subsubsection{The Euclidean vacuum in the presence of the wall}
Finally, the positive frequency mode functions for the
Euclidean vacuum in the presence of the bubble wall are found to be
\begin{eqnarray}
\chi_p^R(z)&=&\frac{1}{N_w}\left\{
\begin{array}{l}
P_{\nu-\frac{1}{2}}^{ip}(z_R)
\,,\\\\
\left(A_pC_p+B_pD_{-p}\right)P_{\nu-\frac{1}{2}}^{ip}(z_L)
+e^{\pi p}\left(A_pD_p+B_pC_{-p}\right)P_{\nu-\frac{1}{2}}^{-ip}(z_L)\,,
\label{bwsolutions1}
\end{array}
\right.\\\nonumber\\
\chi_p^L(z)&=&\frac{1}{N_w}\left\{
\begin{array}{l}
\left(A_pC_p+B_pD_{-p}\right)P_{\nu-\frac{1}{2}}^{ip}(z_R)
+e^{\pi p}\left(A_pD_p+B_pC_{-p}\right)P_{\nu-\frac{1}{2}}^{-ip}(z_R)
\,,\\\\
P_{\nu-\frac{1}{2}}^{ip}(z_L)
\,,
\label{bwsolutions2}
\end{array}
\right.
\end{eqnarray}
where the Klein-Gordon normalization factor for the above solutions is
\begin{eqnarray}
N_w^2=\tilde{N}_p^2\left(1+|f_p|^2-|g_p|^2\right)\,,
\end{eqnarray}
with $\tilde{N}_p$ defined in Eq.~(\ref{varphi}), and we have defined
\begin{eqnarray}
f_p=A_pC_p+B_pD_{-p}\,,\qquad g_p=e^{\pi p}\left(A_pD_p+B_pC_{-p}\right)\,.
\label{fg}
\end{eqnarray}
Note that in the absence of the wall ($\Lambda=0$),
we have $f_p=C_p$ and $g_p=e^{\pi p}D_p$.
\subsection{Bogoliubov transformations and entangled states}
We perform the same Bogoliubov transformation in Eq.~(\ref{fo}) that mixes
the operators in the Hilbert spaces ${\cal H}_R$ and ${\cal H}_L$.
The derivation of the symmetric matrix $m_{ij}$ is given in
Appendix~\ref{app:c}. The components of $m_{ij}$ in Eq.~(\ref{mij})
are now expressed as
\begin{align}
\omega&=-F\left[
\left(f_p+f_p^*\right)\left(1-\frac{|g_p|^2}{1-f_p^{*2}}\right)
-\left(1+|f_p|^2-|g_p|^2\right)\left(f_p-\frac{f_p^*|g_p|^2}{1-f_p^{*2}}\right)
\right]\,,\qquad
\label{omega}
\\
\zeta&=-F\left[
\left(f_p+f_p^*\right)\left(f_p-\frac{f_p^*|g_p|^2}{1-f_p^{*2}}\right)
-\left(1+|f_p|^2-|g_p|^2\right)\left(1-\frac{|g_p|^2}{1-f_p^{*2}}\right)
\right]\,,\qquad
\label{zeta}
\end{align}
where
\begin{eqnarray}
F=\frac{g_p^*}{1-f_p^{*2}}\frac{1}{E}\,,
\qquad
E=\left(1-\frac{|g_p|^2}{1-f_p^{*2}}\right)^2
-\left(f_p-\frac{f_p^*|g_p|^2}{1-f_p^{*2}}\right)^2\,.
\end{eqnarray}
If there is no wall, $\omega$ is real ($\omega=\omega^*$) and $\zeta$ is pure imaginary ($\zeta=-\zeta^*$) for positive $\nu^2$, so the second Bogoliubov transformation simplifies as in Eq.~(\ref{bc}). In the presence of the wall, however, the $\omega$ and $\zeta$ of Eqs.~(\ref{omega}) and (\ref{zeta}) are no longer real and pure imaginary, respectively, even for positive $\nu^2$. In this case,
we need to perform the Bogoliubov transformation of the form
\begin{eqnarray}
c_R = u\,b_R + v\,b_R^\dagger \,,\qquad
c_L = \bar{u}\,b_L + \bar{v}\,b_L^\dagger\,,
\end{eqnarray}
to get the relation Eq.~(\ref{bogoliubov2}). Note that $|u|^2-|v|^2=1$ and $|\bar{u}|^2-|\bar{v}|^2=1$ are assumed. Then the consistency relations ($c_R|0\rangle_{\rm BD}=\gamma_p\,c_L^\dag|0\rangle_{\rm BD}$,
$c_L|0\rangle_{\rm BD}=\gamma_p\,c_R^\dag|0\rangle_{\rm BD}$) give the system of four homogeneous equations
\begin{eqnarray}
&&\omega\,u+v-\gamma_p\,\zeta\,\bar{v}^*=0\,,\qquad
\zeta\,u-\gamma_p\,\bar{u}^*-\gamma_p\,\omega\,\bar{v}^*=0\,,\nonumber\\
&&\omega\,\bar{u}+\bar{v}-\gamma_p\,\zeta\,v^*=0\,,\qquad
\zeta\,\bar{u}-\gamma_p\,u^*-\gamma_p\,\omega\,v^*=0\,.
\end{eqnarray}
In order to have a non-trivial solution of the above system of equations, $|\gamma_p|^2$ must satisfy~\cite{Kanno:2014lma}
\begin{eqnarray}
|\gamma_p|^2&=&\frac{1}{2|\zeta|^2}\left[
-\omega^2\zeta^{*2}-\omega^{*2}\zeta^2+|\omega|^4-2|\omega|^2+1+|\zeta|^4\right.\nonumber\\
&&\qquad\quad\left.-\sqrt{\left(\omega^2\zeta^{*2}+\omega^{*2}\zeta^2-|\omega|^4+2|\omega|^2-1-|\zeta|^4\right)^2-4|\zeta|^4}\,
\right]\,,
\label{gammap3}
\end{eqnarray}
where we took the minus sign in front of the square root so that the expression reduces to Eq.~(\ref{gammap}) when there is no wall.
Then putting Eqs.~(\ref{omega}) and (\ref{zeta}) into
Eq.~(\ref{gammap3}), we can calculate the entanglement entropy for each
mode in Eq.~(\ref{s}). The resulting total entanglement entropy, Eq.~(\ref{ints}),
is plotted in Figure~\ref{fig4}.
\begin{figure}[t]
\begin{center}
\vspace{-2cm}
\hspace{-1.2cm}
\begin{minipage}{8cm}
\includegraphics[height=6.8cm]{eenu.pdf}\centering
\end{minipage}
\begin{minipage}{8cm}
\hspace{0.2cm}
\includegraphics[height=6.8cm]{eelambda.pdf}
\end{minipage}
\vspace{-0.5cm}
\caption{The left panel shows plots of the entanglement entropy versus $\nu^2$
for several values of $\Lambda$. Running from top to bottom on the right side of the panel: $\Lambda=0$ (blue), $\Lambda/H^2=1$ (orange), $\Lambda/H^2=3$ (green), $\Lambda/H^2=5$ (red) and $\Lambda/H^2=8$ (purple).
The right panel shows the $\Lambda$ dependence of the entanglement
entropy, where the horizontal axis
is in units of $H=1$. Again running from top to bottom along the right, we show
the massless case ($\nu=3/2$, blue), $\nu=1$ (orange) and the conformal case ($\nu=1/2$, green).}
\label{fig4}
\end{center}
\end{figure}
\subsection{Entanglement entropy}
From the left panel in Figure~\ref{fig4}, we see that the entanglement entropy decreases as the effect of the wall increases for small mass (positive $\nu^2$).
For large mass (negative $\nu^2$), the entanglement entropy decays
exponentially in the absence of the wall ($\Lambda=0$). In the presence of the wall, the peak at the conformally coupled scalar ($\nu=1/2$) shifts to the
left and eventually disappears as the effect of the wall increases.
The right panel also shows that the entanglement entropy for the massless
case and for the conformally coupled scalar take the same value
in the absence of the wall ($\Lambda=0$).
However, as the effect of the wall becomes large, the entanglement entropy
of the conformally coupled scalar decays faster than that of the massless case.
\subsection{Logarithmic negativity}
\label{ln}
Many entanglement measures have been proposed to characterize the entanglement of a quantum state. The logarithmic negativity is one such measure. It is derived from the positive partial transpose criterion for separability~\cite{Horodecki:2009zz}, the idea being to characterize an entangled state as one that is not separable. In this subsection, we compute the entanglement of our model using the logarithmic negativity.
The second Bogoliubov transformation Eq.~(\ref{bogoliubov2}) is rewritten as
\begin{eqnarray}
|0\rangle_{\rm BD} = N_{\gamma_p}^{-1}
\sum_{n=0}^\infty\gamma_p^n\,|n;p\ell m\rangle_{R'}|n;p\ell m\rangle_{L'}\,,
\label{bogoliubov3}
\end{eqnarray}
where the states $|n;p\ell m\rangle_{R'}$ and $|n;p\ell m\rangle_{L'}$ are $n$-particle excitation states in the $R'$ and $L'$ spaces. Any pure state admits a Schmidt decomposition expressed as
\begin{eqnarray}
|\psi\rangle=\sum_i\sqrt{\lambda_i}\,|i\rangle_A\otimes|i\rangle_B\,,
\label{lnschmidt}
\end{eqnarray}
where $\lambda_i$ is the probability of observing the $i$th state and satisfies $\sum_i\lambda_i=1$. In terms of these eigenvalues, the logarithmic negativity is expressed as
\begin{eqnarray}
L{\cal N}=2\log_2\left(\sum_i\sqrt{\lambda_i}\right)\,.
\end{eqnarray}
Thus, if we compare Eq.~(\ref{bogoliubov3}) with the Schmidt decomposition, we can read off the corresponding eigenvalues
\begin{eqnarray}
\sqrt{\lambda_n}=N_{\gamma_p}^{-1}|\gamma_p|^n\,,
\end{eqnarray}
and the logarithmic negativity for each mode is calculated as~\cite{Kanno:2014bma}
\begin{eqnarray}
L{\cal N}(p,\,\nu)=2\log_2\left(\sum_{n=0}^\infty N_{\gamma_p}^{-1}|\gamma_p|^n\right)=\log_2\frac{1+|\gamma_p|}{1-|\gamma_p|}\,.
\end{eqnarray}
Then the logarithmic negativity is obtained by integrating over $p$ and a volume integral over the hyperboloid,
\begin{eqnarray}
L{\cal N}(\nu)=\frac{1}{\pi}\int_0^\infty dp\,p^2 L{\cal N}(p,\,\nu)\,.
\end{eqnarray}
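As a numerical cross-check of the geometric series above, the per-mode logarithmic negativity can be summed directly from the Schmidt coefficients $\sqrt{\lambda_n}=N_{\gamma_p}^{-1}|\gamma_p|^n$. The sketch below is illustrative only (the function name and series truncation are our own choices):

```python
import math

def log_negativity(gamma_abs, n_terms=200):
    """Per-mode logarithmic negativity of the two-mode squeezed state
    |0>_BD = N^{-1} sum_n gamma^n |n>_R' |n>_L', by direct summation."""
    # Normalization: N^2 = sum_n |gamma|^(2n) = 1 / (1 - |gamma|^2)
    n_inv = math.sqrt(1.0 - gamma_abs**2)
    # LN = 2 log2( sum_n sqrt(lambda_n) ), with sqrt(lambda_n) = N^{-1} |gamma|^n
    s = sum(n_inv * gamma_abs**n for n in range(n_terms))
    return 2.0 * math.log2(s)

g = 0.5
print(log_negativity(g))                 # direct series
print(math.log2((1.0 + g) / (1.0 - g)))  # closed form of the geometric series
```

Both lines agree (up to truncation error), confirming that the sum of the geometric series gives $\log_2\left[(1+|\gamma_p|)/(1-|\gamma_p|)\right]$ per mode.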
The result is plotted in Figure~\ref{fig5}. We find that the qualitative features are the same as the result of entanglement entropy.
\begin{figure}[t]
\begin{center}
\vspace{-2cm}
\hspace{-1.2cm}
\begin{minipage}{8.0cm}
\includegraphics[height=6.8cm]{lnnu.pdf}\centering
\end{minipage}
\begin{minipage}{8.0cm}
\hspace{0.2cm}
\includegraphics[height=6.8cm]{lnlambda.pdf}
\end{minipage}
\vspace{-0.5cm}
\caption{The left panel shows plots of the logarithmic negativity versus $\nu^2$. We set $H=1$.
Running from top to bottom on the right side of the panel: $\Lambda=0$ (blue), $\Lambda/H^2=1$ (orange), $\Lambda/H^2=3$ (green), $\Lambda/H^2=5$ (red) and $\Lambda/H^2=8$ (purple).
The right panel shows the $\Lambda$ dependence of the logarithmic negativity, where the horizontal axis is in units of $H=1$. Again running from top to bottom along the right, we show
the massless case ($\nu=3/2$, blue), $\nu=1$ (orange) and the conformal case ($\nu=1/2$, green).}
\label{fig5}
\end{center}
\end{figure}
\section{Summary and discussion}
We have studied the effect of a bubble wall on the entanglement entropy of
a free massive scalar field between two causally disconnected open charts in
de Sitter space. We assume there is a delta-functional wall between them, parameterized by the wall parameter $\Lambda$.
This may be regarded as a model describing the pair creation of
identical bubble universes separated by a bubble wall.
To analyze the system, we first derived the Euclidean vacuum mode functions
of the scalar field in the presence of the wall in the coordinates that
respect the open charts.
We then gave the Bogoliubov transformation between the Euclidean vacuum and
the open chart vacua that makes the reduced density matrix diagonal.
We derived the reduced density matrix in one of the open charts ($R$ space)
after tracing out the other ($L$ space). We then computed the entanglement entropy of the scalar field by using the reduced density matrix and compared the result with the case of no bubble wall. We found that larger values of the wall parameter $\Lambda$ correspond to less entanglement. We also computed a different measure of entanglement called the logarithmic negativity. Its qualitative features were found to be the same as those of the entanglement entropy.
In the limit of small entanglement entropy the BD quantum state approaches a product of ground state wavefunctions for each of the charts. Our results thus show that for large $\Lambda$ the dynamics of bubble formation select this product state and ensure its stability under evolution. These are the features identified in the literature\footnote{See for example~\cite{Zurek:1981xq,Zurek:1992mv,Zurek:1992aa1,Anglin:1995pg} and also~\cite{Zurek:2003zz}, section IV-D, for a nice review.} to correspond to the selection of special “pointer states” via the decoherence process. Our results thus may be regarded as evidence of decoherence of bubble universes from (and by) each other.
We also note that in discussions of the black hole firewall problem~\cite{Braunstein:2009my,Almheiri:2012rt} it is argued that the absence of entanglement implies the existence of a firewall. We are intrigued by a certain parallel, in a kind of reverse engineered way, with our results: We show a particular example of how the presence of a wall can reduce entanglement.
\section*{Acknowledgments}
This work was supported in part by the MEXT KAKENHI Nos.~15H05888 and 15K21733.
SK was supported by IKERBASQUE, the Basque Foundation
for Science and the Basque Government (IT-979-16),
and Spanish Ministry MINECO (FPA2015-64041-C2-1P). AA was supported by a grant from UC Davis, and thanks A. Arrasmith for helpful conversations.
\section{Introduction}
The localization of a vehicle is an important task in the field of autonomous driving. The current trend in research is to find solutions using accurate maps. However, when such maps are not available (an area is not mapped or there have been big changes since the last update), we need Simultaneous Localization And Mapping (SLAM) solutions. There are many such solutions based on different sensors, such as cameras (mono or stereovision), odometers and depth sensors or a combination of these sensors.
The advantage of LiDARs with respect to cameras is that the noise associated with each distance measurement is independent of the distance and the lighting conditions. However, the amount of data to process and the sparse density of collected range images are still challenging. In this paper, we present a new scan-to-model framework using an implicit surface representation of the map inspired by previous RGB-D methods to better handle the large amount and sparsity of acquired data. The result is low-drift LiDAR odometry and an improvement in the quality of the mapping.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.92\linewidth]{figures/test5.jpg}
\end{center}
\caption[width=\linewidth]{Trajectory in red of our IMLS SLAM with a vertical Velodyne HDL32 on top of a car (two loops of $2~km$ length) in the city of Paris. We can see the good superposition of the two loops. In black is the trajectory of the classical scan-to-scan matching. The starting point of both methods is the same green circle. The end point of the IMLS SLAM and the end point of the scan-to-scan matching are inside the blue circles.}
\label{fig:traj_velo32_paris}
\end{figure}
\section{Related Work}
There are hundreds of works on SLAM in the literature. Here, we only present the recent advances in six degrees of freedom (6-DOF) LiDAR SLAM with 3D mapping. Most LiDAR approaches are variations of the traditional iterative closest point (ICP) scan matching. ICP is a well-known scan-to-scan registration method.~\cite{rusinkiewicz2001} and more recently~\cite{pomerleau2015} have surveyed efficient variants of ICP, such as point-to-plane matching.
~\cite{nuchter2007} give a good review of different 6-DOF LiDAR methods based on 2D or 3D depth sensors, but their solution uses only a stop-scan-go strategy.~\cite{bosse2009} studies a continuous spinning 2D laser. They build a voxel grid from laser points and in each voxel compute shapes to keep only cylindrical and planar areas for matching. Using a 3D laser (Velodyne HDL64),~\cite{moosmann2011} presents a SLAM taking into account the spinning effect of the Velodyne to de-skew the range image along the trajectory. They build a map as a 3D grid structure containing small surfaces, and the de-skewed LiDAR scan is localized with respect to that map. Using a 3D laser,~\cite{ceriani2015} uses a 6-DOF SLAM based on a sparse voxelized representation of the map and a generalization of ICP to find the trajectory.
More recently, LiDAR Odometry And Mapping (LOAM) by~\cite{zhang2017} has come to be considered the state of the art in 6-DOF LiDAR SLAM. They focus on edge and planar features in the LiDAR sweep and keep them in a map for edge-line and planar-planar surface matching.
Unlike 2D or 3D spinning LiDARs, RGB-D sensors, such as the Kinect, are able to produce dense range images at high frequency. KinectFusion~\cite{newcombe2011} presents 3D mapping and localization algorithms using these sensors. They track the 6-DOF position of the Kinect relying on a voxel map storing truncated signed distances to the surface. Such methods are fast and accurate but limited in the volume explored.
\medskip
Our method relies only on a 3D LiDAR sensor, such as those produced by Velodyne, and the continuous spinning effect of such sensors. We do not use any data from other sensors, such as IMU, GPS, or cameras. Our algorithm is decomposed into three parts. First, we compute a local de-skewed point cloud from one rotation of the 3D LiDAR. Second, we select specific samples from that point cloud to minimize the distance to the model cloud in the third part. The main contributions of our work are twofold and concern the point selection in each laser scan and the definition of the model map as a point set surface.
\section{Scan egomotion and dynamic object removal}
We define a scan $S$ as the data coming from one rotation of the LiDAR sensor. During the rotation, the vehicle has moved and we need to create a point cloud taking into account that displacement (its egomotion, defined as the movement of the vehicle during the acquisition time of a scan). For that purpose, we assume that the egomotion is relatively similar between two consecutive scans; therefore, we estimate the current egomotion using the previous relative displacement.
We define at any time $t$ the transformation of the vehicle pose as $\tau(t)$ relative to its first position. We only look for discrete solutions for the vehicle positions: $\tau(t_k)$ as the position of the vehicle at the end of the current scan (at time $t_k$ for scan $k$). For any LiDAR measurement at time $t$, the vehicle pose is computed as a linear interpolation between the end of the previous scan $\tau(t_{k-1})$ and the end of the current scan $\tau(t_{k})$.
At time $t_k$, we already know all $\tau(t_{i})$ for $i \leq k-1$ and look for the discrete position $\tau(t_{k})$. To build a local de-skewed point cloud from the current sweep measurements, we must have an estimate $\tilde{\tau}(t_{k})$ of the vehicle position at time $t_k$. We use only previous odometry and define $\tilde{\tau}(t_{k}) = \tau(t_{k-1})*\tau(t_{k-2})^{-1}*\tau(t_{k-1})$.
We build the point cloud scan $S_k$ using a linear interpolation of positions between $\tau(t_{k-1})$ and $\tilde{\tau}(t_{k})$. That egomotion is a good approximation if we assume that the angular and linear velocities of the LiDAR are smooth and continuous over time. Next, we do a rigid registration of that point cloud scan to our map to find $\tau(t_{k})$.
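The constant-velocity prediction $\tilde{\tau}(t_{k}) = \tau(t_{k-1})\tau(t_{k-2})^{-1}\tau(t_{k-1})$ can be sketched with $4\times 4$ homogeneous transforms. The snippet below is a minimal illustration with hypothetical names (the actual implementation is in C++, and rotation interpolation for de-skewing is omitted):

```python
import numpy as np

def predict_pose(tau_km1, tau_km2):
    """Constant-velocity egomotion prediction:
    tau~(t_k) = tau(t_{k-1}) * tau(t_{k-2})^{-1} * tau(t_{k-1})."""
    return tau_km1 @ np.linalg.inv(tau_km2) @ tau_km1

def make_pose(translation):
    """Build a 4x4 homogeneous transform with identity rotation."""
    T = np.eye(4)
    T[:3, 3] = translation
    return T

# The vehicle moved 1 m along x during the previous scan...
tau_km2 = make_pose([0.0, 0.0, 0.0])
tau_km1 = make_pose([1.0, 0.0, 0.0])
# ...so we predict one more meter along x for the current scan.
tau_k_est = predict_pose(tau_km1, tau_km2)
print(tau_k_est[:3, 3])  # -> [2. 0. 0.]
```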
Before matching the scan to the model map, we need to remove all dynamic objects from the scan. This is a very complicated task, requiring a high level of semantic information about the scene to be done exactly. Instead, we perform small object removal rather than true dynamic object removal and discard from the scene all objects whose size suggests that they could be dynamic objects.
To begin, we detect the ground points in the scan point cloud using a voxel growing similar to that in~\cite{deschaud2010}. We remove these ground points and cluster the remaining points (clusters are groups of points whose distance to the nearest point is less than $0.5~m$ in our case). We discard small groups of points from the scan; they can represent pedestrians, cars, buses, or trucks. We remove groups of points whose bounding box is less than $14~m$ in $X_v$, less than $14~m$ in $Y_v$, and less than $4~m$ in $Z_v$ ($(X_v,Y_v,Z_v)$ are the axes of a vehicle frame with $X_v$ pointing right, $Y_v$ pointing forward, and $Z_v$ pointing upward). Even after removing all these data, we keep enough information about large infrastructure, such as walls, fences, facades, and trees (those with a height of more than $4~m$). Finally, we add back the ground points to the scan point cloud.
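A minimal sketch of the small-object filter follows (illustrative only: a naive $O(n^2)$ single-linkage clustering stands in for the voxel-growing step, ground removal is omitted, and all names are our own; thresholds follow the text):

```python
import numpy as np

def cluster_points(points, link_dist=0.5):
    """Single-linkage clustering: points closer than link_dist to any
    member join the cluster (naive O(n^2) version, for illustration)."""
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = current
        stack = [i]
        while stack:
            j = stack.pop()
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.where((d < link_dist) & (labels == -1))[0]:
                labels[k] = current
                stack.append(k)
        current += 1
    return labels

def keep_large_objects(points, labels, min_box=(14.0, 14.0, 4.0)):
    """Discard clusters whose bounding box is below min_box in every axis."""
    keep = np.zeros(len(points), dtype=bool)
    for c in np.unique(labels):
        mask = labels == c
        extent = points[mask].max(axis=0) - points[mask].min(axis=0)
        if np.any(extent >= min_box):   # large in at least one dimension
            keep |= mask
    return points[keep]

# A 20 m "facade" line of points and a car-sized cluster 100 m away:
x = np.arange(51) * 0.4
facade = np.column_stack([x, np.zeros_like(x), np.zeros_like(x)])
car = np.array([[100.0, 0.0, 0.0], [100.3, 0.0, 0.0], [100.0, 0.3, 0.0]])
pts = np.vstack([facade, car])
kept = keep_large_objects(pts, cluster_points(pts))
print(len(pts), len(kept))  # -> 54 51 : the car-sized cluster is removed
```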
\section{Scan sampling strategy}
Once the unwarped point cloud from a scan has been created, we need to select sampling points to do the matching. The classical ICP strategy is to select random samples, as in~\cite{rusinkiewicz2001}.~\cite{gelfand2003} gives a more interesting strategy using the covariance matrix of the scan point cloud to find geometrically stable samples. They show that with suitable samples, ICP can converge in cases where it fails with random samples. However, their strategy can be slow for the matching part because they may select a large number of samples.
We propose a different sampling inspired by~\cite{gelfand2003}. Instead of using principal axes of the point cloud from the covariance matrix, we keep the axes of the vehicle frame. We define the LiDAR scan point cloud in the vehicle frame with axes ($(X_v,Y_v,Z_v)$). By doing so, most of the planar areas of the scan point cloud (if they are present) are aligned to the $(X_v,Y_v,Z_v)$ axes. For example, ground points provide observability of the translation along $Z_v$. Facades give observability of the translation along $X_v$ and $Y_v$.
First, we need to compute the normals at every point. To do that quickly, we can compute an approximate normal using the spherical range image of the sensor, similar to~\cite{badino2011}. For every point we keep the planar scalar $a_{2D}$ of its neighborhood, as defined by~\cite{demantke2011}: $a_{2D}=(\sigma_2-\sigma_3)/\sigma_1$ where $\sigma_i=\sqrt{\lambda_i}$ and $\lambda_i$ are eigenvalues of the PCA for the normal computation (see \cite{demantke2011} for more details). Second, we compute the nine values for every point $x_i$ in the scan cloud $S_k$:
\begin{itemize}
\item $a_{2D}^2 (x_i \times \vec{n_i}) \cdot X_v $
\item $-a_{2D}^2 (x_i \times \vec{n_i}) \cdot X_v $
\item $a_{2D}^2 (x_i \times \vec{n_i}) \cdot Y_v $
\item $-a_{2D}^2 (x_i \times \vec{n_i}) \cdot Y_v $
\item $a_{2D}^2 (x_i \times \vec{n_i}) \cdot Z_v $
\item $-a_{2D}^2 (x_i \times \vec{n_i}) \cdot Z_v $
\item $a_{2D}^2 | \vec{n_i} \cdot X_v |$
\item $a_{2D}^2 | \vec{n_i} \cdot Y_v |$
\item $a_{2D}^2 | \vec{n_i} \cdot Z_v |$
\end{itemize}
It is not mandatory for our method to have planar zones in the environment, but such zones improve the quality of matching compared to non-planar zones; that is why $a_{2D}$ appears in the formulation of the choice of samples.
The first six values give the contribution of the point $x_i$ of the scan to the observability of the different unknown angles (roll, pitch, yaw) of the vehicle (points far from the sensor center contribute more). The last three values give the contribution of the point $x_i$ to the observability of the unknown translations (the same importance for points far from or close to the sensor center). We sort the nine lists in descending order so that the first points of every list are the points with the most observability in relation to the unknown parameters. During the matching part, we select from each list a sample $x$, starting from the beginning of the list. We find the closest point $p_c$ of $x$ in the model cloud. We keep sample $x$ only if $\|x-p_c\|\leq r$. The parameter $r$ removes outliers between the scan and the model cloud. We do this until we find $s$ samples from each list. The same point may have a good score in different lists, in which case we use it multiple times as a sample during the matching process. In any case, we have in total $9s$ samples (we choose the parameter $s$ to keep fewer points than the size of the scan $S_k$).
Figure~\ref{fig:sampling} shows an example of $s=100$ points taken from each list for a scan composed of $13683$ points. For our experiments, we used only $900$ points as samples (around $7\%$ of the scan) to do the matching. It is important to have the minimum number of sampling points (the speed of the matching process depends mainly on that number), but at the same time, we need enough points to have good observability of all parameters of the transformation $\tau(t_k)$. We define as $\tilde{S_k}$ the subset of points in $S_k$ chosen by our sampling strategy.
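The construction of the nine lists can be sketched as follows (a simplified sketch with hypothetical names; normals and $a_{2D}$ are assumed precomputed from the spherical range image as in the text, and outlier rejection during matching is omitted):

```python
import numpy as np

def sampling_lists(points, normals, a2d):
    """Return nine lists of point indices sorted by decreasing
    observability score (six rotation lists, three translation lists)."""
    cross = np.cross(points, normals)       # x_i x n_i
    w = a2d ** 2                            # favor planar neighborhoods
    scores = [
        w * cross[:, 0], -w * cross[:, 0],  # rotation about X_v, both signs
        w * cross[:, 1], -w * cross[:, 1],  # rotation about Y_v
        w * cross[:, 2], -w * cross[:, 2],  # rotation about Z_v
        w * np.abs(normals[:, 0]),          # translation along X_v
        w * np.abs(normals[:, 1]),          # translation along Y_v
        w * np.abs(normals[:, 2]),          # translation along Z_v
    ]
    return [np.argsort(-sc) for sc in scores]

# Toy scan: a ground patch (normals along Z_v) and a facade (normals along X_v).
ground = np.column_stack([np.linspace(-5, 5, 20), np.linspace(-5, 5, 20), np.zeros(20)])
facade = np.column_stack([np.full(20, 5.0), np.linspace(-5, 5, 20), np.linspace(0, 4, 20)])
pts = np.vstack([ground, facade])
nrm = np.vstack([np.tile([0.0, 0.0, 1.0], (20, 1)), np.tile([1.0, 0.0, 0.0], (20, 1))])
lists = sampling_lists(pts, nrm, np.ones(len(pts)))
print(lists[8][0] < 20, lists[6][0] >= 20)  # -> True True: ground locks Z_v, facade locks X_v
```

During matching, the first $s$ valid samples would then be drawn from the head of each list.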
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.92\linewidth]{figures/sampling3.jpg}
\end{center}
\caption[width=\linewidth]{Our sampling strategy on a scan point cloud. The points in red are selected samples to do the scan matching. We can see there are points far from the sensor center to better lock the rotations and points on the most planar zones for better matching.}
\label{fig:sampling}
\end{figure}
\section{Scan-to-Model matching with Implicit Moving Least Squares (IMLS) surface representation}
KinectFusion~\cite{newcombe2011} is a well-known SLAM method based on the Kinect depth sensor. It does scan-to-model matching using an implicit surface from~\cite{curless1996} as a model. The implicit surface is defined by a Truncated Signed Distance Function (TSDF) and is encoded in a voxel map.~\cite{newcombe2011} show great results of scan-to-model matching compared to classical scan-to-scan matching. The problem with the TSDF is that the surface is defined on a voxel grid (empty, SDF, unknown) and is therefore usable only in a small volume of space. That is why the TSDF representation cannot be used in large outdoor environments for autonomous driving. In our SLAM, we use the same scan-to-model strategy, but we choose a different surface representation: the Implicit Moving Least Squares (IMLS) representation computed directly on the map point cloud of the last $n$ localized scans.
Point set surfaces are implicit definitions of surfaces directly on point clouds. In~\cite{levin2004}, Levin is the first to define a Moving Least Square (MLS) surface, the set of stable points of a projection operator. It generates a $C^\infty$ smooth surface from a raw noisy point cloud. Later,~\cite{kolluri2008} defined the IMLS surface: the set of zeros of a function $I(x)$. That function $I(x)$ also behaves as a distance function close to the surface.
We define our point cloud map $P_k$ as the accumulation of $n$ previous localized scans. That point cloud $P_k$ contains noise because of the LiDAR measurements but also errors in localization of the previous scans.
Using the IMLS framework by~\cite{kolluri2008}, we define the function $I^{P_k}(x)$ using equation~\ref{eq:imls} as an approximate distance of any point $x$ in $\mathbb{R}^3$ to the implicit surface defined by the point cloud $P_k$:
\begin{equation}
I^{P_k}(x) = \frac{\sum_{p_i \in P_k}{W_i(x)((x-p_i) \cdot \vec{n_i})}}{\sum_{p_j \in P_k}{W_j(x)}},
\label{eq:imls}
\end{equation}
where $p_i$ are points of the point cloud $P_k$ and $\vec{n_i}$ normals at point $p_i$.
The weights $W_i(x)$ are defined as $W_i(x)=e^{-\|x-p_i\|^2 / h^2}$ for $x$ in $\mathbb{R}^3$. Because the function $W_i(x)$ decreases quickly when points $p_i$ in $P_k$ are far from $x$, we keep only points of $P_k$ inside a ball $B(x,r)$ of radius $r$ (for $r=3h$, points $p_i$ farther than $r$ from $x$ satisfy $W_i(x) \leq 0.0002$). The parameter $r$ is the maximum distance for the neighbor search, and rejected outliers are treated as having no correspondence between the scan and the map (as described in the previous section). The parameter $h$, which defines the IMLS surface, has been well studied in previous papers~\cite{kolluri2008} and depends on the sampling density and noise of the point cloud $P_k$.
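Equation~(\ref{eq:imls}) translates directly into code. The sketch below is illustrative only (hypothetical names, and a brute-force neighbor search in place of a k-d tree), using the parameter values $h$ and $r$ from the text:

```python
import numpy as np

def imls_distance(x, points, normals, h=0.06, r=0.20):
    """Approximate signed distance I^Pk(x) to the IMLS surface of the map.
    Neighbors are restricted to the ball B(x, r); returns None when x has
    no neighbor within r (treated as an outlier)."""
    d = np.linalg.norm(points - x, axis=1)
    near = d < r
    if not np.any(near):
        return None
    w = np.exp(-d[near] ** 2 / h ** 2)                                # W_i(x)
    plane_d = np.einsum('ij,ij->i', x - points[near], normals[near])  # (x - p_i) . n_i
    return np.sum(w * plane_d) / np.sum(w)

# Noise-free plane z = 0: the IMLS distance of a point at height 0.05 is 0.05.
gx, gy = np.meshgrid(np.linspace(-0.2, 0.2, 9), np.linspace(-0.2, 0.2, 9))
plane = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])
nrm = np.tile([0.0, 0.0, 1.0], (len(plane), 1))
print(imls_distance(np.array([0.01, 0.0, 0.05]), plane, nrm))  # -> 0.05 (up to float rounding)
```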
We want to localize the current scan $S_k$ in point cloud $P_k$. To do so, we want to find the transformation $R$ and $t$ that minimizes the sum of squared IMLS distances: $\sum_{x_j \in \tilde{S_k}} {|I^{P_k}(R x_j + t)|^2}$. Due to exponential weights, we cannot approximate that nonlinear least-square optimization problem by a linear least-square one, as in ICP point to plane. Instead of minimizing that sum, we project every point $x_j$ of $\tilde{S_k}$ on the IMLS surface defined by $P_k$: $y_j = x_j - I^{P_k}(x_j)\vec{n_j}$ where $\vec{n_j}$ is the normal of the closest point $p_c$ to $x_j$ and is a good approximation of the surface normal at the projected point $y_j$.
Now, we have a point cloud $Y_k$, the set of projected points $y_j$, and we look for the transformation $R$ and $t$ that minimizes the sum $\sum_{x_j \in \tilde{S_k}}{| \vec{n_j} \cdot (R x_j + t - y_j) |^2 }$. As in ICP point-to-plane, we can now make the small angle assumption on $R$ to get a linear least-square optimization problem that can be solved efficiently (more details in the technical report~\cite{low2004}). We compute $R$ and $t$ and move the scan $S_k$ using that transformation. The process is then started again: project the points $x_j$ of the scan on the IMLS surface to form $Y_k$, find the new transformation $R$ and $t$ between the scan $S_k$ and the point cloud $Y_k$, and move the scan with the found transformation $R$ and $t$. We iterate until a maximum number of iterations has been reached. The final transformation $\tau(t_k)$ is the composition of the transformation between the first and last iterations of the matching process and the estimate $\tilde{\tau}(t_k)$. Now, we can compute a new point cloud from the raw data of the current scan by linear interpolation of the vehicle position between $\tau(t_{k-1})$ and $\tau(t_k)$. We add that point cloud to the map point cloud and remove the oldest scan to always keep $n$ scans in the map.
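The linearized point-to-plane solve can be sketched as follows (a minimal numpy sketch under the small-angle assumption $R \approx I + [\mathbf{r}]_\times$, in the spirit of~\cite{low2004}; all names are ours):

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One linearized point-to-plane step: solve for (rx, ry, rz, t)
    minimizing sum_j | n_j . (R x_j + t - y_j) |^2 with R ~ I + [r]_x.
    The residual linearizes to r . (x_j x n_j) + n_j . t + n_j . (x_j - y_j)."""
    A = np.hstack([np.cross(src, normals), normals])  # (n, 6) Jacobian
    b = np.einsum('ij,ij->i', normals, dst - src)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    rx, ry, rz = sol[:3]
    R = np.array([[1.0, -rz, ry], [rz, 1.0, -rx], [-ry, rx, 1.0]])
    return R, sol[3:]

# Recover a pure translation from 7 point-to-plane constraints:
src = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0],
                [0, 1, 1], [1, 0, 1], [2, 0, 0]])
normals = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1],
                    [1, 0, 0], [0, 1, 0], [0, 0, 1]])
t_true = np.array([0.1, -0.2, 0.05])
R, t = point_to_plane_step(src, src + t_true, normals)
print(np.round(t, 6))  # recovers t_true = [0.1, -0.2, 0.05]
```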
With the IMLS formulation, we need to compute the normal $\vec{n_j}$ from the point cloud $P_k$ for every query point $x_j$ of scan $S_k$. This is done at every iteration during the neighbor search, but only for the selected samples, using the same normal for neighboring points (so $9s$ normals are computed at each iteration).
Figure~\ref{fig:imls} is a schematic example of the difference between our IMLS scan-to-model matching and classical ICP scan-to-point-cloud matching. The advantage of this formulation is that the scan converges towards the implicit surface, which improves the quality of the matching.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.92\linewidth]{figures/icp_imls.jpg}
\end{center}
\caption[width=\linewidth]{Schematic example of classical ICP scan matching compared to our IMLS scan-to-model framework. The blue points are the noisy point cloud $P_k$ from the $n$ previous localized scans. The red points are points from the new scan. The first row shows the first and last iterations of ICP point-to-plane matching: at each iteration, we minimize the distance to the closest point. The second row shows the first and last iterations of our IMLS point-to-model matching; the dashed black line is the IMLS surface. At each iteration, we minimize the sum of the distances to the IMLS surface (for simplicity, the normal formulation is omitted from the schema).}
\label{fig:imls}
\end{figure}
\section{Experiments}
Our IMLS SLAM has been implemented in C++ using only the FLANN library (for nearest-neighbor search with a k-d tree) and Eigen. We ran tests on real outdoor datasets from Velodyne HDL32 and Velodyne HDL64 LiDARs spinning at 10~Hz (each scan is acquired over $100~ms$). The method runs on one CPU core at 4~GHz and uses less than 1~GB of RAM. The Velodyne HDL32 and HDL64 are rotating 3D LiDAR sensors with 32 and 64 laser beams, respectively.
For all experiments, we used $s=100$ sampling points in each list, $h=0.06~m$ (for the IMLS surface definition), $r=0.20~m$ (maximum distance for the neighbor search), $20$ matching iterations (to keep a constant processing time instead of using a convergence criterion), and $n=100$ scans in the model point cloud.
\subsection{Tests on our Velodyne HDL32 dataset}
To test the SLAM, we made a $4~km$ acquisition in the center of Paris with a Velodyne HDL32 sensor mounted vertically on the roof of a vehicle (12951 scans in total). It is a $2~km$ loop that we drove twice, returning exactly to the starting point (less than a meter difference). We then measured the distance between the first and last localized scans as an error metric. Figure~\ref{fig:traj_velo32_paris} shows the trajectory of our IMLS SLAM and the trajectory of a classical ICP scan-to-scan matching (equivalent to $n=1$). We can see the good superposition of the two loops with our SLAM. Figure~\ref{fig:pc_velo32_paris} shows a small portion of the point cloud generated by the SLAM. We can see fine details such as the fence, which means the egomotion of the vehicle is well estimated during each scan. The distance error between the first and last scans with our IMLS SLAM is $16~m$, a drift of only 0.40\%.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.92\linewidth]{figures/pc_velo32_paris.jpg}
\end{center}
\caption[width=\linewidth]{Small portion of the point cloud generated by our IMLS SLAM (red rectangle in Fig.~\ref{fig:traj_velo32_paris}). The visible details of the fence show we get a good assumption on the egomotion for each laser scan.}
\label{fig:pc_velo32_paris}
\end{figure}
We also tested our SLAM with the Velodyne HDL32 in a different orientation: the sensor is still on the roof of the vehicle but is tilted 60 degrees in pitch. The acquisition was made in a square of the city of Lille with many turns to test the robustness of the matching ($1500$ scans in total). Figure~\ref{fig:pc_traj_velo32_lille} shows in red the trajectory of the vehicle computed by our SLAM together with the generated point cloud. There are no duplicate objects despite the many turns in the square; the point cloud provides a qualitative evaluation of the mapping.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.92\linewidth]{figures/pc_traj_velo32_lille.jpg}
\end{center}
\caption[width=\linewidth]{Trajectory in red and point cloud of our IMLS SLAM in the city of Lille. We can see the quality of the map with no duplicate objects despite numerous turns.}
\label{fig:pc_traj_velo32_lille}
\end{figure}
\subsection{Tests on the public dataset KITTI with Velodyne HDL64}
We tested our SLAM method on the public KITTI dataset. The odometry evaluation dataset has 22 sequences with stereo and LiDAR data (results of 86 methods are available online). The LiDAR is a vertical Velodyne HDL64 on the roof of a car. Eleven sequences are provided with ground truth (GPS+IMU navigation) and eleven sequences without ground truth for odometry evaluation. The dataset covers a wide variety of environments (urban city, rural roads, highways, roads with heavy vegetation, low or high traffic, etc.). More details on the evaluation metric are available online\footnote{\url{http://www.cvlibs.net/datasets/kitti/eval_odometry.php}} and in~\cite{geiger2012}. The LiDAR scans are de-skewed using an external odometry, so we did not apply our egomotion algorithm to this dataset.
On the training dataset, we obtain 0.55\% drift in translation and 0.0015~deg/m error in rotation. By comparison, \cite{ceriani2015} reported around 1.5\% drift in translation and 0.01~deg/m error in rotation. Table~\ref{table:training_kitti_results} compares our results on the training dataset to LOAM~\cite{zhang2017}; we outperform the previously published results.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c||c|c|c|}
\hline
\textbf{Sequence} & \textbf{Environment} & \textbf{LOAM~\cite{zhang2017}} & \textbf{Our SLAM}\\
\hline
0 & Urban & 0.78\% & \textbf{0.50\%}\\
\hline
1 & Highway & 1.43\% & \textbf{0.82\%}\\
\hline
2 & Urban+Country & 0.92\% & \textbf{0.53\%}\\
\hline
3 & Country & 0.86\% & \textbf{0.68\%}\\
\hline
4 & Country & 0.71\% & \textbf{0.33\%}\\
\hline
5 & Urban & 0.57\% & \textbf{0.32\%}\\
\hline
6 & Urban & 0.65\% & \textbf{0.33\%}\\
\hline
7 & Urban & 0.63\% & \textbf{0.33\%}\\
\hline
8 & Urban+Country & 1.12\% & \textbf{0.80\%}\\
\hline
9 & Urban+Country & 0.77\% & \textbf{0.55\%}\\
\hline
10 & Urban+Country & 0.79\% & \textbf{0.53\%}\\
\hline
\end{tabular}
\end{center}
\caption{Comparison of drift on the KITTI training dataset between our SLAM and the LOAM state-of-the-art odometry (LOAM results are taken from~\cite{zhang2017}). Our method outperforms LOAM in every type of environment.}
\label{table:training_kitti_results}
\end{table}
On the test dataset, we obtain 0.69\% drift in translation and 0.0018~deg/m error in rotation (visible on the KITTI website). This is better than the state-of-the-art published results of LOAM in~\cite{zhang2017}, where they report 0.88\% drift. On the KITTI website, LOAM has since improved its results, which are slightly better than ours with 0.64\% drift.
The drift we get on the KITTI benchmark is not as good as the results obtained with the Velodyne HDL32, for three reasons. First, we found a distortion of the scan point clouds caused by a bad intrinsic calibration (using the training data, we corrected the intrinsic vertical angle of all laser beams by $0.22$ degrees). Second, we found large errors in the GPS data used as ground truth, e.g., more than $5~m$ at the beginning of sequence~8. Third, the environment has more variety (vegetation, highways, etc.) than the urban environment of the Velodyne HDL32 tests.
We measured the contributions of the different parts of the algorithm on the KITTI training dataset. Table~\ref{table:object_removal_kitti} shows the importance of dynamic object removal. Table~\ref{table:sampling_kitti} shows the contribution of our sampling strategy compared to random sampling and geometrically stable sampling (from~\cite{gelfand2003}). Table~\ref{table:model_kitti} shows the importance of the parameter $n$: the results keep improving when taking $n=100$ instead of only $n=10$ scans in the map. We also tried keeping more than $n=100$ scans, but this does not change the results, because the oldest scans are then too far from the current scan to have an influence. We also varied the number of samples $s$ in Table~\ref{table:samples_kitti}. When the number of samples is too small ($s=10$), we do not have enough points for good observability when matching the scan to the map point cloud. When it is too large ($s=1000$), the results worsen because we keep too many points from the scan (as explained in~\cite{gelfand2003}, keeping too many points can alter the constraints used to find the final pose).
\begin{table}[h]
\begin{center}
\begin{tabular}{|c||c|}
\hline
\textbf{Object Removal} & \textbf{Drift on KITTI training dataset}\\
\hline
Without & 0.58\%\\
\hline
With & \textbf{0.55}\%\\
\hline
\end{tabular}
\end{center}
\caption{Effect of dynamic object removal on the drift on the KITTI training dataset}
\label{table:object_removal_kitti}
\end{table}
\begin{table}[h]
\begin{center}
\begin{tabular}{|c||c|}
\hline
\textbf{Sampling strategy} & \textbf{Drift on KITTI training dataset}\\
\hline
Random sampling & 0.64\%\\
\hline
Geometric stable sampling~\cite{gelfand2003} & 0.57\%\\
\hline
Our sampling & \textbf{0.55\%}\\
\hline
\end{tabular}
\end{center}
\caption{Effect of the scan sampling strategy on the drift on the KITTI training dataset}
\label{table:sampling_kitti}
\end{table}
\begin{table}[h]
\begin{center}
\begin{tabular}{|c||c|}
\hline
\textbf{Parameter $n$} & \textbf{Drift on KITTI training dataset}\\
\hline
$n=1$ scan & 1.41\%\\
\hline
$n=5$ scans & 0.58\%\\
\hline
$n=10$ scans & 0.56\%\\
\hline
$n=100$ scans & \textbf{0.55\%}\\
\hline
\end{tabular}
\end{center}
\caption{Effect of the parameter $n$, the number of past scans kept as the model, on the drift on the KITTI training dataset}
\label{table:model_kitti}
\end{table}
\begin{table}[h]
\begin{center}
\begin{tabular}{|c||c|}
\hline
\textbf{Parameter $s$} & \textbf{Drift on KITTI training dataset}\\
\hline
$s=10$ samples/list & 0.79\%\\
\hline
$s=100$ samples/list & \textbf{0.55\%}\\
\hline
$s=1000$ samples/list & 0.57\%\\
\hline
\end{tabular}
\end{center}
\caption{Effect of the parameter $s$, the number of samples used for matching, on the drift on the KITTI training dataset}
\label{table:samples_kitti}
\end{table}
Figure~\ref{fig:kitti_sequence} shows two point clouds produced by our LiDAR odometry on the KITTI dataset. We can see the details of the environment (cars, poles) and the large number of outliers, to which our method is robust.
\begin{figure}[!ht]
\begin{tabular}{c}
\includegraphics[width=0.92\linewidth]{figures/kitti_sequence_0.jpg} \\
\includegraphics[width=0.92\linewidth]{figures/kitti_sequence_6.jpg} \\
\end{tabular}
\caption[width=\linewidth]{Point clouds generated from sequence 0 (top) and 6 (bottom) of the KITTI training dataset with our LiDAR odometry. We can see the details of the different objects even with multiple passages, like in sequence 6 (bottom).}
\label{fig:kitti_sequence}
\end{figure}
\subsection{Discussion of the processing time}
Our IMLS SLAM implementation is not real time. First, we compute the normals of every scan using the 3D point cloud of the scan instead of using the fast normal computation of~\cite{badino2011} in the spherical range image (which is 17 times faster); this is because the KITTI dataset provides only 3D point clouds and not the raw LiDAR data. Our normal computation takes $0.2~s$ per scan. Second, for every scan we build a new k-d tree from the whole point cloud $P_k$ to find the nearest neighbors in the IMLS formulation. This takes time that depends on the number $n$ of stored scans; when $n=100$, it takes $1~s$ per scan. One solution would be to maintain a dedicated k-d tree across scans, only removing the points of the oldest scan and adding the points of the newest one. The matching iterations themselves are very fast (thanks to the limited number of queries with our sampling strategy) and take $0.05~s$ per scan. In total, with the KITTI dataset and our implementation, our SLAM runs at $1.25~s$ per scan. We believe it could reach real time with a faster normal computation and a dedicated k-d tree. For comparison, as explained in~\cite{zhang2017}, LOAM runs at $1~s$ per scan on the KITTI dataset.
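The rolling-map strategy just described (keep the last $n$ localized scans, rebuild the k-d tree after each insertion) can be sketched as follows. This is an illustrative sketch using SciPy, not the authors' FLANN-based implementation:

```python
import numpy as np
from collections import deque
from scipy.spatial import cKDTree

class RollingMap:
    """Model point cloud P_k made of the last n localized scans.
    The k-d tree is fully rebuilt after every insertion, which is the
    costly strategy discussed above; an incremental tree that only
    drops the oldest scan's points would avoid the rebuild."""
    def __init__(self, n=100):
        self.scans = deque(maxlen=n)   # oldest scan dropped automatically
        self.tree = None
    def add_scan(self, pts):
        self.scans.append(np.asarray(pts))
        self.tree = cKDTree(np.vstack(self.scans))
```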
\section{Conclusion}
We presented a new 3D LiDAR SLAM based on a specific sampling strategy and a new scan-to-model matching. Experiments showed low-drift results on Velodyne HDL32 datasets and results among the best on the KITTI benchmark. We believe our method could be made to run in real time in future work.
{\small
\bibliographystyle{IEEEtran}
}
\section{Appendix}
\vspace*{-3mm}
{\it Order Parameter for the Double Helix Phase} ---
Numerical minimization of the GL functional to determine the exact structure of the $\text{S}_\mathbf{A}${}
phase is made efficient, without loss of accuracy, by developing the $\phi$-dependence as
an expansion in symmetry-preserving harmonics,
\begin{align}
A_{\alpha i} = \hat{d}_\alpha
\sum_{j=0}^{\infty}
&\left\lbrace
\vphantom{\sum}\quad \Delta^\prime_{r,j}(r)\sin[2j(\phi+Qz)]\,\hat{r}_i
\right.
\nonumber\\
&+ i\,\Delta^{\prime \prime}_{r,j}(r)\cos[(2j+1)(\phi+Qz)]\,\hat{r}_i
\vphantom{\sum}
\nonumber\\
&+ \Delta^\prime_{\phi,j}(r)\cos[2j(\phi+Qz)]\, \hat{\phi}_i
\vphantom{\sum}
\nonumber\\
&+ i\,\Delta^{\prime \prime}_{\phi,j}(r)\sin[(2j+1)(\phi+Qz)]\, \hat{\phi}_i
\vphantom{\sum}
\nonumber\\
&+ \Delta^\prime_{z,j}(r)\cos[2j(\phi+Qz)] \,\hat{z}_i
\vphantom{\sum}
\nonumber\\
&\left. \vphantom{\sum}+ i\,\Delta^{\prime \prime}_{z,j}(r)\sin[(2j+1)(\phi+Qz)] \,\hat{z}_i
\right\rbrace
\,.
\label{eq:op-full}
\end{align}
The numerical result for the $\text{S}_\mathbf{A}${} phase converges rapidly to the exact solution with the
addition of higher harmonics.
{\it Sensitivity of the Phase Diagram to Strong Pairbreaking} ---
The anisotropic chiral $\text{A}_\mathbf{C_2}${} and $\text{S}_\mathbf{A}${} phases are favored under conditions of strong pairbreaking
on the boundary because the energy cost of the boundary half-disgyrations is minimal due to
the suppression of all the order-parameter components. By contrast, the $\text{A}_\mathbf{SO(2)}${} phase, which hosts
a radial disgyration at the center of the cell, is disfavored relative to both anisotropic chiral phases.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{PhaseDiagram-bT-00_R-100.pdf}
\caption{Phase diagram for the cylindrical channel with $R=100\mbox{nm}$
and $b_T^\prime=0$ (maximal pairbreaking). The $\text{S}_\mathbf{A}${} phase has displaced
much of the $\text{A}_\mathbf{C_2}${} phase compared to the phase diagram calculated for only
$z$ translationally invariant phases \cite{wim15}.
}
\label{diagram-maximal}
\end{center}
\end{figure}
This is reflected in the phase diagram for maximal pairbreaking, $b_T^{\prime}=0$,
shown in Fig.~\ref{diagram-maximal} for a $R=100\,\mbox{nm}$
cylindrical channel. Note that strong coupling, which is relatively stronger at higher
temperatures, favors the helical phase over the translationally invariant $\text{A}_\mathbf{C_2}${} phase.
Also note that the $\text{S}_\mathbf{B}${} phase is not stable at this confinement ($R=100$ nm) for maximal
pairbreaking.
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.4in]{PhaseDiagram-T_vs_R-bT-01_p-26bar.pdf}
\caption{Phase diagram: temperature versus radius for cylindrical channel at
$p=26\,\mbox{bar}{}$ with strong pairbreaking. $b_T^\prime=0.1$.
All six phases we have found are shown in this diagram, although the $\text{S}_\mathbf{B}${}
phase is stable in a very narrow window of $R$ and $T$.}
\label{diagram-radius}
\end{center}
\end{figure}
We also include the phase diagram as a function of the channel radius $R$ (Fig.
\ref{diagram-radius}) for $p = 26\,\mbox{bar}{}$ and $b_T^\prime=0.1$. The $\text{S}_\mathbf{A}${}
phase is clearly favored by high confinement relative to the $\text{A}_\mathbf{C_2}${}, $\text{A}_\mathbf{SO(2)}${}, and $\text{B}_\mathbf{SO(2)}${} phases.
However, at this pressure the $\text{S}_\mathbf{B}${} phase is very fragile and stable only at very low
temperatures where non-local corrections to the gradient energy, which are not included
in the extended GL functional, are likely relevant.
We also note that there is a critical radius above which the $\text{A}_\mathbf{C_2}${} phase appears.
\section{Introduction}
Decoherence, dephasing and dissipation in large open quantum systems
are important phenomena in a broad variety of fields, such as nonadiabatic
processes in chemistry and materials science, \cite{Head-Gordon1995,schaller2005breaking,baer2003therole,Baer2006,Gdor2015,shenvi2012nonadiabatic,dong2015observation},
quantum biology~\cite{collini2010coherently,romero2014quantum} and
quantum information~\cite{ingarden2013information,schlosshauer2007decoherence}.
They are commonly described using the concept of the density matrix
(DM), which generalizes the notion of a wave function as the quantum
state descriptor. Despite great success in atomic physics, DM approaches
have not found extensive application in the field of large electronic
systems, except in cases of small systems, where it is sufficient
and possible to address only a small number of electronic states \cite{prezhdo1999mean,muhlbacher2008real,abramavicius2009coherent,Esposito2009,wilner2013bistability,cohen2013numerically,schinabeck2016hierarchical}.
For describing the quantum dynamics of open systems having a large
number of electrons and electronic states a different approach is
probably needed. Here, it is natural to consider time-dependent (current)
density functional theory (TDDFT), based on the Runge-Gross (RG) theorem~\cite{Runge1984}
which simplifies the treatment of the dynamics of interacting electrons
by mapping them onto non-interacting Fermions. Extensions of the RG
theorem to open systems have indeed appeared \cite{Burke2005,Kurth2005,Zheng2007,Pershin2008,Yuen-Zhou2010},
but follow-up progress has yet to be achieved, mainly because non-interacting
Fermions develop an effective interaction
through the coupling with the bath.\footnote{This is true when the electrons interact with the bath through the
one-body density matrix, which is the case of interest here and in most
practical applications. There exists an important class of problems
in which the electron interaction with the bath is ``linear'' in
the particle creation/destruction operators, allowing an easier TDDFT
adaptation (see \cite{Kurth2005}).}
In this paper, we develop a method to describe the DM time evolution
of non-interacting Fermions (Section~\ref{sec:Fermion-Unraveling})
as they are coupled to an external bath. We work within the Lindblad
formalism~\cite{Lindblad1976,gorini1976completely,Breuer2002,schaller2014open},
which is useful for describing Markovian open-system dynamics. The
method makes use of the unraveling procedure, which transforms the
Lindblad equation on the DM into a random walk in wave functions space.
The effective Fermion-Fermion interactions induced by the bath are
converted into \emph{additional }random-walk terms. Applications of
the method, first to an analytically solvable model and then to a
system of trapped 1D Fermions in a double-well are given in Section~\ref{sec:Applications-to-trapped}.
We believe that the present development forms a significant stepping
stone for applying TDDFT to the study of the dynamics of open electronic
systems in the future.
\begin{figure*}[t]
\begin{raggedright}
\includegraphics[width=1\textwidth]{HarmonicTraj}
\par\end{raggedright}
\caption{\label{fig:harmonic-XP}The time-dependent total displacement $X_{t}$,
total momentum $P_{t}$, total energy $H_{t}$ and total kinetic energy
$T_{t}$ transients for $8$ non-interacting Fermions in the Harmonic
trap of Section~\ref{subsec:A-harmonic-trap}, starting from the
pure state $\hat{\rho}_{\theta}=\left|\Phi_{\theta}\right\rangle \left\langle \Phi_{\theta}\right|$
with $\theta=\pi/4$. The $X_{t}$ and $P_{t}$ panels show analytical
transients (Eq.~(\ref{eq:xt})) as dashed lines while the calculated
results of 10 independent runs (each having 160 trajectories) are
shown as symbols. The $H_{t}$ and $T_{t}$ results are depicted as
statistical error bars centered on the average over the 10 runs. The
computed results shown in the left and right panels are based on different
time steps $\Delta t$ and number of HS iterations $N_{HS}$. }
\end{figure*}
\section{\label{sec:Fermion-Unraveling}Fermion Unraveling }
The density matrix (DM) operator $\hat{\rho}$ represents the quantum
state of a system, open or closed, generalizing the concept of a pure
wave function. It can be written in terms of its eigenvalues $p_{s}$
and eigenfunctions $\Phi^{s}$ as
\begin{equation}
\hat{\rho}=\sum_{s}p_{s}\left|\Phi^{s}\right\rangle \left\langle \Phi^{s}\right|
\end{equation}
where the eigenvalue $p_{s}$ is the probability for the system
to be in state $\Phi^{s}$. Clearly the DM must be Hermitean, positive-definite
($p_{s}>0$) and unit-traced ($\sum_{s}p_{s}=1$). If $\hat{O}$ is
an operator corresponding to an observable property, then the expectation
value of its measurement is expressed neatly as a trace: $O=\mathrm{tr}\left[\hat{\rho}\hat{O}\right]=\sum_{s}p_{s}\left\langle \Phi^{s}\left|\hat{O}\right|\Phi^{s}\right\rangle $.
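For a concrete two-level illustration of $O=\mathrm{tr}[\hat{\rho}\hat{O}]$ (the probabilities and observable below are arbitrary illustrative choices):

```python
import numpy as np

p = np.array([0.7, 0.3])                    # probabilities p_s
phi = np.array([[1.0, 0.0], [0.0, 1.0]])    # orthonormal eigenstates Phi^s
rho = sum(ps * np.outer(v, v.conj()) for ps, v in zip(p, phi))
O = np.array([[1.0, 0.0], [0.0, -1.0]])     # observable (sigma_z)
expval = np.trace(rho @ O).real             # tr[rho O]
# equals the ensemble average sum_s p_s <Phi^s|O|Phi^s>
alt = sum(ps * (v.conj() @ O @ v).real for ps, v in zip(p, phi))
```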
In any time-dependent process, the DM evolution is determined by an
equation of motion which must preserve its trace, its Hermiticity
and its positivity. The most general ``Markovian'' equation of motion
that respects these basic tenants is the so-called Lindblad-equation
\cite{Lindblad1976,Alicki2007,Breuer2002,schaller2014open}:
\begin{equation}
\dot{\hat{\rho}}\left(t\right)=-\frac{i}{\hbar}\left[\hat{H},\hat{\rho}\right]+\text{\ensuremath{\mathfrak{D}}}\hat{\rho}\left(t\right)\label{eq:LindbladMasterEq}
\end{equation}
where $\hat{H}=\hat{H}^{\dagger}$ is the Hermitean \emph{effective
Hamiltonian} and $\text{\ensuremath{\mathfrak{D}}}\hat{\rho}$ is the \emph{dissipative} part, which is of the form:
\begin{equation}
\text{\ensuremath{\mathfrak{D}}}\hat{\rho}=\left(\hat{L}^{\alpha}\hat{\rho}\hat{L}^{\alpha\dagger}-\frac{1}{2}\hat{L}^{\alpha\dagger}\hat{L}^{\alpha}\hat{\rho}-\frac{1}{2}\hat{\rho}\hat{L}^{\alpha\dagger}\hat{L}^{\alpha}\right)\label{eq:Disspitaor}
\end{equation}
where $\hat{L}^{\alpha}$ are \emph{Lindblad operators} ($\alpha=1,2,\dots,N_{L}$,
and we adopt the Einstein convention that repeated dummy indices are
summed). These equations of motion are supplemented by the initial
condition $p_{s}\left(0\right)$ and $\Phi_{0}^{s}$ at $t=0$.
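The right-hand side of Eqs.~(\ref{eq:LindbladMasterEq})--(\ref{eq:Disspitaor}) can be written directly as a matrix function; by construction it preserves the trace and Hermiticity of $\hat{\rho}$ (a minimal sketch in matrix form, independent of the authors' Fermionic setting):

```python
import numpy as np

def lindblad_rhs(rho, H, Ls, hbar=1.0):
    """Right-hand side of the Lindblad equation:
    drho/dt = -(i/hbar)[H, rho]
              + sum_a (L_a rho L_a^+ - 1/2 {L_a^+ L_a, rho})."""
    out = -1j / hbar * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out
```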
For many-body systems, working with the DM is difficult, if not impossible
and therefore unraveling procedures~\cite{carmichael1993quantum,wiseman1993quantum,dalibard1992wave,Gisin1992}
were developed where the DM is represented as an expected value involving
\emph{random }wave functions $\Phi\left(t\right)$
\begin{equation}
\hat{\rho}\left(t\right)=E\left\{ \frac{\left|\Phi\left(t\right)\right\rangle \left\langle \Phi\left(t\right)\right|}{\left\langle \Phi\left(t\right)\left|\Phi\left(t\right)\right.\right\rangle }\right\} .
\end{equation}
Each of these states $\Phi\left(t\right)$ starts from a randomly
selected initial state $\Phi_{0}^{s}$ with probability $p_{s}$ and
is then evolved separately in time according to the following nonlinear
stochastic Schr\"odinger equation:
\begin{align}
d\Phi\left(t\right) & =\left[-\frac{i}{\hbar}\hat{H}dt\right.\label{eq:unraveling-two-body}\\
& \left.+\left(\left(\left\langle L^{\alpha}\right\rangle _{t}^{*}-\frac{1}{2}\hat{L}^{\alpha\dagger}\right)dt+dw_{\alpha}\right)\hat{L}^{\alpha}\right]\Phi\left(t\right)\nonumber
\end{align}
where $dw_{\alpha}$ are independent Wiener processes with $E\left[dw^{\alpha}dw^{\beta}\right]=\delta^{\alpha\beta}dt$
and $\left\langle L^{\alpha}\right\rangle _{t}\equiv\frac{\left\langle \Phi\left(t\right)\left|\hat{L}^{\alpha}\right|\Phi\left(t\right)\right\rangle }{\left\langle \Phi\left(t\right)\left|\Phi\left(t\right)\right.\right\rangle }$~\cite{Gisin1992}
or $\left\langle L^{\alpha}\right\rangle _{t}=\frac{Re\left\langle \Phi\left(t\right)\left|\hat{L}_{\alpha}\right|\Phi\left(t\right)\right\rangle }{\left\langle \Phi\left(t\right)\left|\Phi\left(t\right)\right.\right\rangle }$.~\cite{wiseman1993quantum}
This equation is much easier to handle than Eq.~(\ref{eq:LindbladMasterEq})
since it involves only the wave function. But this comes with a sizable
price tag: a non-linear Schr\"odinger equation combined with stochastic
noise.
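A minimal Euler--Maruyama discretization of Eq.~(\ref{eq:unraveling-two-body}) for a single Lindblad operator can be sketched as follows (our own discretization, not the propagator used in the paper):

```python
import numpy as np

def sse_step(phi, H, L, dt, rng, hbar=1.0):
    """One Euler-Maruyama step of the nonlinear stochastic Schroedinger
    equation above, for a single Lindblad operator L."""
    nrm2 = np.vdot(phi, phi).real
    expL = np.vdot(phi, L @ phi) / nrm2          # <L>_t
    dw = rng.normal(scale=np.sqrt(dt))           # Wiener increment
    dphi = (-1j / hbar * dt * (H @ phi)
            + (np.conj(expL) * dt + dw) * (L @ phi)
            - 0.5 * dt * (L.conj().T @ (L @ phi)))
    return phi + dphi
```

With $\hat{L}=0$ the step reduces to a deterministic Euler step of the Schr\"odinger equation, which gives a simple sanity check.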
The unraveling procedure given above applies to all Lindblad equations,
in particular for non-interacting Fermion systems, where the effective
Hamiltonian and the Lindblad operators are one-body operators:
\begin{align}
\hat{H} & =\sum_{n}\hat{h}\left(n\right)\label{eq:H-hat}\\
\hat{L}^{\alpha} & =\sum_{n}\hat{\ell}^{\alpha}\left(n\right)\label{eq:L-alpha}
\end{align}
where
\begin{equation}
\hat{h}=\frac{\hat{p}^{2}}{2m}+V\left(\hat{x}\right),\label{eq:single-particle-h}
\end{equation}
is a single-electron Hamiltonian ($\hat{h}\left(n\right)$ is this
Hamiltonian applied to electron number $n$). One notices that the
term $\hat{L}^{\alpha\dagger}\hat{L}^{\alpha}=\sum_{nm}\hat{\ell}^{\alpha\dagger}\left(n\right)\hat{\ell}^{\alpha}\left(m\right)$
appearing in Eq.~(\ref{eq:unraveling-two-body}) is a two-body operator,
and thus the unraveling of non-interacting electrons is essentially
an interacting-electron problem.
We make progress here through the Hubbard-Stratonovich transformation
\cite{stratonovich1957rl,Hubbard1959}, $e^{-\frac{1}{2}\hat{R}^{2}dt}\propto\int_{-\infty}^{\infty}e^{-\frac{\eta^{2}}{2dt}}e^{i\hat{R}\eta}d\eta$,
which converts Eq.~(\ref{eq:unraveling-two-body}) into a new equation
involving a 3-component Brownian (Wiener) motion:
\begin{align}
d\Phi\left(t\right) & =-\frac{i}{\hbar}\left(\hat{H}dt-i\left(d\hat{H}_{R}+d\hat{H}_{C}+d\hat{H}_{S}\right)\right)\Phi\left(t\right)\label{eq:unraveling-one-body}
\end{align}
where:
\begin{align}
d\hat{H}_{R} & \equiv-\hbar\left[\left\langle L^{\alpha}\right\rangle _{t}^{*}dt+dw_{\alpha}+idu_{\alpha}\right]\hat{R}^{\alpha}\\
d\hat{H}_{S} & \equiv-\hbar\left[\left\langle L^{\alpha}\right\rangle _{t}^{*}dt+dw_{\alpha}+dv_{\alpha}\right]i\hat{S}^{\alpha}\\
d\hat{H}_{C} & \equiv\hbar\sum_{\alpha}\hat{C}^{\alpha}dt
\end{align}
where $\hat{R}^{\alpha}=Re\left[\hat{L}^{\alpha}\right]$, $\hat{S}^{\alpha}=Im\left[\hat{L}^{\alpha}\right]$
and $\hat{C}^{\alpha}=i\left[\hat{R}^{\alpha},\hat{S}^{\alpha}\right]_{-}$
are three Hermitean one-particle operators (so that $\hat{L}^{\alpha\dagger}\hat{L}^{\alpha}=\hat{R}^{\alpha}\hat{R}^{\alpha}+\hat{S}^{\alpha}\hat{S}^{\alpha}+\hat{C}^{\alpha}$),
and where, like $dw_{\alpha}$, the increments $du_{\alpha}$ and $dv_{\alpha}$
are each Wiener processes, i.e., random numbers drawn from the normal
distribution with mean zero and variance $dt$. There is, however,
an important, delicate point here: for each $dw_{\alpha}$ we must
sample $du_{\alpha}$ and $dv_{\alpha}$ many times so as to enable
an accurate calculation of $\left\langle L^{\alpha}\right\rangle _{t}$.
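The scalar version of the Hubbard--Stratonovich identity used above, $E\left[e^{iR\eta}\right]=e^{-R^{2}dt/2}$ for $\eta\sim N(0,dt)$, can be checked by Monte Carlo sampling (the values of $R$ and $dt$ below are illustrative):

```python
import numpy as np

# Monte Carlo check of the scalar Hubbard-Stratonovich identity
# E[exp(i R eta)] = exp(-R^2 dt / 2) for eta ~ N(0, dt).
rng = np.random.default_rng(4)
R, dt = 1.3, 0.01
eta = rng.normal(scale=np.sqrt(dt), size=200_000)
mc = np.exp(1j * R * eta).mean().real
exact = np.exp(-0.5 * R ** 2 * dt)
```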
Hence, the algorithm we use to evolve the DM of non-interacting Fermions
is as follows:
\begin{enumerate}
\item Assume we have the Slater wave function $\Phi\left(t\right)=\det\left[\phi_{1}\left(t\right)\cdots\phi_{N}\left(t\right)\right]$
and the expected values $L^{\alpha}\left(t\right)$.
\item Propagate from $t\to t+dt$:
\begin{enumerate}
\item Sample $dw_{\alpha}$.
\item Holding $dw_{\alpha}$ fixed, we sample $du_{\alpha}$ and $dv_{\alpha}$
$N_{HS}$ times and for each pair of such values we propagate in time
$\Phi\left(t\right)$ to a new Slater wave-function $\Phi^{\left(k\right)}\left(t+\Delta t\right)=\det\left[\phi_{1}^{\left(k\right)}\left(x_{1}\right)\cdots\phi_{N}^{\left(k\right)}\left(x_{N}\right)\right]$
\begin{equation}
\phi_{n}^{\left(k\right)}\left(t+\Delta t\right)=e^{-\frac{i}{\hbar}\left(\hat{h}dt-i\left(d\hat{h}_{R}+d\hat{h}_{C}+d\hat{h}_{S}\right)\right)}\phi_{n}\left(t\right)
\end{equation}
for $k=1,\dots,N_{HS}$ .
\item Generate from $\Phi^{\left(1\right)},\dots,\Phi^{\left(N_{HS}\right)}$
the one-particle density matrix $\rho_{1}\left(r,r'\right)$ and diagonalize
it:
\begin{equation}
\rho_{1}\left(r,r'\right)=\sum_{n}\tilde{w}_{n}\tilde{\phi}_{n}\left(r\right)\tilde{\phi}_{n}\left(r'\right)^{*},
\end{equation}
(where $\tilde{w}_{1}\ge\tilde{w}_{2}\ge\tilde{w}_{3}\ge\dots$). Now
select the first $N$ eigenfunctions $\tilde{\phi}_{n}$ of $\rho_{1}$
and form from them the Slater wave function to be used as the wave
function for the next time step, $\Phi\left(t+dt\right)\equiv\det\left[\tilde{\phi}_{1}\cdots\tilde{\phi}_{N}\right]$.
We note that $\Phi\left(t+dt\right)$ is the single-determinant wave
function which reproduces the one-body DM $\rho_{1}$ as closely as possible.
The expected values of the Lindblad operators
to be used in the next iteration are then calculated as:
\begin{equation}
\left\langle L^{\alpha}\right\rangle _{t+dt}=\sum_{n=1}^{N}\left\langle \tilde{\phi}_{n}\left|\hat{\ell}^{\alpha}\right|\tilde{\phi}_{n}\right\rangle .
\end{equation}
\end{enumerate}
\end{enumerate}
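Step 2c can be sketched as follows. This is our own minimal version: the propagated orbitals are re-orthonormalized by QR before forming the averaged one-particle DM, an assumption we add because the non-unitary propagation does not preserve orthonormality:

```python
import numpy as np

def collapse_to_slater(orbital_sets):
    """Average the one-particle DMs of the N_HS propagated Slater states
    and keep the N leading natural orbitals (step 2c above)."""
    g, N = orbital_sets[0].shape
    rho1 = np.zeros((g, g), complex)
    for phi in orbital_sets:              # phi: (grid, N) orbital matrix
        q, _ = np.linalg.qr(phi)          # re-orthonormalize the orbitals
        rho1 += q @ q.conj().T            # one-particle DM of one walker
    rho1 /= len(orbital_sets)
    w, v = np.linalg.eigh(rho1)
    order = np.argsort(w)[::-1]           # occupations in decreasing order
    return v[:, order[:N]], w[order[:N]]
```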
The last step of the algorithm involves collapsing the Hubbard-Stratonovich
step into a Slater wave function having a similar one-body density
matrix. This step can be generalized and one can retain a wave function
which is a linear combination of $Q\ge1$ determinants that yield
a similar one-body density matrix. In principle, one needs to take
$Q\to\infty$ but in practice we should check that the calculation
is converged with respect to $Q$. In the present paper we do not
attempt to converge the calculation results with respect to $Q$.
In many applications the system is driven to its thermal equilibrium state.
\section{\label{sec:Applications-to-trapped}Applications to trapped 1D Fermions}
To demonstrate the validity of the method we report calculations on
systems of $N$ non-interacting spin-up Fermions of mass $m=1m_{e}$
(atomic units are used in all reported numerical results) trapped
in a 1D potential $V\left(x\right)$ (see Eq.~(\ref{eq:single-particle-h}))
and using only one Lindblad operator
\begin{equation}
\hat{\ell}\equiv\sqrt{\frac{m\omega_{\ell}\gamma}{2\hbar N}}\left(\hat{x}+\frac{i}{m\omega_{\ell}}\hat{p}\right)
\end{equation}
to be used in Eq.~(\ref{eq:L-alpha}). Note that $\hat{\ell}$ is
the lowering ladder operator for a harmonic oscillator of frequency
$\omega_{\ell}$ (although here it is still a one-body Fermionic operator).
In the results shown below, we use $\omega_{\ell}=1E_{h}/\hbar$,
$\gamma=0.2E_{h}/\hbar$ and $N=8$ Fermions. The calculations were
carried out using a high-order numerical implementation of the algorithm
depicted in the previous section, where the single particle wave functions
and operators were represented on a Fourier grid and the non-unitary
time propagation was performed using a high-degree interpolating polynomial
in the Newton form.\cite{tal1988high,Kosloff1994}
\subsection{\label{subsec:A-harmonic-trap}Validation: Fermions in a harmonic
trap}
To demonstrate the validity of the method we apply it to a system
of $N=8$ Fermions having the Hamiltonian of Eq.~(\ref{eq:single-particle-h})
with a Harmonic potential
\begin{align}
V\left(x\right) & =\frac{1}{2}m\omega^{2}x^{2},\label{eq:vHarmonic}\\
\omega & =\omega_{\ell}=1E_{h}/\hbar.\nonumber
\end{align}
In this case, the expectation values of the total displacement $X_{t}=\left\langle \sum_{n=1}^{N}\hat{x}_{n}\right\rangle _{t}$
and total momentum $P_{t}=\left\langle \sum_{n=1}^{N}\hat{p}_{n}\right\rangle _{t}$
can be determined analytically directly from the Lindblad equation:
\begin{align}
X_{t}^{an} & =\left(X_{0}\cos\omega t+\frac{P_{0}}{m\omega}\sin\omega t\right)e^{-\frac{\gamma}{2}t}\label{eq:xt}\\
P_{t}^{an} & =\left(P_{0}\cos\omega t-m\omega X_{0}\sin\omega t\right)e^{-\frac{\gamma}{2}t}.\label{eq:pt}
\end{align}
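For reference, these damped trajectories can be tabulated with a short helper; note that the first term of $X_{t}^{an}$ must carry $X_{0}$ so that $X_{t=0}^{an}=X_{0}$. The function below is a hypothetical helper for plotting and checking, not part of the production code.

```python
import numpy as np

def analytic_XP(t, X0, P0, m=1.0, omega=1.0, gamma=0.2):
    """Damped analytic trajectories of the total displacement and momentum
    (atomic units; default parameters follow the values quoted in the text)."""
    decay = np.exp(-0.5 * gamma * t)
    X = (X0 * np.cos(omega * t) + P0 / (m * omega) * np.sin(omega * t)) * decay
    P = (P0 * np.cos(omega * t) - m * omega * X0 * np.sin(omega * t)) * decay
    return X, P
```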
These trajectories depend only on the initial values of the
total displacement $X_{0}$ and momentum $P_{0}$, and not explicitly
on the number of Fermions $N$ or on other properties of the initial
state. In our demonstration we start from a pure state, taken
as the non-stationary Slater wave function
\begin{align}
\Phi_{\theta} & =\frac{1}{\sqrt{N!}}\det\left[\psi_{1}\left(x_{1}\right)\cdots\psi_{N-1}\left(x_{N-1}\right)\varphi_{\theta}\left(x_{N}\right)\right]\label{eq:InitialPureState}
\end{align}
where $\left\{ \psi_{n}\left(x\right)\right\} _{n=1}^{N+1}$ are the
$N+1$ lowest energy single-particle eigenstates (so-called molecular
orbitals (MO)) of $\hat{h}$, and
\begin{equation}
\varphi_{\theta}\left(x\right)=\psi_{N}\left(x\right)\cos\theta+\psi_{N+1}\left(x\right)\sin\theta\label{eq:varPhi-Theta}
\end{equation}
is a linear combination involving the highest occupied MO (HOMO) $\psi_{N}\left(x\right)$
and the lowest unoccupied MO (LUMO) $\psi_{N+1}\left(x\right)$. The
angle $\theta$ is taken as $\pi/4$, expressing an equal weight of
these two orbitals.
In Fig.~\ref{fig:harmonic-XP} we show the analytical trajectory
and the results of 10 independent runs, each based on $N_{traj}=160$
trajectories. The results in the left panel use a time step of $\Delta t=1\hbar/E_{h}$
with $N_{HS}=10$ HS iterations per step, while in the right panel
$\Delta t=0.25\hbar/E_{h}$ and $N_{HS}=80$. It can be seen that
the numerical results follow closely the analytical trajectories,
with somewhat improved performance for the smaller time step and more
intensive Hubbard-Stratonovich sampling. The total and kinetic energies
for the trajectories decay to a finite value as $t$ grows. The asymptotic
values for the total and kinetic energies are pushed closer to their
ground state values (which, for this system are $E=32E_{h}$ and $T=16E_{h}$
respectively) as we reduce $\Delta t$ and increase the number of
HS iterations.
A closer look into the accuracy of the dynamics is given in Fig.~\ref{fig:Bias},
where the 75\% confidence intervals (CIs) for the difference $X_{t}-X_{t}^{an}$
are given at two times, namely $t=23\hbar/E_{h}$ and $t=25\hbar/E_{h}$,
as a function of $N_{HS}$ and for two time steps $\Delta t$. For
$N_{HS}<8$ the results show explicit bias since the error bars of
$N_{HS}\ge8$ are almost non-overlapping with those of $N_{HS}<8$.
For $N_{HS}>8$ the main effect of $N_{HS}$ is reduction of the error
bars (namely improved sampling removes noise). Even for $N_{HS}>8$
the confidence intervals do not include the exact result ($X_{t}-X_{t}^{an}=0$)
showing that a bias exists due to another source, namely the time-step
error. Indeed, as $\Delta t$ decreases from $1$ to $0.25$ this bias
decreases substantially. Hence the time-step error is the main source
of bias for $N_{HS}\ge8$.
Note, however, that the correct value, namely $X_{t}-X_{t}^{an}=0$,
will not be included in the CIs as we increase $N_{traj}$, due
to the finite-$N_{HS}$ and finite-$\Delta t$ errors, which are clearly
noticeable and which can be systematically reduced by increasing $N_{HS}$
and by diminishing $\Delta t$. The results seem converged with respect
to $N_{HS}$ once $N_{HS}>10$: although increasing $N_{HS}$ further
lowers the fluctuations, it does not change the mean. The main source
of bias, on the other hand, is the size of the time step.
\begin{figure}
\includegraphics[width=1\columnwidth]{bias}
\caption{\label{fig:Bias}75\% confidence intervals for the difference between
the estimated and analytical displacements at two different values
of $t$ as a function of the number $N_{HS}$ of HS iterations, for
the harmonic system of Section~\ref{subsec:A-harmonic-trap}. Two
time steps are considered: $\Delta t=0.25\hbar/E_{h}$ (top panel)
and $\Delta t=0.125\hbar/E_{h}$ (bottom panel). The confidence intervals
are based on the calculated results from 10 independent runs each
having $N_{traj}=160$ trajectories. }
\end{figure}
\begin{figure*}[t]
\includegraphics[width=1\textwidth]{DblWTraj}
\caption{\label{fig:dw-XP}The time-dependent total displacement $X_{t}$,
total momentum $P_{t}$, total energy $H_{t}$ and total kinetic energy
$T_{t}$ transients for $8$ non-interacting Fermions in the double-well
trap of Section~\ref{subsec:A-double-well-trap}, starting from a
pure state $\hat{\rho}_{\theta}=\left|\Phi_{\theta}\right\rangle \left\langle \Phi_{\theta}\right|$
with $\theta=\pi/4$ (left panels) and $\theta=\pi/2$ (right panels).
The $X_{t}$ and $P_{t}$ panels show as symbols the calculated results
of 10 independent runs (each having 160 trajectories). The $H_{t}$
and $T_{t}$ results are depicted as statistical error bars centered
on the average over the 10 runs and the blue dots are approximate
transients computed from Eq.~(\ref{eq:SSPopulatio}). }
\end{figure*}
\subsection{\label{subsec:A-double-well-trap}A double-well trap}
As an application of the method, we study $N=8$ Fermions in a double-well
potential obtained by adding to the Harmonic potential of Eq.~(\ref{eq:vHarmonic})
a Gaussian barrier centered at the origin of coordinates:
\begin{align}
V\left(x\right) & =\frac{1}{2}m\omega^{2}x^{2}+V_{B}e^{-\frac{x^{2}}{2\sigma_{B}^{2}}},\label{eq:vdblWPot}\\
V_{B} & =8E_{h},\,\,\,\sigma_{B}=0.2a_{0}.\nonumber
\end{align}
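For concreteness, the trap of Eq.~(\ref{eq:vdblWPot}) is straightforward to evaluate on a grid; the helper below simply encodes the stated parameters (atomic units), with defaults that can be overridden.

```python
import numpy as np

def V_double_well(x, m=1.0, omega=1.0, V_B=8.0, sigma_B=0.2):
    """Double-well trap: harmonic confinement plus a Gaussian barrier at the origin."""
    return 0.5 * m * omega**2 * x**2 + V_B * np.exp(-x**2 / (2.0 * sigma_B**2))
```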
In Fig.~\ref{fig:dw-XP} we study the dynamics under conditions similar
to those of the previous section, starting from two different initial pure states
$\hat{\rho}_{\theta}=\left|\Phi_{\theta}\right\rangle \left\langle \Phi_{\theta}\right|$.
On the left panel the initial state ($\theta=\pi/4$ ) involves a
linear combination of HOMO and LUMO and thus is not an eigenstate
of $\hat{H}$; therefore a damped oscillation in $X$ and $P$ is
observed, which is accompanied by a gradual decrease in the frequency
of oscillation. The energy of the system grows in time, as does the
kinetic energy, indicating that the bath is \emph{injecting} energy
into the system, raising its temperature while at the same time oscillations
are damped due to dephasing. On the right panel we show the transients
corresponding to $\theta=\pi/2$, in which the initial state is an
excited eigenstate of $\hat{H}$ (where the HOMO is replaced by the
LUMO). In an eigenstate there is no motion, so we observe no oscillations
in $X$ and $T$, and it can be supposed that any energy injected
by the bath into the system cannot stir up observable oscillations
due to the dephasing effects seen in the left panel. The energy here
starts to decrease at early times, but then at $t=11\hbar/E_{h}$
it reverses and starts ascending. The kinetic energy follows this
trend, indicating a tendency for the temperature to drop initially,
reach a minimum somewhat later than the total energy, at $t\approx15\hbar/E_{h}$,
and then rise at later times.
We compare these transients to approximate transients based on the
approximation that the population of state $i$ evolves according to:
\begin{equation}
\dot{n}_{i}\left(t\right)=\sum_{j}\left[\gamma_{ij}n_{j}\left(1-n_{i}\right)-\gamma_{ji}n_{i}\left(1-n_{j}\right)\right],\label{eq:SSPopulatio}
\end{equation}
where $\gamma_{ij}=\sum_{\alpha}\left|\ell_{ij}^{\alpha}\right|^{2}$.\footnote{This equation is obtained by first neglecting the off-diagonal elements
of the DM (expressed as matrix in the eigenstate basis of the Hamiltonian),
which leads to the Pauli master equation \cite{schaller2014open} and
then assuming that the populations in states $i$ and $j$ are uncorrelated.} The populations $n_{i}$ enable calculation of the energy and kinetic
energy transients shown as blue dots in Fig.~\ref{fig:dw-XP}. Consider
first the right panel. Here, the initial DM is diagonal so the blue
dots are close to the stochastic calculation, only deviating significantly
when coherences build up at around $t=5\hbar/E_{h}$. While the two
transients are close only at very early times, they both indicate
a non-monotonic behavior of the energy, first cooling and then heating.
For the kinetic energy too there is an agreement at early times where
the system cools at first and then heats up. For the left panel the
initial state is not diagonal, so the blue-dot transient
breaks off from the more accurate calculation almost immediately.
Again both the accurate and the approximate transients agree qualitatively
that the system is heated by the bath.
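A minimal integrator for Eq.~(\ref{eq:SSPopulatio}) can be sketched as follows; the explicit-Euler step and the array names are illustrative choices, not the integrator actually used.

```python
import numpy as np

def pauli_step(n, Gamma, dt):
    """One explicit-Euler step of the Pauli-type master equation with
    Fermi blocking:  dn_i/dt = sum_j [G_ij n_j (1 - n_i) - G_ji n_i (1 - n_j)]."""
    gain = (1.0 - n) * (Gamma @ n)        # sum_j Gamma_ij n_j (1 - n_i)
    loss = n * (Gamma.T @ (1.0 - n))      # sum_j Gamma_ji n_i (1 - n_j)
    return n + dt * (gain - loss)
```

Relabeling the summation indices shows that the gain and loss terms cancel in $\sum_{i}\dot{n}_{i}$, so each step preserves the total particle number to machine precision, as befits number-conserving Lindblad dynamics.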
\section{\label{sec:Summary}Summary}
In this paper we have introduced a new method for treating the dynamics
of non-interacting Fermions coupled to an external bath. The main
obstacle is the appearance of effective inter-particle interactions. We have used
the Hubbard-Stratonovich transformation for reformulation of the unraveled
dynamics to include several types of random walks (three for each
Lindblad operator) which together allow for sampling of the expected
value of the Lindblad operator at a given time. Between different
time-steps a linear combination of $K$ Slater wave functions, approximately
reproducing the reduced density matrix of the system, is formed
(in this work we set $K=1$). We have shown that this approach allows
for accurate reconstruction of the dynamics of non-interacting Fermions
in a Harmonic oscillator potential well, coupled to a bath through
a specific Lindblad operator. We have also studied the dynamics of
such Fermions in a double-well system, where a non-monotonic behavior
of the energy can be seen when starting from an excited eigenstate
of the Hamiltonian.
This development has the potential of technically enabling a time-dependent
density functional approach for electron dynamics in open systems.
Future development is needed to assess the generality of the results
presented here, implement the option of using a many-determinant wave
function (extending the method in this paper where we ``collapse''
to a single determinant state after every time step) and apply the
shifted contour technique for decreasing the Hubbard-Stratonovich
statistical fluctuations.\cite{Rom1997,Baer1998b} Finally, the combination
of the present development with stochastic orbital methods for electronic
structure is an exciting avenue.\cite{Baer2013,Baer2013a,Cytter2018,Rabani2015,Neuhauser2017}
\begin{acknowledgments}
This article is submitted to the Festschrift in honor of Prof. Michael
Baer. The second author, Roi Baer, hereby wishes his father a happy
80th birthday with deep love, appreciation and gratitude! Both authors
gratefully thank the Israel Science Foundation Grant No. 189-14 for
kindly funding this research.
\end{acknowledgments}
\section{Introduction}
Harassment and related undesirable behaviors online are incredibly problematic. We also know that harassment harms victims, affects an unacceptable number of internet users, disproportionately impacts members of marginalized and at-risk populations, and comes in many forms. However, efforts to curb harassment --- e.g., crowdsourced moderation, block lists, reporting and escalation to platform moderation and policy teams --- are often ineffective for victims and labor-intensive and traumatic for moderators.
To move toward more effective methods to curb harassment, I propose that we think about the problem of harassment differently in two ways. First, we must be more specific and explicit about the behaviors and content that are unacceptable in particular contexts so that we can design targeted mechanisms for addressing them and recognize the potential unintended impacts of our interventions. Second, we must treat harassment as a social problem, not just an individual one, which demands that we address the contexts in which harassment occurs. These two shifts in the way we think about harassment come from a feminist orientation that requires us to consider power and oppression when trying to understand why and how something --- like harassment --- happens.
I argue that harassment is, at its root, about power --- especially the power to make someone else feel powerless. This means that I see harassment not as an issue of compliance but as a result of structural differences in power that make it possible for some actors --- maybe we call them ``bad actors'' --- to harm others. Recognizing that structural inequalities enable harassment does not mean harassers are not responsible for their actions, but it does mean that addressing ``bad actors'' requires systemic change and not just interventions targeted at individuals.
I'm interested in how social media can be leveraged to challenge existing power structures, and part of my research agenda aims to automatically identify and address unacceptable behavior. I'm a queer, white, cis woman who talks to other people on the internet and who studies online conversations. I want to reduce unacceptable behaviors because they disproportionately silence voices I think we should hear (e.g., women, racial and ethnic minorities, LGBTQ+). I mention all of this to address the requests for additional information in the workshop papers call. Next I expand on the reframings I called for and briefly propose changes they can enable.
\section{Defining Specific Unacceptable Behaviors}
Holding people to community standards requires that we have some in the first place, and it helps if they are clearly articulated and consistently applied. Unfortunately, humans don't readily or consistently agree about what behaviors are acceptable, or which behaviors are acceptable in which contexts. Instead, we have fuzzy, flexible, leaky bins into which we place and move things we witness or do ourselves; the policies we write exhibit similar properties \cite{Pater2016-bv}.
Even when we use automated or algorithmic approaches to do the classifying, classification is a fundamentally human activity, and that means that it occurs in the social, historical, technical, racial, gendered, moral etc. contexts of all other human activities. Bowker and Star \cite{Bowker1997-dh} also point out that classifying involves negotiation and that no classification scheme works for everyone. By calling for specificity, I am not suggesting that we will produce perfect, unproblematic classification of ``good'' and ``bad'' behaviors. Instead, I'm suggesting that the process of trying to articulate specific definitions will better equip us to attend to them productively. It's important for mechanism designers to articulate the problems they are trying to solve even if a fuzzy definition produces ``good enough'' results because the clarity helps us understand \textit{why} a particular mechanism works. Knowing why puts us in better positions to address issues such as workarounds bad actors develop and shifts in the communities' norms.
\subsection{Acceptable Does Not Mean Civil}
What counts as acceptable behavior depends on who's behaving, who's seeing the behavior, where the discussion is happening, and what's being discussed. One reason these things matter is that the realities of oppression mean that rights to speech and harms from speech are inequitably distributed. Flattening behaviors into categories like acceptable and unacceptable ignores the importance of behaviors that are productively and purposefully disruptive. For example, one recent turn in research on fighting harassment uses the language of ``civility'' to describe desired behaviors. When we call for civility we are often effectively silencing marginalized voices through tone policing\footnote{http://www.robot-hugs.com/tone-policing/}. We mask our discomfort with the emotional impact of oppression in our calls for a particular type of discourse. When we set rules for discourse, we exercise power to determine what is ``real'' or ``right'' \cite{Butler1993-hx,Foucault1981-pq}, and blunt instruments like blocking profanity to encourage civility are abuses of that power that disproportionately impact those who experience multiple oppressions that elicit emotional responses. Emotional responses are valid and informative and should be valued.
Profanity provides an example of how clarity could help us design better systems. The presence of profanity is highly predictive of undesirable content~\cite{Martens2015-bl}. However, not all ``fucks'' are the same. Some are threats of violence\footnote{https://femfreq.tumblr.com/post/109319269825/one-week-of-harassment-on-twitter}, and some are part of valid emotional responses to external threats\footnote{https://twitter.com/rosemcgowan/status/917844865806778368}. Most classifiers can't tell the difference, and most moderators can't either if they don't have the full context of the expletive. A system that could, however, tell the difference would be more inclusive and empowering than the status quo.
\section{Recognizing Harassment as a Social Problem}
Speaking of context, the second way I argued we need to rethink harassment is to consider it a social problem instead of an individual one. Researchers often mention properties and/or features of platforms when discussing the prevalence of bad actions. For instance, reddit's karma system (like other vote-based systems) is easily gamed, encourages reposts that migrate content across communities, and reifies the dominant culture's values through social processes such as herding \cite{Massanari2015-jq,Muchnik2013-xq}. Twitter's character limits effectively discourage nuance and extended explanation, making it hard to contextualize comments or to provide enough information for outsiders to make sense of individual comments. Removing limits makes more space for hate but also for context and background knowledge that can make conversations more productive \cite{Thelandersson2014-fx}. Anonymity is a similarly multi-edged sword that is sometimes used for evil \cite{Dinakar2011-jr} and sometimes for good \cite{Andalibi2016-bz} or otherwise productively \cite{Cross2014-ry}.
Recent conversations around \#MeToo \cite{Gilbert2017-uq} and collective efforts like Time's Up\footnote{https://www.timesupnow.com/} are bringing the notion of ``harassment as a systemic issue'' into the mainstream, and it's time we do so in system design as well. Just as firing Harvey Weinstein didn't suddenly make the Weinstein Company a great place to work or stop sexual exploitation in Hollywood, playing whack-a-mole with individuals online will not end harassment. Instead, we need to recognize that platforms' affordances facilitate bad actions, that bad actions migrate across platforms or occur simultaneously in multiple spaces, that bad actions are contagious \cite{Cheng2017-qp,Phillips2017-ep}, and that features can silence, protect, and empower different groups at the same time. The systematic devaluing of marginalized voices and their experiences happens online as well as off, and we cannot stop harassment without attending to the structures and values that enable it.
For instance, instead of designing for civility or compliance, we could design to fight oppression \cite{Smyth2014-it} or to encourage productive discussion \cite{Thelandersson2014-fx, Massanari2015-jq}. We could highlight the controversial instead of the merely popular. We could focus on encouraging learning, constructive critique, accessibility, and inclusion like fan fiction sites \cite{Fiesler2016-cp,Campbell2016-di} or well-moderated discussions such as r/AskHistorians\footnote{http://reddit.com/r/AskHistorians} or Autostraddle\footnote{http://www.autostraddle.com}.
\section{Conclusion}
In order for ``bad actors'' to ``comply with community standards''\footnote{All quotes in this paragraph are from the workshop call at http://understandingbadactors.org/.}, we need to articulate them. To ``help them discover more appropriate ways of connecting with others online'', we must communicate and reify the value of connection. And to ``design more effective interventions'' we must be specific about what we're trying to accomplish and address the contexts in which both behaviors and interventions operate. I don't have clear solutions for the problems I've raised, and I don't think the way forward is technically or socially easy by any means. Instead, it's likely messy, fraught, and computationally pretty hard but will be worth it.
\balance{}
\bibliographystyle{SIGCHI-Reference-Format}
\section*{First page}
This page and all the following have a frame around the
text area.
\vfill
X \hfill X\newpage
\section*{Second page}
\AddToShipoutPicture*{%
\AtTextCenter{%
\makebox(0,0)[c]{\resizebox{\textwidth}{!}{%
\rotatebox{45}{\textsf{\textbf{\color{lightgray}DRAFT}}}}}
}
}
Only this page has rotated text in the center of the text area.
\vfill
X \hfill X\newpage
\section*{Last page}
\vfill
X \hfill X
\end{document}
\section*{First page}
This page and all the following have a frame with 15~mm
distance from the paper edges.\newpage
\section*{Second page}
\AddToShipoutPicture*{\put(100,100){\circle{40}}}
Only this page has a circle on the lower left side.
\newpage
\section*{Last page}
\end{document}
\section{\TeX}
\newpage
\ClearShipoutPicture
\section{Empty}
\newpage
\AddToShipoutPicture{\BackgroundPicture}
\section{\LaTeX}
\end{document}
\section{First page of the main document}
\includepdfpages{ltx3info.pdf}{1}{3}
\section{First page after the imported pages of the external document}
\end{document}
\section*{Abstract}
The NPMLE of a distribution function from doubly truncated data was introduced in the seminal
paper of \cite{Efron99}. The consistency of the Efron-Petrosian estimator depends however on the
assumption of independent truncation. In this work we introduce an extension of the Efron-Petrosian NPMLE when
the lifetime and the truncation times may be dependent. The proposed estimator is constructed on the basis of
a copula function which represents the dependence structure between the lifetime and the truncation times. Two different iterative algorithms to
compute the estimator in practice are introduced, and their performance is explored through an intensive Monte Carlo simulation study. We illustrate the use of the estimators on a real data example.
\section*{Introduction}
\label{Section1}
Let $X^{\ast }$ be the random variable of ultimate interest or 'lifetime', with distribution function (df)
$F$, and assume that it is doubly truncated by the random pair
$\left( U^{\ast },V^{\ast }\right) $ with joint df $K$, where
$U^{\ast }$ and $V^{\ast }$ ($U^{\ast }\leq V^{\ast }$) are the left
and right truncation variables respectively. This means that the
triplet $\left( U^{\ast },X^{\ast },V^{\ast }\right) $ is observed
if and only if $U^{\ast }\leq X^{\ast }\leq V^{\ast }$, while no
information is
available when $X^{\ast }<U^{\ast }$ or $X^{\ast }>V^{\ast }$.
We assume that the truncation comes from the existence of an observational window of length $\phi$, and therefore $V^{\ast }= U^{\ast}+ \phi$ $(\phi>0)$. This model is suitable when the sample reduces to individuals with event dates between two fixed calendar times (e.g. \cite{Moreira10}). \cite{Austin2014} termed this type of truncation 'complete truncation dependence', while \cite{Zhu2012, Zhu2014} referred to this problem as 'interval sampling'. Besides, we assume that
$U^*$ depends on the lifetime, and that the dependence structure of $(X^*,U^*)$ is given by a copula function such that (cf. \cite{Nelsen2006})
\[
P(X^{\ast}\leq x, U^{\ast}\leq u)=\mathcal{C}_{\theta}\left(F(x), G(u)\right),
\]
where $G(u)=K(u,\infty)$ is the marginal df of $U^*$ and $\mathcal{C}_{\theta}$ a parametric family of copulas, with $\theta$ belonging to a certain Euclidean parametric space $\Theta$. Dependent truncation may appear in practice when, for example, the birth date of the process ($U^*$) has influence on the subsequent lifetime of interest ($X^*$); \cite{Austin2014} introduced a test for independence based on Kendall's tau in this setting. For example, in the study of transfusion-related AIDS in Section \ref{Section4}, the incubation time $X^*$ is doubly truncated by the time from HIV infection to January 1, 1982 ($U^*$) and the lapse time from HIV infection to the end of study (July 1, 1986) ($V^*$). Hereby we note that several persons in this study were infected a long time ago with the HIV virus without developing AIDS. Considering that the knowledge about AIDS was limited in the early days of the epidemic, this suggests that there is a positive dependence between $X^*$ and $U^*$, and several persons with an HIV infection may have gone unnoticed. \\
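To fix ideas, the copula model can be made concrete with, say, the Clayton family; this particular choice is an illustration only, since the method applies to any smooth parametric family $\mathcal{C}_{\theta}$. The sketch below evaluates $\mathcal{C}_{\theta}$ and its density (the mixed second derivative $\partial^{2}\mathcal{C}_{\theta}/\partial u\partial v$), the quantity that later enters the likelihood weights.

```python
import numpy as np

def clayton_C(u, v, theta):
    """Clayton copula C_theta(u, v); theta > 0 gives positive dependence."""
    return np.maximum(u**(-theta) + v**(-theta) - 1.0, 0.0)**(-1.0 / theta)

def clayton_density(u, v, theta):
    """Copula density: the mixed second derivative of C_theta."""
    return ((1.0 + theta) * (u * v)**(-theta - 1.0)
            * (u**(-theta) + v**(-theta) - 1.0)**(-2.0 - 1.0 / theta))
```

As any copula must, $\mathcal{C}_{\theta}$ has uniform margins, which gives a quick sanity check on the implementation.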
To assess the degree of dependence between lifetime and truncation variables, \cite{Chaieb06} proposed a semiparametric estimation method for a copula model describing dependently truncated data. \cite{Emura2011} and \cite{EMURA2012} considered estimators based on conditional likelihood and nonparametric likelihood, respectively. \cite{Emura2015} revisited the estimation presented in \cite{Chaieb06} and proposed a different algorithm for solving their estimating function. To the best of our knowledge, these contributions on the dependence between lifetime and truncation time refer only to the case of one-sided truncation. This paper presents new statistical methods for modeling a possible dependency between $X^{\ast}$ and $(U^{\ast}, U^{\ast}+ \phi)$ when only triplets such that $U^{\ast }\leq X^{\ast }\leq U^{\ast }+\phi$ are observed. \\
Let $\left( U_{i},X_{i},V_{i}\right) $, $i=1,...,n$, denote the sampling
information; these are iid data with the distribution of
$\left( U^{\ast },X^{\ast },V^{\ast }\right) $ conditionally on $U^{\ast }\leq
X^{\ast }\leq V^{\ast }$.
Under the given model, the full likelihood of the $(U_i,X_i,V_i)$'s is given by (see the Appendix) \begin{eqnarray}
{L}(\theta, f, k)=\displaystyle \prod_{i=1}^n \frac{W_{ii}f_ik_i}{\displaystyle \sum_{j=1}^{n}\displaystyle \sum_{m=1}^n W_{jm} f_jk_m J_{mj}},
\label{eq0}\end{eqnarray}
where $f = (f_1, f_2, . . . , f_n)$ and $k = (k_1, k_2, . . . , k_n)$ are distributions putting probability $f_i$ on
$X_i$ and $k_i$ on $(U_i, V_i)$, where $J_{ij}=I_{[U_i\leq X_j \leq U_i+\phi]}$ and $W_{ij}= \mathcal{C}_{\theta}^{(1,1)}(F_i, K_j)$, with $F_i=\displaystyle \sum_{m=1}^n f_mI_{[X_m \leq X_i]}$ and $K_i=\displaystyle \sum_{m=1}^n k_mI_{[U_m \leq U_i]}$. Here, $\mathcal{C}_{\theta}^{(1,1)}$ denotes the density of the copula family.\\
For independent truncation, we have $\mathcal{C}_{\theta}^{(1,1)}=1$ and the likelihood (\ref{eq0}) reduces to that in \cite{Efron99}. When $X^*$ and $U^*$ are dependent, the weights $W_{ij}$ introduce a suitable correction of the Efron-Petrosian NPMLE. The goal is the estimation of the $r+2n$ parameters $\theta$, $f_i$ and $k_i$, $i=1,\ldots,n$, where $r$ denotes the dimension of $\Theta$. Then, the NPMLE's of $F(x)$ and $G(u)$ under truncation are simply obtained as $\widehat F(x)= \displaystyle \sum_{i=1}^n \widehat f_i I_{[X_i\leq x]}$ and $\widehat G(u)= \displaystyle \sum_{i=1}^n \widehat k_i I_{[U_i\leq u]}$.\\
This paper is organized as follows. In Section \ref{Section2} two different algorithms to estimate the parameter $\theta$ and the distributions $F$ and $G$ are introduced. The finite sample behaviour of the estimators is investigated through simulations in Section \ref{Section3}. An application to the analysis of AIDS incubation times is provided in Section \ref{Section4}, while the conclusions are deferred to Section \ref{Section5}. Technical details are provided in the Appendix.
\section{The estimators}
\label{Section2}
\bigskip
First, we introduce a simple algorithm to estimate the parameters. Here we assume for the moment that the weights $W_{ij}$ are free of $f$ and $k$. Then, by differentiating the loglikelihood with respect to the $f_m$'s and $k_m$'s we obtain the following simple score equations:\\
\begin{equation}
\frac{\partial log L}{\partial f_m}=0 \Leftrightarrow f_m=\left[\displaystyle \sum _{i=1}^n \frac 1{K_i^w}\right]^{-1}\frac1{K_m^w}, \enspace m=1, \ldots, n
\label{eq1}
\end{equation}
\noindent with $K_i^w=\displaystyle \sum _{j=1}^nW_{ij}k_jJ_{ji}$, and \\
\begin{equation}
\frac{\partial log L}{\partial k_m}=0 \Leftrightarrow k_m=\left[\displaystyle \sum _{i=1}^n \frac 1{F_i^w}\right]^{-1}\frac1{F_m^w},\enspace m=1, \ldots, n
\label{eq2}
\end{equation}
\noindent with $F_i^w=\displaystyle \sum _{j=1}^nW_{ij}f_jJ_{ij}$. Equations (\ref{eq1}) and (\ref{eq2}) can be used to introduce the
following iterative simple algorithm.
\begin{enumerate}
\item [Step 0] Take the Efron-Petrosian NPMLE for independent truncation $ f^{(0)}=(f_1^{EP},...,f_n^{EP})$, $ k^{(0)}=(k_1^{EP},...,k_n^{EP})$ as initial solution for $f$ and $k$, and compute
\[
\theta^{(0)}=\mathnormal{argmax}_{\theta}L^{(0)}(\theta),
\] where,
\begin{eqnarray*}
L^{(0)}(\theta)=\displaystyle \prod_{i=1}^n \frac{\mathcal {C}_{\theta}^{(1,1)}\left(F_i^{(0)}, K_i^{(0)}\right)f_i^{(0)}k_i^{(0)}}{\displaystyle \sum_{j=1}^{n}\displaystyle \sum_{m=1}^n {\mathcal {C}_{\theta}^{(1,1)}\left(F_j^{(0)}, K_m^{(0)}\right) f_j^{(0)}k_m^{(0)} J_{mj}}},
\end{eqnarray*}
and where $F_i^{(0)}=\displaystyle \sum_{m=1}^n f_m^{(0)}I_{[X_m \leq X_i]}$ and $K_i^{(0)}=\displaystyle \sum_{m=1}^n k_m^{(0)}I_{[U_m \leq U_i]}$
\item [Step 1] Use (\ref{eq2}) to improve $k^{(0)}$:
\begin{eqnarray*}
k_m^{(1)}=\left[\displaystyle \sum _{i=1}^n \frac 1{F_i^{w_0,0}}\right]^{-1}\frac 1{F_m^{w_0,0}}, \enspace m=1, \ldots, n
\end{eqnarray*} where $w_0=\{W_{ij}^{(0)}: 1 \leq i, j \leq n\}$, $W_{ij}^{(0)}= \mathcal{C}_{\theta^{(0)}}^{(1,1)}\left(F_i^{(0)}, K_j^{(0)}\right)$
and $F_i^{w_0,0}=\displaystyle \sum_{j=1}^n W_{ij}^{(0)}f_j^{(0)}J_{ij}$
\item [Step 2] Use (\ref{eq1}) to improve $f^{(0)}$:
\begin{eqnarray*}
f_m^{(1)}=\left[\displaystyle \sum _{i=1}^n \frac 1{K_i^{w_0,1}}\right]^{-1}\frac 1{K_m^{w_0,1}}, \enspace m=1, \ldots, n
\end{eqnarray*} where $K_i^{w_0,1}=\displaystyle \sum_{j=1}^n W_{ij}^{(0)}k_j^{(1)}J_{ji}$
\item [Step 3] Improve $\theta^{(0)}$ by taking
\[
\theta^{(1)}=\mathnormal{argmax}_{\theta}L^{(1)}(\theta),
\] where
\begin{eqnarray*}
L^{(1)}(\theta)=\displaystyle \prod_{i=1}^n \frac{\mathcal {C}_{\theta}^{(1,1)}\left(F_i^{(1)}, K_i^{(1)}\right)f_i^{(1)}k_i^{(1)}}{\displaystyle \sum_{j=1}^{n}\displaystyle \sum_{m=1}^n {\mathcal {C}_{\theta}^{(1,1)}\left(F_j^{(1)}, K_m^{(1)}\right) f_j^{(1)}k_m^{(1)} J_{mj}}},
\end{eqnarray*}
and where $F_i^{(1)}=\displaystyle \sum_{m=1}^n f_m^{(1)}I_{[X_m \leq X_i]}$, $K_i^{(1)}=\displaystyle \sum_{m=1}^n k_m^{(1)}I_{[U_m \leq U_i]}$\\
\item [Step 4] Repeat steps $(1)-(3)$ until convergence.
\end{enumerate}
That is, algorithm Step 0-Step 4 fits the copula function by starting with the Efron-Petrosian NPMLE for independent truncation. Then, it improves first $k$ and then $f$ by using the simple score equations (\ref{eq2}) and (\ref{eq1}); finally, it updates $\theta$ by maximizing the loglikelihood (based on the improved $k$ and $f$) with respect to the copula parameter. This procedure is repeated until a stable solution is reached. As convergence criterion, we have used $\displaystyle \max_{1 \leq i \leq n}|f_i^{q-1}-f_i^q|\leq 10^{-6}$, $\displaystyle \max_{1 \leq j \leq n}|k_j^{q-1}-k_j^q|\leq 10^{-6}$ and $\displaystyle \max|\theta^{q-1}-\theta^q|\leq 10^{-6}$. Then, the NPMLE's $\widehat F(x)$ and $\widehat G(u)$ are constructed from the $q$-th solution $f_i^q$, $k_j^q$ and $\theta^q$. \\
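For illustration, one pass of Steps 1-2, with the weight matrix $W$ held fixed as the simple algorithm assumes, might be coded as follows; the array names and the uniform-weight check below are hypothetical.

```python
import numpy as np

def simple_algorithm_step(f, k, W, J):
    """One pass of the simple updates: first improve k, then f.
    W[i, j] ~ copula density evaluated at (F_i, K_j);
    J[i, j] = 1 if U_i <= X_j <= U_i + phi (observability indicator)."""
    Fw = (W * f[None, :] * J).sum(axis=1)         # F_i^w = sum_j W_ij f_j J_ij
    k_new = 1.0 / Fw
    k_new /= k_new.sum()                          # k_m proportional to 1 / F_m^w
    Kw = (W * k_new[None, :] * J.T).sum(axis=1)   # K_i^w = sum_j W_ij k_j J_ji
    f_new = 1.0 / Kw
    f_new /= f_new.sum()                          # f_m proportional to 1 / K_m^w
    return f_new, k_new
```

With $W\equiv 1$ (independence) these updates reduce to Efron-Petrosian-type self-consistency steps, which is the sense in which the weights correct for dependent truncation.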
A second algorithm to estimate the different parameters, called the full algorithm, is obtained if one differentiates the loglikelihood with respect to $f$ and $k$ taking the dependence of the $W_{ij}$'s on these parameters into account. Then, the substitutes for equations (\ref{eq1}) and (\ref{eq2}) are (see the Appendix for details):\\
\begin{eqnarray}
&&\frac{\partial \log L}{\partial f_m}=0 \Leftrightarrow \nonumber\\
&&f_m=\left[\displaystyle \sum _{i=1}^n \frac 1{nA_i+n K_i^w- \alpha B_i}\right]^{-1}\frac1{nA_m+nK_m^w-\alpha B_m}, \enspace m=1, \ldots, n
\label{eq1full}
\end{eqnarray}
\noindent with
$A_m=\displaystyle \sum_{i=1}^n \displaystyle \sum _{j=1}^n W_{ij}^{(2,1)}f_ik_j J_{ji}I_{[X_m \leq X_i]}$,
$B_m=\displaystyle \sum_{i=1}^n\frac{W_{ii}^{(2,1)}I_{[X_m \leq X_i]}}{W_{ii}^{(1,1)}}$ and\\
$
\alpha= \displaystyle \sum_{j=1}^{n} \displaystyle \sum_{m=1}^n W_{jm}^{(1,1)} f_jk_m J_{mj}
$, and
\begin{eqnarray}
&&\frac{\partial \log L}{\partial k_m}=0 \Leftrightarrow \nonumber\\
&&k_m=\left[\displaystyle \sum _{i=1}^n \frac 1{nC_i+n F_i^w- \alpha D_i}\right]^{-1}\frac1{nC_m+nF_m^w-\alpha D_m}, \enspace m=1, \ldots, n
\label{eq2full}
\end{eqnarray}
with
$C_m=\displaystyle \sum_{i=1}^n \displaystyle \sum _{j=1}^n W_{ij}^{(1,2)}f_ik_j J_{ji}I_{[U_m \leq U_i]}$ and $D_m=\displaystyle \sum_{i=1}^n\frac{W_{ii}^{(1,2)}I_{[U_m \leq U_i]}}{W_{ii}^{(1,1)}}$.\\
In (\ref{eq1full}) and (\ref{eq2full}) we use the notation $W_{ij}^{(l,m)}$, $1\leq l,m\leq 2$, for $\mathcal C_{\theta}^{(l,m)}(F_i,K_j)$, where $\mathcal C_{\theta}^{(l,m)}(u,v)=\frac {\partial^{l+m}}{\partial u^l\partial v^m} \mathcal C_{\theta}(u,v)$. Note that $W_{ij}=W_{ij}^{(1,1)}$ with this notation.
The `full' algorithm we propose follows Steps 0-4 above, but uses the two equations (\ref{eq1full}) and (\ref{eq2full}) in the place of (\ref{eq2}) and (\ref{eq1}). Note that moving from the simple EM algorithm to this full algorithm changes the way in which $k$ and $f$ are improved, while the updating of $\theta$ (Step 3) remains the same.\\
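As an illustration, a Python sketch of the full update (\ref{eq1full}), under the simplifying assumption that the observations are sorted by $X$, so that the indicator $I_{[X_m \leq X_i]}$ reduces to $i \geq m$; `W11` and `W21` are hypothetical stand-ins for the weight matrices $W_{ij}^{(1,1)}$ and $W_{ij}^{(2,1)}$:

```python
def full_update_f(W11, W21, f, k, J):
    """One full update of f (sketch of eq. (eq1full)): compute alpha, A_m,
    B_m and K_m^w from the current (f, k) and the weight matrices, then set
    f_m proportional to 1 / (n*A_m + n*K_m^w - alpha*B_m).
    Assumes the data are sorted so that I_[X_m <= X_i] == (i >= m)."""
    n = len(f)
    alpha = sum(W11[j][m] * f[j] * k[m] * J[m][j]
                for j in range(n) for m in range(n))
    A = [sum(W21[i][j] * f[i] * k[j] * J[j][i]
             for i in range(m, n) for j in range(n)) for m in range(n)]
    B = [sum(W21[i][i] / W11[i][i] for i in range(m, n)) for m in range(n)]
    Kw = [sum(W11[m][l] * k[l] * J[l][m] for l in range(n)) for m in range(n)]
    denom = [n * A[m] + n * Kw[m] - alpha * B[m] for m in range(n)]
    total = sum(1.0 / d for d in denom)
    return [1.0 / (total * d) for d in denom]
```

In the independence limit (constant first-order weights and vanishing second-order weights) the update again returns the uniform masses.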
In Section \ref{Section3} we investigate through simulations the performance of these two algorithms for several copula functions and marginal models. Interestingly, it is seen that the simple algorithm is accurate enough for practical purposes, while giving a more efficient solution in terms of computational speed. For the final implementation we multiply $A_m$, $B_m$, $C_m$ and $D_m$ by $n/(n+1)$; this is equivalent to replacing each $W_{ij}=\mathcal {C}_{\theta}^{(1,1)}\left(F_i, K_j\right)$ by $W_{ij}^*= \mathcal {C}_{\theta}^{(1,1)}\left(\frac n {n+1}F_i, \frac n {n+1} K_j\right)$, which avoids problems at the upper-right corner of the copula function.
Since the truncation interval $(U^*,V^*)$ prevents us from always observing the lifetime of interest $X^*$, we cannot fully observe the dependence structure of $(X^*,U^*)$, which is expressed through the copula function $\mathcal{C}$. Hence, it is not possible to estimate the copula function and the marginal distributions without introducing extra assumptions: the copula $\mathcal{C}$ and the marginal distributions $F$ and $K$ are in general non-identifiable from the observed data. To avoid this non-identifiability, we proceed as in \cite{Ding2012} for left-truncated data and assume that all copula functions used in the simulations and real data analysis satisfy the identifiability condition in \cite{Ding2012}, i.e., they are strongly lower-left tail identifiable. Under this condition, the copula function and the marginal distributions were identified in our numerical experiments. In the future, we intend to study in more detail whether this condition is also sufficient for identifiability under interval truncation.
In practice, it is important to report standard errors to know the accuracy of a given estimator for the triplet $(\theta,F,K)$. To this end, we propose to use a bootstrap algorithm based on the fitted chosen copula. To be specific, let $(T_1,T_2)$ be a pair of $U(0,1)$ random variables following the fitted copula $\mathcal{C}_{\widehat \theta}$. Let $U^*=\widehat G^{-1}(T_1)$ and $X^*=\widehat F^{-1}(T_2)$ where $\widehat F$ and $\widehat G(.)=\widehat K(.,\infty)$ are the estimators based on the simple or the full algorithm, and $\widehat F^{-1}$ and $\widehat G^{-1}$ are their respective quantile functions. Reject the pair $(U^*,X^*)$ if $U^*\leq X^* \leq U^*+\phi$ is violated. Form a resample of $n$ data following this scheme, and repeat up to forming $B$ resamples. Then, the bootstrap standard error of $\widehat \theta$, $\widehat F$ or $\widehat K$ is defined as the standard deviation of these estimators along the $B$ resamples. In Section \ref{Section3} we include some simulation results for this method when the goal is the estimation of the standard error of $\widehat \theta$; these results suggest that the copula-based bootstrap performs well.
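A minimal Python sketch of one such resample; `sample_copula`, `F_inv` and `G_inv` are hypothetical placeholders for a sampler from the fitted copula $\mathcal{C}_{\widehat \theta}$ and the quantile functions $\widehat F^{-1}$ and $\widehat G^{-1}$:

```python
import random

def bootstrap_resample(n, phi, sample_copula, F_inv, G_inv):
    """One bootstrap resample of size n: draw (T1, T2) from the fitted
    copula, set U* = G_inv(T1) and X* = F_inv(T2), and keep the pair only
    if the truncation condition U* <= X* <= U* + phi is satisfied."""
    sample = []
    while len(sample) < n:
        t1, t2 = sample_copula()
        u, x = G_inv(t1), F_inv(t2)
        if u <= x <= u + phi:
            sample.append((x, u))
    return sample
```

Repeating this $B$ times and recomputing the estimator on each resample yields the bootstrap standard errors evaluated in Section \ref{Section3}.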
\section{Simulations}
\label{Section3}
In this section we investigate the finite sample performance of the algorithms proposed in Section \ref{Section2} through simulations. We simulate the scenario $X^* \sim U(0,1)$, $U^* \sim U(-0.6, 0.4)$ and then we take $V^*=U^* + \phi$, with $\phi=1.5$. Note that, in this way, the df of $X^*$ is identifiable, because the lower (resp. upper) limit of the support of $U^*$ (resp. $V^*$) is smaller (resp. larger) than the lower (resp. upper) limit of the support of $X^*$ \cite{Woodroofe85}. We consider three different copula families: the Farlie-Gumbel-Morgenstern (FGM) copula (Case 1), the Frank copula (Case 2) and the Clayton copula (Case 3). \\
In Case 1 the variables $X^*$ and $U^*$ follow the FGM copula family with parameter $\theta$, that is, $\mathcal{C}_{\theta}(u_1,u_2)=u_1u_2+\theta u_1u_2 (1-u_1)(1-u_2)$, $\theta \in [-1,1]$. The Kendall's Tau corresponding to this copula is $\tau_{\theta}=\frac29\theta$. We consider the cases $\theta= -1, -0.5, 1$, corresponding to association levels between $X^*$ and $U^*$ of $-0.2$, $-0.1$ and $0.2$ (Models 1.1-1.3, respectively). Specifically, the simulation algorithm is as follows (cf. Exercise 3.23 in \cite{Nelsen2006}):
\begin{enumerate}
\item [Step 1] Generate two independent uniform $(0,1)$ variables $X^*$ and $T$;
\item [Step 2] Set $a=1+ \theta(1-2X^*)$ and $b=\sqrt{(a^2-4(a-1)T)}$;
\item [Step 3] Set $U^*=2T/(a+b)$;
\item [Step 4] The desired pair is $(X^*,U^*)$, satisfying the condition $U^*\leq X^* \leq U^* + \phi$;
\item [Step 5] Update $U^*$ to be $U^*-0.6$ according to its support $(-0.6,0.4)$.
\end{enumerate}
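These steps can be sketched in Python as follows (we read Steps 4-5 as checking the truncation condition on the shifted scale, which is an assumption of this sketch):

```python
import math
import random

def sample_fgm_pair(theta, phi=1.5, shift=0.6):
    """One (X*, U*) draw from the FGM copula by conditional inversion
    (cf. Nelsen, Ex. 3.23), rejecting pairs that violate the truncation
    condition U* <= X* <= U* + phi on the shifted scale."""
    while True:
        x, t = random.random(), random.random()
        a = 1.0 + theta * (1.0 - 2.0 * x)
        b = math.sqrt(a * a - 4.0 * (a - 1.0) * t)
        u = 2.0 * t / (b + a) - shift   # shift gives U* the support (-0.6, 0.4)
        if u <= x <= u + phi:
            return x, u
```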
In Case 2 the variables $X^*$ and $U^*$ follow a Frank copula family with parameter $\theta$, given by $\mathcal{C}_{\theta}(u_1,u_2)=-\frac{1}\theta \log\left[1+\frac{\left(e^{-\theta u_1}-1\right)\left(e^{-\theta u_2}-1\right)}{e^{-\theta}-1}\right], \theta \in \mathbb{R}\backslash \{0\}$. For this copula the Kendall's Tau is given by $\tau_\theta=1-\frac4\theta\left[1-D_1(\theta)\right]$, where $D_1(\alpha)=\frac1\alpha\int_0^\alpha\frac t{e^t-1}dt$ is the Debye function of the first kind. We consider the cases $\theta= -2.1, -1, 1.86, 5.74, 20.9$, corresponding to association levels of $-0.2,-0.1, 0.2, 0.5$ and $0.9$ respectively (Models 2.1-2.5). The simulation algorithm is as follows (cf. Exercise 4.17 in \cite{Nelsen2006}):\\
\begin{enumerate}
\item [Step 1] Generate two independent uniform $(0,1)$ variables $T$ and $U^*$;
\item [Step 2] Set $X^*=-(1/\theta)\log(1+(T(\exp(-\theta)-1))/(T+(1-T)\exp(-\theta \times U^*)))$;
\item [Step 3] The desired pair is $(X^*,U^*)$, satisfying the condition $U^*\leq X^* \leq U^* + \phi$;
\item [Step 4] Update $U^*$ to be $U^*-0.6$ according to its support $(-0.6,0.4)$.
\end{enumerate}
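A Python sketch of the Frank sampler, with the same shifted-scale reading of the truncation step:

```python
import math
import random

def sample_frank_pair(theta, phi=1.5, shift=0.6):
    """One (X*, U*) draw from the Frank copula by conditional inversion
    (cf. Nelsen, Ex. 4.17): X* = -(1/theta) * log(1 + T(e^{-theta} - 1) /
    (T + (1 - T) e^{-theta U*})), followed by the shift and the rejection
    of pairs violating the truncation condition."""
    while True:
        u, t = random.random(), random.random()
        x = -(1.0 / theta) * math.log1p(
            t * math.expm1(-theta) / (t + (1.0 - t) * math.exp(-theta * u)))
        u_s = u - shift
        if u_s <= x <= u_s + phi:
            return x, u_s
```

Here `expm1` and `log1p` are used for numerical accuracy when $\theta$ is small; the formula itself is the conditional quantile stated in Step 2.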
In Case 3 the variables $X^*$ and $U^*$ follow a Clayton copula family with generator $\psi_{\theta}(t)=\theta^{-1}(t^{-\theta}-1)$, $\theta>0$, i.e., $\mathcal{C}_{\theta}(u_1,u_2)=\left(u_1^{-\theta}+u_2^{-\theta}-1\right)^{-1/\theta}$, $\theta \in (0,\infty)$. This copula implies a Kendall's Tau $\tau_{\theta}=\frac \theta{\theta+2}$. We consider the cases $\theta= 0.5, 2, 18$, corresponding respectively to association levels of $0.2$, $0.5$ and $0.9$ (Models 3.1-3.3). The simulation algorithm is as follows (cf. Exercise 4.17 in \cite{Nelsen2006}):
\begin{enumerate}
\item [Step 1] Generate independent random variables $Y_1$, $Y_2$ $\sim Exp(1)$;
\item [Step 2] Independently generate $Z_0 \sim \Gamma(1/\theta,1)$, and compute $U^*=(1+Y_2/Z_0)^{-1/\theta}$;
\item [Step 3] Finally compute $X^*=(1+Y_1/Z_0)^{-1/\theta}$;
\item [Step 4] The desired pair is $(X^*,U^*)$, satisfying the condition $U^*\leq X^* \leq U^* + \phi$ ;
\item [Step 5] Update $U^*$ to be $U^*-0.6$ according to its support $(-0.6,0.4)$.
\end{enumerate}
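And a sketch of the Clayton sampler via the gamma-frailty construction, with the exponent $-1/\theta$ coming from the Laplace transform $(1+s)^{-1/\theta}$ of the $\Gamma(1/\theta,1)$ frailty (again with the shifted-scale reading of the truncation step):

```python
import random

def sample_clayton_pair(theta, phi=1.5, shift=0.6):
    """One (X*, U*) draw from the Clayton copula via gamma frailty
    (Marshall-Olkin): Z ~ Gamma(1/theta, 1), Y1, Y2 ~ Exp(1) independent,
    each uniform marginal is (1 + Y/Z)^(-1/theta); then the shift and the
    truncation rejection step are applied."""
    while True:
        z = random.gammavariate(1.0 / theta, 1.0)
        x = (1.0 + random.expovariate(1.0) / z) ** (-1.0 / theta)
        u = (1.0 + random.expovariate(1.0) / z) ** (-1.0 / theta) - shift
        if u <= x <= u + phi:
            return x, u
```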
The values of $\theta$ for the several copulas correspond to the same association levels (Kendall's Tau); this will be useful when interpreting the simulation results.
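This correspondence can be checked numerically from the three stated Kendall's Tau formulas; the midpoint-rule approximation of the Debye integral is an implementation choice of this sketch, not taken from the paper:

```python
import math

def tau_fgm(theta):
    """FGM: tau = 2 * theta / 9."""
    return 2.0 * theta / 9.0

def tau_clayton(theta):
    """Clayton: tau = theta / (theta + 2)."""
    return theta / (theta + 2.0)

def debye1(a, steps=20000):
    """Debye function of the first kind, D_1(a) = (1/a) * int_0^a t/(e^t - 1) dt,
    approximated by the midpoint rule."""
    h = a / steps
    return sum(h * (i + 0.5) / math.expm1(h * (i + 0.5))
               for i in range(steps)) * h / a

def tau_frank(theta):
    """Frank: tau = 1 - (4/theta) * (1 - D_1(theta))."""
    return 1.0 - 4.0 / theta * (1.0 - debye1(theta))
```

For instance, $\theta=1.86$ (Frank), $\theta=0.5$ (Clayton) and $\theta=1$ (FGM, where $\tau=2/9\approx0.22$) all give a Kendall's Tau of approximately $0.2$, in line with the matched-association design.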
The simulated scenarios result in different truncation proportions according to the copula family and parameter value $\theta$ considered. For instance, in Case 1, the proportion of truncation ranges from 4\% (Model 1.3) to 13\% (Model 1.1); in Case 2, from 1\% (Model 2.4) to 13\% (Model 2.1); and in Case 3, from 1\% (Model 3.1) to 8\% (Model 3.3). \\
In Figures \ref{Figure11} to \ref{Figure33} we report the MSE of the proposed estimators $\widehat F$ and $\widehat K$ for each $\theta$ and for the several copulas, computed along 1000 Monte Carlo trials of sizes $n=250$ and $n=500$, at the deciles of the distribution of $X^*$. We also performed simulations for smaller sample sizes ($n=50, 100$), with similar results (not shown). The MSEs decrease when increasing the sample size, thus suggesting the consistency of the proposed methods. In Figure \ref{Figure11} (Models 1.1 to 1.3, FGM copula) we report the results of both the simple and the full algorithm (from top to bottom).
In this figure we see that, in general, the simple algorithm provides MSEs slightly larger than those of the full algorithm. Since the full algorithm is computationally heavier (see Table \ref{Table42}), we have evaluated the relative increase of the MSE when moving from the full to the simple algorithm, $RMSE=(MSE(\mbox{simple})-MSE(\mbox{full}))/MSE(\mbox{full})$, for the four sample sizes $n=50, 100, 250, 500$ and all the simulated scenarios; see Table \ref{RMSE}. In this table we see that the median increase is only 1.19\%, 0.65\% or 0\%, depending on the copula (FGM, Frank and Clayton, respectively). Besides, the first quartile of the $RMSE$ is negative, showing that the simple method actually outperforms the full method at least 25\% of the time. On the other hand, the third quartiles reveal that 75\% of the time the $RMSE$ is below 5\%, 3.8\% or 2.43\%, depending again on the copula. The models for which the full algorithm reports the best relative performance are those with large negative association, particularly when estimating $K$. Overall, the simple algorithm seems to be the best option given its good relative performance and computational speed. This is why we only display the results corresponding to the simple algorithm for the Frank and Clayton copulas.
\begin{figure}
\centering
\subfigure{\includegraphics[width=0.45\textwidth]{deciles_mse_fgm_250_simple}} ~
\subfigure{\includegraphics[width=0.45\textwidth]{deciles_mse_fgm_500_simple}} \\
\subfigure{\includegraphics[width=0.45\textwidth]{deciles_mse_fgm_250_full}} ~
\subfigure{\includegraphics[width=0.45\textwidth]{deciles_mse_fgm_500_full}} \\
\caption{ MSEs of the proposed estimators $\widehat F$ and $\widehat K$, at each decile, for the FGM copula, with $n=250$ (left) and $n=500$ (right), for the simple and full algorithms (from top to bottom). Case 1.} \label{Figure11}
\end{figure}
\begin{figure}[!ht]
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[width=\textwidth]{deciles_mse_frank_simple_250}
\end{minipage} \hfill
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[width=\textwidth]{deciles_mse_frank_simple_500}
\end{minipage} \hfill
\vspace{-0.2cm}
\caption{ MSEs of the proposed estimators $\widehat F$ and $\widehat K$, at each decile, for the Frank copula, with $n=250$ (left) and $n=500$ (right). Case 2.} \label{Figure22}
\end{figure}
\begin{figure}[!ht]
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[width=\textwidth]{deciles_mse_clyton_simple_250}
\end{minipage} \hfill
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[width=\textwidth]{deciles_mse_clyton_simple_500}
\end{minipage} \hfill
\vspace{-0.2cm}
\caption{ MSEs of the proposed estimators $\widehat F$ and $\widehat K$, at each decile, for the Clayton copula, with $n=250$ (left) and $n=500$ (right). Case 3.} \label{Figure33}
\end{figure}
\begin{table}[!ht]
\begin{center}
\begin{tabular}{ccccc}
\hline
& $Q_1$ &$Q_2$& $Q_3$ & Mean \\
\hline
FGM & -0.0045&0.0119&0.0500&0.0221\\
Frank & -0.0166&0.0065&0.0280&0.0026\\
Clayton& -0.0248 & 0.0000 &0.0243 & 0.0010 \\
\hline
\end{tabular}
\caption {Quartiles and mean of the overall RMSEs over the sample sizes $n=50, 100, 250$ and $500$, for each function $F$ and $K$, and for the different values of $\theta$ and copula families.}
\label{RMSE}\end{center}
\end{table}
In Table \ref{Theta} we display the bias and standard deviation of the estimator $\widehat \theta$ obtained from the simple algorithm along the 1,000 trials, for each copula function and sample sizes $n=250, 500$. As expected, both the bias and the standard deviation decrease when increasing the sample size. They also get larger as the association degree increases, although an exception to this is found for the standard deviation under the FGM copula.
\begin{table}[ht]
\begin{center}
\begin{tabular}{ccccc}
\hline
Copula& n& $\theta$& Bias($\widehat \theta$)& sd($\widehat \theta$)\\
\hline
& & -1& 0.0712&0.1133\\
&250&-0.5&-0.0122&0.2307\\
&&1&-0.0851&0.1293\\
\cline{2-5}
FGM& & -1& 0.0505&0.0817\\
&500 & -0.5& -0.0020&0.1634\\
& & 1& -0.0574&0.0915\\
\hline
\hline
& & -2.1& -0.0844&0.6568\\
&&-1&-0.0482&0.5358\\
&250&1.86&0.0086&0.4535\\
&&5.74&0.0327&0.5453\\
&&20.9&-0.1879&1.3279\\
\cline{2-5}
Frank & & -2.1& -0.0125&0.4609\\
& & -1& -0.0063&0.3904\\
& 500& 1.86&0.0076&0.3353\\
& & 5.74&0.0130&0.3895\\
& & 20.9&-0.1145&0.9041\\
\hline
\hline
& & 0.5& 0.0564&0.0725\\
&250&2&-0.0723&0.1338\\
&&18&-0.0852&0.2523\\
\cline{2-5}
Clayton& & 0.5& 0.0412&0.0548\\
&500 & 2& 0.0523&0.0929\\
& & 18& 0.0684&0.1786\\
\hline
\hline
\end{tabular}
\caption {The bias and the standard deviation of the estimator $\widehat \theta$ obtained from the simple algorithm along the 1,000 trials, for each copula function and sample sizes $n=250, 500$.}
\label{Theta}\end{center}
\end{table}
We have computed the bias and variance of the NPMLE proposed by \cite{Shen08} for the functions $F$ and $K$, which ignores the possible dependence between $X^*$ and $U^*$. While the variance of the NPMLE and that of the copula-based estimator are of the same order (results not shown), the bias of the NPMLE can be two orders of magnitude larger than that of the proposed estimator. This can be seen in Figure \ref{Figure1}, in which the bias of the NPMLE for the three copulas under several dependence degrees is depicted for $n=500$ (the case $n=250$ gave similar results). As expected, this bias becomes more visible as the association level grows. For instance, in Case 1, the bias of the NPMLE of $F$ when $\theta=1$ is approximately 1.8 times that corresponding to $\theta=-0.5$ (Figure \ref{Figure1}, top left panel); similar results hold for $\widehat K$ (Figure \ref{Figure1}, top right panel). In Case 2, the bias of the NPMLE of $F$ when $\theta=20.9$ is approximately 2.4 times that corresponding to $\theta=1.86$. \\
\begin{figure}
\centering
\subfigure{\includegraphics[width=0.4\textwidth]{fgm_n_500_biasf}} ~
\subfigure{\includegraphics[width=0.4\textwidth]{fgm_n_500_biask}} \\
\subfigure{\includegraphics[width=0.4\textwidth]{frank_n_500_biasf}} ~
\subfigure{\includegraphics[width=0.4\textwidth]{frank_n_500_biask}} \\
\subfigure{\includegraphics[width=0.4\textwidth]{clayton_n_500_biasf}} ~
\subfigure{\includegraphics[width=0.4\textwidth]{clayton_n_500_biask}} \\
\caption{ Bias of the NPMLE proposed by Shen, at each decile, for the functions $F$ (left) and $K$ (right), and for the FGM, Frank and Clayton copulas (from top to bottom), with sample size $n=500$ and different $\tau$'s.} \label{Figure1}
\end{figure}
As mentioned in Section \ref{Section2}, the bootstrap method can be applied to estimate the standard error of both the marginal distributions and the copula parameter. We have evaluated the performance of the copula-based bootstrap method when estimating the standard error of $\widehat \theta$. To this end, we have computed the ratio between the bootstrap standard error and the true standard deviation of $\widehat \theta$ along 500 Monte Carlo trials (the true standard error was approximated by the Monte Carlo standard deviation). In Table \ref{Table42} we report the mean and the standard deviation of this ratio $Q$ along the simulated runs for the three copula functions with $n=50$ and $n=250$. From this table it is seen that the bootstrap performs well, giving a more accurate estimation of the standard error of $\widehat \theta$ as the sample size increases.
\begin{table}[!ht]
\centering
\begin{tabular}{c c c c }
\hline
n&Copula& mean (Q)& sd(Q)\\
\cline{1-4}
&FGM&0.8804&0.2108\\
50&Clayton&0.9102&0.2019\\
&Frank&1.0828&0.1909\\
\hline
\hline
&FGM&1.0047&0.1135\\
250&Clayton&0.9872&0.1023\\
&Frank&1.0001&0.0900\\
\end{tabular}
\caption {Mean and standard deviation of the quotient $Q$.} \label{Table42}
\end{table}
\section{Real data illustration}
\label{Section4}
\indent For illustration purposes, in this section we consider
epidemiological data on transfusion-related Acquired Immune
Deficiency Syndrome (AIDS). The AIDS Blood Transfusion Data were
collected by the Centers for Disease Control (CDC) from a registry
database, a common source of medical data (see \cite{Bilker96};
\cite{Lawless89}). The variable of interest ($X^*$) is the induction
or incubation time, defined as the time elapsed from Human
Immunodeficiency Virus (HIV) infection to the clinical manifestation
of full-blown AIDS. The CDC AIDS Blood Transfusion Data can be
viewed as doubly truncated. The data were retrospectively
ascertained for all transfusion-associated AIDS cases in which the
diagnosis of AIDS occurred prior to the end of the study, thus
leading to right-truncation. Besides, because HIV was unknown prior
to 1982, any case of transfusion-related AIDS before this time would
not have been properly classified and thus would have been missed.
Hence, in addition to being right-truncated, the observed data are
also truncated from the left. See \cite{Bilker96}, Section 5.2, for further discussion. \\
The data include 494 cases reported to the CDC prior to January 1,
1987, and diagnosed prior to July 1, 1986. Of the 494 cases, 295 had
consistent data, and the infection could be attributed to a single
transfusion or short series of transfusions. Our analyses are
restricted to this subset, which is entirely reported in
\cite{Lawless89}, Table 1. Values of $U^*$ were obtained by
measuring the time from HIV infection to January 1, 1982, while $V^*$ was defined
as the time from HIV infection to the end of the study (July 1, 1986). Note that the difference
between each $V^*$ and its respective $U^*$ is always 4.5 years. \\
More specifically, our goal is to correct the Efron-Petrosian estimator of $F$ for the possible dependence between the AIDS incubation time and the date of HIV infection (left-truncation variable). In order to assess this dependence, in Table \ref{Table5} we report the value of $\widehat \theta$ (as well as the corresponding Kendall's Tau $\tau_\theta$) obtained from the two proposed algorithms (full and simple), for the three copula families (FGM, Clayton and Frank). The number of iterations needed for convergence of each algorithm and copula function is included. Bootstrap standard errors and 95\% confidence intervals based on the bootstrap and the normal approximation are reported too. From Table \ref{Table5} it is seen that (a) the three copulas indicate a positive association between $U^*$ and $X^*$, as anticipated in Section \ref{Section1}, and (b) the full algorithm is more computationally demanding. An exception to conclusion (b) is found for the Clayton copula, for which the full algorithm fails to provide a likely value for $\theta$. A possible explanation is that the full algorithm is unable to get away from the initial values of $(\theta,F,K)$ (the ones corresponding to the independent setting) when using this particular copula. The second and third order derivatives of the Clayton copula are unbounded when $u_1$ and $u_2$ go to zero; so, for small values of $F$ and $K$, the weights $W$ containing these higher order derivatives become very large and dominate both the likelihood function and its optimum. Moreover, the number of iterations needed to reach the optimal value for the Clayton copula is much smaller than for the other copula functions, which indicates that a local rather than the global optimum has possibly been reached. For the FGM and Frank copulas we do not have this problem.
For the FGM copula, however, we note that the optimal value of $\theta$ is reached at the upper limit of the parameter space of this copula. This means that the association is in fact larger than what can be captured by the FGM family; hence this copula function is not well suited to describe the association between the incubation time and the truncation time.\\
In Figure \ref{Simple_model} (simple and full algorithms)
the cumulative df for the incubation times (left panels) and the truncation time $U$ (right panels)
using the three copulas and the NPMLE under independence are jointly depicted. From this figure it is seen that the choice of the copula has some influence on the resulting estimator; however, in general, all the copulas are able to correct, at least partially, the negative (resp. positive) bias of the NPMLE of the incubation time (resp. left-truncation time) distribution under independence. In this respect, we note that only the Frank copula is able to take the full association between the incubation time and the truncation time into account. The FGM copula tries to do this but is restricted by its limited parameter space, and therefore delivers a result between that of the Frank copula and the independence setting. As discussed, the results of the full algorithm based on the Clayton copula should not be taken as realistic.
\begin{table}[!ht]
\centering
\begin{tabular}{c c c c c c}
\hline
Copula& n. iter&$\widehat \theta$&SEboot&Interval&Kendall's $\tau$ \\
\cline{1-6}
FGM&55&0.982&0.3273&(0.3404;1.6235)&0.22\\
Clayton&114&0.487&0.0785&(0.3330;0.6408)&0.20\\
Frank&179&3.35&0.7758&(1.8294;4.8706)&0.38\\
\hline
\hline
FGM&131&1&0.2425&(0.5246;1.4754)&0.22\\
Clayton&25&0.07&0.0584&(-0.0445;0.1845)&0.03\\
Frank&186&3.46&0.6452&(2.1954;4.7245)&0.38\\
\end{tabular}
\caption {Number of iterations, estimated $\theta$, bootstrap standard error, confidence interval and corresponding Kendall's $\tau$ for $\widehat \theta$, using the simple (top) and full (bottom) algorithms. AIDS data.} \label{Table5}
\end{table}
\vspace{-0.3cm}
\begin{figure}[!ht]
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[width=\textwidth]{xvscumf_aids_simple_fgmvsclaytonvsfrank}
\end{minipage} \hfill
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[width=\textwidth]{uvscumk_aids_simple_fgmvsclaytonvsfrank}
\end{minipage} \hfill
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[width=\textwidth]{xvscumf_aids_full_fgmvsclaytonvsfrank}
\end{minipage} \hfill
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[width=\textwidth]{uvscumk_aids_full_fgmvsclaytonvsfrank}
\end{minipage} \hfill
\vspace{-0.2cm}
\vspace{-0.2cm}
\caption{ Cumulative distribution function for the incubation times (left) and the truncation time $U$ (right) using FGM copula (red dashed line), Clayton copula (blue dashed line), Frank copula (green dashed line) and the NPMLE of Efron and Petrosian (black solid line). Simple algorithm (top) and Full algorithm (bottom). AIDS data.}
\label{Simple_model}
\end{figure}
\section{Conclusions}
\label{Section5}
In this paper we have introduced an extension of the Efron-Petrosian NPMLE when
the lifetime and the truncation times may be dependent. We assume that
$U^*$ depends on the lifetime, and that the dependence structure of $(X^*,U^*)$ is given by a copula function, with arguments $\theta$, $F$ and $G$.
Two different algorithms to estimate the parameter $\theta$ and the distributions $F$ and $G$ have been introduced, the full and the simple algorithms.\\
The performance of these two algorithms has been evaluated through simulations for several copula functions and marginal models. Both estimators are asymptotically equivalent in the sense of their convergence to the same solution. The simple algorithm provides MSEs slightly larger than those of the full algorithm, but the full algorithm has revealed itself computationally heavier. The evaluation of the RMSEs allows us to conclude that the simple algorithm is the best option given its good relative performance and computational speed. The systematic bias of the Efron-Petrosian NPMLE under dependence has been evaluated too, being more evident for stronger dependence degrees.\\
In order to estimate the standard error of both the marginal distributions and the copula parameter we have introduced a bootstrap procedure. In our simulation studies the bootstrap performs well, giving a more accurate estimation of the standard error of $\widehat \theta$ with an increasing sample size.\\
A real data illustration has been provided.
We have applied both algorithms to correct the Efron-Petrosian estimator of $F$ for the possible dependence between
AIDS incubation time and the date of HIV infection, for different copula families (FGM, Clayton and Frank). The three copulas indicated a positive association between $U^*$ and $X^*$ under both algorithms. An exception was found for the Clayton copula, for which the full algorithm failed to provide a likely value for $\theta$, a numerical issue probably related to the instability of the second and third order derivatives of the Clayton copula around zero.
\section*{Appendix}
\label{Section6}
With the notations in Section \ref{Section2}, the joint density of $(X^*,U^*)$ conditionally on $U^* \leq X^* \leq U^*+ \phi$ at point $(x,u)$ is given by
\begin{eqnarray*} \frac{\mathcal{C}_{\theta}^{(1,1)}\left( F(x),G(u) \right)f(x) k(u)}{ \displaystyle \int_{u\leq x \leq u+ \phi} \displaystyle \int \mathcal{C}_{\theta}^{(1,1)}\left( F(x),G(u) \right)dF(x)dG(u) }.
\end{eqnarray*}
\noindent This justifies the likelihood (\ref{eq0}). In order to get the NPMLE of $\theta$, $f$ and $k$, we maximize the likelihood function (\ref{eq0}) under the constraints $\displaystyle \sum_{i=1}^nf_i=1$ and $\displaystyle \sum_{i=1}^nk_i=1$. The loglikelihood is given by \\
\small{
\begin{equation}
\log L(\theta, f, k)= \displaystyle \sum_{i=1}^n \left[\log (f_i)+\log (k_i)+\log (W_{ii}^{(1,1)})-\log \left( \displaystyle \sum_{j=1}^n \displaystyle \sum_{m=1}^nW_{jm}^{(1,1)}f_jk_mJ_{mj}\right) \right],
\end{equation}}
\noindent from which
\small{
\begin{eqnarray*}
&& \frac{\partial \log L(\theta, f, k)}{\partial {f_m}}=\\
&=& \frac1{f_m}+ \displaystyle \sum_{i=1}^n \frac {W_{ii}^{(2,1)}}{W_{ii}^{(1,1)}}I_{[X_m \leq X_i ]}-\displaystyle \sum_{i=1}^n \frac{\displaystyle \sum_{j=1}^n \displaystyle \sum_{l=1}^n \left[ W_{jl}^{(2,1)}I_{[X_m \leq X_j]}f_jk_l J_{lj}+W_{jl}^{(1,1)}I_{[X_m = X_j] }k_l J_{lj}\right]}{\displaystyle \sum_{j=1}^n \displaystyle \sum_{l=1}^n W_{jl}^{(1,1)}f_jk_lJ_{lj}}\\
&&= \frac1{f_m}+ \displaystyle \sum_{i=1}^n \frac { W_{ii}^{(2,1)}I_{[X_m\leq X_i]}}{ W_{ii}^{(1,1)}}-
\displaystyle \sum_{i=1}^n \frac {\displaystyle \sum_{j=1}^n \displaystyle \sum_{l=1}^n W_{jl}^{(2,1)}f_jk_lJ_{lj}I_{[X_m \leq X_j]}+\displaystyle \sum_{l=1}^n W_{ml}^{(1,1)}k_lJ_{lm} }{\displaystyle \sum_{j=1}^n \displaystyle \sum_{l=1}^n W_{jl}^{(1,1)}f_jk_lJ_{lj}},
\end{eqnarray*}}
\noindent and similarly
\begin{eqnarray*}
&& \frac{\partial \log L(\theta, f, k)}{\partial {k_m}}=\\
&=& \frac1{k_m}+ \displaystyle \sum_{i=1}^n \frac { W_{ii}^{(1,2)}I_{[U_m\leq U_i]}}{ W_{ii}^{(1,1)}}-
\displaystyle \sum_{i=1}^n \frac {\displaystyle \sum_{j=1}^n \displaystyle \sum_{l=1}^n W_{jl}^{(1,2)}f_jk_lJ_{lj}I_{[U_m \leq U_l]}+\displaystyle \sum_{j=1}^n W_{jm}^{(1,1)}f_jJ_{mj} }{\displaystyle \sum_{j=1}^n \displaystyle \sum_{l=1}^n W_{jl}^{(1,1)}f_jk_lJ_{lj}}.
\end{eqnarray*}
Solving the equation $\frac{\partial \log L(\theta, f, k)}{\partial {f_m}}= 0$ we get\\
\begin{eqnarray*}
\frac 1{f_m}=n \frac{\displaystyle \sum_{j=1}^n \displaystyle \sum_{l=1}^n { W_{jl}^{(2,1)}f_j k_l J_{lj}I_{[X_m \leq X_j]}}+\displaystyle \sum_{l=1}^n W_{ml}^{(1,1)}k_lJ_{lm} }{\displaystyle \sum_{j=1}^n \displaystyle \sum_{l=1}^n W_{jl}^{(1,1)}f_jk_lJ_{lj}}-\displaystyle \sum_{i=1}^n \frac {W_{ii}^{(2,1)}I_{[X_m \leq X_i]}}{ W_{ii}^{(1,1)}}
\end{eqnarray*}
\noindent from which
\small{
\begin{eqnarray*}
\hspace{-2cm}&&f_m=\\
\hspace{-1cm}&=& \frac {\displaystyle \sum_{j=1}^n \displaystyle \sum_{l=1}^n W_{jl}^{(1,1)}f_jk_lJ_{lj}}{n \displaystyle \sum_{j=1}^n \displaystyle \sum_{l=1}^n W_{jl}^{(2,1)}f_j k_l J_{lj}I_{[X_m \leq X_j]}+ n \displaystyle \sum_{l=1}^n W_{ml}^{(1,1)}k_lJ_{lm}-{\displaystyle \sum_{j=1}^n \displaystyle \sum_{l=1}^n W_{jl}^{(1,1)}f_jk_lJ_{lj}}\displaystyle \sum_{i=1}^n \frac { W_{ii}^{(2,1)}I_{[X_m\leq X_i]}}{ W_{ii}^{(1,1)}}}
\end{eqnarray*}}
\begin{eqnarray*}
\hspace{-1cm}&=& \frac \alpha {n A_m+ n K_m^{W}-\alpha B_m}.
\end{eqnarray*}
\noindent Since $\displaystyle \sum_{m=1}^n f_m=1$, we get that
\begin{eqnarray*}
\alpha \displaystyle \sum_{m=1}^n \frac 1{n A_m+ n K_m^{W}-\alpha B_m}=1.
\end{eqnarray*}
\noindent Then,
\begin{eqnarray*}
\alpha= \left[\displaystyle \sum_{m=1}^n \frac 1{n A_m+ n K_m^{W}-\alpha B_m} \right]^{-1}.
\end{eqnarray*}
This proves the score equation (\ref{eq1full}).
To justify the score equation (\ref{eq2full}), from $\frac {\partial \log L (\theta, f, k)}{\partial k_m}=0$ we similarly have\\
\begin{eqnarray*}
\frac 1{k_m}&=& n\frac{\displaystyle \sum_{j=1}^n \displaystyle \sum_{l=1}^n W_{jl}^{(1,2)}f_j k_l J_{lj}I_{[U_m \leq U_l]}+ \displaystyle \sum_{j=1}^n W_{jm}^{(1,1)}f_j J_{mj}}{\alpha}-\displaystyle \sum_{i=1}^n \frac {W_{ii}^{(1,2)}I_{[U_m \leq U_i]}}{ W_{ii}^{(1,1)}}.
\end{eqnarray*}
Solving for $k_m$ gives
\begin{eqnarray*}
k_m&=& \frac \alpha {n C_m+ n F_m^{W}-\alpha D_m},
\end{eqnarray*}
\noindent where $F_m^{W}=\displaystyle \sum_{j=1}^n W_{jm}^{(1,1)}f_j J_{mj}$ and, since $\displaystyle \sum_{m=1}^n k_m=1$, we get (\ref{eq2full}).
\section{Introduction}
\label{sec:intro}
Mathematics is a language which can describe patterns in everyday life as well as abstract concepts existing only in our minds. Patterns exist in data, functions, and sets constructed around a common theme, but the most tangible patterns are visual. Visual demonstrations can help undergraduate students connect to abstract concepts in advanced mathematical courses. The study of partial differential equations, in particular, benefits from numerical analysis and simulation.
Applications of mathematical concepts are also rich sources of visual aids to gain perspective and understanding. Differential equations are a natural way to model relationships between different measurable phenomena. Do you wish to predict the future behavior of a phenomenon? Differential equations do just that--after being developed to adequately match the dynamics involved. For instance, say you are interested in how fast different parts of a frying pan heat up on the stove. Derived from simplifying assumptions about density, specific heat, and the conservation of energy, the heat equation will do just that! In section \ref{sub:pdes} we use the heat equation (\ref{eqn:LinearTestIBVP}), called a test equation, as a control in our investigations of more complicated partial differential equations (PDEs). To clearly see our predictions of the future behavior, we will utilize numerical methods to encode the dynamics modeled by the PDE into a program which does basic arithmetic to approximate the underlying calculus.
To be confident in our predictions, however, we need to make sure our numerical method is developed in a way that keeps errors small for better accuracy. Since the method will compound error upon error at every iteration, the method must manage how the total error grows in a stable fashion. Otherwise, the computed values will ``blow up'' towards infinity--becoming nonnumerical values once they have exceeded the largest number the computer can store. Such instabilities are carefully avoided in commercial simulations using {\it adaptive} methods, such as the Rosenbrock method implemented as {\tt ode23s} in MATLAB \cite{Moler2008}. These adaptive methods reduce the step size as needed to ensure stability, but in turn increase the number of steps required for your prediction. Section \ref{sec:NumericalPDEs} gives an overview of numerical partial differential equations. Burden \cite{Burden2011} and Thomas \cite{Thomas1995} provide great beginner and intermediate introductions to the topic, respectively. In section \ref{sec:analysis} we compare basic and adaptive methods in verifying accuracy, analyzing stability through fundamental definitions and theorems, and finish by tracking oscillations in solutions. Researchers have developed many ways to reduce the effect of numerically induced oscillations which can make solutions appear infeasible \cite{Britz2003, Osterby2003}. Though much work has been done in studying the nature of numerical oscillations in ordinary differential equations \cite{CGraham2016, Gao2015}, some researchers have applied this investigation to nonlinear evolution PDEs \cite{Harwood2011, Lakoba2016P1}. Recently, others have looked at the stability of steady-state and traveling wave solutions to nonlinear PDEs \cite{HarleyMarangell2015,LeVeque2007,Nadin2011}, with more work to be done. We utilize these methods in our parameter analysis in section \ref{sec:parameters} and set up several project ideas for further research.
Undergraduate students have recently published related work, for example, in steady-state and stability analysis \cite{Aron2014, Sarra2011} and other numerical investigations of PDEs \cite{Juhnke2015}.
\section{Numerical Differential Equations}
\label{sec:NumericalPDEs}
In applying mathematics to real-world problems, a differential equation can encode information about how a quantity changes in time or space relative to itself more easily than forming the function directly by fitting the data. The mass of a bacteria colony is such a quantity. In this example, tracking the intervals over which the population's mass doubles can be related to measurements of the population's mass to find its exponential growth function. Differential equations are formed from such relationships. Finding the pattern of this relationship allows us to solve the differential equation for the function we seek. This pattern may be visible in the algebra of the function, but can be even more clear in graphs of numerical solutions.
\subsection{Overview of Differential Equations}
\label{sub:pdes}
An ordinary differential equation (ODE) is an equation involving derivatives with respect to a single variable; its solution is a function which satisfies the given relationship between the function and its derivatives. Because the integration needed to undo each derivative introduces a constant of integration, conditions are added for each derivative to specify a single function. The order of a differential equation is the order of the highest derivative in the equation. Thus, a first order ODE needs one condition while a third order ODE needs three.
\begin{definition}\label{def:IVP}
An initial value problem (IVP) with a first order ODE is defined as
\begin{eqnarray}\label{eqn:firstode}
\frac{dx}{dt} &=& f(x,t)\\\nonumber
x(t_0)&=&x_0,
\end{eqnarray}
where $t$ is the independent variable, $x\equiv x(t)$ is the dependent variable (also called the unknown function) with initial value of $x(t_0)=x_0$, and $f(x,t)$ is the {\it slope function}.
\end{definition}
As relationships between a function and its derivatives, a PDE and an ODE are much alike. Yet PDEs involve multivariable functions and each derivative is a partial derivative in terms of one or more independent variables. Recall that a partial derivative focuses solely on one variable when computing derivatives. For example, $\frac{\partial}{\partial t} e^{-2t}\sin(3x)=-2e^{-2t}\sin(3x)$. Similar to the ways ordinary derivatives are notated, partial derivatives can be written in operator form or abbreviated with subscripts (e.g. $\frac{\partial^2 u}{\partial x^2} = u_{xx}$).
{\it Linear} PDEs are composed of a sum of scalar multiples of the unknown function, its derivatives, as well as functions of the independent variables. A PDE is {\it nonlinear} when it has a term which is not a scalar multiple of an unknown, such as $\rho u(1-u)$ in (\ref{eqn:FisherIBVP}) or an arbitrary function of the unknown. To introduce problems involving PDEs, we begin with the simplest type of boundary conditions, named after mathematician Peter Dirichlet (1805-1859), and a restriction to first order in time (called evolution PDEs). Note that the number of conditions needed for a unique solution to a PDE is the total of the orders in each independent variable \cite{Thomas1995}. A sufficient number of conditions, however, does not by itself prove uniqueness. The maximum principle and energy method are two ways uniqueness of a solution can be proven \cite{Evans2010}, but such analysis is beyond the scope of this chapter.
\begin{definition}\label{def:IBVP}
An initial boundary value problem (IBVP) with a first order (in time) evolution PDE with Dirichlet boundary conditions is defined as
\begin{eqnarray}\label{eqn:IBVP}
\frac{\partial u}{\partial t} &=& f\left(x,t, u,\frac{\partial u}{\partial x},\frac{\partial^2 u}{\partial x^2},...\right)\\ \nonumber
u(x,0) = u_0(x),\\ \nonumber
u(0,t) = a\\ \nonumber
u(L,t) = b
\end{eqnarray}
where $x,t$ are the independent variables, $u$ is the dependent variable (also called the unknown function) with initial value $u(x,0)=u_0(x)$ and boundary values $u=a,b$ whenever $x=0,L$ respectively, and $f$ can be any combination of the independent variables and any spatial partials of the dependent variable.
\end{definition}
\begin{example}
Let us analyze the components of the following initial boundary value problem:
\begin{eqnarray}\label{eqn:FisherIBVP}
u_t = \delta u_{xx} + \rho u(1-u),\\\nonumber
u(x,0) = u_0(x),\\\nonumber
u(0,t) = 0\\\nonumber
u(10,t) = 1
\end{eqnarray}
First, the PDE is nonlinear due to the $u(1-u)$ term. Second, the single initial condition matches the 1st order in time ($u_t$) and the two boundary values match the 2nd order in space ($u_{xx}$). Thus, this IBVP has the number of conditions needed for a unique solution, which supports but does not prove uniqueness. Third, the parameters $\delta,\rho$ and the initial profile function $u_0(x)$ are kept unspecified.
\end{example}
This reaction-diffusion equation is known as the Fisher-KPP equation for the four mathematicians who all provided great analytical insight into it: Ronald Fisher (1890-1962), Andrey Kolmogorov (1903-1987), Ivan Petrovsky (1901-1973), and Nikolaj Piscounov (1908-1977) \cite{Fisher1937, Kolmogorov1991}. Though it is more generally defined as an IVP, in this chapter we study it in its simpler IBVP form. Coefficients $\delta,\rho$ represent the diffusion and reaction rates, and varying their values leads to many interesting behaviors. The Fisher-KPP equation models how a quantity switches between phases, such as genes switching to advantageous alleles, the setting in which it was originally studied \cite{Fisher1937}.
The form of the initial condition function, $u_0(x)$, is kept vague due to the breadth of physically meaningful and theoretically interesting functions which could initialize our problem. Thus, we will use the polynomial fitting functions {\tt polyfit()} and {\tt polyval()} in example code {\tt PDE\_Analysis\_Setup.m} to set up a polynomial of any degree which best goes through the boundary points and other provided points. This description of the initial condition allows us to explore functions constrained by their shape within the bounds of the equilibrium point $\bar{u}$ analyzed in section \ref{sub:steadystate}.
\begin{figure}[t]
\centerline{
\includegraphics[scale=.45]{Harwood_demo_linear_ode23s_fig.pdf}
\includegraphics[scale=.45]{Harwood_demo_Fisher_ode23s_fig.pdf}
}
\caption{Comparison of numerical solutions using the adaptive Rosenbrock method ({\tt ode23s} in MATLAB) for the (left) linear Test equation (\ref{eqn:LinearTestIBVP}) using $\rho=0$ and (right) Fisher-KPP equation (\ref{eqn:FisherIBVP}) using $\rho=1$, where all other parameters use the default values of $a=0, b=1, L=10, \delta=1, \Delta x=0.05, {\rm degree}=2, c=\frac{1}{3}$ and initial condition from line 8 in {\tt PDE\_Analysis\_Setup.m} found in Appendix \ref{App:Setup}.}
\label{fig:lf5ode23s}
\end{figure}
\begin{exercise}
Consider the PDE
\begin{equation}\label{eqn:ex:PDE}
u_t = 4u_{xx}.
\end{equation}
\renewcommand{\labelenumi}{\alph{enumi})}
\begin{enumerate}
\item Determine the order in time and space and how many initial and boundary conditions are needed to define a unique solution.
\item Using Definition (\ref{def:IBVP}) as a guide, write out the IBVP for an unknown $u(x,t)$ such that it has an initial profile of $\sin(x)$, boundary value of $0$ whenever $x=0$ and $x=\pi$, and is defined for $0\leq x\leq \pi,t\geq0$.
\item Verify that you have enough initial and boundary conditions as determined previously.
\item Verify that the function,
\begin{equation}
u(x,t)=e^{-4t}\sin(x),
\end{equation}
is a solution to equation (\ref{eqn:ex:PDE}) by evaluating both sides of the PDE and checking the initial and boundary conditions.
\end{enumerate}
\renewcommand{\labelenumi}{\arabic{enumi}.}
\end{exercise}
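A quick numerical spot check can complement the algebraic verification in the exercise above. The following Python sketch (an illustrative helper, not one of the chapter's MATLAB programs) approximates both sides of the PDE by centered differences at sample points and measures the discrepancy:

```python
import numpy as np

# Numerical spot check (not a proof) that u(x,t) = exp(-4t) sin(x)
# satisfies u_t = 4 u_xx: approximate both sides by centered differences.
u = lambda x, t: np.exp(-4 * t) * np.sin(x)

x = np.linspace(0.1, np.pi - 0.1, 50)   # sample interior points
t, h = 0.5, 1e-4                        # sample time and difference step

u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2

residual = np.max(np.abs(u_t - 4 * u_xx))  # small, up to truncation error
```

A small residual supports, but does not replace, the algebraic verification.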
Error is more difficult to analyze for nonlinear PDEs, so it is helpful to have an associated linear version of your equation to analyze first. We will compare our analysis of reaction-diffusion equations to the Test equation,
\begin{eqnarray}\label{eqn:LinearTestIBVP}
u_t = \delta u_{xx},\\\nonumber
u(x,0) = u_0(x),\\\nonumber
u(0,t) = 0\\\nonumber
u(10,t) = 1
\end{eqnarray}
which is the heat equation in one dimension with constant heat forced at the two end points \cite{Burden2011}. Note that this is not a direct linearization of the Fisher-KPP equation (\ref{eqn:FisherIBVP}), but it behaves similarly for large values of $\delta$. Figure \ref{fig:lf5ode23s} provides a comparison of the solutions to the linear Test equation (\ref{eqn:LinearTestIBVP}) and the Fisher-KPP equation (\ref{eqn:FisherIBVP}) for $\delta=1$. Note how the step size in time for both solutions increases dramatically to large, even spacing as the solution nears the steady-state solution. Adaptive methods, like MATLAB's {\tt ode23s}, adjust to a larger step size as the change in the solution diminishes.
\subsection{Overview of Numerical Methods}
\label{sub:methods}
Numerical methods are algorithms which solve problems using arithmetic computations instead of algebraic formulas. They provide quick visualizations and approximations of solutions to problems which are difficult or less helpful to solve exactly.
Numerical methods for differential equations began with methods for approximating integrals: starting with left and right Riemann sums, then progressing to the trapezoidal rule, Simpson's rule, and others to increase accuracy more and more efficiently. Unfortunately, the value of the slope function for an ODE is often unknown, so such approximations require modifications, such as Taylor series expansions, to predict and correct slope estimates. Such methods for ODEs can be directly applied to evolution PDEs (\ref{eqn:IBVP}). Discretizing in space, we create a system of ordinary differential equations in a vector ${\bf U}(t)$ whose components $U_m(t)$ approximate the unknown function $u(x,t)$ at discrete points $x_m$. The coefficients of linear terms are grouped into a matrix $D(t)$ and nonlinear terms are left in a vector function ${\bf R}(t,{\bf U})$. In the following analysis, we will assume that $t$ does not appear explicitly in the matrix $D$ or the nonlinear vector ${\bf R}({\bf U})$, which yields the general form of a reaction-diffusion model
\begin{equation}\label{eqn:generalRD}
\frac{d {\bf U}}{dt} = D {\bf U} + {\bf R}({\bf U})+{\bf B}.
\end{equation}
\begin{example}\label{ex:discretization}
We will discretize the Fisher-KPP equation (\ref{eqn:FisherIBVP}) in space using default parameter values $a=0, b=1, L=10, \delta=1, \Delta x=0.05, \rho=1$ in {\tt PDE\_Analysis\_Setup.m} found in Appendix \ref{App:Setup}. See Figure \ref{fig:lf5ode23s} (right) for the graph. Evenly dividing the interval $[0,10]$ with $\Delta x=0.05=\frac{1}{20}$ results in 199 spatial points, $x_m$, where the function is unknown (plus the two end points where it is known: $U_0=a,U_{200}=b$). Using a centered difference approximation of $U_{xx}$ \cite{Burden2011},
\begin{eqnarray}
\left( U_{xx}\right)_1 &\approx & \frac{a-2U_1+U_{2}}{\Delta x^2},\\ \nonumber
\left( U_{xx}\right)_m &\approx & \frac{U_{m-1}-2U_m+U_{m+1}}{\Delta x^2},\ 2\leq m\leq 198,\\ \nonumber
\left( U_{xx}\right)_{199} &\approx & \frac{U_{198}-2U_{199}+b}{\Delta x^2},
\end{eqnarray}
the discretization of (\ref{eqn:FisherIBVP}) can be written as
\begin{eqnarray}
\frac{d {\bf U}}{dt} &=& D {\bf U} + {\bf R}({\bf U})+{\bf B},\\ \nonumber
D &=& \frac{\delta}{\Delta x^2} \left[ \begin{array}{cccc}
-2 & 1 &... &0\\
1 & -2 & \ddots &... \\
...&\ddots &\ddots &1\\
0 &...& 1 & -2
\end{array}\right]\\ \nonumber
{\bf R}({\bf U}) &=& \rho \left[ \begin{array}{c}
U_1(1-U_1)\\
...\\
U_{199}(1-U_{199})\\
\end{array}\right] = \rho\left(I-{\rm diag}({\bf U})\right){\bf U}\\ \nonumber
{\bf B} &=& \frac{\delta}{\Delta x^2}\left[ \begin{array}{c}
a\\
0\\
...\\
0\\
b\\
\end{array}\right]
\end{eqnarray}
with a tridiagonal matrix $D$, a nonlinear vector function ${\bf R}$ which can be written as a matrix product using a diagonal matrix formed from a vector ({\tt diag()}), and a sparse constant vector ${\bf B}$ which collects the boundary information.
\end{example}
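The arrays in this example can be assembled directly. The following Python sketch (an illustrative translation; the chapter's own code is the MATLAB script {\tt PDE\_Analysis\_Setup.m}) builds $D$, ${\bf R}$, and ${\bf B}$ for the default parameters:

```python
import numpy as np

# Spatial discretization of the Fisher-KPP IBVP with the default
# parameters a=0, b=1, L=10, delta=1, rho=1, dx=0.05 (199 interior points).
a, b, L = 0.0, 1.0, 10.0
delta, rho, dx = 1.0, 1.0, 0.05
M = int(round(L / dx)) - 1          # number of interior unknowns

# Tridiagonal diffusion matrix D = (delta/dx^2) * tridiag(1, -2, 1).
off = np.ones(M - 1)
D = (delta / dx**2) * (np.diag(-2.0 * np.ones(M)) + np.diag(off, 1) + np.diag(off, -1))

# Nonlinear reaction vector R(U) = rho * U .* (1 - U), componentwise.
R = lambda U: rho * U * (1 - U)

# Sparse boundary vector B carries the Dirichlet values a and b.
B = np.zeros(M)
B[0] = delta * a / dx**2
B[-1] = delta * b / dx**2
```

These arrays correspond term by term to the $D$, ${\bf R}$, and ${\bf B}$ displayed in the example.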
By the Fundamental Theorem of Calculus \cite{Burden2011}, the exact solution to (\ref{eqn:generalRD}) over a small interval of time $\Delta t$ is found by integration from $t_n$ to $t_{n+1}=t_n+\Delta t$ as
\begin{equation}\label{eqn:numerical-integral}
{\bf U}^{n+1} = {\bf U}^n + \int_{t_n}^{t_{n+1}}\left(D {\bf U}(t) + {\bf R}({\bf U(t)})+{\bf B}\right)dt,
\end{equation}
where each component $U_m(t)$ has been discretized in time to create an array of components $U_m^n$ approximating the solution $u(x_m,t_n)$.
Note that having the unknown function ${\bf U}(t)$ inside the integral (\ref{eqn:numerical-integral}) makes it impossible to integrate exactly, so we must approximate. Approximating with a left Riemann sum results in the Forward Euler (a.k.a. classic Euler) method \cite{Moler2008},
\begin{equation}\label{eqn:FWDEuler}
{\bf U}^{n+1} = {\bf U}^n + \Delta t\left(D {\bf U}^n + {\bf R}({\bf U}^n)+{\bf B}\right),
\end{equation}
while approximating with a right Riemann sum results in the Backward Euler method \cite{Moler2008}
\begin{equation}\label{eqn:BWDEuler}
{\bf U}^{n+1} = {\bf U}^n + \Delta t\left(D {\bf U}^{n+1} + {\bf R}({\bf U}^{n+1})+{\bf B}\right).
\end{equation}
Although approximating integrals with left and right Riemann sums is a similar task, in solving differential equations they can be very different. Forward Euler (\ref{eqn:FWDEuler}) is referred to as an {\it explicit} method since the unknown ${\bf U}^{n+1}$ can be directly computed in terms of known quantities such as the current approximation ${\bf U}^n$, while Backward Euler (\ref{eqn:BWDEuler}) is referred to as an {\it implicit} method since the unknown ${\bf U}^{n+1}$ is solved for in terms of both known ${\bf U}^n$ and unknown ${\bf U}^{n+1}$ quantities. Explicit methods are simple to set up and compute, while implicit methods may not be solvable at all. If we set ${\bf R}({\bf U})\equiv {\bf 0}$ to make equation (\ref{eqn:generalRD}) linear, then an implicit method can easily be written in explicit form, as shown in Example \ref{ex:BWD}. Otherwise, an unsolvable implicit method can be approximated with a numerical root-finding method such as Newton's method (\ref{eqn:Newton}), discussed in section \ref{sub:Newtonmethod}, but nesting numerical methods is much less efficient than implementing an explicit method, which employs a truncated Taylor series to mathematically approximate the unknown terms. The main reason to use implicit methods is stability, addressed in section \ref{sub:stability}. The following examples demonstrate how to form the two-level matrix form.
\begin{definition}\label{def:two-level}
A two-level numerical method for an evolution equation (\ref{eqn:IBVP}) is an iteration which can be written in the two-level matrix form
\begin{equation}\label{eqn:two-level_method}
{\bf U}^{n+1}=M\ {\bf U}^{n}+{\bf N},
\end{equation}
where $M$ is the combined transformation matrix and ${\bf N}$ is the
resultant vector. Note, both $M$ and ${\bf N}$ may update every iteration, especially when the PDE is nonlinear, but for many basic problems, $M$ and ${\bf N}$ will be constant.
\end{definition}
\begin{example}\label{ex:FWD}(Forward Euler)
Determine the two-level matrix form for the Forward Euler method for the Fisher-KPP equation (\ref{eqn:FisherIBVP}). Since Forward Euler is already explicit, we simply factor out the ${\bf U}^n$ components from equation (\ref{eqn:FWDEuler}) to form
\begin{eqnarray}
{\bf U}^{n+1} &=& {\bf U}^n + \Delta t\left(D {\bf U}^n + \rho \left(I-{\rm diag}({\bf U}^n)\right){\bf U}^n+{\bf B}\right),\\ \nonumber
&=& M{\bf U}^n + {\bf N},\\ \nonumber
M &=& \left(I + \Delta t D +\Delta t\rho \left(I-{\rm diag}({\bf U}^n)\right)\right),\\ \nonumber
{\bf N} &=& \Delta t {\bf B},
\end{eqnarray}
where $I$ is the identity matrix, ${\bf N}$ is constant, and $M$ updates with each iteration since it depends on ${\bf U}^n$.
\end{example}
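The two-level form in this example can be spot-checked numerically. The Python sketch below (an illustrative translation with small made-up parameters, not the chapter's defaults) confirms that one step of $M{\bf U}^n+{\bf N}$ agrees with a direct evaluation of the Forward Euler update (\ref{eqn:FWDEuler}):

```python
import numpy as np

# Compare the two-level form M U^n + N against the direct Forward Euler
# update for the Fisher-KPP semi-discretization.  Illustrative parameters.
delta, rho, dx, dt = 1.0, 1.0, 0.5, 0.05
a, b, M_pts = 0.0, 1.0, 19               # 19 interior points on [0, 10]

off = np.ones(M_pts - 1)
D = (delta / dx**2) * (np.diag(-2.0 * np.ones(M_pts)) + np.diag(off, 1) + np.diag(off, -1))
B = np.zeros(M_pts)
B[0], B[-1] = delta * a / dx**2, delta * b / dx**2

U = np.linspace(a, b, M_pts + 2)[1:-1]   # simple linear initial profile
I = np.eye(M_pts)

# Two-level form: M_mat must be rebuilt each step since it depends on U^n.
M_mat = I + dt * D + dt * rho * (I - np.diag(U))
N_vec = dt * B
U_twolevel = M_mat @ U + N_vec

# Direct Forward Euler update for comparison.
U_direct = U + dt * (D @ U + rho * U * (1 - U) + B)
```

Up to floating-point roundoff, the two updates agree, which is a useful sanity check before coding a full time-stepping loop.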
\begin{example}\label{ex:BWD}(Linear Backward Euler)
Determine the two-level matrix form for the Backward Euler method for the Test equation (\ref{eqn:LinearTestIBVP}). In the Backward Euler method (\ref{eqn:BWDEuler}), the unknown ${\bf U}^{n+1}$ terms can be grouped and the coefficient matrix $I-\Delta t D$ inverted to write it explicitly as
\begin{eqnarray}
{\bf U}^{n+1} &=& M{\bf U}^n+{\bf N},\\ \nonumber
M &=& \left(I - \Delta t D \right)^{-1}\\ \nonumber
{\bf N} &=& \Delta t\left(I - \Delta t D \right)^{-1}{\bf B}
\end{eqnarray}
where the method matrix $M$ and additional vector ${\bf N}$ are constant.
\end{example}
Just as the trapezoid rule takes the average of the left and right Riemann sums, the Crank-Nicolson method (\ref{eqn:CN}) averages the Forward and Backward Euler methods \cite{CrankNicolson1947}.
\begin{equation}\label{eqn:CN}
{\bf U}^{n+1} = {\bf U}^n + \frac{\Delta t}{2} D\left( {\bf U}^{n} + {\bf U}^{n+1}\right) + \frac{\Delta t}{2}\left({\bf R}({\bf U}^{n}) + {\bf R}({\bf U}^{n+1})\right)+\Delta t{\bf B}.
\end{equation}
One way to truncate an implicit method for a nonlinear equation into an explicit method is a {\it semi-implicit} method \cite{Burden2011}, which treats the nonlinearity as known information (evaluated at the current time $t_n$), leaving the unknown linear terms at $t_{n+1}$. For example, the semi-implicit Crank-Nicolson method is
\begin{equation}\label{eqn:CN_SI}
{\bf U}^{n+1} = \left(I - \frac{\Delta t}{2}D\right)^{-1} \left({\bf U}^n + \frac{\Delta t}{2}D {\bf U}^n + \Delta t {\bf R}({\bf U}^n) +\Delta t{\bf B}\right).
\end{equation}
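A Python sketch of this update (illustrative parameters; the chapter's MATLAB implementation is {\tt CrankNicolson\_SI.m} in Appendix \ref{App:CN}):

```python
import numpy as np

# Semi-implicit Crank-Nicolson stepping: diffusion is treated
# implicitly, the reaction R(U^n) explicitly.  Parameters here are
# small illustrative values, not the chapter's defaults.
delta, rho, dx, dt = 1.0, 1.0, 0.5, 0.25
a, b, M = 0.0, 1.0, 19                   # 19 interior points on [0, 10]

off = np.ones(M - 1)
D = (delta / dx**2) * (np.diag(-2.0 * np.ones(M)) + np.diag(off, 1) + np.diag(off, -1))
B = np.zeros(M)
B[0], B[-1] = delta * a / dx**2, delta * b / dx**2
R = lambda U: rho * U * (1 - U)

A = np.eye(M) - 0.5 * dt * D             # constant matrix, could be factored once
U = np.linspace(a, b, M + 2)[1:-1]       # linear initial profile
for _ in range(100):
    rhs = U + 0.5 * dt * (D @ U) + dt * (R(U) + B)
    U = np.linalg.solve(A, rhs)
```

Only the explicitly treated reaction term limits the step size here, which is the main appeal of the semi-implicit splitting.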
\begin{exercise}
\renewcommand{\labelenumi}{\alph{enumi})}
After reviewing Example \ref{ex:FWD} and Example \ref{ex:BWD}, complete the following for the semi-implicit Crank-Nicolson method (\ref{eqn:CN_SI}).
\begin{enumerate}
\item Determine the two-level matrix form for the Test equation (\ref{eqn:LinearTestIBVP}). Note, set ${\bf R = 0}$.
\item *Determine the two-level matrix form for the Fisher-KPP equation (\ref{eqn:FisherIBVP}).
\end{enumerate}
\renewcommand{\labelenumi}{\arabic{enumi}.}
*See section \ref{sub:stability} for the answer and its analysis.
\end{exercise}
Taylor series expansions can be used to prove that while the Crank-Nicolson method (\ref{eqn:CN}) for the linear Test equation (\ref{eqn:LinearTestIBVP}) is second order accurate (see Definition \ref{def:order-accuracy} in Appendix \ref{App:AccuracyProof}), the semi-implicit Crank-Nicolson method (\ref{eqn:CN_SI}) for a nonlinear PDE is only first order accurate in time. To increase the order of accuracy, unknown terms in an implicit method can be approximated using a more accurate explicit method. For example, blending the Crank-Nicolson method with Forward Euler approximations creates the Improved Euler Crank-Nicolson method, which is second order accurate in time for nonlinear PDEs.
\begin{eqnarray}\label{eqn:CN_IE}
{\bf U}^* &=& {\bf U}^n + \Delta t\left(D {\bf U}^n + {\bf R}({\bf U}^n)+{\bf B}\right),\\\nonumber
{\bf U}^{n+1} &=& {\bf U}^n + \frac{\Delta t}{2} D\left( {\bf U}^{n} + {\bf U}^{*}\right) + \frac{\Delta t}{2}\left({\bf R}({\bf U}^{n}) + {\bf R}({\bf U}^{*})\right)+\Delta t{\bf B}.
\end{eqnarray}
This improved Euler Crank-Nicolson method (\ref{eqn:CN_IE}) is part of the family of Runge-Kutta methods which embed a sequence of truncated Taylor expansions for implicit terms to create an explicit method of any given order of accuracy \cite{Burden2011}.
Proofs of the accuracy for the semi-implicit (\ref{eqn:CN_SI}) and improved Euler (\ref{eqn:CN_IE}) methods are included in Appendix \ref{App:AccuracyProof}.
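The predictor-corrector structure of (\ref{eqn:CN_IE}) can be sketched in a few lines of Python (illustrative parameters only, not the chapter's MATLAB code):

```python
import numpy as np

# One step of the Improved Euler Crank-Nicolson method: a Forward
# Euler predictor U* supplies the unknown terms of Crank-Nicolson.
delta, rho, dx, dt = 1.0, 1.0, 0.5, 0.05
a, b, M = 0.0, 1.0, 19                   # 19 interior points on [0, 10]

off = np.ones(M - 1)
D = (delta / dx**2) * (np.diag(-2.0 * np.ones(M)) + np.diag(off, 1) + np.diag(off, -1))
B = np.zeros(M)
B[0], B[-1] = delta * a / dx**2, delta * b / dx**2
R = lambda U: rho * U * (1 - U)

U = np.linspace(a, b, M + 2)[1:-1]       # linear initial profile
Ustar = U + dt * (D @ U + R(U) + B)      # predictor (Forward Euler)
Unew = U + 0.5 * dt * (D @ (U + Ustar) + R(U) + R(Ustar)) + dt * B  # corrector
```

Since the corrector is fully explicit, the method keeps the simplicity of Forward Euler while averaging the slope information at both time levels.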
\begin{exercise}
\renewcommand{\labelenumi}{\alph{enumi})}
After reviewing Example \ref{ex:FWD} and Example \ref{ex:BWD}, complete the following for the Improved Euler Crank-Nicolson method (\ref{eqn:CN_IE}).
\begin{enumerate}
\item Determine the two-level matrix form for the Test equation (\ref{eqn:LinearTestIBVP}). Note, set ${\bf R = 0}$.
\item Determine the two-level matrix form for the Fisher-KPP equation (\ref{eqn:FisherIBVP}).
\end{enumerate}
\renewcommand{\labelenumi}{\arabic{enumi}.}
\end{exercise}
\subsection{Overview of Software}
\label{sub:software}
Several software packages have been developed to compute numerical methods. Commercially, MATLAB, Mathematica, and Maple are the best for analyzing such methods, though there is other commercial software, like COMSOL, which can do numerical simulation with much less work on your part. Open-source software capable of the same (or similar) numerical computations, such as Octave, SciLab, and FreeFEM, is also available. Once the analysis is complete and methods are fully tested, simulation algorithms are trimmed down and often translated into Fortran or C/C++ for efficiency at run time.
We will focus on programming in MATLAB, created by mathematician Cleve Moler (born in 1939), one of the authors of the LINPACK and EISPACK scientific subroutine libraries called from Fortran and C/C++ programs \cite{Moler2008}. Cleve Moler originally created MATLAB to give his students easy access to these subroutines without having to write in Fortran or C themselves. In the same spirit, we will be working with simple demonstration programs, listed in the appendix, to access the core ideas needed for our numerical investigations. Programs {\tt PDE\_Solution.m} (Appendix \ref{App:Solutions}), {\tt PDE\_Analysis\_Setup.m} (Appendix \ref{App:Setup}), and {\tt Method\_Accuracy\_Verification.m} (Appendix \ref{App:Accuracy}) are MATLAB scripts, which means they can be run without any direct input and leave all computed variables publicly available to analyze after they are run. Programs {\tt CrankNicolson\_SI.m} (Appendix \ref{App:CN}) and {\tt Newton\_System.m} (Appendix \ref{App:Newton}) are MATLAB functions, which means they may require inputs to run, keep all their computations private, and can be effectively embedded in other functions or scripts. All demonstration programs are run through {\tt PDE\_Solution.m}, which is the main program for this group.
\begin{example}\label{ex:demo}
The demonstration programs can be either downloaded from the publisher or typed into five separate MATLAB files and saved according to the name at the top of the file (e.g. {\tt PDE\_Analysis\_Setup.m}). To run them, open MATLAB to the folder which contains these five programs. In the command window, type {\tt help PDE\_Solution} to view the comments in the header of the main program. Then type {\tt PDE\_Solution} to run the default demonstration. This will solve and analyze the Fisher-KPP equation (\ref{eqn:FisherIBVP}) using the default parameters, produce five graph windows, and report three outputs on the command window. The first graph is the numerical solution using MATLAB's built-in implementation of the Rosenbrock method ({\tt ode23s}), which is also demonstrated in Figure \ref{fig:lf5ode23s} (right). The second graph plots the comparable eigenvalues for the semi-implicit Crank-Nicolson method (\ref{eqn:CN_SI}) based upon the maximum $\Delta t$ step used in the chosen method (Rosenbrock by default). The third graph shows a different steady-state solution to the Fisher-KPP equation (\ref{eqn:FisherIBVP}) found using Newton's method (\ref{eqn:Newton}). The fourth graph shows the rapid reduction of the error of this method as the Newton iterations converge. The fifth graph shows the instability of the Newton steady-state solution by feeding a noisy perturbation of it back into the Fisher-KPP equation (\ref{eqn:FisherIBVP}) as an initial condition. This noisy perturbation is compared to round-off perturbation in Figure \ref{fig:lf5convergence} to see how long this Newton steady-state solution can endure. Notice that the solution converges back to the original steady-state solution found in the first graph.
\end{example}
To use the semi-implicit Crank-Nicolson method (\ref{eqn:CN_SI}) instead of MATLAB's {\tt ode23s}, complete the following exercise. The second graph will then plot the actual eigenvalues of this method.
\begin{exercise}
Open {\tt PDE\_Solution.m} in the MATLAB Editor. Then comment lines 7-9 (type a \% in front of each line) and uncomment lines 12-15 (remove the \% in front of each line). Run {\tt PDE\_Solution}. Verify that the second graph matches Figure \ref{fig:f5eig}.
\end{exercise}
The encoded semi-implicit Crank-Nicolson method (\ref{eqn:CN_SI}) uses a fixed step size $\Delta t$, so it is generally not as stable as MATLAB's built-in solver. It would be best to now uncomment lines 7-9 and comment lines 12-15 to return to the default form before proceeding. The main benefit of the {\tt ode23s} solver is that it adaptively chooses the optimal $\Delta t$ step size, adjusting it for regions where the equation is easier or harder to solve. This solver is also designed for {\it stiff} problems, where stability conditions are complicated or varying. MATLAB has several other built-in solvers to handle various situations. You can explore these by typing {\tt help ode}.
Once you have run the default settings, open up {\tt PDE\_Analysis\_Setup} in the editor and tweak the equation parameter values {\tt a,b,L,delta,rho,degree,c} and logistical parameter {\tt dx}. After each tweak, make sure you run the main program {\tt PDE\_Solution}. The logistical parameters {\tt tspan,dt} for the numerical method can also be tweaked in {\tt PDE\_Solution}, and an inside view of Newton iterations can be seen by uncommenting lines 38-39. Newton's method is covered in Section \ref{sub:Newtonmethod}. A solution with Newton's method is demonstrated in Figure \ref{fig:f1steadystate}(left), while all of the iterations are graphed in Figure \ref{fig:f1steadystate}(right). Note that Figure \ref{fig:f1steadystate}(right) is very similar to a solution which varies over time, but it is not. The graph of the iterations demonstrates how Newton's method seeks better and better estimates of a fixed steady-state solution discussed in Section \ref{sub:steadystate}.
\begin{exercise}
In {\tt PDE\_Analysis\_Setup}, set parameter values, $a=0, b=0, L=10, \delta = \frac{1}{10}, \Delta x=\frac{1}{20}, \rho=1, {\rm degree}=2, c=1$. Then, in {\tt PDE\_Solution}, uncomment lines 38-39 and run it. Verify that the third and fourth graphs matches Figure \ref{fig:f1steadystate}.
\end{exercise}
Notice that the iterations of Newton's method in Figure \ref{fig:f1steadystate}(right) demonstrate oscillatory behavior in the form of waves which diminish in amplitude towards the steady-state solution. These are referred to as stable numerical oscillations similar to the behavior of an underdamped spring \cite{Burden2011}. These stable oscillations suggest that the steady-state solution is stable (attracting other initial profiles to it), but due to the negative values in the solution in Figure \ref{fig:f1steadystate}(left), it is actually an unstable steady-state for Fisher-KPP equation (\ref{eqn:FisherIBVP}). You can see this demonstrated in the last graph plotted when you ran {\tt PDE\_Solution}, where there is a spike to a value around $-4 \times 10^{12}$. This paradox demonstrates that not all steady-state solutions are stable and that the stability of Newton's method differs from the stability of a steady-state solution to an IBVP.
\begin{figure}[t]
\centerline{
\includegraphics[scale=.35]{Harwood_smalldelta_SO_Newton_fig.pdf}
\includegraphics[scale=.45]{Harwood_smalldelta_SO_Newton_iterations_fig.pdf}
}
\caption{An example steady-state solution using Newton's method (left) and the iterations to that steady-state (right) using the parameter values $a=0, b=0, L=10, \delta = \frac{1}{10}, \Delta x=\frac{1}{20}, \rho=1, {\rm degree}=2, c=1$ and initial condition from line 8 in {\tt PDE\_Analysis\_Setup.m} found in Appendix \ref{App:Setup}, with lines 38-39 uncommented in {\tt PDE\_Solution.m} found in Appendix \ref{App:Solutions}.}
\label{fig:f1steadystate}
\end{figure}
Some best practices of programming in MATLAB are to clean up before running new computations, preallocate memory, and store calculations which are used more than once. Before new computations are stored in a script file, you can clean up your view of previous results in the command window ({\tt clc}), delete previously held values and structure of all ({\tt clear all}) or selected ({\tt clear {\it name1, name2}}) variables, and close all ({\tt close all}) or selected ({\tt close handle1, handle2}) figures. The workspace for running the main program {\tt PDE\_Solution} is actually cleared in line 5 of {\tt PDE\_Analysis\_Setup} so that this supporting file can be run independently when needed.
When you notice the same calculation being computed more than once in your code, store it as a new variable to trim down the number of calculations for increased efficiency. Most importantly, preallocate the structure of a vector or matrix that you will fill in, using initial zeros ({\tt zeros(rows,columns)}), so that MATLAB does not create multiple copies of the variable in memory as you fill it in. The code {\tt BCs = zeros(M-2,1);} in line 23 of {\tt PDE\_Analysis\_Setup.m} is an example of preallocation for a vector. Preallocation is one of the most effective ways of speeding up slow code.
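The preallocation advice carries over to other languages as well. A small Python illustration (hypothetical values) contrasts growing an array one entry at a time, which forces a reallocation and copy at every step, with filling a preallocated one:

```python
import numpy as np

# Growing an array element by element: np.append reallocates and
# copies the whole array at every iteration.
n = 2000
grown = np.array([])
for k in range(n):
    grown = np.append(grown, k)

# Preallocating once (like MATLAB's zeros(rows, columns)) and filling
# in place avoids all of those copies.
prealloc = np.zeros(n)
for k in range(n):
    prealloc[k] = k
```

Both loops produce the same values, but the preallocated version does a single allocation instead of $n$ of them.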
\begin{exercise}\label{prob:preallocate}
Find the four (total) lines of code in {\tt Newton\_System.m}, {\tt Method\_ Accuracy\_Verification.m}, and {\tt CrankNicolson\_SI.m} which preallocate a variable.
\end{exercise}
\section{Error Analysis}
\label{sec:analysis}
To encourage confidence in the numerical solution, it is important to support the theoretical results with numerical demonstrations. For example, a theoretical condition for stability or oscillation-free behavior can be demonstrated by comparing solutions before and after the condition within a small neighborhood of it. On the other hand, the order of accuracy can be demonstrated by comparing subsequent solutions over a sequence of step sizes, as we will see in section \ref{sub:verifyerror}. Demonstrating stability ensures {\it when}, while accuracy ensures {\it how rapidly}, the approximation will converge to the true solution. Showing when oscillations begin to occur prevents any confusion over the physical dynamics being simulated, as we will investigate in section \ref{sub:oscillations}.
\subsection{Verifying Accuracy}
\label{sub:verifyerror}
Since numerical methods for PDEs use arithmetic to approximate the underlying calculus, we expect some error in our results: inaccuracy, the distance from our target solution, as well as imprecision, the variation among our approximations. We must also balance the mathematical accuracy in setting up the method against the round-off errors caused by computer arithmetic and the storage of real numbers in a finite representation. As we use these values in further computations, we must have some assurance that the error is minimal. Thus, we need criteria to describe how confident we are in these results.
Error is defined as the difference between the true value $u(x_m,t_n)$ and the approximate value $U_m^n$, but this quantity lacks the context given by the magnitude of the solution and focuses on individual components of the solution vector.
Thus, the relative error $\epsilon$ is more meaningful: it presents the error relative to the true value, measured under a suitable norm such as the max norm $||\cdot||_\infty$, as long as $u(x_m,t_n)\neq 0$.
\begin{definition}\label{def:rel-err}
The {\it relative error} $\epsilon$ for a vector solution ${\bf U}^n$ is the difference between the true value $u(x_m,t_n)$ and approximate value $U_m^n$ under a suitable norm $||\cdot||$, relative to the norm of the true value as
\begin{equation}
\epsilon =\frac{||{\bf u}({\bf x},t_n) - {\bf U}^n||}{||{\bf u}({\bf x},t_n)||}\times 100\%.
\end{equation}
\end{definition}
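As a quick numerical check of Definition \ref{def:rel-err}, the following Python sketch (illustrative only; the chapter's own code is MATLAB, and the vectors are made up) computes the relative error under the max norm.

```python
import numpy as np

def relative_error(u_true, u_approx):
    """Relative error (in percent) under the max norm ||.||_inf."""
    return (np.linalg.norm(u_true - u_approx, np.inf)
            / np.linalg.norm(u_true, np.inf)) * 100.0

# hypothetical true and approximate solution vectors
u = np.array([1.0, 2.0, 4.0])
U = np.array([1.0, 2.1, 4.0])
print(relative_error(u, U))  # 0.1/4.0 * 100% = 2.5
```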
The significant figures of a computation are those that can be claimed with confidence. They correspond to a number of confident digits plus one estimated digit, conventionally set to half of the smallest scale division on the measurement device, and specified precisely in Definition \ref{def:sig-figs}.
\begin{definition}\label{def:sig-figs}
The value $U_m^n$ approximates $u(x_m,t_n)$ to $N$ {\it significant digits} if $N$ is the largest non-negative integer for which the relative error is bounded by the significant error $\epsilon_s(N)$
\begin{equation}
\epsilon_s(N) = \left(5 \times 10^{-N}\right)\times100\%
\end{equation}
\end{definition}
To ensure all computed values in an approximation have about $N$ significant figures, Definition \ref{def:sig-figs} implies
\begin{equation}\label{eqn:sig-bound}
N+1>\log_{10}\left(\frac{5}{\epsilon}\right)>N.
\end{equation}
Although the true value is not often known, the relative error of a previous approximation can be estimated using the best available approximation in place of the true value.
\begin{definition}\label{def:eqn:app-rel-err}
For an iterative method with improved approximations ${\bf U}^{(0)},{\bf U}^{(1)},\ldots,{\bf U}^{(k)},{\bf U}^{(k+1)}$, the {\it approximate relative error} is defined \cite{Burden2011} as the difference between current and previous approximations relative to the current approximation, each under a suitable norm $||\cdot||$
\begin{eqnarray}\label{eqn:app-rel-err}
E^{(k)}
&=& \frac{||{\bf U}^{(k+1)}-{\bf U}^{(k)}||}{||{\bf U}^{(k+1)}||}\times 100\%
\end{eqnarray}
closely approximates $\epsilon^{(k)}$, the relative error for the $k^{\rm th}$ iteration, assuming that the iterations are converging (that is, as long as $\epsilon^{(k+1)}$ is much less than $\epsilon^{(k)}$).
\end{definition}
The following conservative theorem, proven in \cite{Scarborough1966}, is helpful in clearly presenting the lower bound on the number of significant figures of our results.
\begin{theorem}\label{thm:significant-figures}
Approximation ${\bf U}^n$ at step $n$ with approximate relative error ${E}^{(k)}$ is correct to at least $N-1$ significant figures if
\begin{equation}\label{eqn:sig-crit}
{E}^{(k)}<\epsilon_s(N)
\end{equation}
\end{theorem}
Theorem \ref{thm:significant-figures} is conservatively true for the relative error $\epsilon$, often underestimating the number of significant figures found. The approximate relative error ${ E}^{(k)}$ (\ref{eqn:app-rel-err}), however, underestimates the relative error $\epsilon$ and may predict one significant digit more for low order methods.
Combining Theorem \ref{thm:significant-figures} with equation (\ref{eqn:sig-bound}), the number of significant figures has a lower bound
\begin{equation}\label{eqn:sig-approx-bound}
N \geq \left\lfloor \log_{10}\left(\frac{0.5}{{ E}^{(k)}}\right)\right\rfloor.
\end{equation}
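Equation (\ref{eqn:sig-approx-bound}) is easy to automate. The Python sketch below (assuming the approximate error is expressed in percent, as in Table \ref{tab:CN}) recovers the significant-figure counts reported in column 3 of that table.

```python
import math

def min_sig_figs(E_percent):
    # lower bound from equation (sig-approx-bound): N >= floor(log10(0.5/E))
    return math.floor(math.log10(0.5 / E_percent))

# approximate errors from column 2 of Table tab:CN
assert min_sig_figs(3.8300e-05) == 4
assert min_sig_figs(2.3838e-06) == 5
assert min_sig_figs(2.3273e-09) == 8
```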
\begin{table}
\caption{Verifying Accuracy in Time for Semi-Implicit Crank-Nicolson Method}
\label{tab:CN}
\begin{tabular}{p{1cm}p{2.cm}p{1cm}p{1.5cm}p{2cm}p{1cm}p{1.5cm}}
\hline\noalign{\smallskip}
& \multicolumn{3}{c}{\bf Test Equation} & \multicolumn{3}{c}{\bf Fisher-KPP Equation} \\
$\Delta t$ & Approximate Error$^a$ & Sig. Figs$^c$& \centering Order~of Accuracy$^b$ & Approximate Error$^a$ & Sig. Figs$^c$& \centering Order~of Accuracy$^b$ \tabularnewline
\noalign{\smallskip}\svhline\noalign{\smallskip}
1 & 3.8300e-05 &4 & 2 (2.0051) & 4.1353e-05 &4& -1 (-0.5149)\\
$\frac12$ & 9.5413e-06 &4& 2 (2.0009) & 5.9091e-05 &3& 0 (0.4989)\\
$\frac14$ & 2.3838e-06 &5& 2 (2.0003) & 4.1818e-05 &4& 1 (0.7773)\\
$\frac18$ & 5.9584e-07 &5& 2 (2.0001) & 2.4399e-05 &4 & 1 (0.8950)\\
$\frac1{16}$ & 1.4895e-07&6& 2 (2.0000) & 1.3120e-05 &4 & 1 (0.9490)\\
$\frac1{32}$ & 3.7238e-08&7& 2 (2.0000) & 6.7962e-06 &4& 1 (0.9749)\\
$\frac1{64}$ & 9.3096e-09&7& 2 (2.0001) & 3.4578e-06 &5& 1 (0.9875)\\
$\frac1{128}$ & 2.3273e-09&8& 2 (1.9999) & 1.7439e-06 &5 & 1 (0.9938)\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}\\
$^a$ Approximate error under the max norm $||\cdot||_\infty$ for numerical solution ${\bf U}^{(k)}$ computed at $t_n=10$ compared to solution at next iteration ${\bf U}^{(k+1)}$ whose time step is cut in half.\\
$^b$ Order of accuracy is measured as the power of 2 dividing the error as the step size is divided by 2\\
$^c$ Minimum number of significant figures predicted by approximate error bounded by the significant error $\epsilon_s(N)$ as in equation (\ref{eqn:sig-crit})
\end{table}
\begin{example}
Table \ref{tab:CN} presents these measures of error to analyze the Crank-Nicolson method (\ref{eqn:CN}) for the linear Test equation (\ref{eqn:LinearTestIBVP}) and the semi-implicit version of the Crank-Nicolson method (\ref{eqn:CN_SI}) for the Fisher-KPP equation (\ref{eqn:FisherIBVP}). Column 1 tracks the step size $\Delta t$ as it is halved for improved approximations ${\bf U}^{(k)}$ at $t=10$. Columns 2 and 5 present the approximate errors in scientific notation for easy readability. Scientific notation helps read off the minimum number of significant figures ensured by Theorem \ref{thm:significant-figures}, as presented in columns 3 and 6. Notice how the errors in column 2 are divided by about 4 each iteration while those in column 5 are essentially divided by 2. This ratio of approximate relative errors demonstrates the order of accuracy $p$ as a power of 2, since the step sizes are divided by 2 each iteration:
\begin{equation}
\frac{\epsilon^{(k+1)}}{\epsilon^{(k)}} = \frac{C}{2^p}
\end{equation}
for some positive scalar $C$; the resulting estimate underestimates the integer $p$ when $C>1$ and overestimates it when $C<1$. By rounding to mask the magnitude of $C$, the order $p$ can be computed as
\begin{equation}\label{eqn:order-accuracy}
p = {\rm round}\left(\log_2\left(\frac{\epsilon^{(k)}}{\epsilon^{(k+1)}}\right)\right).
\end{equation}
Columns 4 and 7 present both rounded and unrounded measures of the order of accuracy for each method and problem. Thus, we have verified that Crank-Nicolson method (\ref{eqn:CN}) on a linear problem is second order accurate in time, whereas the semi-implicit version of the Crank-Nicolson method (\ref{eqn:CN_SI}) for the nonlinear Fisher-KPP equation (\ref{eqn:FisherIBVP}) is only first order in time.
For comparison, Table \ref{tab:ode23s} presents these same measures for the Rosenbrock method built into MATLAB as {\tt ode23s}. See example program {\tt Method\_Accuracy\_Verification.m} in Appendix \ref{App:AccuracyProof} for how to fix a constant step size in such an adaptive solver: set the initial step and max step to $\Delta t$ with a high tolerance to keep the adaptive method from altering the step size.
\end{example}
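The order computation in equation (\ref{eqn:order-accuracy}) can be checked directly against the tabulated errors. This Python sketch reproduces the first entries of columns 4 and 7 of Table \ref{tab:CN} from the column-2 and column-5 errors.

```python
import math

def order_of_accuracy(err_k, err_k1):
    """p = round(log2(E^(k)/E^(k+1))) when step sizes are halved."""
    raw = math.log2(err_k / err_k1)
    return round(raw), raw

# Test equation column: errors for dt = 1 and dt = 1/2
p, raw = order_of_accuracy(3.8300e-05, 9.5413e-06)
assert p == 2 and abs(raw - 2.0051) < 1e-3

# Fisher-KPP column: the first ratio actually grows, giving order -1
p, raw = order_of_accuracy(4.1353e-05, 5.9091e-05)
assert p == -1 and abs(raw - (-0.5149)) < 1e-3
```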
\begin{exercise}
Implement the Improved Euler Crank-Nicolson method (\ref{eqn:CN_IE}) and verify that the error in time is $O\left(\Delta t^2\right)$ on both the Test equation (\ref{eqn:LinearTestIBVP}) and Fisher-KPP equation (\ref{eqn:FisherIBVP}) using a table similar to Table \ref{tab:CN}.
\end{exercise}
\begin{table}
\caption{Verifying Accuracy in Time for ode23s Solver in MATLAB}
\label{tab:ode23s}
\begin{tabular}{p{1cm}p{2.cm}p{1cm}p{2cm}p{2cm}p{1cm}p{2cm}}
\hline\noalign{\smallskip}
& \multicolumn{3}{c}{\bf Test Equation} & \multicolumn{3}{c}{\bf Fisher-KPP Equation} \\
\centering $\Delta t$ & \centering Approximate Error$^a$ & \centering Sig. Figs$^c$& \centering Order~of Accuracy$^b$ & \centering Approximate Error$^a$ & \centering Sig. Figs$^c$& \centering Order~of Accuracy$^b$ \tabularnewline
\noalign{\smallskip}\svhline\noalign{\smallskip}
1 &1.8829e-05&4& 2 (2.00863)& 7.4556e-05&3& 2 (1.90005)\\
$\frac12$ &4.6791e-06&5& 2 (2.00402)& 1.9976e-05&4& 2 (1.98028)\\
$\frac14$ &1.1665e-06&5& 2 (2.00198)& 5.0628e-06&4& 2 (1.99722)\\
$\frac18$ &2.9123e-07&6& 2 (2.00074)& 1.2681e-06&5& 2 (2.00060)\\
$\frac1{16}$ &7.2771e-08&6& 2 (2.00054)& 3.169e-07& 6& 2 (2.00025)\\
$\frac1{32}$&1.8186e-08 &7& 2 (2.00027)& 7.9211e-08&6& 2 (1.99900)\\
$\frac1{64}$ &4.5456e-09&8& 2 (1.99801)& 1.9817e-08&7& 2 (2.00168)\\
$\frac1{128}$&1.138e-09& 8& 2 (1.99745)& 4.9484e-09&8& 2 (2.00011)\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}\\
$^a$ Approximate error under the max norm $||\cdot||_\infty$ for numerical solution ${\bf U}^{(k)}$ computed at $t_n=10$ compared to solution at next iteration ${\bf U}^{(k+1)}$ whose time step is cut in half.\\
$^b$ Order of accuracy is measured as the power of 2 dividing the error as the step size is divided by 2\\
$^c$ Minimum number of significant figures predicted by approximate error bounded by the significant error $\epsilon_s(N)$ as in equation (\ref{eqn:sig-crit})
\end{table}
\subsection{Convergence}\label{sub:convergence}
Numerical methods provide dependable approximations $U_{m}^{n}$ of
the exact solution $u\left(x_{m},t_{n}\right)$ only if the approximations
converge to the exact solution, $U_{m}^{n}\to u\left(x_{m},t_{n}\right)$
as the step sizes diminish, $\Delta x,\Delta t\to0$. Convergence of a numerical
method relies on both the consistency of the approximate equation
to the original equation as well as the stability of the solution
constructed by the algorithm. Since consistency is determined by construction,
we need only analyze the stability of consistently constructed schemes
to determine their convergence. This convergence through stability is proven generally by the Lax-Richtmyer Theorem \cite{Thomas1995}, but is more specifically defined for
two-level numerical methods (\ref{eqn:two-level_method}) in the Lax Equivalence Theorem (Theorem \ref{thm:Lax-Equivalence-Theorem}).
\begin{definition}\label{def:well-posed}
A problem is {\it well-posed} if there exists a unique solution which depends continuously on the conditions.
\end{definition}
Discretizing an initial-boundary-value problem (IBVP) into an initial-value problem (IVP) as an ODE system ensures the boundary conditions are well developed for the problem, but the initial conditions must also agree at the boundary for the problem to be well-posed. Further, the slope function of the ODE system needs to be infinitely differentiable, like the Fisher-KPP equation (\ref{eqn:FisherIBVP}), or at least Lipschitz-continuous, so that Picard's existence and uniqueness theorem via Picard iterations \cite{Burden2011} applies to ensure that the problem is well-posed \cite{Thomas1995}. Theorem \ref{thm:Lax-Equivalence-Theorem}, proved in \cite{Thomas1995}, ties this all together to ensure convergence of the numerical solution to the true solution.
\begin{theorem}[Lax Equivalence Theorem] A consistent, two-level difference scheme
(\ref{eqn:two-level_method}) for a well-posed linear IVP is convergent if and only if it is stable. \label{thm:Lax-Equivalence-Theorem}
\end{theorem}
\subsection{Stability}\label{sub:stability}
The beauty of Theorem \ref{thm:Lax-Equivalence-Theorem} (Lax Equivalence Theorem) is that once we have a consistent numerical method, we can explore the bounds on stability to ensure convergence of the numerical solution. We begin with a few definitions and examples to lead us to von Neumann stability analysis, named after mathematician John von Neumann (1903--1957).
Taken from the German word {\it Eigenwerte}, meaning ``one's own values,'' the {\it eigenvalues} of a matrix define how the matrix operates in a given situation.
\begin{definition}\label{def:eig}
For a $k\times k$ matrix $M$, a scalar $\lambda$ is an eigenvalue of $M$ with corresponding $k\times 1$ eigenvector ${\bf v}\neq {\bf 0}$ if
\begin{equation}
M{\bf v} = \lambda {\bf v}.
\end{equation}
\end{definition}
\begin{figure}[t]
\centerline{
\includegraphics[scale=.65]{Harwood_demo_CN_spectrum_fig.pdf}
}
\caption{Plot of real and imaginary components of all eigenvalues of method matrix $M$ for semi-implicit Crank-Nicolson method for Fisher-KPP equation (\ref{eqn:FisherIBVP}) using the default parameter values $a=0, b=1, L=10, \delta=1, \Delta x=0.05, \rho=1, {\rm degree}=2, c=\frac{1}{3}$ and initial condition as given in line 16 of {\tt PDE\_Analysis\_Setup.m} in Appendix \ref{App:Setup}. }
\label{fig:f5eig}
\end{figure}
Lines 22-31 of the demonstration code {\tt PDE\_Solution.m}, found in Appendix \ref{App:Solutions}, compute several measures helpful in assessing stability, including the graph of the eigenvalues of the method matrix for a two-level method (\ref{eqn:two-level_method}) on the complex plane. The spectrum (set of eigenvalues) of the default method matrix is demonstrated in Figure \ref{fig:f5eig}, while the code also reports the range of the step size ratio $\frac{\delta\Delta t}{\Delta x^2}$, the range of the real parts of the eigenvalues, and the spectral radius.
\begin{definition}
The {\it spectral radius} $\mu(M)$ of a matrix $M$ is the maximum magnitude of all eigenvalues of $M$
\begin{equation}
\mu(M)=\max_{i}|\lambda_{i}|.
\end{equation}
\end{definition}
The norm of a vector is a well-defined measure of its size in terms of a specified metric, of which the Euclidean distance (notated $||\cdot||_2$), the maximum absolute value ($||\cdot||_\infty$), and the absolute sum ($||\cdot||_1$) are the most popular. See \cite{HornJohnson2012} for further details. These measures of a vector's size can be extended to matrices.
\begin{definition}\label{defn:MatrixNorm}
For any norm $||\cdot||$, the corresponding matrix norm $|||\cdot|||$ is defined by
\begin{equation}
|||M|||=\max_{{\bf x}\neq {\bf 0}}\frac{||M{\bf x}||}{||{\bf x}||}.
\end{equation}
\end{definition}
A useful connection between norms and eigenvalues is the following theorem \cite{HornJohnson2012}.
\begin{theorem} \label{thm:RadiusBound}
For any matrix norm $|||\cdot|||$ and square matrix $M$, $\mu(M)\leq |||M|||$.
\end{theorem}
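Theorem \ref{thm:RadiusBound} is easy to probe numerically. The following Python sketch (a random matrix chosen purely for illustration) checks $\mu(M)\leq |||M|||$ for the induced 1-, 2-, and $\infty$-norms.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))   # arbitrary test matrix

# spectral radius mu(M) = max |lambda_i|
mu = max(abs(np.linalg.eigvals(M)))

# induced norms: max column sum (1), largest singular value (2), max row sum (inf)
for p in (1, 2, np.inf):
    assert mu <= np.linalg.norm(M, p) + 1e-12
```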
\begin{proof}
Consider an eigenvalue $\lambda$ of matrix $M$ corresponding to eigenvector ${\bf x}$ whose magnitude equals the spectral radius, $|\lambda|=\mu(M)$. Form a square matrix $X$ whose columns each equal the eigenvector ${\bf x}$. Note that by Definition \ref{def:eig}, $MX=\lambda X$ and $|||X|||\neq 0$ since ${\bf x}\neq{\bf 0}$.
\begin{eqnarray}
|\lambda |\, |||X||| &=& |||\lambda X|||\\ \nonumber
&=& |||M X|||\\ \nonumber
&\leq & |||M|||\ |||X|||
\end{eqnarray}
Therefore, $|\lambda|=\mu(M)\leq |||M|||$.
\end{proof}
Theorem \ref{thm:RadiusBound} can be extended to an equality in Theorem \ref{thm:RadiusNorm} (proven in \cite{HornJohnson2012}),
\begin{theorem}\label{thm:RadiusNorm}
$\mu(M)= \lim_{k\to\infty} |||M^k|||^\frac{1}{k}$.
\end{theorem}
This offers a useful estimate of the matrix norm by the spectral radius, specifically when the matrix is powered up in solving a numerical method.
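A small numerical illustration of Theorem \ref{thm:RadiusNorm} (the $2\times2$ matrix and the power $k$ are hand-picked assumptions for this sketch): as the matrix is powered up, $|||M^k|||^{1/k}$ settles toward the spectral radius.

```python
import numpy as np

M = np.array([[0.5, 1.0],
              [0.0, 0.4]])
mu = max(abs(np.linalg.eigvals(M)))        # spectral radius = 0.5

# |||M^k|||^(1/k) approaches mu(M) as k grows
k = 600
est = np.linalg.norm(np.linalg.matrix_power(M, k), 2) ** (1.0 / k)
assert abs(est - mu) < 1e-2
```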
Now, we can apply these definitions to the stability of a numerical method. An algorithm is stable if small changes in the initial data produce only small changes in the final results \cite{Burden2011}, that is, the errors do not ``grow too fast" as quantified in Definition \ref{def:stable}.
\begin{definition}\label{def:stable}
A two-level difference method (\ref{eqn:two-level_method}) is said to be stable with respect to the norm $||\cdot||$ if there exist positive maximum step sizes $\Delta t_{0}$ and $\Delta x_{0}$, and non-negative constants $K$ and $\beta$, so that
\[
||{\bf U}^{n+1}||\leq Ke^{\beta\triangle t}||{\bf U}^{0}||,
\]
for $0\leq t$, $0<\triangle x\leq\triangle x_{0}$, and $0<\triangle t\leq\triangle t_{0}$.
\label{defn:Burden Faires Stability}
\end{definition}
The {\it von Neumann criterion} for stability (\ref{VN criterion}) permits a stable approximation of an exact solution that is not growing (using $C=0$, the tight von Neumann criterion) or at most exponentially growing (using some $C>0$) by bounding the spectral radius of the method matrix,
\begin{equation}
\mu(M)\leq1+C\triangle t,\label{VN criterion}
\end{equation}
for some $C\geq0$.
Using the properties of norms and an estimation using Theorem \ref{thm:RadiusNorm}, we can approximately bound the size of the numerical solution under the von Neumann criterion as
\begin{eqnarray}
||{\bf U}^{n+1}|| &=& ||M^{n+1}U^{0}||\\ \nonumber
&\leq & |||M^{n+1}|||\ ||U^{0}||\\ \nonumber
&\approx & \mu(M)^{n+1}\ ||U^{0}||\\ \nonumber
&\leq & \left(1+C\Delta t \right)^{n+1}\ ||U^{0}||\\ \nonumber
&=& \left(1+(n+1)C\Delta t +...\right)\ ||U^{0}||\\ \nonumber
&\leq & e^{(n+1)C\Delta t}\ ||U^{0}||\\ \nonumber
&=& Ke^{\beta\Delta t}\ ||U^{0}||
\end{eqnarray}
for $K=1,\beta=(n+1)C$, which makes the von Neumann criterion approximately sufficient for stability of the solution for a general method matrix. When the method matrix is symmetric, which occurs for many discretized PDEs including the Test equation with Forward Euler, Backward Euler, and Crank-Nicolson methods, the spectral radius equals the matrix norm $|||\cdot|||_2$. Then, the von Neumann criterion (\ref{VN criterion}) provides a precise necessary and sufficient condition for stability \cite{Thomas1995}.
If the eigenvalues are easily calculated, they provide a simple means for predicting the stability and behavior of the solution. {\it Von Neumann analysis}, estimation of the eigenvalues from the PDE itself, provides a way to extract information about the eigenvalues, if not the exact eigenvalues themselves.
For a two-level numerical scheme (\ref{eqn:two-level_method}), the
eigenvalues of the combined transformation matrix indicate the stability
of the solution.
Seeking a solution to the linear difference scheme by separation of variables, as is used for linear PDEs, we can show that the discrete error growth factors
are the eigenvalues of the method matrix $M$.
Consider a two-level difference scheme (\ref{eqn:two-level_method}) for a linear parabolic PDE so that ${\bf R}=0$,
then the eigenvalues can be defined by the constant ratio \cite{Thomas1995}
\begin{equation}
\frac{U_{m}^{n+1}}{U_{m}^{n}}=\frac{T^{n+1}}{T^{n}}=\lambda_{m},\ \mbox{where }U_{m}^{n}=X_{m}T^{n}.\label{Growth Factor-Eigenvalue relation}
\end{equation}
The error $\epsilon_{m}^{n}=u\left(x_{m},t_{n}\right)-U_{m}^{n}$
satisfies the same equation as the approximate solution $U_{m}^{n}$,
so the eigenvalues also define the ratio of errors in time
called the error growth (or amplification) factor \cite{Thomas1995}.
Further, the error can be represented in Fourier form as $\epsilon_{m}^{n}=\hat{\epsilon}e^{\alpha n\Delta t}e^{im\beta\Delta x}$
where $\hat{\epsilon}$ is a Fourier coefficient,
$\alpha$ is the growth/decay constant, $i=\sqrt{-1}$, and $\beta$ is the wave
number. Under these assumptions, the eigenvalues are equivalent to the error growth factors of the numerical method,
\[
\lambda_{k}=\frac{U_{m}^{n+1}}{U_{m}^{n}}=\frac{\epsilon_{m}^{n+1}}{\epsilon_{m}^{n}}=\frac{\hat{\epsilon}e^{\alpha\left(n+1\right)\Delta t}e^{im\beta\Delta x}}{\hat{\epsilon}e^{\alpha n\Delta t}e^{im\beta\Delta x}}=e^{\alpha\Delta t}.
\]
We can use this equivalence to find bounds on
the eigenvalues of a numerical scheme by plugging the representative growth factor $e^{\alpha\Delta t}$ into the discretized method, a process called von Neumann stability
analysis (also known as Fourier stability analysis) \cite{vonNeumann1950, Thomas1995}.
The von Neumann criterion (\ref{VN criterion}) ensures that the matrix stays bounded as it is powered up.
If possible, $C$ is set to 0 (the tight von Neumann criterion) for simplified bounds on the step sizes.
As another consequence of this equivalence, analyzing the spectrum of the transformation matrix also reveals patterns in the orientation, spread, and balance of the growth of errors for various wave modes.
Before we dive into the stability analysis, it is helpful to review some identities for reducing the error growth factors:
\begin{eqnarray}\label{eqn:identities}
\frac{e^{ix}+ e^{-ix}}{2} = \cos(x),\ \frac{e^{ix}- e^{-ix}}{2} =i\sin(x),\\ \nonumber
\frac{1- \cos(x)}{2} = \sin^2\left(\frac{x}{2}\right),\frac{1+ \cos(x)}{2} =\cos^2\left(\frac{x}{2}\right).
\end{eqnarray}
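These identities can be spot-checked numerically; the brief Python sketch below (the test point is arbitrary) verifies each one.

```python
import cmath
import math

x = 0.7   # arbitrary test point

# Euler-formula identities used to reduce growth factors
assert abs((cmath.exp(1j*x) + cmath.exp(-1j*x))/2 - math.cos(x)) < 1e-12
assert abs((cmath.exp(1j*x) - cmath.exp(-1j*x))/2 - 1j*math.sin(x)) < 1e-12

# half-angle identities
assert abs((1 - math.cos(x))/2 - math.sin(x/2)**2) < 1e-12
assert abs((1 + math.cos(x))/2 - math.cos(x/2)**2) < 1e-12
```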
\begin{example}\label{ex:FWD_Stability}
Use von Neumann stability analysis to determine conditions on $\Delta t,\Delta x$ to ensure stability of the Forward Euler method for the Test equation (\ref{eqn:LinearTestIBVP}).
For von Neumann stability analysis, we replace each solution term $U_m^n$ in the method with the representative error $\epsilon_{m}^{n}=\hat{\epsilon}e^{\alpha n\Delta t}e^{im\beta\Delta x}$,
\begin{eqnarray}
U_{m}^{n+1}&=& rU_{m-1}^{n}+\left(1-2r\right)U_{m}^{n} +rU_{m+1}^{n},\\ \nonumber
\hat{\epsilon}e^{\alpha n\Delta t+\alpha\Delta t}e^{im\beta\Delta x} &=& r\hat{\epsilon}e^{\alpha n\Delta t}e^{im\beta\Delta x-i\beta\Delta x} + \left(1-2r\right)\hat{\epsilon}e^{\alpha n\Delta t}e^{im\beta\Delta x}\\ \nonumber
&&+ r\hat{\epsilon}e^{\alpha n\Delta t}e^{im\beta\Delta x+i\beta\Delta x},
\end{eqnarray}
where $r = \frac{\delta \Delta t}{\Delta x^2}$. Dividing through by the common $\epsilon_{m}^{n}$ term we can solve for the error growth factor
\begin{eqnarray}
e^{\alpha \Delta t} &=& 1-2r+2r\left(\frac{e^{i\beta\Delta x}+ e^{-i\beta\Delta x}}{2}\right),\\ \nonumber
&=& 1-4r\left(\frac{1-\cos(\beta\Delta x)}{2}\right),\\ \nonumber
&=& 1-4r\sin^{2}\left(\frac{\beta\Delta x}{2}\right),
\end{eqnarray}
reduced using the identities in (\ref{eqn:identities}). Using a tight ($C=0$) von Neumann criterion (\ref{VN criterion}), we bound $|e^{\alpha\Delta t}|\leq 1$ with the error growth factor in place of the spectral radius. Since the error growth factor is real, the bound is ensured in two components: $e^{\alpha\Delta t}\leq 1$, which holds trivially, and $e^{\alpha\Delta t}\geq -1$, which is true at the extremum as long as $r\leq\frac12$. Thus, as long as $\Delta t\leq\frac{\Delta x^2}{2\delta}$, the Forward Euler method for the Test equation is stable in the sense of the tight von Neumann criterion, which ensures diminishing errors at each step.
\end{example}
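The bound $r\leq\frac12$ derived above can be confirmed by assembling the Forward Euler method matrix directly. In this Python sketch (the interior-point count and the two $r$ values are assumptions for illustration), the spectral radius crosses 1 as $r$ exceeds $\frac12$.

```python
import numpy as np

def forward_euler_matrix(m, r):
    """Method matrix M = I + r*tridiag(1,-2,1) so that U^{n+1} = M U^n
    for the Test (heat) equation with r = delta*dt/dx^2."""
    T = (np.diag(-2.0*np.ones(m)) + np.diag(np.ones(m-1), 1)
         + np.diag(np.ones(m-1), -1))
    return np.eye(m) + r*T

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

# tight von Neumann criterion: stable when r <= 1/2, unstable beyond
assert spectral_radius(forward_euler_matrix(50, 0.5)) <= 1.0
assert spectral_radius(forward_euler_matrix(50, 0.6)) > 1.0
```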
The balancing of explicit and implicit components in the Crank-Nicolson method creates a much less restrictive stability condition.
\begin{exercise}\label{ex:CN_Stability}
Use the von Neumann stability analysis to determine conditions on $\Delta t,\Delta x$ to ensure stability of the Crank-Nicolson method (\ref{eqn:CN}) for the Test equation (\ref{eqn:LinearTestIBVP}).
{\it Hint: verify that
\begin{equation} \label{eqn:growthfactor_CNtest}
e^{\alpha\Delta t} =\frac{1-2r\sin^{2}\left(\frac{\beta\Delta x}{2}\right)}{1+2r\sin^{2}\left(\frac{\beta\Delta x}{2}\right)},
\end{equation}
and show that both bounds are trivially true so that Crank-Nicolson method is unconditionally stable.}
\end{exercise}
The default method in {\tt PDE\_Solution.m} and demonstrated in Figure \ref{fig:f5eig} is the semi-implicit Crank-Nicolson method (\ref{eqn:CN_SI}) for the Fisher-KPP equation (\ref{eqn:FisherIBVP}). If you set $\rho=0$ in {\tt PDE\_Analysis\_Setup.m}, however, Crank-Nicolson method for the Test equation (\ref{eqn:LinearTestIBVP}) is analyzed instead.
The two-level matrix form of the semi-implicit Crank-Nicolson method is
\begin{eqnarray}\label{eqn:CN_SI_matrix}
{\bf U}^{n+1} &=& M {\bf U}^n + {\bf N},\\ \nonumber
M &=& \left(I - \frac{\Delta t}{2}D \right)^{-1}\left( I + \frac{\Delta t}{2}D +\rho\Delta t \,{\rm diag}\left( 1 - {\bf U}^n \right) \right),\\ \nonumber
{\bf N} &=& \Delta t\left(I - \frac{\Delta t}{2}D \right)^{-1}{\bf B}.
\end{eqnarray}
Notice in Figure \ref{fig:f5eig} that all of the eigenvalues are real and they are all bounded between -1 and 1. Such a bound, $|\lambda|<1$, ensures stability of the method based upon the default choice of $\Delta t,\Delta x$ step sizes.
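This claim can be checked by building $M$ from (\ref{eqn:CN_SI_matrix}). The Python sketch below uses assumed values ($\Delta t=0.1$, 40 interior points, a constant state $U\equiv 0.5$) rather than the chapter's default setup, so it is a plausibility check only: the computed eigenvalues come out real and satisfy the general von Neumann bound $\mu(M)\leq 1+\rho\Delta t$.

```python
import numpy as np

# assumed parameter values for the sketch (not the default setup)
delta, rho, dx, dt, m = 1.0, 1.0, 0.05, 0.1, 40

# D = (delta/dx^2) * tridiag(1,-2,1) for the diffusion term
D = (delta/dx**2) * (np.diag(-2.0*np.ones(m)) + np.diag(np.ones(m-1), 1)
                     + np.diag(np.ones(m-1), -1))
U = 0.5*np.ones(m)                                   # any state with 0 <= U <= 1

A = np.eye(m) - (dt/2)*D                             # implicit side
B = np.eye(m) + (dt/2)*D + rho*dt*np.diag(1.0 - U)   # explicit side
M = np.linalg.solve(A, B)                            # M = A^{-1} B

lam = np.linalg.eigvals(M)
assert np.allclose(np.imag(lam), 0.0, atol=1e-8)     # real spectrum
assert max(abs(lam)) <= 1 + rho*dt + 1e-10           # general von Neumann bound
```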
\begin{example} \label{ex:CN_SI_Stability}
Use the von Neumann stability analysis to determine conditions on $\Delta t,\Delta x$ to ensure stability of the semi-implicit Crank-Nicolson method (\ref{eqn:CN_SI}) for the Fisher-KPP equation (\ref{eqn:FisherIBVP}).
Now we replace each solution term $U_m^n$ in the semi-implicit Crank-Nicolson method (\ref{eqn:CN_SI}) for the Fisher-KPP equation (\ref{eqn:FisherIBVP})
$$
-\frac{r}{2}U_{m-1}^{n+1}+\left(1-r\right)U_{m}^{n+1}-\frac{r}{2}U_{m+1}^{n+1}= \frac{r}{2}U_{m-1}^{n}+\left(1-r+\rho (1-U_{m}^{n}) \right)U_{m}^{n} +\frac{r}{2}U_{m+1}^{n}
$$
with the representative error $\epsilon_{m}^{n}=\hat{\epsilon}e^{\alpha n\Delta t}e^{im\beta\Delta x}$ where again $r = \frac{\delta \Delta t}{\Delta x^2}$. Again dividing through by the common $\epsilon_{m}^{n}$ term we can solve for the error growth factor
$$
e^{\alpha\Delta t}=\frac{1-2r\sin^{2}\left(\frac{\beta\Delta x}{2}\right) + \Delta t\rho(1-\tilde{U})}{1+2r\sin^{2}\left(\frac{\beta\Delta x}{2}\right)}
$$
where constant $\tilde{U}$ represents the extreme values of $U_{m}^{n}$. Due to the equilibrium points $\bar{u}=0,1$ to be analyzed in section \ref{sub:steadystate}, the bound $0\leq U_{m}^{n}\leq 1$ holds as long as the initial condition is similarly bounded $0\leq U_{m}^{0}\leq 1$.
Due to the potentially positive term $\Delta t \rho (1-\tilde{U})$, the tight ($C=0$) von Neumann criterion fails, but the general von Neumann criterion (\ref{VN criterion}), $|e^{\alpha\Delta t}|\leq 1+C\Delta t$, does hold for $C\geq \rho$. With this assumption, $e^{\alpha\Delta t}\geq -1-C\Delta t$ is trivially true and $e^{\alpha\Delta t}\leq 1+C\Delta t$ results in
$$
\left(\rho(1-\tilde{U})-C\right)\Delta x^2 -4\delta \sin^{2}\left(\frac{\beta\Delta x}{2}\right) \leq 2C\delta\Delta t \sin^{2}\left(\frac{\beta\Delta x}{2}\right)
$$
which is satisfied at all extrema as long as $C\geq \rho$ since $\left(\rho(1-\tilde{U})-C\right)\leq 0$. Thus, the semi-implicit Crank-Nicolson method is unconditionally stable in the sense of the general von Neumann criterion, which bounds the growth of error less than an exponential. This stability is not as strong as that of the Crank-Nicolson method for the Test equation but it provides useful management of the error.
\end{example}
\subsection{Oscillatory Behavior}\label{sub:oscillations}
Oscillatory behavior has been exhaustively studied for ODEs \cite{CGraham2016, Gao2015} with much numerical focus on researching ways to dampen oscillations in case they emerge \cite{Britz2003, Osterby2003}. For those wishing to keep their methods oscillation-free, Theorem \ref{thm:nonnegeig} provides sufficiency of non-oscillatory behavior through the non-negative eigenvalue condition (proven, for example, in \cite{Thomas1995}).
\begin{theorem}[Non-negative Eigenvalue Condition] \label{thm:nonnegeig}
A two-level difference scheme (\ref{eqn:two-level_method}) is free of numerical oscillations if all the eigenvalues $\lambda_i$ of the method matrix $M$ are non-negative.
\end{theorem}
Following von Neumann stability analysis from section \ref{sub:stability}, we can use the error growth factors previously computed to determine the non-negative eigenvalue condition for a given method.
\begin{example}
To find the non-negative eigenvalue condition for the semi-implicit Crank-Nicolson method for the Fisher-KPP equation (\ref{eqn:FisherIBVP}), we start by bounding our previously computed error growth factor as $e^{\alpha\Delta t}\geq 0 $ to obtain
$$
1-2r\sin^{2}\left(\frac{\beta\Delta x}{2}\right) + \Delta t\rho(1-\tilde{U})\geq 0
$$
which is satisfied at the extrema, assuming $0\leq\tilde{U}\leq1$, by the condition
\begin{equation}\label{eqn:nonnegCN_SI}
\frac{\delta\Delta t}{\Delta x^2}\leq \frac12
\end{equation}
which happens to be the same non-negative eigenvalue condition for the Crank-Nicolson method applied to the linear Test equation (\ref{eqn:LinearTestIBVP}).
\end{example}
A numerical approach to track numerical oscillations uses a slight modification of the standard definitions for oscillatory behavior from ODE research to identify numerical oscillations in solutions to PDEs \cite{Lakoba2016P1}.
\begin{definition}
A continuous function $u(x,t)$ is oscillatory about $K$ if the difference $u(x,t)-K$ has an infinite number of zeros for $a\leq t<\infty$ for any $a$.
Alternately, a function is oscillatory over a finite interval if it has more than two critical points of the same kind (max, min, inflection points) in any finite interval $[a,b]$ \cite{Gao2015}.
\end{definition}
Using the first derivative test, this requires two changes in the sign of the derivative. Using first order finite differences to approximate the derivative results in the following approach to track numerical oscillations.
\begin{definition}[Numerical Oscillations]
By tracking the sign change of the derivative for each spatial component through sequential steps $t_{n-2}, t_{n-1}, t_{n}$ in time, oscillations in time can be determined by the logical evaluation
$$
(U^{n-2}-U^{n-1}) (U^{n-1}-U^{n}) < 0,
$$
which returns true (inequality satisfied) if there is a step where the magnitude oscillates through a critical point. Catching two such critical points will define a numerical oscillation in the solution.
\end{definition}
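The logical test above translates directly into code. In this Python sketch (the vector shapes and sample values are hypothetical), the function flags components whose time history passes through a critical point:

```python
import numpy as np

def oscillation_flags(U_nm2, U_nm1, U_n):
    """Componentwise test (U^{n-2}-U^{n-1})(U^{n-1}-U^{n}) < 0:
    True where the time derivative changed sign."""
    return (U_nm2 - U_nm1) * (U_nm1 - U_n) < 0

# monotone decay: derivative keeps its sign, no flag
assert not oscillation_flags(np.array([4.0]), np.array([2.0]),
                             np.array([1.0]))[0]
# alternating values: a critical point was crossed, flagged
assert oscillation_flags(np.array([1.0]), np.array([-1.0]),
                         np.array([1.0]))[0]
```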
The Crank-Nicolson method is known to be unconditionally stable, but damped oscillations have been found for large time steps. The point at which oscillations begin to occur is an open question, but it is known to be bounded from below by the non-negative eigenvalue condition, which can be rewritten from (\ref{eqn:nonnegCN_SI}) as $\Delta t \leq \frac{\Delta x^2}{2\delta}$. Breaking the non-negative eigenvalue condition, however, does not always create oscillations.
\cproblem{\label{prob:stepconditions}
Using the example code found in Appendix \ref{App:Solutions}, uncomment lines 12-15 (deleting \%'s) and comment lines 7-9 (adding \%'s) to use the semi-implicit Crank-Nicolson method. Make sure the default values of $\Delta x=0.05, \rho=1$ are set in {\tt PDE\_Analysis\_Setup.m} and choose step size $\Delta t=2$ in {\tt PDE\_Solution.m}. Notice how badly the non-negative eigenvalue condition $\Delta t\leq\frac{\Delta x^2}{2\delta}$ fails and run {\tt PDE\_Solution.m} to see stable oscillations in the solution. Run it again with smaller and smaller $\Delta t$ values until the oscillations are no longer visible. Save this point as $(\Delta x,\Delta t)$. Change $\Delta x=0.1$ and choose a large enough $\Delta t$ to see oscillations and repeat the process to identify the lowest time step when oscillations are evident. Repeat this for $\Delta x=0.5$ and $\Delta x=1$, then plot all the $(\Delta x,\Delta t)$ points in MATLAB by typing {\tt plot(dx,dt)} where {\tt dx,dt} are vector coordinates of the $(\Delta x,\Delta t)$ points. On the Figure menu, click {\it Tools}, then {\it Basic Fitting}, and check {\it Show equations} and choose a type of plot which best fits the data. Write this as a relationship between $\Delta t$ and $\Delta x$.
}
Oscillations in linear problems can be difficult to see, so it is best to catalyze any slight oscillations with oscillatory variation in the initial condition or, for a more extreme response, define the initial condition so that it fails to meet one or more boundary conditions. Notice that the IBVP will no longer have a unique theoretical solution, but the numerical method will force an approximate solution to the PDE and match the conditions as best it can. If oscillations are permitted by the method, then they will be clearly evident in this process.
\resproject{
In {\tt PDE\_Analysis\_Setup.m}, set {\tt rho=0} on line 12 and multiply line 16 by zero to keep the same number of elements, \\
{\tt u0 = 0*polyval(polyfit(\dots}\\
Investigate lowest $\Delta t$ values when oscillations occur for $\Delta x=0.05,0.1,0.5,1$ and fit the points with the Basic Fitting used in Challenge Problem \ref{prob:stepconditions}. Then, investigate a theoretical bound on the error growth factor (\ref{eqn:growthfactor_CNtest}) for the Crank-Nicolson method to the Test equation which approximates the fitting curve. It may be helpful to look for patterns in the computed eigenvalues at those $(\Delta x,\Delta t)$ points.
}
\section{Parameter Analysis}\label{sec:parameters}
Though parameters are held fixed when solving a PDE, varying their values can have an interesting impact upon the shape of the solution. We will focus on how changing parameter values and initial conditions affects the end behavior of IBVPs. For example, Figure \ref{fig:f5steadystatestab} compares two very different steady-state solutions based upon similar sinusoidal initial conditions. Taking limits of the parameters in a given model can help us identify the limiting functions between which the steady-state solutions tend.
\begin{figure}
\centerline{
\includegraphics[scale=.45]{Harwood_halfsine_OFS_Newton_fig.pdf}
\hfil
\includegraphics[scale=.45]{Harwood_fullsine_SO_Newton_fig.pdf}
}
\caption{Example graphs of Newton's method using the parameter values $a=0, b=0, L=10, \delta = 1, \Delta x=\frac{1}{20}, \rho=1, {\rm degree}=2, c=1$ and replacing the default initial condition with (left) $\sin\left(\frac{\pi x}{L}\right)$ and (right) $\sin\left(\frac{2\pi x}{L}\right)$ in {\tt PDE\_Analysis\_Setup.m} found in Appendix \ref{App:Setup}.}
\label{fig:f34steadystate}
\end{figure}
\subsection{Steady-State Solutions}
\label{sub:steadystate}
To consider steady-state solutions to the general reaction diffusion model (\ref{eqn:IBVP}), we set the time derivative to zero to create the boundary-value problem (BVP) with updated $u\equiv u(x)$ satisfying
\begin{eqnarray}\label{eqn:BVP}
0 &=& \delta u_{xx} + R(u),\\\nonumber
&&u(0) = a,\\\nonumber
&&u(L) = b.
\end{eqnarray}
LeVeque \cite{LeVeque2007}, Marangell et al. \cite{HarleyMarangell2015}, and Aron et al. \cite{Aron2014} provide insightful examples and helpful guides for analyzing steady-state and traveling wave solutions of nonlinear PDEs. We will follow their guidelines here in our analysis of steady-states of the Fisher-KPP equation (\ref{eqn:FisherIBVP}). Notice that as $\delta\to \infty$, equation (\ref{eqn:BVP}) simplifies as $0 = u_{xx} + \frac1\delta R(u)\to 0=u_{xx}$. Under this parameter limit, all straight lines are steady-states, but only one,
\begin{equation} \label{eqn:f_infty}
f_\infty(x) =\frac{b-a}{L}x+a,
\end{equation}
fits the boundary conditions $u(0,t)=a,\ u(L,t)=b$. On the other hand, as $\delta\to 0$, solutions to equation (\ref{eqn:BVP}) tend towards equilibrium points, $0=R(\bar{u})$, of the reaction function inside the interval $(0,L)$. Along with the fixed boundary conditions, this creates discontinuous limiting functions
\begin{equation} \label{eqn:f_0}
f_0^{(\bar{u})}(x) = \left\{\begin{array}{ll} a,& x=0\\ \bar{u},& 0<x<L,\\ b,& x=L \end{array} \right.
\end{equation}
defined for each equilibrium point $\bar{u}$.
\begin{example}
The Fisher-KPP equation (\ref{eqn:FisherIBVP}) reduces to the BVP with updated $u\equiv u(x)$ satisfying
\begin{eqnarray}\label{eqn:steadystate}
0 &=& \delta u_{xx} + \rho u(1-u),\\\nonumber
&&u(0) = 0,\\\nonumber
&&u(10) = 1,
\end{eqnarray}
As parameter $\delta$ varies, the steady-state solutions of (\ref{eqn:steadystate}) transform from one of the discontinuous limiting functions
\begin{equation} \label{eqn:ex-f_0}
f_0^{(\bar{u})}(x) = \left\{\begin{array}{ll} 0,& x=0\\ \bar{u},& 0<x<10,\\ 1,& x=10 \end{array} \right.
\end{equation}
for equilibrium $\bar{u}=0$ or $\bar{u}=1$, towards the line
\begin{equation} \label{eqn:ex-f_infty}
f_\infty(x) =\frac{1}{10}x.
\end{equation}
\end{example}
\subsection{Newton's Method}\label{sub:Newtonmethod}
The Taylor series expansion of an analytic function $f$ about $x_n$, evaluated at a nearby root $x_{n+1}$ (so that $0 = f(x_{n+1})$), gives us
\begin{equation}
0 = f(x_n) + \Delta x f'(x_n) + O\left(\Delta x^2\right).
\end{equation}
Truncating the $O\left(\Delta x^2\right)$ terms and expanding $\Delta x = x_{n+1}-x_n$, we can approximate an actual root of $f(x)$ using Newton's method \cite{Burden2011}. Newton's iteration for a single variable function generalizes to a vector of functions ${\bf G}(u)$ solving a (usually nonlinear) system of equations $0={\bf G}(u)$, resulting in a sequence of vector approximations $\{{\bf U}^{(k)}\}$. Starting near enough to a stable vector of roots, the sequence of approximations will converge to it: $\lim_{k\to \infty} {\bf U}^{(k)}={\bf U}_s$. Determining the intervals of convergence for the single variable Newton's method can be a challenge; even more so for this vector version. Note that the limiting vector of roots is itself a discretized version of a solution, ${\bf U}_s=u({\bf x})$, to the BVP system (\ref{eqn:BVP}).
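In code, the scalar iteration $x_{n+1} = x_n - f(x_n)/f'(x_n)$ is only a few lines. A minimal Python sketch (illustrative; the chapter's own routines are written in MATLAB):

```python
def newton_scalar(f, fprime, x0, tol=1e-12, maxit=50):
    """Scalar Newton's method: iterate x <- x - f(x)/f'(x) until the
    update is smaller than tol or maxit iterations are reached."""
    x = x0
    for _ in range(maxit):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Approximate sqrt(2) as the positive root of f(x) = x^2 - 2.
root = newton_scalar(lambda x: x**2 - 2.0, lambda x: 2.0 * x, x0=1.0)
print(abs(root - 2.0**0.5) < 1e-12)   # True
```

The stopping criterion on the update size $|x_{n+1}-x_n|$ is the same one used for the vector iteration below.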
\begin{definition}[Vector Form of Newton's Method]
For system of equations $0={\bf G}(u)$,
\begin{equation}\label{eqn:Newton}
{\bf U}^{(k+1)} = {\bf U}^{(k)} - J^{-1}({\bf U}^{(k)}) {\bf G}({\bf U}^{(k)})
\end{equation}
where $J({\bf U}^{(k)})$ is the Jacobian matrix, $\frac{\partial {\bf G}_i(u)}{\partial U_j}$, the derivative of ${\bf G}(u)$ in $\mathbb{R}^{M\times M}$, where $M$ is the number of components of ${\bf G}({\bf U}^{(k)})$ \cite{LeVeque2007}.
\end{definition}
Using the standard centered difference, we can discretize the nonlinear BVP (\ref{eqn:BVP}) with the Fisher-KPP reaction term to obtain the system of equations $0 = {\bf G}\left( {\bf U}\right)$,
\begin{equation}
0 = \delta D {\bf U} + \rho {\bf U}\left( 1-{\bf U} \right),
\end{equation}
where the product in the reaction term is taken componentwise.
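Newton's method (\ref{eqn:Newton}) applied to this discretized system can be sketched in Python. This is an illustrative translation, not the chapter's {\tt PDE\_Solution.m} code; the diffusion-dominated choice $\delta=10$ and the straight-line initial guess $f_\infty$ are assumptions made here so that the iteration converges readily (for $\delta=1$, a closer initial guess such as the chapter's polynomial initial condition may be needed):

```python
import numpy as np

def newton_bvp(M=200, L=10.0, delta=10.0, rho=1.0, a=0.0, b=1.0,
               tol=1e-12, maxit=50):
    """Vector Newton's method for the discretized steady-state Fisher-KPP
    BVP: 0 = delta*u_xx + rho*u*(1-u), u(0)=a, u(L)=b, with centered
    differences on M subintervals (M-1 interior unknowns)."""
    dx = L / M
    x = np.linspace(0.0, L, M + 1)
    U = a + (b - a) * x[1:-1] / L            # initial guess: the line f_inf
    for k in range(maxit):
        Upad = np.concatenate(([a], U, [b]))
        # G(U): centered second difference plus the reaction term
        G = delta * (Upad[2:] - 2.0 * U + Upad[:-2]) / dx**2 \
            + rho * U * (1.0 - U)
        # Jacobian: tridiagonal diffusion plus diagonal reaction derivative
        J = (np.diag(np.full(M - 2, delta / dx**2), -1)
             + np.diag(-2.0 * delta / dx**2 + rho * (1.0 - 2.0 * U))
             + np.diag(np.full(M - 2, delta / dx**2), 1))
        step = np.linalg.solve(J, G)         # J^{-1} G
        U = U - step
        if np.max(np.abs(step)) < tol:       # ||U^(k+1) - U^(k)||_inf
            break
    return x, np.concatenate(([a], U, [b])), k + 1

x, U, iters = newton_bvp()
print(iters)   # converges in a handful of iterations for delta = 10
```

A dense solve is used for clarity; for large $M$ one would exploit the tridiagonal structure of $J$.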
We need an initial guess for Newton's method, so we will use the initial condition $u_0$ from the IBVP (\ref{eqn:FisherIBVP}). Table \ref{tab:Newton} shows the change in the solution measured by $||{\bf U}^{(k+1)}-{\bf U}^{(k)}||_\infty=||J^{-1}{\bf G}||_\infty$ at each iteration. As expected, Newton's method appears to converge quadratically, that is, $\epsilon^{(k+1)} = O\left((\epsilon^{(k)})^2\right)$ according to Definition \ref{def:order-conv}.
\begin{definition}\label{def:order-conv}
Given an iteration which converges, the {\it order of convergence} $N$ is the power in the relationship which bounds subsequent approximation errors as
\begin{equation}
\epsilon^{(k+1)} = O\left((\epsilon^{(k)})^N \right), \qquad \epsilon^{(k+1)} \leq C \left(\epsilon^{(k)}\right)^N
\end{equation}
for some constant $C>0$.
\end{definition}
Note that scientific notation presents an effective display of these solution changes. You can see that the exponents essentially double every iteration in column 4 of Table \ref{tab:Newton} for the Approximate Error$^a$ in the Fisher-KPP equation. This second order convergence can be more carefully measured by computing
$$
{\rm order}^{(k)} = {\rm round}\left( \frac{\log\left(\epsilon^{(k+1)}\right)}{\log\left(\epsilon^{(k)}\right)} \right)
$$
where rounding takes into account the constant $C$ in Definition \ref{def:order-conv}. For instance, order values slightly above 2 demonstrate $C<1$ and those slightly below 2 demonstrate $C>1$. Thus it is reasonable to round these to the nearest integer.
This is not evident, however, for the Test equation because the error suddenly drops near the machine tolerance and further convergence is stymied by round-off error.
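The order computation above is easy to reproduce. A short Python sketch (illustrative; the error values are copied from the tail of the Fisher-KPP column of Table \ref{tab:Newton}):

```python
import math

def convergence_orders(errors):
    """Estimate the order of convergence from successive approximate
    errors via order_k = round(log(e_{k+1}) / log(e_k))."""
    return [round(math.log(errors[i + 1]) / math.log(errors[i]))
            for i in range(len(errors) - 1)]

# Tail of the Fisher-KPP approximate-error column of the table above
errs = [2.2853e-01, 3.5750e-02, 1.1499e-03, 1.0499e-06, 1.3032e-12]
print(convergence_orders(errs))   # -> [2, 2, 2, 2]
```

The unrounded ratios (about 2.26, 2.03, 2.03, 1.99) match the parenthesized values reported in the table.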
If we start with a different initial guess ${\bf U}^{(0)}$ (but still close enough to this solution), we would find that the method still converges to this same solution. Newton's method can be shown to converge if we start with an initial guess that is
sufficiently close to a solution. How close depends on the nature of the problem. For more sensitive problems one might have to start extremely close.
In such cases it may be necessary to use a technique such as continuation to find suitable initial data by varying a parameter, for example \cite{LeVeque2007}.
\begin{figure}[t]
\centerline{
\includegraphics[scale=.65]{Harwood_demo_error_convergence_Newton_fig.pdf}
}
\caption{Log-plot of the approximate relative error, $\varepsilon^{(k)}_r=\log_{10}\left(\max\left|{\bf U}^{(k+1)}-{\bf U}^{(k)}\right|\right)$, in Newton's method for the
Fisher-KPP equation (\ref{eqn:FisherIBVP}) using the default parameter values $a=0, b=1, L=10, \delta=1, \Delta x=0.05, \rho=1, {\rm degree}=2, c=\frac{1}{3}$ and initial condition as given in {\tt PDE\_Analysis\_Setup.m} found in Appendix \ref{App:Setup}. }
\label{fig:lf5convergence}
\end{figure}
\begin{table}
\caption{Verifying Convergence of Newton's Method}
\label{tab:Newton}
\begin{tabular}{p{1.5cm}p{2.75cm}p{1.75cm}p{2.75cm}p{1.75cm}}
\hline\noalign{\smallskip}
& \multicolumn{2}{c}{\bf Test Equation} & \multicolumn{2}{c}{\bf Fisher-KPP Equation} \\
\centering Iteration & \centering Approximate Error$^a$ & \centering Order of Convergence$^b$ & \centering Approximate Error$^a$ & \centering Order of Convergence$^b$ \tabularnewline
\noalign{\smallskip}\svhline\noalign{\smallskip}
1 & 1.6667e-01 & 18.2372 & 1.5421e+00 &0 (-0.4712)\\
2 & 6.4393e-15 & 0.9956 & 8.1537e-01 & -1 (-0.5866)\\
3 & 7.4385e-15 & 1.0585& 1.1272e+00 & -5 (-5.0250)\\
4 & tol.$^c$ reached & & 5.4789e-01 & 2 (2.4533)\\
5 & & & 2.2853e-01 & 2 (2.2568)\\
6 & & & 3.5750e-02 & 2 (2.0317)\\
7 & & & 1.1499e-03 & 2 (2.0341)\\
8 & & & 1.0499e-06 & 2 (1.9878)\\
9 & & & 1.3032e-12 & 2 (1.2875)\\
10 & & & tol.$^c$ reached & \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}\\
$^a$ Approximate error measured as the maximum absolute difference ($||\cdot||_\infty$) between one iteration and the next.\\
$^b$ Order of convergence is measured as the power to which each error is raised to produce the next: $\epsilon_{i+1} = \epsilon_i^p \to p = \log(\epsilon_{i+1})/\log(\epsilon_i)$.\\
$^c$ Stopping criterion is reached when the error is less than tol $= 10\epsilon_{\rm mach} = 2.2204\times 10^{-15}$.
\end{table}
The solution found in Figure \ref{fig:fl6delta_range} for $\delta=1$ is an isolated (or locally unique) solution in the sense that there are no other solutions very nearby. However, it does not follow that this is the unique solution to the BVP (\ref{eqn:steadystate}) as shown by the convergence in Figure \ref{fig:f5steadystatestab} to another steady-state solution. In fact, this steady-state is unstable for the Fisher-KPP equation (\ref{eqn:FisherIBVP}), as demonstrated in Figure \ref{fig:f5steadystatestab}.
\begin{figure}[t]
\centerline{
\includegraphics[scale=.65]{Harwood_steadystate_range_Newton_fig.pdf}
}
\caption{Graph of three steady-state solutions to BVP (\ref{eqn:steadystate}) by Newton's method for a $\delta$-parameter range of (1) $\delta=0$, (2) $\delta=1$, and (3) $\delta\to\infty$ using the other default parameter values $a=0, b=1, L=10, \Delta x=\frac{1}{20}, \rho=1, {\rm degree}=2, c=\frac{1}{3}$ and initial condition as given in {\tt PDE\_Analysis\_Setup.m} found in Appendix \ref{App:Setup}.}
\label{fig:fl6delta_range}
\end{figure}
\begin{svgraybox}
{\bf Project Idea 3.}
For $\delta=1$, use Newton's method in the example code in Appendix \ref{App:Solutions} to investigate steady-state solutions to the bounded Fisher-KPP equation (\ref{eqn:steadystate}). Note the behavior of limiting solutions found in relation to the initial condition used. Also, note the shape of functions for which Newton's method is unstable. One example behavior for a specific quadratic is shown in Figure \ref{fig:fl6delta_range}.
\end{svgraybox}
\begin{svgraybox}
{\bf Project Idea 4.}
Find an initial condition which tends to a steady-state different than either $f_0(x)$ or $f_\infty(x)$. Investigate how the shape of the solution changes as $\delta \to 0$ and as $\delta \to \infty$. Does it converge to a limiting function at both ends?
\end{svgraybox}
\begin{figure}[t]
\includegraphics[scale=.40]{Harwood_steadystate_semistable_fig.pdf}
\includegraphics[scale=.40]{Harwood_steadystate_semistable_noise_fig.pdf}
\caption{Demonstration of the instability of the steady-state in Figure \ref{fig:fl6delta_range} for $\delta=1$ as an initial condition for the Fisher-KPP equation (\ref{eqn:FisherIBVP}) adding (a) no additional noise and (b) some normally distributed noise to slightly perturb the initial condition from the steady-state using the other default parameter values $a=0, b=1, L=10, \Delta x=\frac{1}{20}, \rho=1, {\rm degree}=2, c=\frac{1}{3}$ and first initial condition as given in {\tt PDE\_Analysis\_Setup.m} found in Appendix \ref{App:Setup}. }
\label{fig:f5steadystatestab}
\end{figure}
\begin{svgraybox}
{\bf Project Idea 5. }
Once you find an initial condition which tends to a steady-state different than either $f_0(x)$ or $f_\infty(x)$, run {\tt PDE\_Solution.m} in Appendix \ref{App:Solutions} to investigate the time stability of this steady-state using the built-in solver initialized by this steady-state perturbed by some small normally distributed noise.
\end{svgraybox}
Note, the {\tt randn(i,j)} function in MATLAB is used to create noise in {\tt PDE\_Solution.m} using a vector of normally distributed pseudo-random variables with mean 0 and standard deviation 1.
Newton's method is called a local method since it converges to a stable solution only if it is near enough. More precisely, there is an open region around each solution called a basin, from which all initial functions behave the same (either all converging to the solution or all diverging) \cite{Burden2011}.
\begin{svgraybox}
{\bf Project Idea 6. }
It is interesting to note that when $b=0$ in the Fisher-KPP equation, the $\delta$-limiting functions coalesce, $f_\infty(x)=f_0(x)\equiv 0$, for $\bar{u}=0$. Starting with $\delta=1$, numerically verify that the steady-state behavior as $\delta\to\infty$ remains at zero. Though there are two equilibrium solutions as $\delta\to0$, $\bar{u}=0$ and $\bar{u}=1$, one is part of a continuous limiting function qualitatively similar to $f_\infty(x)$ while the other is distinct from $f_\infty(x)$ and discontinuous. Numerically investigate the steady-state behavior as $\delta\to0$ starting at $\delta=1$. Estimate the intervals, called Newton's basins, which converge to each distinct steady-state shape or diverge entirely. Note, it is easiest to distinguish steady-state shapes by the number of extrema. For example, smooth functions tending towards $f_0^{(1)}(x)$ have one extremum.
\end{svgraybox}
\subsection{Traveling Wave Solutions}
The previous analysis for finding steady-state solutions can also be used to find asymptotic traveling wave solutions to the initial value problem
\begin{eqnarray}\label{eqn:IVP}
u_t = \delta u_{xx} + R(u),\\\nonumber
u(x,0) = u_0(x),\ -\infty <x< \infty,\ t\geq 0,
\end{eqnarray}
by introducing a moving coordinate frame: $z=x-ct$,
with speed $c$ where $c>0$ moves to the right. Note that by the chain rule for $u(z(x,t))$
\begin{eqnarray}
u_t = u_z z_t = -c u_z \\\nonumber
u_x = u_z z_x = u_z\\\nonumber
u_{xx} = (u_z)_z z_x = u_{zz}.
\end{eqnarray}
Under this moving coordinate frame, equation (\ref{eqn:IVP}) transforms into the boundary value problem
\begin{eqnarray}\label{eqn:traveling}
0 &=& \delta u_{zz} + c u_z + R(u),\\\nonumber
&& -\infty < z < \infty.
\end{eqnarray}
\begin{svgraybox}
{\bf Project Idea 7. }
Modify the example code {\tt PDE\_Analysis\_Setup} to introduce a positive speed starting with $c=2$ (updated after the initial condition is defined to avoid a conflict) and adding {\tt +c/dx*spdiags(ones(M-2,1)*[-1 1 ],[-1 0], M-2, M-2)} to the line defining matrix D and update {\tt BCs(1)} accordingly. Once you have implemented this transformation correctly, you can further investigate the effect the wave speed $c$ has on the existence and stability of traveling waves as analyzed in \cite{HarleyMarangell2015}. Then apply this analysis to investigate traveling waves of the FitzHugh-Nagumo equation (\ref{eqn:FitzHughNagumo}) as steady-states of the transformed equation.
\end{svgraybox}
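A rough NumPy analogue of the matrix modification in Project Idea 7 may help clarify what the MATLAB {\tt spdiags} line does: the moving-frame term $c\,u_z$ is added to the diffusion operator as a backward (upwind) first difference. The dense matrices, sizes, and parameter values below are illustrative assumptions, not the chapter's code:

```python
import numpy as np

def traveling_wave_operator(M, L=10.0, delta=1.0, c=2.0):
    """Discrete operator delta*u_zz + c*u_z on the interior of [0, L]:
    centered second difference for u_zz plus a backward (upwind) first
    difference for u_z, mirroring the MATLAB spdiags modification.
    (MATLAB builds these as sparse matrices; dense np.diag is used here
    for clarity.)"""
    dz = L / M
    n = M - 1                                     # interior unknowns
    e = np.ones(n)
    D2 = (np.diag(e[:-1], -1) - 2.0 * np.diag(e) + np.diag(e[:-1], 1)) / dz**2
    D1 = (np.diag(e) - np.diag(e[:-1], -1)) / dz  # (U_j - U_{j-1})/dz
    return delta * D2 + c * D1

A = traveling_wave_operator(M=200)
print(A.shape)   # (199, 199)
# The boundary vector must also gain the upwind contribution at z = 0,
# e.g. a term like (delta/dz**2 - c/dz)*u(0), which is what "update
# BCs(1) accordingly" refers to.
```

Each interior row then carries $\delta/\Delta z^2 - c/\Delta z$ on the subdiagonal, $-2\delta/\Delta z^2 + c/\Delta z$ on the diagonal, and $\delta/\Delta z^2$ on the superdiagonal.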
\section{Further Investigations}\label{sec:further-investigation}
Now that we have some experience numerically investigating the Fisher-KPP equation, we can branch out to other relevant nonlinear PDEs.
\begin{svgraybox}
{\bf Project Idea 8. }
Use MATLAB's {\tt ode23s} solver to investigate asymptotic behavior of other relevant reaction-diffusion equations, such as the FitzHugh-Nagumo equation (\ref{eqn:FitzHughNagumo}) which models the phase transition of chemical activation along neuron cells \cite{FitzHughNagumo}. First investigate steady-state solutions and then transform the IBVP to investigate traveling waves. Which is more applicable to the model? A more complicated example is the Nonlinear Schr\"odinger equation (\ref{eqn:NLS}), whose solution is a complex-valued nonlinear wave which models light propagation in fiber optic cable \cite{Lakoba2016P1}.
\end{svgraybox}
The FitzHugh-Nagumo equation (\ref{eqn:FitzHughNagumo}) is given by
\begin{eqnarray} \label{eqn:FitzHughNagumo}
u_t &=& \delta u_{xx} + \rho u(u-\alpha)(1-u),\ 0<\alpha<1\\\nonumber
&&u(x,0) = u_0(x),\\\nonumber
&&u(0,t) = 0\\\nonumber
&&u(10,t) = 1
\end{eqnarray}
For the Nonlinear Schr\"odinger equation (\ref{eqn:NLS}), the boundary should not interfere with or add to the propagating wave. One boundary condition often used is setting the ends to zero for some large $L$ representing $\infty$.
\begin{eqnarray} \label{eqn:NLS}
u_t &=& i\delta u_{xx} - i\rho |u|^2 u,\\\nonumber
&&u(x,0) = u_0(x),\\\nonumber
&&u(-L,t) = 0\\\nonumber
&&u(L,t) = 0
\end{eqnarray}
Additionally, once you have analyzed a relevant model PDE, it is helpful to compare end behavior of numerical solutions with feasible bounds on the physical quantities modeled. In modeling gene propagation, for example, with the Fisher-KPP equation (\ref{eqn:FisherIBVP}), the values of $u$ are amounts of saturation of the advantageous gene in a population. As such, it is feasible for $0<u<1$ as was used in the stability analysis. The steady-state solution shown in Figure \ref{fig:fl6delta_range}, which we showed was unstable in Figure \ref{fig:f5steadystatestab}, does not stay bounded in the feasible region. This is an example where unstable solutions represent a non-physical solution. While investigating other PDEs, keep in mind which steady-states have feasible shapes and bounds. Also, if you have measurement data to compare to, consider which range of parameter values yield the closest looking behavior. This will help define a feasible region of the parameters.
\section{Introduction}
High temperature superconductivity in Fe-based compounds has taken on immense research interest since their discovery in 2008 \cite{KamiharaFeSCs, PaglioneRev,JohnstonRev, LumsdenRev}. Of these compounds, BaFe$_2$As$_2$ has been among the most extensively studied, largely due to availability of sizable high quality single crystals. BaFe$_2$As$_2$ is an antiferromagnet (AFM) with $T_N$ = 135 K \cite{Rotter}. AFM order is closely linked to both electronic nematicity \cite{Fernandes} and structural symmetry breaking from tetragonal to orthorhombic\cite{Rotter}. Magnetic and structural transitions present in pure BaFe$_2$As$_2$ are sensitive to chemical substitution and physical pressure, and a dome-like superconducting phase emerges with their suppression \cite{Canfield}. While substitution on all three ionic sites has been observed to stabilize high $T_c$~superconductivity, the choice of substituant site strongly influences the ensuing superconducting phase. For instance, electron doping with Ni and Co substitution for Fe induces fully gapped superconductivity while isoelectronic substitution of P on the As site produces a nodal superconducting phase \cite{Stewart,Co-nodeless, Ni-nodeless, P-nodal}. Superconductivity in all of these series however is believed to be closely linked to phase criticality; specifically, the competition and cooperation between nematic and magnetic phases and superconducting pairing.
BaNi$_2$As$_2$~crystallizes in the same tetragonal ThCr$_2$Si$_2$ structure (space group \textit{I4/mmm}) as BaFe$_2$As$_2$ and similarly undergoes a structural distortion at approximately 135 K \cite{Ronning}. However, in BaNi$_2$As$_2$~the structural distortion is between a high temperature tetragonal and low temperature triclinic, rather than orthorhombic, symmetry, and has no associated magnetic order \cite{Sefat, Neutron}. Rather, theoretical work has suggested that the zig-zag chain structure in the triclinic distortion is driven by orbital ordering, explaining the lack of magnetic order \cite{Zigzag}.
BaNi$_2$As$_2$~also displays bulk superconductivity below $T_c$~= 0.7 K \cite{Ronning}, suggested to be conventional BCS-type in nature with a fully-gapped $s$-wave order parameter symmetry \cite{Kurita, Subedi}. Superconductivity in BaNi$_2$As$_2$~is widely thought to be distinct from the unconventional sign-changing, $s^\pm$, order parameter of the iron-based high-$T_c$~superconductors \cite{s+-,PaglioneRev}, such as in Ba(Fe$_{1-x}$Ni$_{x}$)$_2$As$_2$ for 0.02 $\leq x\leq$ 0.08 \cite{Fe-NiPD}. Electronic structure calculations suggest BaNi$_2$As$_2$~should not host an $s^\pm$ state, as any nodal planes would necessarily intersect the Fermi surface due to its complexity \cite{NiRev}, and the heat capacity and thermal conductivity data of BaNi$_2$As$_2$~have been well fit by a BCS $s$-wave model \cite{Kurita}. Despite the distinctions from its iron-based counterpart, previous substitutional studies of Ba(Ni$_{1-x}$Cu$_{x}$)$_2$As$_2$ \cite{KudoCu} and BaNi$_2$(As$_{1-x}$P$_{x}$)$_2$ \cite{KudoP} have found an abrupt, strong enhancement of $T_c$~from 0.7 K to 3.3 K upon suppression of the triclinic phase \cite{KudoCu, KudoP}, with strengthened pairing attributed to a soft phonon mode at the first-order structural phase boundary. The enhanced $T_c$~value in the tetragonal phase of BaNi$_2$(As$_{1-x}$P$_{x}$)$_2$ extends to the $x$ = 1 end-member BaNi$_2$P$_2$ \cite{BaNiP}, suggesting the enhancement is rooted in the tetragonal structure itself.
The recent discovery of a charge density wave (CDW) emerging near the structural transition in BaNi$_2$As$_2$~\cite{Abbamonte} raises new questions about pairing in this system, in particular the possibility of a more complicated relationship between superconductivity and structural criticality in BaNi$_2$As$_2$. Here we report the physical properties of Co-substituted BaNi$_2$As$_2$~single crystals, showing that the low temperature triclinic phase is smoothly suppressed with cobalt substitution, concomitant with a continuous enhancement of $T_c$~upon approach to the zero-temperature structural phase boundary. We find that, in contrast to other reported BaNi$_2$As$_2$~substitutional studies, and in a manner reminiscent of similar work in BaFe$_2$As$_2$, Ba(Ni$_{1-x}$Co$_{x}$)$_2$As$_2$~exhibits a strong enhancement of $T_c$~in both the triclinic and tetragonal low temperature phases, with its suppression away from the structural critical point suggesting a Cooper pairing enhancement reminiscent of superconductivity emerging from quantum criticality.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{LatticeParameters2.pdf}
\caption{Structural and chemical characterization of Ba(Ni$_{1-x}$Co$_{x}$)$_2$As$_2$~single crystals. (a) WDS chemical composition characterization for Ba(Ni$_{1-x}$Co$_{x}$)$_2$As$_2$~single crystals. $x_{wds}$ = $x_{nominal}$ curve represented by black dashed line. (b) Lattice parameters in Ba(Ni$_{1-x}$Co$_{x}$)$_2$As$_2$~series collected at 250 K. Data show a strongly linear evolution in both $a$ and $c$ axis length through $x$ = 0.251.}
\label{fig:Figure1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{PureCompound.pdf}
\caption{Characterization of the structural transition in BaNi$_2$As$_2$. Resistivity of pure BaNi$_2$As$_2$~single crystals shown in main figure. Inset a (b) displays hysteretic magnetization (heat capacity) when warming and cooling through the structural transition.}
\label{fig:Figure2}
\end{figure}
\section{Experimental Methods}
\begin{table}
\caption{\label{tabl1} Ba(Ni$_{1-x}$Co$_{x}$)$_2$As$_2$~crystallographic data determined through single-crystal x-ray diffraction. All data were collected at 250 K.}
\footnotesize\rm
\begin{ruledtabular}
\begin{tabular}{llll}
$x$&0&0.083&0.133\\
\hline
Crystal system&Tetragonal&Tetragonal&Tetragonal\\
Space group&I4/mmm&I4/mmm&I4/mmm\\
$a$($\mathrm{\AA}$)&4.144(2)&4.1256(5)&4.1140(7)\\
$b$($\mathrm{\AA}$)&4.144(2)&4.1256(5)&4.1140(7)\\
$c$($\mathrm{\AA}$)&11.656(6)&11.7486(15)&11.827(2)\\
$V$($\mathrm{\AA}^3$)&200.2(2)&199.97(5)&200.17(8)\\
Reflections&1737&1705&1776\\
R$_1$&0.0140&0.0179&0.0156\\
Atomic parameters:&&&\\
Ba&2$a$ (0,0,0)&2$a$ (0,0,0)&2$a$ (0,0,0)\\
Ni/Co&4$d$ (0,1/2,1/4)&4$d$ (0,1/2,1/4)&4$d$ (0,1/2,1/4)\\
As&4$e$ (0,0,$z$)&4$e$ (0,0,$z$)&4$e$ (0,0,$z$)\\
$z$&0.34726(6)&0.34785(7)&0.34812(6)\\
Bond lengths ($\mathrm{\AA}$):&&\\
Ba-As($\mathrm{\AA}$)&3.4288(15)&3.4213(6)&3.4189(6)\\
Ni/Co-As($\mathrm{\AA}$)&2.3619(11)&2.3615(5)&2.3618(5)\\
As-As($\mathrm{\AA}$)&3.560(71)&3.575(0)&3.592(5)\\
Bond angles (deg):&&&\\
As-Ni/Co-As&103.32(2)&103.710(16)&103.971(15)\\
As-Ni/Co-As&122.63(5)&121.74(4)&121.14(3)\\
\end{tabular}
\end{ruledtabular}
\end{table}
Ba(Ni$_{1-x}$Co$_{x}$)$_2$As$_2$~crystals were grown out of Pb-flux using a solution growth technique originally reported by Ronning, \textit{et al. }\cite{Ronning}. Crystals formed as shiny, thin platelets, with typical dimensions of 0.5 mm $\times$ 0.5 mm $\times$ 0.05 mm with a high observed residual resistivity ratio RRR = 10 that exceeded previous reports, as well as our own self-flux grown samples. The typically small crystal sizes were prohibitive for thermodynamic and magnetic measurements. To circumvent this issue, larger BaNi$_2$As$_2$~ crystals with dimensions of 2 mm $\times$ 2 mm $\times$ 0.5 mm were also synthesized out of NiAs self-flux \cite{Sefat} and were used for characterization of the structural transition.
Elemental composition in substituted samples was determined using wavelength dispersive spectroscopy (WDS). Crystal properties within a growth show minimal variation, with WDS giving a variability in Ba(Ni$_{1-x}$Co$_{x}$)$_2$As$_2$~Co concentrations of $\Delta x \leq$ 0.01 for crystals pulled from the same growth. The nominal \textit{x} concentration versus that obtained from WDS is shown in Fig. 1(a).
Structural data were collected on single crystals in a Bruker APEX-II CCD system equipped with a graphite monochromator and a MoK$_\alpha$~sealed tube ($\lambda$ = 0.71073 \AA), and were refined using the Bruker SHELXTL Software Package. Crystallographic information collected in the tetragonal phase (250 K) are included in Table 1 for several representative $x$ values. Atomic positions evolve monotonically across the phase diagram, while low conventional residual values (R$_1$) confirm high crystal quality. A continuous decrease in $a$ and increase in $c$ lattice parameters with increasing Co concentration is observed across all measured samples, as shown in Fig. 1(b).
Standard density functional theory calculations for pure BaNi$_2$As$_2$~were conducted using the WIEN2K \cite{Wein2k} implementation of the full potential linearized augmented plane wave method in the local density approximation. The k-point mesh was taken to be 11 $\times$ 11 $\times$ 11, with lattice constants taken from our experimental measurements. Supercell calculations were implemented for Co-substituted cases (i.e., Ba$_4$Ni$_7$CoAs$_8$ for $x$ = 0.125 and Ba$_2$Ni$_3$CoAs$_4$ for $x$ = 0.250), and resultant electronic structures unfolded via recently developed first-principles unfolding methods \cite{Unfolding}.
Transport, heat capacity, and $ac$ magnetic susceptibility data were taken using Quantum Design Physical Property Measurement System\textsuperscript{\textregistered} (PPMS-14T) and DynaCool\textsuperscript{TM} (DC-14T) systems. An environment between 1.8 and 300 K was used in each system. Heat capacity and transport measurements were extended down to 400 mK using a Quantum Design Helium-3 refrigerator option compatible with the PPMS. In-plane transport data were taken using a four wire configuration. Au wires were attached to cleaved, or polished when necessary (to remove Pb contamination) single crystals using DuPont 4929N silver paste. Single crystal \textit{ac} magnetic susceptibility was measured using a homemade coil \cite{Coil}. \textit{Ac} magnetic susceptibility measurements between 0.1 and 3 K were taken with the coil mounted on a Quantum Design Adiabatic Demagnetization Refrigerator insert for the PPMS. Data were taken at a frequency of 19.997 kHz, in an ac field with approximate amplitude of 0.25 Oe.
Heat capacity measurements were taken with a relaxation technique fit to a dual time constant model. The background heat capacity of the platform and grease was measured first and subtracted from the final result. Experiments on Co-substituted samples were complicated by small crystal sizes ($<$ 0.1 mg). To circumvent this issue, heat capacity measurements were taken on collections of several samples pulled from the same growth. Sharp anomalies at the structural transitions in measurements taken on these collections of crystals, along with the high degree of growth homogeneity determined through WDS, suggest minimal error in heat capacity data due to collection averaging. The heat capacity data across the first-order structural transition of BaNi$_2$As$_2$ were measured by establishing a $\Delta T$ of 15 K at 130 K \cite{QD}. Data were collected over a 4$\tau$ measuring time (about 2.5 min). The single-slope method \cite{QDHC} was used to calculate the heat capacity shown in inset (a) of Fig. 2.
DC magnetic susceptibility measurements were carried out in a Quantum Design Magnetic Property Measurement System (MPMS) SQUID magnetometer.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{RvTfinal.pdf}
\caption{Transport measurements in Ba(Ni$_{1-x}$Co$_{x}$)$_2$As$_2$~series. Main figure displays resistance normalized to room temperature value, and vertically offset for clarity. Data show clear suppression of anomalies associated with the structural transition, which vanishes by $x$ = 0.133. Inset displays low temperature resistance normalized to 3.5 K value. Samples display clear enhancement of $T_c$~when approaching structural phase boundary. Data plotted in blue ($x$ $\leq$ 0.083) feature a low temperature resistance anomaly consistent with the triclinic structural distortion. Curves plotted in red remain tetragonal down to the lowest measured temperature.}
\label{fig:Figure3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{Cp.pdf}
\caption{Heat capacity measurements collected on warming in Ba(Ni$_{1-x}$Co$_{x}$)$_2$As$_2$~crystals. Anomalies in main figure indicative of structural transition. Inset displays low temperature $C_p$/$T$ data plotted versus temperature squared.}
\label{fig:Figure4}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{CJE217.pdf}
\caption{Measurements of superconducting transition in sample with near optimal substitution $x$ = 0.063. Electronic heat capacity ($C_e$) was determined by subtracting the phonon contribution ($\beta$T$^3$) from the total heat capacity (main figure). Red curve is the $\alpha$-model predictions for a BCS superconductor \cite{Johnston} ($\alpha$ = 1.764) scaled by a constant multiple to match the data. Inset displays superconducting transition measured via four terminal resistance (a) and the real part of $ac$ susceptibility measured using a homemade coil (b).}
\label{fig:Figure5}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{Hc2.pdf}
\caption{$H_{c2}$~data collected on an optimally substituted sample of $x$ = 0.083. Isomagnetic resistance data collected with field parallel to crystal a-axis (a) and parallel to crystal c-axis (b). Data collected parallel to the c-axis were taken in 0.5 kOe increments (0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5 and 4 kOe), while measurements with H $\parallel$ a were taken in 1 kOe increments (0, 1, 2, 3, 4, 5, and 6 kOe). (c) $H_{c2}$~vs $T_c$~phase diagram in the parallel and perpendicular configurations. Data points were taken at the midpoint of the resistive transition, and error bars represent the range wherein the resistance is between 90\% and 10\% of the normal state value. Curves are generated for a dirty BCS superconductor using the model developed by Werthamer, Helfand, and Hohenberg \cite{WHH}. Inset shows the upper critical field anisotropy, $\Gamma$, determined using the midpoint criterion.}
\label{fig:Figure6}
\end{figure}
\section{Results}
The electrical resistivity of BaNi$_2$As$_2$ is presented in Fig. 2, showing a pronounced hysteresis between the data collected on warming and cooling due to the strongly first-order tetragonal to triclinic structural transition. This hysteresis is also observed in the magnetic susceptibility (see inset (a) to Fig. 2) and heat capacity (see inset (b) to Fig. 2). Figure 3 displays the evolution of the hysteretic region in Ba(Ni$_{1-x}$Co$_{x}$)$_2$As$_2$~as measured by resistivity on heating and cooling. The hysteresis, and by extension the structural distortion, is observed throughout the range of the triclinic phase and is quickly suppressed with increasing $x$. This observation is consistent with the evolution of the heat capacity anomaly shown in Fig. 4, which is also absent by $x$ = 0.133. The low temperature heat capacity displayed in the inset to Fig. 4 shows no dramatic changes in the Debye temperature or Sommerfeld coefficient for the reported Co concentrations. The extracted Debye temperatures are $\Theta_D$ = 236 K, 218 K, and 225 K for $x$ = 0.014, 0.083, and 0.133, respectively. Pure BaNi$_2$As$_2$~was observed to have a Debye temperature of 250 K, consistent with previous work \cite{Sefat}.
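The Debye temperatures and Sommerfeld coefficients quoted here follow from the standard low-temperature analysis, which the text leaves implicit: a linear fit of $C_p/T$ versus $T^2$,

```latex
% Standard low-temperature heat capacity analysis (not stated explicitly in
% the text): a linear fit of C_p/T versus T^2 yields the Sommerfeld
% coefficient gamma and the phonon coefficient beta, which fixes Theta_D.
\[
  \frac{C_p}{T} = \gamma + \beta T^2\,, \qquad
  \Theta_D = \left(\frac{12\pi^{4} n R}{5\beta}\right)^{1/3},
\]
```

where $\gamma$ is the Sommerfeld coefficient, $\beta$ the phonon coefficient, $R$ the molar gas constant and $n$ = 5 the number of atoms per formula unit.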
Despite large changes in the low temperature structure, superconductivity surprisingly evolves continuously in the Ba(Ni$_{1-x}$Co$_{x}$)$_2$As$_2$~series (see Fig. 3 inset), with a fast enhancement in $T_c$~upon cobalt substitution, rising from $T_c$~= 0.7 K at $x$ = 0 to 1.7 K with just 1.4\% cobalt substitution for nickel. $T_c$~continues to increase with $x$ in the triclinic phase, eventually exhibiting a maximum of 2.3 K at $x$ = 0.083 and then decreasing gradually until superconductivity is entirely absent by $x$ = 0.251.
Fig.~5 presents heat capacity (main), transport (a) and $ac$ magnetization (b) measurements of the superconducting transition in the same single-crystal sample (crystal dimensions of 0.67 mm $\times$ 0.83 mm $\times$ 0.067 mm) with $x$ = 0.063. Balancing the entropy in the observed heat capacity jump yields a $T_c$~of 1.8 K for this sample. The red curve in Fig. 5 is the $\alpha$-model prediction of the heat capacity for a single-band BCS superconductor. This curve has been scaled by a constant value of 1.35 to match the observed heat capacity jump. This model describes the data well near $T_c$, and deviations at low temperatures may be due to nuclear Schottky contributions as observed in the pure compound \cite{Kurita}. The modeled heat capacity jump $\Delta C_e$/$\gamma T_c$ is $\sim$2.2, well above the BCS limit of 1.43, indicating strongly coupled superconductivity at this Co concentration. This value is consistent with previous reports of enhanced normalized heat capacity jumps of approximately 1.9 in both Ba(Ni$_{1-x}$Cu$_{x}$)$_2$As$_2$ and BaNi$_2$(As$_{1-x}$P$_{x}$)$_2$ \cite{KudoCu, KudoP} and greatly exceeds the near-BCS value observed in pure BaNi$_2$As$_2$~\cite{Kurita}. While previous work on Cu- and P-substituted BaNi$_2$As$_2$~suggested that the enhancement in the tetragonal phase was consistent with a phonon softening picture, this is not the case here, as the enhancement occurs in the triclinic phase and the Debye frequency exhibits little change through the entire Co substitution range as noted above.
Both superconductivity and the structural transition in optimally substituted $x$ = 0.083 samples were observed to be of bulk origin, as each manifests as an anomaly in the measured heat capacity (see Fig. 4 main and inset). Figure 6 shows the evolution of the upper critical field in these optimally substituted samples, which exhibit an approximately threefold enhancement compared to the pure compound. As reported in BaNi$_2$As$_2$, superconductivity in optimally substituted Ba(Ni$_{1-x}$Co$_{x}$)$_2$As$_2$~is more robust when the field is applied parallel to the crystal plane. At higher fields, resistance curves taken in this orientation begin to broaden, while data taken with the field along the $c$ axis remain sharp over all measurements. The $H_{c2}$~anisotropy, $\Gamma$, remains virtually constant at all temperatures, with $\Gamma$ = 1.50, slightly below the value of 2.1 reported for the pure compound \cite{Ronning}.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{FinalPhaseDiagram.pdf}
\caption{Phase diagram for Ba(Ni$_{1-x}$Co$_{x}$)$_2$As$_2$~system gathered from transport data. Structural and superconducting critical temperatures were both determined by the midpoint of each resistive transition. Superconducting $T_c$~is scaled by a factor of 10 to improve clarity. Inset displays transport data for optimally substituted, $x$ = 0.083, samples featuring both clear enhancement in $T_c$~and structural transition anomaly.}
\label{fig:Figure7}
\end{figure}
\section{Discussion}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{ElectronicCalculations.pdf}
\caption{The evolution of the electronic density of states in Ba(Ni$_{1-x}$Co$_{x}$)$_2$As$_2$. Data displayed for $x$ = 0 (top), 0.125 (center), and 0.25 (bottom).}
\label{fig:Figure8}
\end{figure}
Previous reports of chemical substitution in BaNi$_2$As$_2$~feature nearly identical evolutions of superconductivity. However, this trend is broken by Co substitution, which causes a strong enhancement of $T_c$~{\it within} the triclinic phase and a smooth evolution through the triclinic-tetragonal $T = 0$ boundary, manifesting in a dome-shaped superconducting phase diagram (Fig. 7).
The calculated electronic density of states (DOS) exhibits a monotonic enhancement at $E_F$ with increasing Co concentration (Fig. 8), with a Co d-orbital component that smoothly adds to the total DOS. This could be expected to provide an environment more hospitable to superconductivity with increasing Co, but the observed suppression of $T_c$~at high Co concentration is inconsistent with this conclusion, ruling out changes in DOS as the predominant factor responsible for enhanced $T_c$. We also observe no dramatic changes to Fermi surface topology that account for the rapid suppression of superconductivity in over-substituted Ba(Ni$_{1-x}$Co$_{x}$)$_2$As$_2$. While the enhanced Wilson ratio observed in the Co-based end-member BaCo$_2$As$_2$ \cite{BaCo} suggests that increasing Co concentration may ultimately invoke ferromagnetic correlations, the concentrations where $T_c$~is suppressed in the Ba(Ni$_{1-x}$Co$_{x}$)$_2$As$_2$~series are far from $x$ = 1, making it unlikely that the rapid suppression of $T_c$~in the tetragonal phase results from proximity to ferromagnetism; this behavior warrants further investigation.
The Ba(Ni$_{1-x}$Cu$_{x}$)$_2$As$_2$ and BaNi$_2$(As$_{1-x}$P$_{x}$)$_2$ series also exhibit phonon softening in high $T_c$~samples, indicated by strong superconducting coupling and a dramatically reduced Debye temperature. While strong superconducting coupling is also observed in the Ba(Ni$_{1-x}$Co$_{x}$)$_2$As$_2$~series, the Debye temperature remains virtually constant over the range of $x$ studied here. Given the strongly first-order nature of the structural transition, phonon softening within the triclinic phase is unexpected, though would not be unprecedented \cite{BaGe-P}. While the effect of phonon softening on pairing near such a strongly discontinuous structural boundary cannot be ignored, the distinct behavior found in the Ba(Ni$_{1-x}$Co$_{x}$)$_2$As$_2$~series suggests another mechanism is responsible for the strengthening of superconductivity, which appears to be centered around the triclinic-tetragonal critical point. The recent observations of CDW order in BaNi$_2$As$_2$~\cite{Abbamonte} provide a provocative suggestion that the previously mundane view of both superconductivity and the structural distortion in BaNi$_2$As$_2$~should be revisited, and that fluctuation-driven superconductivity is a real possibility \cite{Zigzag}. Further, uncovering a new mechanism for superconducting enhancement opens an interesting avenue to potentially extend superconductivity to even higher critical temperatures in this and related systems.
\section{\label{sec:Acknowledge}Acknowledgments}
Research at the University of Maryland was supported by the AFOSR Grant No. FA9550-14-10332 and the Gordon and Betty Moore Foundation Grant No. GBMF4419. We also acknowledge support from the Center for Nanophysics and Advanced Materials as well as the Maryland Nanocenter and its FabLab.
\section{Introduction}
Obtaining high-resolution 3D models of real-world scenes is a common task. The individual sensor observations may be captured with a variety of robotic platforms (e.g., wheeled, articulated, or aerial) in a variety of environments (e.g., outdoors or inside pipes).
The individual observations can then be combined into a single 3D representation (e.g., a triangulated 3D mesh). The quality of this model depends on how well the observations capture the scene, i.e., the number and distribution of the individual measurements. The problem of selecting and planning sensor views to obtain high-resolution models is known as \gls{nbv} planning.
\gls{nbv} planning approaches can be classified as either scene-model-based or scene-model-free. Model-based approaches \citem{Bircher2015, Kaba2016} use \textit{a priori} knowledge of the scene structure to compute a set of views from which the scene (i.e., an object or environment) is observed. These approaches work for a given scene but do not generalise well to other scenes.
Model-free approaches often use a volumetric \citem{Connolly1985} or surface representation \citem{Hollinger2012}. Volumetric representations discretise the scene into voxels and can obtain high observation coverage with a small voxel size but do not produce high-resolution models of large scenes. Surface representations estimate surface geometry from observations and can obtain high quality models of large scenes but often require tuning of unintuitive parameters or multiple survey stages.
This paper presents the \gls{see}, a scene-model-free approach to \gls{nbv} planning that uses a density representation. This representation uses a given resolution and measurement density to define a \textit{frontier} between fully and partially observed surfaces. Sensor views are proposed to observe this frontier and expand the fully observed surfaces. \gls{nbv}s are selected and new measurements are obtained until the entire scene is observed at the chosen resolution and measurement density.
This density representation does not require an \textit{a priori} discretisation of the scene as used by volumetric approaches and scales with the number of measurements obtained and not the size of the scene. This makes \gls{see} appropriate for large-scale observations (e.g., inspecting a bridge with an aerial vehicle). \gls{see} uses a more intuitive parameterisation than many surface representations and does not require multiple survey stages.
SEE is evaluated in simulation on four standard models \citem{Krishnamurthy1996a, Turk1994, Curless1996, Newell1975} and a full-scale model of the Radcliffe Camera in Oxford \citem{Boronczyk2016} \figref{marquee}. The results show that it achieves higher surface coverage in less computational time than the evaluated state-of-the-art volumetric approaches \citem{Vasquez-Gomez2015, Kriegel2015, Delmerico2017} while requiring the sensor to travel equivalent distances.
Section II presents an overview of \gls{nbv} planning literature. Section III presents \gls{see}. Section IV presents an experimental comparison of \gls{see} with state-of-the-art volumetric approaches on four standard models and a full-scale model of the Radcliffe Camera. Sections V and VI present a discussion of the results and our plans for future work.
\begin{figure}[tpb]
\centering
\subfloat[SEE]{\includegraphics[width=0.475\linewidth]{see_radcam_cross04.png}}\quad
\subfloat[AE \citem{Kriegel2015}]{\includegraphics[width=0.475\linewidth]{ae_radcam_cross04.png}}
\caption{A comparison of the point cloud resulting from running SEE (a) and AE \citem{Kriegel2015} (b) on a full-scale model of the Radcliffe Camera in Oxford. \gls{see} observed 99\% of the model at a $0.05$~m resolution. AE, the best-performing volumetric approach, observed 79\% in the same number of views.}
\figlabel{marquee}
\end{figure}
\section{Related Work}
Existing NBV planning work covers a variety of scene sizes, from small objects (e.g., the Stanford Bunny \citem{Turk1994}) \cite{Vasquez-Gomez2015, Kriegel2015, Delmerico2017, Dierenbach2016, Karaszewski2016, Khalfaoui2013, Pito1996, Yuan1995, Connolly1985} to buildings \cite{Yoder2016, MENG2017, Bircher2016, Song2017, Hollinger2012, Bissmarck2015, Bircher2015, Kaba2016, Roberts2017}.
Surveys of \gls{nbv} planning literature \citem{Tarabanis1995, Scott2003a, Karaszewski2016a} categorise approaches based on their scene representation. The most widely used categorisation \citem{Scott2003a} classifies approaches as either scene-model-based or scene-model-free. Model-based approaches \citem{Bircher2015, Kaba2016} require an \textit{a priori} scene model and do not generalise well. Within the class of model-free approaches there are global, volumetric and surface representations.
Global representations \citem{Pito1996, Yuan1995} consider all observations as part of a single connected surface. Pito \citem{Pito1996} generates a tessellated view space and selects \gls{nbv}s to observe the boundaries of a partial mesh until the mesh boundaries are closed. It obtains high-resolution models but requires a fixed work-space and known sensor model. Yuan \citem{Yuan1995} estimates the geometry of surface patches and selects views to observe the unknown space between them and obtain a single surface but only demonstrates it on simple surface geometries.
Volumetric representations \citem{Vasquez-Gomez2015, Delmerico2017, Connolly1985, Yoder2016, Bircher2016, Song2017, MENG2017, Bissmarck2015} discretise a bounded scene volume into a voxel grid from which view selection metrics can be computed. Seminal work by Connolly \citem{Connolly1985} uses a metric that counts the number of unseen voxels visible from potential views on a tessellated sphere encompassing the scene. View metrics in later work \citem{Vasquez-Gomez2015, Delmerico2017} consider multiple factors but still sample views from a tessellated surface. Vasquez-Gomez et al. \citem{Vasquez-Gomez2015} rank potential views based on reachability, distance, overlap with previous views and the number of visible unseen voxels. Delmerico et al. \citem{Delmerico2017} use \gls{infg} metrics to evaluate views based on voxel visibility, observability and proximity to existing observations.
The model resolution obtained from a volumetric representation depends on the resolution of the voxel grid and the number of potential views. Smaller voxels and more potential views allow for greater model detail but require higher computational costs to raytrace each view. These representations are difficult to scale to large scenes without lowering the model quality or increasing the computation time.
Volumetric representations \citem{Yoder2016, Bircher2016, Song2017, MENG2017, Bissmarck2015} have been applied to large scenes despite these limitations. Most approaches mitigate the increase in computation time by reducing the number of potential views. Yoder et al. \citem{Yoder2016} only sample views to observe the frontier between seen and unseen voxels and select \gls{nbv}s with a view selection metric that balances view utility and travel distance. Meng et al. \citem{MENG2017} similarly only sample views that observe frontier voxels and select \gls{nbv}s with an \gls{infg} metric. Bircher et al. \citem{Bircher2016} use the RRT algorithm \citem{LaValle1998} to plan paths through known voxels and sample views at the vertices of the RRT tree to observe unknown voxels. The \gls{nbv} is selected from the sampled views with an \gls{infg} metric. Song et al. \citem{Song2017} present a similar approach to \citem{Bircher2016} using the RRT* algorithm \citem{Karaman2011} to plan a path to the \gls{nbv} that maximises the observation of frontier voxels. Potential views are sampled within a given radius of the RRT* path and the subset that provides the greatest coverage is selected.
Reducing the number of potential views can mitigate the increased computational cost of large scenes but the resolution of the voxel grid is still limited by the raytracing complexity. Bissmarck et al. \citem{Bissmarck2015} compare raytracing algorithms that consider voxel observability, frontier voxels, sparse ray casting and using a hierarchy of voxel grid resolutions to reduce this complexity. They demonstrate that these algorithms outperform simple raycasting in terms of computation time but a \gls{nbv} planning approach using the algorithms for view selection is not presented.
Surface representations \citem{Dierenbach2016, Khalfaoui2013, Roberts2017, Hollinger2012} estimate surface geometry from sensor observations (e.g., by triangulating measurements into a mesh) and compute views to extend the surface boundaries and improve the surface quality. Some approaches incrementally extend the surface representation with new observations \citem{Dierenbach2016, Khalfaoui2013} while others use a multistage survey to iteratively refine a surface model of the scene \citem{Roberts2017, Hollinger2012}.
Dierenbach et al. \citem{Dierenbach2016} estimate surface geometry by training a neural network to generate a simplified mesh from sensor measurements. Point density is computed locally around the mesh vertices and views are proposed to extend the mesh and obtain a given point density. Khalfaoui et al. \citem{Khalfaoui2013} apply density-based clustering to sensor observations and propose views to observe the cluster boundaries until the maximum distance between cluster centers is below a given threshold. These approaches can obtain high-resolution models but require tuning of unintuitive parameters.
Multistage approaches \citem{Roberts2017, Hollinger2012} refine an existing surface mesh that is often obtained manually or with a preplanned path. Hollinger et al. \citem{Hollinger2012} represent the mesh uncertainty as a Gaussian process and propose views to improve the surface estimation. Roberts et al. \citem{Roberts2017} sample potential views within a given distance of the mesh surface, select the minimal subset that can provide complete coverage and plan the shortest path between them.
Some work \citem{Kriegel2015, Karaszewski2016} presents approaches using both volumetric and surface representations. Kriegel et al. \citem{Kriegel2015} combine a volumetric representation with an \gls{infg} view selection metric and a surface representation that selects views to extend the boundaries of a surface mesh and obtain a given point density. Karaszewski et al. \citem{Karaszewski2016} obtain an initial scene survey with a volumetric representation and then fill discontinuities in the observed surfaces based on the local point density. The local measurement density is also considered by \gls{see} but without the complexity of using a different underlying representation.
\vspace{1ex}
\gls{see} is a \gls{nbv} planning approach that uses a density representation. Unlike volumetric representations, it scales well to large scenes and is shown to obtain accurate and complete models of scenes at any scale (i.e., both \emph{bunnies} and \emph{buildings}). Unlike surface representations, it does not require multistage surveys or have unintuitive parameters. SEE instead uses only measurement density and resolution.
\section{Surface Edge Explorer (SEE)}
SEE seeks to observe an entire scene with a minimum measurement density. This measurement density is defined by the resolution, $r$, and target density, $\rho$, used to detect frontiers in the measurements. Frontiers are detected by classifying sensor measurements (i.e., points) based on the number of neighbouring points within the distance $r$. Points with sufficient neighbours (i.e., the local density is greater than or equal to $\rho$) are classified as \emph{core} and those without are classified as \emph{outliers}. Outlier points with both core and outlier neighbours are then classified as \emph{frontier} points \figref{class}. These frontier points represent the boundary between fully and partially observed surfaces \secref{front_det}.
The scene observation is expanded by taking measurements at these frontiers. Potential views are proposed by estimating the local surface geometry around frontier points as a plane described by a set of orthogonal vectors \figref{surf}. These vectors describe the normal to the local surface, the density boundary and the direction of partial observation (i.e., the frontier) \secref{surf_geom}.
Views are proposed orthogonal to this locally estimated surface plane to maximise sensor coverage \figref{vp}. The view distance can be specified by the user or defined as a function of the sensor parameters and desired resolution \secref{view_prop}.
The \gls{nbv} is selected from these \emph{view proposals} to reduce the distance from both the current sensor position and the first observation of the scene. This guides observations to expand one frontier at a time and decreases the total distance travelled by the sensor \secref{nbv}.
The proposed views will not expand frontiers on discontinuous or highly non-planar surfaces. These views are iteratively adjusted in response to new observations until the frontier point is observed or a sufficient number of attempts have been made to classify it as an outlier. Points classified as outliers will not be reprocessed unless a new point is observed nearby \secref{view_adj}.
SEE continues to select NBVs until there are no more frontier points and all measurements have been classified as core or outlier points. This can be achieved in unbounded real-world problems by discarding all measurements outside of a predefined scene boundary \secref{term}.
\begin{figure}[tpb]
\centering
\includegraphics[width=\linewidth]{pie_dbscan_mono-1.png}
\caption{An illustration of SEE's density-based classification. Points with a sufficient number of neighbours are classified as core points (black) while those without are outlier points (white). Points with both core points and outlier points in their neighbourhood are frontier points (grey).}
\figlabel{class}
\end{figure}
\subsection{Frontier Detection}
\seclabel{front_det}
\begin{figure}[tpb]
\centering
\includegraphics[width=\linewidth]{surf_geom-1.png}
\caption{An illustration of SEE's local surface geometry estimation. The geometry of the surface at the frontier points (grey) is estimated from nearby points with an orthogonal set of vectors. These vectors are orientated normal to the surface, $\norm$ (out of the page), parallel to the boundary line, $\bound$ and perpendicular to the boundary line (i.e., into the frontier), $\front$.}
\figlabel{surf}
\end{figure}
Frontiers between fully and partially observed surfaces are detected by performing density-based classification of sensor measurements (i.e., points). Points are classified as either core, frontier or outlier based on the number of neighbouring points, $k$, within a radius, $r$, of the point \figref{class}. The number of observed points in the $r$-ball is compared with the minimum number of points, $k_\mathrm{min}$, necessary to satisfy the desired point density, $\rho$, where $k_\mathrm{min} = \frac{4}{3}\rho\pi r^3$.
This density-based classification approach is based on DBSCAN \citem{Ester1996}. DBSCAN classifies a set of sensor measurements, $P := \{\mathbf{p}_i\}_{i=1}^n$ where $\mathbf{p}_i \in \mathbb{R}^3$, as core points, $\core$, frontier points, $\frontier$, or outlier points, $\free$. These labels are complete and unique such that \[P \equiv \core \cup \frontier \cup \free \quad\mathrm{and}\quad \core \cap \frontier \equiv \core \cap \free \equiv \frontier \cap \free \equiv \emptyset\,.\]
A point is classified as a core point if it has at least $k_\mathrm{min}$ neighbours within a distance $r$, \[\core := \{\mathbf{p} \in P \;|\; |N_\mathbf{p}| \geq k_\mathrm{min}\}\,,\]
where $N_\mathbf{p}$ is the set of points within $r$ of $\mathbf{p}$, \[N_\mathbf{p} := N(P,r,\mathbf{p}) := \{\mathbf{q} \in P \;|\;||\mathbf{q} - \mathbf{p}|| \leq r\}\,,\]
$||\cdot||$ is the $\mathrm{L}^2$-norm and $|\cdot|$ is set cardinality.
A point is classified as a frontier point if it is not a core point but has both core and outlier neighbours,
\[\frontier := \{\mathbf{p} \in P \;|\; |N_\mathbf{p}| < k_\mathrm{min} \;\land\; N_\mathbf{p} \;\cap\; \core \not= \emptyset \;\land\; N_\mathbf{p} \;\cap\; \free \not= \emptyset\}\,.\]
It is otherwise classified as an outlier point,
\[\free = P \setminus (\core \cup \frontier)\,.\]
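These set definitions translate directly into a brute-force classifier. The following Python sketch is our illustration, not the authors' implementation: a k-d tree would replace the $O(n^2)$ neighbourhood search in practice, neighbourhoods here exclude the point itself, and non-core neighbours are treated as outliers in the frontier test to break the circularity between the frontier and outlier definitions.

```python
import math
import numpy as np

def classify_points(P, r, rho):
    """Label each point 'core', 'frontier' or 'outlier' per the definitions above.

    Illustrative sketch only: brute-force O(n^2) neighbourhoods; all non-core
    neighbours count as outliers in the frontier test.
    """
    # Minimum neighbour count implied by the target density rho in an r-ball.
    k_min = (4.0 / 3.0) * rho * math.pi * r ** 3
    P = np.asarray(P, dtype=float)
    dist = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    nbr = (dist <= r) & ~np.eye(len(P), dtype=bool)  # r-ball, self excluded
    core = nbr.sum(axis=1) >= k_min
    labels = np.full(len(P), 'outlier', dtype=object)
    labels[core] = 'core'
    for i in np.flatnonzero(~core):
        # Frontier: a sparse point adjacent to both dense and sparse regions.
        if np.any(core & nbr[i]) and np.any(~core & nbr[i]):
            labels[i] = 'frontier'
    return labels
```

A dense cluster, one boundary point and one isolated point then receive the three labels respectively.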
This paper modifies DBSCAN to classify measurements obtained from incremental observations \algmref{inc_dbscan_alg}. When a new sensor observation is obtained, the set of new measurements, $M$, is combined with the existing classification sets, $\core$, $\frontier$ and $\free$ (Line 1). Each new point, $\mathbf{p}$, is processed and added to either the core, frontier or outlier point sets (Line 3). Any new point that has not yet been classified is added to the (re)classification queue, $Q$, along with its neighbourhood points (Lines 4--5). If a point in the queue is not a core point then it is (re)classified based on the new measurements (Lines 6--7). Points with insufficient neighbours to be core are classified as frontier points if they have both core and outlier neighbours or otherwise as outlier points (Lines 9--14). Points with sufficient neighbours are classified as core points (Line 16). If the point was previously unclassified then its neighbourhood is added to the (re)classification queue and it is marked as classified (Lines 19--21).
\begin{algorithm}[tpb]
\caption{POINT-CLASSIFIER($M, \core, \frontier, \free, r, k_\mathrm{min}$)}
\algmlabel{inc_dbscan_alg}
\begin{algorithmic}[1]
\small
\State {$P := \core \cup \frontier \cup \free \cup M$}
\State {$V \gets \emptyset$}
\ForAll{{$\mathbf{p} \in M$}}
\If{{$\mathbf{p} \notin V$}}
\State $Q \gets N(P,r,\mathbf{p}) \cup \{\mathbf{p}\}$
\ForAll{$\mathbf{q} \in Q$}
\If{{$\mathbf{q} \notin \core$}}
\State $N_\mathbf{q} \gets N(P,r,\mathbf{q})$
\If{$|N_\mathbf{q}| < k_\mathrm{min}$}
\If{{$N_\mathbf{q} \cap \core \neq \emptyset$} \textbf{and} {$N_\mathbf{q} \cap \free \neq \emptyset$}}
\State $\frontier \gets \frontier \cup \{\mathbf{q}\}$
\State $\free \gets \free \setminus \{\mathbf{q}\}$
\Else
\State $\free \gets \free \cup \{\mathbf{q}\}$
\EndIf
\Else
\State $\core\gets \core\cup \{\mathbf{q}\}$
\State $\frontier\gets \frontier\setminus \{\mathbf{q}\}$
\State $\free \gets \free \setminus \{\mathbf{q}\}$
\If{{$\mathbf{q} \in M$} \textbf{and} {$\mathbf{q} \notin V$}}
\State $Q \gets Q \cup N_\mathbf{q}$
\State {$V \gets V \cup \{\mathbf{q}\}$}
\EndIf
\EndIf
\EndIf
\EndFor
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
\subsection{Surface Geometry Estimation}
\seclabel{surf_geom}
Good observations require knowledge of the surface geometry. The surface around a frontier point, $\mathbf{f}$, is approximated as locally planar through eigendecomposition of a matrix representation of its neighbourhood,
\[\mathbf{D} := [\mathbf{p}_{1}-\mathbf{f},...,\mathbf{p}_{n}-\mathbf{f}] \in \mathbb{R}^{3 \times |N_\mathbf{f}|}\,,\] where $\mathbf{p}_{i} \in N_\mathbf{f}$ are the neighbouring points.
The eigendecomposition of the square matrix, $\mathbf{A} := \mathbf{DD}^\mathrm{T}$, produces a set of eigenvalues, $\Lambda = \{\lambda_{1},\lambda_{2},\lambda_{3}\}$ and their corresponding eigenvectors, $\Upsilon = \{\mathbf{\psi}_{1}, \mathbf{\psi}_{2}, \mathbf{\psi}_{3}\}$, satisfying the eigenequation,
\[\mathbf{A}\mathbf{\psi}_i = \lambda_i\mathbf{\psi}_i\,, \;i = \{1,2,3\}\,.\]
As $\mathbf{A}$ is a real symmetric matrix, its eigenvectors form an orthonormal basis (i.e., three mutually orthogonal unit vectors) of $\mathbb{R}^3$. Each eigenvector describes one component of the observed surface geometry \figref{surf}. The normal vector, $\norm$, is orthogonal to the surface plane. The boundary vector, $\bound$, points along the boundary between partially and fully observed surfaces. The frontier vector, $\front$, lies in the surface plane and points in the direction of partial observation.
The surface geometry components are determined sequentially from the eigenvectors, eigenvalues, view orientation and the mean of the nearby points, $\mathbf{{\bar{p}}}$, \[\mathbf{{\bar{p}}} = \frac{1}{|N_\mathbf{f}|}\sum_{\mathbf{p} \in N_\mathbf{f}}{(\mathbf{p} - \mathbf{f})} \,.\]
\subsubsection{Normal vector}
\seclabel{surf_norm_def}
The normal vector, $\norm$, is assigned as the eigenvector corresponding to the minimum eigenvalue (i.e., the direction of least surface variance),
\[\norm = \{\mathbf{\psi}_i \;|\; \lambda_i = \min\left\lbrace\Lambda\right\rbrace\}\,.\]
The direction of the normal vector is chosen to be opposite the direction of the view orientation, $\viewo$, such that, \[\norm \cdot \viewo < 0\,.\]
\subsubsection{Frontier vector}
\seclabel{edge_orth_def}
The frontier vector, $\front$, is the eigenvector perpendicular to the boundary of the partially observed surface. It is assigned as the remaining eigenvector which maximises the magnitude of the dot product with the mean point,
\[\front = \argmax_\mathbf{\mathbf{\psi} \,\in\, \Upsilon \setminus \norm} (|\mathbf{\bar{p}} \cdot \mathbf{\psi}|)\,.\]
The direction of the frontier vector is chosen to point away from the mean of the frontier point neighbourhood, into the partially observed region of the point cloud, such that, \[\front \cdot \mathbf{{\bar{p}}} < 0\,.\]
\subsubsection{Boundary vector}
\seclabel{edge_para_def}
The remaining eigenvector is locally tangential to the boundary between the density regions and is referred to as the boundary vector. The direction of the boundary vector is given by the cross product of the normal and frontier vectors,
\[\bound := \norm \times \front\,.\]
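Numerically, the three vectors can be recovered with a standard symmetric eigendecomposition. A NumPy sketch of this estimation (our own illustration; the function and variable names are assumptions, not the authors' code):

```python
import numpy as np

def surface_frame(f, neighbours, view_dir):
    """Estimate (normal, frontier, boundary) unit vectors at frontier point f."""
    f = np.asarray(f, dtype=float)
    N = np.asarray(neighbours, dtype=float)
    offsets = N - f
    D = offsets.T                       # 3 x |N_f| matrix of neighbour offsets
    p_bar = offsets.mean(axis=0)        # mean offset of the neighbourhood
    A = D @ D.T                         # real symmetric 3x3 matrix
    w, V = np.linalg.eigh(A)            # eigh returns ascending eigenvalues
    n = V[:, 0]                         # least-variance direction = normal
    if np.dot(n, view_dir) > 0:         # orient the normal against the view
        n = -n
    # Frontier vector: remaining eigenvector most aligned with p_bar,
    # oriented away from the neighbourhood mean (into the sparse region).
    u_f = max([V[:, 1], V[:, 2]], key=lambda v: abs(np.dot(p_bar, v)))
    if np.dot(u_f, p_bar) > 0:
        u_f = -u_f
    u_b = np.cross(n, u_f)              # boundary vector completes the frame
    return n, u_f, u_b
```

For a planar neighbourhood lying on one side of the frontier point, the recovered normal is perpendicular to the plane and the frontier vector points into the unobserved half-plane.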
\begin{figure}[tpb]
\centering
\includegraphics[width=\linewidth]{view_proposal-1.png}
\caption{An illustration of SEE's initial view proposal generation. Initial view proposals, ($\viewp$, $\viewo$), are generated around each frontier point (grey) from the estimated local surface geometry, $\norm$, $\front$ and $\bound$. The view orientation, $\viewo$, is the negated normal vector, $\viewo = -\norm$. The view position, $\viewp$, is set at a view distance, $d_\mathrm{v}$, from the frontier point along the normal vector, $\norm$. The dashed lines represent the field-of-view of the sensor. These views are adjusted when observing surfaces with discontinuities and occlusions to obtain the best view possible.}
\figlabel{vp}
\end{figure}
\subsection{View Generation}
\seclabel{view_prop}
View proposals are generated to maximise sensor coverage of the estimated planar surface around each frontier point. A view proposal, $\view$, is defined by a view position, $\viewp$ and orientation, $\viewo$.
The view position is a distance, $d_\mathrm{v}$, on the normal vector, $\norm$, from the frontier point,
\[\viewp = \mathbf{f} + d_\mathrm{v}\norm\,.\]
The view distance may be user specified or defined as a function of the sensor parameters and desired resolution.
The view orientation, $\viewo$, is given by the negation of the normal vector (i.e., pointing towards the surface),
\[\viewo = -\norm\,.\]
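The two equations above translate directly to code; a minimal sketch (the function name is ours):

```python
import numpy as np

def propose_view(frontier_pt, normal, d_v):
    """View proposal for one frontier point: position at a distance d_v
    along the normal vector, orientation pointing back towards the surface."""
    v_p = frontier_pt + d_v * normal
    v_o = -normal
    return v_p, v_o
```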
\subsection{NBV Selection}
\seclabel{nbv}
The NBV is selected from the set of view proposals, \[W := \{\mathrm{\mathbf{g}}(\mathbf{f}) \mid \mathbf{f} \in \frontier\}\,,\]
where $\mathrm{\mathbf{g}}$ maps frontier points to view proposals (\secref{view_prop}).
SEE observes the scene while reducing the total travel distance by selecting NBVs based on their \emph{incremental} and \emph{origin} distances. The incremental distance of an NBV is the Euclidean distance between the current view position, $\viewp_i$, and the position of the proposed view. The origin distance of an NBV is the Euclidean distance between the position of the proposed view and the position of the first scene observation, $\viewp_0$.
The NBV, $\viewn$, is selected to minimise the origin distance,
\[\viewn = \argmin_{\views \in W'}(||\viewp - \viewp_0||)\,,\]
from the set of view proposals, $W'$, within $r$ of the current view,
\[W' = \{\views \in W\;|\; ||\viewp - \viewp_i|| < r\}\,.\]
If there are no nearby view proposals (i.e., $W' = \emptyset$), then the NBV that minimises the incremental distance is selected,
\[\viewn = \argmin_{\views \in W}(||\viewp - \viewp_i||)\,.\]
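The two-stage selection rule above can be sketched as follows (a brute-force sketch; the function name is ours):

```python
import numpy as np

def select_nbv(proposals, v_current, v_origin, r):
    """Among proposals within radius r of the current view, pick the one
    closest to the first observation (origin distance); if none are nearby,
    fall back to the proposal closest to the current view (incremental
    distance)."""
    near = [p for p in proposals if np.linalg.norm(p - v_current) < r]
    if near:
        return min(near, key=lambda p: np.linalg.norm(p - v_origin))
    return min(proposals, key=lambda p: np.linalg.norm(p - v_current))
```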
\subsection{Local View Adjustment}
\seclabel{view_adj}
Real surfaces have discontinuities and occlusions that invalidate the locally planar assumption and prevent expansion of the frontier. In these situations, SEE incrementally adapts the current view until either the frontier point is observed or sufficient attempts have been made to classify it as an outlier.
The locally planar assumption is often violated by surface discontinuities (e.g., edges or corners) or occlusions by other surfaces. When the frontier point is near a discontinuity, the view must be adjusted to observe both sides of it (i.e., to see around the corner). When the frontier point is occluded by another surface, the view must be adjusted to avoid the occlusion (i.e., to see around the other surface). These views are not orthogonal to the locally estimated surface. SEE attains such views by iteratively using new measurements to translate and rotate the current view to move the center of the observed points towards the frontier point.
The magnitude of the translation and rotation for each axis is determined by the displacement, $\frontd := [\frontdi_1, \frontdi_2, \frontdi_3]^T$, between the center of observed points, $\cmass$, and the frontier point along the axis,
\[\frontd = \mathbf{R}^\mathrm{T}_\mathrm{d}(\frontc - \cmass)\,,\]
where $\mathbf{R}_\mathrm{d} = [\norm\;\front\;\bound]$ is a rotation into a local frame.
The view is first translated along the frontier vector by a distance, $d_\mathrm{f}$,
\[d_\mathrm{f} = \frontdi_1(d_\mathrm{t} + 1)\,,\]
and rotated around the boundary vector by $\theta_\mathrm{b}$,
\[\theta_\mathrm{b} = \tan^{-1}\left(\frac{d_\mathrm{v}\frontdi_1d_\mathrm{t}}{d^2_\mathrm{v} + \frontdi_1^2(d_\mathrm{t} + 1)}\right)\,.\]
It is then translated along the boundary vector by a distance, $d_\mathrm{b}$,
\[d_\mathrm{b} = \frontdi_2(d_\mathrm{t} + 1)\,,\]
and rotated around the frontier vector by $\theta_\mathrm{f}$,
\[\theta_\mathrm{f} = \tan^{-1}\left(\frac{d_\mathrm{v}\frontdi_2d_\mathrm{t}}{d^2_\mathrm{v} + \frontdi_2^2(d_\mathrm{t} + 1)}\right)\,.\]
The distance factor, $d_\mathrm{t}$, determines the magnitude of the translation and rotation for the view adjustment. SEE scales it exponentially with the number of view adjustments, $n$, for a given frontier point, $d_\mathrm{t} = 2^n$. This prevents the size of the view adjustment from converging to zero as the center of observed points moves closer to the frontier point.
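The four adjustment magnitudes can be computed directly from the displacement components; a sketch following the formulas above (component indexing follows the text; names are ours):

```python
import numpy as np

def adjustment_magnitudes(delta, d_v, n_adj):
    """Translation distances and rotation angles for one view adjustment.
    delta = R_d^T (f - c) is the displacement of the frontier point from the
    centre of observed points in the local frame (indexing as in the text);
    n_adj is the number of adjustments already made for this frontier point."""
    d_t = 2.0 ** n_adj                                        # distance factor
    d_f = delta[0] * (d_t + 1)                                # translation, frontier vector
    th_b = np.arctan2(d_v * delta[0] * d_t,
                      d_v ** 2 + delta[0] ** 2 * (d_t + 1))   # rotation, boundary vector
    d_b = delta[1] * (d_t + 1)                                # translation, boundary vector
    th_f = np.arctan2(d_v * delta[1] * d_t,
                      d_v ** 2 + delta[1] ** 2 * (d_t + 1))   # rotation, frontier vector
    return d_f, th_b, d_b, th_f
```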
The position and orientation of the adjusted view, $\viewn$, are then given by,
\begin{align*}
\viewp_{i+1} &= \frontc - d_\mathrm{v}\viewo_{i+1}\,,\\
\viewo_{i+1} &= \frac{\frontc - \mathbf{R}_\mathrm{f}\mathbf{R}_\mathrm{b}(\viewp_i + d_\mathrm{f}\front + d_\mathrm{b}\bound)}{||\mathbf{R}_\mathrm{f}\mathbf{R}_\mathrm{b}(\viewp_i + d_\mathrm{f}\front + d_\mathrm{b}\bound) ||}\,.
\end{align*}
The rotation matrices, $\mathbf{R}_\mathrm{b}$ and $\mathbf{R}_\mathrm{f}$, are computed with Rodrigues' rotation formula \citem{RodriguesO.1840a} using the frontier and boundary axes and angles, $\theta_\mathrm{f}$ and $\theta_\mathrm{b}$,
\begin{align*}
\mathbf{R}_\mathrm{b} &= (\cos \theta_\mathrm{b})\mathbf{I} + \sin \theta_\mathrm{b}\bound^\wedge + (1 - \cos \theta_\mathrm{b}) \bound\bound^\mathrm{T} \,,\\
\mathbf{R}_\mathrm{f} &= (\cos \theta_\mathrm{f})\mathbf{I} + \sin \theta_\mathrm{f}\front^\wedge + (1 - \cos \theta_\mathrm{f}) \front\front^\mathrm{T} \,,
\end{align*}
where,
\[\mathbf{u}^{\wedge} = \begin{bmatrix}
0 & -u_{2} & u_{1} \\
u_{2} & 0 & -u_{0} \\
-u_{1} & u_{0} & 0
\end{bmatrix}\,,\]
and $\mathbf{I}$ is the identity matrix.
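Rodrigues' formula and the hat operator translate directly to code; a sketch (names are ours; the component indexing of the hat matrix follows the text):

```python
import numpy as np

def skew(u):
    """Hat operator: skew(u) @ x equals np.cross(u, x)."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def rodrigues(axis, theta):
    """Rotation by theta about a unit axis via Rodrigues' formula."""
    return (np.cos(theta) * np.eye(3)
            + np.sin(theta) * skew(axis)
            + (1.0 - np.cos(theta)) * np.outer(axis, axis))
```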
The sensor is moved to the adjusted view and another observation is obtained. This process is repeated iteratively until the frontier is expanded (i.e., the other side of the surface discontinuity is observed) or the Euclidean distance between the frontier point and the center of observed points stops decreasing. If this termination criterion is reached, then the view is reinitialised on the viewing axis from which the frontier point was observed (i.e., where no occluding surface exists), but at a distance from the surface no greater than that of the observing view, $\viewp_\mathrm{obs}$.
This new view position is
\[\viewp_{i+1} = \frontc - \min\{||\frontc - \viewp_\mathrm{obs}||\,,\, d_\mathrm{v}\}\, \viewo_{i+1}\,.\]
The new view orientation is
\[\viewo_{i+1} = \frac{\frontc - \viewp_\mathrm{obs}}{||\frontc - \viewp_\mathrm{obs}||}\,.\]
When starting the view adjustment from the observation viewing axis, the distance factor is reinitialised, $d_\mathrm{t} = 1$, and adjustment is again performed until termination. If this process also reaches the termination criterion then the frontier point is reclassified as an outlier point.
\subsection{Completion}
\seclabel{term}
\gls{see} completes the observation of a scene when the final frontier point has been observed and all points are classified as either core points or outliers. This termination criterion assumes that the observable scene is finite. In the real world this condition can be met by defining a scene boundary and discarding all measurements outside it.
\section{Evaluation}
\begin{figure*}[tpb]
\centering
\captionsetup[subfigure]{}
\captionsetup[subfigure]{labelformat=empty}
\captionsetup[subfigure]{}
\subfloat[Stanford Armadillo ($1$~m) \citem{Krishnamurthy1996a}]{\includegraphics[width=.24\linewidth]{armadillo00}} \hfill
\captionsetup[subfigure]{labelformat=empty}
\subfloat[]{\includegraphics[width=.24\linewidth]{armadillo_ovn_1}} \hfill
\subfloat[]{\includegraphics[width=.24\linewidth]{armadillo_ovt_1}} \hfill
\subfloat[]{\includegraphics[width=.24\linewidth]{armadillo_ovd_1}} \hfill
\captionsetup[subfigure]{}
\subfloat[Stanford Bunny ($1$~m) \citem{Turk1994}]{\includegraphics[width=.24\linewidth]{bunny00}} \hfill
\captionsetup[subfigure]{labelformat=empty}
\subfloat[]{\includegraphics[width=.24\linewidth]{bunny_ovn_1}} \hfill
\subfloat[]{\includegraphics[width=.24\linewidth]{bunny_ovt_1}} \hfill
\subfloat[]{\includegraphics[width=.24\linewidth]{bunny_ovd_1}} \hfill
\captionsetup[subfigure]{}
\subfloat[Stanford Dragon ($1$~m) \citem{Curless1996}]{\includegraphics[width=.24\linewidth]{dragon00}} \hfill
\captionsetup[subfigure]{labelformat=empty}
\subfloat[]{\includegraphics[width=.24\linewidth]{dragon_ovn_1}} \hfill
\subfloat[]{\includegraphics[width=.24\linewidth]{dragon_ovt_1}} \hfill
\subfloat[]{\includegraphics[width=.24\linewidth]{dragon_ovd_1}} \hfill
\captionsetup[subfigure]{}
\subfloat[Newell Teapot ($1$~m) \citem{Newell1975}]{\includegraphics[width=.24\linewidth]{teapot00}} \hfill
\captionsetup[subfigure]{labelformat=empty}
\subfloat[]{\includegraphics[width=.24\linewidth]{teapot_ovn_1}} \hfill
\subfloat[]{\includegraphics[width=.24\linewidth]{teapot_ovt_1}} \hfill
\subfloat[]{\includegraphics[width=.24\linewidth]{teapot_ovd_1}} \hfill
\captionsetup[subfigure]{}
\subfloat[Radcliffe Camera ($40$~m) \citem{Boronczyk2016}]{\includegraphics[width=.24\linewidth]{radcam00_new}} \hfill
\captionsetup[subfigure]{labelformat=empty}
\subfloat[]{\includegraphics[width=.24\linewidth]{radcam_ovn_0}} \hfill
\subfloat[]{\includegraphics[width=.24\linewidth]{radcam_ovt_0}} \hfill
\subfloat[]{\includegraphics[width=.24\linewidth]{radcam_ovd_0}} \hfill
\subfloat[]{\includegraphics[width=.24\linewidth]{white}}
\subfloat[]{\includegraphics[width=.6\linewidth]{legend}}
\caption{The performance of \gls{see} and state-of-the-art volumetric approaches \citem{Vasquez-Gomez2015, Kriegel2015, Delmerico2017} on four ($1$~m) standard models, the Stanford Armadillo \citem{Krishnamurthy1996a}, the Stanford Bunny \citem{Turk1994}, the Stanford Dragon \citem{Curless1996}, the Newell Teapot \citem{Newell1975} and on a full-scale ($40$~m) model of the Radcliffe Camera \citem{Boronczyk2016}. Noise-free measurements obtained by SEE are presented in the left-most column to illustrate the model. The graphs present the mean performance calculated from fifty independent trials on each model. Left to right they present the mean surface coverage vs the number of views, the mean computational time required to plan \gls{nbv}s and the mean distance travelled by the sensor. The error bars denote one standard deviation around the mean. These results show that SEE achieves higher surface coverage in less computational time and with near equivalent travel distances when compared to the evaluated volumetric approaches.}
\figlabel{results}
\end{figure*}
\gls{see} is compared to state-of-the-art \gls{nbv} approaches with volumetric representations, Area Factor (AF) \citem{Vasquez-Gomez2015}, Average Entropy (AE) \citem{Kriegel2015}, Occlusion Aware (OA) \citem{Delmerico2017}, Unobserved Voxel (UV) \citem{Delmerico2017}, Rear Side Voxel (RSV) \citem{Delmerico2017}, Rear Side Entropy (RSE) \citem{Delmerico2017} and Proximity Count (PC) \citem{Delmerico2017} on four standard models, the Stanford Armadillo \citem{Krishnamurthy1996a}, the Stanford Bunny \citem{Turk1994}, the Stanford Dragon \citem{Curless1996}, the Newell Teapot \citem{Newell1975} and on a full-scale model of the Radcliffe Camera \citem{Boronczyk2016}. The implementations of the volumetric approaches are provided by \citem{Delmerico2017}.
\subsection{Simulation Environment}
Measurements are simulated from a depth sensor by raycasting into a triangulated mesh of a scene model and adding Gaussian noise ($\mu = 0$~m, $\sigma = 0.01$~m) to the ray intersections to simulate a noisy 3D range sensor. These measurements are given to the \gls{nbv} algorithms as sensor observations. The process is repeated for each view requested by the algorithm.
The depth sensor is defined by a field-of-view in radians, $\alpha$, and a dimension in pixels, $w_\mathrm{x}$ and $w_\mathrm{y}$. The simulation environment contains no ground plane and the sensor can move unconstrained in three dimensions with six degrees of freedom. The sensor is prevented from moving inside scene surfaces by checking for intersections between the sensor path and the scene model. The sensor parameters used for the evaluation are $\alpha = \frac{\pi}{3}$~rad, $w_\mathrm{x} = 600$~px and $w_\mathrm{y} = 600$~px.
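The measurement simulation described above reduces to adding zero-mean Gaussian noise to the ray/mesh intersections; a minimal sketch (names are ours):

```python
import numpy as np

def noisy_measurements(intersections, sigma, rng):
    """Add zero-mean Gaussian noise to ray/mesh intersection points to
    simulate a noisy 3D range sensor."""
    pts = np.asarray(intersections, dtype=float)
    return pts + rng.normal(0.0, sigma, size=pts.shape)
```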
\subsection{Evaluation Parameters}
Potential views for the volumetric approaches are sampled from a given view surface (i.e., a view sphere) surrounding the scene as in \citem{Vasquez-Gomez2015, Delmerico2017}. Kriegel et al. \citem{Kriegel2015} do not restrict views to a view surface, but we use the implementation provided by \citem{Delmerico2017}, which does. The radius of the view sphere is defined as half the diagonal of the scene bounding box plus a chosen offset of $2$~m for the standard models and $16$~m for the Radcliffe Camera. The view distance for \gls{see} is set to the radius of the view sphere.
SEE uses a measurement density of $\rho = 4000$~points per m$^3$ for the standard models and $\rho = 60$~points per m$^3$ for the Radcliffe Camera. The resolution used is $r = 0.02$~m for the standard models and $r = 0.2$~m for the Radcliffe Camera. The volumetric approaches use the same resolutions for their voxel grids.
Every algorithm was run fifty times on each model for a given number of views. \gls{see} was run until its completion criterion was satisfied. The view limit for the \gls{infg} approaches on each model is set to $1.5\times$ the maximum number of views used by SEE to demonstrate their convergence. The number of views sampled on the view sphere is defined as $2.4\times$ the view limit as in \citem{Delmerico2017}.
\subsection{Evaluation Metrics}
The algorithms are evaluated by calculating their relative surface coverage, computational time and sensor travel distance. These values are averaged across fifty experiments on each model \figref{results}.
\subsubsection{Surface Coverage}
The surface coverage of an approach is measured as the ratio of observed model points, $M_\mathrm{o}$, to total model points, $M_\mathrm{t}$,
\[\tau := \frac{M_\mathrm{o}}{M_\mathrm{t}}\,.\]
A point is considered observed if there is a measurement within $r_\mathrm{d}$ of the point. This registration distance is chosen as $r_\mathrm{d} = 0.005$~m for the standard models, as in \citem{Delmerico2017}, and $r_\mathrm{d} = 0.05$~m for the Radcliffe Camera model.
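The coverage metric can be sketched as follows (brute force over the pairwise distance matrix; a KD-tree would be used for large point clouds; names are ours):

```python
import numpy as np

def surface_coverage(model_pts, measurements, r_d):
    """Ratio of model points with at least one measurement within r_d."""
    d = np.linalg.norm(model_pts[:, None, :] - measurements[None, :, :], axis=2)
    return float((d.min(axis=1) <= r_d).mean())
```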
\subsubsection{Time}
The time taken to compute next best views is measured and added to a cumulative total. The time required for sensor travel is not considered.
\subsubsection{Distance}
The distance travelled by the sensor is measured by summing the Euclidean distances between the positions of subsequent views.
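This metric is a one-line computation; a sketch (the function name is ours):

```python
import numpy as np

def travel_distance(positions):
    """Sum of Euclidean distances between consecutive view positions."""
    p = np.asarray(positions, dtype=float)
    return float(np.linalg.norm(np.diff(p, axis=0), axis=1).sum())
```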
\section{Discussion}
The experimental results demonstrate that \gls{see} outperforms the evaluated state-of-the-art volumetric approaches \figref{results} by requiring less computational time to plan views that obtain greater surface coverage with near equivalent travel distances, regardless of scene complexity and scale. \gls{see} is shown to consistently obtain high surface coverage for models with different surface complexities and scales while the volumetric approaches demonstrate varying performance.
Standard models with a large amount of self-occlusions (e.g., the ears of the Stanford Bunny and the handle of the Newell Teapot) demonstrate the advantages of the adaptable views used by \gls{see}. The evaluated volumetric approaches perform worse on these problems as they do not adjust their views to account for occlusions. The view selection metric presented in \citem{Kriegel2015} does adapt views to handle occlusions but this is not included in the implementation provided by \citem{Delmerico2017}.
The Radcliffe Camera model demonstrates the difficulty of scaling volumetric approaches to large scenes. The large resolution necessary for reasonable raytracing allows voxels to be observed by discontinuous measurements \figref{marquee}.
The experiments show that the computational performance of \gls{see} is orders of magnitude better than that of the volumetric approaches. The poor performance of the volumetric approaches is due to the computational complexity of raytracing a high-resolution voxel grid from every view on the view sphere when selecting a \gls{nbv}. The limited scalability of the volumetric approaches with scene size is demonstrated by the difference in computational performance between the standard models and the Radcliffe Camera model.
While \gls{see} travels a larger distance per view in the experiments, it initially achieves equivalent surface coverage per unit distance. The volumetric approaches then appear to continue to travel without significantly improving coverage while \gls{see} continues to increase coverage as it travels. As a result, by the time \gls{see} terminates it has travelled distances equivalent to many of the other approaches but has achieved higher surface coverage.
\section{Conclusion}
\gls{see} is a scene-model-free approach to NBV planning that uses a density representation. The representation defines a \textit{frontier} between fully and partially observed surfaces based on a user-specified resolution and measurement density. View proposals are generated to observe this frontier and extend the scene coverage. \gls{nbv}s are selected and new measurements are obtained until the scene is fully observed with the given measurement density and at the specified resolution.
The density representation used by \gls{see} has a number of advantages over volumetric and surface representations. Unlike volumetric representations, its complexity scales only with the number of measurements and not with the scene scale, making it possible to obtain high-resolution models of large scenes. In contrast to many surface approaches, the measurement density and resolution parameters can be specified intuitively and only a single survey stage is required.
Experimental results show that \gls{see} outperforms state-of-the-art volumetric approaches in terms of surface coverage and computation time. It takes less computation time to propose views that achieve greater surface coverage with an equivalent travel distance.
SEE was only compared to publicly available volumetric approaches as we were unable to obtain implementations of relevant surface approaches. We plan to implement state-of-the-art surface (e.g., \cite{Dierenbach2016}) and/or combined approaches (e.g., \cite{Kriegel2015}) and present comparisons with these in future work. SEE may be made available to other researchers upon request to facilitate comparisons. We are also working to deploy and test \gls{see} on real-world problems with an aerial platform.
\renewcommand*{\bibfont}{\footnotesize}
{\renewcommand{\markboth}[2]{}
\printbibliography}
\end{document}
\section{Introduction}
\def\theequation{1.\arabic{equation}}
\setcounter{equation}{0}
Stochastic processes are widely used in science nowadays, as they allow for a flexible modelling of time-dependent phenomena. For example, in physics stochastic processes are used to explain the behaviour of quantum systems (see \citealp{vKa07}), but stochastic processes are also suitable for financial modelling. The seminal paper by \cite{DelSch94} suggests to use the special class of It\=o semimartingales in continuous time. Financial models based on It\=o semimartingales satisfy a certain condition on the absence of arbitrage and moreover they are still rich enough to accommodate stylized facts such as volatility clustering, leverage effects and jumps. As a consequence, in recent years a lot of research was focused on the development of statistical procedures for characteristics of It\=o semimartingales based on discrete observations. In particular, the importance of the jump component has been enforced by recent research (see \citealp{AitJac09b} and \citealp{AitJac09a}) and common methods in this field are gathered in the recent monographs by \cite{JacPro12} and \cite{AitJac14}.
A fundamental topic in statistics for stochastic processes is the analysis of structural breaks. Corresponding test procedures, commonly referred to as change point tests, have their origin in quality control (see \citealp{page1954, Pag55}) and nowadays, these techniques are widely used in many fields of science such as economics (\citealp{Per06}), finance (\citealp{AndGhy09}), climatology (\citealp{Ree07}) and engineering (\citealp{Sto00}). The contributions of the present paper to this field of research are new statistical procedures for the detection of changes in the jump behaviour of an It\=o semimartingale.
In contrast to the existing works \cite{BueHofVetDet15} and \cite{HofVetDet17}, this paper introduces methods for inference on the jump behaviour of the underlying process in general, while in those references the authors restrict the analysis to jumps which exceed a minimum size $\eps >0$.
Throughout this work we assume that we have high-frequency data $X_{i\Delta_n}$ $(i=0,1,\ldots,n)$ with $\Delta_n \to 0$, where the process $(X_t)_{t\in \R_+}$ is an It\=o semimartingale with the following decomposition
\begin{multline*}
X_t = X_0 + \int_0^t b_s \, ds + \int_0^t \sigma_s\, dW_s + \int_0^t \int_{\R} u 1_{\{|u| \leq 1\}} (\mu-\bar \mu)(ds,du) \\
+ \int_0^t \int_{\R} u 1_{\{|u|>1\}} \mu(ds,du).
\end{multline*}
Here $W$ is a standard Brownian motion and $\mu$ is a Poisson random measure on $\R^+ \times \R$ with predictable compensator $\bar \mu$ satisfying $\bar \mu(ds,du) = ds \: \nu_s(du)$. Our approach is completely non-parametric; that is, we only impose structural assumptions on the characteristic triplet $(b_s,\sigma_s,\nu_s)$ of $(X_t)_{t\in \R_+}$. The crucial quantity here is the transition kernel $\nu_s$, which controls the number and the size of the jumps around time $s \in \R_+$. Our aim is to test the null hypothesis
\begin{align}
\label{H0}
{\bf H}_0: \nu_s(d\taili) = \nu_0(d\taili)
\end{align}
against various alternatives involving the non-constancy of $\nu_s$. In particular, the detection of abrupt changes in a stochastic feature has been discussed extensively in the literature (see \citealp{auehor2013} and \citealp{jandhyala:2013} for an overview in a time series context). The first part of this paper belongs to this area of research and introduces tests for $\textbf{H}_0$ versus alternatives of an abrupt change of the form
\begin{align*}
{\bf H}_1^{(ab)}: \nu_s^{(n)}(d\taili) = \ind_{\{s < \ip{n\gseqi_0} \Delta_n\}}\nu_1(d\taili) + \ind_{\{s \geq \ip{n\gseqi_0} \Delta_n\}}\nu_2(d\taili),
\end{align*}
for some unknown $\gseqi_0 \in (0,1)$ and two distinct L\'evy measures $\nu_1 \neq \nu_2$. Similar to the classical setup of detecting changes in the mean of a time series it is only possible to define the change point relative to the length of the data set which in our case is the time horizon $n\Delta_n$. However, for inference on the jump behaviour the time horizon has to tend to infinity ($n\Delta_n \to \infty$) since there are only finitely many jumps of a certain size on every compact interval. Furthermore, we also discuss how to estimate the unknown change point $\gseqi_0$, if the alternative ${\bf H}_1^{\scriptscriptstyle (ab)}$ is true.
A more difficult problem is the detection of gradual (smooth, continuous) changes in a stochastic feature. As a consequence, the setup in most papers on this topic is restricted to non-parametric location or parametric models with independently distributed observations (see e.g. \citealp{bissel1984a}, \citealp{gan1991}, \citealp{siezha1994}, \citealp{huskova1999}, \citealp{husste2002} and \citealp{Mallik2013}). Gradual changes in a time series context are for instance discussed in \cite{aueste2002} and \cite{VogDet15}. In the second part of this paper we contribute to this development by introducing new procedures for gradual changes in the kernel $\nu_s$, where we basically test ${\bf H}_0$ against the general alternative
\begin{align*}
{\bf H}_1^{(gra)}: \nu_s(d\taili) \text{ is not Lebesgue-almost everywhere constant in } s \in [0,n\Delta_n].
\end{align*}
Moreover, we introduce an estimator for the first time point where the jump behaviour deviates from the null hypothesis.
The remaining paper is organized as follows: In Section \ref{sec:Asss} we give the basic assumptions on the characteristics of the underlying process and the observation scheme. Section \ref{sec:Infabchdf} introduces test and estimation procedures for abrupt changes in the jump behaviour in general by using CUSUM processes. In Section \ref{sec:gradchadf} we discuss how to detect and estimate gradual changes in the entire jump behaviour. Section \ref{sec:fisaper} contains an extensive simulation study investigating the finite-sample performance of the new procedures. Finally, all proofs are relegated to Section \ref{sec:weConv} and the technical appendices \ref{appA}, \ref{appB} and \ref{appC}.
\section{The basic assumptions}
\label{sec:Asss}
\def\theequation{2.\arabic{equation}}
\setcounter{equation}{0}
In order to accommodate both abrupt and gradual changes in our approach we follow \cite{HofVetDet17} and assume that there is a driving law behind the evolution of the jump behaviour in time which is common for all $n\in\N$. That is we assume that at step $n\in \N$ we observe an It\=o semimartingale $X^{\scriptscriptstyle (n)}$ with characteristics $(b_s^{\scriptscriptstyle (n)},\sigma_s^{\scriptscriptstyle (n)},\nu_s^{\scriptscriptstyle(n)})$ at the equidistant time points $i\Delta_n$ with $i=0,1,\ldots,n$ which satisfies the following rescaling assumption
\begin{align}
\label{ComResAss}
\nu^{(n)}_{s}(dz) = g\Big(\frac{s}{n\Delta_n}, dz\Big)
\end{align}
for a transition kernel $g(y,d\taili)$ from $([0,1],\Bb([0,1]))$ into $(\R,\Bb)$, where here and below $\Bb(A)$ denotes the trace $\sigma$-algebra on $A\subset\R$ of the Borel $\sigma$-algebra $\Bb$ of $\R$. In order to detect changes in the jump behaviour of the underlying It\=o semimartingale in general, we have to draw inference on the kernel $g(y,B)$ for sets $B \in \Bb$ containing the origin. However, $g$ has locally the properties of a L\'evy measure. Thus, if we deviate from the (simple) case of finite activity jumps, the total mass of $g$ on every neighbourhood of the origin is infinite and we cannot estimate $g(y,\cdot)$ on sets containing $0$ directly. We address this problem by weighting the kernel $g$ according to an auxiliary function; precisely, for change point detection we consider
\begin{equation}
\label{NrhoDef}
N_{\rho}(g;\gseqi,\dfi) := \int \limits_0^\gseqi \int \limits_{-\infty}^\dfi \rho(\taili) g(y,d\taili) dy,
\end{equation}
for $(\gseqi,\dfi) \in \netir$, where $\rho$ is chosen appropriately such that the integral is always defined. Under weak conditions on $\rho$, this so-called L\'evy distribution function $N_\rho$ determines the entire kernel $g$ and therefore the evolution of the jump behaviour in time. The natural approach to draw inference on $N_\rho$ is the following sequential generalization of an estimator in \cite{Nic15}
\[
\tilde N_\rho^{(n)}(\gseqi,\dfi) = \frac{1}{n \Delta_n} \sum \limits_{i = 1}^{\ip{n\gseqi}} \rho(\Deli X^{(n)}) \Indit(\Deli X^{(n)}),
\]
for $(\gseqi,\dfi) \in \netir$, where $\Deli X^{\scriptscriptstyle (n)} = X^{\scriptscriptstyle (n)}_{i\Delta_n} - X^{\scriptscriptstyle (n)}_{(i-1)\Delta_n}$. Using a spectral approach similar to \cite{NicRei12} these authors prove weak convergence of $\sqrt{n \Delta_n} \big(\tilde N_\rho^{\scriptscriptstyle (n)}(1,t) - N_\rho(g;1,t)\big)$ in $\ell^\infty(\R)$ to a tight Gaussian process, but only for L\'evy processes without a diffusion component, i.e. in particular for constant $g(y,\cdot) \equiv \nu(\cdot)$. The main difficulty in generalizing this result is the superposition of small jumps with the roughly fluctuating Brownian component of the process. We solve this problem by using a truncation approach which has originally been used by \cite{mancini} to cut off jumps in order to draw inference on integrated volatility. More precisely, we follow \cite{HofVet15} and identify jumps by inverting the truncation technique of \cite{mancini}, i.e. all test statistics and estimators investigated below are functionals of the sequential truncated empirical L\'evy distribution function
\begin{equation}
\label{NrhonDef}
N_{\rho}^{(n)}(\gseqi,\dfi) = \frac{1}{n \Delta_n} \sum \limits_{i=1}^{\ip{n\gseqi}} \rho(\Deli X^{(n)}) \Indit(\Deli X^{(n)}) \Truniv, \quad (\gseqi,\dfi) \in \netir,
\end{equation}
for some suitable null sequence $v_n \to 0$.
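The truncated empirical L\'evy distribution function above is straightforward to compute from the observed increments; a minimal sketch with hypothetical names, where the argument $k$ plays the role of $\ip{n\gseqi}$:

```python
import numpy as np

def N_rho_n(X, Delta_n, rho, k, t, v_n):
    """Sequential truncated empirical Levy distribution function evaluated
    at (k/n, t): sum of rho over the first k increments that do not exceed
    t and whose absolute value exceeds the truncation level v_n, divided
    by n * Delta_n."""
    n = len(X) - 1
    inc = np.diff(X)[:k]
    keep = (inc <= t) & (np.abs(inc) > v_n)
    return float(rho(inc[keep]).sum() / (n * Delta_n))
```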
As a further improvement to previous studies we analyse the asymptotic behaviour of our tests under local alternatives. That is, in the rescaling assumption \eqref{ComResAss} we let $g = g^{\scriptscriptstyle (n)}$ depend on $n \in \N$, where there exist transition kernels $g_0,g_1,g_2$ satisfying some additional regularity assumptions such that for each $y \in [0,1]$
\begin{equation}
\label{gon012Ass2}
g^{(n)}(y,d\taili) = g_0(y,d\taili) + \frac 1{\sqrt{n\Delta_n}} g_1(y,d\taili) + \Rc_n(y,d\taili)
\end{equation}
and for each $y \in [0,1]$, $B\in\Bb$ and $n\in\N$ the remainder kernel $\Rc_n$ satisfies
\[
\Rc_n(y,B) \leq a_n g_2(y,B)
\]
for a sequence $a_n = o((n\Delta_n)^{-1/2})$ of non-negative real numbers. For constant $g_0(y,\cdot) \equiv \nu_0(\cdot)$, assumption \eqref{gon012Ass2} is exactly the local alternative where the jump behaviour converges to the null hypothesis \eqref{H0} from the direction defined by $g_1$ at rate $(n\Delta_n)^{-1/2}$. In this sense, Theorem \ref{ConvThm}, in which we prove weak convergence of the stochastic process
\[
G_{\rho}^{(n)}(\gseqi,\dfi) = \sqrt{n \Delta_n}\big(N_{\rho}^{(n)}(\gseqi,\dfi) - N_{\rho}(g^{(n)};\gseqi,\dfi)\big), \quad (\gseqi,\dfi) \in \netir
\]
to a tight Gaussian process in $\linner$, is a generalization of the results in \cite{HofVet15} to sequential processes for time dependent variable jump behaviour as in \eqref{gon012Ass2}.
Critical values for the test procedures introduced below and the optimal choice of a regularization parameter of the new estimator for gradual change points are obtained by a multiplier bootstrap approach. Precisely, Theorem \ref{CondConvThm}, in which we prove conditional weak convergence in a suitable sense of the bootstrapped version
\begin{align*}
\hat G_\rho^{(n)}(\gseqi,\dfi) = \frac 1{\sqrt{n\Delta_n}} \sum\limits_{i=1}^{\ip{n\gseqi}} \xi_i \rho\big(\Deli X^{(n)}\big) \ind_{(-\infty,\dfi]}\big(\Deli X^{(n)}\big) \ind_{\{|\Deli X^{(n)} | > v_n\}}, \quad (\gseqi, \dfi) \in \netir
\end{align*}
of $G_\rho^{\scriptscriptstyle (n)}$ to a Gaussian process, where $(\xi_i)_{i\in\N}$ is a sequence of i.i.d.\ multipliers with mean $0$ and variance $1$, complements the paper \cite{HofVet15}.
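One bootstrap replicate of the process above can be sketched as follows (names are ours; the mean-zero, variance-one multipliers are passed in explicitly, e.g. Rademacher or standard normal draws):

```python
import numpy as np

def bootstrap_G(increments, Delta_n, rho, t, v_n, xi):
    """One multiplier-bootstrap path over k = 1,...,n: each truncated
    summand rho(inc) * 1{inc <= t} * 1{|inc| > v_n} is weighted by a
    multiplier xi_i with mean 0 and variance 1."""
    n = len(increments)
    summand = (xi * rho(increments)
               * (increments <= t) * (np.abs(increments) > v_n))
    return np.cumsum(summand) / np.sqrt(n * Delta_n)
```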
For the rescaling assumptions \eqref{ComResAss} and \eqref{gon012Ass2} we consider transition kernels $g_i(y,d\taili)$ of the set $\Gc(\beta,p)$ depending on parameters $\beta \in (0,2), p>0$. In order to define this set we denote by $\lambda$ the one-dimensional Lebesgue measure defined on the Lebesgue $\sigma$-algebra $\Lc_1$ of $\R$ and we denote by $\lambda_1$ the restriction of $\lambda$ to the trace $\sigma$-algebra $[0,1] \cap \Lc_1$.
\begin{definition}
\label{rhoandgass2}
For $\beta \in (0,2)$ and $p > 0$ the set $\Gc(\beta,p)$ consists of all transition kernels $g(y,d\taili)$ from $([0,1],\Bb([0,1]))$ into $(\R,\Bb)$, such that for each $y \in [0,1]$ the measure $g(y,d\taili)$ has a Lebesgue density $h_y(\taili)$ and there exist $\eta,M >0$ as well as a Lebesgue null set $L \in [0,1] \cap \Lc_1$ such that the following items are satisfied:
\begin{enumerate}[(1)]
\item $h_y(\taili) \leq K|\taili|^{-(1+ \beta)}$ holds for all $\taili \in (-\eta,\eta)$, $y \in [0,1] \setminus L$ and for some $K >0$. \label{BGn0Ass}
\item For $n \in \N$ let $C_n \defeq \lbrace \taili \in \R \mid \frac{1}{n} \leq |\taili| \leq n \rbrace$. Then for each $n \in \N$ there exists a $K_n >0$ with $h_y(\taili) \leq K_n$ for each $\taili \in C_n$ and all $y \in [0,1] \setminus L$.
\label{DiCondmiddle}
\item $h_y(\taili) \leq K |\taili|^{-(2p\vee 2) - \epsilon}$ whenever $|\taili| \geq M$ and $y \in [0,1] \setminus L$, for some $K > 0$ and some $\epsilon >0$.
\label{DiCondinfty}
\end{enumerate}
\end{definition}
The items above basically say that the densities $h_y$ are bounded by a continuous L\'evy density of a L\'evy measure which behaves near zero like the one of a $\beta$-stable process, whereas this density has to decay sufficiently fast at infinity. Such conditions are well-known in the literature and often used in similar works on high-frequency statistics; see e.g.\ \cite{AitJac09b} or \cite{AitJac10}. From Assumption \ref{Cond1} and Proposition \ref{easier} in Section \ref{sec:weConv} it can be seen that it is even possible to work with a wider class of transition kernels $g(y,d\taili)$ which does not require Lebesgue densities. Nevertheless, we stick to the set $\Gc(\beta,p)$ defined above which is much simpler to interpret. The following example shows that alternatives of abrupt changes in the jump behaviour can be described by transition kernels in the set $\Gc(\beta,p)$.
\begin{example} ({\it abrupt changes})
\label{Ex:Sitabcha}
In Section \ref{sec:Infabchdf} we introduce statistical procedures for inference of abrupt changes in the jump behaviour. In this case the kernel $g_0$ is typically of the form discussed below. For $\beta \in (0,2)$ and $p > 0$ let $\Mc(\beta,p)$ be the set of all L\'evy measures $\nu$ such that the constant transition kernel $g(y,d\taili) = \nu(d\taili)$ belongs to $\Gc(\beta,p)$.
Let $\gseqi_0 \in (0,1]$ and let $\nu_1, \nu_2 \in \Mc(\beta,p)$ be two L\'evy measures. Then the transition kernel $g_0$ given by
\begin{align}
\label{gabDef}
g_0(y,d\taili) = \begin{cases}
\nu_1(d \taili), \quad &\text{ for } y \in [0,\gseqi_0] \\
\nu_2(d \taili), \quad &\text{ for } y \in (\gseqi_0,1]
\end{cases}
\end{align}
is an element of $\Gc(\beta,p)$. In the context of change-point tests $\gseqi_0 = 1$ corresponds to the null hypothesis of no change in the jump behaviour, whereas \eqref{gabDef} describes an abrupt change for $\gseqi_0 \in(0,1)$ and $\nu_1\neq\nu_2$.
\end{example}
The variance gamma process is a common model for the log stock price in finance (see for instance \cite{Mad98}). Its L\'evy measure has the form $\nu(d\taili) = \big(a_1 \taili^{-1} e^{-b_1\taili}\ind_{\{\taili>0\}} + a_2 |\taili|^{-1} e^{-b_2|\taili|}\ind_{\{\taili<0\}}\big)~d\taili$ for $a_1,a_2,b_1,b_2 >0$. Thus, the transition kernel $g_0(y,d\taili)$ belongs to $\Gc(\beta,p)$ for all $\beta\in(0,2)$ and $p>0$, if, similarly to \eqref{gabDef}, $g_0$ is piecewise constant in $y\in[0,1]$ and on each interval of constancy equals the L\'evy measure of a variance gamma process. Below we give the main assumptions which are sufficient for the convergence results in this paper.
\begin{assumption}
\label{EasierCond}
Let $0< \beta < 2$ and $0< \tau < (1/5 \wedge \frac{2-\beta}{2+5\beta})$. Furthermore, let $p > \beta+((\frac 12 + \frac 32 \beta) \vee \frac{2}{1+5\tau})$. At step $n\in\N$ we observe an It\=o semimartingale $X^{\scriptscriptstyle (n)}$ adapted to the filtration of some filtered probability space $(\Omega,\Fc,(\Fc_t)_{t\in\R_+},\Prob)$ with characteristics $(b_s^{\scriptscriptstyle (n)},\sigma_s^{\scriptscriptstyle (n)},\nu_s^{\scriptscriptstyle(n)})$ at the equidistant time points $\{i\Delta_n \mid i=0,1,\ldots,n\}$ such that the following items are satisfied:
\begin{compactenum}[(a)]
\item \textit{Assumptions on the jump characteristic and the function $\rho$:}
\label{rhoandgass}
\begin{enumerate}[(1)]
\item For each $n\in\N$ and $s \in [0,n\Delta_n]$ we have
\begin{align}
\label{RescAss2}
\nu^{(n)}_{s}(dz) = g^{(n)}\Big(\frac{s}{n\Delta_n}, dz\Big),
\end{align}
where there exist transition kernels $g_0,g_1,g_2 \in \Gc(\beta,p)$ such that for each $y \in [0,1]$
\begin{equation}
\label{gon012Ass}
g^{(n)}(y,d\taili) = g_0(y,d\taili) + \frac 1{\sqrt{n\Delta_n}} g_1(y,d\taili) + \Rc_n(y,d\taili)
\end{equation}
and for each $y \in [0,1]$, $B\in\Bb$ and $n\in\N$ the kernel $\Rc_n$ satisfies
\[
\Rc_n(y,B) \leq a_n g_2(y,B)
\]
for a sequence $a_n = o((n\Delta_n)^{-1/2})$ of non-negative real numbers.
\item \label{EasrhoCond} $\rho \colon \R \rightarrow \R$ is a bounded $\mathcal C^1$-function with $\rho(0)=0$ and its derivative satisfies $|\rho^{\prime}(\taili)| \leq K |\taili|^{p-1}$ for all $\taili \in \R$ and some constant $K>0$.
\item \label{rhoneq0} $\rho(\taili) \neq 0$ for each $\taili \neq 0$.
\item \label{jbidenass} For every $\dfi \in \R$ there exists a finite set $M_{(\dfi)} \subset [0,1]$, such that the function
\[ y \mapsto \int_{-\infty}^\dfi \rho(\taili) g_0(y,d\taili) \] is continuous on $[0,1] \setminus M_{(\dfi)}$.
\end{enumerate}
\item \label{speed}\textit{Assumptions on the truncation sequence $v_n$ and the observation scheme:} \\
The truncation sequence $v_n$ satisfies
\[
v_n \defeq \gamma \Delta_n^{\ovw},
\]
with $\ovw = (1+5\tau)/4$ and some $\gamma >0$. Define further:
\begin{align*}
t_1 \defeq ( 1+ \tau )^{-1} \quad \text{and } \quad
t_2 \defeq ( (7\tau +1)/2 )^{-1} \wedge 1.
\end{align*}
Then we have $0 < t_1 < t_2 \leq 1$ and we suppose that the observation scheme satisfies for some $\delta >0$
\[
\Delta_n = o(n^{-t_1}) \quad \text{ and } \quad n^{-t_2+\delta} = o(\Delta_n).
\]
\item \label{DriDiffMomCond} \textit{Assumptions on the drift and the diffusion coefficient:} \\
For
\[
m_b = \frac{6+10\tau}{3-5\tau} \leq 4 \quad \text{ and } \quad m_\sigma = \frac{6+10 \tau}{1-5\tau}
\]
we have
\[
\sup\limits_{n\in\N}\sup \limits_{s \in \R_+} \Big\{ \Eb \big|b^{(n)}_s\big|^{m_b} \vee \Eb \big|\sigma^{(n)}_s\big|^{m_\sigma} \Big\} < \infty.
\]
\end{compactenum}
\end{assumption}
\begin{remark} Suppose we have complete knowledge of the distribution function $N_{\rho}(g_0;\gseqi,\dfi)$. Obviously, the measure $M(dy,d\taili) \defeq \rho(\taili) g_0(y,d\taili)dy$ is completely determined by the entire function $N_{\rho}(g_0;\cdot,\cdot)$ and does not charge $[0,1]\times \{0\}$. Therefore, due to Assumption \ref{EasierCond}\eqref{rhoneq0} we have $\rho(\taili)^{-1} M(dy,d\taili) = g_0(y,d\taili)dy$, and consequently the jump behaviour corresponding to $g_0$ is known as well. Furthermore, Assumption \ref{EasierCond}\eqref{jbidenass} ensures that a characteristic quantity for a gradual change, which we introduce in Section \ref{sec:gradchadf}, is zero if and only if the jump behaviour corresponding to $g_0$ is constant in time. All convergence results in this paper also hold without Assumption \ref{EasierCond}\eqref{rhoneq0} and \eqref{jbidenass}.
Moreover, the function
\[
\tilde \rho(x) = \begin{cases}
0, \quad &\text{if } x=0, \\
e^{-1/|x|}, \quad &\text{if } |x| >0, \\
\end{cases}
\]
is suitable for any choice of the constants $\beta$ and $\tau$. In practice, however, one would like to work with a polynomial decay at zero, in which case the condition on $p$ comes into play. Here, the smaller the parameter $\beta$, the smaller $p$ can be chosen. For example, for $\beta < 3/5$ and $\tau > 3/35$ even a choice $p<2$ is possible.
Furthermore, it is also important to choose the observation scheme suitably. Obviously, we have $\Delta_n \rightarrow 0$ and $n \Delta_n \rightarrow \infty$ because of $0< t_1 < t_2 \leq 1$, and a typical choice is $\Delta_n = O(n^{-y})$ and $n^{-y} = O(\Delta_n)$ for some $0< t_1 < y < t_2 \leq 1$.
Finally, Assumption \ref{EasierCond}\eqref{DriDiffMomCond} requires only a bound on the moments of the remaining characteristics and is therefore extremely mild.
\end{remark}
In the remaining part of this section we give an example of a kernel $g_0 \in \Gc(\beta,p)$ for suitable $\beta,p$ and of a function $\rho$ satisfying Assumption \ref{EasierCond}\eqref{EasrhoCond} and \eqref{rhoneq0}.
\begin{example}({\it gradual changes}) \label{Ex:SitgraCh} In Section \ref{sec:gradchadf}, which is dedicated to inference of gradual changes, we basically test against the general alternative that the jump behaviour is non-constant. In the following we introduce an example of a kernel $g_0$ which can be used to describe a gradual change in the jump behaviour and a corresponding function $\rho$ satisfying Assumption \ref{EasierCond}\eqref{EasrhoCond} and \eqref{rhoneq0}.
To this end, for $L >0$, $p >1$ let
\begin{equation}
\label{Eq:rhoLpdef}
\rho_{L,p}(\taili) := L \times \begin{cases}
2|\taili|^p, \quad &\text{ for } |\taili| \leq 1 \\
4p|\taili| - p \taili^2 + 2 - 3p, \quad &\text{ for } 1 \leq |\taili| \leq 2 \\
2+p, \quad &\text{ for } |\taili| \geq 2
\end{cases}
\end{equation}
and for $0 < \beta < 2$, $p> 1$ consider the L\'evy density
\[
h_{\beta,p}(\taili) := |\taili|^{-(1+\beta)} \ind_{\{ 0 < |\taili| <1 \}} + \ind_{\{1 \leq |\taili| \leq 2\}} + |\taili|^{-p} \ind_{\{|\taili| > 2\}}.
\]
Furthermore, for $0< \hat \beta < 2$ and $\hat p > 1 \vee \hat \beta$ let $A: [0,1] \to (0, \infty)$, $\beta: [0,1] \to (0,\hat \beta]$ and $p: [0,1] \to [ 2 \hat p + \eps,\infty)$ for some $\eps > 0$ be Borel measurable functions such that $A$ is bounded. Then, the kernel
\begin{equation}
\label{gformgrach}
g_0(y,d\taili) = A(y) h_{\beta(y),p(y)}(\taili) d\taili, \quad y \in [0,1]
\end{equation}
belongs to $\Gc(\hat \beta,\hat p)$ and for arbitrary $L>0$ the function $\rho_{L,\hat p}$ satisfies Assumption \ref{EasierCond}\eqref{EasrhoCond} and \eqref{rhoneq0}.
\end{example}
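The three pieces in \eqref{Eq:rhoLpdef} are glued together so that $\rho_{L,p}$ is indeed bounded and $\mathcal C^1$: values and one-sided derivatives agree at the breakpoints $|\taili|=1$ and $|\taili|=2$. The following sketch (a numerical illustration only; the function names are ours) checks this:

```python
def rho(x, L=1.0, p=1.5):
    """Piecewise C^1 function rho_{L,p} from the example."""
    a = abs(x)
    if a <= 1:
        return L * 2 * a**p
    if a <= 2:
        return L * (4*p*a - p*a**2 + 2 - 3*p)
    return L * (2 + p)

def rho_prime(x, L=1.0, p=1.5):
    """Derivative of rho_{L,p} with respect to |x| (rho is even)."""
    a = abs(x)
    if a <= 1:
        return L * 2 * p * a**(p - 1)
    if a <= 2:
        return L * (4*p - 2*p*a)
    return 0.0

L_, p_ = 1.0, 1.5
# values and derivatives of adjacent pieces agree at |x| = 1 and |x| = 2
assert abs(rho(1.0, L_, p_) - 2*L_) < 1e-12           # both pieces give 2L
assert abs(rho(2.0, L_, p_) - L_*(2 + p_)) < 1e-12    # both pieces give L(2+p)
assert abs(rho_prime(1.0, L_, p_) - 2*L_*p_) < 1e-12  # slope 2Lp from both sides
assert abs(rho_prime(2.0, L_, p_)) < 1e-12            # slope 0 from both sides
```

Since $|\rho_{L,p}'(\taili)| \le 2Lp|\taili|^{p-1}$ for all $\taili$, condition \eqref{EasrhoCond} holds with $K = 2Lp$.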
\section{Statistical inference for abrupt changes}
\label{sec:Infabchdf}
\def\theequation{3.\arabic{equation}}
\setcounter{equation}{0}
In this section we deduce test and estimation procedures for abrupt changes in the jump behaviour of the underlying process, that is, we investigate the situation of Example \ref{Ex:Sitabcha}. To this end, we test the null hypothesis of no change in the jump behaviour
\begin{enumerate}
\item[${\bf H}_0$:] Assumption~\ref{EasierCond} is satisfied for $g_1=g_2=0$ and there exists a L\'evy measure $\nu_0$ such that $g_0(y,d\taili) = \nu_0(d\taili)$ for Lebesgue almost every $y \in [0,1]$.
\end{enumerate}
against the alternative that the jump behaviour is constant on two intervals
\begin{enumerate}
\item[${\bf H}_1$:] Assumption~\ref{EasierCond} is satisfied for $g_1=g_2=0$ and there exist some $\gseqi_0 \in (0,1)$ and two L\'evy measures $\nu_1 \neq \nu_2$ such that $g_0$ has the form \eqref{gabDef}.
\end{enumerate}
The corresponding alternative for fixed $\dfi_0 \in \R$ is given by:
\begin{enumerate}
\item[${\bf H}_1^{(\rho,\dfi_0)}$:] We have the situation from ${\bf H}_1$, but with $N_\rho(\nu_1;\dfi_0) \neq N_\rho(\nu_2;\dfi_0)$, where
\begin{align}
\label{NrhonuDef}
N_\rho(\nu;\dfi) = \int_{-\infty}^{\dfi} \rho(\taili) \nu(d\taili)
\end{align}
for a L\'evy measure $\nu$.
\end{enumerate}
Moreover, we investigate the behaviour of the tests introduced in this section under local alternatives which tend to the null hypothesis as $n\to\infty$:
\begin{enumerate}
\item[${\bf H}^{(loc)}_1$:] Assumption~\ref{EasierCond} is satisfied with $g_0(y,d\taili) = \nu_0(d\taili)$ for Lebesgue-a.e. $y \in [0,1]$ for some L\'evy measure $\nu_0$ and with some transition kernels $g_1,g_2 \in \Gc(\beta,p)$.
\end{enumerate}
\subsection{Weak convergence of test statistics}
Following \cite{Ino01}, a suitable approach to constructing tests for the hypotheses above is to investigate the convergence behaviour of the CUSUM process
\begin{align}
\label{Tbrhondef}
\Tb_\rho^{(n)} (\gseqi,\dfi) = \sqrt{n \Delta_n} \Big( N_\rho^{(n)}(\gseqi,\dfi) - \frac{\ip{n\gseqi}}n N_\rho^{(n)}(1,\dfi) \Big),
\end{align}
with $N_\rho^{(n)}(\gseqi,\dfi)$ defined in \eqref{NrhonDef}. The corresponding test rejects the null hypothesis ${\bf H}_0$ for large values of the Kolmogorov-Smirnov-type statistic
\begin{align*}
T_\rho^{(n)} = \sup\limits_{(\gseqi,\dfi) \in \netir} \big|\Tb_\rho^{(n)} (\gseqi,\dfi) \big|.
\end{align*}
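For illustration, $T_\rho^{(n)}$ can be evaluated directly from the observed increments on finite grids in $(\gseqi,\dfi)$. The following sketch is ours and, since \eqref{NrhonDef} is not restated here, assumes the truncated empirical form $N_\rho^{(n)}(\gseqi,\dfi) = (n\Delta_n)^{-1} \sum_{j=1}^{\ip{n\gseqi}} \rho(\Delta_j X^{(n)}) \ind_{\{\Delta_j X^{(n)} \le \dfi\}} \ind_{\{|\Delta_j X^{(n)}| > v_n\}}$, which is in line with the bootstrap analogue given later in this section.

```python
import numpy as np

def cusum_statistic(increments, delta_n, v_n, rho, t_grid):
    """Kolmogorov-Smirnov-type CUSUM statistic T_rho^(n), computed on the
    grids theta = k/n (k = 1,...,n) and t in t_grid."""
    dX = np.asarray(increments, dtype=float)
    n = len(dX)
    contrib = rho(dX) * (np.abs(dX) > v_n)           # truncated summands
    # indicator matrix: rows = thresholds t, columns = increments
    below = dX[None, :] <= np.asarray(t_grid, dtype=float)[:, None]
    # partial[i, k-1] = N_rho^(n)(k/n, t_i)
    partial = np.cumsum(contrib[None, :] * below, axis=1) / (n * delta_n)
    total = partial[:, -1]                           # N_rho^(n)(1, t)
    k = np.arange(1, n + 1)
    T = np.sqrt(n * delta_n) * (partial - (k[None, :] / n) * total[:, None])
    return np.max(np.abs(T))                         # sup over the (theta, t) grid

# toy check: increments with no change in the jump sizes
rng = np.random.default_rng(0)
dX = rng.choice([0.0, 1.0], size=400, p=[0.9, 0.1])
stat = cusum_statistic(dX, delta_n=0.01, v_n=0.5, rho=lambda x: x**2,
                       t_grid=np.linspace(-2, 2, 41))
```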
The theorem below establishes functional weak convergence of $\Tb_\rho^{\scriptscriptstyle (n)}$ in the general case of local alternatives.
\begin{theorem}
\label{CUSUMTrhowc}
Under ${\bf H}_1^{(loc)}$ the process $\Tb_\rho^{(n)}$ converges weakly in $\linner$ to the process $\Tb_\rho + \Tb_{\rho,g_1}$, where the tight mean zero Gaussian process $\Tb_\rho$ has the covariance structure
\begin{align}
\label{TrhoCovFkt}
\Eb\{\Tb_\rho(\gseqi_1,\dfi_1)\Tb_\rho(\gseqi_2,\dfi_2)\} = \{ (\gseqi_1 \wedge \gseqi_2) - \gseqi_1 \gseqi_2 \} \int \limits_{-\infty}^{\dfi_1 \wedge \dfi_2} \rho^2(\taili) \nu_0(d\taili)
\end{align}
and the deterministic function $\Tb_{\rho,g_1} \in \linner$ is given by
\begin{equation}
\label{shiftdef}
\Tb_{\rho,g_1} (\gseqi,\dfi) = N_\rho(g_1;\gseqi,\dfi) - \gseqi N_\rho(g_1;1,\dfi),
\end{equation}
where $N_\rho(g_1;\cdot,\cdot)$ is defined in \eqref{NrhoDef}.
\end{theorem}
As an immediate consequence of the previous result and the continuous mapping theorem we obtain weak convergence of the statistic $T_\rho^{\scriptscriptstyle (n)}$.
\begin{corollary}
\label{KolSmiConvRes}
Suppose ${\bf H}_1^{(loc)}$ is true. Then we have
\begin{align}
\label{Trhsugpdef}
T_\rho^{(n)} \weak T_{\rho,g_1} := \sup \limits_{(\gseqi,\dfi) \in \netir} \big|\Tb_\rho(\gseqi,\dfi) + \Tb_{\rho,g_1} (\gseqi,\dfi) \big|,
\end{align}
in $(\R,\Bb)$ with $\Tb_\rho + \Tb_{\rho,g_1}$ the limit process in Theorem \ref{CUSUMTrhowc}.
\end{corollary}
In applications the L\'evy measure $\nu_0$ which describes the limiting jump behaviour of the underlying process is usually unknown. If one is only interested in detecting changes in the value $N_\rho(\nu_0;\dfi_0)$ of the L\'evy distribution function at a fixed $\dfi_0 \in \R$, the processes
\begin{align*}
\Vb_{\rho,\dfi_0}^{(n)}(\gseqi) := \frac{\Tb_\rho^{(n)}(\gseqi,\dfi_0)}{\sqrt{N^{\scriptscriptstyle (n)}_{\rho^2}(1,\dfi_0)}} \ind_{\{N^{\scriptscriptstyle (n)}_{\rho^2}(1,\dfi_0) >0\}}, \quad \gseqi \in [0,1]
\end{align*}
converge weakly to a shifted version of a pivotal limit process.
\begin{proposition}
\label{poiwconKSdi}
Under ${\bf H}_1^{(loc)}$, for each fixed $\dfi_0 \in \R$ with $N_{\rho^2}(\nu_0;\dfi_0) >0$, we have $\Vb_{\rho,\dfi_0}^{\scriptscriptstyle (n)} \weak \Kb + \bar \Vb_{\rho,\dfi_0}^{\scriptscriptstyle (g_1)}$ in $\linne$, where $\Kb$ denotes a standard Brownian bridge and the deterministic function $\bar \Vb_{\rho,\dfi_0}^{(g_1)} \in \linne$ is given by
\[
\bar \Vb_{\rho,\dfi_0}^{(g_1)}(\gseqi) := \frac{\Tb_{\rho,g_1}(\gseqi,\dfi_0)}{\sqrt{N_{\rho^2}(\nu_0;\dfi_0)}},
\]
with $N_{\rho^2}(\nu_0;\cdot)$ defined in \eqref{NrhonuDef}. In particular,
\begin{align}
\label{Vrhosupc}
V_{\rho,\dfi_0}^{(n)} := \sup\limits_{\gseqi \in [0,1]} \big|\Vb_{\rho,\dfi_0}^{(n)}(\gseqi) \big| \weak \bar V_{\rho,\dfi_0}^{(g_1)} := \sup\limits_{\gseqi \in [0,1]} \big| \Kb(\gseqi) + \bar \Vb_{\rho,\dfi_0}^{(g_1)}(\gseqi) \big|.
\end{align}
\end{proposition}
Quantiles of functionals of the limit process $\Tb_\rho + \Tb_{\rho,g_1}$ in Theorem \ref{CUSUMTrhowc} are not easily accessible since the distribution of such functionals usually depends in a complicated way on the unknown quantities $\nu_0$ and $g_1$ in the jump characteristic of the underlying process. In order to obtain reasonable approximations for these quantiles we use a multiplier bootstrap approach. That is, in the following we consider bootstrapped processes, $\hat Y_n = \hat Y_n(X_1, \ldots, X_n, \xi_1, \ldots, \xi_n)$, which depend on random variables $X_1, \ldots, X_n$ defined on a probability space $(\Omega_X, \mathcal A_X, \mathbb P_X)$ and on random weights $\xi_1, \ldots, \xi_n$ which are defined on a distinct probability space $(\Omega_{\xi}, \mathcal A_{\xi}, \mathbb P_{\xi})$. Thus, the processes $\hat Y_n$ live on the product space
$(\Omega, \mathcal A,\mathbb P) \defeq (\Omega_X, \mathcal A_X, \mathbb P_X) \otimes (\Omega_{\xi}, \mathcal A_{\xi}, \mathbb P_{\xi})$.
Below we use the notion of weak convergence conditional on the sequence $(X_i)_{i\in\N}$ in probability. It can be found in \cite{Kos08} on pp.\ 19--20.
\begin{definition}
\label{ConvcondDataDef}
Let $\hat Y_n = \hat Y_n (X_1, \ldots ,X_n; \xi_1, \ldots , \xi_n) \colon (\Omega, \mathcal A,\mathbb P) \rightarrow \mathbb D$ be a random element taking values in some metric space $\mathbb D$ depending on some random variables $X_1, \ldots, X_n$ and some random weights $\xi_1, \ldots, \xi_n$. Moreover, let $Y$ be a tight, Borel measurable random element with values in $\mathbb D$. Then $\hat Y_n$ converges weakly to $Y$ conditional on the data $X_1, X_2, \ldots$ in probability, if and only if
\begin{enumerate}[(a)]
\item \label{ConConvDia} $\sup \limits_{f \in \text{BL}_1(\mathbb D)} |\mathbb E_{\xi} f(\hat Y_n) - \mathbb E f(Y) | \probto 0,$
\item \label{ConConvDib} $\mathbb E_{\xi} f( \hat Y_n)^{\ast} - \mathbb E_{\xi} f( \hat Y_n)_{\ast} \probto 0$ for all
$f \in \text{BL}_1(\mathbb D).$
\end{enumerate}
Here, $\mathbb E_{\xi}$ denotes the conditional expectation over the weights $\xi$ given the data $X_1, \ldots, X_n$, whereas $\text{BL}_1(\mathbb D)$ is the space of all real-valued Lipschitz continuous functions $f$ on $\mathbb D$ with sup-norm $\| f \|_{\Db} \leq 1$ and Lipschitz constant $1$. Here and below we denote the sup-norm of a real valued function $f$ on a set $M$ by $\|f\|_M$. Furthermore, in item \eqref{ConConvDib} $f( \hat Y_n)^{\ast}$ and $f( \hat Y_n)_{\ast}$ denote a minimal measurable majorant and a maximal measurable minorant with respect to the joint probability space $(\Omega, \mathcal A,\mathbb P)$. The type of convergence defined above is denoted by $\hat Y_n \weakP Y$.
\end{definition}
\begin{remark}
\label{rem:condweak}
$~~$
\begin{compactenum}[(i)]
\item Throughout this work all expressions $f(\hat Y_n)$, with a bootstrapped statistic $\hat Y_n$ and a Lip\-schitz continuous function $f$, are measurable functions of the random weights. For this reason we do not use a measurable majorant or minorant in item \eqref{ConConvDia} of the definition above.
\item The implication ``(ii) $\Rightarrow$ (i)'' in the proof of Theorem~2.9.6 in \cite{VanWel96} shows that conditional weak convergence $\weakP$ implies unconditional weak convergence $\weak$ with respect to the product measure $\mathbb P$.
\end{compactenum}
\end{remark}
For the results on conditional weak convergence of the bootstrapped processes below we require a rather mild additional assumption on the sequence of multipliers, which is satisfied for many common distributions such as the Gaussian, the Poisson or the binomial distribution.
\begin{assumption}
\label{MultiplAss}
The sequence $(\xi_i)_{i \in \N}$ is defined on a probability space distinct from the one generating the data $\{X_{i\Delta_n}^{\scriptscriptstyle (n)} \mid i=0,1,\ldots,n\}$ as described above, is i.i.d.\ with mean zero and variance one, and there exists an $M >0$ such that for each integer $m \geq 2$ we have
\begin{align*}
\Eb |\xi_1|^m \leq m! M^m.
\end{align*}
\end{assumption}
Reasonable bootstrap counterparts $\hat \Tb_\rho^{\scriptscriptstyle (n)}$ of the processes $\Tb_\rho^{\scriptscriptstyle (n)}$ are given by
\begin{align*}
&\hat \Tb_\rho^{(n)} (\gseqi,\dfi) := \hat \Tb_\rho^{(n)} (X^{(n)}_{\Delta_n}, \ldots, X^{(n)}_{n\Delta_n}; \xi_1,\ldots,\xi_n;\gseqi,\dfi) \\
&\hspace{6mm}:= \sqrt{n\Delta_n} \frac{\ip{n\gseqi}}n \frac{n-\ip{n\gseqi}}n \Big[ \frac 1{\ip{n\gseqi} \Delta_n} \sum\limits_{j=1}^{\ip{n\gseqi}} \xi_j \rho(\Delj X^{(n)}) \Indit(\Delj X^{(n)}) \ind_{\{|\Delj X^{(n)}| > v_n\}} \\
&\hspace{31mm}- \frac 1{(n-\ip{n\gseqi})\Delta_n} \sum\limits_{j=\ip{n \gseqi}+1}^n \xi_j \rho(\Delj X^{(n)}) \Indit(\Delj X^{(n)}) \ind_{\{|\Delj X^{(n)}| > v_n\}} \Big].
\end{align*}
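A minimal sketch of the multiplier construction above (ours; the indicator $\Indit(\cdot)$, whose definition is not restated here, is replaced by a simple threshold indicator $\ind_{\{\cdot \le \dfi_0\}}$) for a single threshold and one weight vector:

```python
import numpy as np

def bootstrap_cusum(increments, delta_n, v_n, rho, t0, xi):
    """Multiplier-bootstrap process hat T_rho^(n)(theta, t0) on the grid
    theta = k/n for 0 < theta < 1, with one fixed threshold t0."""
    dX = np.asarray(increments, dtype=float)
    n = len(dX)
    # weighted, truncated summands xi_j * rho(dX_j) * 1{dX_j <= t0, |dX_j| > v_n}
    w = xi * rho(dX) * (dX <= t0) * (np.abs(dX) > v_n)
    csum, total = np.cumsum(w), np.sum(w)
    k = np.arange(1, n)                                  # k = floor(n * theta)
    left = csum[:-1] / (k * delta_n)                     # block mean up to k
    right = (total - csum[:-1]) / ((n - k) * delta_n)    # block mean after k
    frac = (k / n) * ((n - k) / n)
    return np.sqrt(n * delta_n) * frac * (left - right)

rng = np.random.default_rng(1)
dX = rng.choice([0.0, 1.0], size=300, p=[0.9, 0.1])
xi = rng.standard_normal(300)                            # mean 0, variance 1 weights
path = bootstrap_cusum(dX, delta_n=0.01, v_n=0.5,
                       rho=lambda x: x**2, t0=1.5, xi=xi)
```

Repeating this with $B$ independent weight vectors yields the bootstrap sample used for the quantiles in the next subsection.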
In the following theorem we establish conditional weak convergence of $\hat \Tb_\rho^{\scriptscriptstyle (n)}$ under the general assumptions of Section \ref{sec:Asss}.
\begin{theorem}
\label{BootTrhoThm}
Let Assumption \ref{EasierCond} be valid and let the multipliers $(\xi_j)_{j\in\N}$ satisfy Assumption \ref{MultiplAss}. Then we have
\begin{align*}
\hat \Tb_\rho^{(n)} \weakP \Tb_\rho
\end{align*}
in $\linner$, where $\Tb_\rho$ is a tight mean zero Gaussian process in $\linner$ with covariance function
\begin{align}
\label{Tbrbcov}
\Eb\{\Tb_\rho(\gseqi_1,\dfi_1) \Tb_\rho(\gseqi_2,\dfi_2)\} &= \int\limits_0^{\gseqi_1\wedge \gseqi_2} \int\limits_{-\infty}^{\dfi_1 \wedge \dfi_2} \rho^2(\taili) g_0(y,d\taili) dy - \gseqi_1 \int \limits_0^{\gseqi_2} \int\limits_{-\infty}^{\dfi_1 \wedge \dfi_2} \rho^2(\taili) g_0(y,d\taili) dy \nonumber \\ &\hspace{15mm}- \gseqi_2 \int \limits_0^{\gseqi_1} \int\limits_{-\infty}^{\dfi_1 \wedge \dfi_2} \rho^2(\taili) g_0(y,d\taili) dy + \gseqi_1 \gseqi_2 \int \limits_0^1 \int\limits_{-\infty}^{\dfi_1 \wedge \dfi_2} \rho^2(\taili) g_0(y,d\taili) dy.
\end{align}
\end{theorem}
\begin{remark} The aim of our bootstrap procedure is to mimic the convergence behaviour of $\Tb_\rho^{\scriptscriptstyle (n)}$. The covariance function of the limiting process in Theorem \ref{BootTrhoThm} differs from \eqref{TrhoCovFkt}, because Theorem \ref{BootTrhoThm} holds under the general conditions introduced in Assumption \ref{EasierCond}, i.e.\ for an arbitrary kernel $g_0 \in \Gc(\beta,p)$. Under the null hypothesis ${\bf H}_0$, where we have $g_0(\cdot,dz) = \nu_0(dz)$, the covariance function \eqref{Tbrbcov} coincides with \eqref{TrhoCovFkt}.
\end{remark}
The limit distribution of the Kolmogorov-Smirnov-type test statistic $T_\rho^{\scriptscriptstyle (n)}$ in Corollary \ref{KolSmiConvRes} can be approximated under $\textbf{H}_0$ by the bootstrap statistics in the following corollary, which is an immediate consequence of Proposition 10.7 in \cite{Kos08}.
\begin{corollary}
\label{haTroCons}
If Assumption \ref{EasierCond} and Assumption \ref{MultiplAss} are satisfied, we have
\begin{align*}
\hat T_\rho^{(n)} := \sup \limits_{(\gseqi,\dfi) \in \netir} | \hat \Tb_\rho^{(n)}(\gseqi,\dfi)| \weakP T_\rho := \sup \limits_{(\gseqi,\dfi) \in \netir} \big|\Tb_\rho(\gseqi,\dfi) \big|,
\end{align*}
with $\Tb_\rho$ the limit process in Theorem \ref{BootTrhoThm}.
\end{corollary}
\subsection{Test procedures for abrupt changes}
\label{sec:tfacha}
The weak convergence results of the previous section make it possible to define test procedures for abrupt changes in the jump behaviour of the underlying process based on L\'evy distribution functions of type \eqref{NrhoDef}. In the following let $B \in \N$ be some large number and let $(\xi^{\scriptscriptstyle (b)})_{b = 1, \ldots,B}$ be independent vectors of i.i.d.\ random variables $\xi^{\scriptscriptstyle (b)} = (\xi_j^{\scriptscriptstyle (b)})_{j=1,\ldots,n}$ with mean zero and variance one, which satisfy Assumption \ref{MultiplAss}. By $\hat \Tb_{\scriptscriptstyle \rho,\xi^{\scriptscriptstyle (b)}}^{\scriptscriptstyle (n)}$ and $\hat T_{\scriptscriptstyle \rho,\xi^{\scriptscriptstyle (b)}}^{\scriptscriptstyle (n)}$ we denote the corresponding bootstrapped quantities calculated from the data and the $b$-th multiplier sequence $\xi^{\scriptscriptstyle (b)}$. For a given level $\alpha \in (0,1)$, we propose to reject ${\bf H}_0$ in favor of ${\bf H}_1$, if
\begin{align}
\label{testvfkglob}
T_\rho^{(n)} \geq \hat q_{1-\alpha}^{(B)} \Big( T_\rho^{(n)} \Big),
\end{align}
where $\hat q_{1-\alpha}^{\scriptscriptstyle (B)} ( T_\rho^{\scriptscriptstyle (n)} )$ denotes the $(1-\alpha)$-sample quantile of $\hat T_{\scriptscriptstyle \rho,\xi^{\scriptscriptstyle (1)}}^{\scriptscriptstyle (n)}, \ldots , \hat T_{\scriptscriptstyle \rho,\xi^{\scriptscriptstyle (B)}}^{\scriptscriptstyle (n)}$. Similarly, for $\dfi_0 \in \R$, ${\bf H}_0$ is rejected in favor of ${\bf H}_1^{\scriptscriptstyle (\rho,\dfi_0)}$, if
\begin{align}
\label{testvfklok}
W_\rho^{(n,\dfi_0)} := \sup \limits_{\gseqi \in [0,1]} |\Tb_\rho^{(n)}(\gseqi,\dfi_0) | \geq \hat q_{1 - \alpha}^{(B)}\Big( W_\rho^{(n,\dfi_0)} \Big),
\end{align}
where $\hat q_{1 - \alpha}^{\scriptscriptstyle (B)}( W_\rho^{\scriptscriptstyle (n,\dfi_0)} )$ denotes the $(1-\alpha)$-sample quantile of $\hat W_{\scriptscriptstyle \rho, \xi^{\scriptscriptstyle (1)}}^{\scriptscriptstyle (n,\dfi_0)}, \ldots, \hat W_{\scriptscriptstyle \rho, \xi^{\scriptscriptstyle (B)}}^{\scriptscriptstyle (n,\dfi_0)}$, and where $\hat W_{\scriptscriptstyle \rho, \xi^{\scriptscriptstyle (b)}}^{\scriptscriptstyle (n,\dfi_0)} $ $:= \sup_{\gseqi \in [0,1]} |\hat \Tb_{\scriptscriptstyle \rho,\xi^{\scriptscriptstyle (b)}}^{\scriptscriptstyle (n)}(\gseqi, \dfi_0)|$ for $b= 1, \ldots, B$. Furthermore, according to Proposition \ref{poiwconKSdi} we define an exact test procedure, that is ${\bf H}_0$ is rejected in favor of the point-wise alternative ${\bf H}_1^{\scriptscriptstyle (\rho,\dfi_0)}$, if
\begin{align}
\label{testvfklokex}
V_{\rho,\dfi_0}^{(n)} \geq q_{1-\alpha}^K,
\end{align}
where $q_{1-\alpha}^{\scriptscriptstyle K}$ is the $(1-\alpha)$-quantile of the Kolmogorov-Smirnov-distribution, that is the distribution of the supremum of a standard Brownian bridge $K= \sup_{\gseqi \in [0,1]} |\Kb(\gseqi)|$. \\
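Operationally, the decision rules \eqref{testvfkglob} and \eqref{testvfklok} compare the observed statistic with the $(1-\alpha)$-sample quantile of $B$ bootstrap replicates. The sketch below is ours and uses the order-statistic convention $\hat q^{(B)}_{1-\alpha} = T_{(\lceil(1-\alpha)B\rceil)}$, which is one common choice:

```python
import numpy as np

def bootstrap_test(observed_stat, bootstrap_stats, alpha=0.05):
    """Reject H0 if the observed statistic is at least the (1-alpha)-sample
    quantile of the B bootstrap replicates, as in the decision rules above."""
    b = np.sort(np.asarray(bootstrap_stats, dtype=float))
    B = len(b)
    # order-statistic quantile: 1-based index ceil((1-alpha)*B)
    idx = min(B - 1, int(np.ceil((1 - alpha) * B)) - 1)
    q = b[idx]
    return observed_stat >= q, q

rng = np.random.default_rng(2)
boot = rng.exponential(size=500)     # stand-in for B bootstrap replicates
reject, q = bootstrap_test(2.0, boot, alpha=0.05)
```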
The following results show the behaviour of the previously introduced tests under the null hypothesis, local alternatives and the alternatives of an abrupt change. In particular, these tests are consistent asymptotic level $\alpha$ tests. First, recall the tight centered Gaussian process $\Tb_\rho$ in $\linner$ with covariance function \eqref{TrhoCovFkt}, let $L_\rho: (\R,\Bb) \to (\R,\Bb)$ be the distribution function of the supremum variable $\sup_{(\gseqi,\dfi) \in \netir} |\Tb_\rho(\gseqi,\dfi)|$ and let $L_\rho^{\scriptscriptstyle (\dfi_0)}$ be the distribution function of $\sup_{\gseqi \in [0,1]} |\Tb_\rho(\gseqi,\dfi_0)|$. Furthermore, recall the random variable
\[
T_{\rho,g_1} = \sup \limits_{(\gseqi,\dfi) \in \netir} \big|\Tb_\rho(\gseqi,\dfi) + \Tb_{\rho,g_1} (\gseqi,\dfi) \big|,
\]
defined in \eqref{Trhsugpdef} with the deterministic function
\begin{equation*}
\Tb_{\rho,g_1} (\gseqi,\dfi) = N_\rho(g_1;\gseqi,\dfi) - \gseqi N_\rho(g_1;1,\dfi),
\end{equation*}
defined in \eqref{shiftdef} and let
\[
T_{\rho,g_1}^{(\dfi_0)} := \sup \limits_{\gseqi \in [0,1]} \big|\Tb_\rho(\gseqi,\dfi_0) + \Tb_{\rho,g_1} (\gseqi,\dfi_0) \big|.
\]
Then the results on consistency of the tests are as follows.
\begin{proposition}
\label{ConsuH1loc}
Under ${\bf H}_1^{(loc)}$ with $\nu_0 \neq 0$
\begin{multline}
\label{Prop3101}
\Prob\big(L_\rho(T_{\rho,g_1}) > 1-\alpha\big) \leq \liminf_{B \to \infty} \lim_{n\to\infty} \Prob\big( T_\rho^{(n)} \geq \hat q_{1-\alpha}^{(B)} \big( T_\rho^{(n)}\big) \big) \\ \leq \limsup_{B \to \infty} \lim_{n\to\infty} \Prob\big( T_\rho^{(n)} \geq \hat q_{1-\alpha}^{(B)} \big( T_\rho^{(n)}\big) \big) \leq \Prob\big(L_\rho(T_{\rho,g_1}) \ge 1-\alpha\big)
\end{multline}
holds for each $\alpha \in (0,1)$. If additionally $N_{\rho^2}(\nu_0;\dfi_0) >0$, then for all $\alpha \in (0,1)$ we have
\begin{equation}
\label{Prop3102}
\Prob\big(\bar V_{\rho,\dfi_0}^{(g_1)} > q_{1-\alpha}^K \big) \le \liminf_{n\to\infty} \Prob\big(V_{\rho,\dfi_0}^{(n)} \geq q_{1-\alpha}^K\big) \leq \limsup_{n\to\infty} \Prob\big(V_{\rho,\dfi_0}^{(n)} \geq q_{1-\alpha}^K\big) \le \Prob\big(\bar V_{\rho,\dfi_0}^{(g_1)} \ge q_{1-\alpha}^K \big),
\end{equation}
with $V_{\rho,\dfi_0}^{(n)}$ and $\bar V_{\rho,\dfi_0}^{(g_1)}$ defined in \eqref{Vrhosupc}, as well as
\begin{multline}
\label{Prop3103}
\Prob\big(L_\rho^{(\dfi_0)}\big(T_{\rho,g_1}^{(\dfi_0)}\big) > 1-\alpha\big) \leq \liminf_{B \to \infty} \lim_{n\to\infty} \Prob\big(W_\rho^{(n,\dfi_0)} \geq \hat q_{1 - \alpha}^{(B)}\big( W_\rho^{(n,\dfi_0)} \big) \big) \\ \leq \limsup_{B \to \infty} \lim_{n\to\infty} \Prob\big(W_\rho^{(n,\dfi_0)} \geq \hat q_{1 - \alpha}^{(B)}\big( W_\rho^{(n,\dfi_0)} \big) \big) \leq \Prob\big(L^{(\dfi_0)}_\rho\big(T^{(\dfi_0)}_{\rho,g_1}\big) \ge 1-\alpha\big).
\end{multline}
\end{proposition}
\begin{remark}
According to Corollary 1.3 and Remark 4.1 in \cite{GaeMolRos07} the distribution function $L_\rho$ is continuous on $\R$ and strictly increasing on $\R_+$. Thus, \eqref{Prop3101} basically states that under the local alternative for large $B,n \in \N$ the probability that the test \eqref{testvfkglob} rejects the null hypothesis is approximately equal to the probability that the shifted supremum statistic $T_{\rho,g_1}$ exceeds the $(1-\alpha)$-quantile of the non-shifted version $T_{\rho,0}$. An analysis of the latter probability, which is beyond the scope of this paper, then shows in which direction, i.e.\ for which $g_1$, it is harder to distinguish the null hypothesis from the alternative. The assertions \eqref{Prop3102} and \eqref{Prop3103} can be interpreted in the same way.
\end{remark}
\begin{corollary}
\label{prop:asledf}
Under ${\bf H}_0$ the tests \eqref{testvfkglob}, \eqref{testvfklok} and \eqref{testvfklokex} have asymptotic level $\alpha$; that is, if $\nu_0 \neq 0$, we have for each $\alpha \in (0,1)$
\begin{align}
\label{Cor3111}
\lim\limits_{B\to \infty} \lim \limits_{n \to \infty} \Prob\big( T_\rho^{(n)} \geq \hat q_{1-\alpha}^{(B)} ( T_\rho^{(n)}) \big) = \alpha
\end{align}
and furthermore
\begin{align}
\label{Cor3112}
\lim\limits_{n \to \infty} \Prob\big(V_{\rho,\dfi_0}^{(n)} \geq q_{1-\alpha}^K\big) = \alpha, \quad \lim \limits_{B \to \infty} \lim \limits_{n \to \infty} \Prob\big(W_\rho^{(n,\dfi_0)} \geq \hat q_{1 - \alpha}^{(B)}( W_\rho^{(n,\dfi_0)} ) \big) = \alpha,
\end{align}
holds for all $\alpha \in (0,1)$, if $N_{\rho^2}(\nu_0;\dfi_0) >0$.
\end{corollary}
\begin{proposition}
\label{prop:conuH1}
The tests \eqref{testvfkglob}, \eqref{testvfklok} and \eqref{testvfklokex} are consistent in the following sense: Under ${\bf H}_1$, for all $\alpha \in (0,1)$ and all $B \in \N$, we have
\begin{align*}
\lim \limits_{n \to \infty} \Prob\big( T_\rho^{(n)} \geq \hat q_{1-\alpha}^{(B)} ( T_\rho^{(n)}) \big) =1.
\end{align*}
Under ${\bf H}_1^{\scriptscriptstyle (\rho,\dfi_0)}$, for all $\alpha \in (0,1)$ and all $B \in \N$,
\begin{align*}
\lim\limits_{n \to \infty} \Prob\big(V_{\rho,\dfi_0}^{(n)} \geq q_{1-\alpha}^K\big) = 1 \quad \text{ and } \quad \lim \limits_{n \to \infty} \Prob\big(W_\rho^{(n,\dfi_0)} \geq \hat q_{1 - \alpha}^{(B)}( W_\rho^{(n,\dfi_0)} ) \big) = 1.
\end{align*}
\end{proposition}
\subsection{The argmax-estimators}
\label{argdfmaxeSe}
If one of the aforementioned tests rejects the null hypothesis in favor of an abrupt alternative, the natural question arises of how to estimate the unknown break point $\gseqi_0$. A typical approach to this estimation problem in change-point analysis is the so-called argmax-estimator, that is, we basically take the argmax of the function $\gseqi \mapsto \sup_{\dfi\in\R}|\Tb_\rho^{\scriptscriptstyle (n)}(\gseqi,\dfi)|$ as an estimate for $\gseqi_0$. Consistency of our estimators follows from the argmax continuous mapping theorem of \cite{KimPol90} together with the following auxiliary result.
\begin{proposition}
\label{prop:tndfh1}
Under ${\bf H}_1$, the random function $(\theta,\dfi) \mapsto (n\Delta_n)^{\scriptscriptstyle -1/2} \Tb_\rho^{\scriptscriptstyle (n)}(\theta,\dfi)$ converges in $\linner$ to the function
\begin{align*}
T_{(1)}^\rho(\theta,\dfi) \defeq \begin{cases}
\theta (1-\theta_0) \{ N_\rho(\nu_1 ;\dfi) - N_\rho(\nu_2; \dfi) \}, \quad \text{ if } \theta \leq \theta_0 \\
\theta_0 (1- \theta) \{ N_\rho(\nu_1;\dfi) - N_\rho(\nu_2; \dfi) \}, \quad \text{ if } \theta \geq \theta_0
\end{cases}
\end{align*}
in outer probability, where $N_\rho(\nu; \cdot)$ is defined in \eqref{NrhonuDef}.
\end{proposition}
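For fixed $\dfi$ the limit $T_{(1)}^\rho(\cdot,\dfi)$ is a tent-shaped function with its kink at $\theta_0$, so whenever $N_\rho(\nu_1;\dfi) \neq N_\rho(\nu_2;\dfi)$ its modulus is uniquely maximized at the change point. A small numeric sketch (ours) illustrates this:

```python
def tent(theta, theta0, gap):
    """Deterministic limit theta -> T_(1)^rho(theta, t) for fixed t,
    where gap = N_rho(nu1; t) - N_rho(nu2; t)."""
    if theta <= theta0:
        return theta * (1 - theta0) * gap
    return theta0 * (1 - theta) * gap

theta0, gap = 0.6, 1.0
grid = [i / 1000 for i in range(1001)]
vals = [abs(tent(th, theta0, gap)) for th in grid]
argmax = grid[vals.index(max(vals))]   # modulus is maximal at theta0
```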
For the test problem ${\bf H}_0$ versus ${\bf H}_1$ we consider the estimator
\begin{equation}
\label{argmaxsup}
\tilde \gseqi_\rho^{(n)} \defeq \operatorname{arg\,max}_{\gseqi \in [0,1]} \sup_{\dfi \in \R} \big|\Tb_\rho^{(n)}(\gseqi, \dfi)\big|
\end{equation}
and in the setup ${\bf H}_0$ versus ${\bf H}_1^{(\rho,\dfi_0)}$ a suitable estimator for the change point is given by
\begin{equation*}
\tilde \gseqi^{(n)}_{\rho,\dfi_0} \defeq \operatorname{arg\,max}_{\gseqi \in [0,1]} \big|\Tb_\rho^{(n)}(\gseqi,\dfi_0)\big|.
\end{equation*}
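On a toy data set with a deterministic abrupt change the argmax-estimator can be computed on the grid $\gseqi = k/n$. This sketch is ours and, since \eqref{NrhonDef} is not restated here, assumes the truncated empirical form $N_\rho^{(n)}(\gseqi,\dfi) = (n\Delta_n)^{-1}\sum_{j \le \ip{n\gseqi}} \rho(\Delta_j X^{(n)})\ind_{\{\Delta_j X^{(n)} \le \dfi\}}\ind_{\{|\Delta_j X^{(n)}|>v_n\}}$:

```python
import numpy as np

def argmax_estimator(increments, delta_n, v_n, rho, t_grid):
    """Argmax-estimator: the maximizer over theta = k/n of the profile
    sup_t |T_rho^(n)(theta, t)| on a finite threshold grid."""
    dX = np.asarray(increments, dtype=float)
    n = len(dX)
    contrib = rho(dX) * (np.abs(dX) > v_n)
    below = dX[None, :] <= np.asarray(t_grid, dtype=float)[:, None]
    partial = np.cumsum(contrib[None, :] * below, axis=1) / (n * delta_n)
    total = partial[:, -1:]
    k = np.arange(1, n + 1)
    T = np.sqrt(n * delta_n) * (partial - (k[None, :] / n) * total)
    profile = np.max(np.abs(T), axis=0)     # sup over t for each theta = k/n
    return k[np.argmax(profile)] / n

# abrupt change at theta0 = 0.6: jump sizes 1 before, 2 after
dX = np.array([1.0] * 60 + [2.0] * 40)
est = argmax_estimator(dX, delta_n=0.01, v_n=0.5,
                       rho=lambda x: x**2, t_grid=[1.5, 2.5])
```

On this deterministic sample the CUSUM profile peaks exactly at the break, so the estimate recovers $\gseqi_0 = 0.6$.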
The following proposition establishes consistency of these estimators.
\begin{proposition}
\label{prop:argmax}
Under ${\bf H}_1$ we have $\tilde \gseqi_\rho^{\scriptscriptstyle (n)} = \gseqi_0 + o_\Prob(1)$ for $n \to \infty$ and if the special case ${\bf H}_1^{\scriptscriptstyle (\rho,\dfi_0)}$ is true we obtain $\tilde \gseqi^{\scriptscriptstyle (n)}_{\rho,\dfi_0} = \gseqi_0 + o_\Prob(1)$.
\end{proposition}
\begin{remark}
For the sake of convenience we have focused on the case of one single break. The results on the tests in Section \ref{sec:tfacha} also hold for alternatives with finitely many abrupt changes. Moreover, the estimation methods depicted above can easily be extended to detect multiple change points by a standard binary segmentation algorithm dating back to \cite{Vos81}.
\end{remark}
\section{Statistical inference for gradual changes}
\label{sec:gradchadf}
\def\theequation{4.\arabic{equation}}
\setcounter{equation}{0}
As a generalization of Proposition \ref{prop:tndfh1} one can show that $(n\Delta_n)^{\scriptscriptstyle -1/2} \Tb_\rho^{\scriptscriptstyle (n)}(\gseqi,\dfi)$ converges in $\linner$ in outer probability to the function $\Tb_{\rho,g_0}$ defined in \eqref{shiftdef} whenever Assumption \ref{EasierCond} is satisfied.
Thus, under some minor regularity conditions, $\operatorname{arg\,max}_{\gseqi \in [0,1]} |\Tb_\rho^{\scriptscriptstyle (n)}(\gseqi,\dfi)|$ is a consistent estimator of $\operatorname{arg\,max}_{\gseqi \in [0,1]} |\Tb_{\rho,g_0}(\gseqi,\dfi)|$. However, if the jump behaviour changes gradually at $\gseqi_0$, the function $\gseqi \mapsto |\Tb_{\rho,g_0}(\gseqi,\dfi)|$ is usually maximal at a point $\gseqi_1 > \gseqi_0$.
As a consequence the argmax-estimators investigated in Section \ref{argdfmaxeSe} usually overestimate a change point, if the change is not abrupt. Therefore, in this section we introduce test and estimation procedures which are tailored for gradual changes in the entire jump behaviour.
\subsection{A measure of time variation for the entire jump behaviour}
If the jump behaviour is given by \eqref{ComResAss} for some suitable transition kernel $g = g_0$ from $([0,1],$ $ \Bb([0,1]))$ into $(\R,\Bb)$, we follow \cite{VogDet15} and base our analysis of gradual changes on the quantity
\begin{align}
\label{entmeaotivDef}
D_\rho^{(g_0)}(\kseqi,\gseqi,\dfi) \defeq N_\rho(g_0;\kseqi,\dfi) - \frac \kseqi\gseqi N_\rho(g_0;\gseqi,\dfi), \quad (\kseqi,\gseqi,\dfi) \in C \times \R
\end{align}
with
\begin{equation}
\label{cdef}
C := \{(\kseqi,\gseqi) \in [0,1]^2 \mid \kseqi \leq \gseqi \}
\end{equation}
and where $N_\rho(g_0; \cdot, \cdot)$ is defined in \eqref{NrhoDef}. Here and throughout this paper we use the convention $\frac 00 \defeq 1$. We will refer to $D_\rho^{\scriptscriptstyle (g_0)}$ as the measure of time variation (with respect to $\rho$) of the entire jump behaviour of the underlying process, since the following lemma shows that $D_\rho^{\scriptscriptstyle (g_0)}$ indicates whether there is a change in the jump behaviour.
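Structurally, \eqref{entmeaotivDef} compares $N_\rho(g_0;\cdot,\dfi)$ at $\kseqi$ with its linear extrapolation from $\gseqi$. A minimal sketch, assuming a routine evaluating $N_\rho(g_0;\cdot,\cdot)$ is available (all names hypothetical): for a kernel that is constant in $y$ the map $\gseqi \mapsto N_\rho(g_0;\gseqi,\dfi)$ is linear in $\gseqi$ and the measure of time variation vanishes, whereas a nonlinear map yields a nonzero value.

```python
import numpy as np

def time_variation(N, s, theta, x):
    """D(s, theta, x) = N(s, x) - (s/theta) * N(theta, x), with 0/0 := 1."""
    ratio = 1.0 if s == 0.0 and theta == 0.0 else s / theta
    return N(s, x) - ratio * N(theta, x)

# kernel constant in y: N linear in theta, so D vanishes identically
N_const = lambda th, x: th * np.tanh(x)
# toy non-constant jump behaviour: N quadratic in theta, so D != 0
N_var = lambda th, x: th ** 2
```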
\begin{lemma}
\label{Drhg0suit}
Let $\gseqi \in [0,1]$. Then $D^{\scriptscriptstyle (g_0)}_\rho(\kseqi,\gseqi,\dfi) =0$ for all $0 \leq \kseqi \leq \gseqi$ and $\dfi \in \R$ if and only if the kernel $g_0(\cdot,d\taili)$ is Lebesgue almost everywhere constant on $[0,\gseqi]$.
\end{lemma}
According to the preceding lemma there exists a (gradual) change in the jump behaviour given by $g_0$ if and only if
\begin{align*}
\sup_{\gseqi \in [0,1]} \tilde \Dc^{(g_0)}_\rho(\gseqi) >0,
\end{align*}
where
\begin{align*}
\tilde \Dc^{(g_0)}_\rho(\gseqi) \defeq \sup\limits_{\dfi \in \R} \sup_{0\leq\kseqi\leq\gseqi}\big|D^{(g_0)}_\rho(\kseqi,\gseqi,\dfi)\big|.
\end{align*}
As a consequence, the first point of a change in the jump behaviour is given by
\begin{align}
\label{entchanpoi}
\entcp \defeq \inf \left\{ \gseqi \in [0,1] \mid \tilde \Dc^{(g_0)}_\rho(\gseqi) >0 \right\},
\end{align}
where we set $\inf \varnothing \defeq 1$. We call $\entcp$
the change point of the jump behaviour of the underlying process. Notice that by the discussion after \eqref{cdef} the definition in \eqref{entchanpoi} is independent of $\rho$.
In Section \ref{subsec:cth0E} we construct an estimator for $\gseqi_0$, where we only consider the quantity
\begin{align}
\label{supenmotv}
\mathcal D^{(g_0)}_\rho(\gseqi) \defeq \sup \limits_{\dfi \in \R} \sup \limits_{0 \leq \kseqi \leq \gseqi' \leq \gseqi} \big|D^{(g_0)}_\rho( \kseqi, \gseqi', \dfi)\big|,
\end{align}
instead of $\tilde \Dc^{\scriptscriptstyle (g_0)}_\rho$. On the one hand, the monotonicity of $\mathcal D^{\scriptscriptstyle (g_0)}_\rho$ simplifies our presentation; on the other hand, the first time point where $\mathcal D^{\scriptscriptstyle (g_0)}_\rho$ deviates from $0$ is also given by $\gseqi_0$, so it is equivalent to consider $\mathcal D^{\scriptscriptstyle (g_0)}_\rho$ instead. Our analysis of gradual changes is based on a consistent estimator $\Db_\rho^{\scriptscriptstyle (n)}$ of $D^{\scriptscriptstyle (g_0)}_\rho$ which we construct in Section \ref{subsec:Drhoe}. Before that we illustrate the quantities introduced in \eqref{entchanpoi} and \eqref{supenmotv} in the situations of Example \ref{Ex:Sitabcha} and Example \ref{Ex:SitgraCh}.
\begin{example}
\label{exdf1}
Recall the situation of an abrupt change as in Example \ref{Ex:Sitabcha}. Precisely, let $\beta \in (0,2)$, $p > 0$ and $\nu_1, \nu_2 \in \Mc(\beta,p)$ with $\nu_1 \neq \nu_2$ such that for some $\gseqi_0 \in (0,1)$ the transition kernel $g_0$ has the form
\begin{align}
\label{exabtrker}
g_0(y,d\taili) = \begin{cases}
\nu_1(d \taili), \quad &\text{ for } y \in [0,\gseqi_0], \\
\nu_2(d \taili), \quad &\text{ for } y \in (\gseqi_0,1].
\end{cases}
\end{align}
Obviously, for any function $\rho: \R \to \R$ such that Assumption \ref{EasierCond}\eqref{EasrhoCond} and \eqref{rhoneq0} are satisfied, we have $D^{\scriptscriptstyle (g_0)}_\rho(\kseqi,\gseqi',\dfi) =0$ for each $(\kseqi,\gseqi',\dfi) \in C \times \R$ with $\gseqi' \leq \gseqi_0$ and consequently $\Dc^{\scriptscriptstyle (g_0)}_\rho(\gseqi) =0$ for each $\gseqi \leq \gseqi_0$. On the other hand, if $\gseqi_0 < \gseqi' \leq 1$ and $\kseqi \leq \gseqi_0$ we have
\[
D^{(g_0)}_\rho(\kseqi,\gseqi',\dfi) = \kseqi N_\rho(\nu_1;\dfi) - \frac\kseqi{\gseqi'}(\gseqi_0 N_\rho(\nu_1;\dfi) + (\gseqi' - \gseqi_0) N_\rho(\nu_2;\dfi)) = \kseqi (N_\rho(\nu_2;\dfi) - N_\rho(\nu_1;\dfi)) \big( \frac{\gseqi_0}{\gseqi'} - 1 \big)
\]
with \( N_\rho(\nu;\dfi) \) defined in \eqref{NrhonuDef} and we obtain
\[
\sup\limits_{\dfi\in\R}\sup\limits_{\kseqi \leq \gseqi_0} |D^{(g_0)}_\rho(\kseqi,\gseqi',\dfi)| = V_0^{\rho} \gseqi_0\big( 1- \frac{\gseqi_0}{\gseqi'} \big),
\]
where \( V_0^{\rho} = \sup_{\dfi\in\R} |N_\rho(\nu_1;\dfi) - N_\rho(\nu_2;\dfi)| >0\), because of $\nu_1 \neq \nu_2$ and the assumptions on $\rho$. For $\gseqi_0 < \kseqi \leq \gseqi'$ a similar calculation yields
\[
D^{(g_0)}_\rho (\kseqi,\gseqi',\dfi)= \gseqi_0(N_\rho(\nu_2;\dfi) - N_\rho(\nu_1;\dfi)) \big( \frac\kseqi{\gseqi'} - 1 \big)
\]
which gives
\[
\sup\limits_{\dfi \in \R} \sup\limits_{\gseqi_0 < \kseqi \leq \gseqi'} \big|D^{(g_0)}_\rho(\kseqi,\gseqi',\dfi)\big| = V_0^{\rho} \gseqi_0 \big(1- \frac{\gseqi_0}{\gseqi'} \big).
\]
Therefore, it follows that the quantity defined in \eqref{entchanpoi} is given by $\gseqi_0$, because for $\gseqi > \gseqi_0$ we have
\begin{equation}
\label{Drhoabentw}
\Dc^{(g_0)}_\rho(\gseqi) = \sup_{\gseqi_0 < \gseqi' \leq \gseqi} \max \Big\{ \sup_{\dfi \in \R}\sup_{\kseqi \leq \gseqi_0} \big|D^{(g_0)}_\rho(\kseqi,\gseqi',\dfi)\big|,~ \sup_{\dfi \in \R}\sup_{\gseqi_0 < \kseqi \leq \gseqi'} \big|D^{(g_0)}_\rho(\kseqi,\gseqi',\dfi)\big| \Big\} = V_0^{\rho} \gseqi_0 \big( 1- \frac{\gseqi_0}{\gseqi} \big).
\end{equation}
\end{example}
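The closed form \eqref{Drhoabentw} can be checked numerically: with scalar stand-ins $N_1, N_2$ for $N_\rho(\nu_1;\dfi)$ and $N_\rho(\nu_2;\dfi)$, the map $\gseqi \mapsto N_\rho(g_0;\gseqi,\dfi)$ is piecewise linear with a kink at $\gseqi_0$, and a grid maximisation recovers $V_0^\rho \gseqi_0 ( 1- \gseqi_0/\gseqi )$. This is a toy verification under made-up values; all names are hypothetical.

```python
import numpy as np

theta0, N1, N2 = 0.4, 1.0, 3.0       # change point and scalar stand-ins

def N(theta):
    # piecewise linear: slope N1 before theta0, slope N2 after
    return N1 * min(theta, theta0) + N2 * max(theta - theta0, 0.0)

def D(s, tp):
    return N(s) - (s / tp) * N(tp)

def D_cal(theta, m=201):
    """Grid approximation of sup over 0 <= s <= theta' <= theta of |D(s, theta')|."""
    grid = np.linspace(0.0, theta, m)
    return max(abs(D(s, tp)) for tp in grid[1:] for s in grid if s <= tp)

# closed form: V0 * theta0 * (1 - theta0/theta) with V0 = |N1 - N2|
closed_form = abs(N1 - N2) * theta0 * (1.0 - theta0 / 0.8)
```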
\begin{example}
\label{exdf2}
Recall the situation of Example \ref{Ex:SitgraCh}. Let the transition kernel $g_0$ be of the form \eqref{gformgrach} such that there exist $\gseqi_0 \in (0,1)$, \(A_0 \in (0,\infty)\), \(\beta_0 \in (0,\hat \beta]\) and \(p_0 \in [2 \hat p + \eps,\infty)\) for some $\eps >0$ with
\begin{align}
\label{vorcpkonst}
A(y)=A_0, \quad \beta(y) = \beta_0 \quad \text{ and } \quad p(y) = p_0
\end{align}
for each $y \in [0,\gseqi_0]$. Additionally, let $\gseqi_0$ be contained in an open interval $U$ with a real analytic function $\bar A:U \to (0,\infty)$ and affine linear functions $\bar \beta:U \to (0,\hat \beta]$, $\bar p:U \to [2 \hat p + \eps,\infty)$ such that at least one of the functions $\bar A$, $\bar \beta$ and $\bar p$ is non-constant and
\begin{align}
\label{atcpanalytic}
A(y) = \bar A(y), \quad \beta(y) = \bar \beta(y), \quad \text{ as well as } \quad p(y) = \bar p(y)
\end{align}
for all $y \in [\gseqi_0,1) \cap U$. Then the quantity defined in \eqref{entchanpoi} is given by $\gseqi_0$.
\end{example}
\subsection{The empirical measure of time variation and its convergence behaviour}
\label{subsec:Drhoe}
Suppose we have established that $N_\rho^{\scriptscriptstyle (n)}(\cdot,\cdot)$ is a consistent estimator for $N_\rho(g_0;\cdot,\cdot)$. Then with the set $C$ defined in \eqref{cdef} it is reasonable to consider
\begin{align}\label{enttivaryest}
\Db_\rho^{(n)}(\kseqi, \gseqi, \dfi) \defeq N_\rho^{(n)}(\kseqi, \dfi) - \frac \kseqi\gseqi N_\rho^{(n)}(\gseqi, \dfi),~~ (\kseqi, \gseqi, \dfi) \in C \times \R,
\end{align}
as an estimate for the measure of time variation of the entire jump behaviour $D_\rho^{\scriptscriptstyle (g_0)}$ defined in \eqref{entmeaotivDef}.
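In terms of the observed increments, the estimator \eqref{enttivaryest} can be sketched as follows. The truncated-sum form and the normalisation of $N_\rho^{(n)}$ below are assumptions modelled on the multiplier sums appearing in \eqref{hatHbrhondeq}; all names are hypothetical.

```python
import numpy as np

def N_emp(increments, theta, x, v_n, delta_n, rho=np.abs):
    """Truncated-sum estimate of N_rho(g0; theta, x): sum over the first
    floor(n*theta) increments of rho(D_j X) 1{D_j X <= x, |D_j X| > v_n},
    normalised by n*delta_n (the normalisation is an assumption here)."""
    n = len(increments)
    d = np.asarray(increments)[: int(np.floor(n * theta))]
    mask = (d <= x) & (np.abs(d) > v_n)
    return rho(d[mask]).sum() / (n * delta_n)

def D_emp(increments, s, theta, x, v_n, delta_n):
    """Empirical measure of time variation: N^(n)(s,x) - (s/theta) N^(n)(theta,x)."""
    ratio = 1.0 if s == 0.0 and theta == 0.0 else s / theta
    return (N_emp(increments, s, x, v_n, delta_n)
            - ratio * N_emp(increments, theta, x, v_n, delta_n))

# toy check: time-homogeneous increments (all equal) give D_emp = 0
inc = np.full(100, 2.0)
```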
In the following we want to establish consistency of the empirical measure of time variation $\Db_\rho^{\scriptscriptstyle (n)}$. To be precise, the following two theorems show that the process
\begin{align}
\label{bbhrhonDef}
\Hb_\rho^{(n)}(\kseqi, \gseqi, \dfi) := \sqrt{n\Delta_n}\big(\Db_\rho^{(n)}(\kseqi, \gseqi, \dfi) - D^{(g_0)}_\rho(\kseqi, \gseqi, \dfi)\big)
\end{align}
and its bootstrapped counterpart converge weakly or weakly conditional on the data in probability, respectively, to a suitable tight mean zero
Gaussian process.
\begin{theorem}
\label{SchwKentmotv}
If Assumption \ref{EasierCond} is satisfied, then the process $\Hb_\rho^{\scriptscriptstyle (n)}$ defined in \eqref{bbhrhonDef} converges weakly, that is $\Hb_\rho^{\scriptscriptstyle (n)} \weak \Hb_\rho + D_\rho^{\scriptscriptstyle (g_1)}$ in $\linctr$, where $\Hb_\rho$ is a tight mean zero Gaussian process with covariance function
\begin{align}
\label{HbrhoProCov}
\operatorname{Cov}\big(\Hb_\rho(&\kseqi_1, \gseqi_1, \dfi_1),\Hb_\rho(\kseqi_2, \gseqi_2, \dfi_2)\big) \nonumber \\
&= \int \limits_0^{\kseqi_1 \wedge \kseqi_2} \int\limits_{-\infty}^{\dfi_1 \wedge \dfi_2} \rho^2(\taili) g_0(y,d\taili) dy - \frac{\kseqi_1}{\gseqi_1} \int \limits_0^{\kseqi_2 \wedge \gseqi_1} \int\limits_{-\infty}^{\dfi_1 \wedge \dfi_2}
\rho^2(\taili) g_0(y,d\taili) dy \nonumber \\
&\hspace{10mm}- \frac{\kseqi_2}{\gseqi_2} \int \limits_0^{\kseqi_1 \wedge \gseqi_2} \int\limits_{-\infty}^{\dfi_1 \wedge \dfi_2} \rho^2(\taili) g_0(y,d\taili) dy + \frac{\kseqi_1 \kseqi_2}{\gseqi_1 \gseqi_2} \int \limits_0^{\gseqi_1 \wedge \gseqi_2} \int\limits_{-\infty}^{\dfi_1 \wedge \dfi_2} \rho^2(\taili) g_0(y,d\taili) dy.
\end{align}
\end{theorem}
For the statistical change-point inference proposed in the following sections we require quantiles of
functionals of the limiting distribution in Theorem \ref{SchwKentmotv}. Equation \eqref{HbrhoProCov} shows that this distribution depends in a complicated way on the unknown underlying kernel $g_0$
and therefore corresponding quantiles are difficult to estimate. In order to solve this problem we want to use a multiplier bootstrap approach similar to Section \ref{sec:Infabchdf}. To this end, we define the following bootstrap counterpart of the process $\Hb_\rho^{\scriptscriptstyle (n)}$
\begin{align}
\label{hatHbrhondeq}
\hat \Hb_\rho^{(n)} (\kseqi, \gseqi, \dfi)
&\defeq
\hat \Hb_\rho^{(n)} (X^{(n)}_{\Delta_n}, \ldots, X^{(n)}_{n \Delta_n}; \xi_1, \ldots, \xi_n; \kseqi, \gseqi, \dfi ) \nonumber \\
&:=
\frac{1}{\sqrt {n \Delta_n}} \bigg[ \sum \limits_{j=1}^{\ip{n\kseqi}} \xi_j \rho(\Delj X^{(n)}) \ind_{(-\infty,\dfi]}(\Delj X^{(n)}) \ind_{\{|\Delj X^{(n)} | > v_n\}} \nonumber \\
&\hspace{3cm}
- \frac{\kseqi}{\gseqi} \sum \limits_{j = 1}^{\ip{n \gseqi}}\xi_j \rho(\Delj X^{(n)}) \ind_{(-\infty,\dfi]}(\Delj X^{(n)}) \ind_{\{|\Delj X^{(n)} | > v_n\}} \bigg ].
\end{align}
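For a single point $(\kseqi,\gseqi,\dfi)$ the bootstrap statistic \eqref{hatHbrhondeq} is a difference of two multiplier-weighted partial sums. A minimal sketch (variable names hypothetical; $\rho = |\cdot|$ is an arbitrary toy choice):

```python
import numpy as np

def H_boot(increments, xi, s, theta, x, v_n, delta_n, rho=np.abs):
    """Multiplier bootstrap process at a single point (s, theta, x):
    scaled difference of the first floor(n*s) and (s/theta) times the
    first floor(n*theta) multiplier-weighted truncated increment terms."""
    n = len(increments)
    keep = (increments <= x) & (np.abs(increments) > v_n)
    w = xi * rho(increments) * keep               # xi_j * rho(D_j X) * 1{...}
    partial = np.concatenate(([0.0], np.cumsum(w)))
    ratio = 1.0 if s == 0.0 and theta == 0.0 else s / theta
    return (partial[int(np.floor(n * s))]
            - ratio * partial[int(np.floor(n * theta))]) / np.sqrt(n * delta_n)

# toy deterministic check with four increments and fixed multipliers
inc = np.full(4, 2.0)
xi = np.array([1.0, 1.0, -1.0, -1.0])
```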
The result below establishes consistency of $\hat \Hb_\rho^{\scriptscriptstyle (n)}$.
\begin{theorem}
\label{BootHnConvThm}
Let Assumption \ref{EasierCond} be valid and let the multiplier sequence $(\xi_i)_{i \in \N}$
satisfy Assumption \ref{MultiplAss}. Then we have $\hat \Hb_\rho^{\scriptscriptstyle (n)} \weakP \mathbb H_\rho$
in $\linctr$, where the tight mean zero Gaussian process $\mathbb H_\rho$ has the covariance structure \eqref{HbrhoProCov}.
\end{theorem}
\subsection{Estimating the gradual change point}
\label{subsec:cth0E}
For the sake of a unique definition of the (gradual) change point $\gseqi_0$ in \eqref{entchanpoi} we suppose throughout this section that Assumption \ref{EasierCond} holds with $g_1=g_2=0$. Recall the definition
\[
\Dc^{(g_0)}_\rho(\gseqi) = \sup \limits_{\dfi \in \R} \sup \limits_{0 \leq \kseqi \leq \gseqi' \leq \gseqi} \big|D^{(g_0)}_\rho( \kseqi, \gseqi', \dfi)\big|
\]
in \eqref{supenmotv}. By Theorem \ref{SchwKentmotv} the process $\Db_\rho^{\scriptscriptstyle (n)}(\kseqi,\gseqi,\dfi) $ from \eqref{enttivaryest}
is a consistent estimator of $D^{\scriptscriptstyle (g_0)}_\rho(\kseqi, \gseqi, \dfi) $. Therefore, we set
\begin{align*}
\Db_{\rho,*}^{(n)}(\gseqi) \defeq \sup \limits_{\dfi \in \R} \sup \limits_{0 \leq \kseqi \leq \gseqi' \leq \gseqi} \big|\Db_\rho^{(n)}(\kseqi,\gseqi',\dfi)\big|,
\end{align*}
and an application of the continuous mapping theorem and Theorem \ref{SchwKentmotv} yields the following result.
\begin{corollary}
If Assumption \ref{EasierCond} is satisfied with $g_1=g_2=0$, then $(n\Delta_n)^{1/2} \Db_{\rho,*}^{(n)} \weak \Hb_{\rho,*}$ in $\ell^{\infty}\big([0,\gseqi_0]\big)$, where $\Hb_{\rho,*}$ is
the tight process in $\linne$ defined by
\begin{align*}
\Hb_{\rho,*} (\gseqi) \defeq \sup \limits_{\dfi \in \R} \sup \limits_{0 \leq \kseqi \leq \gseqi' \leq \gseqi} |\Hb_\rho(\kseqi, \gseqi', \dfi)|,
\end{align*}
with the centered Gaussian process $\Hb_\rho$ defined in Theorem \ref{SchwKentmotv}.
\end{corollary}
Below we show that the rate of convergence of an estimator for $\gseqi_0$ depends on the smoothness of the curve $\gseqi \mapsto \Dc^{\scriptscriptstyle (g_0)}_\rho(\gseqi)$ at $\gseqi_0$. Thus, we impose a Taylor-type expansion of the function $\Dc^{\scriptscriptstyle (g_0)}_\rho$. More precisely, we assume throughout this section
that $\gseqi_0 <1$ and that there exist constants $\iota, \eta, \smooi, c >0$ such that $\Dc^{\scriptscriptstyle (g_0)}_\rho$ admits an expansion of the form
\begin{equation} \label{additass}
\Dc^{(g_0)}_\rho(\gseqi) = c \big( \gseqi - \gseqi_0 \big)^{\smooi} + \aleph(\gseqi)
\end{equation}
for all $\gseqi \in [\gseqi_0, \gseqi_0 + \iota]$, where the remainder term satisfies
$|\aleph(\gseqi)| \leq K\big(\gseqi - \gseqi_0 \big)^{\smooi + \eta}$ for some $K>0$.
According to Theorem \ref{SchwKentmotv} we have $(n\Delta_n)^{\scriptscriptstyle 1/2} \Db_{\rho,*}^{\scriptscriptstyle (n)}(\gseqi) \rightarrow \infty$ in probability for any $\gseqi \in (\gseqi_0, 1]$. Consequently, if the deterministic sequence $\thrle_n \rightarrow \infty$ is chosen appropriately, the statistic
\[
r_\rho^{(n)}(\gseqi) \defeq \ind_{\lbrace (n\Delta_n)^{1/2} \Db_{\rho,*}^{(n)}(\gseqi) \leq \thrle_n \rbrace} \]
should satisfy
\[r_\rho^{(n)}(\gseqi) \probto \begin{cases}
1, \quad &\text{ if } \gseqi \leq \gseqi_0, \\
0, \quad &\text{ if } \gseqi > \gseqi_0.
\end{cases}\]
Thus, we define the estimator for the change point by
\begin{align}
\label{grestdef}
\hat \theta_\rho^{(n)} = \hat \theta_\rho^{(n)}(\thrle_n) \defeq \int \limits_0^1 r_\rho^{(n)}(\gseqi) d \gseqi.
\end{align}
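Numerically, the estimator \eqref{grestdef} is the fraction of grid points $\gseqi$ at which the rescaled statistic stays below the threshold. A minimal sketch with a toy profile vanishing up to $\gseqi_0 = 0.4$ (all names hypothetical); note that the estimate exceeds $\gseqi_0$ by roughly $(\thrle_n/\sqrt{n\Delta_n})^{1/\smooi}$, in line with Theorem \ref{SchaezerKonvT}:

```python
import numpy as np

def theta_hat(D_star, theta_grid, c_n, n, delta_n):
    """Integral of the indicator r^(n)(theta) over [0,1], approximated by
    the mean over an equidistant grid on [0,1].

    D_star : array with the statistic D*_n evaluated on theta_grid
             (assumed precomputed)."""
    r = (np.sqrt(n * delta_n) * np.asarray(D_star) <= c_n).astype(float)
    return float(r.mean())

# toy profile: zero up to theta_0 = 0.4, then linear (so smooi = 1);
# with sqrt(n*delta_n) = 10 and c_n = 2 the estimate is about 0.4 + 2/10
theta_grid = np.linspace(0.0, 1.0, 1001)
D_star = np.maximum(0.0, theta_grid - 0.4)
```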
The theorem below establishes consistency of the estimator $\hat \theta_\rho^{\scriptscriptstyle (n)}$ under mild additional assumptions on the sequence $(\thrle_n)_{n\in\N}$.
\begin{theorem}
\label{SchaezerKonvT}
If Assumption \ref{EasierCond} is satisfied with $g_1=g_2=0$, $\gseqi_0 <1$, and \eqref{additass} holds
for some $\smooi >0$, then
\[
\hat \theta_\rho^{(n)} - \gseqi_0 = O_\Prob \Big ( \Big ( \frac{\thrle_n}{\sqrt{n\Delta_n}}\Big)^{1/\smooi} \Big),
\]
for any sequence $\thrle_n \rightarrow \infty$ with $\thrle_n / \sqrt{n\Delta_n} \rightarrow 0$.
\end{theorem}
Theorem \ref{SchaezerKonvT} describes how the curvature of $\Dc^{\scriptscriptstyle (g_0)}_\rho$ at $\gseqi_0$ determines the convergence behaviour of the estimator: a lower degree of smoothness of $\Dc^{\scriptscriptstyle (g_0)}_\rho$ at $\gseqi_0$ yields a better rate of convergence. However, the estimator depends on the choice of the threshold level $\thrle_n$, and we explain below how to choose this sequence by bootstrap methods in order to control the probability of over- and underestimation. Before that, the following theorem investigates the mean squared error
\begin{align*}
\operatorname{MSE}(\thrle_n)= \Eb \Big[ \big( \hat \theta_\rho^{(n)}(\thrle_n) - \gseqi_0 \big)^2 \Big]
\end{align*}
of the estimator $\hat \theta_\rho^{\scriptscriptstyle (n)}$. Recall the definition of $\Hb_\rho^{\scriptscriptstyle (n)}$ in \eqref{bbhrhonDef} and define
\begin{equation*}
\Hb_{\rho,*}^{(n)}(\gseqi) \defeq \sup \limits_{\dfi \in \R} \sup \limits_{0 \leq \kseqi \leq \gseqi ' \leq \gseqi} |\Hb_\rho^{(n)}(\kseqi, \gseqi ', \dfi)|, \quad \gseqi \in [0,1],
\end{equation*}
which is an upper bound for the distance between the estimator $\Db_{\rho,*}^{\scriptscriptstyle (n)}(\gseqi)$ and the true value $\Dc^{\scriptscriptstyle (g_0)}_\rho(\gseqi)$. For a sequence $\alpha_n \rightarrow \infty$ with $\alpha_n = o(\thrle_n)$ we decompose the MSE into
\begin{align*}
\text{MSE}^{(\rho)}_1(\thrle_n,\alpha_n) &\defeq \Eb \Big[ \big( \hat \theta_\rho^{(n)}(\thrle_n) - \gseqi_0 \big)^2 \ind_{\left \{ \Hb_{\rho,*}^{(n)}(1) \leq \alpha_n \right\}} \Big], \\
\text{MSE}^{(\rho)}_2(\thrle_n,\alpha_n) &\defeq \Eb \Big[ \big( \hat \theta_\rho^{(n)}(\thrle_n) - \gseqi_0 \big)^2 \ind_{\left \{ \Hb_{\rho,*}^{(n)}(1) > \alpha_n \right\}} \Big] \leq \Prob \big( \Hb_{\rho,*}^{(n)}(1) > \alpha_n \big),
\end{align*}
which can be considered as the contributions to the MSE due to small and large estimation errors, respectively.
\begin{theorem}
\label{MSEdfzerlThm}
Suppose that $\gseqi_0 <1$, \eqref{additass} and Assumption \ref{EasierCond} with $g_1=g_2=0$ are satisfied. Then for any
sequence $\alpha_n \rightarrow \infty$ with $\alpha_n = o(\thrle_n)$ we have
\begin{align*}
K_1 \Big ( \frac{\thrle_n}{\sqrt{n\Delta_n}}\Big)^{2/\smooi} \leq &\operatorname{MSE}^{(\rho)}_1(\thrle_n,\alpha_n) \leq K_2 \Big ( \frac{\thrle_n}{\sqrt{n\Delta_n}}\Big)^{2/\smooi} \\
\nonumber
&\operatorname{MSE}^{(\rho)}_2(\thrle_n,\alpha_n) \leq \Prob \big( \Hb_{\rho,*}^{(n)}(1) > \alpha_n \big),
\end{align*}
for $n \in \N$ sufficiently large, where the constants $K_1$ and $K_2$ can be chosen as
\begin{align*}
K_1 = \bigg( \frac{1-\varphi}{c} \bigg)^{2/\smooi} \quad \text{ and } \quad K_2 = \bigg( \frac{1+ \varphi}{c} \bigg)^{2/\smooi}
\end{align*}
for arbitrary $\varphi \in (0,1)$.
\end{theorem}
In the following we discuss the choice of the regularizing sequence $\thrle_n$ for the estimator $\hat \theta_\rho^{\scriptscriptstyle (n)}$ in order to control the probability of over- and underestimation of the change point $\gseqi_0 \in (0,1)$.
Let $\predfest$ be a preliminary consistent estimate of $\theta_0$. For example, if \eqref{additass} holds for some $\smooi >0$, one can take $\predfest = \hat \theta_\rho^{\scriptscriptstyle (n)}(\thrle_n)$ for a sequence $\thrle_n \rightarrow \infty$ satisfying the assumptions of Theorem \ref{SchaezerKonvT}.
In the sequel, let $B\in\N$ be some large number and let $(\xi^{\scriptscriptstyle (b)})_{b=1, \dots ,B}$ denote independent sequences of random variables, $\xi^{\scriptscriptstyle (b)} \defeq (\xi_j^{\scriptscriptstyle (b)})_{j\in\N}$, satisfying Assumption \ref{MultiplAss}.
We denote by $\hat \Hb_{\rho,*}^{\scriptscriptstyle (n, b)}$ the particular bootstrap statistics calculated with respect to the data and the bootstrap multipliers $\xi^{\scriptscriptstyle (b)}_1, \ldots, \xi^{\scriptscriptstyle (b)}_n$ from the $b$-th iteration, where
\begin{align}
\label{hatFnstardef}
\hat \Hb_{\rho,*}^{(n)}(\gseqi) \defeq \sup \limits_{\dfi \in \R} \sup \limits_{0 \leq \kseqi \leq \gseqi ' \leq \gseqi} \big|\hat \Hb_\rho^{(n)}(\kseqi, \gseqi ', \dfi)\big|
\end{align}
for $\gseqi \in [0,1]$. With these notations for $B,n \in \N$ and $0< r \leq 1$ we define the following empirical distribution function
\begin{align*}
\empFrbodif (x) &= \frac 1B \sum \limits_{i=1}^B \ind_{\big\{ \big(\hat \Hb_{\rho,*}^{\scriptscriptstyle (n,i)}(\predfest)\big)^r \leq x \big\}},
\end{align*}
and we denote by
\begin{align*}
\empFrboindif(y) := \inf \Big\{x \in \R ~ \big| ~ \empFrbodif(x) \geq y \Big\}
\end{align*}
its pseudo-inverse. Then in the sense of the theorems below the optimal choice of the threshold is given by
\begin{align}
\label{thrledfdefeq}
\haFthrle :=
\empFrboindif(1-\alpha)
\end{align}
for a given level $\alpha \in (0,1)$.
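The threshold \eqref{thrledfdefeq} is simply the empirical $(1-\alpha)$-quantile, in the pseudo-inverse sense, of the $B$ bootstrap values $\big(\hat \Hb_{\rho,*}^{\scriptscriptstyle (n,b)}(\predfest)\big)^r$. A minimal sketch operating on precomputed bootstrap replicates (names hypothetical):

```python
import numpy as np

def bootstrap_threshold(H_star_reps, r, alpha):
    """Pseudo-inverse (1-alpha)-quantile of the B values (H*_{n,b})^r:
    the smallest x with empirical c.d.f. F_B(x) >= 1 - alpha."""
    vals = np.sort(np.asarray(H_star_reps, dtype=float) ** r)
    B = len(vals)
    k = int(np.ceil((1.0 - alpha) * B))   # smallest k with k/B >= 1 - alpha
    return float(vals[k - 1])
```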
\begin{theorem}
\label{OptthChodfThm}
Let $0<\alpha<1$ and assume that Assumption \ref{EasierCond} is satisfied with $g_1=g_2=0$ and with $0< \gseqi_0 <1$ for $\gseqi_0$ defined in \eqref{entchanpoi}. Suppose further that there exists some $\dfi_0 \in \R$ with $N_{\rho^2}(g_0;\gseqi_0,\dfi_0) > 0$.
Then the limiting probability for underestimation of the change point $\gseqi_0$ is bounded by $\alpha$. Precisely,
\begin{align*}
\limsup \limits_{B \rightarrow \infty} \limsup \limits_{n \rightarrow \infty} \Prob\Big(\hat \theta_\rho^{(n)}\big(\haFthrleei\big) < \gseqi_0 \Big) \leq \alpha.
\end{align*}
\end{theorem}
\begin{theorem}
\label{KorThrCdfThm}
Let Assumption \ref{EasierCond} be satisfied with $g_1=g_2=0$, let $0<r<1$ and for $\gseqi_0$ defined in \eqref{entchanpoi} let $0< \gseqi_0 <1$. Furthermore, suppose that \eqref{additass} holds for some $\smooi, c >0$ and that there exists a $\dfi_0 \in \R$ satisfying $N_{\rho^2}(g_0;\gseqi_0,\dfi_0) > 0$.
Additionally, let the bootstrap multipliers be either bounded in absolute value or standard normally distributed. Then for each $K > \big( 1/c \big)^{\scriptscriptstyle 1/\smooi}$ and all sequences $(\alpha_n)_{n \in \N} \subset (0,1)$ with $\alpha_n \rightarrow 0$ and $(B_n)_{n \in \N} \subset \N$ with $B_n \rightarrow \infty$ such that
\begin{enumerate}
\item
$\alpha_n^2 B_n \rightarrow \infty,$
\item $(n \Delta_n)^{\frac{1-r}{2r}} \alpha_n \rightarrow \infty,$
\item $\alpha_n^{-1} n \Delta_n^{1+\tau} \to 0,$ with $\tau >0$ from Assumption \ref{EasierCond},
\end{enumerate}
we have
\begin{align}
\label{ovestdfabEq}
\lim \limits_{n \rightarrow \infty} \Prob\Big(\hat\gseqi_\rho^{(n)}\big(\haFcothrleor\big) > \gseqi_0 + K \varphi^*_n \Big) =0,
\end{align}
where $\varphi^*_{n} = \big(\haFcothrleor/\sqrt{n \Delta_n}\big)^{1/\smooi} \probto 0$, while $\haFcothrleor \probto \infty$.
\end{theorem}
Theorem \ref{KorThrCdfThm} would be meaningless without the statement $\varphi^*_{n} \probto 0$. With the additional parameter $r \in (0,1)$ this assertion can be proved using only the assumptions $(n \Delta_n)^{\frac{1-r}{2r}} \alpha_n \rightarrow \infty$ and $\alpha_n^{-1} n \Delta_n^{1+\tau} \to 0$. However, it seems that for $r=1$ the statement $\varphi^*_{n} \probto 0$ can only be verified under very restrictive conditions on the underlying process.
We conclude this section with an example which shows that the expansion \eqref{additass} and the additional assumption $N_{\rho^2}(g_0;\gseqi_0,\dfi_0) > 0$ of the preceding theorems are satisfied in the situations of Example \ref{Ex:Sitabcha} and Example \ref{Ex:SitgraCh}. A proof for this example can be found in Section \ref{prresec4}.
\begin{example}
\label{exdf3} ~ \vspace{-2mm}
\begin{enumerate}[(1)]
\item Recall the situation of an abrupt change considered in Example \ref{exdf1}. In this case it follows from \eqref{Drhoabentw} that
\begin{equation*}
\Dc^{(g_0)}_\rho(\gseqi) = V_0^\rho \gseqi_0 \big( 1- \frac{\gseqi_0}\gseqi \big) = V_0^\rho (\gseqi - \gseqi_0) - \frac{V_0^\rho}\gseqi(\gseqi - \gseqi_0)^2 >0,
\end{equation*}
whenever $\gseqi_0 < \gseqi \leq 1$. Consequently, \eqref{additass} is satisfied with $\smooi =1$ and $\aleph(\gseqi) = - \frac{V_0^\rho}\gseqi (\gseqi - \gseqi_0)^2 = O((\gseqi - \gseqi_0)^2)$ for $\gseqi \to \gseqi_0$. Moreover, if $\nu_1 \neq 0$ and the function $\rho$ meets Assumption \ref{EasierCond}\eqref{rhoneq0}, the transition kernel given by \eqref{exabtrker} satisfies the additional assumption $N_{\rho^2}(g_0;\gseqi_0,\dfi_0) >0$ in Theorem \ref{OptthChodfThm} and Theorem \ref{KorThrCdfThm} for some $\dfi_0 \in \R$.
\item \label{exdf3Nr2} In the situation discussed in Example \ref{exdf2} let
\begin{equation*}
\bar N(y,\dfi) = \bar A(y) \int \limits_{-\infty}^\dfi \rho_{L,\hat p}(\taili) h_{\bar \beta(y), \bar p(y)}(\taili) d\taili
\end{equation*}
for $y \in U$ and $\dfi \in \R$. Then we have
\begin{equation*}
k_0 := \min \Big\{ k \in \N ~ \big| ~ \exists \dfi \in \R \colon N_k(\dfi) \neq 0 \Big\} < \infty,
\end{equation*}
where for $k \in \N_0$ and $\dfi \in \R$
\begin{equation*}
N_k(\dfi) := \Big( \frac{\partial^k \bar N}{\partial y^k} \Big) \Big |_{(\gseqi_0,\dfi)}
\end{equation*}
denotes the $k$-th partial derivative of $\bar N$ with respect to $y$ at $(\gseqi_0,\dfi)$, which is a bounded function on $\R$. Furthermore, there exists an $\iota >0$ such that
\begin{equation}
\label{DcrhoLpexpan}
\Dc_{\rho_{L,\hat p}}^{(g_0)}(\gseqi) = \Big( \frac 1{(k_0 +1)!} \sup_{\dfi \in \R} |N_{k_0}(\dfi)| \Big) (\gseqi - \gseqi_0)^{k_0 +1} + \aleph(\gseqi)
\end{equation}
on $[\gseqi_0,\gseqi_0 + \iota]$ with $|\aleph(\gseqi)| \leq K(\gseqi - \gseqi_0)^{k_0 +2}$ for some $K>0$. Obviously, $N_{\rho_{L,\hat p}^2}(g_0;\gseqi_0,\dfi_0) >0$ holds for some $\dfi_0 \in \R$.
\end{enumerate}
\end{example}
\subsection{Testing for a gradual change}
In Section \ref{sec:Infabchdf} we introduced change point tests for the situation of an abrupt change as in Example \ref{Ex:Sitabcha}, where the jump behaviour is assumed to be constant before and after the change point.
In this section we illustrate a reasonable way to derive test procedures for the existence of a gradual change in the data. In order to formulate suitable hypotheses for a gradual change point recall
the definition of the measure of time variation for the entire jump behaviour $D_\rho^{\scriptscriptstyle (g_0)}$ in
\eqref{entmeaotivDef} and define for $\dfi_0 \in \R$ and $\gseqi \in [0,1]$ the quantities
\begin{align*}
\mathcal D_\rho^{(g_0)}(\gseqi) &\defeq \sup \limits_{\dfi \in \R} \sup \limits_{0 \leq \kseqi \leq \gseqi' \leq \gseqi} \big|D^{(g_0)}_\rho( \kseqi, \gseqi', \dfi)\big| \\
\mathcal D_{\rho,\dfi_0}^{(g_0)}(\gseqi) &\defeq \sup \limits_{0 \leq \kseqi \leq \gseqi' \leq \gseqi} \big|D^{(g_0)}_\rho(\kseqi, \gseqi', \dfi_0)\big|.
\end{align*}
We test the null hypothesis
\begin{enumerate}
\item[${\bf H}_0$:] Assumption~\ref{EasierCond} is satisfied with $g_1=g_2=0$ and there exists a L\'evy measure $\nu_0$ such that $g_0(y,d\taili) = \nu_0(d\taili)$ holds for Lebesgue almost every $y \in [0,1]$.
\end{enumerate}
versus the general alternative of non-constant jump behaviour
\begin{enumerate}
\item[${\bf H}_1^*$:] Assumption~\ref{EasierCond} holds with $g_1=g_2=0$ and we have $\Dc_\rho^{(g_0)}(1) >0$.
\end{enumerate}
If one is interested in gradual changes in $N_\rho(\nu_s^{\scriptscriptstyle (n)};\dfi_0)$ for a fixed $\dfi_0 \in \R$, one can consider
the corresponding alternative
\begin{enumerate}
\item[${\bf H}_1^*(\dfi_0)$:] Assumption~\ref{EasierCond} is satisfied with $g_1=g_2=0$ and we have $\Dc_{\rho,\dfi_0}^{(g_0)}(1) >0$.
\end{enumerate}
Furthermore, we investigate the behaviour of the tests introduced below under local alternatives of the form
\begin{enumerate}
\item[${\bf H}^{(loc)}_1$:] Assumption~\ref{EasierCond} holds with $g_0(y,d\taili) = \nu_0(d\taili)$ for Lebesgue-a.e. $y \in [0,1]$ for some L\'evy measure $\nu_0$ and some transition kernels $g_1,g_2 \in \Gc(\beta,p)$.
\end{enumerate}
\begin{remark}
Note that the function $D^{\scriptscriptstyle (g_0)}_\rho$ in \eqref{entmeaotivDef} is uniformly continuous in $(\kseqi,\gseqi) \in C$ uniformly in $\dfi \in \R$, that is for any $\eta >0$ there exists a $\delta >0$ such that
\[\big|D^{(g_0)}_\rho(\kseqi_1,\gseqi_1,\dfi) - D^{(g_0)}_\rho(\kseqi_2, \gseqi_2, \dfi)\big| < \eta\]
holds for each $\dfi \in \R$ and all pairs $(\kseqi_1,\gseqi_1), (\kseqi_2, \gseqi_2) \in C = \{(\kseqi, \gseqi) \in [0,1]^2 \mid \kseqi \leq \gseqi \}$ with maximum distance $\|(\kseqi_1,\gseqi_1)- (\kseqi_2, \gseqi_2)\|_\infty < \delta$.
Therefore, the function $D^*_\rho(g_0;\kseqi, \gseqi) = \sup_{\dfi \in \R} |D^{\scriptscriptstyle (g_0)}_\rho(\kseqi, \gseqi, \dfi)|$ is uniformly continuous on $C$ and as a consequence $\Dc^{\scriptscriptstyle (g_0)}_\rho$ is continuous on $[0,1]$. Thus, $\Dc^{\scriptscriptstyle (g_0)}_\rho(1) >0$ holds if and only if the point $\theta_0$ defined in \eqref{entchanpoi} satisfies $\theta_0 < 1$.
\end{remark}
The idea of the following tests is to reject the null hypothesis ${\bf H}_0$ for large values of the corresponding estimators $\Db_{\rho,*}^{\scriptscriptstyle (n)}(1)$ and $\sup_{(\kseqi, \gseqi) \in C} |\Db_\rho^{\scriptscriptstyle (n)}(\kseqi, \gseqi,\dfi_0)|$ for $\mathcal D^{\scriptscriptstyle (g_0)}_\rho(1) $
and $\mathcal D_{\rho,\dfi_0}^{\scriptscriptstyle (g_0)}(1)$, respectively. In order to obtain critical values we use the multiplier bootstrap approach introduced in Section \ref{subsec:Drhoe}.
For this purpose we denote by $(\xi^{\scriptscriptstyle (b)})_{b=1,\ldots,B}$ for some large $B\in\N$ independent sequences $\xi^{\scriptscriptstyle (b)}=(\xi^{\scriptscriptstyle (b)}_j)_{j\in\N}$ of multipliers satisfying Assumption \ref{MultiplAss}.
We denote by $\hat \Hb_{\rho}^{\scriptscriptstyle (n, b)}$
the processes defined in \eqref{hatHbrhondeq} calculated from $\{ X^{\scriptscriptstyle (n)}_{i \Delta_n} \mid i=0,\ldots,n\}$ and the $b$-th bootstrap multipliers $\xi^{\scriptscriptstyle (b)}_1, \ldots, \xi^{\scriptscriptstyle (b)}_n$.
For a given level $\alpha \in (0,1)$, we
propose to reject ${\bf H}_0$ in favor of ${\bf H}_1^*$, if
\begin{equation} \label{testdfglobal}
(n\Delta_n)^{1/2} \Db_{\rho,*}^{(n)}(1) \geq \hat q^{(B)}_{1 - \alpha} \Big( {\Hb}^{(n)}_{\rho,*}(1) \Big),
\end{equation}
where $\hat q^{(B)}_{1 - \alpha} \big({\Hb}^{(n)}_{\rho,*}(1) \big)$ denotes the $(1- \alpha)$-quantile of the sample $\hat{\Hb}_{\rho,*}^{\scriptscriptstyle (n,1)}(1), \ldots, \hat{\Hb}_{\rho,*}^{\scriptscriptstyle (n,B)}(1)$ with $\hat \Hb_{\rho,*}^{\scriptscriptstyle (n,b)}$ defined in \eqref{hatFnstardef}.
Similarly, for $\dfi_0 \in \R$, the null hypothesis
${\bf H}_0$ is rejected in favor of ${\bf H}^*_1(\dfi_0)$ if
\begin{equation} \label{testdflokal}
R_{\rho,\dfi_0}^{(n)} \defeq (n \Delta_n)^{1/2} \sup \limits_{(\kseqi, \gseqi) \in C} \big|\Db_\rho^{(n)}(\kseqi, \gseqi,\dfi_0)\big| \geq \hat q^{(B)}_{1 - \alpha}\big(R_{\rho,\dfi_0}^{(n)} \big),
\end{equation}
where $\hat q^{(B)}_{1 - \alpha}\big(R_{\rho,\dfi_0}^{(n)} \big)$ denotes the $(1- \alpha)$-quantile of the sample $\hat R_{\rho,\dfi_0}^{\scriptscriptstyle (n,1)}, \ldots, \hat R_{\rho,\dfi_0}^{\scriptscriptstyle (n,B)}$, and
\[
\hat R_{\rho,\dfi_0}^{(n,b)} \defeq \sup_{(\kseqi, \gseqi) \in C} \big| \hat \Hb^{(n,b)}_{\rho}(\kseqi, \gseqi,\dfi_0) \big|.
\]
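Given the observed statistic and its bootstrap replicates, the decision rule \eqref{testdflokal} reduces to a comparison with an empirical quantile. A minimal sketch (names hypothetical):

```python
import numpy as np

def reject_H0(R_n, R_boot, alpha):
    """Reject H0 if the observed statistic R_n reaches the empirical
    (1-alpha)-quantile of the B bootstrap replicates R_boot."""
    vals = np.sort(np.asarray(R_boot, dtype=float))
    B = len(vals)
    q = vals[int(np.ceil((1.0 - alpha) * B)) - 1]
    return bool(R_n >= q)
```

The same rule with the replicates $\hat{\Hb}_{\rho,*}^{\scriptscriptstyle (n,b)}(1)$ in place of $\hat R_{\rho,\dfi_0}^{\scriptscriptstyle (n,b)}$ implements the global test \eqref{testdfglobal}.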
In the following we investigate the behaviour of the aforementioned tests under ${\bf H}_0$, ${\bf H}_1^{(loc)}$ and the alternatives ${\bf H}_1^*$, ${\bf H}_1^*(\dfi_0)$. To this end, recall the limit process $\Hb_{\rho,g_1} \defeq \Hb_{\rho} + D_\rho^{\scriptscriptstyle (g_1)}$ in Theorem \ref{SchwKentmotv}, where $D_\rho^{\scriptscriptstyle (g_1)}$ is defined in \eqref{entmeaotivDef} and where the tight mean zero Gaussian process $\Hb_\rho$ in $\linctr$ has the covariance function \eqref{HbrhoProCov}. Under the general Assumption \ref{EasierCond} let $K_\rho : (\R,\Bb) \to (\R,\Bb)$ be the c.d.f.\ of $\sup_{(\kseqi,\gseqi,\dfi)\in C\times \R} |\Hb_\rho(\kseqi,\gseqi,\dfi)|$ and let $K_\rho^{\scriptscriptstyle (\dfi_0)} : (\R,\Bb) \to (\R,\Bb)$ be the c.d.f.\ of $\sup_{(\kseqi,\gseqi)\in C} |\Hb_\rho(\kseqi,\gseqi,\dfi_0)|$. Furthermore, let
\begin{align*}
H_{\rho,g_1} &\defeq \sup_{(\kseqi,\gseqi,\dfi)\in C\times \R} |\Hb_\rho(\kseqi,\gseqi,\dfi) + D_\rho^{(g_1)}(\kseqi,\gseqi,\dfi)|, \\
H_{\rho,g_1}^{(\dfi_0)} &\defeq \sup_{(\kseqi,\gseqi)\in C} |\Hb_\rho(\kseqi,\gseqi,\dfi_0) + D_\rho^{(g_1)}(\kseqi,\gseqi,\dfi_0)|.
\end{align*}
The proposition below shows the performance of the new tests under the local alternative ${\bf H}_1^{(loc)}$.
\begin{proposition}
\label{localtgrapro}
Under ${\bf H}_1^{(loc)}$ we have for each $\alpha \in (0,1)$
\begin{multline*}
\Prob\big(K_\rho(H_{\rho,g_1}) > 1-\alpha \big) \leq \liminf_{B \to \infty} \lim_{n\to \infty} \Prob \Big( (n\Delta_n)^{1/2} \Db_{\rho,*}^{(n)}(1) \geq \hat q^{(B)}_{1 - \alpha} \big( {\Hb}^{(n)}_{\rho,*}(1) \big) \Big) \\
\leq \limsup_{B \to \infty} \lim_{n\to \infty} \Prob \Big( (n\Delta_n)^{1/2} \Db_{\rho,*}^{(n)}(1) \geq \hat q^{(B)}_{1 - \alpha} \big( {\Hb}^{(n)}_{\rho,*}(1) \big) \Big) \leq \Prob\big(K_\rho(H_{\rho,g_1}) \geq 1-\alpha \big),
\end{multline*}
provided there exist $\bar \dfi \in \R$ and $\bar \kseqi \in (0,1)$ with $N_{\rho^2}(g_0;\bar \kseqi, \bar \dfi) > 0$. Furthermore,
\begin{multline*}
\Prob\big(K_\rho^{(\dfi_0)}(H^{(\dfi_0)}_{\rho,g_1}) > 1-\alpha \big) \leq \liminf_{B \to \infty} \lim_{n\to \infty} \Prob \Big( R_{\rho,\dfi_0}^{(n)} \geq \hat q^{(B)}_{1 - \alpha}\big(R_{\rho,\dfi_0}^{(n)} \big) \Big) \\
\leq \limsup_{B \to \infty} \lim_{n\to \infty} \Prob \Big( R_{\rho,\dfi_0}^{(n)} \geq \hat q^{(B)}_{1 - \alpha}\big(R_{\rho,\dfi_0}^{(n)} \big) \Big) \leq \Prob\big(K_\rho^{(\dfi_0)}(H^{(\dfi_0)}_{\rho,g_1}) \ge 1-\alpha \big)
\end{multline*}
holds for each $\alpha \in (0,1)$, if there exists a $\bar \kseqi \in (0,1)$ with $N_{\rho^2}(g_0;\bar \kseqi, \dfi_0) > 0$.
\end{proposition}
With the result above and a closer inspection of the limiting probability $\Prob\big(K_\rho(H_{\rho,g_1}) \geq 1-\alpha \big)$, which is beyond the scope of this paper, one could determine for which directions $g_1$ it is more difficult to distinguish the null hypothesis from the alternative. An immediate consequence of Proposition \ref{localtgrapro} is that the tests \eqref{testdfglobal} and \eqref{testdflokal} hold the nominal level $\alpha$ asymptotically.
\begin{corollary}
\label{H0graprop}
The tests \eqref{testdfglobal} and \eqref{testdflokal} are asymptotic level $\alpha$ tests in the following sense: Under ${\bf H}_0$ with $\nu_0\neq 0$ we have for each $\alpha \in(0,1)$
\[\lim_{B \to \infty} \lim_{n\to \infty} \Prob \Big( (n\Delta_n)^{1/2} \Db_{\rho,*}^{(n)}(1) \geq \hat q^{(B)}_{1 - \alpha} \big( {\Hb}^{(n)}_{\rho,*}(1) \big) \Big) = \alpha \]
and moreover
\[\lim_{B \to \infty} \lim_{n\to \infty} \Prob \Big( R_{\rho,\dfi_0}^{(n)} \geq \hat q^{(B)}_{1 - \alpha}\big(R_{\rho,\dfi_0}^{(n)} \big) \Big)=\alpha,\]
holds for all $\alpha\in (0,1)$, if $N_{\rho^2}(\nu_0;\dfi_0) >0$.
\end{corollary}
The tests \eqref{testdfglobal} and \eqref{testdflokal} are also consistent under the fixed alternatives ${\bf H}_1^*$, ${\bf H}_1^*(\dfi_0)$ in the sense of the following proposition.
\begin{proposition} \label{CorConGradf}
Under ${\bf H}_1^*$, we have for all $B \in \mathbb N$
\begin{equation*}
\lim \limits_{n \rightarrow \infty} \mathbb P \Big( (n\Delta_n)^{1/2} \Db_{\rho,*}^{(n)}(1) \geq \hat q^{(B)}_{1 - \alpha} \big( {\Hb}^{(n)}_{\rho,*}(1) \big) \Big) = 1.
\end{equation*}
Under ${\bf H}_1^*(\dfi_0)$, we have for all $B \in \mathbb N$
\[
\lim \limits_{n \rightarrow \infty} \mathbb P \Big( R_{\rho,\dfi_0}^{(n)} \geq \hat q^{(B)}_{1 - \alpha}\big( R_{\rho,\dfi_0}^{(n)} \big) \Big) =1.
\]
\end{proposition}
\section{Finite-sample properties}
\def\theequation{5.\arabic{equation}}
\setcounter{equation}{0}
\label{sec:fisaper}
In this section we present the results of an extensive simulation study assessing the finite-sample properties of the new statistical procedures. The design of this study is as follows:
\begin{itemize}
\item We apply our estimators and test statistics to $n$ data points $\{X_{\Delta_n}, \ldots, X_{n\Delta_n}\}$, which are realizations of an It\=o semimartingale $(X_t)_{t \in \R_+}$ with characteristics $(b,\sigma,\nu_s)$. For the sample size we choose either $n=10000$ or $n=22500$. In the case $n=10000$ we consider the effective sample sizes $k_n := n\Delta_n = 50,100,200$, resulting in the frequencies $\Delta_n^{-1} = 200,100,50$, while in the case $n=22500$ we consider $k_n = n\Delta_n = 50,75,100,150,250$, resulting in $\Delta_n^{-1} = 450,300,225,150,90$.
\item Corresponding to our basic rescaling assumption \eqref{ComResAss} the jump characteristic satisfies
\[
\nu_s(d\taili) = g\Big(\frac{s}{n\Delta_n},d\taili \Big),
\]
where the transition kernel $g(y,d\taili)$ is given by
\begin{equation}
\label{SimMod}
g(y,[z,\infty)) = \begin{cases}
\Big(\frac{\eta(y)}{\pi z}\Big)^{1/2} - \Big(\frac{1}{\pi 10^6}\Big)^{1/2}, \quad &\text{ if } 0 < z \leq \eta(y) 10^6, \\
0, &\text{ otherwise, }
\end{cases}
\end{equation}
and $g(y,(-\infty,z]) = 0$ for all $z <0$.
\item In order to simulate data points $\{X_{\Delta_n}, \ldots, X_{n\Delta_n}\}$ including an abrupt change we choose
\begin{equation}
\label{etaabch}
\eta(y) = \begin{cases}
1, \quad &\text{ if } y \leq \gseqi_0, \\
\psi, \quad &\text{ if } y > \gseqi_0,
\end{cases} \quad \quad\quad (y \in [0,1])
\end{equation}
for $\gseqi_0 \in (0,1)$ and $\psi \ge 1$, and we use a modification of Algorithm 6.13 in \cite{ConTan04} to simulate pure jump It\=o semimartingales under ${\bf H}_0$, i.e.\ for $\psi=1$. Under the alternative of an abrupt change, i.e.\ for $\psi >1$, we concatenate two paths of independent semimartingales.
\item A gradual change in the jump characteristic is realized by choosing
\begin{equation}
\label{etagrch}
\eta(y) = \begin{cases}
1, \quad & \text{ if } y \le \gseqi_0, \\
(A(y-\gseqi_0)^w +1)^2, \quad & \text{ if } y > \gseqi_0,
\end{cases} \quad \quad\quad (y \in [0,1])
\end{equation}
in \eqref{SimMod} for some $\gseqi_0 \in [0,1]$, $A>0$ and $w>0$. In order to obtain pure jump It\=o semimartingale data according to this model we sample $15$ times more frequently, i.e. for $j \in \{1,\ldots,15n\}$ we use a modification of Algorithm 6.13 in \cite{ConTan04} to simulate an increment $Z_j = \tilde X_{j\Delta_n /15}^{(j)} - \tilde X_{(j-1)\Delta_n /15}^{(j)}$ of a $1/2$-stable pure jump L\'evy subordinator with characteristic exponent
\[
\Phi^{(j)}(u) = \int (e^{iuz} - 1) \nu^{(j)}(dz),
\]
where $\nu^{(j)}(dz) = g(j/(15n),dz)$. For the resulting data vector $\{X_{\Delta_n}, \ldots, X_{n\Delta_n}\}$ we use
\[
X_{k\Delta_n} = \sum_{j=1}^{15k} Z_j, \quad (k=1,\ldots,n).
\]
\item In order to investigate the performance of our truncation method we either use the plain pure jump data vector $\{X_{\Delta_n}, \ldots, X_{n\Delta_n}\}$ as described above, resulting in the characteristics $b=\sigma=0$ for the continuous part, or we use $\{X_{\Delta_n} + S_{\Delta_n}, \ldots, X_{n\Delta_n} + S_{n\Delta_n}\}$, where $S_t = W_t +t$ with a Brownian motion $(W_t)_{t \in \R_+}$, resulting in $b=\sigma=1$. In the figures depicted below, the results for pure jump data are presented on the left-hand side, while the results including a continuous component are always placed on the right-hand side.
\item For the truncation sequence $v_n= \gamma \Delta_n^{\ovw}$ we choose $\gamma =1$ and $\ovw = 3/4$ in each run resulting in the parameter $\tau = 2/15$ in Assumption \ref{EasierCond}.
\item For computational reasons we approximate the supremum in $\dfi \in \R$ by taking the maximum either over the finite grid $T_1 := \{0.1\cdot j \mid j=1, \ldots, 30\}$ or over the finite grid $T_2 := \{0.1 + j \cdot 0.3 \mid j=0,1,\ldots,9\}$.
\item For the function $\rho$ we use $\rho_{L,p}$ from \eqref{Eq:rhoLpdef} in Example \ref{Ex:SitgraCh} with parameters $L=1$ and $p=2$.
\item Each combination of parameters presented below is run $500$ times, and whenever a statistical procedure includes a bootstrap method we use $B=200$ bootstrap replications. In order to illustrate the power of our test procedures we display simulated rejection probabilities, i.e.\ the mean of the $500$ test results. Furthermore, we measure the performance of our estimators by the mean absolute deviation, i.e.\ if $\Theta = \{\hat \gseqi_1, \ldots, \hat \gseqi_{500} \}$ is the set of obtained estimation results we depict
\[
\ell^1(\Theta,\gseqi_0) = \frac 1{500} \sum_{j=1}^{500} | \hat \gseqi_j - \gseqi_0 |,
\]
where $\gseqi_0$ is the location of the change point.
\end{itemize}
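The tail function of the jump measure \eqref{SimMod} can be inverted in closed form, which allows an exact simulation of all jumps of size at least a cut-off $\varepsilon > 0$ (jumps below $\varepsilon$ are simply neglected in this sketch, whereas Algorithm 6.13 in \cite{ConTan04} treats them more carefully). The following minimal sketch assumes a level $\eta$ that is constant over the increment; the function and parameter names are hypothetical:

```python
import numpy as np

C0 = np.sqrt(1.0 / (np.pi * 1e6))  # constant term in the tail of the simulation model

def tail(z, eta):
    """nu([z, infinity)) of the simulation model, valid for 0 < z <= eta * 1e6."""
    return np.sqrt(eta / (np.pi * z)) - C0

def inv_tail(u, eta):
    """Inverse of the tail function: the z with tail(z, eta) = u."""
    return eta / (np.pi * (u + C0) ** 2)

def simulate_increment(dt, eta, eps=1e-6, rng=None):
    """One increment of the pure jump subordinator over a time step dt:
    jumps of size >= eps arrive as a Poisson process with intensity
    tail(eps, eta); conditionally on a jump, tail(Z, eta) is uniform
    on (0, tail(eps, eta)), so Z follows by inverse transform."""
    rng = np.random.default_rng() if rng is None else rng
    lam = tail(eps, eta)                 # jump intensity above the cut-off
    n_jumps = rng.poisson(lam * dt)      # number of jumps in [0, dt]
    u = rng.uniform(0.0, lam, size=n_jumps)
    return float(inv_tail(u, eta).sum())
```

For a level $\eta$ varying within an increment, as in the gradual-change model \eqref{etagrch}, one samples on the finer time grid described above and evaluates $\eta$ at the corresponding time points.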
\begin{table}[t!]
\begin{center}
\footnotesize{
\begin{tabular}{ c ||c||c|c|c|c|c|c|c| }
\hline
\multicolumn{1}{|c||}{$k_n$} & \multicolumn{1}{c||}{Test \eqref{testvfkglob}} & \multicolumn{1}{c|}{Pointwise Tests} &
\multicolumn{1}{c|}{$\dfi_0 = 0.5$} & \multicolumn{1}{c|}{$\dfi_0 = 1$} & \multicolumn{1}{c|}{$\dfi_0 = 1.5$} &
\multicolumn{1}{c|}{$\dfi_0 = 2$} & \multicolumn{1}{c|}{$\dfi_0 = 2.5$} & \multicolumn{1}{c|}{$\dfi_0 = 3$} \\
\hline
\multicolumn{1}{|c||}{$50$} & \multicolumn{1}{c||}{0.026} & \multicolumn{1}{c|}{\eqref{testvfklok}} &
\multicolumn{1}{c|}{$0.062$} & \multicolumn{1}{c|}{$0.036$} & \multicolumn{1}{c|}{$0.024$} &
\multicolumn{1}{c|}{$0.036$} & \multicolumn{1}{c|}{$0.026$} & \multicolumn{1}{c|}{$0.036$} \\
\multicolumn{1}{|c||}{ } & \multicolumn{1}{c||}{ } & \multicolumn{1}{c|}{\eqref{testvfklokex}} &
\multicolumn{1}{c|}{$0.060$} & \multicolumn{1}{c|}{$0.042$} & \multicolumn{1}{c|}{$0.030$} &
\multicolumn{1}{c|}{$0.030$} & \multicolumn{1}{c|}{$0.016$} & \multicolumn{1}{c|}{$0.020$} \\
\hline
\multicolumn{1}{|c||}{75} & \multicolumn{1}{c||}{0.052} & \multicolumn{1}{c|}{\eqref{testvfklok}} &
\multicolumn{1}{c|}{$0.058$} & \multicolumn{1}{c|}{$0.048$} & \multicolumn{1}{c|}{$0.046$} &
\multicolumn{1}{c|}{$0.040$} & \multicolumn{1}{c|}{$0.046$} & \multicolumn{1}{c|}{$0.050$} \\
\multicolumn{1}{|c||}{ } & \multicolumn{1}{c||}{ } & \multicolumn{1}{c|}{\eqref{testvfklokex}} &
\multicolumn{1}{c|}{$0.040$} & \multicolumn{1}{c|}{$0.046$} & \multicolumn{1}{c|}{$0.032$} &
\multicolumn{1}{c|}{$0.036$} & \multicolumn{1}{c|}{$0.028$} & \multicolumn{1}{c|}{$0.030$} \\
\hline
\multicolumn{1}{|c||}{100} & \multicolumn{1}{c||}{0.050} & \multicolumn{1}{c|}{\eqref{testvfklok}} &
\multicolumn{1}{c|}{$0.046$} & \multicolumn{1}{c|}{$0.054$} & \multicolumn{1}{c|}{$0.042$} &
\multicolumn{1}{c|}{$0.046$} & \multicolumn{1}{c|}{$0.038$} & \multicolumn{1}{c|}{$0.042$} \\
\multicolumn{1}{|c||}{ } & \multicolumn{1}{c||}{ } & \multicolumn{1}{c|}{\eqref{testvfklokex}} &
\multicolumn{1}{c|}{$0.038$} & \multicolumn{1}{c|}{$0.038$} & \multicolumn{1}{c|}{$0.036$} &
\multicolumn{1}{c|}{$0.040$} & \multicolumn{1}{c|}{$0.028$} & \multicolumn{1}{c|}{$0.032$} \\
\hline
\multicolumn{1}{|c||}{150} & \multicolumn{1}{c||}{0.068} & \multicolumn{1}{c|}{\eqref{testvfklok}} &
\multicolumn{1}{c|}{$0.038$} & \multicolumn{1}{c|}{$0.054$} & \multicolumn{1}{c|}{$0.054$} &
\multicolumn{1}{c|}{$0.054$} & \multicolumn{1}{c|}{$0.058$} & \multicolumn{1}{c|}{$0.066$} \\
\multicolumn{1}{|c||}{ } & \multicolumn{1}{c||}{ } & \multicolumn{1}{c|}{\eqref{testvfklokex}} &
\multicolumn{1}{c|}{$0.036$} & \multicolumn{1}{c|}{$0.036$} & \multicolumn{1}{c|}{$0.050$} &
\multicolumn{1}{c|}{$0.042$} & \multicolumn{1}{c|}{$0.052$} & \multicolumn{1}{c|}{$0.044$} \\
\hline
\multicolumn{1}{|c||}{250} & \multicolumn{1}{c||}{0.060} & \multicolumn{1}{c|}{\eqref{testvfklok}} &
\multicolumn{1}{c|}{$0.068$} & \multicolumn{1}{c|}{$0.056$} & \multicolumn{1}{c|}{$0.056$} &
\multicolumn{1}{c|}{$0.058$} & \multicolumn{1}{c|}{$0.064$} & \multicolumn{1}{c|}{$0.060$} \\
\multicolumn{1}{|c||}{ } & \multicolumn{1}{c||}{ } & \multicolumn{1}{c|}{\eqref{testvfklokex}} &
\multicolumn{1}{c|}{$0.046$} & \multicolumn{1}{c|}{$0.034$} & \multicolumn{1}{c|}{$0.034$} &
\multicolumn{1}{c|}{$0.032$} & \multicolumn{1}{c|}{$0.044$} & \multicolumn{1}{c|}{$0.052$} \\
\hline
\end{tabular}
}
\caption{\textbf{Test procedures \eqref{testvfkglob}, \eqref{testvfklok} and \eqref{testvfklokex}} under $\textbf H_0$.
\it Simulated rejection probabilities of the tests \eqref{testvfkglob}, \eqref{testvfklok} and
\eqref{testvfklokex} based on $500$ pure jump subordinator data vectors under the null hypothesis.}
\label{tab:h011}
\end{center}
\vspace{-.1cm}
\end{table}
\begin{table}[t!]
\begin{center}
\footnotesize{
\begin{tabular}{ c ||c||c|c|c|c|c|c|c| }
\hline
\multicolumn{1}{|c||}{$k_n$} & \multicolumn{1}{c||}{Test \eqref{testvfkglob}} & \multicolumn{1}{c|}{Pointwise Tests} &
\multicolumn{1}{c|}{$\dfi_0 = 0.5$} & \multicolumn{1}{c|}{$\dfi_0 = 1$} & \multicolumn{1}{c|}{$\dfi_0 = 1.5$} &
\multicolumn{1}{c|}{$\dfi_0 = 2$} & \multicolumn{1}{c|}{$\dfi_0 = 2.5$} & \multicolumn{1}{c|}{$\dfi_0 = 3$} \\
\hline
\multicolumn{1}{|c||}{$50$} & \multicolumn{1}{c||}{0.040} & \multicolumn{1}{c|}{\eqref{testvfklok}} &
\multicolumn{1}{c|}{$0.038$} & \multicolumn{1}{c|}{$0.042$} & \multicolumn{1}{c|}{$0.036$} &
\multicolumn{1}{c|}{$0.054$} & \multicolumn{1}{c|}{$0.034$} & \multicolumn{1}{c|}{$0.036$} \\
\multicolumn{1}{|c||}{ } & \multicolumn{1}{c||}{ } & \multicolumn{1}{c|}{\eqref{testvfklokex}} &
\multicolumn{1}{c|}{$0.036$} & \multicolumn{1}{c|}{$0.030$} & \multicolumn{1}{c|}{$0.028$} &
\multicolumn{1}{c|}{$0.042$} & \multicolumn{1}{c|}{$0.026$} & \multicolumn{1}{c|}{$0.028$} \\
\hline
\multicolumn{1}{|c||}{75} & \multicolumn{1}{c||}{0.058} & \multicolumn{1}{c|}{\eqref{testvfklok}} &
\multicolumn{1}{c|}{$0.024$} & \multicolumn{1}{c|}{$0.050$} & \multicolumn{1}{c|}{$0.030$} &
\multicolumn{1}{c|}{$0.048$} & \multicolumn{1}{c|}{$0.058$} & \multicolumn{1}{c|}{$0.050$} \\
\multicolumn{1}{|c||}{ } & \multicolumn{1}{c||}{ } & \multicolumn{1}{c|}{\eqref{testvfklokex}} &
\multicolumn{1}{c|}{$0.030$} & \multicolumn{1}{c|}{$0.032$} & \multicolumn{1}{c|}{$0.020$} &
\multicolumn{1}{c|}{$0.042$} & \multicolumn{1}{c|}{$0.046$} & \multicolumn{1}{c|}{$0.036$} \\
\hline
\multicolumn{1}{|c||}{100} & \multicolumn{1}{c||}{0.050} & \multicolumn{1}{c|}{\eqref{testvfklok}} &
\multicolumn{1}{c|}{$0.044$} & \multicolumn{1}{c|}{$0.050$} & \multicolumn{1}{c|}{$0.040$} &
\multicolumn{1}{c|}{$0.046$} & \multicolumn{1}{c|}{$0.048$} & \multicolumn{1}{c|}{$0.052$} \\
\multicolumn{1}{|c||}{ } & \multicolumn{1}{c||}{ } & \multicolumn{1}{c|}{\eqref{testvfklokex}} &
\multicolumn{1}{c|}{$0.034$} & \multicolumn{1}{c|}{$0.040$} & \multicolumn{1}{c|}{$0.026$} &
\multicolumn{1}{c|}{$0.046$} & \multicolumn{1}{c|}{$0.040$} & \multicolumn{1}{c|}{$0.048$} \\
\hline
\multicolumn{1}{|c||}{150} & \multicolumn{1}{c||}{0.054} & \multicolumn{1}{c|}{\eqref{testvfklok}} &
\multicolumn{1}{c|}{$0.040$} & \multicolumn{1}{c|}{$0.050$} & \multicolumn{1}{c|}{$0.048$} &
\multicolumn{1}{c|}{$0.056$} & \multicolumn{1}{c|}{$0.048$} & \multicolumn{1}{c|}{$0.060$} \\
\multicolumn{1}{|c||}{ } & \multicolumn{1}{c||}{ } & \multicolumn{1}{c|}{\eqref{testvfklokex}} &
\multicolumn{1}{c|}{$0.040$} & \multicolumn{1}{c|}{$0.032$} & \multicolumn{1}{c|}{$0.038$} &
\multicolumn{1}{c|}{$0.038$} & \multicolumn{1}{c|}{$0.030$} & \multicolumn{1}{c|}{$0.038$} \\
\hline
\multicolumn{1}{|c||}{250} & \multicolumn{1}{c||}{0.060} & \multicolumn{1}{c|}{\eqref{testvfklok}} &
\multicolumn{1}{c|}{$0.046$} & \multicolumn{1}{c|}{$0.058$} & \multicolumn{1}{c|}{$0.036$} &
\multicolumn{1}{c|}{$0.056$} & \multicolumn{1}{c|}{$0.062$} & \multicolumn{1}{c|}{$0.058$} \\
\multicolumn{1}{|c||}{ } & \multicolumn{1}{c||}{ } & \multicolumn{1}{c|}{\eqref{testvfklokex}} &
\multicolumn{1}{c|}{$0.036$} & \multicolumn{1}{c|}{$0.050$} & \multicolumn{1}{c|}{$0.030$} &
\multicolumn{1}{c|}{$0.044$} & \multicolumn{1}{c|}{$0.054$} & \multicolumn{1}{c|}{$0.046$} \\
\hline
\end{tabular}
}
\caption{\textbf{Test procedures \eqref{testvfkglob}, \eqref{testvfklok} and \eqref{testvfklokex}} under $\textbf H_0$.
\it Simulated rejection probabilities of the tests \eqref{testvfkglob}, \eqref{testvfklok} and
\eqref{testvfklokex} based on $500$ pure jump subordinator data vectors with an additional drift and Brownian motion under the null hypothesis.}
\label{tab:h011pB}
\end{center}
\vspace{-.1cm}
\end{table}
\subsection{Finite-sample performance of the procedures in Section \ref{sec:Infabchdf}}
\label{fsperCUSUMpr}
In order to demonstrate the performance of the procedures introduced in Section \ref{sec:Infabchdf} we choose the sample size $n=22500$ and the grid $T_1 = \{0.1\cdot j \mid j=1, \ldots, 30\}$ to approximate the supremum in $\dfi \in \R$. The nominal level of the test procedures is $\alpha = 5\%$ in each run.
\vspace{.2cm}
\noindent
\textbf{Simulated rejection probabilities for the tests \eqref{testvfkglob}, \eqref{testvfklok} and \eqref{testvfklokex}}
\vspace{.3cm}
Tables \ref{tab:h011} and \ref{tab:h011pB} show a reasonable approximation of the nominal level $\alpha = 0.05$ by the tests \eqref{testvfkglob}, \eqref{testvfklok} and \eqref{testvfklokex} under ${\bf H}_0$. Test \eqref{testvfklokex} appears to be slightly more conservative than test \eqref{testvfklok}. In Table \ref{tab:h011pB} the underlying process includes a continuous component with $b=\sigma=1$, and the results are similar to those in Table \ref{tab:h011}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{CUSbeDu}~~~ \includegraphics[width=0.48\textwidth]{CUSbeDupBM}
\vspace{-.5cm}
\caption{\label{fig:CUSbeDu}
\it Simulated rejection probabilities of the test \eqref{testvfkglob} for different factors of jump size $\psi$ in \eqref{etaabch}. Pure jump data on the left-hand side and data including a Brownian motion with drift on the right-hand side. The dashed red line indicates $\alpha = 5\%$.
}
\end{figure}
In Figure \ref{fig:CUSbeDu} we depict rejection probabilities of the test \eqref{testvfkglob} for different effective sample sizes $k_n = n \Delta_n$. The factor of jump size corresponds to $\psi$ in \eqref{etaabch}, and the dashed red line indicates the nominal level $\alpha = 5\%$. The change point is located at $\gseqi_0 = 0.5$. The results are as expected: larger differences of the factor of jump size before and after the change yield higher rejection probabilities. Moreover, due to better approximations the relative frequencies of rejections increase with the effective sample size $k_n = n \Delta_n$. Notice also that the results for pure jump It\=o semimartingales and for data including a continuous component are almost identical, which indicates a reasonable performance of the proposed truncation technique for a moderate sample size of $n=22500$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{CUSth0Du}~~~ \includegraphics[width=0.48\textwidth]{CUSth0DupBM}
\vspace{-.5cm}
\caption{\label{fig:CUSt0Du}
\it Simulated relative frequencies of rejection of the test \eqref{testvfkglob} for different locations of the change point $\gseqi_0$ in \eqref{etaabch} for pure jump data (left-hand side) and with an additional Brownian motion with drift (right-hand side). The dashed red line indicates the nominal level $\alpha = 5\%$.
}
\end{figure}
Figure \ref{fig:CUSt0Du} shows rejection probabilities for varying locations of the change point $\gseqi_0$, where $\psi =4$ in \eqref{etaabch}. Our results illustrate that an abrupt change is detected best if it is located close to the middle of the data set, i.e.\ for $\gseqi_0 \approx 0.5$. In this case the power of the test is again increasing in the effective sample size $k_n = n \Delta_n$, and the performance for data including a continuous component is nearly the same.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{CUSpDurchl}~~~ \includegraphics[width=0.48\textwidth]{CUSpDurchlpBM}
\vspace{-.5cm}
\caption{\label{fig:CUSpDu}
\it Simulated rejection probabilities of the test \eqref{testvfkglob} for different values of the parameter $p \ge 2$ in the function $\rho_{1,p}$ for pure jump data (left panel) and with an additional Brownian motion with drift (right panel). The dashed red line indicates the nominal level $\alpha = 5\%$.
}
\end{figure}
In Figure \ref{fig:CUSpDu} we display relative frequencies of rejection for different values of the parameter $p \in [2,20]$ of the function $\rho_{1,p}$ defined in \eqref{Eq:rhoLpdef}. This function is used to calculate the process $\Tb_{\rho_{1,p}}^{\scriptscriptstyle (n)}(\gseqi,\dfi)$ for certain values of $\gseqi \in [0,1]$ and $\dfi \in T_1$. In each run the change point is located at $\gseqi_0 = 0.5$ and we have $\psi = 3$ in \eqref{etaabch}. The results suggest using the smallest possible value of the parameter $p$ in order to obtain maximal power of the test. Again, the rejection probabilities are nearly unaffected by the presence of a Brownian component.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{CUStDu}~~~ \includegraphics[width=0.48\textwidth]{CUStDupBM}
\vspace{-.5cm}
\caption{\label{fig:CUStDu}
\it Simulated rejection probabilities of the test \eqref{testvfklok} (black lines) and the test \eqref{testvfklokex} (grey lines) for different values $\dfi_0$ for pure jump data (left-hand side) and with an additional Brownian motion with drift (right-hand side). The dashed red line indicates the nominal level $\alpha = 5\%$.
}
\end{figure}
In Figure \ref{fig:CUStDu} we depict rejection probabilities of the tests \eqref{testvfklok} and \eqref{testvfklokex} for different values of $\dfi_0 \in [0.1,50]$. In the underlying model \eqref{SimMod} we use $\eta(y)$ defined in \eqref{etaabch} with $\gseqi_0 = 0.5$ and $\psi = 3$. We observe that test \eqref{testvfklok} has slightly more power than test \eqref{testvfklokex} and that the power of both tests is increasing in $\dfi_0$ for small values of $\dfi_0$. The latter can be explained by the fact that fewer increments of the underlying It\=o semimartingale taking values in the interval $(v_n,\dfi_0]$ are available to calculate the test statistics. The effect is even more pronounced when a Brownian component is present (right panel). In this case it is more difficult to detect a change, because the small increments are superposed with an i.i.d.\ sequence of normally distributed random variables with variance $\Delta_n$ (see also Figure 3 in \cite{BueHofVetDet15}). Furthermore, one can show (see, for instance, Lemma 6.3 in \cite{HofVetDet17}) that in the case of a pure jump It\=o semimartingale the probability of the event that $m$ increments exceed the value $\dfi_0$ is bounded by $K \dfi_0^{\scriptscriptstyle -m/2}$. As a consequence, the power of both tests saturates for large $\dfi_0$, because only a negligible proportion of the increments exceeds $\dfi_0$.
\vspace{3mm}
\noindent
\textbf{Finite-sample performance of the argmax-estimator \eqref{argmaxsup}}
\vspace{.2cm}
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{amSchbeDu}~~~ \includegraphics[width=0.48\textwidth]{amSchbeDupB}
\vspace{-.5cm}
\caption{\label{fig:argmbeDu}
\it Mean absolute deviation of the estimator \eqref{argmaxsup} for different values $\psi \ge 1$ in \eqref{etaabch} for pure jump data (left panel) and with an additional Brownian motion with drift (right panel).
}
\end{figure}
In Figure \ref{fig:argmbeDu} we display mean absolute deviations of the estimator \eqref{argmaxsup} for different values $\psi \in [1,5]$ in \eqref{etaabch}, where the true change point is located at $\gseqi_0 = 0.5$. The results correspond to Figure \ref{fig:CUSbeDu} in the sense that larger values of $\psi$ yield a better performance of the statistical procedure. Due to better approximations the mean absolute deviation is also decreasing in the effective sample size $k_n = n \Delta_n$. We also observe that, owing to the truncation approach, the estimation results are nearly unaffected by the presence of a Brownian component.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{amSchth0Du}~~~ \includegraphics[width=0.48\textwidth]{amSchth0DupB}
\vspace{-.5cm}
\caption{\label{fig:argmt0Du}
\it Mean absolute deviation of the estimator \eqref{argmaxsup} for different values $\gseqi_0 \in (0,1)$ in \eqref{etaabch} for pure jump It\=o semimartingales (left-hand side) and with an additional Brownian component (right-hand side).
}
\end{figure}
Figure \ref{fig:argmt0Du} shows mean absolute deviations of estimator \eqref{argmaxsup} for different locations of the change point $\gseqi_0 \in (0,1)$, where we choose $\psi =3$ in \eqref{etaabch}. Similar to Figure \ref{fig:CUSt0Du} the results suggest that a change point can be detected best if it is located around the middle of the data set, i.e. if $\gseqi_0 \approx 0.5$. Furthermore, as in Figure \ref{fig:argmbeDu} the estimation error is decreasing in the effective sample size $k_n$.
\subsection{Finite-sample performance of the procedures in Section \ref{sec:gradchadf}}
In this section we investigate the finite-sample performance of the statistical procedures introduced in Section \ref{sec:gradchadf}.
\vspace{3mm}
\noindent
\textbf{Finite-sample performance of the tests \eqref{testdfglobal} and \eqref{testdflokal}}
\vspace{.2cm}
Table \ref{tab:h0gr} and Table \ref{tab:h0grpB} show simulated rejection probabilities of the tests \eqref{testdfglobal} and \eqref{testdflokal} under the null hypothesis, i.e.\ for $\psi =1$ in \eqref{etaabch}. The sample size is $n=22500$, and for the test \eqref{testdfglobal} we approximate the supremum in $\dfi \in \R$ by taking the maximum over the finite grid $T_1 = \{0.1\cdot j \mid j=1, \ldots, 30\}$. In both cases, for pure jump It\=o semimartingales ($b=\sigma =0$; Table \ref{tab:h0gr}) and for It\=o semimartingales including a Brownian component ($b=\sigma =1$; Table \ref{tab:h0grpB}), we observe a reasonable approximation of the nominal level $\alpha = 5\%$.
\begin{table}[t!]
\begin{center}
\begin{tabular}{ c ||c||c|c|c|c|c|c| }
\hline
\multicolumn{1}{|c||}{ } & \multicolumn{1}{c||}{Test \eqref{testdfglobal}} & \multicolumn{6}{c|}{Test \eqref{testdflokal}} \\
\hline
\multicolumn{1}{|c||}{$k_n$} & \multicolumn{1}{c||}{$T_1$} &
\multicolumn{1}{c|}{$\dfi_0 = 0.5$} & \multicolumn{1}{c|}{$\dfi_0 = 1$} & \multicolumn{1}{c|}{$\dfi_0 = 1.5$} &
\multicolumn{1}{c|}{$\dfi_0 = 2$} & \multicolumn{1}{c|}{$\dfi_0 = 2.5$} & \multicolumn{1}{c|}{$\dfi_0 = 3$} \\
\hline
\multicolumn{1}{|c||}{$50$} & \multicolumn{1}{c||}{$0.050$} &
\multicolumn{1}{c|}{$0.028$} & \multicolumn{1}{c|}{$0.020$} & \multicolumn{1}{c|}{$0.030$} &
\multicolumn{1}{c|}{$0.040$} & \multicolumn{1}{c|}{$0.056$} & \multicolumn{1}{c|}{$0.046$} \\
\hline
\multicolumn{1}{|c||}{$75$} & \multicolumn{1}{c||}{$0.048$} &
\multicolumn{1}{c|}{$0.058$} & \multicolumn{1}{c|}{$0.058$} & \multicolumn{1}{c|}{$0.048$} &
\multicolumn{1}{c|}{$0.048$} & \multicolumn{1}{c|}{$0.048$} & \multicolumn{1}{c|}{$0.044$} \\
\hline
\multicolumn{1}{|c||}{$100$} & \multicolumn{1}{c||}{$0.056$} &
\multicolumn{1}{c|}{$0.062$} & \multicolumn{1}{c|}{$0.038$} & \multicolumn{1}{c|}{$0.046$} &
\multicolumn{1}{c|}{$0.038$} & \multicolumn{1}{c|}{$0.046$} & \multicolumn{1}{c|}{$0.060$} \\
\hline
\multicolumn{1}{|c||}{$150$} & \multicolumn{1}{c||}{$0.076$} &
\multicolumn{1}{c|}{$0.056$} & \multicolumn{1}{c|}{$0.062$} & \multicolumn{1}{c|}{$0.066$} &
\multicolumn{1}{c|}{$0.054$} & \multicolumn{1}{c|}{$0.062$} & \multicolumn{1}{c|}{$0.078$} \\
\hline
\multicolumn{1}{|c||}{$250$} & \multicolumn{1}{c||}{$0.062$} &
\multicolumn{1}{c|}{$0.070$} & \multicolumn{1}{c|}{$0.070$} & \multicolumn{1}{c|}{$0.058$} &
\multicolumn{1}{c|}{$0.056$} & \multicolumn{1}{c|}{$0.054$} & \multicolumn{1}{c|}{$0.066$} \\
\hline
\end{tabular}
\caption{\textbf{Test procedures \eqref{testdfglobal} and \eqref{testdflokal}} under ${\bf H}_0$. \it Simulated rejection probabilities of the test \eqref{testdfglobal} and the test \eqref{testdflokal} using $500$ pure jump It\=o semimartingale data vectors under the null hypothesis.}
\label{tab:h0gr}
\end{center}
\vspace{-.1cm}
\end{table}
\begin{table}[t!]
\begin{center}
\begin{tabular}{ c ||c||c|c|c|c|c|c| }
\hline
\multicolumn{1}{|c||}{ } & \multicolumn{1}{c||}{Test \eqref{testdfglobal}} & \multicolumn{6}{c|}{Test \eqref{testdflokal}} \\
\hline
\multicolumn{1}{|c||}{$k_n$} & \multicolumn{1}{c||}{$T_1$} &
\multicolumn{1}{c|}{$\dfi_0 = 0.5$} & \multicolumn{1}{c|}{$\dfi_0 = 1$} & \multicolumn{1}{c|}{$\dfi_0 = 1.5$} &
\multicolumn{1}{c|}{$\dfi_0 = 2$} & \multicolumn{1}{c|}{$\dfi_0 = 2.5$} & \multicolumn{1}{c|}{$\dfi_0 = 3$} \\
\hline
\multicolumn{1}{|c||}{$50$} & \multicolumn{1}{c||}{$0.044$} &
\multicolumn{1}{c|}{$0.036$} & \multicolumn{1}{c|}{$0.026$} & \multicolumn{1}{c|}{$0.028$} &
\multicolumn{1}{c|}{$0.044$} & \multicolumn{1}{c|}{$0.040$} & \multicolumn{1}{c|}{$0.040$} \\
\hline
\multicolumn{1}{|c||}{$75$} & \multicolumn{1}{c||}{$0.042$} &
\multicolumn{1}{c|}{$0.050$} & \multicolumn{1}{c|}{$0.054$} & \multicolumn{1}{c|}{$0.042$} &
\multicolumn{1}{c|}{$0.044$} & \multicolumn{1}{c|}{$0.038$} & \multicolumn{1}{c|}{$0.044$} \\
\hline
\multicolumn{1}{|c||}{$100$} & \multicolumn{1}{c||}{$0.074$} &
\multicolumn{1}{c|}{$0.040$} & \multicolumn{1}{c|}{$0.038$} & \multicolumn{1}{c|}{$0.036$} &
\multicolumn{1}{c|}{$0.046$} & \multicolumn{1}{c|}{$0.062$} & \multicolumn{1}{c|}{$0.068$} \\
\hline
\multicolumn{1}{|c||}{$150$} & \multicolumn{1}{c||}{$0.044$} &
\multicolumn{1}{c|}{$0.036$} & \multicolumn{1}{c|}{$0.056$} & \multicolumn{1}{c|}{$0.058$} &
\multicolumn{1}{c|}{$0.052$} & \multicolumn{1}{c|}{$0.042$} & \multicolumn{1}{c|}{$0.044$} \\
\hline
\multicolumn{1}{|c||}{$250$} & \multicolumn{1}{c||}{$0.050$} &
\multicolumn{1}{c|}{$0.034$} & \multicolumn{1}{c|}{$0.042$} & \multicolumn{1}{c|}{$0.056$} &
\multicolumn{1}{c|}{$0.062$} & \multicolumn{1}{c|}{$0.062$} & \multicolumn{1}{c|}{$0.058$} \\
\hline
\end{tabular}
\caption{\textbf{Test procedures \eqref{testdfglobal} and \eqref{testdflokal}} under ${\bf H}_0$. \it Simulated rejection probabilities of the test \eqref{testdfglobal} and the test \eqref{testdflokal} using $500$ pure jump It\=o semimartingale data vectors plus a drift and plus a Brownian motion under the null hypothesis.}
\label{tab:h0grpB}
\end{center}
\vspace{-.1cm}
\end{table}
For computational reasons, the results on the tests \eqref{testdfglobal} and \eqref{testdflokal} depicted below are obtained for a sample size $n=10000$ and effective sample sizes $k_n \in \{50,100,200\}$. In each run we choose the nominal level $\alpha = 5\%$, and the supremum in $\dfi \in \R$ is approximated by the maximum over the finite grid $T_1 = \{0.1\cdot j \mid j=1, \ldots, 30\}$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{grCPTsDu}~~~ \includegraphics[width=0.48\textwidth]{grCPTsDupB}
\vspace{-.5cm}
\caption{\label{fig:grtwDu}
\it Simulated rejection probabilities of the test \eqref{testdfglobal} for different values $w \in [0.6,30]$ in \eqref{etagrch} for pure jump It\=o semimartingales (left panel) and plus a Brownian motion and a drift (right panel). The dashed red line indicates the nominal level $\alpha = 5\%$.
}
\end{figure}
Figure \ref{fig:grtwDu} shows simulated rejection probabilities of the test \eqref{testdfglobal} for different degrees of smoothness $w$ of the change in \eqref{etagrch}. The change is located at $\gseqi_0 = 0.4$, and $A$ is chosen such that the characteristic quantity for a gradual change satisfies $\Dc_\rho^{\scriptscriptstyle (g)}(1) = 3$ in each scenario. As expected, it is more difficult to distinguish a very smooth change from the null hypothesis, and therefore the rejection probability is decreasing in $w$. Similar to the CUSUM test investigated in Section \ref{fsperCUSUMpr}, the power of the test is increasing in $k_n = n \Delta_n$ as well. Furthermore, all simulation results on the tests \eqref{testdfglobal} and \eqref{testdflokal} are similar for pure jump processes and for processes including a Brownian component, which indicates that our truncation approach also works in this setup.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{grCPTth0Du}~~~ \includegraphics[width=0.48\textwidth]{grCPTth0DupB}
\vspace{-.5cm}
\caption{\label{fig:grtt0Du}
\it Simulated relative frequencies of rejections of the test \eqref{testdfglobal} for different locations of the change point $\gseqi_0 \in (0,1)$ in \eqref{etagrch} for pure jump processes (left-hand side) and with an additional Brownian motion with drift (right-hand side). The dashed red line indicates the nominal level $\alpha = 5\%$.
}
\end{figure}
In Figure \ref{fig:grtt0Du} we depict rejection rates of the test \eqref{testdfglobal} for different locations of the change point $\gseqi_0 \in (0,1)$. We simulate a linear change, i.e. we have $w=1$ in \eqref{etagrch}, and $A$ is chosen such that $\Dc_\rho^{\scriptscriptstyle (g)}(1)= 0.3$ holds in each run. As before, the power of the test is increasing in the effective sample size $k_n = n\Delta_n$ and moreover it is decreasing in $\gseqi_0$. The latter can be explained by the shape of the model \eqref{etagrch}: For large $\gseqi_0$ the jump characteristic is ``close'' to the null hypothesis.
\vspace{3mm}
\noindent
\textbf{Finite-sample performance of the estimator $\hat \gseqi_\rho^{\scriptscriptstyle (n)}$ defined in \eqref{grestdef}}
\vspace{.2cm}
Following \cite{HofVetDet17} we implement the estimator $\hat \gseqi_\rho^{\scriptscriptstyle (n)}$ in five steps as follows:
\begin{enumerate}
\item[\textit{Step 1.}] Choose a preliminary estimate \(\hat \gseqi^{\scriptscriptstyle (pr)} \in (0,1)\), a probability level \(\alpha \in (0,1) \) and a parameter \(r \in (0,1]\).
\item[\textit{Step 2.}] Initial choice of the tuning parameter \(\thrle_n\): \\
Evaluate \eqref{thrledfdefeq} for $\hat \gseqi^{\scriptscriptstyle (pr)}, \alpha$ and $r$ (with \( B=200\) as described above and where the supremum in $\dfi \in \R$ is approximated by the maximum over $\dfi \in T_2 = \{0.1 + j \cdot 0.3 \mid j=0,1,\ldots,9\}$) and obtain \( \hat\thrle^{\scriptscriptstyle (in)}\).
\item[\textit{Step 3.}] Intermediate estimate of the change point. \\
Evaluate \eqref{grestdef} for \(\hat\thrle^{\scriptscriptstyle (in)}\) and obtain \(\hat \gseqi^{\scriptscriptstyle (in)}\).
\item[\textit{Step 4.}] Final choice of the tuning parameter \(\thrle_n\): \\
Evaluate \eqref{thrledfdefeq} for $\hat \gseqi^{\scriptscriptstyle (in)}, \alpha, r$ and obtain \( \hat\thrle^{\scriptscriptstyle (fi)}\).
\item[\textit{Step 5.}] Estimate $\gseqi_0$. \\
Evaluate \eqref{grestdef} for \(\hat\thrle^{\scriptscriptstyle (fi)}\) and obtain the final estimate \(\hat \gseqi\) of the change point.
\end{enumerate}
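The five steps above can be sketched in Python as follows. Note that \texttt{calibrate\_threshold} and \texttt{estimate\_change\_point} are hypothetical placeholders standing in for the evaluation of \eqref{thrledfdefeq} and \eqref{grestdef}, which depend on the data and the bootstrap described above; only the control flow of the procedure is faithful to the text.

```python
# Sketch of the five-step estimation procedure. The two helper functions are
# ASSUMED placeholders: the real statistics are defined by (thrledfdefeq) and
# (grestdef) in the paper and require the bootstrap with B = 200 runs.

def calibrate_threshold(theta, alpha, r, data):
    # placeholder for evaluating (thrledfdefeq) at (theta, alpha, r)
    return (1.0 - theta) * r / alpha * len(data) ** (1 / 3)

def estimate_change_point(threshold, data):
    # placeholder for (grestdef): first rescaled index at which a detector
    # statistic exceeds the threshold
    for i, stat in enumerate(data, start=1):
        if stat > threshold:
            return i / len(data)
    return 1.0

def five_step_estimator(data, theta_pr=0.1, alpha=0.1, r=0.3):
    # Step 1: preliminary estimate theta_pr and parameters alpha, r are given
    # Step 2: initial threshold from the preliminary estimate
    lam_in = calibrate_threshold(theta_pr, alpha, r, data)
    # Step 3: intermediate change point estimate
    theta_in = estimate_change_point(lam_in, data)
    # Step 4: final threshold from the intermediate estimate
    lam_fi = calibrate_threshold(theta_in, alpha, r, data)
    # Step 5: final estimate of the change point
    return estimate_change_point(lam_fi, data)
```

The defaults \(\hat\gseqi^{\scriptscriptstyle (pr)} = 0.1\), $\alpha = 10\%$ and $r = 0.3$ mirror the choices discussed below.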
From the theoretical standpoint of Section \ref{subsec:cth0E} we have to ensure that the preliminary estimate \(\hat \gseqi^{\scriptscriptstyle (pr)}\) in Step 1 is consistent in order to guarantee consistency of the final estimate \(\hat \gseqi\). If not declared otherwise, we always make the ``arbitrary'' choice \(\hat \gseqi^{\scriptscriptstyle (pr)} = 0.1\) for two reasons: First, a simulation study not included in this paper, in which the estimation procedure is started in Step 2 with the choice \( \hat\thrle^{\scriptscriptstyle (in)} = \sqrt[3]{n\Delta_n}\) (which yields consistency according to Theorem \ref{SchaezerKonvT}), has shown results similar to the ones depicted below. Second, the small choice \(\hat \gseqi^{\scriptscriptstyle (pr)} = 0.1\) in Step 1 yields smaller values of the thresholds \( \hat\thrle^{\scriptscriptstyle (in)}\) and \( \hat\thrle^{\scriptscriptstyle (fi)}\), which reduces the computation time. Furthermore, in the following simulation study we choose the sample size $n=22500$ and vary the effective sample size $k_n = n \Delta_n \in \{50,100,250\}$. For the evaluation of \eqref{thrledfdefeq} we always use $\alpha = 10\%$, and for computational reasons suprema in $\dfi \in \R$ are approximated by maxima over $\dfi \in T_2 = \{0.1 + j \cdot 0.3 \mid j=0,1,\ldots,9\}$. If not declared otherwise, we simulate a linear change, i.e. $w =1$ in \eqref{etagrch}, located at $\gseqi_0 = 0.4$, and $A$ is always chosen such that the characteristic quantity for a gradual change satisfies $\Dc_\rho^{\scriptscriptstyle (g)}(1) = 3$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{grSchrDu}~~~ \includegraphics[width=0.48\textwidth]{grSchrDupBM}
\vspace{-.5cm}
\caption{\label{fig:grschrDu}
\it Mean absolute deviations of the estimator \eqref{grestdef} for different choices of $r \in (0,1]$ in Step 1 for pure jump It\=o semimartingales (left panel) and with an additional Brownian component (right panel).
}
\end{figure}
Figure \ref{fig:grschrDu} shows mean absolute deviations for different choices of $r \in (0,1]$ in Step 1.
The graphics suggest that in all cases the mean absolute deviation for $r=0.3$ is close to its overall minimum. Thus, we choose $r=0.3$ in Step 1 in all following investigations.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{grSchPEDu}~~~ \includegraphics[width=0.48\textwidth]{grSchPEDupB}
\vspace{-.5cm}
\caption{\label{fig:grschPEDu}
\it Mean absolute deviations of the estimator \eqref{grestdef} for different choices of the preliminary estimate \(\hat \gseqi^{\scriptscriptstyle (pr)} \in (0,1)\) in Step 1 for pure jump It\=o semimartingales (left-hand side) and plus an additional Brownian motion with drift (right-hand side).
}
\end{figure}
In Figure \ref{fig:grschPEDu} we depict mean absolute deviations of the estimator \eqref{grestdef} for different choices of the preliminary estimate \(\hat \gseqi^{\scriptscriptstyle (pr)} \in (0,1)\) in Step 1. The estimation error is smallest if the preliminary estimate is chosen close to $1$. This finding is consistent with another simulation study, not included below, which has shown that the estimation procedure \eqref{grestdef} tends to underestimate the change point. A choice of \(\hat \gseqi^{\scriptscriptstyle (pr)}\) close to $1$ induces larger values of the quantities \( \hat\thrle^{\scriptscriptstyle (in)}, \hat \gseqi^{\scriptscriptstyle (in)}, \hat\thrle^{\scriptscriptstyle (fi)}\) in Steps 2-4 and thus counteracts the underestimation.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{grSchsDu}~~~ \includegraphics[width=0.48\textwidth]{grSchsDupB}
\vspace{-.5cm}
\caption{\label{fig:grschsDu}
\it Mean absolute deviations of the estimator \eqref{grestdef} for different degrees of smoothness of the change $w$ in \eqref{etagrch} for pure jump processes (left panel) and with an additional continuous component (right panel).
}
\end{figure}
Figure \ref{fig:grschsDu} shows simulated mean absolute deviations of the estimator \eqref{grestdef} for different degrees of smoothness of the change $w$ in \eqref{etagrch}. The results correspond to Figure \ref{fig:grtwDu} and confirm the intuitive idea that a smooth change is more difficult to detect. Moreover, larger effective sample sizes $k_n = n \Delta_n$ yield better approximations and thus reduce the estimation error.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{grSchth0Du}~~~ \includegraphics[width=0.48\textwidth]{grSchth0DupB}
\vspace{-.5cm}
\caption{\label{fig:grschth0Du}
\it Simulated mean absolute deviations of the estimator \eqref{grestdef} for different locations of the change point for pure jump processes (left-hand side) and plus an additional Brownian motion with drift (right-hand side).
}
\end{figure}
In Figure \ref{fig:grschth0Du} we display simulated mean absolute deviations of the estimator $\hat \gseqi_\rho^{\scriptscriptstyle (n)}$ for different locations of the change point $\gseqi_0 \in (0,1)$ in \eqref{etagrch}. The results correspond to Figure \ref{fig:grtt0Du} and show that the change point is detected most accurately for small values of $\gseqi_0$. This is a consequence of model \eqref{etagrch}: for large $\gseqi_0 \in (0,1)$ the jump behaviour is nearly constant.
\vspace{1cm}
\medskip
{\bf Acknowledgements}
This work has been supported by the
Collaborative Research Center ``Statistical modeling of nonlinear
dynamic processes" (SFB 823, Project A1) and the Research Training Group ``High-dimensional phenomena in probability -- fluctuations and discontinuity" (RTG 2131) of the German Research Foundation (DFG).
\bibliographystyle{apalike}
\addcontentsline{toc}{chapter}{Bibliography}
\section{Motivation}
In the era of `big data', dimension reduction is critical for data science.
The aim is to map a set of high-dimensional points $x_1,x_2,...,x_n \in \mathcal{X}$ to a lower dimensional (feature) space $\mathcal{F}$
\begin{equation*}
\Psi: \mathcal{X} \subseteq \mathbb{R}^{p} \rightarrow \mathcal{F} \subseteq \mathbb{R}^{d}, \qquad d \ll p.
\end{equation*}
The map $\Psi$ aims to preserve large scale features, while suppressing uninformative variance (fine scale features) in the data~\cite{MAL-002,van2009dimensionality}.
Diffusion maps provide a flexible and data-driven framework for non-linear dimensionality reduction~\cite{lafon2004diffusion,Coifman7426,Coifman2006acha,Coifman2008mmas,Nadler2006acha}.
Inspired by stochastic dynamical systems, diffusion maps have been used in a diverse set of applications including
face recognition~\cite{barkan2013fast}, image segmentation~\cite{Karacan:2013:SIS:2508363.2508403}, gene expression analysis~\cite{xu2007applications}, and anomaly detection~\cite{mishne2013multiscale}.
Because computing the diffusion map requires the eigendecomposition of an $n \times n$ kernel matrix, the cost grows rapidly with the number of observations $n$, making it computationally intractable for long time series data, especially as parameter tuning is also required.
Randomized methods have recently emerged as a powerful strategy for handling `big data'~\cite{halko2011rand,Mahoney2011,liberty2013simple,erichson2016randomized,erichson2017compressed} and for linear dimensionality reduction~\cite{halko2011algorithm,rokhlin2009randomized, ERICHSON20181,erichson2017randomized,erichson2017randomizedCP}, with the Nystr\"om method being a popular randomized technique for the fast approximation of kernel machines~\cite{NIPS2000_1866,drineas2005nystrom}.
Specifically, the Nystr\"om method takes advantage of low-rank structure and a rapidly decaying eigenvalue spectrum of symmetric kernel matrices. Thus the memory and computational burdens of kernel methods can be substantially eased.
Inspired by these ideas, we take advantage of randomization as a computational strategy and propose a Nystr\"om-accelerated diffusion map algorithm.
\section{Diffusion Maps in a nutshell}
\begin{figure}[!b]
\begin{center}\scalebox{0.80}{
\begin{tikzpicture}
\begin{scope}[every node/.style={circle,thick,draw}]
\node[fill=red!10] (A) at (-1.5, 0) {A};
\node[fill=red!10] (B) at (-2.0, 3) {B};
\node[fill=red!10] (C) at ( 0.5, 2) {C};
\node[fill=blue!10] (D) at (6.5,3.5) {D};
\node[fill=blue!10] (F) at (7,2) {F} ;
\end{scope}
\begin{scope}[>={Stealth[black]},
every node/.style={fill = white,circle}]
\path [-, very thick, black] (A) edge node {$p(A,B)$} (B);
\path [-, very thick, gray] (B) edge (C);
\path [-, very thick, black] (A) edge node {$p(A,F)$} (F);
\path [-, very thick, gray] (D) edge (C);
\path [-, very thick, gray] (D) edge (F);
\path [-, very thick, gray] (D) edge (B);
\path [-, very thick, gray] (D) edge (A);
\path [-, very thick, gray] (A) edge (C);
\path [-, very thick, gray] (B) edge (F);
\path [-, very thick, gray] (C) edge (F);
\path [->, red, very thick, dashed] (A) edge[bend right=-60 red] (B);
\path [->, red, very thick, dashed] (A) edge[bend right=20 red] (F);
\end{scope}
\end{tikzpicture}}
\end{center}
\caption{
Nodes which have a high transition probability are considered to be highly connected. For instance, it is more likely to jump from node $A$ to $B$ than from $A$ to $F$.
}
\label{Fig:random_walk}
\end{figure}
Diffusion maps explore the relationship between heat diffusion and random walks on undirected graphs.
A graph can be constructed from the data using a kernel function $\kappa(x,y): \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$, which measures the similarity for all points in the input space $x,y \in {\mathcal{X}}$.
A similarity measure is, in some sense, the inverse of a distance function, \textit{i.e.}, similar objects are assigned large values.
Therefore, different kernel functions capture distinct features of the data.
Given such a graph, the connectivity between two data points can be quantified in terms of the probability $p(x,y)$ of jumping from $x$ to $y$. This is illustrated in Fig.~\ref{Fig:random_walk}.
Specifically, the quantity $p(x,y)$ is defined as the normalized kernel
\begin{equation}
p(x,y) := \dfrac{\kappa(x,y)}{\nu(x)}.
\end{equation}
This is known as the normalized graph Laplacian construction~\cite{chung1997spectral}, where ${\nu(x)} = \int_{\mathcal{X}} \kappa(x,y) \, \mu(y) \, \mathrm{d} y$ plays the role of the degree of a node in a graph, so that we have
\begin{equation}
\int\limits_\mathcal{X} \! p(x,y) \, \mu (y) \, \mathrm{d} y = 1,
\end{equation}
where $\mu(\cdot)$ denotes the measure of distribution of the data points on $\mathcal{X}$.
This means that $p(x,y)$ represents the transition kernel of a reversible Markov chain on the graph, \textit{i.e.}, $p(x,y)$ represents the one-step transition probability from $x$ to $y$.
Now, a diffusion operator $\mathbf{P}$ can be defined by integrating over all paths through the graph as
\begin{equation}\label{eq:P}
\mathbf{P}f(x) := \int\limits_{\mathcal{X}} \! p(x,y) \, f(y) \, \mu(y) \, \mathrm{d} y, \qquad \: \forall f \, \in \, L_1\left(\mathcal{X}\right),
\end{equation}
so that $\mathbf{P}$ defines the entire Markov chain~\cite{Coifman7426}.
More generally, we can define the probability of transition from each point to
another by running the Markov chain $t$ times forward:
\begin{equation}
\mathbf{P}^{t}f(x) := \int\limits_X \! p^{t}(x,y) \, f(y) \, \mu(y) \, \mathrm{d} y.
\end{equation}
The rationale is that the underlying geometric structure of the dataset is revealed at a magnified scale by taking larger powers of $\mathbf{P}$.
Hence, the diffusion time $t$ acts as a scale, \textit{i.e.}, the transition probability between far away points is decreased with each time step $t$.
Spectral methods can be used to characterize the properties of the Markov chain.
To do so, however, we need to define first a symmetric operator $\mathbf{A}$ as
\begin{equation}\label{eq:A}
\mathbf{A}f(x) := \int\limits_\mathcal{X} \! {a}(x,y) \, f(y) \, \mu(y) \, \mathrm{d} y
\end{equation}
by normalizing the kernel with a symmetrized measure
\begin{equation}
{a}(x,y) := \dfrac{\kappa(x,y)}{ \sqrt{\nu(x)} \sqrt{\nu(y)}}.
\end{equation}
This ensures that ${a}(x,y)$ is symmetric, ${a}(x,y)={a}(y,x)$, and positivity preserving $a(x,y)\geq 0\,\,\forall x,y$~\cite{Coifman2006acha,lovasz1993random}.
Now, the eigenvalues $\lambda_i$ and corresponding eigenfunctions $\phi_i(x)$ of the operator $\mathbf{A}$ can be used to describe the transition probability of the diffusion process.
Specifically, we can define the components of the diffusion map $\Psi^t(x)$ as the scaled eigenfunctions of the diffusion operator
\begin{equation*}
\Psi^t(x) = \left(\sqrt{\lambda_1^{t}} \, \phi_1(x), \sqrt{\lambda_2^{t}} \, \phi_2(x),...,\sqrt{\lambda_n^{t}} \, \phi_n(x) \right).
\end{equation*}
The diffusion map $\Psi^t(x)$ captures the underlying geometry of the input data.
Finally, to embed the data into an Euclidean space, we can use the diffusion map to evaluate the diffusion distance between two data points
\begin{equation*}
D^2_t(x,y)=||\Psi^t(x) - \Psi^t(y)||^2 \approx \sum_{i=1}^{d} \lambda_i^t (\phi_i(x) - \phi_i(y))^2,
\end{equation*}
where we may retain only the $d$ dominant components to achieve dimensionality reduction.
The diffusion distance reflects the connectivity of the data, \textit{i.e.}, points which are characterized by a high transition probability are considered to be highly connected.
This notion allows one to identify clusters in regions which are highly connected and which have a low probability of escape~\cite{lafon2004diffusion,Coifman2006acha}.
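As a small illustrative sketch (the function name and argument layout are our own, not from the paper), the truncated squared diffusion distance can be evaluated directly from the eigenpairs:

```python
import numpy as np

def diffusion_distance2(lam, phi_x, phi_y, t=1, d=None):
    # truncated squared diffusion distance:
    # D_t^2(x, y) ~ sum_{i <= d} lam_i^t * (phi_i(x) - phi_i(y))^2
    d = d or len(lam)
    lam = np.asarray(lam[:d])
    dphi = np.asarray(phi_x[:d]) - np.asarray(phi_y[:d])
    return float(np.sum(lam**t * dphi**2))
```

Larger diffusion times $t$ damp the contribution of the smaller eigenvalues, so far-apart points grow effectively closer only along well-connected directions.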
\section{Diffusion Maps meet Nystr\"om}
The Nystr\"om method~\cite{nystrom1930praktische} provides a powerful framework to solve Fredholm integral equations which take the form
\begin{equation}\label{eq:FredholmInt}
\int \! {a}(x,y) \, \phi_i(y) \, \mu (y) \, \mathrm{d} y = \lambda_i \phi_i(x).
\end{equation}
We recognize the resemblance with~\eqref{eq:A}. Suppose we are given a set of independent and identically distributed samples $\{x_1, x_2, ..., x_l\}$ drawn from $\mu (y)$. Then, the idea is to approximate Equation~\eqref{eq:FredholmInt} by computing the empirical average
\begin{equation}
\dfrac{1}{l}\sum_{j=1}^{l} {a}(x,x_j) \phi_i(x_j) \approx \lambda_i \phi_i(x).
\end{equation}
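The same empirical average yields the classical Nystr\"om out-of-sample extension: given an eigenpair of the discrete problem, the eigenfunction can be evaluated at a new point. A minimal sketch (our own naming, assuming a symmetric kernel callable \texttt{a}):

```python
import numpy as np

def nystrom_extend(a, x_new, samples, phi, lam):
    # out-of-sample extension via the empirical average:
    # phi_i(x) ~ (1 / (l * lam_i)) * sum_j a(x, x_j) * phi_i(x_j)
    l = len(samples)
    weights = np.array([a(x_new, xj) for xj in samples])
    return weights @ phi / (l * lam)
```

Evaluated at one of the samples, the extension reproduces the corresponding entry of the discrete eigenvector, which is a quick sanity check of the quadrature.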
Drawing on these ideas, Williams and Seeger~\cite{NIPS2000_1866} proposed the Nystr\"om method for the fast approximation of kernel matrices. This has led to a large body of research and we refer to~\cite{drineas2005nystrom} for an excellent and comprehensive treatment.
\subsection{Nystr\"om Accelerated Diffusion Maps Algorithm}
Let us express the diffusion maps algorithm in matrix notation. Let $\mathbf{X} \in \mathbb{R}^{n\times p}$ be a dataset with $n$ observations and $p$ variables.
Then, given $\kappa$ we form a symmetric kernel matrix $\mathbf{K} \in \mathbb{R}^{n\times n}$ where each entry is obtained as $\mathbf{K}_{{i,j}} = \kappa(x_{i},x_{j})$.
The diffusion operator in Equation~\eqref{eq:P} can be expressed in the form of a diffusion matrix $\mathbf{P} \in \mathbb{R}^{n\times n}$ as
\begin{equation}
\mathbf{P} := \mathbf{D}^{-1} \mathbf{K},
\end{equation}
where $\mathbf{D} \in \mathbb{R}^{n\times n}$ is a diagonal matrix which is computed as $\mathbf{D}_{{i,i}} = \sum _{j} \mathbf{K}_{{i,j}}$.
Next, we form a symmetric matrix
\begin{equation}
\mathbf{A} := \mathbf{D}^{-\frac{1}{2}} \mathbf{K} \mathbf{D}^{-\frac{1}{2}},
\end{equation}
which allows us to compute the eigendecomposition
\begin{equation}
\mathbf{A} = \mathbf{U} \mathbf{\Lambda} \mathbf{U}^\top.
\end{equation}
The columns $\phi_i \in \mathbb{R}^{n}$ of $\mathbf{U} \in \mathbb{R}^{n\times n}$ are the orthonormal eigenvectors.
The diagonal matrix $\mathbf{\Lambda}\in \mathbb{R}^{n\times n}$ has the eigenvalues $\lambda_1 \geq \lambda_2 \geq ... \geq 0$ in descending order as its entries.
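A minimal NumPy sketch of this deterministic pipeline (the Gaussian kernel of the Results section is assumed; \texttt{diffusion\_map} and its defaults are illustrative, not the authors' implementation):

```python
import numpy as np

def diffusion_map(X, sigma=0.5, d=2, t=1):
    # kernel matrix K_ij = exp(-||x_i - x_j||^2 / sigma)
    sq = np.sum(X**2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / sigma)
    # degree normalization: A = D^{-1/2} K D^{-1/2}
    deg = K.sum(axis=1)
    A = K / np.sqrt(np.outer(deg, deg))
    # eigh returns eigenvalues in ascending order -> reverse, keep top d
    lam, U = np.linalg.eigh(A)
    lam, U = lam[::-1][:d], U[:, ::-1][:, :d]
    # diffusion map: eigenvectors scaled by sqrt(lam^t), as in the text
    return np.sqrt(lam**t) * U
```

The $\mathcal{O}(n^3)$ eigendecomposition of the dense $n \times n$ matrix $\mathbf{A}$ is the bottleneck that the Nystr\"om method below avoids.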
The Nystr\"om method can now be used to quickly produce an approximation for the dominant $d$ eigenvalues and eigenvectors~\cite{NIPS2000_1866}. Assuming that $\mathbf{A} \in \mathbb{R}^{n\times n}$ is a symmetric positive semidefinite matrix (SPSD), the Nystr\"om method yields the following low-rank approximation for the diffusion matrix
\begin{equation}\label{eq:mat_nystroem}
\mathbf{A} \approx \mathbf{C} \mathbf{W}^{-1} \mathbf{C}^\top,
\end{equation}
where $\mathbf{C}$ is an $n\times d$ matrix which approximately captures the row and column space of $\mathbf{A}$. The matrix $\mathbf{W}$ has dimension $d\times d$ and is SPSD.
Following Halko et al.~\cite{halko2011rand}, we can factor $\mathbf{A}$ in Equation~\eqref{eq:mat_nystroem} using the Cholesky decomposition
\begin{equation}
\mathbf{A} \approx \mathbf{F}\mathbf{F}^\top,
\end{equation}
where $\mathbf{F} \in \mathbb{R}^{n\times d}$ is the approximate Cholesky factor, defined as
$\mathbf{F} := \mathbf{C} \mathbf{W}^{-\frac{1}{2}}$.
Then, we can obtain the eigenvectors and eigenvalues by computing the singular value decomposition
\begin{equation}
\mathbf{F} = \mathbf{\tilde{U}} \mathbf{\Sigma} \mathbf{V}^\top.
\end{equation}
The left singular vectors $\mathbf{\tilde{U}} \in \mathbb{R}^{n\times d}$ are the dominant $d$ eigenvectors of $\mathbf{A}$ and $\mathbf{\Lambda}=\mathbf{\Sigma}^2 \in \mathbb{R}^{d\times d}$ are the corresponding $d$ eigenvalues. Finally, since $\mathbf{A} = \mathbf{D}^{\frac{1}{2}} \mathbf{P} \mathbf{D}^{-\frac{1}{2}}$, we can recover the eigenvectors of the diffusion matrix $\mathbf{P}$ as $\mathbf{U} = \mathbf{D}^{-\frac{1}{2}}\mathbf{\tilde{U}}$.
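The factorization can be sketched as follows. As an assumption on our part, the Cholesky factor is replaced by the mathematically equivalent symmetric square root $\mathbf{W}^{-\frac{1}{2}}$ obtained from an eigendecomposition of $\mathbf{W}$, which is numerically safer when $\mathbf{W}$ is nearly singular:

```python
import numpy as np

def nystrom_eig(C, W, eps=1e-10):
    # eigenpairs of A ~ C W^{-1} C^T for SPSD W, via F = C W^{-1/2}
    w, V = np.linalg.eigh(W)
    w = np.maximum(w, eps)                 # guard against round-off negatives
    F = C @ (V / np.sqrt(w)) @ V.T         # F = C W^{-1/2}
    U, s, _ = np.linalg.svd(F, full_matrices=False)
    return U, s**2                         # eigenvectors and eigenvalues of A
```

For an exactly low-rank SPSD matrix whose sampled columns span its range, the reconstruction $\mathbf{U}\mathbf{\Lambda}\mathbf{U}^\top$ is exact up to round-off.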
\subsection{Matrix Sketching}
Different strategies are available to form the matrices $\mathbf{C}$ and $\mathbf{W}$. Column sampling is the most computationally and memory efficient, while random projections often provide an improved approximation. Thus, the different strategies pose a trade-off between speed and accuracy, and the optimal choice depends on the specific application.
\subsubsection{Column Sampling}
The most popular strategy to form $\mathbf{C} \in \mathbb{R}^{n\times d}$ is column sampling, \textit{i.e.}, we sample $d$ columns from $\mathbf{A}$.
Subsequently, the small matrix $\mathbf{W} \in \mathbb{R}^{d\times d}$ is formed by extracting $d$ rows from $\mathbf{C}$. Given an index vector $J\in \mathbb{N}^{d}$ we form the matrices as
\begin{equation}
\mathbf{C} := \mathbf{A}(:,J) \quad \text{and} \quad \mathbf{W} := \mathbf{C}(J,:) = \mathbf{A}(J,J).
\end{equation}
The index vector can be designed using random (uniform) sampling or importance sampling~\cite{kumar2012sampling}.
Column sampling is most efficient because it avoids explicit construction of the full kernel matrix.
For details we refer to~\cite{drineas2005nystrom}.
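A sketch of uniform column sampling (our own helper, assuming a kernel callable that evaluates only the required $n \times d$ block, so the full kernel matrix is never formed):

```python
import numpy as np

def nystrom_sample(kernel, X, d, rng):
    # uniform column sampling: J indexes d landmark points;
    # only the n x d block C is ever evaluated
    J = rng.choice(len(X), size=d, replace=False)
    C = kernel(X, X[J])                    # C = A(:, J)
    W = C[J, :]                            # W = C(J, :) = A(J, J)
    return C, W, J
```

Importance sampling~\cite{kumar2012sampling} would replace the uniform \texttt{choice} by a data-dependent distribution over columns.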
\subsubsection{Random Projections}
The second strategy is to use random projections~\cite{halko2011rand}. First, we form a random test matrix $\mathbf{\Omega} \in \mathbb{R}^{n\times l}$, which is used to sketch the diffusion matrix
\begin{equation}\label{eq:sketch}
\mathbf{S} := \mathbf{A} \mathbf{\Omega},
\end{equation}
where $l \ge d$ is chosen slightly larger than the desired target rank $d$ (oversampling).
Due to symmetry, the columns of $\mathbf{S} \in \mathbb{R}^{n\times l}$ provide a basis for both the column and row space of $\mathbf{A}$.
Then, an orthonormal basis $\mathbf{Q} \in \mathbb{R}^{n\times l}$ is obtained by computing the QR decomposition as $\mathbf{S} = \mathbf{Q} \mathbf{R}$.
We form the matrix $\mathbf{C} \in \mathbb{R}^{n\times l}$ and $\mathbf{W} \in \mathbb{R}^{l\times l}$ by projecting $\mathbf{A}$ to a lower-dimensional space as
\begin{equation}
\mathbf{C} := \mathbf{A} \mathbf{Q} \quad \text{and} \quad \mathbf{W} := \mathbf{Q}^\top \mathbf{C}.
\end{equation}
Further, the power iteration scheme can be used to improve the quality of the basis matrix $\mathbf{Q}$~\cite{halko2011rand}.
The idea is to sample from a preprocessed matrix
$\mathbf{S} = (\mathbf{A}\mathbf{A}^\top)^q \mathbf{A} \mathbf{\Omega}$,
instead of directly sampling from $\mathbf{A}$ as in Equation~\eqref{eq:sketch}.
Here, $q$ denotes the number of power iterations. In practice, this is implemented efficiently via subspace iterations.
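The random projection stage with subspace iterations can be sketched as follows (the helper \texttt{sketch\_basis} and its defaults are our own; the intermediate QR re-orthonormalization is the standard stabilization of the power scheme):

```python
import numpy as np

def sketch_basis(A, d, oversample=10, q=2, rng=None):
    # randomized range finder with q power (subspace) iterations
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    S = A @ rng.normal(size=(n, d + oversample))   # S = A * Omega
    for _ in range(q):
        S, _ = np.linalg.qr(S)        # re-orthonormalize for stability
        S = A @ (A.T @ S)             # one pass of (A A^T)
    Q, _ = np.linalg.qr(S)            # orthonormal basis for range(A)
    C = A @ Q                         # C = A Q
    W = Q.T @ C                       # W = Q^T A Q
    return C, W, Q
```

For a matrix of exact rank $d$ the basis captures the range of $\mathbf{A}$ essentially to machine precision, and $\mathbf{C}\mathbf{W}^{-1}\mathbf{C}^\top$ reconstructs $\mathbf{A}$.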
\section{Results}
In the following, we demonstrate the efficiency of our proposed Nystr\"om accelerated diffusion map algorithm. First, we explore both toy data and time-series data from a dynamical system. Then, we evaluate the computational performance and compare it with the deterministic diffusion map algorithm.
Here, we restrict the evaluation to the Gaussian kernel:
\begin{equation*}
\kappa(x,y) = \exp\left( -\sigma^{-1} ||x-y||_2^2\right),
\end{equation*}
where $\sigma$ controls the variance (width) of the distribution.
\subsection{Artificial Toy Datasets}
First, we consider two non-linear artificial datasets: the helix and the famous Swiss roll dataset.
Both datasets are perturbed with a small amount of white Gaussian noise.
Figure~\ref{fig:toy_data} shows both datasets. The first two components of the diffusion map $\Psi^t(x)$ are used to illustrate the non-linear embedding in two-dimensional space at time $t=100$. Then, we use the diffusion distance to cluster the data points. Indeed, the diffusion map correctly clusters both non-linear datasets. The width of the Gaussian kernel is set to $\sigma=0.5$.
\begin{figure}[!h]
\centering
\begin{subfigure}[t]{0.20\textwidth}
\centering
\DeclareGraphicsExtensions{.pdf}
\includegraphics[width=1\textwidth]{img/helix3d}
\end{subfigure}
~
\begin{subfigure}[t]{0.20\textwidth}
\centering
\DeclareGraphicsExtensions{.pdf}
\includegraphics[width=1\textwidth]{img/swiss3d}
\end{subfigure}
\begin{subfigure}[t]{0.18\textwidth}
\centering
\DeclareGraphicsExtensions{.pdf}
\includegraphics[width=1\textwidth]{img/helix_clustered100}
\end{subfigure}
~
\begin{subfigure}[t]{0.18\textwidth}
\centering
\DeclareGraphicsExtensions{.pdf}
\includegraphics[width=1\textwidth]{img/swiss3d_clustered100}
\end{subfigure}
\begin{subfigure}[t]{0.18\textwidth}
\centering
\DeclareGraphicsExtensions{.pdf}
\includegraphics[width=1\textwidth]{img/helix_embedding}
\caption{Noisy helix}
\end{subfigure}
~
\begin{subfigure}[t]{0.18\textwidth}
\centering
\DeclareGraphicsExtensions{.pdf}
\includegraphics[width=1\textwidth]{img/swiss3d_embedding}
\caption{Noisy Swiss roll}
\end{subfigure}
\caption{The top row shows the datasets. The second row shows the clustered data points at diffusion time $t=100$. The third row shows low-dimensional embedding computed using the Nystr\"om-accelerated diffusion map algorithm.}
\label{fig:toy_data}
\end{figure}
\newpage
\subsection{The Chaotic Lorenz System}
Next, we explore the embedding of nonlinear time-series data.
Discovering nonlinear transformations that map dynamical systems into a new coordinate system with favorable properties is at the center of modern efforts in data-driven dynamics.
One such favorable transformation is obtained by eigenfunctions of the Koopman operator, which provides an infinite-dimensional but linear representation of nonlinear dynamical systems~\cite{Koopman1931pnas,Mezic2005nd,Brunton2016plosone}.
Diffusion maps have recently been connected to Koopman analysis and are now increasingly being employed to analyze coherent structures and nonlinear embeddings of dynamical systems~\cite{Giannakis2015siads,Berry2015pre,Yair2017pnas}.
Here, we explore the chaotic Lorenz system, which is among the simplest and best-studied chaotic dynamical systems~\cite{Lorenz1963jas}:
\begin{equation}\label{Eq:Lorenz}
[\dot{x},\,\dot{y},\,\dot{z}] = [ \sigma (y - x),\, x(\rho -z) - y,\, x y - \beta z],
\end{equation}
with parameters $\sigma\!=\!10,\rho\!=\!28,$ and $\beta\!=\!8/3$.
We use the initial condition $\begin{bmatrix}-8 & 8 & 27\end{bmatrix}^\top$ and integrate the system from $t\!=\!0$ to $t\!=\!5$ with $\Delta t\approx0.0001$.
Figure~\ref{Fig:lorenz} shows the results.
\begin{figure}[!bt]
\centering
\begin{subfigure}[t]{0.2\textwidth}
\centering
\DeclareGraphicsExtensions{.pdf}
\includegraphics[width=1\textwidth]{img/lorenz3d}
\caption{Lorenz system}
\end{subfigure}
~
\begin{subfigure}[t]{0.18\textwidth}
\centering
\DeclareGraphicsExtensions{.pdf}
\includegraphics[width=1\textwidth]{img/lorenz_embedding}
\caption{2-D embedding}
\end{subfigure}
\begin{subfigure}[t]{0.18\textwidth}
\centering
\DeclareGraphicsExtensions{.pdf}
\includegraphics[width=1\textwidth]{img/lorenz_clustered100}
\caption{Clustered: $t=100$}
\end{subfigure}
~
\begin{subfigure}[t]{0.18\textwidth}
\centering
\DeclareGraphicsExtensions{.pdf}
\includegraphics[width=1\textwidth]{img/lorenz_clustered10000}
\caption{Clustered: $t=10000$}
\end{subfigure}
\caption{The chaotic Lorenz system and its two-dimensional embedding using diffusion maps. Here, a large number of diffusion time steps $t$ is required to obtain stable clusters.}
\label{Fig:lorenz}
\end{figure}
\subsection{Computational Performance}
Table~\ref{table:performance_summary} gives a flavor of the computational performance of the Nystr\"om-accelerated diffusion map algorithm. We achieve substantial computational savings over the deterministic diffusion map algorithm, while attaining small errors. The relative errors between the deterministic $\Psi^t(x)$ and randomized diffusion maps $\tilde{\Psi}^t(x)$ at $t\!=\!1$ are computed in the Frobenius norm: $||\, |\Psi^t(x)| - |\tilde{\Psi}^t(x)| \, ||_F / ||\, |\Psi^t(x)| \, ||_F$.
\begin{table}[!htb]
\caption{Computational performance for both the deterministic and the Nystr\"om accelerated diffusion map algorithm.}
\centering
\scalebox{0.8}{
\begin{tabular}{l c c c c c c}
\hline\hline
Dataset & \thead{Number of \\ Observations} & \thead{Time in s\\ Deterministic} & \thead{Time in s\\ Nystr\"om} & Speedup & Error \\ [0.5ex]
\hline
Helix & 15,000 & 40 & 11 & 3.6 & 1.8e-13\\
Swiss roll & 20,000 & 72 & 19 & 3.7 & 0.001 \\
Lorenz & 30,000 & 351 & 115 & 3.0 & 0.06 \\
[1ex]
\hline
\end{tabular}}
\label{table:performance_summary}
\end{table}
\begin{figure}[!h]
\centering
\DeclareGraphicsExtensions{.pdf}
\includegraphics[width=0.45\textwidth]{img/spectrum_log}
\caption{The Nystr\"om method faithfully captures the dominant eigenvalues of the Gaussian kernel for the Lorenz system.}
\label{fig:spectrum}
\end{figure}
The algorithms are implemented in Python and code is available via GitHub: \url{https://github.com/erichson}. The deterministic algorithm uses the fast ARPACK eigensolver provided by SciPy. The Nystr\"om accelerated diffusion map algorithm is computed using random projections with slight oversampling and two additional power iterations $q\!=\!2$. The target-rank (number of components) is set to $d=300$ for the toy data and $d\!=\!500$ for the Lorenz system.
Figure~\ref{fig:spectrum} shows the approximated eigenvalues for the Lorenz system.
\section{Discussion}
The computational complexity of diffusion maps scales with the number of observations $n$.
Thus, applications such as the analysis of time-series data from dynamical systems pose a computational challenge for diffusion maps.
Fortunately, the Nystr\"om method can be used to ease the computational demands.
However, diffusion maps are highly sensitive to the approximated range subspace which is provided by the eigenvectors. This means that the Nystr\"om method provides a good approximation only if: (a) the kernel matrix has low-rank structure; (b) the eigenvalue spectrum has a fast decay.
The Nystr\"om method shows an excellent performance using random projections with additional power iterations.
We achieve a speedup of roughly two to four times when approximating the dominant diffusion map components.
Unfortunately, the approximation quality turns out to be poor when using random column sampling.
Future work includes a more comprehensive evaluation study. Further, it is of interest to explore kernel functions which are more suitable for dynamical systems, \textit{e.g.}, cone-shaped kernels~\cite{zhao1990use,Giannakis2015siads}.
\newpage
\bibliographystyle{IEEEbib}
\small
\section{The sine-Gordon model spectrum}
\paragraph{Mass spectrum -}
The action of the sine-Gordon model as a perturbed conformal field theory can be written as:
\begin{equation}
\mathcal{S}_{\text{SGM}}=\int_{-\infty}^{\infty} {\rm d} t\int_{0}^{L} {\rm d} x\, \left[\frac{1}{8\pi}\partial_\nu \varphi \partial^\nu \varphi+\mu : \cos\left(\frac{\beta}{\sqrt{4\pi}}\varphi\right) : \right]\label{eq:SGactionZamolodchikov}
\end{equation}
where the semicolon denotes normal ordering of the free massless boson modes. The relation between the soliton mass $M$ and the coupling parameter $\mu$ is \cite{Zamo1995}:
\begin{equation}
\mu=\kappa(\xi)M^{2/(\xi+1)},
\end{equation}
where the parameter $\xi$ is defined as:
\begin{equation}
\xi=\frac{\beta^2}{8\pi-\beta^2}=\frac{\Delta}{1-\Delta},\label{eq:DefinitionP}
\end{equation}
with $\Delta$ the conformal weight of the vertex operator and the coupling-mass ratio $\kappa(\xi)$ is \cite{Zamo1995}:
\begin{equation}
\kappa(\xi)=\frac{2}{\pi}\frac{\Gamma\left(\frac{\xi}{\xi+1}\right)}{\Gamma\left(\frac{1}{\xi+1}\right)}\left[\frac{\sqrt{\pi}\Gamma\left(\frac{\xi+1}{2}\right)}{2\Gamma\left(\frac{\xi}{2}\right)}\right]^{2/(\xi+1)}.
\end{equation}
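As a small numerical sketch (not part of the original analysis), $\kappa(\xi)$ can be evaluated with Python's standard library; at the free-fermion point $\xi = 1$ (i.e. $\beta^2 = 4\pi$) the expression collapses to $\kappa(1) = 1/\pi$:

```python
import math

def kappa(xi):
    # coupling-mass ratio kappa(xi) from the mass-coupling relation above
    g = math.gamma
    pref = (2 / math.pi) * g(xi / (xi + 1)) / g(1 / (xi + 1))
    brack = math.sqrt(math.pi) * g((xi + 1) / 2) / (2 * g(xi / 2))
    return pref * brack ** (2 / (xi + 1))
```

This is convenient for converting between the perturbing coupling $\mu$ and the soliton mass $M$ when fixing simulation parameters.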
In \eqref{eq:SGactionZamolodchikov} we have used the rescaled field:
\begin{equation}
\phi=:\frac{1}{\sqrt{4\pi}}\varphi
\end{equation}
and compactified it on a circle of radius $R$:
\begin{equation}
\varphi\sim\varphi+2\pi R, \qquad R=\frac{ \sqrt{4\pi}}{\beta}=\sqrt{\frac{\xi+1}{2\xi}}=\frac{1}{\sqrt{2\Delta}} \label{eq:Compactif}
\end{equation}
to take into account the periodicity of the cosine potential.
In the above the length of the system is $L$ and we impose Dirichlet boundary conditions:
\begin{equation}
\varphi(0)=\varphi(L)=0
\end{equation}
The SGM particle spectrum consists of solitons, anti-solitons and, for $\xi<1$, also breathers, i.e. soliton-antisoliton bound states. For a given $\xi$, there are breathers labelled by $n=1,2,\ldots<1/\xi$ with masses:
\begin{equation}
m_n = 2 M \sin (n \pi \xi/2)
\end{equation}
plotted as a function of the interaction $\Delta=\beta^2/(8\pi)=\xi/(\xi+1)$ in Fig.~\ref{fig:sG-masses}.
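A short sketch (our own helper) that enumerates the breather spectrum for a given $\xi$; note that $m_1 \to \pi\xi M$ in the small-$\xi$ (semiclassical) limit:

```python
import math

def breather_masses(xi, M=1.0):
    # m_n = 2 M sin(n pi xi / 2) for n = 1, 2, ... < 1/xi (attractive regime)
    n_max = math.ceil(1 / xi) - 1
    return [2 * M * math.sin(n * math.pi * xi / 2) for n in range(1, n_max + 1)]
```

For the values of the interaction used in the numerics, e.g. $\xi = 1/8$, this yields seven breathers, all lying below the soliton-antisoliton threshold $2M$.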
\begin{figure}[htbp]
\begin{center}
\centering
\includegraphics[width=.6\linewidth]{{{sG_masses}}}
\caption{Breather masses $m_n$ in units of soliton mass $M$ as a function of interaction $\beta^2/(8\pi)$. Dashed vertical lines denote the values of interaction used for the numerics: from left to right $\frac1{260},\frac1{100},\frac1{18}$ and $\frac1{8}$. The horizontal line denotes the inverse system size $1/(ML)=1/25$ for comparison.}
\label{fig:sG-masses}
\end{center}
\end{figure}
\paragraph{Excitation level of states -}
Here we discuss in more detail the argument used in the main text to explain the behaviour of non-Gaussianity of the states investigated. For any state it is possible to express its excitation level compared to the potential using a single dimensionless quantity. The potential term of the SGM Hamiltonian is
\begin{equation}
-\int dx\frac{m^{2}}{\beta^{2}}\cos\beta\varphi
\end{equation}
and for small $\beta$, the soliton mass is
\begin{equation}
M=\frac{8m}{\beta^{2}}
\end{equation}
Assuming that the energy of the state relative to the ground state is given by $\chi$
\begin{equation}
E-E_{0}=\chi M
\end{equation}
while the potential height in finite volume is
\begin{equation}
\Delta V=\frac{2m^{2}}{\beta^{2}}L
\end{equation}
the relevant ratio is given by
\begin{equation}
\frac{E-E_{0}}{\Delta V}=\frac{\chi M}{\frac{2m^{2}}{\beta^{2}}L}=\frac{4\chi}{mL}
\label{energyratio}
\end{equation}
where $m$ can be replaced with the first breather mass $m_{1}$ in the small $\beta$ regime.
If this dimensionless quantity is small, the state lies at the bottom of the potential, where the potential is effectively parabolic. The excitations in such a state are therefore free massive phonons and non-Gaussianity is suppressed. This happens in the ground state and in low-temperature states. On the contrary, if the dimensionless ratio is larger, of the order of $0.5$, the state experiences the full cosine potential, the excitations are solitons and breathers, and the state can be highly non-Gaussian. This happens at intermediate temperatures and in low-energy excited states. If the dimensionless ratio is higher still, that is, much larger than $1$, the cosine potential becomes insignificant and the system is effectively free: the excitations are free massless bosons and non-Gaussianity is again suppressed. This happens at high temperatures and in highly excited states.
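As a concrete illustration of this argument, the ratio \eqref{energyratio} can be evaluated for the states studied later in this supplement. The sketch below is illustrative only (not part of our TCSA code) and assumes the quench-section parameters: post-quench interaction $\Delta=1/8$ (so $\xi=1/7$), system size $ML=25$, and energies $\chi\approx0.111$ (ground-state quench initial state) and $\chi\approx1.243$ (excited initial state of Fig.~5), with $m$ replaced by the first breather mass $m_1=2M\sin(\pi\xi/2)$.

```python
import math

def excitation_ratio(chi, mL):
    """(E - E_0)/Delta V = 4*chi/(m*L), Eq. (energyratio)."""
    return 4.0 * chi / mL

Delta = 1 / 8                          # post-quench interaction of the quench sections
xi = Delta / (1 - Delta)               # from Delta = xi/(xi + 1)
m1 = 2 * math.sin(math.pi * xi / 2)    # first breather mass in units of M
ML = 25.0                              # system size in units of 1/M

r_ground = excitation_ratio(0.111, m1 * ML)    # ground-state quench initial state
r_excited = excitation_ratio(1.243, m1 * ML)   # excited initial state of Fig. 5
```

This gives $\approx 0.04$ for the former (harmonic bottom of the potential, suppressed non-Gaussianity) and $\approx 0.45$ for the latter (the full cosine potential is felt).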
\section{Truncated Conformal Space Approach for the sine-Gordon correlation functions\label{app:TCSA}}
In this section we explain the adaptation of the Truncated Conformal Space Approach (TCSA) to compute the correlation functions of the sine-Gordon model on a finite interval. The general idea of the TCSA is to write the theory of interest as the conformal part plus a relevant perturbation. Then, the Hamiltonian and the observables are expressed as matrices in the basis of the conformal part and a truncation at certain energy is introduced to keep the matrices finite. All the operators are expressed in the standard CFT notation of complex Euclidean spacetime coordinates $z=e^{\frac{\pi}{L}(\tau- {\rm i} x)}$ and $\bar{z}=e^{\frac{\pi}{L}(\tau+ {\rm i} x)}$ with $\tau= {\rm i} t$ the imaginary time. However, the complex coordinates are introduced just to aid the computation of the matrix elements of the operators in the CFT basis. The TCSA time propagation in our work is always done in real time $t$ and we use the Schr\"{o}dinger picture in which all operators expressing fields and physical observables are time independent and only the states carry the time evolution. Therefore we can always use the expressions for the operators at $\tau=0$.
Let us begin by introducing the vertex operator defined as \cite{CFT}:
\begin{equation}
V_n(z,\bar{z})=e^{i\frac{n}{R}\varphi(z,\bar{z})}.
\end{equation}
With its help, we can write the sine-Gordon Hamiltonian as:
\begin{eqnarray}
H_{\text{SGM}}
&:=& H_{\text{FB}}- \frac{\kappa(\xi)}{2} \left(\frac{\pi}{ML}\right)^{2\Delta} M^2 \int_{0}^{L} {\rm d} x \left(V_{1}(e^{- {\rm i} \frac{\pi}{L}x},e^{ {\rm i} \frac{\pi}{L}x})+V_{-1}(e^{- {\rm i} \frac{\pi}{L}x},e^{ {\rm i} \frac{\pi}{L}x})\right)\label{eq:SGhamiltonian}
\end{eqnarray}
where
\begin{equation}
H_{\text{FB}}=\frac{1}{8\pi}\int_{0}^{L} {\rm d} x \left[\left(\partial_t \varphi\right)^2+\left(\partial_x \varphi\right)^2\right]
\end{equation}
is the free massless boson Hamiltonian. The soliton mass $M$ plays the role of energy or inverse length unit. The factor $\left(\frac{\pi}{ML}\right)^{2\Delta}$ is the conformal scaling factor associated with the vertex operator when transforming from the strip of width $L$ to the plane geometry and the corresponding scaling dimension is $\Delta$ \cite{CFT}.
The Hamiltonian \eqref{eq:SGhamiltonian} is already written in the TCSA form, where the free massless boson term represents the exactly solvable (conformal) part and the interaction term represents the relevant perturbation. In the following, we discuss the free massless boson Hilbert space, give the matrix elements of the operators used in the computation and explain how the TCSA simulation is done.
\paragraph{Free massless boson Hilbert space -} The boson field $\varphi$ satisfying Dirichlet boundary conditions takes the form \cite{CFT}:
\begin{equation}
\varphi(x)=\varphi_0- \frac{2\pi}{L}R W x+2\sum_{k\neq0}\frac{a_k}{k}\sin(k{\pi}x/{L}).\label{eq:FBfield}
\end{equation}
For other boundary conditions (Neumann and periodic), $\varphi_0$ is an operator, which is divergent in itself and only its exponential is well defined. In the case of Dirichlet boundary conditions, $\varphi_0$ is a number corresponding to the $x=0$ boundary value of the field and in our case $\varphi_0=0$. The operator $W$ gives the difference of the field at the two ends of the interval; for the case of periodic boundary conditions its values are quantised and give the winding number of the compact scalar field, a.k.a. the topological charge carried by the solitonic excitations. For Dirichlet boundary conditions, different values of $W$ correspond to distinct sectors, each consisting of a single Fock space. The case of two identical Dirichlet boundaries on the strip corresponds to $W=0$.
We quantize the field using the following commutation relations:
\begin{equation}
\left[a_k,a_l\right]=k\delta_{k+l}.\label{eq:CommRelGeneral}
\end{equation}
From the vacuum state $\left|0\right>$ that is defined by:
\begin{equation}
a_k\left|0\right>=0,\quad\text{for all }k>0\label{eq:Annih}
\end{equation}
we can construct descendant states by acting with the creation operators $a_{-k}=a_k^\dagger$, a (nonnegative integer) number of times $r_k\geq 0$:
\begin{equation}
\left|\vec{r}\right>=\left|r_1,r_2,\ldots,r_k,\ldots\right>:=\frac{1}{N_{\vec{r}}}\prod_{k=1}^{\infty}a_{-k}^{r_k}\left|0\right>.\label{eq:States}
\end{equation}
The normalization is given by:
\begin{equation}
N_{\vec{r}}^2=\left<0\right|\prod_{k=1}^{\infty}a_k^{r_k}a_{-k}^{r_k}\left|0\right>=\prod_{k=1}^{\infty}(r_k!k^{r_k}).\label{eq:Normalization}
\end{equation}
These states provide a basis of the $W=0$ Fock space, that is the Hilbert space for our problem.
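The construction of this basis and of the normalization \eqref{eq:Normalization} can be sketched as follows (a minimal illustration, independent of our actual TCSA implementation):

```python
import math

def basis_states(max_level):
    """All occupation vectors (r_1, ..., r_{max_level}) with sum_k k*r_k <= max_level."""
    states = []
    def build(k, remaining, partial):
        if k > max_level:
            states.append(tuple(partial))
            return
        for r in range(remaining // k + 1):      # r_k quanta of mode k cost k*r each
            build(k + 1, remaining - k * r, partial + [r])
    build(1, max_level, [])
    return states

def norm_sq(r):
    """N_r^2 = prod_k r_k! * k**r_k, Eq. (eq:Normalization)."""
    out = 1
    for k, rk in enumerate(r, start=1):
        out *= math.factorial(rk) * k ** rk
    return out
```

For instance, \texttt{norm\_sq((2, 1))} gives $2!\,1^{2}\cdot 1!\,2^{1}=4$, and \texttt{basis\_states(3)} returns the $7$ states at level $\leq 3$.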
\paragraph{Matrix elements -}
To perform the TCSA, we have to compute the matrix elements of all the required operators in the free massless boson Hilbert space. This is done by performing the algebra using the commutation relations \eqref{eq:CommRelGeneral}. For a given pair of basis states of the $W=0$ Fock space:
\begin{eqnarray}
\left|\psi\right>&=&\left|\vec{r}\right>,\\
\left|\psi'\right>&=&\left|\vec{r}'\right>,
\end{eqnarray}
let us denote the corresponding matrix element of an operator $O$ by:
\begin{equation}
O^{\psi',\psi}:=\left<\psi'\right|O\left|\psi\right>.
\end{equation}
For the results presented in this work we need the following operators.
The free massless boson Hamiltonian for Dirichlet boundary conditions is diagonal with matrix elements:
\begin{equation}
H_{\text{FB}}^{\psi',\psi}=\frac{\pi}{L}\left(\sum_{k=1}^{\infty}kr_k-\frac{1}{24}\right)\delta_{\psi',\psi}.\label{eq:FBmatrixElements}
\end{equation}
The Hamiltonian of the massive free boson:
\begin{equation}
H_{\text{mFB}}=\frac{1}{8\pi}\int_{0}^{L} {\rm d} x \left[\left(\partial_t \varphi\right)^2+\left(\partial_x \varphi\right)^2+m^2\varphi^2\right]
\end{equation}
has the following matrix elements:
\begin{eqnarray}
H_{\text{mFB}}^{\psi',\psi}&=&\frac{\pi}{L}\left\{\delta_{\psi',\psi}\left(\sum_{k=1}^{\infty}\left(1+\frac{m^2L^2}{2\pi^2k^2}\right)kr_k-\frac{1}{24}\right)+\right.\\
&&\left.+\frac{m^2L^2}{4\pi^2}\sum_{k=1}^{\infty}\left(\prod_{n=1\atop n\neq k}^{\infty}\delta_{r'_n,r_n}\right)\frac{1}{k^2}\left(\sqrt{r_k k}\sqrt{(r_k-1) k}\,\delta_{r'_k+2,r_k}+\sqrt{(r_k+2) k}\sqrt{(r_k+1) k}\,\delta_{r'_k-2,r_k}\right)\right\}.\nonumber
\end{eqnarray}
The expression for the vertex operator can be written in normal ordered form as \cite{CFT}:
\begin{equation}
V_n(z,\bar{z})=e^{iq\varphi(z,\bar{z})}=\left|z-\bar{z}\right|^{-q^2}:e^{iq\varphi(z,\bar{z})}:
\end{equation}
where $q\equiv n/R$ with the value of the compactification radius $R$ given in (\ref{eq:Compactif}).
Its matrix elements are:
\begin{equation}
V_n^{\psi',\psi}\left(e^{ {\rm i} \frac{\pi}{L}x},e^{- {\rm i} \frac{\pi}{L}x}\right)=N_{\vec{r}'}^{-1}N_{\vec{r}}^{-1}\left[2\sin\left(\frac{\pi x}{L}\right)\right]^{-q^2}\prod_{k=1}^{\infty}\left<0\right|a_k^{r'_k}e^{- q \frac{a_{-k}}{k}(z^k-\bar{z}^k)}e^{ q \frac{a_k}{k}(z^{-k}-\bar{z}^{-k})}a_{-k}^{r_k}\left|0\right>,
\end{equation}
with:
\begin{eqnarray}
&\left<0\right|a_k^{r'_k}e^{- q \frac{a_{-k}}{k}(z^k-\bar{z}^k)}e^{ q \frac{a_k}{k}(z^{-k}-\bar{z}^{-k})}a_{-k}^{r_k}\left|0\right>=&\nonumber\\
&=\sum_{j'=0}^{\infty}\sum_{j=0}^{\infty}\frac{1}{j'!j!}\left(\frac{2q}{k}\right)^{j'+j}\left[\frac{\bar{z}^k-z^k}{2}\right]^{j'+j}\left<0\right|a_k^{r'_k}a_{-k}^{j'}a_k^{j}a_{-k}^{r_k}\left|0\right>&
\end{eqnarray}
and:
\begin{equation}
\left<0\right|a_k^{r'_k}a_{-k}^{j'}a_k^{j}a_{-k}^{r_k}\left|0\right>=k^{j'+j}\left(\begin{array}{c}r'_k\\j'\end{array}\right)\left(\begin{array}{c}r_k\\j\end{array}\right)j'!j!(r_k-j)!k^{r_k-j}\delta_{r'_k-j',r_k-j}\Theta(r_k\geq j).\label{eq:CombinatoricTerm2}
\end{equation}
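Equation \eqref{eq:CombinatoricTerm2} can be cross-checked by brute-force ladder-operator algebra. The sketch below (an illustration, not part of our code) works in the unnormalised basis $e_n=a_{-k}^{n}\left|0\right>$, on which $a_{-k}e_n=e_{n+1}$ and $a_k e_n=nk\,e_{n-1}$ as follows from $[a_k,a_{-k}]=k$:

```python
import math

def closed_form(k, rp, jp, j, r):
    """Right-hand side of Eq. (eq:CombinatoricTerm2)."""
    if j > r or rp - jp != r - j:     # Theta(r_k >= j) and the Kronecker delta
        return 0
    return (k ** (jp + j) * math.comb(rp, jp) * math.comb(r, j)
            * math.factorial(jp) * math.factorial(j)
            * math.factorial(r - j) * k ** (r - j))

def brute_force(k, rp, jp, j, r):
    """<0| a_k^{r'} a_{-k}^{j'} a_k^{j} a_{-k}^{r} |0> by direct ladder algebra."""
    state = {0: 1}                            # coefficients on e_n = a_{-k}^n |0>
    for _ in range(r):                        # a_{-k}^r : e_n -> e_{n+1}
        state = {n + 1: c for n, c in state.items()}
    for _ in range(j):                        # a_k^j : e_n -> n k e_{n-1}
        state = {n - 1: c * n * k for n, c in state.items() if n > 0}
    for _ in range(jp):                       # a_{-k}^{j'}
        state = {n + 1: c for n, c in state.items()}
    for _ in range(rp):                       # a_k^{r'}
        state = {n - 1: c * n * k for n, c in state.items() if n > 0}
    return state.get(0, 0)                    # project onto <0|
```

The two expressions agree for all small occupation numbers, which we used as a consistency check of the transcription.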
To get the matrix elements of the spatially integrated vertex operator that appears in the sine-Gordon Hamiltonian (\ref{eq:SGhamiltonian}), the following relation is useful:
\begin{equation}
\int_{0}^{\pi} {\rm d} u \left[2\sin\left(u\right)\right]^{-q^2}e^{- {\rm i} k u}=\frac{e^{- {\rm i} \frac{\pi}{2}k}\pi}{(1-q^2)B\left(\frac{1}{2}(2-q^2-k),\frac{1}{2}(2-q^2+k)\right)}.
\end{equation}
Here, $B(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$ is the beta function.
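This identity can also be checked numerically; the endpoint singularity $\sim u^{-q^2}$ is integrable for $q^2<1$, so even a plain midpoint rule converges, if slowly. An illustrative sketch for the sample values $q^2=0.2$, $k=1$:

```python
import cmath, math

def analytic(q2, k):
    """RHS: e^{-i pi k/2} pi / ((1 - q^2) B((2 - q^2 - k)/2, (2 - q^2 + k)/2))."""
    B = math.gamma(1 - q2 / 2 - k / 2) * math.gamma(1 - q2 / 2 + k / 2) / math.gamma(2 - q2)
    return cmath.exp(-1j * math.pi * k / 2) * math.pi / ((1 - q2) * B)

def midpoint(q2, k, n=200_000):
    """Midpoint-rule estimate of the LHS integral over u in (0, pi)."""
    h = math.pi / n
    return sum((2 * math.sin((i + 0.5) * h)) ** (-q2)
               * cmath.exp(-1j * k * (i + 0.5) * h) for i in range(n)) * h

q2, k = 0.2, 1
```

At $q=0$ the formula reduces to the elementary integral $\int_0^\pi e^{-{\rm i}u}\,{\rm d}u=-2{\rm i}$, which the right-hand side reproduces exactly.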
Lastly the matrix elements of the $\varphi$ operator are:
\begin{equation}
\varphi^{\psi',\psi}(x)=2\sum_{n=1}^{\infty}\left(\prod_{
k=1\atop
k\neq n}^{\infty}\delta_{r'_k,r_k}\right)\left(\sqrt{\frac{r_n}{n} }\,\delta_{r'_n+1,r_n} +\sqrt{\frac{r_n+1}{n}}\,\delta_{r'_n-1,r_n}\right)\sin(n{\pi}x/{L}).
\end{equation}
\paragraph{Thermal states and time evolution -}
The density matrix $\rho(H,T)$ that describes a thermal state of some Hamiltonian $H$ at temperature $T$ is given by:
\begin{equation}
\rho(H,T)=\frac{e^{-H/T}}{\text{tr}\left(e^{-H/T}\right)}.
\end{equation}
In order to construct a thermal density matrix, we first construct the TCSA form of $H$ and then compute $\rho(H,T)$ numerically, using matrix exponentiation.
Similarly, for the study of quench dynamics, the time evolution operator
\begin{equation}
U(H,t)=e^{- {\rm i} H t},
\end{equation}
corresponding to the chosen post-quench Hamiltonian $H$ and time $t$ after the quench, is computed numerically by matrix exponentiation.
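For a toy $2\times2$ ``Hamiltonian'' standing in for the truncated TCSA matrix, the construction of $\rho(H,T)$ and $U(H,t)$ by eigendecomposition (which is how the matrix exponentials are evaluated in practice) can be sketched as follows; this is a minimal illustration, not our production code:

```python
import cmath, math

def eig2(e1, e2, g):
    """Eigenvalues and orthonormal eigenvectors of [[e1, g], [g, e2]], real symmetric, g != 0."""
    mean, d = (e1 + e2) / 2, math.hypot((e1 - e2) / 2, g)
    vals = (mean - d, mean + d)
    vecs = []
    for lam in vals:
        v = (g, lam - e1)                 # solves (H - lam) v = 0
        n = math.hypot(*v)
        vecs.append((v[0] / n, v[1] / n))
    return vals, vecs

def func_of_H(e1, e2, g, f):
    """Matrix function f(H) = sum_a f(lam_a) |a><a|."""
    vals, vecs = eig2(e1, e2, g)
    out = [[0j, 0j], [0j, 0j]]
    for lam, v in zip(vals, vecs):
        for i in range(2):
            for j in range(2):
                out[i][j] += f(lam) * v[i] * v[j]
    return out

e1, e2, g, T, t = 0.0, 1.0, 0.3, 0.5, 2.0
Z = sum(math.exp(-lam / T) for lam in eig2(e1, e2, g)[0])
rho = func_of_H(e1, e2, g, lambda lam: math.exp(-lam / T) / Z)    # thermal state
U = func_of_H(e1, e2, g, lambda lam: cmath.exp(-1j * lam * t))    # propagator
```

By construction $\text{tr}\,\rho=1$ and $U$ is unitary, which is what the consistency checks below verify.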
\section{Truncation effects and computational performance} \label{sec:TruncationPerformance}
\paragraph{Truncation -}
The TCSA simulation is done by representing the operators as numerical matrices (using the matrix elements computed above) and introducing a cutoff. This is done by keeping only those states in the Hilbert space whose energy (with respect to the free boson Hamiltonian \eqref{eq:FBmatrixElements}) is below the chosen cutoff value. In this way we keep the matrices finite. The number of states in the Hilbert space for a chosen cutoff:
\begin{equation}
\text{cutoff}:=\Big(\sum_{k=1}^{\infty}kr_k\Big)_{\text{max}}
\end{equation}
is given by the cumulative sum of the (combinatorial) partition function:
\begin{equation}
\text{\#states}=\sum_{n=0}^{\text{cutoff}}p(n).
\end{equation}
The values relevant for this work are listed in Table~\ref{tab:NoStates}.
\begin{center}
\begin{table}[htbp]
\begin{tabular}{ |c|c| }
\hline
\textbf{cutoff} & \,\textbf{\#states} \\ \hline
15 & 684 \\ \hline
16 & 915 \\ \hline
17 & 1212 \\ \hline
18 & 1597 \\ \hline
19 & 2087 \\ \hline
20 & 2714 \\ \hline
21 & 3506 \\ \hline
22 & 4508 \\ \hline
\end{tabular}
\caption{Number of states in the Hilbert space for the energy cutoff values used for the analysis in this work.\label{tab:NoStates}}
\end{table}
\end{center}
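The counts in Table~\ref{tab:NoStates} can be reproduced with a standard dynamic-programming count of integer partitions; a short illustrative sketch:

```python
def n_states(cutoff):
    """Number of Fock states with sum_k k*r_k <= cutoff (cumulative partition count)."""
    p = [0] * (cutoff + 1)
    p[0] = 1
    for k in range(1, cutoff + 1):        # allow parts (mode energies) of size k
        for n in range(k, cutoff + 1):
            p[n] += p[n - k]              # p[n] becomes the partition number p(n)
    return sum(p)                         # cumulative sum over levels 0..cutoff

print([n_states(c) for c in range(15, 23)])
# [684, 915, 1212, 1597, 2087, 2714, 3506, 4508]
```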
\paragraph{Truncation effects -}
Truncation effects originate from neglecting the contribution of modes above the cutoff energy. On the one hand, this means that the highest spatial and temporal resolution we can achieve is restricted by the value of the energy cutoff, which also plays the role of a short-wavelength cutoff. On the other hand, since quantum dynamics is oscillatory and we have only approximate values for the energy eigenvalues (and therefore for the oscillation frequencies), our time series will eventually get out of phase after several oscillations, which restricts the longest time scale we can reach at a given cutoff. As in all spectral numerical methods, convergence for time-averaged values, amplitudes of oscillations and Fourier spectra is achieved much more easily than for time-series data.
It should also be stressed that analogous effects are inevitably present in the experimental system \cite{exp-sG,exp-GGE}, since the SGM is only a low-energy approximation of the actual system. In such quantum gases the cutoff scale above which the approximation breaks down is determined by factors like the nonzero range of the effective inter-particle interaction, which induces nonlinearity in the dispersion relation of the bosonisation (density and phase) fields and a violation of relativistic invariance at higher energies. Therefore the challenge in comparing theory and experiment is precisely to disentangle such high-energy deviations from the low-energy physics.
There are several ways to determine the quality of our numerical data and the parameter ranges where the method is reliable. Expanding the states used in our computations in the free boson basis, one can examine the amplitudes versus the energies of the basis states and check whether they decrease to a sufficiently small value for states in the vicinity of the cutoff. Another way to verify the results is to plot the values of the observables (for example correlation function time-series) for different values of the cutoff and check that they have converged within a sufficiently low tolerance level. In our computation we used both approaches to verify that the TCSA is reliable for the observables and parameter values used in this work.
\begin{figure}[htbp]
\begin{center}
\centering
\includegraphics[width=.6\linewidth]{{{comparison}}}
\caption{Comparison of $\langle\cos\beta\phi\rangle$ as a function of interaction $\beta$ for three different types of states, computed by different methods: ground state in a thermodynamically large system (red line) as given by the exact analytical formula of Lukyanov-Zamolodchikov \cite{Lukyanov-Zamo1}, ground state in a finite system of length $L=25$ with Dirichlet boundary conditions (black line and dots) computed from TCSA, and ground state in a finite system of the same length with periodic boundary conditions (blue line and dots) computed numerically using the Non-Linear Integral Equation (NLIE) \cite{Klumper1991,Destri-deVega1}. The TCSA method gives reliable data for $\beta\lesssim1.2$, while the NLIE method does so for $\beta\gtrsim 0.6$. The three lines converge for $\beta \gtrsim 1$ ($\Delta \gtrsim 0.04$), because for such interactions the mass of the lightest breather is sufficiently larger than the inverse system size (at least 3 times larger), so that the system is practically in the thermodynamic limit (finite-size effects and the dependence on boundary conditions are negligible). This convergence provides a nontrivial verification of our numerics.}
\label{fig:cos-beta-phi}
\end{center}
\end{figure}
In addition to the above convergence tests, we have also performed a number of nontrivial tests of our numerics through comparison with known analytical results and with other numerical methods. First, we compared the TCSA energy spectrum with that predicted by integrability following the approach in \cite{TCSA-bsG}. Second, we compared the expectation value of the vertex one-point function $\langle\cos\beta\phi\rangle$ to integrability predictions. We computed this value at the middle of the box with Dirichlet boundary conditions for various values of the interaction $\beta$ using the TCSA. For the ground state in an infinite size system, an analytical formula by Lukyanov-Zamolodchikov \cite{Lukyanov-Zamo1} is available. For the ground state value in a finite box with periodic boundary conditions one can use numerical data from the so-called Non-Linear Integral Equation \cite{Klumper1991,Destri-deVega1}. In Fig.~\ref{fig:cos-beta-phi} we plot together the results for these three different types of states as functions of the interaction. TCSA data converge well for all values of interaction $\beta\lesssim1.2$ ($\Delta\lesssim0.055$), while the NLIE converges for $\beta\gtrsim 0.6$. In the region $\beta \gtrsim 1$ ($\Delta \gtrsim 0.04$) where the correlation length is small enough compared to the system size so that finite size and boundary effects are negligible, all three methods give results that agree with each other very well.
In order to benchmark the application of our numerical method to quench dynamics, we have also performed comparison with exact analytical results for the free massless and massive cases \cite{CC,CC2007} always observing good agreement. Analogous tests have been already performed successfully in the context of quenches in Ising field theory \cite{TCSA-QQ, TCSA_latest} and other truncation-based methods applied to the study of quantum quenches \cite{TCSA_QQ1,TCSA_QQ2,TCSA_QQ3,TCSA_QQ4}.
\paragraph{Performance -}
The crucial steps of our TCSA simulation are the computation of the matrix elements of the Hamiltonians and the observables (for the selected ordering of the states \eqref{eq:States} in the truncated Hilbert space), the diagonalization of the Hamiltonian to find the state of interest (or exponentiation in case of thermal states), the computation of the propagator over the chosen time step, the propagation of the state and the computation of expectation values of products of the observables (the correlators). All the steps apart from the last one are numerically cheap, since we only do them once (or once per time step in case of the propagation of the state). In particular, the matrices corresponding to the Hamiltonians and the observables can be computed once, saved to the hard drive and loaded when needed. The numerically most expensive step of the simulation is the computation of the correlation functions since we have to perform it (in each time step) as many times as the number of points in the grid with the chosen spatial resolution. For example, in our case, for the time evolution after a quench in each time step this amounts to $(2+2)\cdot41\times41\sim 6700$ matrix products for the 2-p correlators and $(2+4)\cdot41\times41\sim 10^4$ matrix products for the 4-p function, where the matrices are of the sizes given in table \ref{tab:NoStates}. For the cutoff of 20 the computation of a quench normally takes between a couple of days and a week on our computational cluster. The computation is much faster if one needs the time series at just a chosen point and does not have to evolve the full grid.
The most expensive computation in this work is that of the kurtosis: to compute the 4-p functions over the full 4D grid, we need to perform $(2+4)\cdot21\times21\times21\times21\sim 10^6$ matrix products for each temperature and interaction. For this reason, the computation of the lines in Fig.~1 of the main text takes between a couple of weeks and a month. For the computation of the quench time series of the kurtosis, using the full 4D grid at all time steps ($\sim 600$) to perform the numerical integration would be extremely expensive, so we used instead a random sampling of $10^3$ spatial points. The accuracy of this method was verified by comparison to the full 4D result at the initial time and at a couple of random times.
Memory usage is never an issue in our case: since we perform a very large number of matrix products, we are in any case limited to matrices of sufficiently small size for these operations to be fast enough. The main resource needed is therefore processor power. One could further optimize the performance of the algorithm by taking into account the symmetries of the correlation functions.
\section{Correlation functions}
As explained in the main text, multipoint correlation functions provide important physical information for a quantum field theory. In this work we are computing two- and four-point ($N=2,4$) equal-time correlation functions:
\begin{equation}
G^{(N)}(x_1,x_2,\ldots,x_N)=\left\langle \varphi(x_1)\varphi(x_2)\cdots\varphi(x_N)\right\rangle
\end{equation}
where the expectation value is taken either in an equilibrium state (of some Hamiltonian $H$ under consideration) or in time evolved states after a quench. Equilibrium states are either pure states $|\Psi\rangle$ (ground or excited states of $H$), in which case the expectation values are $\langle \dots \rangle = \langle \Psi|\dots |\Psi\rangle$, or mixed states $\rho$, like the thermal states we consider here, in which case $\langle \dots \rangle = \text{tr}\left( \dots \rho\right)$. In equilibrium states of the SGM all the odd order (odd $N$) correlation functions vanish, since the field \eqref{eq:FBfield} is odd under reflection ($\varphi(x)=-\varphi(-x)$) and the SGM Hamiltonian \eqref{eq:SGactionZamolodchikov} is even ($H(\varphi)=H(-\varphi)$).
For the study of dynamics after a quench, the expectation value refers to the time evolved state:
\begin{equation}
|\Psi(t)\rangle = U(H,t) |\Psi \rangle = e^{- {\rm i} H t} |\Psi \rangle
\end{equation}
where $|\Psi \rangle$ is the quench initial state, that is an equilibrium (ground or excited) state of some Hamiltonian $H_0$ (the pre-quench Hamiltonian), and the time evolution operator $U(H,t)=e^{- {\rm i} H t}$ corresponds to a different Hamiltonian $H$ (the post-quench Hamiltonian). In the dynamical case, we often denote the time dependent correlation functions as $G^{(N)}(x_1,x_2,\ldots,x_N;t)$. For the quenches considered here, the pre-quench Hamiltonian is the SGM Hamiltonian at some value of the interaction $\Delta_0$ and we study initial states that are either excited states or the ground state. The post-quench Hamiltonian, on the other hand, is the SGM corresponding to a different value of the interaction $\Delta$. For comparison, we also compute the quench dynamics of the same initial state under the free massless or massive Hamiltonian.
\paragraph{Connected correlation functions - }
If we are interested in studying only the genuine multiparticle interactions, we have to subtract from the full correlation function $G^{(N)}$ the contributions that come from lower order correlation functions (fewer particle collisions). One gets what is called the connected part of the correlation functions, which are essentially the joint cumulants of the fields in the state under consideration:
\begin{equation}
G_{\text{con}}^{(N)}(x_1,x_2,\ldots,x_N)=\sum_{\pi}\left[\left(|\pi|-1\right)!(-1)^{|\pi|-1}\prod_{B\in\pi}\left\langle\prod_{i\in B}\varphi(x_i)\right\rangle\right].
\end{equation}
Here, $\pi$ are all possible partitions of $\{1,2,\ldots,N\}$ into blocks $B$ and $i$ are elements of $B$. $|\pi|$ is the number of blocks in the partition. All the correlation functions can be taken at time $t$. If all connected correlations of order higher than two vanish, then Wick's theorem holds and the system is free (i.e. noninteracting).
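The partition formula can be implemented directly. In the sketch below (illustrative only) the moments $\left\langle\prod_{i\in B}\varphi(x_i)\right\rangle$ are supplied by a hypothetical function \texttt{moment}; as a test case we use a single fully correlated non-Gaussian mode, $\varphi(x)=X$ with $X$ uniform on $[-1,1]$, for which $\langle X^2\rangle=1/3$, $\langle X^4\rangle=1/5$ and hence $G^{(4)}_{\text{con}}=1/5-3(1/3)^2=-2/15$, while for Gaussian moments ($\langle X^4\rangle=3\langle X^2\rangle^2$) the connected part vanishes:

```python
from math import factorial

def partitions(items):
    """All set partitions of a list, as lists of blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):            # put `first` into an existing block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield part + [[first]]                # or into a block of its own

def connected(points, moment):
    """G_con^{(N)} via the partition formula, given the moments <prod_{i in B} phi(x_i)>."""
    total = 0.0
    for part in partitions(list(points)):
        term = factorial(len(part) - 1) * (-1) ** (len(part) - 1)
        for block in part:
            term *= moment(block)
        total += term
    return total

def uniform_moment(block):
    """<X^n> for X uniform on [-1, 1]: 0 for odd n, 1/(n+1) for even n."""
    n = len(block)
    return 0.0 if n % 2 else 1.0 / (n + 1)

g4c = connected([1, 2, 3, 4], uniform_moment)        # = -2/15, non-Gaussian
gauss_moment = lambda block: {0: 1.0, 2: 1 / 3, 4: 3 * (1 / 3) ** 2}.get(len(block), 0.0)
g4c_gauss = connected([1, 2, 3, 4], gauss_moment)    # = 0 for Gaussian moments
```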
In case of four-point functions and vanishing odd correlation functions, this formula simplifies to:
\begin{eqnarray}
G_{\text{con}}^{(4)}(x_1,x_2,x_3,x_4)&=&G^{(4)}(x_1,x_2,x_3,x_4)-\nonumber\\
&&-G^{(2)}(x_1,x_2)G^{(2)}(x_3,x_4)-G^{(2)}(x_1,x_3)G^{(2)}(x_2,x_4)-G^{(2)}(x_1,x_4)G^{(2)}(x_2,x_3).
\end{eqnarray}
\paragraph{Kurtosis - }
To estimate how close the states are to Gaussian, that is, to see the strength of interaction effects, we compute the kurtosis -- the ratio between the integrated connected and full four-point correlation function \cite{exp-sG}:
\begin{equation}
\mathcal{K}:=\frac{\int {\rm d} x_1 {\rm d} x_2 {\rm d} x_3 {\rm d} x_4 \left|G_{\text{con}}^{(4)}(x_1,x_2,x_3,x_4)\right|}{\int {\rm d} x_1 {\rm d} x_2 {\rm d} x_3 {\rm d} x_4 \left|G^{(4)}(x_1,x_2,x_3,x_4)\right|}\approx\frac{\sum_{x_1,x_2,x_3,x_4} \left|G_{\text{con}}^{(4)}(x_1,x_2,x_3,x_4)\right|}{\sum_{x_1,x_2,x_3,x_4} \left|G^{(4)}(x_1,x_2,x_3,x_4)\right|},
\end{equation}
where in the last step we used the fact that in the numerical simulation the domain is discretised, so that the integral is approximated by a sum. For Gaussian states, $\mathcal{K}$ vanishes, while a larger value of $\mathcal{K}$ corresponds to a more strongly interacting system.
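As a toy evaluation of this definition (illustrative only), take a perfectly correlated non-Gaussian ``field'', $\varphi(x)=X$ at every grid point with $X$ uniform on $[-1,1]$: then $G^{(4)}=\langle X^4\rangle=1/5$ and $G^{(4)}_{\text{con}}=\langle X^4\rangle-3\langle X^2\rangle^2=-2/15$ everywhere, so the discretised kurtosis equals $2/3$ independently of the grid:

```python
n = 5                                  # toy number of grid points per dimension
G4 = 1 / 5                             # <X^4> for X uniform on [-1, 1]
G4c = 1 / 5 - 3 * (1 / 3) ** 2         # connected part, -2/15
num = sum(abs(G4c) for _ in range(n ** 4))   # sum over the 4D grid
den = sum(abs(G4) for _ in range(n ** 4))
K = num / den                          # = 2/3 for any grid size
```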
For the evaluation of the kurtosis one needs the 4-p correlation functions over the entire four-dimensional grid of spatial positions. However, as explained in the section ``Truncation effects and computational performance'', this is computationally very expensive, so for quench time series (i.e. time evolved states) we used a random sampling method over a thousand points, checking at a few time slices that it correctly reproduces the kurtosis obtained from the full grid. For the value of the kurtosis in the initial state of the quench shown in Fig.~5 of the main text and for the thermal state plot (Fig.~1 in the main text) we used the full 4D grid for the computation.
\section{Quench from a ground state}
In this section we present an interaction quench starting from the ground state of the SGM for $\Delta_0=1/18$ and quenching to
$\Delta=1/8$ which is shown in Fig.~\ref{fig:quench2}.
For comparison, the energy of this initial state is $\sim 0.111 \, M$ above the post-quench ground state, while the energy of the excited state shown in Fig.~5 of the main text is $\sim 1.243 \, M$, that is about 10 times higher.
In contrast to the case of quench starting from an excited initial state, studied in the main text, in the present case the quench dynamics is dominated by low energy modes, the leading one being the lowest lying second breather mode (due to parity invariance the odd states do not contribute).
\begin{figure}[htbp]
\begin{center}
\centering
\includegraphics[width=.97\textwidth]{{{Plot_FigurePaper_SGDtoSGDquench_Coupling_Phi_labmda0_17_lambda_7_l_25_nPos_41_nHam_1_tM0_0_dtM_0.5_tMmax500_KRK_19}}}
\caption{{Time evolution of 2-p correlations $G^{(2)}(L/3,2L/3;t)$ and the kurtosis (\emph{top two rows}) and spatial density plots of 2-p correlations $G^{(2)}(x_1,x_2;t)$ and 4-p connected correlations $G^{(4)}_{\text{con}}(x_1,x_2,L/4,3L/4;t)$ at various times $t$ (\emph{bottom two rows}) after an interaction quench starting from a ground state of the SGM (pre-quench interaction: $\Delta_0 =1/{18}$, post-quench interaction: $\Delta =1/{8}$, $L=25/M$).}}
\label{fig:quench2}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\centering
\includegraphics[width=.5\textwidth]{{{Plot_SGDtoSGDquench_Coupling_Phi_TwoP_labmda0_17_lambda_7_l_25_nPos_41_nHam_1_tM0_0_dtM_0.5_tMmax500_KRK_19}}}
\caption{Fourier spectrum of the time dependence of the spatially integrated 2-p correlations after the ground state quench shown in Fig.~\ref{fig:quench2}.}
\label{fig:spectrum}
\end{center}
\end{figure}
The Fourier spectrum of the time evolution of observables is determined by the post-quench excitations, which are multiparticle states with momenta quantised by the finite system size $L$. The complete set of equations that determine the energy levels can be found e.g. in \cite{TCSA-bsG}. Here we focus only on the dominant energy level, which corresponds to the $n=2$ breather moving with the lowest momentum $p(\theta):=m_2\sinh\theta$ allowed by the Bethe--Yang equations:
\begin{equation}
e^{2ip(\theta)L} R(\theta)^2=+1
\end{equation}
where the second breather reflection factor for Dirichlet boundary conditions is \cite{Ghoshal-Zamo,Ghoshal}:
\begin{equation}
R(\theta)= \frac{\big(\frac12\big)_\theta\big(1+\frac\xi2\big)_\theta\big(\frac\xi2\big)_\theta\big(1+\xi\big)_\theta}{\big(\frac12+\frac\xi2\big)_\theta^2\big(\frac12-\frac\xi2\big)_\theta^2 \big(\frac32+\xi\big)_\theta\big(\frac32+\frac\xi2\big)_\theta^2}
\end{equation}
and the notation
\begin{equation}
\big(x\big)_\theta := \frac{\sinh\left(\frac\theta 2+\frac{ {\rm i} \pi x}2\right)}{\sinh\left(\frac\theta 2-\frac{ {\rm i} \pi x}2\right)}
\end{equation}
has been used. Note that static breathers are not present for Dirichlet boundary conditions. From these equations
we find that the energy of this mode measured from the ground state is $\sim 0.881039 \, M$, which matches with the frequency of the dominant peak in the Fourier spectrum of 2-p correlations, shown in Fig.~\ref{fig:spectrum}.
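The quantisation condition can be solved numerically. The sketch below (an illustration under the stated parameter values $\xi=1/7$, $ML=25$; not our production code) unwraps $\arg R(\theta)$ continuously along a grid, starting from $R(0^+)=-1$, and locates the first rapidity at which $e^{2{\rm i}p(\theta)L}R(\theta)^2=1$ with nonzero momentum:

```python
import cmath, math

xi, ML = 1.0 / 7.0, 25.0                     # Delta = 1/8, system size L = 25/M
m2 = 2.0 * math.sin(math.pi * xi)            # n = 2 breather mass in units of M

def block(x, theta):
    # (x)_theta is a pure phase for real theta: numerator and denominator are conjugates
    return cmath.sinh(theta / 2 + 1j * math.pi * x / 2) / \
           cmath.sinh(theta / 2 - 1j * math.pi * x / 2)

def R(theta):
    num = block(0.5, theta) * block(1 + xi / 2, theta) * block(xi / 2, theta) * block(1 + xi, theta)
    den = (block(0.5 + xi / 2, theta) ** 2 * block(0.5 - xi / 2, theta) ** 2
           * block(1.5 + xi, theta) * block(1.5 + xi / 2, theta) ** 2)
    return num / den

# Unwrap arg R(theta) along the grid, then find where (2 p L + 2 arg R)/(2 pi)
# reaches the next integer above its theta -> 0 value (the static solution is excluded).
thetas = [1e-4 + 1e-5 * i for i in range(30_000)]
phis, prev, acc = [], cmath.phase(R(thetas[0])), cmath.phase(R(thetas[0]))
for th in thetas:
    cur = cmath.phase(R(th))
    d = cur - prev
    d -= 2 * math.pi * round(d / (2 * math.pi))   # remove branch jumps
    acc += d
    prev = cur
    phis.append(acc)

G = [(2 * m2 * math.sinh(th) * ML + 2 * ph) / (2 * math.pi) for th, ph in zip(thetas, phis)]
target = round(G[0]) + 1
i = next(i for i in range(1, len(G)) if G[i - 1] < target <= G[i])
E = m2 * math.cosh(thetas[i])                # energy above the ground state, units of M
print(E)                                     # ~0.88, cf. the 0.881039 M quoted above
```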
\section{Introduction}
Markov chain Monte Carlo (MCMC) has become a widely used approach
for simulation from an arbitrary distribution of interest, typically
a Bayesian posterior distribution, known as the target distribution.
MCMC really represents a family of sampling methods. Generally speaking,
any new sampler that can be shown to preserve the ergodicity of the
Markov chain such that it converges to the target distribution is
a member of the family and can be combined with other samplers as
part of a valid MCMC kernel. The key to MCMC's success is its
simplicity and applicability. In practice, however, it sometimes needs
a lot of non-trivial tuning to work well \citep{haario2005componentwise}.
To deal with this problem, many adaptive MCMC algorithms have been
proposed \citep{gilks1998adaptive,haario2001adaptive,andrieu2001controlled,sahu2003self}.
These allow parameters of the MCMC kernel to be automatically
tuned based on previous samples. This breaks the Markovian property
of the chain so has required special schemes and proofs that the resulting
chain will converge to the target distribution \citep{andrieu2003ergodicity,atchade2005adaptive,andrieu2006efficiency}.
Under some weaker and easily verifiable conditions, namely ``diminishing
adaptation'' and ``containment'', \citet{rosenthal2007coupling}
proved ergodicity of adaptive MCMC and proposed many useful samplers.
It is important to realize, however, that such adaptive MCMC samplers
address only a small aspect of a much larger problem. A typical adaptive MCMC
sampler will approximately optimize performance
given the kind of sampler chosen in the first place, but it will not
optimize among the variety of samplers that could have been chosen.
For example, an adaptive random walk Metropolis-Hastings sampler will
adapt the scale of its proposal distribution, but that adaptation
won't reveal whether an altogether different kind of sampler would
be more efficient. In many cases it would, and the exploration of
different sampling strategies often remains a human-driven trial-and-error
affair.
Here we present a method for a higher level of MCMC adaptation. The
adaptation explores a potentially large space of valid MCMC kernels
composed of different samplers. One starts with an arbitrary set of
candidate samplers for each dimension or block of dimensions in the
target distribution. The main idea is to iteratively try different
candidates that compose a valid MCMC kernel, run them for a relatively
short time, generate the next set of candidates based on the results
thus far, and so on. Since relative performance of different samplers
is specific to each model and even to each computing environment,
it is doubtful whether there is a universally optimal kind of sampler.
Hence we view the choice of efficient samplers for a particular problem
as well-suited to empirical determination via computation.
The goal of computationally exploring valid sampler combinations in
search of an efficient model-specific MCMC kernel raises a number
of challenges. First, one must prove that the samples collected as
the algorithm proceeds indeed converge to the target distribution,
even when some of the candidate samplers are internally adaptive,
such as conventional adaptive random walk samplers. We provide such
a proof for a general framework.
Second, one must determine efficient methods for exploring the very
large, discrete space of valid sampler combinations. This is complicated
by a combinatorial explosion, which is exacerbated by the fact that
any multivariate sampler can potentially be used for arbitrary blocks
of model dimensions. Here we take a practical approach to this problem,
setting as our goal only to show basic schemes that can yield substantial
improvements in useful time frames. Future work can aim to develop improvements within
the general framework presented here. We also limit ourselves to relatively simple candidate samplers,
but the framework can accommodate many more choices.
Third, one must determine how to measure the efficiency of a particular
MCMC kernel for each dimension and for the entire model, in order to
have a metric to optimize. As a first step, it is vital to
realize that there can be a tradeoff between good mixing and computational
speed. When considering adaptation within one kind of sampler, say
adaptive random walk, one can roughly assume that computational cost
does not depend on the proposal scale, and hence mixing measured by
integrated autocorrelation time, or the related effective sample size,
is a sensible measure of efficiency. But when comparing two samplers
with very different computational costs, say adaptive random walk
and slice samplers, good mixing may or may not be worth its computational
cost. Random walk samplers may mix more slowly than slice samplers
on a per iteration basis, but they do so at higher computational speed
because slice samplers can require many evaluations of model density functions. Thus
the greater number of random walk iterations per
unit time could outperform the slice sampler. An additional issue
is that different dimensions of the model may mix at different rates,
and often the slowest-mixing dimensions limit the validity of all
results \citep{turek2016automated}. In view of these considerations, we define MCMC efficiency
as the effective sample size per computation time and use that as
a metric of performance per dimension. Performance of an MCMC kernel
across all dimensions is defined as the minimum efficiency among all
dimensions.
The rest of the paper is organized as follows. Section \ref{theory} presents a general theoretical framework
for Auto Adapt MCMC and establishes conditions for its validity. Section \ref{method} presents an example algorithm
that fits within the framework, and provides some explanations on its
details. Section \ref{example} then outlines some numerical examples comparing
the example algorithm with existing algorithms for a variety of benchmark
models. Finally, section \ref{discussion} concludes and discusses some future research
directions.
\section{A general Auto Adapt MCMC}\label{theory}
In this section, we present a general Auto Adapt MCMC algorithm and
give theoretical results establishing its correctness.
Let $\mathbf{\mathcal{X}}$ be a state space and $\pi$ the probability
distribution on $\mathbf{\mathcal{X}}$ that we wish to sample from.
Let $\mathbf{\mathcal{I}}$ be a countable set (this set indexes the
discrete set of MCMC kernels we wish to choose from). For $\iota\in\mathbf{\mathcal{I}}$,
let $\Theta_{\iota}$ be a parameter space (for all practical purposes,
we can assume that this is some subset of some Euclidean space $\mathbb{R}^{m}$).
For $\iota\in\mathbf{\mathcal{I}}$ and $\theta\in\Theta_{\iota}$,
let $P_{\iota,\theta}$ denote a Markov kernel on $\mathbf{\mathcal{X}}$
with invariant distribution $\pi$. We set ${\displaystyle \bar{\Theta}=\bigcup_{\iota\in\mathbf{\mathcal{I}}}\{\iota\}\times\Theta_{\iota}}$
the adaptive MCMC parameter space. We want to build a stochastic process
(an adaptive Markov chain) $\{(X_{n},\ \iota_{n},\ \theta_{n}),\ n\ \geq0\}$
on $\mathbf{\mathcal{X}}\times\bar{\Theta}$ such that as $n\rightarrow\infty$,
the distribution of $X_{n}$ converges to $\pi$, and a law of large
numbers holds. We call $\iota$ the external adaptation parameter
and $\theta$ the internal adaptation parameter.
We will follow the general adaptive MCMC recipe of \citet{roberts2009examples}.
Assume that any internal adaptation on $\Theta_{\iota}$ is
done using a function $H_{\iota}$ : $\Theta_{\iota}\times\mathbf{\mathcal{X}}\rightarrow\Theta_{\iota}$,
and an ``internal clock'' sequence $\{\gamma_{n},\ n\ \geq0\}$
such that $\lim_{n\rightarrow\infty}\gamma_{n}=0$. The function $H_{\iota}$ determines the direction of each internal
adaptation update, with $\gamma_{n}$ serving as a step size.
Also, let $\{p_{k},\ k\geq1\}$
be a sequence of numbers $p_{k}\in(0,1)$ such that $\lim_{k\rightarrow\infty}p_{k}=0$.
$p_{k}$ will be the probability of performing external adaptation
at external iteration $k$. During the algorithm
we will also keep track of two variables: $\kappa_{n}$, the number
of external adaptations performed up to step $n$; and $\tau_{n}$,
the number of iterations between $n$ and the last time an external
adaptation is performed. These two variables are used to manage the
internal clock based on external iterations, which in most situations
can simply be the number of adaptation steps. We build the stochastic
process $\{(X_{n},\ \iota_{n},\ \theta_{n}),\ n\ \geq0\}$ on $\mathbf{\mathcal{X}}\times\bar{\Theta}$
as follows.
\begin{enumerate}
\item We start with $\kappa_{0}=\tau_{0}=0$. We start also with some $X_{0}\in\mathbf{\mathcal{X}},\ \iota_{0}\in\mathbf{\mathcal{I}}$,
and $\theta_{0}\in\Theta_{\iota_{0}}$.
\item At the $n$-th iteration, given $\mathcal{F}_{n}\overset{def}{=}\sigma\{(X_{k},\ \iota_{k},\ \theta_{k}),\ k\leq n\}$,
and given $\kappa_{n},\ \tau_{n}$:
\begin{enumerate}
\item Draw $X_{n+1}\sim P_{\iota_{n},\theta_{n}}(X_{n},\ \cdot)$.
\item Independently of $\mathcal{F}_{n}$ and $X_{n+1}$, draw $B_{n+1}\sim$
Bern$(p_{n+1})\in\{0,1\}$.
\begin{enumerate}
\item If $B_{n+1}=0$, there is no external adaptation: $\iota_{n+1}=\iota_{n}$.
We update $\kappa_{n}$ and $\tau_{n}$:
\begin{equation}
\kappa_{n+1}=\kappa_{n},\ \tau_{n+1}=\tau_{n}+1.\label{eq:1}
\end{equation}
Then we perform an internal adaptation: set $c_{n+1}=\kappa_{n+1}+\tau_{n+1}$,
and compute
\begin{equation}
\theta_{n+1}=\theta_{n}+\gamma_{c_{n+1}}H_{\iota_{n}}(\theta_{n},\ X_{n+1}).\label{eq:2}
\end{equation}
Note that the internal adaptation interval could vary between iterations.
\item If $B_{n+1}=1$, then we do an external adaptation: we choose a new
$\iota_{n+1}$. And we choose a new value $\theta_{n+1}\in\Theta_{\iota_{n+1}}$
based on $\mathcal{F}_{n}$ and $X_{n+1}$. Then we update $\kappa_{n}$
and $\tau_{n}$.
\begin{equation}
\kappa_{n+1}=\kappa_{n}+1,\ \tau_{n+1}=0.
\end{equation}
\end{enumerate}
\end{enumerate}
\end{enumerate}
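To make the construction concrete, the following sketch simulates the process $\{(X_{n},\iota_{n},\theta_{n})\}$ on a toy one-dimensional target. The standard normal target, the two candidate random-walk kernels standing in for the index set $\mathcal{I}$, and the Robbins-Monro acceptance-rate update playing the role of $H_{\iota}$ are all illustrative assumptions, not part of the general framework.

```python
import math
import random

random.seed(1)

def log_pi(x):
    # Toy target (an assumption for illustration): standard normal.
    return -0.5 * x * x

def rw_step(x, scale):
    # One Metropolis random-walk update; returns (new state, accept indicator).
    y = x + random.gauss(0.0, scale)
    if math.log(random.random() + 1e-300) < log_pi(y) - log_pi(x):
        return y, 1.0
    return x, 0.0

def kernel(iota, x, theta):
    # Two "external" kernel choices indexed by iota; each adapts an
    # internal log-scale theta. Kernel 1 occasionally proposes big jumps.
    scale = math.exp(theta)
    if iota == 1 and random.random() < 0.1:
        scale *= 5.0
    return rw_step(x, scale)

x, iota, theta = 0.0, 0, 0.0
kappa, tau = 0, 0                     # external-adaptation bookkeeping
chain = []
for n in range(1, 2001):
    x, acc = kernel(iota, x, theta)   # (a) draw X_{n+1} ~ P_{iota,theta}
    B = random.random() < 1.0 / n     # (b) Bern(p_n), with p_n -> 0
    if not B:
        tau += 1                      # no external adaptation: kappa fixed
        c = kappa + tau               # internal clock c_{n+1}
        gamma = 1.0 / max(c, 1)       # internal step sizes, gamma_n -> 0
        theta += gamma * (acc - 0.44) # Robbins-Monro update as H_iota
    else:
        iota = 1 - iota               # external adaptation: switch kernel
        kappa, tau = kappa + 1, 0
    chain.append(x)

print(len(chain))
```

Because $p_{n}=1/n$ and $\gamma_{n}=1/n$ both vanish, this simulation satisfies the diminishing adaptation conditions used in the proposition below.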
For this Auto Adapt MCMC algorithm to be valid we must show that it
satisfies three assumptions:
\begin{enumerate}
\item For each $(\iota,\ \theta)\in\bar{\Theta},\ P_{\iota,\theta}$ has
invariant distribution $\pi$.
\item (diminishing adaptation):
\[
\triangle_{n+1}\overset{_{def}}{=}\sup_{x\in\mathcal{X}}\Vert P_{\iota_{n},\theta_{n}}(x,\ \cdot)-P_{\iota_{n+1},\theta_{n+1}}(x,\ \cdot)\Vert_{\mathrm{TV}}
\]
converges in probability to zero, as $n\rightarrow\infty$,
\item (containment): For all $\epsilon>0$, the sequence $\{M_{\epsilon}(\iota_{n},\ \theta_{n},\ X_{n})\}$
is bounded in probability, where
\[
M_{\epsilon}(\iota,\ \theta,\ x)\overset{_{def}}{=}\inf\{n\ \geq1:\Vert P_{\iota,\theta}^{n}(x,\ \cdot)-\pi\Vert_{\mathrm{TV}}\leq\epsilon\}.
\]
\end{enumerate}
\begin{remark} Here the first assumption holds by construction. We
will show that, by design, our Auto Adapt algorithm satisfies
diminishing adaptation.\end{remark}
For $\iota\in\mathbf{\mathcal{I}},\ \theta,\ \theta'\in\Theta_{\iota}$,
define
\[
D_{\iota}(\theta,\ \theta')\ \overset{_{def}}{=}\ \sup_{x\in\mathbf{\mathcal{X}}}\Vert P_{\iota,\theta}(x,\ \cdot)-P_{\iota,\theta'}(x,\ \cdot)\Vert_{\mathrm{TV}}.
\]
\begin{proposition} Suppose that $\mathbf{\mathcal{I}}$ is finite,
and for any $\iota\in\mathcal{\mathbf{\mathcal{I}}}$, the adaptation
function $H_{\iota}$ is bounded, and there exists $C<\infty$ such
that
\[
D_{\iota}(\theta,\ \theta')\leq C\Vert\theta-\theta'\Vert.
\]
Then the diminishing adaptation holds. \end{proposition} \begin{proof}
We have
\begin{eqnarray*}
\mathrm{E}(\triangle_{n+1}) & = & p_{n+1}\mathrm{E}(\triangle_{n+1}|B_{n+1}=1)+(1-p_{n+1})\mathrm{E}(\triangle_{n+1}|B_{n+1}=0),\\
& \leq & 2p_{n+1}+\mathrm{E}(\triangle_{n+1}|B_{n+1}=0),\\
& = & 2p_{n+1}+\mathrm{E}\ [D_{\iota_{n}}(\theta_{n},\ \theta_{n+1})],\\
& \leq & 2p_{n+1}+C\mathrm{E}\ [\Vert\theta_{n+1}-\theta_{n}\Vert],\\
& \leq & 2p_{n+1}+C_{1}\gamma_{c_{n+1}},
\end{eqnarray*}
where $c_{n+1}=\kappa_{n+1}+\tau_{n+1}$. It is easy to see that $c_{n}\rightarrow\infty$
as $n\rightarrow\infty$. The result follows since ${\displaystyle \lim_{n\rightarrow\infty}p_{n}=\lim_{n\rightarrow\infty}\gamma_{n}=0.}$
\end{proof}
In general, the containment condition is more challenging than the
diminishing adaptation condition. This technical assumption might
be harder to satisfy and might not even be necessary sometimes \citep{rosenthal2007coupling}.
However, we still use simultaneous uniform ergodicity, a sufficient
condition for containment, to simplify the theory and to concentrate on
designing efficient algorithms.
\begin{definition}
The family $\{P_{\iota,\theta}:(\iota,\theta)\in{\bar{\Theta}}\}$
has simultaneous uniform ergodicity (SUE) if for all $\epsilon>0$,
there is $N=N(\epsilon)\in\mathrm{\mathbb{N}}$ so
that $\Vert P_{\iota,\theta}^{N}(x,\ \cdot)-\pi(\cdot)\Vert_{\mathrm{TV}}\leq\epsilon$
for all $x\in\mathcal{X}$ and $(\iota,\theta)\in {\bar{\Theta}}.$
\end{definition}
\begin{proposition} (Theorem 1 of \citet{rosenthal2007coupling}). SUE implies containment.
\end{proposition}
\begin{remark} It is possible to use weaker conditions such as simultaneous
geometric ergodicity or simultaneous polynomial ergodicity instead
of simultaneous uniform ergodicity to imply the containment condition, but for the
purpose of introducing the Auto Adapt algorithm, we do not pursue
them here.\end{remark}
\section{Example algorithms}\label{method}
We present one specific approach as an example of an Auto Adapt algorithm.
Our approach to ``outer adaptation'' will be to identify the ``worst-mixing dimension''
(i.e., some parameter or latent state of the statistical model) and update the kernel by
assigning different sampler(s) for that dimension. To explain the method, we
will give some terminology for describing our algorithm. In particular, we will define a valid kernel,
MCMC efficiency, and worst-mixing dimension. We will define a set of candidate samplers for
a given dimension, which could include scalar samplers or block samplers.
In either case, a sampler may also have internal adaptation for each parameter or combination.
To implement the internal clock of each sampler ($c_{n}$ of the general algorithm), we need to
formulate all internal adaptation in the framework using equation (\ref{eq:2}).
We use $P$ (without subscripts) in this section to represent $P_{\iota, \theta}$
of the general theory, so the kernel and parameters are implicit.
\subsection{Valid kernel}\label{sec:valid}
Assume our model of interest is $\mathcal{M}$, which could be represented
as a graphical model where vertices or nodes represent states or data
while edges represent dependencies among them. Here we are using ``state''
as Bayesians do to mean any dimension of the model to be sampled by
MCMC, including model parameters and latent states. We denote the
set of all dimensions of the target distribution as $\mathcal{X}=\{\mathcal{X}_{1},\ldots,\mathcal{X}_{m}\}$.
Since we will construct a new MCMC kernel as an ordered set of samplers
at each outer iteration, it is useful to define requirements for a
kernel to be valid. We require that each kernel, if used on its own,
would be a valid MCMC to sample from the target distribution $\pi(X),\, X\in\mathcal{X}$
(typically defined from Bayes' Rule as the conditional distribution
of states given the data). This is the case if $P$ leaves $\pi$
invariant, $\pi P=\pi$; in particular, it suffices that each component
sampler satisfies detailed balance with respect to $\pi$.
In more detail, we need to ensure that a new MCMC kernel does not
omit some subspace of $\mathcal{X}$ from mixing. Denote the kernel
$P$ as a sequence of samplers $P_{i},\, i=1,\ldots,j,$
such that $P=P_{j}P_{j-1}\ldots P_{1}$. By some abuse of language,
$P$ is a valid kernel if each sampler $P_{i}$ operates on a non-empty
subset $b_{i}$ of $\mathcal{X}$, satisfying ${\displaystyle \bigcup_{i=1}^{j}b_{i}=\mathcal{X}}$.
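The invariance requirement can be checked numerically on a small discrete state space. The sketch below (a toy illustration, not part of the algorithm) builds two Metropolis kernels that each leave $\pi$ invariant and confirms that their composition does as well.

```python
import numpy as np

# Toy target on 3 states (an assumption for illustration).
pi = np.array([0.2, 0.3, 0.5])

def mh_kernel(proposal):
    """Metropolis transition matrix for target pi, given a symmetric
    proposal matrix; row i gives transition probabilities from state i."""
    n = len(pi)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                P[i, j] = proposal[i, j] * min(1.0, pi[j] / pi[i])
        P[i, i] = 1.0 - P[i].sum()    # remaining mass: stay put
    return P

# Two "samplers": uniform proposals and nearest-neighbor-style proposals.
P1 = mh_kernel(np.full((3, 3), 1.0 / 3))
P2 = mh_kernel(np.array([[0, .5, .5], [.5, 0, .5], [.5, .5, 0]]))
P = P1 @ P2                           # composed kernel: apply P1, then P2

print(np.allclose(pi @ P1, pi), np.allclose(pi @ P, pi))
```

Each factor is reversible with respect to $\pi$, so each leaves $\pi$ invariant; the composition is generally no longer reversible, but invariance of $\pi$ is preserved, which is all the validity requirement needs.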
At iteration $n$, assume the kernel is $P^{(n)}$ and the samples
are $X_{n}=(X_{n,1},\ \ldots,\ X_{n,m})$ where the set of initial
values is $X_{0}$. For each dimension $\mathcal{X}_{k}$, $k=1,\ldots,m$
let $\mathbf{X}_{k}=\{X_{0,k},\ X_{1,k},\ \ldots\}$ be the scalar
chain of samples of $\mathcal{X}_{k}$.
\subsection{Worst mixing state and MCMC efficiency}\label{sec:worst}
We define MCMC efficiency for state $\mathcal{X}_{k}$ from a sample
of size $N$ from kernel $P$ as effective sample size per computation
time
\[
\omega_{k}(N,P)=\frac{N/\tau_{k}(P)}{t(N,P)},
\]
where $t(N,P)$ is the computation time for kernel $P$ to run $N$
iterations (often $t(N,P)\approx Nt(1,P)$) and $\tau_{k}(P)$ is
the integrated autocorrelation time for chain $\mathbf{X}_{k}$ defined
as
\[
\tau_{k}=1+2\sum_{i=1}^{\infty}\mathrm{cor}(X_{0,k},X_{i,k})
\]
\citep{straatsma1986estimation}. The ratio $N/\tau_{k}$ is the effective
sample size (ESS) for state $\mathcal{X}_{k}$ \citep{roberts2001optimal}.
Note that $t(N,P)$ is computation time for the entire kernel, not
just the samplers that update $\mathcal{X}_{k}$. $\tau_{k}$ can be interpreted
as the number of actual samples required per effective sample. The worst-mixing
state is defined as the state with minimum MCMC efficiency among all
states. Let $k_{min}$ be the index of the worst-mixing state, that
is
\[
k_{min}=\arg\min_{k}\tau_{k}^{-1}.
\]
Since the worst mixing dimension will limit the validity of the entire
posterior sample \citep{thompson2010graphical}, we define the efficiency
of a MCMC algorithm as $\omega_{k_{min}}(N,P)$, the efficiency of
the worst-mixing state of model $\mathcal{M}$.
There are several ways to estimate ESS, but we use the \texttt{effectiveSize}
function in the R \texttt{coda} package \citep{plummer2006coda, turek2016automated} since this
function provides a stable estimate of ESS. This method,
which is based on the spectral density at frequency zero,
has been found to give comparatively accurate
and stable results \citep{thompson2010graphical}.
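The quantities above can be computed directly from sample paths. The sketch below uses a simple truncated-sum estimator of $\tau_{k}$ (the paper uses \texttt{coda}'s spectral estimator instead) and two AR(1) chains standing in for two model dimensions; the chains, the assumed runtime $t(N,P)$, and the truncation rule are all illustrative assumptions.

```python
import random

random.seed(2)

def iat(xs, max_lag=100):
    """Integrated autocorrelation time tau = 1 + 2*sum_i cor(X_0, X_i),
    truncating at the first non-positive sample autocorrelation."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    tau = 1.0
    for lag in range(1, min(max_lag, n - 1)):
        rho = sum((xs[t] - mean) * (xs[t + lag] - mean)
                  for t in range(n - lag)) / ((n - lag) * var)
        if rho <= 0.0:
            break
        tau += 2.0 * rho
    return tau

def ar1(phi, n):
    # AR(1) chain standing in for one sampled model dimension.
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, 1.0)
        out.append(x)
    return out

N, runtime = 5000, 2.0                  # runtime t(N, P) in seconds (assumed)
chains = {"fast": ar1(0.2, N), "slow": ar1(0.95, N)}
# MCMC efficiency per dimension: ESS per second, ESS = N / tau_k.
eff = {k: (N / iat(v)) / runtime for k, v in chains.items()}
worst = min(eff, key=eff.get)           # worst-mixing dimension
print(worst)
```

As expected, the strongly autocorrelated chain has the larger $\tau_{k}$ and therefore the smaller efficiency, so it is identified as the worst-mixing dimension.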
\subsection{Candidate Samplers}\label{sec:candidate}
A set of candidate samplers $\{P_{j},\: j\in\mathcal{S}\}$ is a list
of all possible samplers that could be used for a parameter of the model
$\mathcal{M}$. These may differ depending on the parameter's characteristics
and role in the model (e.g., whether there is a valid Gibbs sampler,
or whether it is restricted to $[0,\infty)$). In addition to univariate
candidate samplers, nodes can also be sampled by block samplers. Denote
by $|b_{i}|$ the number of elements of $b_{i}$. If $|b_{i}|>1$, $P^{(n)}_{i}$,
the sampler applied to block $b_{i}$ at iteration $n$, is called
a block sampler; otherwise it is a univariate or scalar sampler.
In the examples below we considered up to four univariate candidate
samplers and three kinds of block samplers. The univariate samplers
included adaptive random walk (ARW), adaptive random walk on a log
scale (ARWLS) for states taking only positive real values, Gibbs samplers
for states with a conjugate prior-posterior pairing, and slice samplers.
The block samplers included adaptive random walk with multivariate
normal proposals, automated factor slice sampler \citep{tibbits2014automated} (slice samplers in
a set of orthogonal rotated coordinates), and automated factor random
walk (univariate random walks in a set of orthogonal rotated coordinates).
These choices are by no means exhaustive but serve to illustrate the
algorithms here.
\subsubsection{Block samplers and how to block}\label{sec:block}
\citet{turek2016automated} suggested different ways to block the states
efficiently: (a) based on correlation clustering, or (b) based on model
structure. Here we use the first method.
At each iteration, we use the generated samples to create the empirical
posterior correlation matrix. To stabilize the estimation, all of the samples
are used to compute a correlation
matrix $\rho_{d\times d}$. This in turn is used to make a distance
matrix $D_{d\times d}$ where $D_{i,j}=1-|\rho_{i,j}|$ for $i\neq j$
and $D_{i,i}=0$ for every $i$, $j$ in $1,\ldots,d$. To guarantee
a minimum absolute pairwise correlation within clusters, we construct a
hierarchical cluster tree from the distance matrix $D$ (\citet{everitt2011hierarchical}, chapter 4). Given a selected
height, we cluster the hierarchical tree into distinct groups of states.
Different parts of the tree may have different optimal
heights for forming blocks. Instead of using a global height to cut the tree, we only choose a block that
contains the worst-mixing state from the cut and keep the other
nodes intact. Adaptively, at each outer iteration, the algorithm will try
to obtain a less correlated cluster for a chosen block sampler to
improve on the efficiency. In our implementation, we use the R function $\texttt{hclust}$
to build the hierarchical clustering tree with ``complete linkage''
from the distance matrix
$D$. By construction, the absolute correlation between states within
each group is at least $1-h$ for $h$ in $[0,1]$. We then use the R
function $\texttt{cutree}$ to choose a block that contains the worst-mixing state.
This process is justified in the sense that the partitioning
adapts according to the model structure through the posterior correlation.
The details and validity of the block sampling in our general framework are provided in Appendix A.
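A rough Python analogue of this clustering step, using SciPy in place of R's \texttt{hclust} and \texttt{cutree} (the simulated posterior draws, cut height, and choice of worst-mixing index are illustrative assumptions):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(3)

# Correlated draws standing in for MCMC samples (assumption): dimensions
# 0 and 1 are strongly correlated, dimension 2 is independent.
z = rng.standard_normal((4000, 3))
samples = np.column_stack([z[:, 0], 0.9 * z[:, 0] + 0.1 * z[:, 1], z[:, 2]])

rho = np.corrcoef(samples, rowvar=False)
D = 1.0 - np.abs(rho)                 # distance D_ij = 1 - |rho_ij|
np.fill_diagonal(D, 0.0)

# Complete-linkage tree, as with R's hclust(..., method = "complete");
# cutting at height h guarantees |rho| >= 1 - h within each cluster.
tree = linkage(squareform(D, checks=False), method="complete")
h = 0.5
groups = fcluster(tree, t=h, criterion="distance")

worst = 0                             # index of the worst-mixing state (assumed)
block = np.flatnonzero(groups == groups[worst])
print(sorted(block.tolist()))
```

Only the cluster containing the worst-mixing state is then used to form a block sampler; the samplers for the remaining dimensions are left intact, as described above.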
\subsection{How to choose new samplers}\label{sec:choice}
To choose new samplers to compose a new kernel, we determine the worst-mixing
state and choose randomly
from candidate samplers to replace whatever sampler was updating it
in the previous kernel while keeping other samplers the same. There
are some choices to make when considering a block sampler. If the worst-mixing parameter is $x$,
and the new kernel will use a block sampler for $x$ together with one or more parameters $y$,
we can either keep the current sampler(s) used for $y$ or remove them from the kernel.
Future work can consider other schemes such as changing group of samplers together
based on model structure.
\subsection{Internal clock variables}
\label{sec:internal} In Algorithm \ref{alg1},
$\theta$ represents the internal adaptation parameter of a particular
sampler and $c$ represents its internal clock. In general, an internal
clock variable is defined as a variable used in a sampler to determine
the size of internal adaptation steps such that any internal adaptation
would converge in a typical MCMC setting. An example of an internal clock
variable is a number of internal iterations that have occurred. To
use a sampler in the general framework, we need to establish what
are its internal adaptation and clock variables. A few examples of
internal adaptation variables of different samplers are summarized
as follows:
\begin{itemize}
\item For adaptive random walk: proposal scale is used.
\item For block adaptive random walk: proposal scale and covariance matrix
are used.
\item For automated factor slice sampler: covariance matrix (or equivalent,
i.e. coordinate rotation) is used.
\item For automated factor random walk: covariance matrix (ditto) and proposal
scales for each rotated coordinate axis are used.
\end{itemize}
These internal adaptation variables are set to default initial values when their sampler is first used.
After that, they are retained along with the internal clock variables so that whenever
we revisit a sampler,
we use the stored values to set it up again.
This guarantees the diminishing adaptation property,
which is essential for the convergence of the algorithm.
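A minimal sketch of this bookkeeping, with a dictionary keyed by sampler kind and block (the key structure, field names, and default values are illustrative, not NIMBLE's actual interface):

```python
# Persist each sampler's internal adaptation state across revisits.
# Keys, fields, and defaults are illustrative, not NIMBLE's API.
saved = {}

def setup_sampler(kind, block):
    key = (kind, tuple(block))
    if key not in saved:
        # First use: default internal clock and adaptation values.
        saved[key] = {"clock": 0, "log_scale": 0.0}
    return saved[key]

state = setup_sampler("ARW", [0])       # first visit: defaults
state["clock"] += 1                     # internal clock advances
state["log_scale"] -= 0.01              # e.g., a Robbins-Monro update
again = setup_sampler("ARW", [0])       # revisit: stored state restored
print(again["clock"])
```

Because the internal clock is never reset, a sampler's adaptation steps keep shrinking across revisits, which is what preserves diminishing adaptation.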
Pseudo-code for Auto Adapt MCMC is given in Algorithm \ref{alg1}.
\begin{algorithm}[htp]
\caption{Auto Adapt MCMC} \label{alg1}
\small
\begin{algorithmic}[1]
\INPUT
\Statex Bayesian model with initial state (including latent variables) ${X}_{0}$
\Statex $\{p_{n}, n \in \mathbb{N} |p_n\in(0,1),\lim_{n}p_{n}=0\}$, maximum iteration $M$
\Statex Candidate samplers $\{P_{j}, j \in \mathcal{S}\}$
\Statex $P_{\iota_0, \theta_0}:=$ ordered set of initial samplers $\{P^{(0)}_j\}_{j\in \mathcal{S}}$ from Bayesian model
\Ensure
\Statex An ordered set of samplers $\{P_{i^*}\}_{i^*\in \mathcal{S}}$ with the best MCMC efficiency so far
\State \; Initialize $\mathrm{EFF}$, $\mathrm{EFF_{best}}$, $n$, $\kappa_0$, $\tau_0$, $c_0$ to $0$ \Comment Denote MCMC efficiency $\mathrm{EFF}$
\While {($\mathrm{EFF}$ $\ge$ $\mathrm{EFF_{best}}$) or ($n < M$)}
\State Sample $N$ samples from the current sampler set $\{P_j^{(n)}\}_{j\in \mathcal{S}}$
\State Store internal clocks $c_n$ and adaption variables $\theta_{n}$ for each sampler \Comment Section \ref{sec:internal}
\State Compute $\mathrm{EFF_k}=\omega_{k}(N,P)=\frac{N/\tau_{k}(P)}{t(N,P)}$ \Comment $k$ is an index of parameters
\State Identify $k_{min}=\arg\min_{k}\tau_{k}^{-1}$, $\mathrm{EFF}=\mathrm{EFF_{k_{min}}}$ \Comment See Section \ref{sec:worst}
\If {($\mathrm{EFF}$ $\ge$ $\mathrm{EFF_{best}}$)}
\State Set $\{P_{i^*}\}_{i^*\in \mathcal{S}}=\{P_i^{(n)}\}_{i\in \mathcal{S}}$
\State Set $\mathrm{EFF_{best}}=\mathrm{EFF}$
\Else
\State Set $\{P_i^{(n)}\}_{i\in \mathcal{S}}=\{P_{i^*}\}_{i^*\in \mathcal{S}}$
\EndIf
\State Draw $B_{n+1} \sim \mathrm{Bern}(p_{n+1})\in \{0,1\}$
\If {$B_{n+1}=0$}
\State $\kappa_{n+1}= \kappa_{n}$, $\tau_{n+1}=\tau_n+1$, $c_{n+1}=\kappa_{n+1}+\tau_{n+1}$
\Else
\State $\kappa_{n+1}= \kappa_{n}+1$, $\tau_{n+1}=0$, $c_{n+1}=\kappa_{n+1}+\tau_{n+1}$
\State Set $P^{(n+1)}_{i}=P^{(n)}_{i}, i \neq k_{\mathrm{min}}$, choose $P^{(n+1)}_{k_{\mathrm{min}}}$ from candidate samplers \Comment See Section \ref{sec:choice}
\If {($P^{(n+1)}_{k_{\mathrm{min}}}$ has been used before)}
\State Use $c_n$, $\theta_{n}$ to set up the sampler $P^{(n+1)}_{k_{\mathrm{min}}}$
\Else
\State Use default internal adaptation value of $P^{(n+1)}_{k_{\mathrm{min}}}$ \Comment Section \ref{sec:internal}
\EndIf
\EndIf
\State Set $n = n+1$
\EndWhile
\end{algorithmic}
\end{algorithm}
\section{Examples}\label{example}
In this section, we evaluate our algorithm on some benchmark examples
and compare them to different MCMC algorithms. In particular, we compare
our approach to the following MCMC algorithms.
\begin{itemize}
\item All Scalar algorithm: Every dimension is sampled using an adaptive
scalar normal random walk sampler.
\item All Blocked algorithm: All dimensions are sampled in one adaptive
multivariate normal random walk sampler.
\item Default algorithm: Groups of parameters arising from multivariate
distributions are sampled using adaptive multivariate normal random
walk samplers, while parameters arising from univariate distributions
are sampled using adaptive scalar normal random walk samplers. In
addition, whenever the structure of model $\mathcal{M}$ permits,
we assign conjugate samplers instead of scalar normal random walk
samplers.
\item Auto Block algorithm: The Auto Block method
\citep{turek2016automated} searches blocking schemes based on
hierarchical clustering from posterior correlations to determine a
highly efficient (but not necessarily optimal) set of blocks that
are sampled with multivariate normal random-walk samplers. Thus,
Auto Block uses only either scalar or multivariate adaptive
random walk, concentrating more on partitioning the correlation
matrix than trying different sampling methods. Note that the
initial sampler of both the Auto Block algorithm and our
proposed algorithm is the All Scalar algorithm.
\end{itemize}
All experiments were carried out using the NIMBLE package \citep{nimble2017}
for R \citep{R2013} on a cluster using $32$ cores of Intel Xeon E5-2680
$2.7$ GHz processors with $256$ GB memory. Models are coded using NIMBLE's
version of the BUGS model
declaration language \citep{lunn2000winbugs,lunn2012bugs}. All MCMC
algorithms are written in NIMBLE, which provides user-friendly interfaces
in R and efficient execution in custom-generated C++, including matrix operations in the C++ Eigen
library \citep{guennebaud2010eigen}.
To measure the performance of a MCMC algorithm, we use MCMC
efficiency. MCMC efficiency depends on ESS, estimates of which
can have high variance for a short Markov chain. This presents
a tuning-parameter tradeoff for the Auto Adapt method: Is it better to
move cautiously (in sampler space) by running long chains for each outer adaptation in
order to gain an accurate measure of efficiency, or is it better to
move adventurously by running short chains, knowing that some
algorithm decisions about samplers will be based on noisy efficiency
comparisons? In the latter case, the final samplers may be less optimal,
but that may be compensated by the saved computation time. To explore this
tradeoff, we try our Auto Adapt algorithm with different sample
sizes in each outer adaptation and label results accordingly. For
example, Auto Adapt 10K will refer to the Auto Adapt method with
samples of 10,000 per outer iteration.
We present algorithm comparisons in terms of time spent in an
adaptation phase, final MCMC efficiency achieved, and the time
required to obtain a fixed effective sample size (e.g., 10,000). Only
Auto Block and Auto Adapt have adaptation phases. An important
difference is that Auto Block did not come with a proof of valid
adaptive MCMC convergence (it could be modified to work in the current
framework, but we compare to the published version). Therefore,
samples from its adaptation phase are not normally included in the
final samples, while the adapation samples of Auto Adapt can be
included.
To measure final MCMC efficiency, we conducted a single long run of
length $N$ with the final kernel of each method solely for the purpose
of obtaining an accurate ESS estimate. One would not normally do such
a run in a real application. The calculation of time to obtain a
fixed effective sample size incorporates both adaptation time and
efficiency of the final samplers. For both Auto Adapt and Auto Block,
we placed them on a level playing field by assuming for this
calculation that samples are not retained from the adaptation phase,
making the results conservative.
For all comparisons, we used $20$ independent runs of each method and
present the average results from these runs. To show the variation in
runs, we present boxplots of efficiency in relation to computation
time from the $20$ runs of Auto Adapt. The final (right-most) boxplot
in each such figure shows the $20$ final efficiency estimates from
larger runs. Not surprisingly, these can be lower than obtained by
shorter runs. These final estimates are reported in the tables.
A public Github repository containing scripts for reproducing our
results may be found at https://github.com/nxdao2000/AutoAdaptMCMC.
Some additional experiments are also provided there.
\subsection{Toy example: A random effect model}
We consider the ``litters'' model, which is an original example
model provided with the MCMC package WinBUGS. This model is chosen
because of its notoriously slow mixing, which is due to the strong
correlation between parameter pairs. It
is desirable to show how much improvement can be achieved compared to other approaches
on this benchmark example. The purpose of using a simple example is to establish the
potential utility of the Auto Adapt approach, while saving more advanced applications for future work.
In this case, we show that our algorithm
indeed outperforms by a significant margin the other approaches. This
model's specification is given following \citet{deely1981bayes} and
\citet{kass1989approximate} as follows.
Suppose we observe the data in $i$ groups. In each group, the data
$y_{ij}$, $j=1,\ldots,n$, are conditionally independent given
the parameters $p_{ij}$, with the observation density
\[
y_{ij}\sim \mathrm{Bin}(n_{ij},p_{ij}).
\]
In addition, assume that $p_{ij}$ for fixed $i$ are conditionally
independent given the ``hyperparameters'' $\alpha_{i}$, $\beta_{i}$, with conjugate density
\[
p_{ij}\sim \mathrm{Beta}(\alpha_{i},\beta_{i}).
\]
Assume that $\alpha_{i}$, $\beta_{i}$ follow the prior densities,
\[
\alpha_{1}\sim \mathrm{Gamma}(1,0.001),
\]
\[
\beta_{1}\sim \mathrm{Gamma}(1,0.001),
\]
\[
\alpha_{2}\sim \mathrm{Uniform}(0,100),
\]
\[
\beta_{2}\sim \mathrm{Uniform}(0,50).
\]
\begin{table}
\begin{center}
\caption{Summary results of different MCMC algorithms for the litters model. Runtime is presented
as seconds, and efficiency is in units of effective samples produced
per second of algorithm runtime. Time to $N$ effective samples is computed by $N/\mathrm{efficiency}$ for
static algorithms and that plus adaptation time for Auto Block and Auto Adapt algorithms.}
\label{table:1}
\vspace{2mm}
\begin{tabular}{lrrr}
\hline
\hline
Algorithms& \rule[-1.5mm]{0mm}{4mm}\hspace{2mm} Adapt time
& \hspace{2mm} Efficiency & \hspace{2mm} Time to 10000 effective samples\\
\hline
All Blocked & 0.00 & 0.5855 & 17079\\
Default & 0.00 & 1.8385 & 5439\\
All Scalar & 0.00 & 1.6870 & 5928\\
Auto Block & 21.97 & 12.1205 & 847 \\
Auto Adapt 10K & 1.14 & 9.6393 & 1038 \\
Auto Adapt 20K & 2.33 & 11.5717 & 866 \\
Auto Adapt 50K & 6.03 & 14.3932 & 701 \\
\hline
\end{tabular}
\end{center}
\end{table}
Following the setup of \citet{rue2005gaussian,turek2016automated},
we can jointly sample the top-level parameters and conjugate latent
states as the beta-binomial conjugacy relationships allow the use
of what \citet{turek2016automated} call cross-level sampling, but, for demonstration purposes, we do not
include this here.
Since the litters model mixes poorly, we run a large
number of iterations (i.e. $N=300000$) to produce stable estimates of
final MCMC efficiency. We start both Auto Block and Auto
Adapt algorithms with All Scalar and adaptively explore the
space of all given candidate samplers. We use Auto Adapt with
either 10000, 20000 or 50000 iterations per outer adaptation.
Results (Table \ref{table:1}) show that Auto Block generates
samples with MCMC efficiency about seven-fold, six-fold and
twenty-fold that of the All Scalar, Default and All
Blocked methods, respectively. We can also see that as the outer
adaptation sample size increases, the performance of Auto Adapt
improves. Final MCMC efficiencies of Auto Adapt 10K, Auto Adapt
20K and Auto Adapt 50K are 80\%, 95\% and 118\% of MCMC
efficiency of Auto Block, respectively. In addition, the
adaptation time for all cases of Auto Adapt are much shorter than
for Auto Block. Combining adaptation time and final efficiency
into the resulting time to 10000 effective samples, we see that in this
case, larger samples in each outer iteration are worth their
computational cost.
Figure \ref{fig:toy1} shows the boxplots
computed from 20 independent runs on litters model of All Blocked,
All Scalar, Auto Block, Default and Auto Adapt 50K
algorithms. The left panel of the figure confirms that the MCMC efficiency
of Auto Block clearly dominates that of the other static
algorithms. The right panel of the figure shows that the MCMC efficiency of
Auto Adapt 50K gradually improves with time. The right-most boxplot
verifies that the MCMC efficiency of the samplers selected by the
Auto Adapt algorithm (computed from large samples) is slightly
better than that of the Auto Block algorithm. Last but not least,
the Auto Adapt algorithms are much more efficient than Auto Block in
the sense that we can keep every sample, while the Auto Block algorithm
discards most of its samples.
\begin{figure}
\begin{centering}
\vspace{-0.5cm}
\includegraphics[width=8cm,height=8cm]{litterAutoBlockEfficiency.pdf}\includegraphics[width=8cm,height=8cm]{litter50000BoxplotEfficiency.pdf}
\par\end{centering}
\caption{MCMC efficiencies of different methods for the litters model. The left panel shows box-plots of the
MCMC efficiencies of the All Blocked, All Scalar, Auto Block and Default algorithms, computed from 20 replications.
The right panel shows box-plots of the MCMC efficiency of the Auto Adapt 50K algorithm, computed from 20 replications at each outer adaptation. The last (rightmost) box-plot is
computed from the full chain of 300000 generated samples. The time axis shows the average computational time over the 20 replications.}
\label{fig:toy1}
\end{figure}
\subsection{Generalized linear mixed models}
Many MCMC algorithms do not scale up well for high dimensional data.
To test the capabilities of our algorithm in such situations, we consider
a relatively large generalized linear mixed model (GLMM) \citep{gelman2006data}.
We make use of the Minnesota Health Plan dataset \citep{waller1997log}
for this model following the setup of \citet{zipunnikov2006monte}.
Specifically, let $y_{ikl}$ be the count for subject $i$ ($i=1,\ldots,121$
senior-citizens), event $k$ (either visited or called), period $l$
(one of four 6-month periods). Assume that
\[
y_{ikl}\mid\gamma_{i},v_{ik},\omega_{il}\sim \mathrm{Poisson}(\mu_{ikl})
\]
with log link,
\[
\log\mu_{ikl}=a_{0}+a_{k}+b_{l}+c_{kl}+\gamma_{i}+v_{ik}+\omega_{il},\ k=1,2,\:\mathrm{and}\: l=1,2,3,4.
\]
Here the fixed effect coefficients are $a_{k}$, $b_{l}$ and $c_{kl}$. To
achieve identifiability, we set $a_{2}=b_{4}=c_{14}=c_{21}=c_{22}=c_{23}=c_{24}=0$.
Priors for the non-zero parameters, $\beta=(a_{0},\ a_{1},\ b_{1},\
b_{2},\ b_{3},\ c_{11},\ c_{12},\ c_{13})$, are:
\[
a_{0}\sim \mathrm{N}(0,0.001),\quad a_{1}\sim \mathrm{N}(0,0.001),
\]
\[
b_{l}\sim \mathrm{N}(0,0.001)\;\mathrm{for}\ l=1,2,3,\quad c_{1l}\sim \mathrm{N}(0,0.001)\;\mathrm{for}\ l=1,2,3.
\]
The random effect variables are $\gamma_{i}$, $v_{ik}$ and
$\omega_{il}$. Their distributions are:
\[
\sigma_{\gamma}^{2}\sim \mathrm{N}(0,10),\quad \gamma_{i}\sim \mathrm{N}(0,\sigma_{\gamma}),
\]
\[
\sigma_{v}^{2}\sim \mathrm{N}(0,10),\quad v_{ik}\sim \mathrm{N}(0,\sigma_{v}),
\]
\[
\sigma_{\omega}^{2}\sim \mathrm{N}(0,10),\quad \omega_{il}\sim \mathrm{N}(0,\sigma_{\omega}).
\]
\begin{table}
\begin{center}
\caption{Summary results of different MCMC algorithms for the GLMM model. Runtime is given
in seconds, and efficiency is in units of effective samples produced
per second of algorithm runtime. Time to $N$ effective samples is computed as $N/\mathrm{efficiency}$ for
static algorithms, and as $N/\mathrm{efficiency}$ plus adaptation time for the Auto Block and Auto Adapt algorithms.}
\label{table:2}
\vspace{2mm}
\begin{tabular}{lrrr}
\hline
\hline
Algorithms& \rule[-1.5mm]{0mm}{4mm}\hspace{2mm} Adapt time
& \hspace{2mm} Efficiency & \hspace{2mm} Time to 10000 effective samples\\
\hline
All Blocked & 0.00 & 0.0031 & 6451613\\
Default & 0.00 & 0.4641 & 43094\\
All Scalar & 0.00 & 0.4672 & 42808\\
Auto Block & 1019.35 & 0.8420 & 12896\\
Auto Adapt 5K & 247.15 & 0.5289 & 19154 \\
Auto Adapt 10K & 465.92 & 0.7349 & 14072 \\
Auto Adapt 20K & 1017.38 & 0.8594 & 12652 \\
\hline
\end{tabular}
\end{center}
\end{table}
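The last column of the table can be reproduced from the other two; the following is a minimal sketch of the computation described in the caption, using the Auto Block and Auto Adapt 5K rows as examples:

```python
def time_to_n_effective(n, efficiency, adapt_time=0.0):
    """Time to reach n effective samples: n divided by effective samples
    per second, plus any one-off adaptation cost for adaptive algorithms."""
    return n / efficiency + adapt_time

# Auto Block row of Table 2: 10000 / 0.8420 + 1019.35, about 12896 seconds.
auto_block = time_to_n_effective(10000, 0.8420, 1019.35)

# Auto Adapt 5K row: 10000 / 0.5289 + 247.15, about 19154 seconds.
auto_adapt_5k = time_to_n_effective(10000, 0.5289, 247.15)
```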
It should be noted that the GLMM model is by far the largest example
considered, containing nearly $2000$ stochastic model components,
including both observations and a large number of independent
random effects. Since this example is rather
computationally intensive, we run our Auto Adapt algorithm with
smaller numbers of iterations per outer adaptation, and a
smaller number of iterations to estimate final efficiency, than we did
for the litters model. Specifically, we used sample sizes of 5000, 10000
and 20000 per outer adaptation, and we used $N=50000$ for computing
final efficiency.
In this example (Table \ref{table:2}), All Scalar sampling produces
an MCMC efficiency of about $0.47$, while the All Blocked algorithm,
which consists of a single block sampler of
dimension $858$, has an MCMC efficiency of approximately $0.003$. In this
case, All Blocked samples all $858$ dimensions jointly, which
requires roughly three times the computation time of All Scalar
and yields only a rather low ESS. The Default algorithm performs
similarly to All Scalar, but both perform much worse
than Auto Block and Auto Adapt. In this example, it is clear that the Auto Adapt and
Auto Block methods all yield dramatic improvements, even when the
adaptation time is taken into account. Among these automated methods,
Auto Block performs slightly worse than Auto Adapt 20K in both
computational time and MCMC efficiency. Overall, Auto Adapt 20K
appears to be the most efficient method in terms of time to 10000 effective
samples. One interpretation is that Auto Adapt 20K trades off well between
adaptation time and MCMC efficiency in this model.
Figure \ref{fig:GLMM} shows that the Auto Adapt algorithm is very
competitive with the Auto Block algorithm. This comes both from the
flexibility to trade off the number of outer adaptations against the
adaptation time needed to reach a good sampler, and from the larger
space of kernels being explored. Since MCMC efficiency depends
strongly on the hierarchical model structure, using scalar and
multivariate normal random walks alone, as the Auto Block algorithm
does, can be quite limiting. Auto Adapt can overcome this limitation
through the flexibility to choose different types of samplers. We will
see this more strongly in the next example, where the model is more complex.
\begin{figure}
\begin{centering}
\vspace{-0.5cm}
\includegraphics[width=8cm,height=8cm]{GLMMAutoBlockEfficiency.pdf}\includegraphics[width=8cm,height=8cm]{GLMM20000BoxplotEfficiency.pdf}
\par\end{centering}
\caption{MCMC efficiencies of different methods for the GLMM model. The left panel shows box-plots of the
MCMC efficiencies of the All Blocked, All Scalar, Auto Block and Default algorithms, computed from 20 replications.
The right panel shows box-plots of the MCMC efficiency of the Auto Adapt 20K algorithm, computed from 20 replications at each outer adaptation. The last (rightmost) box-plot is
computed from the chain of 50000 generated samples. The time axis shows the average computational time over the 20 replications.}
\label{fig:GLMM}
\end{figure}
\subsection{Spatial model}
In this section, we consider a hierarchical spatial model as the final
example, using the classical scallops dataset. This dataset is chosen
because we want to compare our approach with other standard approaches
in the presence of spatial dependence. The data record scallop
abundance at 148 locations along the coastline from New York to New
Jersey in 1993, surveyed by the Northeast Fisheries Science Center of
the National Marine Fisheries Service and made publicly available at
http://www.biostat.umn.edu/\textasciitilde{}brad/data/myscallops.txt.
It has been analyzed many times; see
\citet{ecker1994geostatistical,ecker1997bayesian,banerjee2014hierarchical}
and references therein. Following
\citeauthor{banerjee2014hierarchical}, assume the log-abundance
$\mathbf{g}=(g_{1},\ldots,g_{N})$ follows a multivariate normal
distribution with mean $\bm{\mu}$ and covariance matrix $\mathbf{\Sigma}$,
defined by covariances that decay exponentially as a function of distance. Specifically,
let $y_{i}$ be measured scallop abundance at site $i$,
$d_{i,j}$ be the distance between sites $i$ and $j$, and $\rho$ be a
valid correlation. Then
\[
\mathbf{g}\sim\mathrm{N}\left(\bm{\mu},\mathbf\Sigma\right),
\]
where each component $\Sigma_{ij}=\sigma^{2}\exp(-d_{i,j}/\rho).$
We model observations as $y_{i} \sim
\mathrm{Poisson}(\mathrm{exp}(g_{i}))$. Priors for $\sigma$ and
$\rho$ are Uniform over a large range of interest.
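As a concrete illustration of this model, the following sketch builds the exponential covariance and simulates data; the coordinates and the values of $\sigma^{2}$ and $\rho$ are illustrative assumptions (the real dataset has 148 sites):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative site coordinates; the real dataset has 148 survey sites.
n_sites = 10
coords = rng.uniform(0.0, 1.0, size=(n_sites, 2))

# Pairwise distances d_ij between sites.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

sigma2, rho = 1.5, 0.4              # illustrative values of sigma^2 and rho
Sigma = sigma2 * np.exp(-d / rho)   # Sigma_ij = sigma^2 exp(-d_ij / rho)

mu = np.zeros(n_sites)
g = rng.multivariate_normal(mu, Sigma)   # latent log-abundance
y = rng.poisson(np.exp(g))               # observed counts y_i
```

Because every entry of $\mathbf{\Sigma}$ depends on both $\sigma^{2}$ and $\rho$, the two parameters enter the likelihood jointly, which is the source of the posterior correlation discussed below.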
The parameters in the posterior distribution are expected to be correlated,
since the covariance structure induces a trade-off between $\sigma$
and $\rho$. This can be sampled well by the Auto Block algorithm,
and we would like to show that our approach can achieve even higher
efficiency with a lower computational cost of adaptation.
This spatial model, with 858 parameters, is computationally expensive
to estimate. Therefore, we will use Auto Adapt 5K, Auto Adapt
10K and Auto Adapt 20K algorithms for comparison and run
$N=50000$ for estimating final efficiency.
As can be seen from Table \ref{table:3}, the All Blocked and
Default algorithms mix very poorly, resulting in extremely low
efficiencies of 0.01 and 0.002, respectively. The All Scalar
algorithm, while achieving a higher ESS, runs slowly because large matrix
calculations are needed for every univariate sampler. The Auto Block
algorithm, on the other hand, selects an optimal threshold to cut the
entire hierarchical clustering tree into different groups, increasing
the ESS by about a factor of three. With a few small blocks, the
computational cost of Auto Block is somewhat lower than that of the
All Scalar algorithm. As a result, its mean efficiency is about 3.5
times that of All Scalar. Meanwhile, our Auto Adapt 5K, 10K and 20K
algorithms perform best. It should be noted that the Auto Adapt
algorithm achieves good mixing with adaptation times that are only
15.5\%, 32.5\% and 59\% of the adaptation time of Auto Block. In
Figure \ref{fig:spatial}, while the left panel shows a clear
distinction between Auto Block and the other static algorithms, the
right panel shows that Auto Adapt 20K surpasses Auto Block in just
a few outer iterations, indicating substantial improvements in some models.
\begin{table}
\begin{center}
\caption{Summary results of different MCMC algorithms for the spatial model. Runtime is given
in seconds, and efficiency is in units of effective samples produced
per second of algorithm runtime. Time to $N$ effective samples is computed as $N/\mathrm{efficiency}$ for
static algorithms, and as $N/\mathrm{efficiency}$ plus adaptation time for the Auto Block and Auto Adapt algorithms.}
\label{table:3}
\vspace{2mm}
\begin{tabular}{lrrr}
\hline
\hline
Algorithms& \rule[-1.5mm]{0mm}{4mm}\hspace{2mm} Adapt time
& \hspace{2mm} Efficiency & \hspace{2mm} Time to 10000 effective samples\\
\hline
All Blocked & 0.00 & 0.0100 & 1000000\\
Default & 0.00 & 0.0020 & 5000000\\
All Scalar & 0.00 & 0.1150 & 86956\\
Auto Block & 19094.89 & 0.3565 & 47145\\
Auto Adapt 5K & 2967.56 & 0.4420 & 25592 \\
Auto Adapt 10K & 6221.61 & 0.4565 & 28127 \\
Auto Adapt 20K & 11278.78 & 0.4948 & 31488 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\begin{centering}
\vspace{-0.5cm}
\includegraphics[width=8cm,height=8cm]{spatialAutoBlockEfficiency.pdf}\includegraphics[width=8cm,height=8cm]{spatial5000BoxplotEfficiency.pdf}
\par\end{centering}
\caption{MCMC efficiencies of different methods for the spatial model. The left panel shows box-plots of the
MCMC efficiencies of the All Blocked, All Scalar, Auto Block and Default algorithms, computed from 20 replications.
The right panel shows box-plots of the MCMC efficiency of the Auto Adapt 20K algorithm, computed from 20 replications at each outer adaptation. The last (rightmost) box-plot is
computed from the chain of 10000 generated samples. The time axis shows the average computational time over the 20 replications.}
\label{fig:spatial}
\end{figure}
\section{Discussion}\label{discussion}
We have proposed a general Auto Adapt MCMC
algorithm. Our algorithm traverses a space of valid MCMC kernels to
find an efficient algorithm automatically. There is only one previous
approach, namely Auto Block sampling, of this kind that we are
aware of. We have shown that our approach can substantially outperform
Auto Block in some cases, and that both outperform simple static
approaches. Using some benchmark models, we can observe that our
approach can yield orders-of-magnitude improvements.
The comparisons presented have deliberately used fairly simple
samplers as options for Auto Adapt in order to avoid comparisons among
vastly different computational implementations. A major feature of
our framework is that it can incorporate almost any sampler as a
candidate and almost any strategy for choosing new kernels from
compositions of samplers based on results so far. Samplers to be
explored in the future could include auxiliary-variable algorithms such
as slice sampling, or derivative-based sampling algorithms such as
Hamiltonian Monte Carlo \citep{duane1987hybrid}. Now that the basic
framework is established and shown to be useful in simple cases, it
merits extension to more advanced cases.
The Auto Adapt method can be viewed as a generalization of the
Auto Block method. It is more general in the sense that it can use
more kinds of samplers and explore the space of samplers more
generally than the correlation-clustering of Auto Block. Thus, our framework
can be considered to provide a broad class of automated kernel construction
algorithms that use a wide range of sampling algorithms as components.
If block sampling is included in the space of the candidate samplers,
choosing optimal blocks is important and can greatly increase the
efficiency of the algorithm. For this reason, we extended the cutting
of a hierarchical cluster tree to allow different cut heights on
different branches (different parts of the model). This differs from
Auto Block, which forms all blocks by cutting the entire tree at the
same height. We also include multivariate adaptive samplers other
than the normal random walk, such as the automated factor slice
sampler and the automated factor random-walk sampler. With these extensions, the
final efficiency achieved by our algorithm, specifically among blocking
schemes, is often substantially better and is found in a shorter time.
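To illustrate the correlation-clustering idea that both methods build on, the following sketch forms sampling blocks by cutting a hierarchical clustering tree of posterior correlations at a single height, as Auto Block does (our extension instead allows different cut heights on different branches). The posterior samples and the cut threshold are illustrative:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)

# Illustrative posterior samples for 6 parameters with two correlated pairs.
z = rng.normal(size=(5000, 6))
z[:, 1] = 0.9 * z[:, 0] + 0.1 * z[:, 1]
z[:, 3] = 0.9 * z[:, 2] + 0.1 * z[:, 3]

corr = np.corrcoef(z, rowvar=False)
dist = 1.0 - np.abs(corr)   # distance between parameters: 1 - |correlation|

tree = linkage(squareform(dist, checks=False), method="average")

# Auto Block style: cut the entire tree at one height to form sampling blocks.
blocks = fcluster(tree, t=0.5, criterion="distance")
```

Here the two correlated pairs land in their own blocks while the remaining parameters stay as singletons; allowing branch-specific cut heights generalizes this single threshold.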
Beyond hierarchical clustering, there are other approaches one might
consider to find efficient blocking schemes. One such approach
would be to use the structure of the graph instead of posterior
correlations to form blocks. This would allow conservation of
calculations that are shared by some parts of the graph, whether or
not they are correlated. Another future direction could be to improve
how a new kernel is determined from previous results, essentially to
determine an effective strategy for exploring the very
high-dimensional kernel space. Finally, the tradeoff
between computational cost and the accuracy of effective sample
size estimates is worth further exploration.
\bibliographystyle{chicago}
\section{Introduction and Related Work}
Generative models for real-world graphs have important applications in many domains, including modeling physical and social interactions, discovering new chemical and molecular structures, and constructing knowledge graphs.
Development of generative graph models has a rich history, and many methods have been proposed that can generate graphs based on a priori structural assumptions~\cite{newman2010networks}.
However, a key open challenge in this area is developing methods that can directly {\em learn} generative models from an observed set of graphs.
Developing generative models that can learn directly from data is an important step towards improving the fidelity of generated graphs, and paves a way for new kinds of applications, such as discovering new graph structures and completing evolving graphs.
In contrast, traditional generative models for graphs (\emph{e.g.}, Barab\'asi-Albert model, Kronecker graphs, exponential random graphs, and stochastic block models) \cite{erdos1959random,leskovec2010kronecker,albert2002statistical,airoldi2008mixed,leskovec2007graph,robins2007introduction} are hand-engineered to model a particular family of graphs, and thus do not have the capacity to directly learn the generative model from observed data. For example, the Barab\'asi-Albert model is carefully designed to capture the scale-free nature of empirical degree distributions, but fails to capture many other aspects of real-world graphs, such as community structure.
Recent advances in deep generative models, such as variational autoencoders (VAE) \cite{kingma2013auto} and generative adversarial networks (GAN) \cite{goodfellow2014generative}, have made important progress towards generative modeling for complex domains, such as image and text data.
Building on these approaches a number of deep learning models for generating graphs have been proposed~\cite{kipf2016variational,grovergraphite,simonovsky2018graphvae,li2018learning}.
For example, \citealt{simonovsky2018graphvae} propose a VAE-based approach, while \citealt{li2018learning} propose a framework based upon graph neural networks.
However, these recently proposed deep models are either limited to learning from a single graph \cite{kipf2016variational,grovergraphite} or generating small graphs with 40 or fewer nodes \cite{li2018learning,simonovsky2018graphvae}---limitations that stem from three fundamental challenges in the graph generation problem:
\begin{itemize}[itemsep=0pt,topsep=0pt,leftmargin=11pt]
\item {\bf Large and variable output spaces:}
To generate a graph with $n$ nodes the generative model has to output $n^2$ values to fully specify its structure. Also, the number of nodes $n$ and edges $m$ varies between different graphs and a generative model needs to accommodate such complexity and variability in the output space.
\item {\bf Non-unique representations:}
In the general graph generation problem studied here, we want distributions over possible graph structures without assuming a fixed set of nodes (\emph{e.g.}, to generate candidate molecules of varying sizes).
In this general setting, a graph with $n$ nodes can be represented by up to $n!$ equivalent adjacency matrices, each corresponding to a different, arbitrary node ordering/numbering. Such high representation complexity is challenging to model and makes it expensive to compute and then optimize objective functions, like reconstruction error, during training.
For example, GraphVAE \cite{simonovsky2018graphvae} uses approximate graph matching to address this issue, requiring $O(n^4)$ operations in the worst case \cite{cho2014finding}.
\item {\bf Complex dependencies:}
Edge formation in graphs involves complex structural dependencies. For example, in many real-world graphs two nodes are more likely to be connected if they share common neighbors \cite{newman2010networks}. Therefore, edges cannot be modeled as a sequence of independent events, but rather need to be generated jointly, where each next edge depends on the previously generated edges.
\citealt{li2018learning} address this problem using graph neural networks to perform a form of ``message passing''; however, while expressive, this approach takes $O(mn^2\mathrm{diam}(G))$ operations to generate a graph with $m$ edges, $n$ nodes and diameter $\mathrm{diam}(G)$.
\end{itemize}
\vspace{7pt}
\xhdr{Present work}
Here we address the above challenges and present \textit{Graph Recurrent Neural Networks} (\textbf{GraphRNN\xspace}), a scalable framework for learning generative models of graphs.
GraphRNN\xspace\ models a graph in an autoregressive (or recurrent) manner---as a sequence of additions of new nodes and edges---to capture the complex joint probability of all nodes and edges in the graph.
In particular, GraphRNN\xspace\ can be viewed as a hierarchical model, where a {\em graph-level RNN} maintains the state of the graph and generates new nodes, while an {\em edge-level RNN} generates the edges for each newly generated node.
Due to its autoregressive structure, GraphRNN\xspace\ can naturally accommodate variable-sized graphs, and we introduce a breadth-first-search (BFS) node-ordering scheme to drastically improve scalability.
This BFS approach alleviates the fact that graphs have non-unique representations---by collapsing distinct representations to unique BFS trees---and the tree-structure induced by BFS allows us to limit the number of edge predictions made for each node during training.
Our approach requires $O(n^2)$ operations on worst-case (\emph{i.e.}, complete) graphs, but we prove that our BFS ordering scheme permits sub-quadratic complexity in many cases.
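The BFS ordering idea can be illustrated with a minimal sketch (not the authors' implementation): relabelling nodes by BFS discovery order collapses many node permutations of the same graph onto far fewer sequences:

```python
from collections import deque

def bfs_order(adj, start=0):
    """Nodes in BFS discovery order from `start`, with deterministic
    tie-breaking; `adj` maps each node to its set of neighbours."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in sorted(adj[u]):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order

# 4-cycle 0-1-2-3-0: BFS from node 0 visits 0, then 1 and 3, then 2.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
order = bfs_order(adj)   # [0, 1, 3, 2]
```

Because every newly discovered node attaches only to nodes in the current BFS frontier, edge predictions under this ordering can be restricted to a bounded window of recent nodes.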
In addition to the novel GraphRNN\xspace framework, we also introduce a comprehensive suite of benchmark tasks and baselines for the graph generation problem, with all code made publicly available\footnote{The code is available at \url{https://github.com/snap-stanford/GraphRNN}; the appendix is available at \url{https://arxiv.org/abs/1802.08773}.}.
A key challenge for the graph generation problem is quantitative evaluation of the quality of generated graphs.
Whereas prior studies have mainly relied on visual inspection or first-order moment statistics for evaluation, we provide a comprehensive evaluation setup by comparing graph statistics such as the degree distribution, clustering coefficient distribution and motif counts for two sets of graphs based on variants of the Maximum Mean Discrepancy (MMD) \cite{gretton2012kernel}.
This quantitative evaluation approach can compare higher order moments of graph-statistic distributions and provides a more rigorous evaluation than simply comparing mean values.
Extensive experiments on synthetic and real-world graphs of varying size demonstrate the significant improvement GraphRNN\xspace\ provides over baseline approaches, including the most recent deep graph generative models as well as traditional models.
Compared to traditional baselines (\emph{e.g.}, stochastic block models), GraphRNN\xspace\ is able to generate high-quality graphs on all benchmark datasets, while the traditional models are only able to achieve good performance on specific datasets that exhibit special structures.
Compared to other state-of-the-art deep graph generative models, GraphRNN\xspace\ is able to achieve superior quantitative performance---in terms of the MMD distance between the generated and test set graphs---while also scaling to graphs that are $50\times$ larger than what these previous approaches can handle.
Overall, GraphRNN\xspace\ reduces MMD by $80\%\text{-}90\%$ over the baselines on average across all datasets and effectively generalizes, achieving comparatively high log-likelihood scores on held-out data.
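To make the evaluation metric concrete, the following is a minimal sketch of a (biased) squared-MMD estimate between two sets of graph-statistic vectors; the Gaussian kernel and bandwidth here are illustrative, not necessarily the exact variant used in our experiments:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def mmd_squared(X, Y, kernel=gaussian_kernel):
    """Biased estimate of squared MMD between two samples, where each
    row is one graph's statistic vector (e.g. a degree histogram)."""
    kxx = np.mean([kernel(a, b) for a in X for b in X])
    kyy = np.mean([kernel(a, b) for a in Y for b in Y])
    kxy = np.mean([kernel(a, b) for a in X for b in Y])
    return kxx + kyy - 2.0 * kxy

# Identical sets of degree histograms give a squared MMD of zero,
# while a shifted set gives a strictly positive value.
X = np.array([[0.2, 0.5, 0.3], [0.1, 0.6, 0.3]])
Y = X.copy()
Z = np.array([[0.9, 0.05, 0.05], [0.8, 0.1, 0.1]])
```

Because the kernel compares whole statistic vectors rather than their means, this estimate is sensitive to higher-order differences between the two distributions.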
\section{Further Related Work}
In addition to the deep graph generative approaches and traditional graph generation approaches surveyed above, our framework also builds on a variety of other methods.
\xhdr{Molecule and parse-tree generation}
There has been related domain-specific work on generating candidate molecules and on generating parse trees in natural language processing.
Most previous work on discovering molecular structures makes use of expert-crafted sequence representations of molecular graph structures (SMILES) \cite{olivecrona2017molecular,segler2017generating,gomez2016automatic}.
Most recently, SD-VAE \cite{dai2018syntax-directed} introduced a grammar-based approach to generate structured data, including molecules and parse trees.
In contrast to these works, we consider the fully general graph generation setting without assuming features or special structures of graphs.
\xhdr{Deep autoregressive models}
Deep autoregressive models decompose joint probability distributions as a product of conditionals, a general idea that has achieved striking successes in the image \cite{oord2016pixel} and audio \cite{oord2016wavenet} domains.
Our approach extends these successes to the domain of generating graphs.
Note that the DeepGMG algorithm \cite{li2018learning} and the related prior work of \citealt{johnson2016learning} can also be viewed as deep autoregressive models of graphs.
However, unlike these methods, we focus on providing a scalable (\emph{i.e.}, $O(n^2)$) algorithm that can generate general graphs.
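The underlying autoregressive idea, scoring a joint sequence as a product of conditionals, can be sketched for a toy binary edge sequence (the conditional below is an illustrative stand-in for a learned RNN):

```python
import numpy as np

def joint_log_prob(sequence, conditional):
    """log p(x) = sum_i log p(x_i | x_{<i}) for a binary sequence, given
    `conditional(prefix)` returning p(x_i = 1 | prefix)."""
    logp = 0.0
    for i, x in enumerate(sequence):
        p1 = conditional(sequence[:i])
        logp += np.log(p1 if x == 1 else 1.0 - p1)
    return logp

# Toy conditional: an edge is more likely right after another edge,
# so edges are not modeled as independent events.
def cond(prefix):
    return 0.7 if (prefix and prefix[-1] == 1) else 0.3

logp = joint_log_prob([1, 1, 0], cond)   # log(0.3 * 0.7 * 0.3)
```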
\section{Proposed Approach}
\label{sec:proposed}
We first describe the background and notation for building generative models of graphs, and then describe our autoregressive framework, GraphRNN\xspace.
\subsection{Notations and Problem Definition}
An undirected graph\footnote{We focus on undirected graphs. Extensions to directed graphs and graphs with features are discussed in the Appendix.} $G=(V, E)$ is defined by its node set $V=\{v_1,...,v_n\}$ and edge set $E=\{(v_i,v_j)|v_i,v_j\in V\}$. One common way to represent a graph is using an adjacency matrix, which requires a node ordering $\pi$ that maps nodes to rows/columns of the adjacency matrix. More precisely, $\pi$ is a permutation function over $V$ (\emph{i.e.}, $(\pi(v_1),...,\pi(v_n))$ is a permutation of $(v_1,...,v_n)$).
We define $\Pi$ as the set of all $n!$ possible node permutations. Under a node ordering $\pi$, a graph $G$ can then be represented by the adjacency matrix $A^{\pi}\in\mathbb{R}^{n\times n}$, where $A^\pi_{i,j} = \mathds{1}[(\pi(v_i),\pi(v_j))\in E]$. Note that elements in the set of adjacency matrices $A^\Pi=\{A^\pi|\pi\in\Pi\}$ all correspond to the same underlying graph.
The goal of {\em learning generative models of graphs} is to learn a distribution $p_{model}(G)$ over graphs, based on a set of observed graphs $\mathbb{G}=\{G_1,...,G_s\}$ sampled from data distribution $p(G)$, where each graph $G_i$ may have a different number of nodes and edges.
When representing $G\in\mathbb{G}$, we further assume that we may observe any node ordering $\pi$ with equal probability, \emph{i.e.},
$p(\pi)=\frac{1}{n!}, \forall\pi\in\Pi$.
Thus, the generative model needs to be capable of generating graphs where each graph could have exponentially many representations, which is distinct from previous generative models for images, text, and time series.
Finally, note that traditional graph generative models (surveyed in the introduction) usually assume a single input training graph. Our approach is more general and can be applied to a single as well as multiple input training graphs.
\subsection{A Brief Survey of Possible Approaches}\label{sec:possible}
We start by surveying some general alternative approaches for modeling $p(G)$, in order to highlight the limitations of existing non-autoregressive approaches and motivate our proposed autoregressive architecture.
\xhdr{Vector-representation based models}
One na\"ive approach would be to represent $G$ by flattening $A^\pi$ into a vector in $\mathbb{R}^{n^2}$, which is then used as input to any off-the-shelf generative model, such as a VAE or GAN.
However, this approach suffers from serious drawbacks: it cannot naturally generalize to graphs of varying size, and requires training on all possible node permutations or specifying a canonical permutation, both of which require $O(n!)$ time in general.
\xhdr{Node-embedding based models}
There have been recent successes in encoding a graph's structural properties into node embeddings \cite{hamilton2017representation}, and one approach to graph generation could be to define a generative model that decodes edge probabilities based on pairwise relationships between learned node embeddings (as in \citealt{kipf2016variational}).
However, this approach is only well-defined when given a fixed-set of nodes, limiting its utility for the general graph generation problem, and approaches based on this idea are limited to learning from a single input graph \cite{kipf2016variational,grovergraphite}.
\subsection{GraphRNN: Deep Generative Models for Graphs}\label{sec:autoregressive}
The key idea of our approach is to represent graphs under different node orderings as sequences, and then to build an autoregressive generative model on these sequences.
As we will show, this approach does not suffer from the drawbacks common to other general approaches (\textit{c.f.}, Section \ref{sec:possible}), allowing us to model graphs of varying size with complex edge dependencies, and we introduce a BFS node ordering scheme to drastically reduce the complexity of learning over all possible node sequences (Section \ref{sec:bfs}).
In this autoregressive framework, the model complexity is greatly reduced by weight sharing with recurrent neural networks (RNNs).
Figure~\ref{fig:arch} illustrates our GraphRNN\xspace\ approach, where the main idea is that we decompose graph generation into a process that generates a sequence of nodes (via a {\em graph-level RNN}), and another process that then generates a sequence of edges for each newly added node (via an {\em edge-level RNN}).
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{figs/arch}
\vspace{-10pt}
\caption{GraphRNN\ at inference time.
Green arrows denote the graph-level RNN that encodes the ``graph state'' vector $h_i$ in its hidden state, updated by the predicted adjacency vector $S^\pi_{i}$ for node $\pi(v_i)$.
Blue arrows represent the edge-level RNN, whose hidden state is initialized by the graph-level RNN, that is used to predict the adjacency vector $S^\pi_{i}$ for node $\pi(v_i)$.
}
\label{fig:arch}
\vspace{-10pt}
\end{figure}
\subsubsection{Modeling graphs as sequences}
We first define a mapping $f_S$ from graphs to sequences, where
for a graph $G\sim p(G)$ with $n$ nodes under node ordering $\pi$, we have
\begin{equation}
\label{eq:seq_def}
S^\pi=f_S(G,\pi)=(S^\pi_1,...,S^\pi_{n}),
\end{equation}
where each element $S^\pi_i\in \{0,1\}^{i-1}, i\in \{1,...,n\}$ is an adjacency vector representing the edges between node $\pi(v_{i})$ and the previous nodes $\pi(v_{j}), j\in \{1,...,i-1\}$ already in the graph:\footnote{We prohibit self-loops and $S^\pi_1$ is defined as an empty vector.}
\begin{equation}
S^\pi_i = (A^\pi_{1,i},...,A^\pi_{i-1,i})^T, \forall i\in \{2,...,n\}.
\end{equation}
For undirected graphs, $S^\pi$ determines a unique graph $G$, and we write the mapping as $f_G(\cdot)$ where $f_G(S^{\pi})=G$.
Thus, instead of learning $p(G)$, whose sample space cannot be easily characterized,
we sample the auxiliary $\pi$ to get the observations of $S^\pi$ and learn $p(S^\pi)$, which can be modeled autoregressively due to the sequential nature of $S^\pi$.
At inference time, we can sample $G$ without explicitly computing $p(G)$ by sampling $S^\pi$, which
maps to $G$ via $f_G$.
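A minimal sketch of the $f_S$/$f_G$ pair under a fixed node ordering (the function names mirror the text, but the implementation is our own; the real model never materializes $A^\pi$ at inference time):

```python
import numpy as np

def f_S(A):
    """Sequence S^pi from adjacency matrix A under a fixed node ordering;
    element i collects the edges to nodes 1..i-1 (S^pi_1 is empty)."""
    return [tuple(A[:i, i]) for i in range(A.shape[0])]

def f_G(S):
    """Inverse map f_G: rebuild the (undirected) adjacency matrix."""
    n = len(S)
    A = np.zeros((n, n), dtype=int)
    for i, row in enumerate(S):
        for j, bit in enumerate(row):
            A[j, i] = A[i, j] = bit
    return A

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])   # 3-node path
S = f_S(A)                  # [(), (1,), (0, 1)]
```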
Given the above definitions, we can write $p(G)$ as the marginal distribution of the joint distribution $p(G,S^\pi)$:
\begin{equation}\label{eq:graphlike}
p(G) = \sum_{S^\pi}{ p(S^\pi) \ \mathds{1}[f_G(S^\pi)=G]} ,
\end{equation}
where $p(S^\pi)$ is the distribution that we want to learn using a generative model.
Due to the sequential nature of $S^\pi$, we further decompose $p(S^\pi)$ as the product of conditional distributions over the elements:
\begin{equation}
p(S^\pi) = \prod_{i=1}^{n+1}{p(S^\pi_i|S^\pi_1,...,S^\pi_{i-1})}
\end{equation}
where we set $S^\pi_{n+1}$ as the end of sequence token $\texttt{EOS}$, to represent sequences with variable lengths. We simplify $p(S^\pi_i|S^\pi_1,...,S^\pi_{i-1})$ as $p(S^\pi_i|S^\pi_{<i})$ in further discussions.
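This decomposition makes the sequence likelihood a sum of per-step log conditionals. The toy helper below (our own; it assumes, purely for illustration, that each conditional factorizes into independent Bernoulli entries with given parameters $\theta_i$) computes that sum:

```python
import numpy as np

def seq_log_likelihood(seq, thetas):
    """log p(S^pi) as a sum of per-step log conditionals, assuming each
    conditional is a product of independent Bernoullis with parameters
    theta_i (a toy helper; names and shapes are our own)."""
    ll = 0.0
    for s, theta in zip(seq, thetas):
        s, theta = np.asarray(s, float), np.asarray(theta, float)
        ll += float(np.sum(s * np.log(theta) + (1 - s) * np.log1p(-theta)))
    return ll

seq = [np.array([1]), np.array([0, 1])]          # toy observed sequence
thetas = [np.array([0.9]), np.array([0.2, 0.8])] # toy per-step parameters
ll = seq_log_likelihood(seq, thetas)
```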
\begin{algorithm}[t]
\caption{GraphRNN inference algorithm}
\label{alg:graphrnn}
\begin{algorithmic}
\STATE {\bfseries Input:} RNN-based transition module $f_{trans}$, output module $f_{out}$, probability distribution $\mathcal{P}_{\theta_i}$ parameterized by $\theta_i$, start token $\texttt{SOS}$, end token $\texttt{EOS}$, empty graph state $h'$
\STATE {\bfseries Output:} Graph sequence $S^{\pi}$
\STATE $S^{\pi}_1=\texttt{SOS}$, $h_1 = h'$, $i=1$
\REPEAT
\STATE $i=i+1$
\STATE $h_i = f_{\mathrm{trans}}( h_{i-1},S^{\pi}_{i-1})$ \COMMENT{update graph state}
\STATE $\theta_i = f_{\mathrm{out}}(h_i)$
\STATE $S^{\pi}_i \sim \mathcal{P}_{\theta_i}$ \COMMENT{sample node $i$'s edge connections}
\UNTIL{$S^{\pi}_i$ is $\texttt{EOS}$}
\STATE {\bfseries Return} $S^{\pi}=(S^{\pi}_1,...,S^{\pi}_i)$
\end{algorithmic}
\end{algorithm}
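A toy instantiation of Algorithm~\ref{alg:graphrnn}, where $f_{\mathrm{trans}}$ and $f_{\mathrm{out}}$ are hand-written placeholders rather than trained networks, and an all-zero adjacency vector is read as \texttt{EOS} (both choices are assumptions of this sketch, made only to show the control flow):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4                       # fixed adjacency-vector width (see the BFS section)

def f_trans(h, s):          # placeholder state update, not a trained GRU
    return 0.5 * h + 0.1 * float(np.sum(s))

def f_out(h):               # placeholder output module -> edge probabilities
    return np.full(M, 1.0 / (1.0 + np.exp(h)))

def sample_graph_sequence(max_nodes=20):
    h, seq = 1.0, [np.zeros(M, dtype=int)]     # SOS: empty adjacency vector
    for _ in range(1, max_nodes):
        h = f_trans(h, seq[-1])
        theta = f_out(h)                       # parameters of P_theta
        s = (rng.random(M) < theta).astype(int)
        if s.sum() == 0:                       # all-zero vector read as EOS
            break                              # (a convention we assume)
        seq.append(s)
    return seq

seq = sample_graph_sequence()
```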
\subsubsection{The GraphRNN framework}
So far we have transformed the modeling of $p(G)$ to modeling $p(S^{\pi})$, which we further decomposed into the product of conditional probabilities $p(S^\pi_i|S^\pi_{<i})$. Note that $p(S^\pi_i|S^\pi_{<i})$ is highly complex as it has to capture how node $\pi(v_i)$ links to previous nodes based on how previous nodes are interconnected among each other.
Here we propose to parameterize $p(S^\pi_i|S^\pi_{<i})$ using expressive neural networks to model the complex distribution. To achieve scalable modeling, we let the neural networks share weights across all time steps $i$.
In particular, we use an RNN that consists of a \textit{state-transition} function and an \textit{output} function:
\begin{align}\label{eq:rnnframework}
h_i &= f_{\mathrm{trans}}({h}_{i-1}, S^\pi_{i-1}),\\
\theta_{i} &= f_{\mathrm{out}}({h}_i),
\end{align}
where $ h_i \in \mathbb{R}^{d}$ is a vector that encodes the state of the graph generated so far, $S^\pi_{i-1}$ is the adjacency vector for the most recently generated node $i-1$, and $\theta_i$ specifies the distribution of next node's adjacency vector (\emph{i.e.}, $S^{\pi}_i \sim \mathcal{P}_{\theta_i}$).
In general, $f_{\mathrm{trans}}$ and $f_{\mathrm{out}}$ can be arbitrary neural networks, and $\mathcal{P}_{\theta_i}$ can be an arbitrary distribution over binary vectors.
This general framework is summarized in Algorithm~\ref{alg:graphrnn}.
Note that the proposed problem formulation is fully general; we discuss and present some specific variants with implementation details in the next section.
Note also that RNNs require fixed-size input vectors, while we previously defined $S^\pi_i$ as having varying dimensions depending on $i$; we describe an efficient and flexible scheme to address this issue in
Section \ref{sec:bfs}.
\subsubsection{GraphRNN variants}
Different variants of the GraphRNN model correspond to different assumptions about $p(S^\pi_i|S^\pi_{<i})$. Recall that each dimension of $S^\pi_i$ is a binary value that models existence of an edge between the new node $\pi(v_{i})$ and a previous node $\pi(v_{j}),j\in \{1,...,i-1\}$.
We propose two variants of GraphRNN, both of which implement the transition function $f_{\mathrm{trans}}$ (\emph{i.e.}, the graph-level RNN) as a Gated Recurrent Unit (GRU)~\cite{chung2014empirical} but differ in the implementation of $f_{\mathrm{out}}$ (\emph{i.e.}, the edge-level model).
Both variants are trained using stochastic gradient descent with a maximum likelihood loss over $S^\pi$ --- \emph{i.e.}, we fit the parameters of the neural networks to maximize $\prod{p_{model}(S^\pi)}$ over all observed graph sequences.
\xhdr{Multivariate Bernoulli}
First we present a simple baseline variant of our GraphRNN\xspace approach, which we term GraphRNN-S\ (``S'' for ``simplified'').
In this variant, we model $p(S^\pi_i|S^\pi_{<i})$ as a multivariate Bernoulli distribution, parameterized by the $\theta_i\in \mathbb{R}^{i-1}$ vector that is output by $f_{\mathrm{out}}$.
In particular, we implement $f_{\mathrm{out}}$ as a single-layer multilayer perceptron (MLP) with a sigmoid activation function that shares weights across all time steps.
The output of $f_{\mathrm{out}}$ is a vector $\theta_i$, whose element $\theta_{i}[j]$ can be interpreted as a probability of edge $(i,j)$.
We then sample edges in $S^\pi_i$ {\em independently} according to a multivariate Bernoulli distribution parametrized by $\theta_i$.
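A sketch of this output module with randomly initialized placeholder weights (the trained model would learn $W$ and $b$; the dimensions here are illustrative, not the paper's):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def f_out_S(h, W, b):
    """GraphRNN-S output: a single-layer MLP with sigmoid, mapping the
    graph state h to the Bernoulli parameters theta_i."""
    return sigmoid(W @ h + b)

def sample_S(theta, rng):
    """Independent (multivariate Bernoulli) sampling of S^pi_i."""
    return (rng.random(theta.shape) < theta).astype(int)

rng = np.random.default_rng(0)
h = rng.standard_normal(8)                       # toy graph-state vector
W, b = rng.standard_normal((5, 8)), np.zeros(5)  # placeholder MLP weights
theta = f_out_S(h, W, b)
s = sample_S(theta, rng)
```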
\xhdr{Dependent Bernoulli sequence}
To fully capture complex edge dependencies, in the full GraphRNN\xspace\ model we further decompose $p(S^\pi_i|S^\pi_{<i})$ into a product of conditionals,
\begin{equation}
\label{eq:pS_cond_rnn}
p(S^\pi_i|S^\pi_{<i}) = \prod_{j=1}^{i-1} p(S^\pi_{i,j} | S^\pi_{i,<j}, S^\pi_{<i}),
\end{equation}
where $S^\pi_{i, j}$ denotes a binary scalar that is $1$ if node $\pi(v_{i})$ is connected to node $\pi(v_{j})$ (under ordering $\pi$).
In this variant, each distribution in the product is approximated by another RNN.
Conceptually, we have a hierarchical RNN, where the first (\emph{i.e.}, the graph-level) RNN generates the nodes and maintains the state of the graph, while the second (\emph{i.e.}, the edge-level) RNN generates the edges of a given node (as illustrated in Figure~\ref{fig:arch}).
In our implementation, the edge-level RNN is a GRU model, whose hidden state is initialized via the graph-level hidden state $h_i$ and whose output at each step is mapped by an MLP to a scalar indicating the probability of having an edge. $S^\pi_{i, j}$ is sampled from the distribution specified by the $j$-th output of the $i$-th edge-level RNN, and is fed into the $(j+1)$-th input of the same RNN. All edge-level RNNs share the same parameters.
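The entry-by-entry sampling can be sketched as follows, with a scalar toy state standing in for the GRU hidden state (the transition and output functions below are stand-ins of ours, not the learned ones):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_edges_dependent(h_graph, n_prev, rng):
    """Sample S^pi_i bit by bit: each bit's probability depends on the bits
    already sampled via a scalar toy state (a stand-in for the edge GRU)."""
    h_edge = float(h_graph)   # edge-level state initialized from graph state
    bits, x = [], 1           # x: previous sample; starts as an SOS token
    for _ in range(n_prev):
        h_edge = np.tanh(0.9 * h_edge + 0.5 * x)  # toy transition function
        p = sigmoid(2.0 * h_edge)                 # toy output MLP -> scalar
        x = int(rng.random() < p)                 # sample and feed back
        bits.append(x)
    return bits

rng = np.random.default_rng(1)
bits = sample_edges_dependent(h_graph=0.3, n_prev=6, rng=rng)
```

Feeding each sampled bit back into the next step is what lets the product of conditionals in Equation (\ref{eq:pS_cond_rnn}) capture dependencies between edges of the same node.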
\subsubsection{Tractability via breadth-first search}\label{sec:bfs}
A crucial insight in our approach is that rather than learning to generate graphs under any possible node permutation, we learn to generate graphs using breadth-first-search (BFS) node orderings, without a loss of generality.
Formally, we modify Equation (\ref{eq:seq_def}) to
\begin{equation}
\label{eq:seq_def_bfs}
S^\pi=f_S(G,\textsc{BFS}(G,\pi)),
\end{equation}
where $\textsc{BFS}(\cdot)$ denotes the deterministic BFS function.
In particular, this BFS function takes a random permutation $\pi$ as input, picks $\pi(v_1)$ as the starting node and appends the neighbors of a node into the BFS queue in the order defined by $\pi$.
Note that the BFS function is many-to-one, \emph{i.e.}, multiple permutations can map to the same ordering after applying the BFS function.
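A sketch of this deterministic BFS function (our own implementation of the description above): on a path graph, every permutation that starts from the same endpoint collapses to the same BFS ordering, illustrating the many-to-one property.

```python
from collections import deque

def bfs_order(adj, perm):
    """BFS(G, pi): start at perm[0] and enqueue each node's unvisited
    neighbors in the order defined by the permutation perm."""
    rank = {v: k for k, v in enumerate(perm)}
    seen, order, queue = {perm[0]}, [], deque([perm[0]])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in sorted(adj[u], key=rank.get):  # tie-break by pi
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order

# Path graph 0-1-2-3: any permutation starting at node 0 yields the same
# BFS ordering, so the map from permutations to orderings is many-to-one.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
o1 = bfs_order(path, (0, 1, 2, 3))
o2 = bfs_order(path, (0, 3, 2, 1))
```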
Using BFS to specify the node ordering during generation has two essential benefits.
The first is that we only need to train on all possible BFS orderings, rather than all possible node permutations, \emph{i.e.}, multiple node permutations map to the same BFS ordering, providing a reduction in the overall number of sequences we need to consider.\footnote{In the worst case (\emph{e.g.}, star graphs), the number of BFS orderings is $n!$, but we observe substantial reductions on many real-world graphs.}
The second is that the BFS ordering makes learning easier by reducing the number of edge predictions we need to make in the edge-level RNN; in particular, when we are adding a new node under a BFS ordering, the only possible edges for this new node are those connecting to nodes that are in the ``frontier'' of the BFS (\emph{i.e.}, nodes that are still in the BFS queue)---a notion formalized by Proposition \ref{prop:bfs_edge} (proof in the Appendix):
\begin{proposition}
\label{prop:bfs_edge}
Suppose $v_1, \ldots, v_n$ is a BFS ordering of $n$ nodes in graph $G$, and $(v_i, v_{j-1}) \in E$ but $(v_i, v_j) \not \in E$ for some $i < j \le n$, then $(v_{i'}, v_{j'}) \not \in E$, $\forall 1 \le i' \le i$ and $j \le j' \le n$.
\end{proposition}
Importantly, this insight allows us to redefine the variable-size $S^\pi_i$ vector as a fixed $M$-dimensional vector, representing the connectivity between node $\pi(v_i)$ and the nodes in the current BFS queue, whose maximum size is $M$:
\begin{equation}
S^\pi_i = (A^\pi_{\max(1,i-M),i},...,A^\pi_{i-1,i})^T, i\in \{2, ..., n\}.
\end{equation}
As a consequence of Proposition \ref{prop:bfs_edge}, we can bound $M$ as follows:
\begin{corollary}
With a BFS ordering, the maximum number of entries that the GraphRNN\xspace model needs to predict for $S^\pi_i$, $\forall 1 \le i \le n$, is
$O\left(\max_{d=1}^{\mathrm{diam}(G)} \left|\left\{v_i | \mathrm{dist}(v_i, v_1) = d\right\} \right| \right)$,
where $\mathrm{dist}$ denotes the shortest-path-distance between vertices.
\end{corollary}
The overall time complexity of GraphRNN\xspace\ is thus $O(Mn)$.
In practice, we estimate an empirical upper bound for $M$ (see the Appendix for details).
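The corollary's bound and the fixed-width encoding can both be sketched directly (helper names are ours; `truncate_Si` keeps the last $M$ entries of the adjacency column and zero-pads, so edges outside the window are dropped unless $M$ is large enough):

```python
import numpy as np
from collections import deque

def bfs_levels(adj, start):
    """Hop distance of every node from `start` (unweighted BFS)."""
    dist, q = {start: 0}, deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def level_bound(adj, start):
    """The corollary's bound: the size of the largest BFS level."""
    dist = list(bfs_levels(adj, start).values())
    return max(dist.count(d) for d in set(dist))

def truncate_Si(A, i, M):
    """Fixed-width S^pi_i: the last M entries of column i, zero-padded."""
    col = A[max(0, i - M):i, i]
    return np.pad(col, (M - len(col), 0))

# 2x3 grid graph: BFS levels from a corner have sizes 1, 2, 2, 1.
grid = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5],
        3: [0, 4], 4: [1, 3, 5], 5: [2, 4]}
```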
\section{GraphRNN\xspace\ Model Capacity}
\label{sec:case_study}
In this section we analyze the representational capacity of GraphRNN, illustrating how it is able to capture complex edge dependencies.
In particular, we discuss two very different cases on how GraphRNN\ can learn to generate graphs with a \textit{global community structure} as well as graphs with a very \textit{regular geometric structure}.
For simplicity, we assume that $h_i$ (the hidden state of the graph-level RNN) can exactly encode $S^\pi_{<i}$, and that the edge-level RNN can encode $S^\pi_{i,<j}$.
That is, we assume that our RNNs can maintain memory of the decisions they make, and we elucidate the model's capacity in this ideal case.
We similarly rely on the universal approximation theorem of neural networks \cite{hornik1991approximation}.
\xhdr{Graphs with community structure}
GraphRNN\ can model structures that are specified by a given probabilistic model, because the probability of each new edge, conditioned on the graph generated so far, can be expressed as a function of the outcomes at previous steps.
For instance, suppose that the training set contains graphs generated from the following distribution $p_{com}(G)$: half of the nodes are in community $A$, and half of the nodes are in community $B$ (in expectation), and nodes are connected with probability $p_s$ within each community and probability $p_d$ between communities.
Given such a model, we have the following key (inductive) observation:
\begin{observation}\label{obs:com}
Assume there exists a parameter setting for GraphRNN\xspace\ such that it can generate $S^\pi_{<i}$ and $S^\pi_{i,<j}$ according to the distribution over $S^\pi$ implied by $p_{com}(G)$, then there also exists a parameter setting for GraphRNN\ such that it can output $p(S^\pi_{i, j} | S^\pi_{i, <j}, S^\pi_{<i})$ according to $p_{com}(G)$.
\end{observation}
This observation follows from three facts:
First, we know that $p(S^\pi_{i, j} | S^\pi_{i, <j}, S^\pi_{<i})$ can be expressed as a function of $p_s$, $p_d$, and $p(\pi(v_j) \in A), p(\pi(v_j) \in B) \: \forall 1 \leq j \le i$ (which holds by $p_{com}$'s definition).
Second, by our earlier assumptions on the RNN memory, $S^\pi_{<i}$ can be encoded into the initial state of the edge-level RNN, and the edge-level RNN can also encode the outcomes of $S^\pi_{i,<j}$.
Third, we know that $p(\pi(v_i) \in A)$ is computable from $S^\pi_{<i}$ and $S^\pi_{i,1}$ (by Bayes' rule and $p_{com}$'s definition, with an analogous result for $p(\pi(v_i) \in B)$).
Finally, GraphRNN\xspace\ can handle the base case of the induction in Observation \ref{obs:com}, \emph{i.e.}, $S_{i,1}$, simply by sampling according to $0.5p_s + 0.5p_d$ at the first step of the edge-level RNN (\emph{i.e.}, there is a $0.5$ probability that $\pi(v_i)$ is in the same community as node $\pi(v_1)$).
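For concreteness, the base-case quantities in this argument can be computed directly; with the illustrative values $p_s = 0.3$ and $p_d = 0.05$ (our choice, matching the experimental community dataset), Bayes' rule gives:

```python
# Illustrative intra- and inter-community edge probabilities (our choice).
p_s, p_d = 0.3, 0.05

# Base case: node i belongs to v_1's community with probability 0.5, so
# the probability of the first edge marginalizes over both cases.
p_edge1 = 0.5 * p_s + 0.5 * p_d

# Bayes' rule: posterior that i shares v_1's community given S^pi_{i,1}.
p_same_given_edge = (0.5 * p_s) / p_edge1
p_same_given_no_edge = (0.5 * (1 - p_s)) / (1 - p_edge1)
```

Observing the first edge pushes the posterior toward "same community" ($6/7 \approx 0.86$ here), which is exactly the information the edge-level RNN must carry forward to later entries of $S^\pi_i$.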
\xhdr{Graphs with regular structure}
GraphRNN\ can also naturally learn to generate regular structures, due to its ability to learn functions that only activate for $S^\pi_{i, j}$ where $v_j$ has a specific degree.
For example, suppose that the training set consists of ladder graphs~\cite{noy2004recursively}.
To generate a ladder graph, the edge-level RNN must handle three key cases: if $\sum_{k=1}^{j}S^\pi_{i,k} = 0$, then the new node should only connect to the degree-$1$ node or else to any degree-$2$ node; if $\sum_{k=1}^{j}S^\pi_{i,k} = 1$, then the new node should only connect to the degree-$2$ node that is exactly two hops away; and finally, if $\sum_{k=1}^{j}S^\pi_{i,k} = 2$, then the new node should make no further connections.
Note that all of the statistics needed above are computable from $S^\pi_{<i}$ and $S^\pi_{i,<j}$.
The appendix contains visual illustrations and further discussions on this example.
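The three cases can be restated as a small decision rule (a toy restatement of the text, not the learned model; the arguments summarize statistics computable from $S^\pi_{<i}$ and $S^\pi_{i,<j}$):

```python
def ladder_edge_allowed(edges_so_far, deg_j, two_hops_away):
    """The three ladder-graph cases as a decision rule: may the new node
    connect to candidate node j? (A toy restatement, not the model.)"""
    if edges_so_far == 0:                 # first edge of the new node
        return deg_j in (1, 2)            # the degree-1 node or a degree-2 node
    if edges_so_far == 1:                 # second edge: close the rung
        return deg_j == 2 and two_hops_away
    return False                          # two edges placed: stop

allowed_first = ladder_edge_allowed(0, 1, False)
```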
\section{Experiments}
\label{sec:experiments}
\begin{figure*}[t]
\includegraphics[width=\textwidth]{figs/graph_view_v2}
\caption{Visualization of graphs from grid dataset (Left group), community dataset (Middle group) and Ego dataset (Right group). Within each group, graphs from training set (First row), graphs generated by GraphRNN (Second row) and graphs generated by Kronecker, MMSB and B-A baselines respectively (Third row) are shown. Different visualization layouts are used for different datasets.}
\label{fig:graphs_vis}
\vspace{-10pt}
\end{figure*}
We compare GraphRNN\xspace\ to state-of-the-art baselines, demonstrating its robustness and ability to generate high-quality graphs in diverse settings.
\subsection{Datasets}
We perform experiments on both synthetic and real datasets, with drastically varying sizes and characteristics.
The sizes of graphs vary from $|V|=10$ to $|V|=2025$.
\xhdr{Community} 500 two-community graphs with $60\leq|V|\leq160$. Each community is generated by the Erd\H{o}s-R\'enyi \ model (E-R) \cite{erdos1959random} with $n=|V|/2$ nodes and $p=0.3$. We then add $0.05|V|$ inter-community edges with uniform probability.
\xhdr{Grid} 100 standard 2D grid graphs with $100\leq|V|\leq400$. We also run our models on 100 standard 2D grid graphs with $1296\leq|V|\leq2025$, and achieve comparable results.
\xhdr{B-A} 500 graphs with $100\leq|V|\leq200$ that are generated using the Barab\'asi-Albert model. During generation, each node is connected to 4 existing nodes.
\xhdr{Protein} 918 protein graphs \cite{dobson2003distinguishing} with $100\leq|V|\leq500$. Each protein is
represented by a graph, where nodes are amino acids and two nodes are connected if they are less than $6$ Angstroms apart.
\xhdr{Ego} 757 3-hop ego networks extracted from the Citeseer network \cite{sen2008collective} with $50\leq|V|\leq399$. Nodes represent documents and edges represent citation relationships.
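For reproducibility, a self-contained sketch of the Community generator described above (the function name and the default seed are our own; the paper's actual data scripts may differ):

```python
import random

def community_graph(n, p_intra=0.3, frac_inter=0.05, rng=None):
    """Two-community graph: two E-R blocks of n/2 nodes with edge
    probability p_intra, plus 0.05*n inter-community edges."""
    rng = rng or random.Random(0)         # fixed default seed (our choice)
    half = n // 2
    edges = set()
    for block in (list(range(half)), list(range(half, n))):
        for a in range(len(block)):       # E-R block: each pair w.p. p_intra
            for b in range(a + 1, len(block)):
                if rng.random() < p_intra:
                    edges.add((block[a], block[b]))
    n_inter = int(frac_inter * n)         # add 0.05*n cross-community edges
    while sum((u < half) != (v < half) for u, v in edges) < n_inter:
        edges.add((rng.randrange(half), rng.randrange(half, n)))
    return edges

edges = community_graph(20)
```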
\subsection{Experimental Setup}
We compare the performance of our model against various traditional generative models for graphs, as well as some recent deep graph generative models.
\xhdr{Traditional baselines}
Following \citealt{li2018learning} we compare against the Erd\H{o}s-R\'enyi\ model (E-R) \cite{erdos1959random} and the Barab\'asi-Albert (B-A) model \cite{albert2002statistical}.
In addition, we compare against popular generative models that include learnable parameters: Kronecker graph models \cite{leskovec2010kronecker} and mixed-membership stochastic block models (MMSB) \cite{airoldi2008mixed}.
\xhdr{Deep learning baselines} We compare against the recent methods of \citealt{simonovsky2018graphvae} (GraphVAE) and \citealt{li2018learning} (DeepGMG).
We provide reference implementations for these methods (which do not currently have associated public code), and we adapt GraphVAE to our problem setting by using one-hot indicator vectors as node features for the graph convolutional network encoder.\footnote{We also attempted using degree and clustering coefficients as features for nodes, but did not achieve better performance.}
\xhdr{Experiment settings} We use $80\%$ of the graphs in each dataset for training and test on the rest.
We set the hyperparameters for baseline methods based on recommendations made in their respective papers.
The hyperparameter settings for GraphRNN\xspace\ were fixed after development tests on data that was not used in follow-up evaluations (further details in the Appendix).
Note that the traditional methods are designed to learn from a single graph; we therefore train a separate model for each training graph in order to compare against them.
In addition, both deep learning baselines suffer from the aforementioned scalability issues, so we only compare to these baselines on a small version of the community dataset with $12\leq|V|\leq20$ (Community-small) and $200$ ego graphs with $4\leq|V|\leq18$ (Ego-small).
\begin{table*}[t]
\centering
\begin{footnotesize}
\caption{Comparison of GraphRNN\xspace to traditional graph generative models using MMD. $(\max(|V|), \max(|E|))$ of each dataset is shown.}
\label{tab:mmd_big}
\begin{tabular}{@{}lllllllllllll@{}}
\toprule
& \multicolumn{3}{c}{Community (160,1945)} & \multicolumn{3}{c}{Ego (399,1071)} & \multicolumn{3}{c}{Grid (361,684)} & \multicolumn{3}{c}{Protein (500,1575)} \\ \cmidrule(lr){2-4}\cmidrule(lr){5-7}\cmidrule(lr){8-10}\cmidrule(lr){11-13}
& Deg. & Clus. & Orbit & Deg. & Clus. & Orbit & Deg. & Clus. & Orbit & Deg. & Clus. & Orbit \\ \midrule
E-R &0.021 &1.243 &0.049 &0.508 &1.288 &0.232 &1.011 &0.018 &0.900 &0.145 &1.779 &1.135 \\
B-A &0.268 &0.322 &0.047 &0.275 &0.973 &0.095 &1.860 &0 &0.720 &1.401 &1.706 &0.920 \\
Kronecker &0.259 &1.685 & 0.069 & 0.108 &0.975 &0.052 &1.074 &0.008 &0.080 &0.084 &0.441 &0.288 \\
MMSB &0.166 &1.59 &0.054 & 0.304 &0.245 &0.048 &1.881 &0.131 &1.239 &0.236 &0.495 &0.775 \\ \midrule
GraphRNN-S &0.055 &0.016 &0.041 &0.090 & \textbf{0.006} &0.043 & 0.029 &$10^{-5}$&0.011 &0.057 & \textbf{0.102} & \textbf{0.037} \\
GraphRNN & \textbf{0.014} & \textbf{0.002} & \textbf{0.039} & \textbf{0.077} & 0.316 & \textbf{0.030} & $\mathbf{10^{-5}}$ & \textbf{0} & $\mathbf{10^{-4}}$ & \textbf{0.034} &0.935 &0.217\\ \bottomrule
\end{tabular}
\end{footnotesize}
\end{table*}
\begin{table*}[t]
\centering
\begin{footnotesize}
\caption{GraphRNN\xspace compared to state-of-the-art deep graph generative models on small graph datasets using MMD and negative log-likelihood (NLL). $(\max(|V|), \max(|E|))$ of each dataset is shown. (DeepGMG and GraphVAE cannot scale to the graphs in Table \ref{tab:mmd_big}.)}
\label{tab:mmd_small}
\begin{tabular}{@{}lllllllllll@{}}
\toprule
& \multicolumn{5}{c}{Community-small (20,83)} & \multicolumn{5}{c}{Ego-small (18,69)} \\ \cmidrule(lr){2-6}\cmidrule(lr){7-11}
& Degree & Clustering & Orbit & Train NLL& Test NLL & Degree & Clustering & Orbit &Train NLL & Test NLL \\ \midrule
GraphVAE &0.35 &0.98 &0.54 &13.55 &25.48 & 0.13 & 0.17 & 0.05 & 12.45 & 14.28 \\
DeepGMG &0.22 &0.95 &0.40 &106.09 &112.19 &0.04 &0.10 & 0.02 &21.17 &22.40\\
GraphRNN-S &\textbf{0.02} &0.15 &\textbf{0.01} &31.24 &35.94 &0.002 &\textbf{0.05} &\textbf{0.0009} &8.51 &9.88 \\
GraphRNN &0.03 &\textbf{0.03} &\textbf{0.01} &28.95 &35.10 &\textbf{0.0003}&\textbf{0.05} &\textbf{0.0009} &9.05 &10.61 \\ \bottomrule
\end{tabular}
\end{footnotesize}
\end{table*}
\subsection{Evaluating the Generated Graphs}
Evaluating the sample quality of generative models is a challenging task in general \cite{Theis2016a}, and in our case, this evaluation requires a comparison between two sets of graphs (the generated graphs and the test sets).
Whereas previous works relied on qualitative visual inspection \cite{simonovsky2018graphvae} or simple comparisons of average statistics between the two sets \cite{leskovec2010kronecker}, we propose novel evaluation metrics that compare all moments of their empirical distributions.
Our proposed metrics are based on Maximum Mean Discrepancy (MMD) measures.
Suppose the unit ball in a reproducing kernel Hilbert space (RKHS) $\mathcal{H}$ is used as the function class $\mathcal{F}$, and let $k$ be the associated kernel. The squared MMD between two sets of samples from distributions $p$ and $q$ can then be derived as \cite{gretton2012kernel}
\begin{equation}
\label{eq:mmd}
\begin{split}
\mathrm{MMD}^2(p||q)& =\mathbb{E}_{x,y\sim p}[k(x,y)]+\mathbb{E}_{x,y\sim q}[k(x,y)]\\
& -2\mathbb{E}_{x\sim p,y\sim q}[k(x, y)].
\end{split}
\end{equation}
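A direct (biased) sample estimate of Equation \eqref{eq:mmd} can be sketched as follows; this is our own illustration, assuming scalar samples and a generic kernel $k$, with a Gaussian RBF kernel as a stand-in choice.

```python
import math

def squared_mmd(xs, ys, k):
    """Biased empirical estimate of MMD^2 between samples xs ~ p and ys ~ q."""
    exx = sum(k(a, b) for a in xs for b in xs) / len(xs) ** 2
    eyy = sum(k(a, b) for a in ys for b in ys) / len(ys) ** 2
    exy = sum(k(a, b) for a in xs for b in ys) / (len(xs) * len(ys))
    return exx + eyy - 2 * exy

# Gaussian RBF kernel as an illustrative choice of k.
rbf = lambda a, b: math.exp(-(a - b) ** 2 / 2.0)
```

Identical sample sets give an estimate of exactly zero, and the estimate grows as the two distributions move apart.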
Proper distance metrics over graphs are in general computationally intractable \cite{lin1994hardness}.
Thus, we compute MMD using a set of graph statistics $\mathbb{M}=\{M_1,...,M_k\}$, where each $M_i(G)$ is a univariate distribution over $\mathbb{R}$, such as the degree distribution or clustering coefficient distribution.
We then use the first Wasserstein distance as an efficient distance metric between two distributions $p$ and $q$:
\begin{equation}
\label{eq:emd}
W(p, q) = \inf_{\gamma \in \Pi(p, q)} \mathbb{E}_{(x, y) \sim \gamma} [||x - y||],
\end{equation}
where $\Pi(p, q)$ is the set of all joint distributions whose marginals are $p$ and $q$, respectively, and $\gamma$ is a transport plan.
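For one-dimensional empirical distributions with equally many samples, the infimum in Equation \eqref{eq:emd} is attained by matching samples in sorted order, which gives a simple closed form (our own illustrative sketch):

```python
def wasserstein_1d(xs, ys):
    """First Wasserstein distance between two 1-D empirical distributions
    of equal size: the optimal transport plan pairs samples in sorted order."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)
```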
To capture high-order moments, we use the following kernel, whose Taylor expansion is a linear
combination of all moments (proof in the Appendix):
\begin{proposition}
\label{ref:mmd_emd}
The kernel function defined by $k_W(p, q) = \exp\left(\frac{W(p, q)}{2\sigma^2}\right)$ induces a unique RKHS.
\end{proposition}
In experiments, we show this derived MMD score for degree and clustering coefficient distributions, as well as average orbit count statistics, \emph{i.e.}, the number of occurrences of all orbits with $4$ nodes (to capture higher-level motifs) \cite{hovcevar2014combinatorial}. We use the RBF kernel to compute distances between count vectors.
\subsection{Generating High Quality Graphs}
Our experiments demonstrate that GraphRNN\xspace can generate graphs that match the characteristics of the ground truth graphs in a variety of metrics.
\xhdr{Graph visualization}
Figure \ref{fig:graphs_vis} visualizes the graphs generated by GraphRNN\xspace\ and various baselines, showing that GraphRNN\xspace\ can capture the structure of datasets with vastly differing characteristics---being able to effectively learn regular structures like grids as well as
more natural structures like ego networks. Specifically, we found that grids generated by GraphRNN\xspace\
do not appear in the training set, \emph{i.e.}, it learns to generalize to unseen grid widths/heights.
\xhdr{Evaluation with graph statistics}
We use three graph statistics---based on degrees, clustering coefficients and orbit counts---to further quantitatively evaluate the generated graphs.
Figure~\ref{fig:average} shows the average graph statistics in the test vs.\@ generated graphs, which demonstrates that even from hundreds of graphs with diverse sizes, GraphRNN\ can still learn to capture the underlying graph statistics very well, with the generated average statistics closely matching the overall test set distribution.
Tables \ref{tab:mmd_big} and \ref{tab:mmd_small} summarize MMD evaluations on the full datasets and small versions, respectively.
Note that we train all the models with a fixed number of steps, and report the test set performance at the step with the lowest training error.\footnote{Using the training set or a validation set to evaluate MMD gave analogous results, so we used the train set for early stopping.}
GraphRNN variants achieve the best performance on all datasets, with an $80\%$ decrease in MMD on average compared with traditional baselines, and a $90\%$ decrease in MMD compared with deep learning baselines.
Interestingly, on the protein dataset, our simpler GraphRNN-S\ model performs very well, which is likely due to the fact that the protein dataset is a nearest neighbor graph over Euclidean space and thus does not involve highly complex edge dependencies.
Note that even though some baseline models perform well on specific datasets (\emph{e.g.}, MMSB on the community dataset), they fail to generalize across other types of input graphs.
\begin{figure}[t]
\vspace{-10pt}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{figs/average_degree}
\label{fig:average_degree}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{figs/average_clustering}
\label{fig:average_clustering}
\end{subfigure}
\vspace{-20pt}
\caption{Average degree (Left) and clustering coefficient (Right) distributions of graphs from test set and graphs generated by GraphRNN and baseline models.}
\label{fig:average}
\end{figure}
\xhdr{Generalization ability}
Table \ref{tab:mmd_small} also shows negative log-likelihoods (NLLs) on the training and test sets. We report the average $p(S^\pi)$ for our model, and report the likelihood of baseline methods as defined in their respective papers. A model with good generalization ability should have a small NLL gap between training and test graphs. We found that our model generalizes well, with a $22\%$ smaller average NLL gap.\footnote{The average likelihood is ill-defined for the traditional models.}
\subsection{Robustness}
Finally, we also investigate the robustness of our model by interpolating between Barab\'asi-Albert (B-A) and Erd\H{o}s-R\'enyi\ (E-R) graphs. We randomly perturb [$0\%, 20\%,...,100\%$] of the edges of B-A graphs with $100$ nodes. With $0\%$ of edges perturbed, the graphs are B-A graphs; with $100\%$ of edges perturbed, the graphs are E-R graphs.
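The interpolation can be sketched as follows (an illustrative sketch of ours, not the experiment code; function and parameter names are our own): each selected edge is removed and rewired to a uniformly random non-edge.

```python
import random

def perturb_edges(n, edges, frac, seed=None):
    """Rewire a fraction `frac` of the edges of an n-node graph uniformly at
    random, interpolating between the original graph and an E-R-like graph."""
    rng = random.Random(seed)
    edges = {tuple(sorted(e)) for e in edges}
    k = int(round(frac * len(edges)))
    for e in rng.sample(sorted(edges), k):
        edges.remove(e)
        while True:  # draw a replacement edge that is not already present
            u, v = rng.randrange(n), rng.randrange(n)
            if u != v and (min(u, v), max(u, v)) not in edges:
                edges.add((min(u, v), max(u, v)))
                break
    return edges
```

The rewiring preserves the total edge count, so only the structure (not the density) is interpolated.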
Figure \ref{fig:robustness} shows the MMD scores for degree and clustering coefficient distributions for the $6$ sets of graphs.
Both B-A and E-R perform well when graphs are generated from their respective distributions, but their performance degrades significantly once noise is introduced.
In contrast, GraphRNN\xspace maintains strong performance as we interpolate between these structures, indicating high robustness and versatility.
\begin{figure}[t]
\vspace{-10pt}
\centering
\begin{subfigure}[b]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/robustness_degree}
\label{fig:robustness_degree}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/robustness_clustering}
\label{fig:robustness_clustering}
\end{subfigure}
\vspace{-25pt}
\caption{MMD performance of different approaches on degree (Left) and clustering coefficient (Right) distributions under different noise levels.}
\label{fig:robustness}
\end{figure}
\section{Conclusion and Future Work}
\label{sec:conclusion}
We proposed GraphRNN, an autoregressive generative model for graph-structured data, along with a comprehensive evaluation suite for the graph generation problem, which we used to show that GraphRNN achieves significantly better performance compared to previous state-of-the-art models, while being scalable and robust to noise.
However, significant challenges remain in this space, such as scaling to even larger graphs and developing models that are capable of doing efficient conditional graph generation.
\section*{Acknowledgements}
\label{sec:ack}
The authors thank Ethan Steinberg, Bowen Liu, Marinka Zitnik and Srijan Kumar for their helpful discussions and comments on the paper. This research has been supported in part by DARPA SIMPLEX, ARO MURI, Stanford Data Science
Initiative, Huawei, JD, and Chan Zuckerberg Biohub. W.L.H. was also supported by the SAP Stanford
Graduate Fellowship and an NSERC PGS-D grant.
\section{Appendix}
\subsection{Implementation Details of GraphRNN\xspace}
In this section we detail the parameter settings, data preparation and training strategies for GraphRNN\xspace.
We use two sets of model parameters for GraphRNN.
A larger model is used to train and test on the larger datasets that are used to compare with traditional methods.
A smaller model is used to train and test on datasets with up to $20$ nodes; this model is only used to compare with the two most recent preliminary deep generative models for graphs proposed in \cite{li2018learning,simonovsky2018graphvae}.
For GraphRNN, the graph-level RNN uses $4$ layers of GRU cells, with a $128$-dimensional hidden state per layer for the larger model and a $64$-dimensional hidden state for the smaller model.
The edge-level RNN uses $4$ layers of GRU cells with a $16$-dimensional hidden state for both models. To output the adjacency vector prediction, the edge-level RNN first maps the highest-layer $16$-dimensional hidden state to an $8$-dimensional vector through an MLP with ReLU activation; another MLP then maps this vector to a scalar with sigmoid activation.
The edge-level RNN is initialized by the output of the graph-level RNN at the start of generating $S^\pi_i$, $\forall 1 \le i \le n$.
Specifically, the highest-layer hidden state of the graph-level RNN is used to initialize the lowest layer of the edge-level RNN, with a linear layer to match the dimensionality.
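This dimension-matching step can be sketched as follows (a toy sketch of ours with plain Python lists; the real model uses learned weights, and the dimensions are the ones stated in the text):

```python
import random

rng = random.Random(0)
GRAPH_HIDDEN, EDGE_HIDDEN = 128, 16  # dimensions stated in the text

# Weight matrix of the linear layer that matches dimensionalities.
W = [[rng.gauss(0.0, 0.01) for _ in range(GRAPH_HIDDEN)]
     for _ in range(EDGE_HIDDEN)]

def init_edge_rnn(h_graph_top):
    """Map the highest-layer graph-RNN hidden state to the initial hidden
    state of the lowest edge-RNN layer; other layers start from zeros."""
    return [sum(w * h for w, h in zip(row, h_graph_top)) for row in W]

h0 = init_edge_rnn([rng.gauss(0.0, 1.0) for _ in range(GRAPH_HIDDEN)])
```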
During training, teacher forcing is used for both the graph-level and edge-level RNNs, i.e., we use the ground truth rather than the model's own predictions.
At inference time, the model uses its own predictions at each time step to generate a graph.
For the simple version GraphRNN-S, a two-layer MLP with ReLU and sigmoid activations, respectively, is used to generate $S^\pi_i$, with a $64$-dimensional hidden state for the larger model and a $32$-dimensional hidden state for the smaller model.
In practice, we find that the performance of the model is relatively stable with respect to these hyperparameters.
We generate the graph sequences used for training the model following the procedure in Section 2.3.4. Specifically, we first randomly sample a graph from the training set, then randomly permute its node ordering. We then apply the deterministic BFS discussed in Section 2.3.4 to the graph with the random node ordering, resulting in a graph with a BFS node ordering. An exception is the robustness experiment, where we use the node ordering that generates B-A graphs to obtain graph sequences, in order to test whether GraphRNN can capture the underlying preferential attachment properties of B-A graphs.
With the proposed BFS node ordering, we can reduce the maximum dimension $M$ of $S^\pi_i$, as illustrated in Figure \ref{fig:bfs}. To set $M$, we use the following empirical procedure. We ran the above data pre-processing procedure $100000$ times to obtain graphs with BFS node orderings. We removed all consecutive zeros in each resulting $S^\pi_i$ to find the empirical distribution of the dimensionality of $S^\pi_i$. We set $M$ to be roughly the $99.9$th percentile, which accounts for the vast majority of the observed dimensionalities of $S^\pi_i$. In general, we find that graphs with regular structures tend to have smaller $M$, while random graphs or community graphs tend to have larger $M$. Specifically, for the community dataset we set $M=100$; for the grid dataset, $M=40$; for the B-A dataset, $M=130$; for the protein dataset, $M=230$; for the ego dataset, $M=250$; and for all small graph datasets, $M=20$.
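The percentile procedure can be sketched as follows. The quantity measured per ordering is the largest "lookback" from a node to its earliest earlier neighbor, which is what $M$ must cover; this is our own illustration on a toy cycle graph, not the authors' script.

```python
import random
from collections import deque

def bfs_order(adj, start, rng):
    """BFS ordering; neighbors are visited in random order, mimicking a
    random node permutation followed by deterministic BFS."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        nbrs = sorted(adj[u])
        rng.shuffle(nbrs)
        for v in nbrs:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order

def max_lookback(adj, order):
    """Largest i - min{j < i : (order[j], order[i]) in E}; M must be at
    least this value for S^pi to encode every edge of the graph."""
    pos = {u: i for i, u in enumerate(order)}
    return max(pos[u] - min(pos[v] for v in adj[u] if pos[v] < pos[u])
               for u in order[1:])

rng = random.Random(0)
n = 16
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
widths = sorted(max_lookback(cycle, bfs_order(cycle, rng.randrange(n), rng))
                for _ in range(100))
M = widths[int(0.999 * len(widths))]  # ~99.9th percentile
```

For regular structures such as the cycle above, the lookback stays small under any BFS ordering, matching the observation that regular graphs admit smaller $M$.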
\begin{figure*}[t]
\centering
\includegraphics[width=0.75\textwidth]{figs/bfs}
\caption{Illustrative example of reducing the maximum dimension $M$ of $S^\pi_i$ through the BFS node ordering. Here we show the adjacency matrix of a graph with $N=10$ nodes. Without the BFS node ordering (Left), we have to set $M=N-1$ to encode all the necessary connection information (shown in dark square). With the BFS node ordering, we could set $M$ to be a constant smaller than $N$ (we show $M=3$ in the figure).}
\label{fig:bfs}
\end{figure*}
The Adam optimizer is used for minibatch training. Each minibatch contains $32$ graph sequences.
We train the model for $96000$ batches in all experiments.
We set the learning rate to $0.001$, which is decayed by a factor of $0.3$ at steps $12800$ and $32000$ in all experiments.
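For clarity, this schedule corresponds to the following step function (our own sketch; the function and argument names are ours):

```python
def learning_rate(step, base=1e-3, decay=0.3, milestones=(12800, 32000)):
    """Step-decay schedule: multiply the base rate by `decay` at each
    milestone step that has been passed."""
    lr = base
    for m in milestones:
        if step >= m:
            lr *= decay
    return lr
```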
\subsection{Running Time of GraphRNN\xspace}
Training is performed on a single Titan X GPU. For the protein dataset, which consists of about $1000$ graphs with around $500$ nodes each, training converges at around $64000$ iterations. The runtime is around $12$ to $24$ hours. This includes pre-processing, batching and BFS, which are currently implemented on CPU without multi-threading. The less expressive GraphRNN-S\ variant is about twice as fast.
At inference time, for the above dataset, generating a graph using the trained model only takes about $1$ second.
\subsection{More Details on GraphRNN\xspace's Expressiveness}
We illustrate the intuition underlying the good performance of GraphRNN on graphs with regular
structures, such as grid and ladder networks.
Figure \ref{fig:capacity_ladder} (a) shows the generation process of a ladder graph at
an intermediate step.
At this time step, the ground truth data (under the BFS node ordering) specifies that the new node added to the graph should make an edge to the node with degree $1$. Note that node degree is a function of $S^\pi_{<i}$ and thus can be approximated by a neural network.
Once the first edge has been generated, the new node should make an edge with another node of degree $2$. There are multiple ways to do so, but only one of them gives a valid ladder structure,
\emph{i.e.}, one that forms a $4$-cycle with the new edge.
GraphRNN\xspace\ crucially relies on the edge-level RNN and the knowledge of the previously added edge, in order to distinguish between the correct and incorrect connections in Figure \ref{fig:capacity_ladder} (c) and (d).
\vspace{-0.15cm}
\begin{figure}[h]
\centering
\includegraphics[width=0.47\textwidth]{figs/capacity_ladder}
\vspace{-0.25cm}
\caption{Illustration that generation of ladder networks relies on dependencies modeled by GraphRNN\xspace.}
\label{fig:capacity_ladder}
\end{figure}
\subsection{Code Overview}
In the code repository, \texttt{main.py} contains the main training pipeline, which loads datasets and performs training and inference. It also contains the \texttt{Args} class, which stores the hyper-parameter settings of the model.
\texttt{model.py} contains the RNN, MLP and loss function modules that are used to build GraphRNN\xspace.
\texttt{data.py} contains the minibatch sampler, which samples a random BFS ordering of a batch of randomly selected graphs.
\texttt{evaluate.py} contains the code for evaluating the generated graphs using the MMD metric introduced in Sec. 4.3.
Baselines including the Erd\H{o}s-R\'enyi\ model, the Barab\'asi-Albert\ model, MMSB, and the recent deep generative models (GraphVAE, DeepGMG) are implemented in the \texttt{baselines} folder.
We adopt the C++ Kronecker graph model implementation from the SNAP package.\footnote{The SNAP package is available at \url{http://snap.stanford.edu/snap/index.html}.}
\subsection{Proofs}
\subsubsection{Proof of Proposition 1}
We use the following observation:
\begin{observation}
\label{ob:bfs_order}
By definition of BFS, if $i < k$, then the children of $v_i$ in the BFS ordering come before the children of $v_k$ that do not connect to $v_{i'}$, $\forall 1 \le i' \le i$.
\end{observation}
By definition of BFS, the neighbors of a node $v_i$ consist of the parent of $v_i$ in the BFS tree, the children of $v_i$, which have consecutive indices, and some children of nodes $v_{i'}$ that connect to both $v_{i'}$ and $v_i$, for some $1 \le i' \le i$.
Hence, if $(v_i, v_{j-1}) \in E$ but $(v_i, v_j) \not\in E$, then $v_{j-1}$ is the last child of $v_i$ in the BFS ordering, and $(v_{i}, v_{j'}) \not\in E$, $\forall j \le j' \le n$.
For each $i' \in [i]$, suppose that $(v_{i'}, v_{j'-1}) \in E$ but $(v_{i'}, v_{j'}) \not\in E$. By Observation \ref{ob:bfs_order}, $j' < j$.
By the conclusion of the previous paragraph, $(v_{i'}, v_{j''}) \not\in E$, $\forall j' \le j'' \le n$.
Specifically, $(v_{i'}, v_{j''}) \not \in E$, $\forall j \le j'' \le n$.
This is true for all $i' \in [i]$.
Hence we have shown that
$(v_{i'}, v_{j'}) \not\in E$, $\forall 1 \le i' \le i$ and $j \le j' \le n$.
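A related structural property underlying the sliding-window reduction can also be checked numerically: under any BFS ordering, the smallest-index earlier neighbor of each node is its BFS discoverer, and discoverers appear in non-decreasing order, which confines edges to a moving band. The following is our own verification sketch on random connected graphs, not part of the GraphRNN code.

```python
import random
from collections import deque

def random_connected_graph(n, extra, rng):
    """Random spanning tree on n nodes plus `extra` random extra edges."""
    adj = {i: set() for i in range(n)}
    for i in range(1, n):
        j = rng.randrange(i)
        adj[i].add(j); adj[j].add(i)
    for _ in range(extra):
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            adj[u].add(v); adj[v].add(u)
    return adj

def bfs_order(adj, start=0):
    seen, order, queue = {start}, [], deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in sorted(adj[u]):
            if v not in seen:
                seen.add(v); queue.append(v)
    return order

def leftmost_neighbors(adj, order):
    """Position of the smallest-index earlier neighbor of each non-start
    node; under BFS this is the node's discoverer, so it is non-decreasing."""
    pos = {u: k for k, u in enumerate(order)}
    return [min(pos[v] for v in adj[u] if pos[v] < pos[u]) for u in order[1:]]
```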
\subsubsection{Proof of Proposition 2}
As proven in \citealt{kolouri2016sliced}, this Wasserstein distance based kernel is a positive definite (p.d.) kernel.
Since linear combinations, products, and limits (when they exist) of p.d. kernels are p.d. kernels, $k_W(p, q)$ is also a p.d. kernel.\footnote{This can be seen by expressing the kernel function using its Taylor expansion.}
By the Moore-Aronszajn theorem, a symmetric p.d. kernel induces a unique RKHS. Therefore Equation
\eqref{eq:mmd} holds if we set $k$ to be $k_W$.
\subsection{Extension to Graphs with Node and Edge Features}
Our GraphRNN\xspace model can also be applied to graphs where nodes and edges have feature vectors associated with them.
In this extended setting, under node ordering $\pi$, a graph $G$ is associated with its node feature matrix $X^\pi\in\mathbb{R}^{n\times m}$ and edge feature matrix $F^\pi\in\mathbb{R}^{n\times k}$, where $m$ and $k$ are the feature dimensions for nodes and edges, respectively.
In this case, we can extend the definition of $S^\pi$ so that $S^\pi_i = (X^\pi_i, F^\pi_i)$ includes the feature vectors of the corresponding nodes and edges. We can adapt the $f_{out}$ module by using an MLP to generate $X^\pi_i$ and an edge-level RNN to generate $F^\pi_i$, respectively.
Note also that a directed graph can be viewed as an undirected graph with two edge types, which is a special case of the above extension.
\subsection{Extension to Graphs with Four Communities}
To further demonstrate the ability of GraphRNN\xspace to learn from community graphs, we conduct experiments on a four-community synthetic graph dataset. Specifically, the dataset consists of $500$ four-community graphs with $48\leq|V|\leq68$. Each community is generated by the Erd\H{o}s-R\'enyi\ model (E-R) \cite{erdos1959random} with $n \in [|V|/4-2, |V|/4+2]$ nodes and $p=0.7$. We then add $0.01|V|^2$ inter-community edges with uniform probability. Figure \ref{fig:4community} compares visualizations of graphs generated by GraphRNN\xspace and the baselines. We observe that, in contrast to the baselines, GraphRNN\xspace consistently generates four-community graphs in which each community has a structure similar to those in the training set.
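The synthetic dataset can be sketched as follows (our own illustrative generator; for simplicity it fixes four equal community sizes rather than sampling them from $[|V|/4-2, |V|/4+2]$):

```python
import random

def four_community_graph(n, p_in=0.7, seed=None):
    """Four E-R communities of ~n/4 nodes each with intra-community edge
    probability p_in, plus 0.01*n^2 uniformly random inter-community edges."""
    rng = random.Random(seed)
    sizes = [n // 4] * 4
    sizes[0] += n - sum(sizes)  # absorb any remainder in the first community
    comm, start = [], 0
    for s in sizes:
        comm.append(range(start, start + s))
        start += s
    edges = set()
    for c in comm:                                  # intra-community E-R edges
        for i in c:
            for j in c:
                if i < j and rng.random() < p_in:
                    edges.add((i, j))
    label = {i: k for k, c in enumerate(comm) for i in c}
    for _ in range(int(0.01 * n * n)):              # inter-community edges
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v and label[u] != label[v]:
            edges.add((min(u, v), max(u, v)))
    return edges
```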
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{figs/graph_view_4community}
\caption{Visualization of graph dataset with four communities. Graphs from training set (First row), graphs generated by GraphRNN (Second row) and graphs generated by Kronecker, MMSB and B-A baselines respectively (Third row) are shown.}
\label{fig:4community}
\end{figure}
\section{Introduction}
Molecular crystals are a unique class of materials with diverse applications in pharmaceuticals, organic electronics, pigments, and explosives \cite{2002Bernstein, 2007Day, 2014Reilly, 2015Elder, 2007Reese, 2009Hasegawa, 2012Bergantin, 2012Cudazzo, 2015Cudazzo,2007Panina, 2015Fitzgerald}. The molecules comprising these crystals are bound by weak dispersion (van der Waals) interactions. As a result, the same molecule may crystallize in several different solid forms, known as polymorphs. Because the structure of a molecular crystal governs its physical properties, polymorphism may drastically impact the desired functionality of a given application. For pharmaceuticals, different polymorphs may display varying stability, solubility, and compressibility, affecting the drug's manufacturability, bioavailability, and efficacy \cite{2013Price, 2016Price, 2002Bernstein}. For applications in organic electronics and organic photovoltaics (OPV), different polymorphs possess different optoelectronic properties\cite{2016Curtis, 2016Wang}, directly impacting device performance \cite{2011Giri,2013Mei,2013Diao}.
Because molecular crystals have a wide range of applications, there has been increasing interest in the fundamental challenge of crystal structure prediction (CSP), or the computation of a molecule's putative crystal structure(s) solely from its two-dimensional chemical diagram, examples of which are shown in Fig. \ref{Fig1_2D_Diagrams}. This challenge is embodied by CSP blind tests, organized periodically by the Cambridge Crystallographic Data Centre \cite{2000Lommerse, 2002Motherwell, 2005Day, 2009Day, 2011Bardwell, 2016Reilly}. CSP can reveal the general behavior of a target molecule, predict the existence of new polymorphs, and serve as a complementary tool for experimental investigations \cite{2015Neumann, 2016Price, 2017Shtukenberg, 2013Meredig}. Once considered unachievable \cite{1994Gavezzotti}, CSP is still an extremely challenging task because it requires combining highly accurate electronic structure methods with efficient algorithms for configuration space exploration.
The energy differences between molecular crystal polymorphs are typically within a few kJ/mol\cite{2013Marom, 2015Cruz, 2015Beran, 2016Beran}, which calls for the accuracy of a quantum mechanical approach. Reaching the required accuracy has become more practical thanks to a decade of development in dispersion-inclusive density functional theory (DFT), including exchange-correlation functionals\cite{2004Dion, 2010Lee, 2009Vydrov, 2011Peverati, 2012Peverati, 2008Zhao, 2010Vydrov, 2010Vydrov_2, 2015Berland, 2015Thonhauser,2016Peng, 2015Sun} and pairwise methods that add the leading order $C^6/R^6$ dispersion term to the inter-nuclear energy\cite{2010Riley, 2006Grimme, 2010Grimme, 2005Johnson, 2012Otero, 2007Jurevcka, 2002Wu, 2001Wu, 2011Steinmann, 2011Steinmann_2,2009Tkatchenko, 2016Brandenburg}. Notably, the recently developed many-body dispersion (MBD) method\cite{2012Distasio, 2012Tkatchenko, 2014Ambrosetti} accurately describes the structure, energetics, dielectric properties, and mechanical properties of molecular crystals\cite{2013Marom, 2013Schatschneider, 2013Reilly, 2013Reilly_2,2014Reilly, 2015Tkatchenko, 2016Curtis, 2017Hermann, 2016Flores} by accounting for long range electrostatic screening and non-pairwise-additive contributions of many-body dispersion interactions. Using dispersion-inclusive DFT for the final ranking of relative stabilities has become a CSP best practice\cite{2016Reilly}. Vibrational contributions to the zero-point energy and free energy of the system at finite temperature have also been shown to affect the relative stabilities of certain molecular crystal polymorphs and may be further included \cite{2013Reilly_2,2014Reilly, 2017Hoja, 2016Rossi,2015Nyman}.
Approaches to configuration space exploration in CSP include molecular dynamics \cite{2011Yu, 2016Schneider}, Monte Carlo methods \cite{2015Neumann,2013Akkermans}, particle swarm optimization\cite{2012Wang}, and (quasi)-random searches \cite{2011Pickard, 2016Case}. Genetic algorithms (GAs) are a versatile class of optimization algorithms inspired by the evolutionary principle of survival of the fittest \cite{2003Johnston, 2010Sierka, 2013Heiles}. A GA starts from an initial pool of locally optimized trial structures. The scalar descriptor (or combination of descriptors) being optimized is mapped onto a fitness function, and structures with higher fitness values are assigned higher probabilities for mating. Breeding operators create offspring structures by combining the structural ``genes''\footnote{The term ``genetic algorithm'' is sometimes reserved for an evolutionary algorithm that purely encodes an individual's genes with bit-string representations. For our purposes we make no such distinction between genetic and evolutionary algorithms.} of one or more parent structure(s). The child structure is locally optimized and added to the population. The cycle of local optimization, fitness evaluation, and offspring generation propagates structural features associated with the property being optimized and repeats until ``convergence'' (a GA is not guaranteed to find the global minimum). For practical purposes, convergence may be defined as the point when the GA can no longer find any new low-energy structures in a large number of iterations.
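The GA loop just described can be sketched in a toy one-dimensional setting; this is our own illustrative code, with numerical gradient descent standing in for the DFT local optimization, and all names and parameters are ours.

```python
import math
import random

def energy(x):
    """Toy 1-D energy surface with several local minima (global min near x=2)."""
    return (x - 2) ** 2 + 0.5 * math.sin(8 * x)

def localopt(x, lr=0.02, steps=100):
    """Local relaxation by numerical gradient descent (stand-in for the
    first principles local optimization of a trial structure)."""
    for _ in range(steps):
        g = (energy(x + 1e-5) - energy(x - 1e-5)) / 2e-5
        x -= lr * g
    return x

def ga_minimize(rng, pop_size=16, iters=150):
    """Skeleton of the GA cycle: local optimization -> fitness evaluation ->
    selection -> crossover/mutation -> local optimization of the child."""
    pop = [localopt(rng.uniform(-10, 10)) for _ in range(pop_size)]
    for _ in range(iters):
        energies = [energy(x) for x in pop]
        emax = max(energies)
        fitness = [emax - e + 1e-9 for e in energies]          # lower energy = fitter
        p1, p2 = rng.choices(pop, weights=fitness, k=2)        # selection
        child = localopt(0.5 * (p1 + p2) + rng.gauss(0, 0.5))  # crossover + mutation
        worst = max(range(pop_size), key=lambda i: energies[i])
        if energy(child) < energies[worst]:
            pop[worst] = child                                 # survival of the fittest
    return min(pop, key=energy)
```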
GAs can be applied robustly to complex multidimensional search spaces, including those with many extrema or discontinuous derivatives. They provide a good balance between exploration and exploitation by introducing randomness in the mating step followed by local optimization. Furthermore, they are conceptually simple algorithms, ideal for parallelization, and can lead to unbiased and unintuitive solutions. In the context of structure prediction, the target function being optimized is typically the total or free energy. GAs have been used extensively to find the global minimum structures of crystalline solids\cite{2006Oganov, 2006Glass,2006Abraham,2007Trimarchi, 2011Wu, 1999Woodley, 2008Trimarchi, 2011Lonie, 2002Johannesson, 2012Zhu, 2015Lund,2017Avery,2016Falls} and clusters\cite{1999Morris, 2003Johnston,2005Alexandrova, 2010Sierka,2010Marques,1999Hartke,2010Catlow,2013Heiles,2004Bazterra, 2013Bhattacharya, 2014Bhattacharya,2017Jorgensen, 2013Tipton}. Advantageously, the GA fitness function may be based on any property of interest, not necessarily the energy\cite{2003Johnston, 2011OBoyle, 2013Jain, 2012dAvezac, 2013Zhang, 2010Chua, 2015Bhattacharya}. For organic molecular crystals the goal is not just to locate the most stable structure but also any potential polymorphs. In the most recent CSP blind test \cite{2016Reilly}, GAs were used by us\footnote{In the sixth blind test we used a preliminary version of GAtor.} and others (see submissions \#8, \#12, \#21).
Here, we present GAtor, a new, massively parallel, first principles genetic algorithm (GA) specifically designed for crystal structure prediction of (semi-)rigid molecules. GAtor is written in Python with a modular structure that allows the user to switch between and/or modify core GA routines for specialized purposes. For initial pool generation, GAtor relies on a separate package, Genarris, reported elsewhere \cite{2017Li} and briefly described in Section 3.1. GAtor offers a variety of features that enable the user to customize the search settings as needed for chemically diverse systems, including different fitness, selection, crossover, and mutation schemes. GAtor is designed to fully utilize high performance computing (HPC) architectures by spawning several parallel GA replicas that read from and write to a common population of structures. This approach does not require a full ``generation'' of candidates to complete before performing a new selection\cite{2015Bhattacharya, 2013Bhattacharya, 2014Bhattacharya}. For energy evaluations and local optimization of trial structures, GAtor employs dispersion-inclusive DFT by interfacing with the ab initio, all-electron electronic structure code FHI-aims \cite{2009Blum,2009Havu}.
\begin{figure}
\includegraphics{Fig1_2D_diagrams.pdf}
\caption{Two-dimensional molecular diagrams of four past blind test targets, Target I\cite{2002Motherwell}, Target II\cite{2002Motherwell}, Target XIII\cite{2009Day}, and Target XXII \cite{2016Reilly}.}
\label{Fig1_2D_Diagrams}
\end{figure}
The paper is organized as follows: Section 2 describes the DFT methods and numerical settings of FHI-aims used in conjunction with GAtor; Section 3 details GAtor's parallelization scheme and the features currently available in the code; Section 4 showcases applications of GAtor for a chemically diverse set of four past blind test targets, 3,4-cyclobutylfuran (Target I\cite{2002Motherwell}), 5-cyano-3-hydroxythiophene (Target II\cite{2002Motherwell}), 1,3-dibromo-2-chloro-5-fluorobenzene (Target XIII\cite{2009Day}), and tricyano-1,4-dithiino[c]-isothiazole (Target XXII\cite{2016Reilly}) shown in Fig. \ref{Fig1_2D_Diagrams}. Finally, Section 5 provides concluding remarks and best practices.
\section{DFT Settings}
Because first principles calculations are computationally expensive, lighter DFT settings are employed within the GA search, with the intention of locating the experimental structure and any potential polymorphs among the lowest energy structures. To obtain more precise rankings, the best structures produced from the GA are postprocessed with higher-level functionals and dispersion corrections. Hierarchical screening approaches have become a common practice in CSP \cite{2016Reilly}. In GAtor, the user has the option to input FHI-aims control files for any desired level(s) of theory. The DFT settings used in the present study are detailed below.
For local structural optimizations within the GA, the generalized gradient approximation of Perdew-Burke-Ernzerhof (PBE)\cite{1996Perdew, 1997Perdew} is used with the pairwise Tkatchenko-Scheffler (TS) dispersion-correction\cite{2009Tkatchenko} with \textit{lower-level} numerical settings, which correspond to the light numerical settings and tier 1 basis sets of FHI-aims \cite{2009Blum}. During local optimization, the space group symmetry is allowed to vary. Additionally, a $2 \times 2 \times 2$ k-point grid and reduced angular grids are used. A convergence value of $10^{-5}$ electrons is set for the change in charge density in the self-consistent field (SCF) cycle, and SCF forces and stress evaluations are not computed. These settings are chosen to accelerate geometry relaxations within the GA. For Target XIII, atomic ZORA scalar relativity\cite{2009Blum} settings are used for the heavier halogen elements.
For postprocessing, the best 5-10\% of the final structures produced by the GA are re-relaxed and re-ranked using a $3 \times 3 \times 3$ k-point grid, PBE+TS, and \textit{higher-level} numerical settings, which correspond to the tight/tier2 default settings of FHI-aims\cite{2009Blum}. Next, single point energy (SPE) evaluations are performed using PBE with the MBD method \cite{2012Distasio, 2012Tkatchenko, 2014Ambrosetti} for the best structures as ranked by PBE+TS. The final re-ranking is performed using the hybrid functional PBE0 \cite{1996Perdew_2, 1999Adamo} with the MBD correction. The inclusion of 25\% exact exchange in PBE0 mitigates the self-interaction error, leading to a more accurate description of electron densities and multipoles \cite{2013Reilly, 2013Reilly_2}. For some molecular crystals the correct polymorph ranking is reproduced only when using PBE0+MBD \cite{2016Curtis, 2013Marom}. The PBE0+MBD ranking is considered to be the most reliable of the methods used here. Thermal contributions to the total energy, shown to change the energy ranking in approximately 9\% of organic compounds\cite{2015Nyman}, are not included in the present study.
\section{Code Description}
GAtor is written in Python and uses the spglib \cite{2017Spglib} crystal symmetries library, scikit-learn\cite{2011scikitlearn} machine learning package, and pymatgen\cite{2013Ong} library for materials analysis. GAtor is available for download from www.noamarom.com under a BSD-3 license. The code is modular by design, such that core GA tasks, such as selection, similarity checks, crossover, and mutation, can be interchanged in the user input file and/or modified. For energy evaluations and local optimization, GAtor currently interfaces with the all-electron DFT code FHI-aims \cite{2009Blum,2009Havu}, and may be modified to interface with other electronic structure and molecular dynamics packages.
\begin{figure}[h!]
\includegraphics{Fig2_GAtor_parallelization.pdf}
\caption{An example workflow of GAtor on a high performance computing cluster. In the diagram, $N$ independent GA replicas run on $N$ computing nodes, with $K$ core processing units per node. Single point energy (SPE) evaluations and local optimizations are performed using FHI-aims.}
\label{Fig2_GAtor_parallelization}
\end{figure}
GAtor takes advantage of high performance computing (HPC) architectures by avoiding processor idle time and effectively utilizing all available resources. An example workflow is shown in Fig. \ref{Fig2_GAtor_parallelization}. After initialization, the master process spawns a user-defined number of GA replicas across $N$ nodes. Each independent replica performs the core genetic algorithm tasks independently while reading from and writing to a dynamically updated pool of structures\cite{2013Bhattacharya,2014Bhattacharya,2015Bhattacharya}. Additional multiprocessing may be utilized within each replica for child generation. GAtor has been tested on up to 16,384 Blue Gene/Q nodes (262,144 cores) at the Argonne Leadership Computing Facility.
Two classes of breeding operators are implemented in GAtor, crossover and mutation, described in detail in Sections 3.4-3.5. Crossover operators generate a child by combining the structural genes of two parents, whereas mutation operators create a child by altering the structural genes of one parent. After selection, either crossover or mutation is performed with a user-defined probability. When multiprocessing is used, the same set of parents (crossover) or single parent (mutation) undergo the same breeding operation, but with different random parameters. If a child cannot pass the geometry checks after a user-defined number of attempts, a new selection is performed. Otherwise, the first child that passes the geometry checks proceeds to the first uniqueness check. If a candidate structure successfully passes all geometry checks, uniqueness checks, and energy cutoffs, it is added to the common population. The fitness of each structure in the population is updated, and a new selection can be performed immediately. A detailed account of the core tasks and features of the GA is provided below.
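The selection-breeding cycle described above can be sketched as follows. All callables passed in (\texttt{select}, \texttt{crossover}, \texttt{mutate}, \texttt{geometry\_ok}, \texttt{is\_unique}, \texttt{relax}) are hypothetical stand-ins for GAtor's internal routines, not its actual API, and the sketch omits energy cutoffs and the file-based synchronization between replicas:

```python
import random

def replica_step(population, select, crossover, mutate, geometry_ok,
                 is_unique, relax, mutation_prob=0.3, max_attempts=10):
    """One selection-breeding cycle of a single GA replica (illustrative).

    The callables are hypothetical stand-ins for GAtor's internal routines:
    select(pop, n) returns n parents, geometry_ok/is_unique implement the
    rejection criteria, and relax performs the DFT local optimization.
    """
    # Choose crossover or mutation with a user-defined probability.
    if random.random() < mutation_prob:
        parent = select(population, 1)[0]
        breed = lambda: mutate(parent)
    else:
        parents = select(population, 2)
        breed = lambda: crossover(*parents)
    # The same parent(s) undergo the operation with different random
    # parameters until a child passes the checks or attempts run out.
    for _ in range(max_attempts):
        child = breed()
        if geometry_ok(child) and is_unique(child, population):
            child = relax(child)           # DFT local optimization
            if is_unique(child, population):
                population.append(child)   # update the common pool
                return child
    return None  # all attempts failed; the caller performs a new selection
```

In GAtor itself, many such replicas run this cycle concurrently against the dynamically updated common population.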
\subsection{GA Initialization}
During GA initialization GAtor reads in an initial pool of structures generated by the Genarris random structure generation package\cite{2017Li} using the diverse workflow. Genarris generates random symmetric crystal structures in the 230 crystallographic space groups and then combines fragment-based DFT with clustering techniques from machine learning to produce a high-quality, diverse starting population at a relatively low computational cost, as described in detail in Ref. \citenum{2017Li}. The initial pool structures are pre-relaxed with PBE+TS and \textit{lower-level} numerical settings as described in Section 2 and their total energies are stored beforehand. GAtor updates the starting fitness values of the initial pool structures, as described below, before performing selection.
\subsection{Fitness Evaluation}
The fitness of an individual determines its likelihood of being chosen for crossover or mutation. GAtor provides a traditional energy-based fitness function, in which structures with lower relative stabilities are assigned higher fitness values. Additionally, GAtor provides the option of a cluster-based fitness function, which can use various clustering techniques to perform evolutionary niching. Using cluster-based fitness can reduce genetic drift, as explained below, by suppressing the over-sampling of certain regions of the potential energy surface and promoting the evolution of several subpopulations simultaneously.
\subsubsection{Energy-based Fitness}
In energy-based fitness, the total energy $E_i$ of the $i$th structure in the population is evaluated using dispersion-inclusive DFT as detailed in Section 2. The fitness $f_i$ of each structure is defined as,
\begin{align}
f_i&= \frac{\epsilon_i}{\sum_{j}\epsilon_j}\hspace{1cm}0\leq f_i\leq 1\\
\epsilon_i&=\frac{E_{\mbox{\footnotesize{max}}}-E_i}{E_{\mbox{\footnotesize{max}}}-E_{\mbox{\footnotesize{min}}}}
\end{align}
where $\epsilon_i$ is the $i$th structure's relative energy, and $E_{\mbox{\footnotesize{max}}}$ and $E_{\mbox{\footnotesize{min}}}$ are the dynamically updated highest and lowest total energies in the population, respectively\cite{2015Bhattacharya, 2013Bhattacharya, 2014Bhattacharya}. Hence, structures with lower relative energies have higher fitness values.
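In code, this fitness assignment amounts to a few lines. The sketch below is a minimal stand-alone version (not GAtor's actual implementation) and assumes the population contains at least two distinct energies, so the denominator is nonzero:

```python
def energy_based_fitness(energies):
    """Map total energies to normalized fitness values (illustrative).

    Assumes at least two distinct energies, so E_max - E_min is nonzero.
    """
    e_max, e_min = max(energies), min(energies)
    # Relative energy: 1 for the most stable structure, 0 for the least.
    eps = [(e_max - e) / (e_max - e_min) for e in energies]
    total = sum(eps)
    # Normalize so the fitness values sum to one.
    return [x / total for x in eps]
```

The lowest-energy structure thus receives the largest share of the selection probability.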
\subsubsection{Cluster-Based Fitness}
When using a traditional energy-based fitness function, a GA may be prone to exploring the same region(s) of the potential energy surface, which may or may not include the experimental structure(s) or the global minimum structure. This may be due to a number of factors, including lack of diversity in the common population and biases towards or against certain packing motifs over time, a phenomenon known as genetic drift. Genetic drift can result from biases in the initial pool\cite{2017Li} and from the topology of the potential energy landscape (e.g. a desirable packing motif for a given molecule could be located in a narrow well that is rarely visited). The search may also be influenced by systematic biases of the energy method used (e.g., the exchange-correlation functional and dispersion method), towards or against certain packing motifs\cite{2016Curtis}.
GAs may be adapted to be more suitable for multi-modal optimization using evolutionary niching methods\cite{1998Sareni, 2012Shir, 2015Preuss}. Niching methods support the formation of stable subpopulations in the neighborhood of several optimal solutions. For molecular crystal structure prediction, incorporating niching techniques may increase diversity and diminish the effect of inherent or initial pool biases. The goal is for the GA to locate all low-energy polymorphs that may or may not have similar structural motifs to the experimentally observed crystal structure(s) or the most stable crystal structure present in the population.
GAtor provides the option to dynamically cluster the common population of molecular crystals into groups (niches) of structural similarity, using pre-defined feature vectors for each target molecule and clustering algorithms implemented in the scikit-learn machine learning Python package\cite{2011scikitlearn}. Currently, GAtor offers the use of radial distribution function (RDF) vectors of interatomic distances for user-defined species, relative coordinate descriptor (RCD) vectors \cite{2017Li}, or a simple lattice parameter based descriptor, $L$, given by:
\begin{equation}
L=\frac{1}{\sqrt[3]{V}}(a, b, c)
\end{equation}
where $V$ is the unit cell volume and $a$, $b$, and $c$ are the structure's lattice parameters after employing Niggli reduction\cite{1928Niggli, 1973Gruber, 1976Kvrivy, 2004Grosse} and unit cell standardization. Niggli reduction produces a unique representation of the translation vectors of the unit cell but does not define a standard orientation. Therefore, all unit cell lattice vectors are standardized such that $\vec{a}$ points along the $\hat{x}$ direction, $\vec{b}$ lies in the $xy$ plane, and the convention $a\leq b\leq c$ is used. The lattice parameter based descriptor encourages the sampling of under-represented lattices in the population (e.g., nearly two-dimensional structures, which may have one lattice parameter significantly shorter than the others). GAtor offers K-Means\cite{2002Kanungo} and Affinity Propagation (AP)\cite{2007Frey} clustering, and may be adapted to use other clustering algorithms implemented in scikit-learn. AP is a clustering method that determines the number of clusters in a data set, based on a structure similarity matrix, rather than defining the number of clusters \textit{a priori}. This has the advantage of resolving small, structurally distinct clusters\cite{2017Li}. Once the common population has been clustered into niches, a fitness sharing scheme\cite{1998Sareni} is applied such that a structure's scaled fitness, $f'_i$, is given by
\begin{equation}
f'_i = \frac{f_i}{ m_i}
\end{equation}
where $m_i$ is a cluster-based scaling parameter, currently determined by the number of structures in each individual's shared cluster. This clustering scheme increases the fitness of under-sampled low-energy motifs within the population, and suppresses the over-sampling of densely populated regions. One example of evolutionary niching is discussed in Section 4.1 for Target XXII. Further investigations of the effect of the descriptor and the fitness function will be the subject of future work.
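Both the lattice descriptor and the fitness sharing step are straightforward to express in code. The sketch below is illustrative only: it assumes the Niggli-reduced lattice parameters, the cell volume, and the cluster labels (e.g., as returned by scikit-learn's AP implementation) are already available, and does not reproduce GAtor's internal data structures:

```python
from collections import Counter

def lattice_descriptor(a, b, c, volume):
    """Volume-normalized lattice descriptor L = (a, b, c) / V^(1/3).

    Assumes a, b, c come from a Niggli-reduced, standardized cell
    with a <= b <= c, and that the cell volume V is known.
    """
    s = volume ** (1.0 / 3.0)
    return (a / s, b / s, c / s)

def shared_fitness(fitness, labels):
    """Scale each fitness by its niche occupancy, f'_i = f_i / m_i.

    labels[i] is the cluster assignment of structure i (e.g., the output
    of a clustering algorithm); m_i is the size of that cluster.
    """
    sizes = Counter(labels)
    return [f / sizes[lab] for f, lab in zip(fitness, labels)]
```

Structures in sparsely populated niches keep most of their fitness, while members of crowded clusters are penalized in proportion to the cluster size.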
There are a variety of other strategies for incorporating niching or clustering into an evolutionary algorithm. Refs. \citenum{2010Lyakhov}-\citenum{2013Lyakhov} use fingerprint functions based on inter-atomic distances to prevent structures that are too dissimilar from mating. Recently, Ref. \citenum{2017Jorgensen} explored incorporating agglomerative hierarchical clustering (AHC) into an evolutionary algorithm applied to organic molecules and surfaces. AHC detects the number of clusters in the given data set, similar to AP. One of their methods promoted selection of cluster outliers, while another utilized a fitness function that combined the structure's cluster size with its energy, similar to the technique employed in GAtor.
\subsection{Selection}
Selection is inspired by the evolutionary principle of survival of the fittest. In GAtor, individuals with structural motifs associated with higher fitness values have a higher probability of being selected for mating. GAtor currently offers a choice of two genetic algorithm selection strategies: roulette wheel selection and tournament selection.
\subsubsection{Roulette wheel selection}
This selection technique \cite{1989Goldberg} simulates a roulette wheel, where fitter individuals in the population conceptually take up larger slots on the wheel, and therefore have a higher probability of being selected when the wheel is spun. In GAtor, the procedure is as follows: First, a random number $r$ is chosen, uniform in the interval [0, 1]. Then, the population is sorted by fitness, and the first structure whose normalized fitness value satisfies $f_i>r$ is selected for mating\cite{2013Bhattacharya, 2014Bhattacharya,2015Bhattacharya}.
\subsubsection{Tournament Selection}
In tournament selection \cite{1989Goldberg}, a user-defined number of individuals are randomly selected from the common population to form a tournament. In GAtor, the two structures with the highest fitness values in the tournament (i.e. the winner and the runner-up) are selected for mating. Tournament selection is efficient (requiring no sorting of the population) and gives the user control over the selection pressure via control of the tournament size \cite{1996Blickle}.
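A minimal sketch of tournament selection follows; the parallel lists of structures and fitness values are an illustrative data layout, not GAtor's actual one:

```python
import random

def tournament_select(population, fitness, size=8):
    """Return the winner and runner-up of a random tournament.

    population and fitness are parallel lists; size is the user-defined
    tournament size, which controls the selection pressure.
    """
    entrants = random.sample(range(len(population)), size)
    # Only the tournament entrants are ranked; no global sort is needed.
    ranked = sorted(entrants, key=lambda i: fitness[i], reverse=True)
    return population[ranked[0]], population[ranked[1]]
```

With a small tournament, even low-fitness structures are occasionally selected, preserving diversity; increasing the tournament size concentrates selection on the fittest individuals.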
\subsection{Crossover}
Crossover is a breeding operator that combines the structural genes of two parent structures selected for mating to form a single offspring. The crossover operators implemented in GAtor were developed specifically for organic molecular crystals. The popular `cut-and-splice'\cite{1995Deaven} crossover operator used in other genetic algorithms, takes a random fraction of each parent's unit cell (and the motifs within) and pastes them together. While this approach is successful for structure prediction of clusters and inorganic crystals\cite{2004Bazterra, 1999Hartke, 2006Oganov, 2006Glass,2008Trimarchi,2009Froltsov, 2010Ji,2011Lonie,2013Bhattacharya, 2014Bhattacharya,2015Bhattacharya,2017Jorgensen}, it may not be the most natural choice for molecular crystals because it can break important space group symmetries that may be associated with, e.g., efficient packing and lower total energies. Initialization of the starting population within random symmetric space groups has been shown to increase the efficiency of evolutionary searches \cite{2010Wang,2012Zhu,2013Lyakhov, 2017Avery_2}. In the same vein, further steps can be taken to design the breeding operators themselves to exploit and explore the symmetry of the starting population and to reduce the number of expensive first principles calculations on structures far from equilibrium. Therefore, several mutation and crossover operators implemented in GAtor can preserve or break certain space group symmetries of the parent structure(s), as detailed below.
\subsubsection{Standard Crossover}
In this crossover scheme each parent's genes are represented by the Niggli-reduced, standardized unit cell lattice parameters and angles ($a, b, c, \alpha, \beta, \gamma$) as well as the molecular geometry\footnote{The geometries of the molecules are allowed to relax during local optimization. This is important for semi-rigid molecules, such as Target XXII. This extra degree of freedom is accounted for in the crossover process by randomly selecting the relaxed molecular geometry from one parent.}, orientation $\Phi=(\theta_z, \theta_y, \theta_x)$, and center of mass (COM) position in fractional coordinates, $R_{\mbox{\tiny{COM}}}$, of each molecule within the unit cell. The orientation of each molecule within the unit cell is defined by computing the $\theta_z$, $\theta_y$, and $\theta_x$ Euler angles, which rotate a Cartesian reference frame to an inertial reference frame aligned with each molecule's principal axes of rotation. When generating a child structure, the molecules in the unit cell of each parent structure are randomly paired together. The fractional COM positions for each molecule in the child structure are directly inherited from one randomly selected parent. The lattice parameters from each parent are combined with random fractions to form the lattice parameters of the child structure. The child's molecular geometries are inherited from one randomly selected parent and initially centered at the origin with their principal axes of rotation aligned with the Cartesian axes. The final orientations of the molecules in the child structure are constructed by combining the orientation angles of the paired molecules from the parent structures with random fractions.
Fig. \ref{Fig3_Crossover_methods}, panel (a) shows an example of standard crossover for two selected parent structures of Target XXII with space groups $P2_1/c$ and $Pca2_1$, respectively. Four molecules from each parent are randomly selected (circled in blue) and paired together. The molecular geometries and COM positions of the child structure are both inherited from the $P2_1/c$ parent structure. The orientation angles of the molecules paired from each parent structure are combined with random fractions. The lattice parameters are also combined with random fractions. In this specific example, a child structure is created with a $Z^\prime=2$ motif that has lower symmetry than either of its parents, $P\bar{1}$, but still contains inversion symmetry before local optimization.
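The parameter blending at the heart of standard crossover can be sketched as follows. The dictionary fields (\texttt{lattice}, \texttt{com\_positions}, \texttt{orientations}) are illustrative placeholders rather than GAtor's data structures, molecules are paired in order for brevity (GAtor pairs them randomly), and each scalar is combined with its own random fraction:

```python
import random

def blend(p1, p2):
    """Combine two parent values with a random fraction f in [0, 1)."""
    f = random.random()
    return f * p1 + (1.0 - f) * p2

def standard_crossover(parent1, parent2):
    """Illustrative standard crossover for molecular crystals.

    Each parent is a dict with 'lattice' (a, b, c, alpha, beta, gamma),
    per-molecule Euler 'orientations', and fractional 'com_positions'.
    COM positions (and, implicitly, molecular geometries) are inherited
    whole from one randomly chosen parent.
    """
    child = {}
    # Lattice parameters and angles: blended component by component.
    child["lattice"] = [blend(x, y) for x, y in
                        zip(parent1["lattice"], parent2["lattice"])]
    # COM positions: inherited from one randomly selected parent.
    donor = random.choice((parent1, parent2))
    child["com_positions"] = [list(p) for p in donor["com_positions"]]
    # Orientations: blend the Euler angles of the paired molecules.
    child["orientations"] = [[blend(t1, t2) for t1, t2 in zip(o1, o2)]
                             for o1, o2 in zip(parent1["orientations"],
                                               parent2["orientations"])]
    return child
```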
\begin{figure}
\includegraphics{Fig3_Crossover_methods.pdf}
\caption{Examples of (a) standard crossover and (b-c) symmetric crossover applied to selected parent structures of Target XXII. The colors of the molecules correspond to the symmetry operations applied to the asymmetric unit of each structure, shown in white. The structures shown are projected along the $\vec{a}$ lattice vector and the $\vec{b}$, and $\vec{c}$ lattice vectors are highlighted in green and blue, respectively.}
\label{Fig3_Crossover_methods}
\end{figure}
\subsubsection{Symmetric Crossover}
In this crossover scheme each parent's genes are represented by the orientation and COM position of their respective crystallographic asymmetric units as well as their respective space group operations and unit cell lattice parameters. For the explicit computation of each parent's asymmetric unit and space group operations, GAtor relies on the pymatgen\cite{2013Ong} package, which utilizes the spglib crystal symmetries library\cite{2017Spglib}. When generating a child structure, the genes of the parents are combined strategically to preserve one parent's space group as detailed below.
First, the asymmetric unit and corresponding space group operations are deduced for both parents. If the two asymmetric units contain the same number of molecules, then the respective molecules in each unit are paired together. If the asymmetric units contain a different number of molecules, then one parent's asymmetric unit is used as a reference and paired with an equivalent number of molecules in the second parent's unit cell. If the asymmetric units contain different relaxed molecular geometries, then the molecular conformations in the child's asymmetric unit may be randomly inherited from one parent. The orientation and COM position of the molecule(s) within the child's asymmetric unit are constructed by combining the orientation and COM position of the paired molecule(s) from each parent with random fractions. If both parents possess the same Bravais lattice type then their lattice parameters may be combined with random fractions. Otherwise, the child's lattice is randomly inherited from one parent. Finally, the symmetry operations (containing specific translations, reflections, and rotations of the asymmetric unit in fractional coordinates) are selected from one parent and applied to the child's generated asymmetric unit and lattice. Either parent's space group operations may be randomly selected and applied to the child's asymmetric unit when both parents possess the same number of molecules in the asymmetric unit and the same Bravais lattice type. Otherwise, one parent's space group operations will be compatible with the symmetry of the generated lattice and asymmetric unit by construction and are thus applied. This crossover procedure ensures that the space group of the child is directly inherited from one of its parents, at least before local optimization; local optimization itself does not constrain the symmetry of the child structure.
Examples of symmetric crossover are shown in Fig. \ref{Fig3_Crossover_methods}, panels (b) and (c). The participating asymmetric units of the parent and child structures are circled in red. In panel (b), the child structure inherits the molecular geometry from the $Pca2_1$ parent structure, which is more planar than the molecular geometry of the $P2_1/c$ structure. The orientations of the asymmetric units (both $Z^\prime$=1) and lattice vectors of both parents are combined with random weights. The space group symmetry operations from the $Pca2_1$ parent are applied to the child's asymmetric unit on the generated lattice. In panel (c), the child structure inherits the molecular geometry and symmetry operations from the $P2_1/c$ parent structure. The randomness used when creating the orientation of the motif in the asymmetric unit explains why the child shown in panel (b) has a different orientation of the asymmetric unit than the one shown in panel (c), and allows for more diversity in the generated offspring. In these specific examples, both child structures produced using symmetric crossover have higher symmetry than the child produced with standard crossover, before local optimization.
\subsection{Mutation}
Mutation operators are applied to the genes of single parent structures to form new offspring. In GAtor, certain mutations may promote exploration of the potential energy surface via dramatic structural changes, while others may exploit promising regions via subtle changes. The user chooses the percentage of selected structures that undergo mutation, and may select specific or random mutations to be applied. GAtor also provides an option that allows a percentage of structures to undergo a combination of any two mutation operations before local optimization. This approach encourages exploration and may reduce the number of duplicate structures generated in the search \cite{2011Lonie}.
\subsubsection{Strains} GAtor offers a variety of strain operators that produce child structures by acting upon the lattice vectors of the selected parent structure. Similar to Refs. \citenum{2006Oganov},\citenum{2012Zhu}, and \citenum{2011Lonie}, the strain tensor is represented using the symmetric Voigt strain matrix $\boldsymbol{\epsilon}$,
\begin{equation}
\boldsymbol{\epsilon} =
\begin{bmatrix}
\epsilon_{11} & \frac{\epsilon_{12}}{2} & \frac{\epsilon_{13}}{2} \\
\frac{\epsilon_{12}}{2} & \epsilon_{22} & \frac{\epsilon_{23}}{2}\\
\frac{\epsilon_{13}}{2} & \frac{\epsilon_{23}}{2} & \epsilon_{33}\\
\end{bmatrix}.
\end{equation}
The strain matrix is applied to each lattice vector $\vec{a}_{\mbox{\tiny parent}}$ of the chosen parent structure to produce the lattice vector of the child $\vec{a}_{\mbox{\tiny child}}$ via
\begin{equation}
\vec{a}_{\mbox{\tiny child}} =\vec{a}_{\mbox{\tiny parent}} +
\boldsymbol{\epsilon} \vec{a}_{\mbox{\tiny parent}}.
\end{equation}
The components of $\epsilon_{ij}$ are chosen to produce different modes of strain. To apply random strains, all six unique $\epsilon_{ij}$ components are randomly selected from a normal distribution with a user-defined standard deviation that determines the strength of the applied strain. To apply random deformations in certain crystallographic directions, one or more random $\epsilon_{ij}$ may be chosen while the others are set to 0. Strains that preserve the overall unit cell volume of the parent structure, or change a single unit cell angle, may also be applied. When applying a strain, the COM of each molecule is moved according to its fractional coordinates. An example strain mutation is shown in Fig. \ref{Fig4_Mutation_methods}, panel (a). Here, a random strain is applied that transforms the lattice of the parent structure from monoclinic ($\alpha=\gamma=90; \beta \neq 90$) to triclinic ($\alpha \neq \beta \neq \gamma \neq 90$). The COM of each molecule is moved accordingly, breaking the glide and screw symmetry of the parent structure and creating a $Z^\prime$=2 child structure.
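The random strain mode can be sketched in a few lines of Python. Lattice vectors are represented as rows of a 3x3 nested list, and the standard deviation \texttt{sigma} plays the role of the user-defined strain strength; this is an illustrative sketch, not GAtor's implementation:

```python
import random

def voigt_strain_matrix(sigma=0.05):
    """Random symmetric Voigt strain matrix with components drawn from a
    normal distribution of user-defined standard deviation sigma."""
    e11, e22, e33, e12, e13, e23 = (random.gauss(0.0, sigma)
                                    for _ in range(6))
    return [[e11,     e12 / 2, e13 / 2],
            [e12 / 2, e22,     e23 / 2],
            [e13 / 2, e23 / 2, e33]]

def strain_lattice(lattice, eps):
    """Apply a_child = a_parent + eps * a_parent to each lattice vector."""
    strained = []
    for vec in lattice:
        new = [vec[i] + sum(eps[i][j] * vec[j] for j in range(3))
               for i in range(3)]
        strained.append(new)
    return strained
```

Setting selected components of the matrix to zero, as described above, restricts the deformation to particular crystallographic directions.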
\begin{figure}[h!]
\includegraphics{Fig4_Mutation_methods.pdf}
\caption{Examples of (a) random strain, (b) rotation, and (c) translation mutations applied to a $P2_1/c$ structure of Target XXII. The colors of the molecules correspond to the symmetry operations applied to the asymmetric unit of each structure, shown in white and circled in red. The structures shown are projected along the $\vec{a}$ lattice vector and the $\vec{b}$, and $\vec{c}$ lattice vectors are highlighted in green and blue, respectively.}
\label{Fig4_Mutation_methods}
\end{figure}
\subsubsection{Molecular Rotations}
Rotation mutations change the orientations of the molecules in the selected parent structure. Different random rotations may be applied to the Cartesian coordinates of the atoms in selected molecules centered at the origin, or the same random rotation can be applied about each molecule's principal axes of rotation. For $Z^\prime$=1 structures, the latter type of rotation is equivalent to randomly changing the orientation of the molecule in the asymmetric unit, as shown in Fig. \ref{Fig4_Mutation_methods}, panel (b). Here, each molecule from the parent structure receives the same random rotation about its principal axes of rotation, rotating the asymmetric unit and preserving the parent's $P2_1/c$ symmetry in the resulting offspring.
\subsubsection{Translations}
Translational mutations change the position of $R_{\mbox{\tiny{COM}}}$ for certain molecules within the unit cell. They are either applied randomly to the COM (in Cartesian coordinates) of randomly selected molecules, or in a random direction in the basis of each molecule's inertial reference frame, constructed from each molecule's principal axes of rotation. An example of the latter type of mutation is depicted in Fig. \ref{Fig4_Mutation_methods}, panel (c). Here, each molecule from the parent structure receives the same random translation in the basis of its inertial reference frame. In this case, paired enantiomers are translated in equal and opposite directions, which breaks the glide symmetry of the parent structure, and forms an asymmetric unit containing two molecules in a tightly packed dimer.
\subsubsection{Permutations}
Permutation mutations swap $R_{\mbox{\tiny{COM}}}$ for randomly selected molecules in the parent unit cell. Depending on the point group symmetry of the molecule, the lattice, and the permutation, this operator can preserve, add, or break certain space group symmetries of the parent structure. An example permutation mutation that preserves the parent's space group symmetry is shown in Fig. \ref{Fig5_Mutation_methods_2}, panel (a). Here, a permutation is applied which effectively swaps $R_{\mbox{\tiny{COM}}}$ of the highlighted asymmetric unit (shown in white) and its nearest neighbor (shown in yellow), as well as swapping $R_{\mbox{\tiny{COM}}}$ of the two other molecules in the unit cell related by screw and glide symmetry (shown in green and fuchsia, respectively). As a result, the child structure inherits the $P2_1/c$ symmetry of the parent structure.
\begin{figure}[h]
\includegraphics{Fig5_Mutation_methods_2.pdf}
\caption{Examples of (a) permutation, (b) permutation-rotation, and (c) permutation-reflection mutations applied to a $P2_1/c$ structure of Target XXII. The colors of the molecules correspond to the symmetry operations applied to the asymmetric unit of each structure, shown in white and circled in red. The structures shown are projected along the $\vec{a}$ lattice vector and the $\vec{b}$, and $\vec{c}$ lattice vectors are highlighted in green and blue, respectively.}
\label{Fig5_Mutation_methods_2}
\end{figure}
\subsubsection{Permutation-Rotations and Permutation-Reflections}
Permutation-rotation mutations swap randomly selected molecules within the unit cell and then apply a random rotation about their principal axes of rotation. Fig. \ref{Fig5_Mutation_methods_2}, panel (b) shows an example of permutation-rotation. Here, the two molecules in the parent unit cell colored in yellow and green swap positions and undergo a random rotation, while the others remain fixed. As a result, the structure produced (space group $Pc$) no longer contains the exact twofold screw symmetry of the parent structure (space group $P2_1/c$) and effectively contains an asymmetric unit consisting of two molecules with the same chirality. In the permutation-reflection mutation, half of the molecules in the unit cell swap positions and then undergo a reflection in the $xy$, $yz$, or $zx$ Cartesian planes centered at their COM. Fig. \ref{Fig5_Mutation_methods_2}, panel (c) shows an example of permutation-reflection. Here, the two molecules in the parent unit cell colored in yellow and fuchsia swap positions and undergo a reflection about the \textit{zx} plane pointing out of the page, while the others remain fixed. As a result, the structure produced (space group $P2_1$) no longer contains the glide symmetry of the parent structure (space group $P2_1/c$), and effectively contains an asymmetric unit consisting of two molecules of the same chirality. For crystals containing chiral molecules, such as Target XXII, this mutation can be especially effective because it can swap the relative positioning of enantiomers within the unit cell.
\subsection{Rejection Criteria}
Because crossover and mutation operations are performed randomly on a diverse set of structures, the offspring generated may be unphysical or duplicates of existing structures. GAtor applies various criteria for rejecting a child structure before performing local optimization. This preserves the diversity of the population by preventing uncontrolled multiplication of similar structures and avoids computationally expensive local optimization of unreasonable or redundant structures.
\subsubsection{Geometry Checks}
Structures may be rejected if any two intermolecular contacts are too close. The minimum allowed distance $d_{\mbox{\tiny min}}$ between any two atoms $A$ and $B$ belonging to different molecules is given by:
\begin{equation}
d_{\mbox{\tiny min}} = s_{\mbox{\footnotesize r}}(r_{\mbox{\tiny A}} + r_{\mbox{\tiny B}})
\end{equation}
where $r_{\mbox{\tiny A}}$ and $r_{\mbox{\tiny B}}$ are the vdW radii of atoms $A$ and $B$, respectively, and $s_{\mbox{\footnotesize r}}$ is a user-defined scaling parameter, typically set between 0.6 and 0.9. Additionally, the user may constrain how close the COMs of any two molecules are allowed to be, or specify the allowed unit cell volume range for the generated structures. If the children produced by a parent or set of parents do not pass the geometry checks after a user-defined number of attempts, a new selection is performed.
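The distance criterion above can be sketched as a pairwise loop over atoms of different molecules. This is an illustrative simplification (periodic images are ignored); the function and dictionary names are our own, not GAtor's.

```python
import math

def passes_geometry_check(molecules, vdw_radius, s_r=0.85):
    """Reject a structure if any intermolecular atom pair is closer than
    d_min = s_r * (r_A + r_B).  Each molecule is a list of
    (element, (x, y, z)) tuples; vdw_radius maps element -> radius.
    Periodic images are ignored in this sketch."""
    for m in range(len(molecules)):
        for n in range(m + 1, len(molecules)):
            for el_a, pa in molecules[m]:
                for el_b, pb in molecules[n]:
                    d = math.dist(pa, pb)
                    if d < s_r * (vdw_radius[el_a] + vdw_radius[el_b]):
                        return False
    return True
```

With the carbon vdW radius of 1.7 \AA{} and $s_{\mbox{\footnotesize r}}=0.85$, two carbons on different molecules must be at least 2.89 \AA{} apart.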
\subsubsection{Similarity Checks}
Identifying duplicate crystal structures is critical for maintaining diversity and preventing a GA from getting stuck in a specific region of the potential energy surface. Furthermore, it is imperative to identify structures that are too similar to others in the existing population before local optimization to avoid expensive and redundant DFT calculations. Checking for duplicates is complicated by the fact that multiple representations exist for the same crystal structure. To address this issue, Niggli reduction\cite{1928Niggli, 1973Gruber, 1976Kvrivy, 2004Grosse} and cell standardization are used for all structures within GAtor, as previously described in Section 3.2.2.
GAtor performs a similarity check on all generated offspring before and after local optimization. The pre-relaxation similarity check prevents the local optimization of any structures too similar to others in the population, using loose site and lattice parameter tolerances in pymatgen's StructureMatcher class\cite{2013Ong}. The post-relaxation similarity check identifies whether any optimized structures relaxed into \textit{bona fide} duplicates of existing structures in the population, using stricter site and lattice parameter tolerance settings. If the candidate structure is found to have a similar lattice to another in the common pool (within the user-defined tolerances for the lattice parameter lengths and angles), then the root mean square (RMS) distances are computed between equivalent atomic sites. If the maximum, normalized RMS distance is within the user-defined tolerance, then the two structures are determined to be duplicates.
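A pared-down version of the two-stage duplicate test (lattice comparison first, then site distances) might look like the sketch below. It assumes the sites of the two structures are already listed in equivalent order, a mapping that a real matcher such as pymatgen's StructureMatcher must establish itself; all names and tolerance values here are illustrative.

```python
import math

def similar_lattice(lat1, lat2, ltol=0.1, angle_tol=2.0):
    """Compare (a, b, c, alpha, beta, gamma): fractional tolerance on
    the lengths, absolute tolerance (degrees) on the angles."""
    for x, y in zip(lat1[:3], lat2[:3]):
        if abs(x - y) > ltol * max(x, y):
            return False
    return all(abs(x - y) <= angle_tol for x, y in zip(lat1[3:], lat2[3:]))

def is_duplicate(s1, s2, ltol=0.1, angle_tol=2.0, site_tol=0.3):
    """s = (lattice, sites), with Cartesian sites assumed to be listed
    in equivalent order.  Two structures are duplicates if their
    lattices match within tolerance and the largest displacement
    between equivalent sites is within site_tol (a simplification of
    the normalized RMS criterion described in the text)."""
    lat1, sites1 = s1
    lat2, sites2 = s2
    if not similar_lattice(lat1, lat2, ltol, angle_tol):
        return False
    disp = [math.dist(p, q) for p, q in zip(sites1, sites2)]
    return max(disp) <= site_tol
```

Looser tolerances would be used for the pre-relaxation check and tighter ones for the post-relaxation check.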
\subsubsection{Single Point Energy (SPE) Cutoff}
Single point DFT calculations, using PBE+TS and \textit{lower-level} numerical settings, are performed on unrelaxed offspring to decide whether they should undergo local optimization, as shown in Fig. \ref{Fig2_GAtor_parallelization}.
If the energy of the unrelaxed structure is higher than the user-defined cutoff, it is immediately rejected. This reserves computational resources for the local optimization of structures that are more likely to possess desirable genetic features. The energy cutoff can be fixed or set relative to the current global minimum. Typically, the relative energy cutoff is set to 70-120 kJ/mol per molecule; however, it may be system-dependent. A recommended best practice is to set the cutoff so as to prevent the addition of structures worse in energy than those in the diverse initial pool.
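The cutoff logic reduces to a one-line test; the sketch below, with illustrative names, covers both the fixed and relative variants described above.

```python
def passes_spe_cutoff(spe, global_min, cutoff=100.0, relative=True):
    """Accept an unrelaxed offspring for local optimization only if its
    single point energy (kJ/mol per molecule) is below the cutoff,
    which may be fixed or set relative to the current global minimum."""
    limit = global_min + cutoff if relative else cutoff
    return spe <= limit
```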
\subsection{Termination}
Because there is no unique way of converging a genetic algorithm, the user specifies simple conditions for when the code should terminate. One option is to terminate the algorithm if a certain number of the best structures in the common population have not changed within a user-defined number of iterations (e.g., if the top 20 structures have not changed in 50 iterations of the GA). This tracks whether all low-energy structures have been located in a reasonable number of iterations. Here, an iteration is defined as the point at which a structure has passed all rejection criteria and is added to the common pool. Alternatively, the user may choose to terminate after the total population has reached a certain size. Additionally, the user may terminate the code manually at any time. If GAtor stops due to, e.g., wall time limits or hardware failures, there is an option to restart the code and finish all calculations left over from the previous run before performing new selections. Code restarts can also be used strategically to modify the GA settings (e.g. to tighten the energy cutoffs or change mutation schemes) without affecting the common population of structures.
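The first termination criterion can be sketched as follows, assuming a per-iteration record of the identities of the current best structures is kept; this is our illustration, not GAtor's code.

```python
def should_terminate(history, top_n=20, window=50):
    """history[i] holds the ids of the top_n lowest-energy structures
    in the common pool after iteration i.  Terminate when the top_n
    set has not changed over the last `window` iterations."""
    if len(history) <= window:
        return False
    ref = set(history[-1][:top_n])
    return all(set(h[:top_n]) == ref for h in history[-window - 1:-1])
```

After each addition to the common pool, the GA would append the current top-20 ids to `history` and stop once `should_terminate` returns True.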
\section{Applications}
GAtor was used to perform crystal structure prediction for the four chemically diverse blind test targets shown in Fig. \ref{Fig1_2D_Diagrams}. The initial pool for each target was generated with Genarris\cite{2017Li} to create a starting population of diverse, high-quality structures. The distribution of space groups for each initial pool is provided in the supporting information. The generated initial pool structures were locally optimized with the same DFT settings used in the GA and checked for duplicates. For each molecule, a variety of crossover, mutation, and selection settings were tested in separate runs starting from the same initial population. For testing purposes, GA searches were performed only with the same number of molecules per unit cell as the experimental structure(s). The number of molecules in the asymmetric unit was not constrained. In all cases, the experimental structures were generated, as well as several other low-energy structures that may be viable polymorphs.
\subsection{Target XXII}
Target XXII (C$_8$S$_3$N$_4$) was selected from the sixth blind test\cite{2016Reilly}. It belongs to a unique class of compounds, called thiacyanocarbons, which only contain carbon, nitrogen, sulfur and a plurality of cyano groups\cite{1962Simmons}.
\begin{figure}[h]
\includegraphics{Fig6_TargetXXII_GA_runs.pdf}
\caption{(a) The average energy of the top 20 Target XXII structures as a function of GA iteration and (b) the global minimum structure generated as a function of GA iteration, shown for different GA runs. S, N, and C atoms are colored in yellow, blue, and grey, respectively. The structures shown are projected along the $\vec{b}$ lattice vector and the $\vec{a}$, and $\vec{c}$ lattice vectors are highlighted in red and blue, respectively.}
\label{Fig6_TargetXXII_GA_runs}
\end{figure}
The molecule contains no rotatable bonds; however, it can bend about the S-S axis of the six-membered ring. The energy barrier between its chiral forms is small, leading to the appearance of many structures with planar or near-planar conformations in the computed crystalline energy landscape \cite{2016Reilly, 2016Curtis}. The correct crystal structure of Target XXII was generated by 12 out of 21 groups that participated in category 1 of the most recent blind test\cite{2016Reilly}, and ranked as the most stable structure by 4 groups.
\begin{figure*}[h!]
\includegraphics{Fig7_TargetXXII_Breeding_Routes.pdf}
\caption{The different evolutionary routes which generated the experimental structure of Target XXII for different runs of the GA. The $\vec{a}$, $\vec{b}$, and $\vec{c}$ crystallographic lattice vectors are displayed in red, green, and blue, respectively.}
\label{Fig7_TargetXXII_Breeding_Routes}
\end{figure*}
GAtor was run with a variety of GA settings using the same initial pool. In principle, a GA should be run numerous times to determine how a particular group of settings performs. Because GAtor is a first-principles algorithm, a more practical approach is adopted, where different GA settings are used in several runs and the structures produced from all runs are then combined for postprocessing. For Target XXII, the initial pool contained 100 structures in a variety of space groups. All runs were stopped when the number of structures added to the common population from the GA reached 550 structures. Fig. \ref{Fig6_TargetXXII_GA_runs} shows an analysis of the various GA runs. Here, a GA iteration corresponds to when a single structure has passed all rejection criteria and has been added to the common population. The shorthand notation used for the different GA runs is as follows: standard crossover (SC), symmetric crossover (SymC), tournament selection (T), and roulette wheel selection (R). The percentage (e.g. 75\%) indicates the crossover probability, with the remaining percentage (e.g. 25\%) indicating the mutation probability. For runs that used tournament selection, the tournament size is shown in parentheses. Cluster-based fitness is denoted by a C after the selection type. Here, Affinity Propagation\cite{2007Frey} clustering was used with the descriptor given by Eq. 3, which promotes the selection of structures with under-sampled lattice parameters. Although this descriptor is simple, it provides insight into the behavior of cluster-based fitness in the GA and was successful in generating the experimental structure of Target XXII.
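The two selection schemes compared in these runs, roulette wheel and tournament selection, can be sketched generically as below. Fitness values are assumed non-negative, and the names are our own; this is not GAtor's implementation.

```python
import random

def roulette_wheel(pool, fitness, rng=random):
    """Select a parent with probability proportional to its fitness."""
    total = sum(fitness[s] for s in pool)
    pick = rng.uniform(0.0, total)
    acc = 0.0
    for s in pool:
        acc += fitness[s]
        if pick <= acc:
            return s
    return pool[-1]  # guard against floating-point round-off

def tournament(pool, fitness, size=3, rng=random):
    """Draw `size` structures at random and keep the fittest."""
    contestants = rng.sample(pool, size)
    return max(contestants, key=lambda s: fitness[s])
```

Tournament selection applies stronger selection pressure as the tournament size grows, which is consistent with the lower average energies observed for the tournament-based runs.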
The average energy of the top 20 structures per GA iteration for the different runs is shown in Fig. \ref{Fig6_TargetXXII_GA_runs}, panel (a). The energies shown are relative to the global minimum structure evaluated with PBE+TS and \textit{lower-level} numerical settings. For the seven runs that used energy-based fitness, the average energy of the top 20 structures smoothly converges to within approximately 5 kJ/mol per molecule of the global minimum structure upon GA termination. The runs that used tournament selection had a slightly lower average energy of the top 20 structures over time compared to the runs using roulette wheel selection. The run that used clustering, depicted in orange, shows a larger average energy than the other runs and a slower, more erratic convergence of the top 20 structures to within 7 kJ/mol per molecule of the global minimum structure upon GA termination. This behavior is not unusual because the cluster-based fitness explicitly promotes under-represented structures in the population, which may have higher energies.
For all runs, the minimum energy structure as a function of GA iteration is shown in Fig. \ref{Fig6_TargetXXII_GA_runs}, panel (b). The energies of the experimental structure and the lowest energy structure in the initial pool are also indicated. The latter happened to correspond to the PBE+TS global minimum structure using \textit{lower-level} numerical settings. We note that the initial pool produced by Genarris is not random, but rather consists of a diverse set of structures pre-screened with a Harris Approximation\cite{1985Harris}, as detailed in Ref. \citenum{2017Li}. All runs generated the experimental structure (located approximately 3.3 kJ/mol per molecule above the global minimum), but at different GA iterations. Most runs located structures lower in energy than the experimental structure, but only those that used tournament selection and energy-based fitness (shown in red, yellow, green, and cyan) generated the structure ranked second after the global minimum. GA runs that used symmetric crossover, tournament selection, and energy-based fitness (shown in yellow, green, and cyan) found the experimental structure in fewer GA iterations on average than the runs that used energy-based fitness and roulette wheel selection (shown in blue, purple, and pink).
Fig. \ref{Fig7_TargetXXII_Breeding_Routes} depicts different evolutionary routes that generated the experimental structure in selected GA runs. Each route starts from an initial pool structure and details the various breeding operations (followed by local optimization), which ultimately generate the experimental structure. The variety of evolutionary routes and paths highlights the flexibility and randomness of the GA. In particular, the run that utilized the cluster-based fitness function, shown in orange, took a unique path to the experimental structure. A crucial mutation along this route was permutation-reflection, which introduced an inversion center and created a $P\bar{1}$, $Z^\prime$=2 structure. This $P\bar{1}$ structure subsequently underwent permutation followed by local optimization to generate the $P2_1/n$ experimental structure. Overall, the combination of symmetric crossover and mutation was highly effective for Target XXII.
\begin{figure*}[h!]
\includegraphics{Fig8_TargetXXII_clustering.pdf}
\caption{A comparison of the clusters and structural motifs found in (a) the initial pool, (b) the common population evolved using energy-based fitness, and (c) the common population evolved with cluster-based fitness. The average energy for each cluster is plotted using black circles and the standard deviation of energies for each cluster is depicted in grey. For the crystal structures shown, the $\vec{a}$, $\vec{b}$, and $\vec{c}$ crystallographic lattice vectors are displayed in red, green, and blue, respectively.}
\label{Fig8_TargetXXII_clustering}
\end{figure*}
A detailed comparison between the runs that used tournament selection and 50\% standard crossover, with and without cluster-based fitness, is shown in Fig. \ref{Fig8_TargetXXII_clustering}.
The final structures produced from the cluster-based fitness run, including the initial pool, formed 15 clusters, using Affinity Propagation with the lattice parameter based descriptor and a Euclidean metric. The structures from the run which used energy-based fitness were assigned to one of the 15 clusters from the cluster-based run. Panel (a) depicts the population of the initial pool, while panels (b) and (c) depict the independent evolution of the initial population for the energy and cluster-based fitness runs, respectively. The initial pool contained several low-energy structures with planar or near-planar conformations, which tend to have shorter $a$ parameters than structures with bent conformations, such as the experimental structure. Panel (b) reveals initial pool bias and genetic drift in the run that used energy-based fitness. Initial pool bias is evident from the fact that the GA hardly explores regions not represented in the initial pool. Genetic drift is apparent from the preferential exploration of the clusters labeled 1, 4, and 6, compared to other clusters represented in the initial pool.
\begin{figure*}[h!]
\includegraphics{Fig9_TargetXXII_Reranking.pdf}
\caption{(a) The relative total energies as obtained by different dispersion-inclusive DFT methods and (b) the PBE0+MBD energy versus density of putative crystal structures of Target XXII. The top 8 predicted structures, as ranked by PBE0+MBD, are shown in color. The $\vec{a}$, $\vec{b}$, and $\vec{c}$ crystallographic lattice vectors are displayed in red, green, and blue, respectively.}
\label{Fig9_TargetXXII_Reranking}
\end{figure*}
These clusters contain layered structures with planar or near-planar conformations, examples of which are shown in panels (a) and (b). Such structures likely correspond to large, shallow basins of the energy landscape that are frequently visited. In addition, these structural motifs are systematically favored by PBE+TS, as discussed in detail in Ref. \citenum{2016Curtis} and below. Cluster 0, which contains structures with a bent conformation, including the experimental structure, is sampled less frequently, possibly because such structures correspond to narrow wells in the potential energy surface that are more difficult to locate. Panel (c) demonstrates that evolutionary niching helps overcome initial pool biases and genetic drift. In this case, a more uniform sampling of the potential energy landscape is achieved. Clusters 1, 4, and 6 have fewer members than in the energy-based run, while cluster 0 has more members. Evidently, for Target XXII, utilizing cluster-based fitness with the lattice parameter descriptor suppressed the over-selection of crystal structures with planar or near-planar conformations. This descriptor was effective for Target XXII because in this case the unit cell shape is correlated with the molecular conformation. Furthermore, several clusters outside the boundaries of the initial pool were only explored with the cluster-based fitness function. These clusters include, for example, structures with more elongated unit cell shapes (a representative structure is shown for cluster 2). This demonstrates that evolutionary niching can correct initial pool biases and explore novel regions of the potential energy surface (this may be particularly useful if the initial pool is not as optimal as the pools produced by Genarris). However, it does so at the price of an increased computational cost, and in this case generates more high-energy structures that may or may not be useful for the purpose of maintaining diversity.
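The idea behind the cluster-based fitness used for evolutionary niching, weighting structures inversely to the population of their cluster so that under-sampled niches are selected more often, can be sketched as below. Cluster labels are assumed to come from an external clustering step (Affinity Propagation over the lattice parameter descriptor in GAtor); the normalization shown is our illustrative choice, not GAtor's exact formula.

```python
from collections import Counter

def cluster_fitness(labels):
    """Given the cluster label of each structure in the population,
    return normalized selection probabilities that are inversely
    proportional to the size of each structure's cluster, promoting
    under-represented regions of the landscape."""
    counts = Counter(labels)
    raw = [1.0 / counts[lab] for lab in labels]
    total = sum(raw)
    return [f / total for f in raw]
```

For example, a lone structure in its own cluster receives as much total selection probability as an entire cluster of many similar structures, which suppresses the over-selection of crowded motifs.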
All structures generated were combined into a final set of 200 unique structures evaluated with PBE+TS and \textit{lower-level} numerical settings. The structures were re-relaxed using PBE+TS with \textit{higher-level} numerical settings and subsequently re-checked for duplicates. The final 100 PBE+TS structures were then re-ranked with PBE+MBD and PBE0+MBD, as shown in panel (a) of Fig. \ref{Fig9_TargetXXII_Reranking}. The re-ranking of Target XXII structures generated within the sixth CSP blind test has been discussed extensively in Ref. \citenum{2016Curtis}. It has been demonstrated therein that different exchange-correlation functionals and dispersion methods systematically favor specific packing motifs. The experimental structure was ranked as the top structure only by PBE0+MBD. The same trends are observed here. Within the present study, the top 100 $Z$=4 structures are located within relative energy windows of 6.7, 7.5, and 9.6 kJ/mol per molecule using PBE+TS, PBE+MBD, and PBE0+MBD, respectively. The number of structures generated in these intervals shows significant improvement compared to our submission to the sixth blind test. In particular, an important low-energy structure (ranked as \#3 by PBE0+MBD) was located in the present study in addition to the experimental structure. These improvements may be attributed to a number of factors including updated crossover, mutation, and similarity checks, as well as the use of a more diverse and comprehensive initial pool as generated by Genarris \cite{2017Li}. Panel (b) of Fig. \ref{Fig9_TargetXXII_Reranking} shows the PBE0+MBD energy versus density for the structures. Structures with bent molecular conformations, including the experimental structure, have lower densities than structures with planar or near-planar conformations.
\subsection{Target II}
Target II (C$_5$H$_3$NOS) was selected from the second blind test \cite{2002Motherwell,1999Blake}. At the time, no participating groups used \textit{ab initio} methods for the structure prediction of this molecule, and only one group submitted the correct experimental structure, ranking it as their second most thermodynamically stable structure.
Fig. \ref{Fig10_TargetII_GA_runs} shows an analysis of the different GA runs that successfully generated the experimental crystal structure of Target II. The initial pool contained 45 structures. Each run was stopped when the number of additions to the common pool reached 350.
\begin{figure}[h!]
\includegraphics{Fig10_TargetII_GA_runs.pdf}
\caption{(a) The average energy of the top 20 Target II structures as a function of GA iteration and (b) the global minimum structure generated as a function of GA iteration, shown for different GA runs. S, N, O, C, and H atoms are colored in yellow, blue, red, grey, and white, respectively. The $\vec{a}$, $\vec{b}$, and $\vec{c}$ crystallographic lattice vectors are displayed in red, green, and blue, respectively.}
\label{Fig10_TargetII_GA_runs}
\end{figure}
\begin{figure*}[h!]
\makebox[\textwidth][c]
{\includegraphics{Fig11_TargetII_Reranking.pdf}}
\caption{(a) The relative total energies as obtained by different dispersion-inclusive DFT methods and (b) the PBE0+MBD energy versus density of putative crystal structures of Target II. The top 10 predicted structures, as ranked by PBE0+MBD, are shown in color. The $\vec{a}$, $\vec{b}$, and $\vec{c}$ crystallographic lattice vectors are displayed in red, green, and blue, respectively.}
\label{Fig11_TargetII_Reranking}
\end{figure*}
The average energy of the top 20 structures as a function of GA iteration is shown in panel (a) of Fig. \ref{Fig10_TargetII_GA_runs}. All energies shown are relative to the energy of the global minimum structure as ranked by PBE+TS with the \textit{lower-level} numerical settings used within the GA. All runs converged the top 20 structures to within 4 kJ/mol per molecule by the time the GA was terminated. The run that used 100\% pure mutation (denoted by 100\% M) with tournament selection, shown in purple, consistently exhibited the lowest average energy of the top 20 structures. In panel (b), the minimum energy structure added by the GA as a function of GA iteration is shown along with the lowest-energy structure from the initial population and the experimental structure. All runs generated the experimental structure, as well as at least one other structure lower in energy. Two runs, shown in orange and yellow, generated the most structures lower in energy than the experimental structure. Strain mutations were particularly effective at generating new low-energy structures for this target.
All structures produced by the different GA runs were combined into a final set of 200 non-duplicate structures as evaluated with PBE+TS and \textit{lower-level} numerical settings. The structures were re-relaxed with PBE+TS and \textit{higher-level} numerical settings and subsequently re-checked for duplicates. The final 100 PBE+TS structures were then re-ranked with PBE+MBD and PBE0+MBD, as shown in panel (a) of Fig. \ref{Fig11_TargetII_Reranking}. The top 10 structures as ranked by PBE0+MBD are highlighted in color. The top 100 structures are found in relative energy windows of 5.3, 5.5, and 6.1 kJ/mol per molecule using PBE+TS, PBE+MBD, and PBE0+MBD, respectively. Interestingly, the experimental structure becomes less stable with increasingly accurate DFT methods and is ranked as \#10 with PBE0+MBD. Structures ranked as \#4-\#10 with PBE0+MBD display layered packing motifs in several different space groups, within an energy window of approximately 0.6 kJ/mol per molecule. The layered motif of Target II is characterized by hydrogen bonds that form 1D chains between the hydroxyl group of one molecule and the nitrile group of another (O$-$H$\cdot\cdot\cdot$N), which are stacked on top of one another as shown in Fig. S1 in the supporting information. The prediction of nearly energetically degenerate crystal structures consisting of the same sheet stacked in different ways is a common phenomenon\cite{2013Price,2012Braun,2016Braun}. While the structures ranked \#4-\#10 are determined as distinct lattice energy minima, they likely converge to a lower number of minima on the free energy surface\cite{2013Price, 2017Whittleton}.
The structure ranked as \#3 by PBE0+MBD (shown in yellow) was not reported by any participating group during the second blind test and has the highest computed density of the low-energy structures, as shown in panel (b) of Fig. \ref{Fig11_TargetII_Reranking}. This structure contains the same 1D hydrogen-bonded patterns as the experimental structure, but with zig-zag stacking. Ref. \citenum{2011Chan} later performed an additional CSP study on Target II using a tailor-made force field\cite{2008Neumann} within the GRACE software. This methodology has been highly successful at CSP and predicted all five targets in the most recent blind test\cite{2016Reilly}. Searching structures with $Z^\prime$=1, this study predicted the \#3 PBE0+MBD zig-zag structure for the first time, ranking it as the global minimum structure when re-ranked using DFT with a pairwise dispersion correction\cite{2011Grimme}. Furthermore, it was shown that this form became more stable with increasing pressure, suggesting it could be an unobserved high-pressure polymorph of Target II. Our \#2 $P2_1$ PBE0+MBD structure with a scaffold packing motif was also discussed in Ref. \citenum{2011Chan} and ranked as \#3. Ref. \citenum{2017Whittleton} computed the relative stability of the $P2_1$ scaffold structure and the experimental structure using the B86bPBE density functional\cite{1986Becke,1996Perdew} combined with the exchange-hole dipole moment (XDM)\cite{2007Becke,2012Otero_2} dispersion model and found the $P2_1$ scaffold structure to be more stable. When a quasi-harmonic thermal correction was further included, the experimental structure was ranked as the more stable structure.
The $P\bar{1}$, $Z^\prime$=2 structure with a scaffold packing motif ranked as the global minimum by all three DFT methods has not been reported in any previous CSP studies of Target II. It has a higher computed density than the experimental form, as shown in panel (b) of Fig. \ref{Fig11_TargetII_Reranking}. A discussion comparing the packing motifs of the experimental structure and the \#1 PBE0+MBD scaffold structure is provided in the supporting information. The \#1 PBE0+MBD scaffold structure would not have been found without GAtor's ability to generate crystal structures with $Z^\prime>1$ through the various crossover and mutation operators. As emphasized in Ref. \citenum{2015Steed}, stable crystal structures are formed when intermolecular interactions are optimized through close packing. While these requirements favor highly symmetric structures, symmetry can be sacrificed in favor of forming particularly stabilizing intermolecular interactions\cite{2015Steed,2010Vande,2016Taylor}. Future investigations incorporating finite temperature and pressure effects will add further insight into the relative stability of the \#1 PBE0+MBD scaffold structure and the other predicted low-energy structures, including the experimental.
\subsection{Target XIII}
Target XIII (C$_6$H$_2$Br$_2$ClF) was selected from the fourth blind test \cite{2009Day}, in which it was categorized as a rigid molecule containing challenging elements for modeling methods. Target XIII contains three different halogens, allowing for a variety of halogen bonds. Many common electronic structure theory methods do not accurately capture halogen bonds because they require a precise treatment of both electrostatic and dispersion interactions\cite{2008Riley,2016Cavallo,2013Kozuch,2012Rezac,2014Otero-de-la-Roza}. During the fourth blind test, the correct experimental structure was successfully predicted and ranked as \#1 by 4/14 groups. The methodology used in one of the successful submissions is further detailed in Ref. \citenum{2008Misquitta}.
\begin{figure}[h]
\includegraphics{Fig12_TargetXIII_GA_runs.pdf}
\caption{(a) The average energy of the top 40 Target XIII structures and (b) the global minimum structure produced as a function of GA iteration for different GA runs. C, H, Br, Cl, and F atoms are colored in grey, white, brown, green, and yellow, respectively. The $\vec{a}$, $\vec{b}$, and $\vec{c}$ crystallographic lattice vectors are displayed in red, green, and blue, respectively.}
\label{Fig12_TargetXIII_GA_runs}
\end{figure}
\begin{figure*}[h!]
\makebox[\linewidth][c]{\includegraphics{Fig13_TargetXIII_Reranking.pdf}}
\caption{(a) The relative total energies as obtained by different dispersion-inclusive DFT methods and (b) the PBE0+MBD energy versus density of putative crystal structures of Target XIII. The top 8 predicted structures, as ranked by PBE0+MBD, are shown in color. The $\vec{a}$, $\vec{b}$, and $\vec{c}$ crystallographic lattice vectors are displayed in red, green, and blue, respectively.}
\label{Fig13_TargetXIII_Reranking}
\end{figure*}
Indeed, predicting the correct crystal structure of Target XIII proved challenging. The various crossover, mutation, and selection settings used in different GA runs of Target XIII are shown in Fig. \ref{Fig12_TargetXIII_GA_runs}. The initial pool for all runs contained 48 structures. The various GA runs were stopped after 1400 iterations, the first 900 of which are shown. Of the six runs attempted, only one run (50\% SymC, R), colored in green, generated the experimental structure, although all runs found crystal structures lower in energy than the experimental using PBE+TS and \textit{lower-level} numerical settings. Panel (a) shows the average energy of the top 40 structures as a function of GA iteration, relative to the global minimum energy with PBE+TS and \textit{lower-level} settings. The run that used standard crossover (50\% SC, R), colored in red, consistently had the highest average energy, even higher than the run that used cluster-based fitness with Affinity Propagation and the lattice parameter based descriptor, shown in indigo. Panel (b) shows the minimum energy structure as a function of GA iteration. All runs converged the top structure to within 1 kJ/mol per molecule within 300 iterations, except for the run that used standard crossover. For this target, symmetric crossover was essential for producing low-energy structures.
All structures generated were combined into a final set of 200 unique structures evaluated with PBE+TS and \textit{lower-level} numerical settings. The top 150 structures were re-relaxed with PBE+TS with \textit{higher-level} numerical settings and subsequently re-checked for duplicates. The final top 90 structures, as ranked by PBE+TS with \textit{higher-level} settings, were then re-ranked with PBE+MBD and PBE0+MBD. The top 90 structures are located within relative energy windows of 6.8, 7.8, and 6.5 kJ/mol per molecule when ranked by PBE+TS, PBE+MBD, and PBE0+MBD, respectively. Panel (a) of Fig. \ref{Fig13_TargetXIII_Reranking} shows
the ranking of the structures found within a window of 4.2 kJ/mol per molecule of the global minimum. The top 8 crystal structures as ranked by PBE0+MBD are highlighted in color. After local optimization with PBE+TS and \textit{higher-level} numerical settings, the experimental structure is ranked as \#1. It is consistently predicted as the most stable crystal structure by PBE+MBD and PBE0+MBD. Focusing on the top 8 crystal structures as ranked by PBE0+MBD, only the experimental structure contains a zig-zag packing motif. Additionally, 4/8 of the top structures have $Z^\prime$=2. Panel (b) of Fig. \ref{Fig13_TargetXIII_Reranking} shows the PBE0+MBD energy versus density of the top structures. This reveals that the experimental structure, with the zig-zag motif, has the highest density. For the experimental structure, close bromine-bromine contacts are found perpendicular to the zig-zag stacking direction, while 7/8 of the other top structures show $\pi$-stacking and/or close halogen bonds that stabilize the stacking of the layers.
Although many low-energy structures were generated, 5/6 of the GA runs did not successfully locate the experimental structure. This may be attributed to two primary factors. First, it is possible that the \textit{lower-level} numerical settings used to save computational time in the GA search were not sufficiently accurate for this halogenated molecule. When using PBE+TS and \textit{lower-level} numerical settings, the experimental structure was nearly 6 kJ/mol per molecule higher than the global minimum, and ranked as \#39 when all structures generated from the different GA runs were combined. When these structures were postprocessed with PBE+TS and \textit{higher-level} numerical settings, the experimental structure was ranked as \#1. As lower energy structures have a higher probability of being selected, this could have systematically biased the searches. This highlights the complications that may arise when using a hierarchical approach. Second, while most low-energy structures of Target XIII have a layered packing motif, the experimental structure has a unique zig-zag packing motif and an oblong unit cell. Such oblong unit cells were rarely generated in the search. In fact, even the run that used cluster-based fitness with the lattice parameter descriptor failed to locate the experimental structure. Although candidate child structures with lattices similar to the experimental structure were frequently generated in this run, they were subsequently rejected by the geometric and energetic constraints before local optimization. This suggests that the experimental structure is located in a narrow well in the potential energy surface, while layered structures exist in wider, more accessible basins. When studying halogen-bonded systems in the future, it may be beneficial to use cluster-based fitness with a descriptor based on halogen-halogen or hydrogen-halogen intermolecular contacts.
\subsection{Target I}
Target I (C$_6$H$_6$O) was selected from the second blind test \cite{2002Motherwell,1999Blake}. It has two reported polymorphs, a stable form, which crystallizes in $P2_1/c$ with $Z$=$4$, and a metastable form which crystallizes in $Pbca$ with $Z$=$8$. At the time of the second blind test, no participating groups submitted the more stable $Z$=$4$ form. 4/11 groups submitted the metastable $Z$=$8$ form, with 3/4 groups ranking it as the most stable structure.
\begin{figure}[h!]
\includegraphics{Fig14_TargetI_GA_runs.pdf}
\caption{The global minimum structure produced by the GA runs as a function of GA iteration for runs that used (a) $Z$=4 and (b) $Z$=8. C, H, and O atoms are colored in grey, white, and red, respectively. The structures shown are projected along the $\vec{a}$ lattice vector, and the $\vec{b}$ and $\vec{c}$ lattice vectors are highlighted in green and blue, respectively.}
\label{Fig14_TargetI_GA_runs}
\end{figure}
\begin{figure*}[h!]
\includegraphics{Fig15_TargetI_Reranking.pdf}
\caption{(a) The relative total energies as obtained by different dispersion-inclusive DFT methods and (b) the PBE0+MBD energy versus density of selected crystal structures of Target I. The top 6 predicted structures, as ranked by PBE+MBD, are highlighted in color. Intermolecular contacts less than the sum of vdW radii are shown in cyan. The structures shown are projected along the $\vec{a}$ lattice vector, and the $\vec{b}$ and $\vec{c}$ lattice vectors are highlighted in green and blue, respectively.}
\label{Fig15_TargetI_Reranking}
\end{figure*}
For Target I, independent GA searches were conducted starting from initial pools with $Z$=4 and $Z$=8. These contained 45 and 96 structures, respectively. The GA runs were stopped when the number of additions to the common pool reached 650 and 350, respectively. During evolution, the $Z$=4 runs also generated structures with $Z$=2, and the $Z$=8 runs generated structures with $Z$=4 and $Z$=2. The minimum energy as a function of GA iteration, relative to the global minimum using PBE+TS with \textit{lower-level} numerical settings, is shown in Fig. \ref{Fig14_TargetI_GA_runs}, panels (a) and (b), for the $Z$=4 and $Z$=8 runs, respectively. For the $Z$=4 runs, the convergence behavior of the minimum energy structure was similar for all settings tested, including the run that used lattice parameter based clustering, shown in orange. All runs located structures lower in energy than the $Z$=4 experimental polymorph at this level of theory. For the $Z$=8 runs, the runs that used 25\% and 50\% symmetric crossover with roulette wheel selection were slower to converge, and did not locate the $Z$=8 polymorph when the GA was stopped.
All structures produced by the $Z$=$4$ and $Z$=$8$ GA runs were combined into a final set of 200 unique structures, as evaluated with PBE+TS and \textit{lower-level} numerical settings. Supercells were allowed in the pymatgen duplicate check. The final top 100 structures were re-relaxed using PBE+TS with higher-level settings and subsequently re-ranked using PBE+MBD. The structures located within 2 kJ/mol per molecule of the global minimum are shown in panel (a) of Fig. \ref{Fig15_TargetI_Reranking}. The top 6 structures as ranked by PBE+MBD were also re-ranked using PBE0+MBD and are highlighted in color. Of these top 6 structures, 4/6 display similar packing motifs to the metastable $Pbca$ polymorph, shown in green with co-facial dimers oriented in opposite directions, stacked in slightly different ways. To highlight structural differences, intermolecular close-contacts are displayed in cyan.
The metastable $Z$=$8$ $Pbca$ polymorph, shown in green, is ranked as \#1 with PBE+TS, \#4 with PBE+MBD, and \#3 when re-ranked with PBE0+MBD. With all energy methods, this polymorph is determined to be practically energetically degenerate with the putative $Z$=8 $P2_1/c$ structure, shown in yellow. However, the $Z$=8 $P2_1/c$ structure has $Z^\prime$=2 and a slightly different lattice from the metastable polymorph, and hence was determined to be a unique lattice energy minimum. The experimental $P2_1/c$ polymorph with $Z$=4, highlighted in red, is ranked as \#11 with PBE+TS, but \#1 with PBE+MBD and PBE0+MBD. There is no significant re-ranking between PBE+MBD and PBE0+MBD for the structures considered. The relative energy differences between these structures increased when re-ranked by PBE0+MBD, as compared to PBE+MBD. Panel (b) of Fig. \ref{Fig15_TargetI_Reranking} shows the PBE0+MBD energy versus density of the highlighted structures. The six structures have very similar densities, but the most stable $P2_1/c$ experimental structure has the lowest density.
Several computational studies conducted after the second blind test\cite{2001Mooij,2004Day,2009Asmadi} consistently ranked the $Z$=8 $Pbca$ polymorph as the most stable form. However, attempts at its recrystallization have only led to the stable $Z$=4 $P2_1/c$ form. Ref. \citenum{2015Nyman} suggests that the $Z$=8 $Pbca$ structure is located on a saddle point of the potential energy surface and that symmetry breaking produces a stable $Z^\prime$=2 structure. This could be the $Z^\prime$=2 $P2_1/c$ structure, colored in yellow and ranked as \#4 with PBE0+MBD, as discussed above. It should be noted, however, that the nature of the potential energy landscape, including whether certain structures are determined as minima or saddle points, may depend strongly on the energy method used \cite{2014Wales, 2016Carr}. In the present study, PBE+MBD and PBE0+MBD rank the experimental $Z$=4 $P2_1/c$ structure as the most stable polymorph. This highlights the importance of accounting for many-body dispersion interactions and long-range screening effects in the MBD method. Ref. \citenum{2017Whittleton} also computed the $P2_1/c$ experimental structure as more stable than the $Pbca$ form using B86bPBE-XDM.
\section{Conclusion and Best Practices}
We have introduced GAtor, a first principles genetic algorithm for molecular crystal structure prediction. GAtor currently interfaces with FHI-aims and is optimized for HPC environments. The code offers a variety of features that enable the user to customize the GA search settings, including energy-based and cluster-based fitness (evolutionary niching), roulette wheel and tournament selection, symmetric and standard crossover, different mutation schemes, and various tunable parameters related to energy cutoffs, similarity checks, and geometric constraints. GAtor's crossover and mutation operators, specifically tailored for molecular crystals, provide a balance between exploration and exploitation. These operators enable the generation and exploration of high $Z^\prime$ structures.
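The cluster-based fitness (evolutionary niching) idea can be illustrated with a small, self-contained Python sketch. This is not GAtor's actual implementation; the dictionary-based structure representation, the lattice-length descriptor, and the binning tolerance are all hypothetical choices made for illustration:

```python
import random

def lattice_descriptor(structure, tol=1.0):
    """Coarse niche descriptor: lattice lengths binned to `tol` Angstrom."""
    return tuple(round(x / tol) for x in structure["lattice"])

def shared_fitness(population, tol=1.0):
    """Energy-based fitness divided by the population of each niche, so that
    over-represented packing motifs are penalized (counteracting genetic drift)."""
    energies = [s["energy"] for s in population]
    e_min, e_max = min(energies), max(energies)
    span = (e_max - e_min) or 1.0
    raw = [(e_max - e) / span for e in energies]   # lower energy -> higher fitness
    counts = {}
    for s in population:
        d = lattice_descriptor(s, tol)
        counts[d] = counts.get(d, 0) + 1
    return [f / counts[lattice_descriptor(s, tol)]
            for f, s in zip(raw, population)]

def roulette_select(population, fitness, rng):
    """Roulette-wheel selection on the (shared) fitness values."""
    r = rng.uniform(0.0, sum(fitness))
    acc = 0.0
    for s, f in zip(population, fitness):
        acc += f
        if acc >= r:
            return s
    return population[-1]
```

Dividing the raw fitness by the niche population lowers the selection probability of over-sampled motifs, which is the mechanism by which niching counteracts genetic drift.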
GAtor was applied to predict the structures of a chemically diverse set of four past blind test targets. The known structures of all four targets were successfully predicted, as well as several additional low-energy structures. Different GA settings were found to be more effective for different targets. Target XXII contains only C, N, and S atoms and has a small energy barrier between its two enantiomers, related by a bending degree of freedom. For this target, symmetric crossover and tournament selection were particularly effective. Evolutionary niching with respect to a descriptor based on lattice parameters uniformly explored the potential energy surface, including regions outside the initial pool, and suppressed the oversampling of structures with a planar molecular conformation (genetic drift). Target II forms various hydrogen-bonds. Its known experimental structure was located with a variety of GA settings, including runs that purely used mutations. For this molecule, standard crossover was more effective than symmetric crossover. Target XIII contains several halogens (Br, Cl, F), which make it challenging due to the presence of halogen bonds. In addition, the experimental structure comprises a zig-zag packing motif unlike the layered packing motifs found in most of the low-energy structures in the population. This may explain why the experimental structure was generated only once. For Target XIII, symmetric crossover was critical for the production of low-energy structures. Target I forms mainly weak C$\cdot\cdot\cdot$H and C$-$H$\cdot\cdot\cdot$O interactions. It has two known polymorphs with $Z$=4 and $Z$=8, the latter of which is a less stable ``disappearing polymorph". All GA settings tested were found to be equally effective in generating important low-energy $Z$=4 structures. For the $Z$=8 structure, the combination of 25\% or 50\% symmetric crossover with roulette wheel selection was less effective.
Low-energy structures found in different GA runs were grouped together, re-relaxed, and re-ranked with increasingly accurate dispersion-inclusive DFT methods: PBE+TS, PBE+MBD, and PBE0+MBD. For Target XIII, all three methods ranked the experimental structure as \#1. For Target I, PBE+MBD and PBE0+MBD correctly ranked the $Z$=4 polymorph as \#1 and the $Z$=8 polymorph as less stable, at \#4 and \#3, respectively, and very close in energy to a structure with $Z^\prime$=2 and a similar packing motif. The MBD method was instrumental in obtaining the correct ordering of the two known polymorphs of Target I based solely on lattice energy without considering vibrational and thermal contributions. For Target XXII, only PBE0+MBD ranked the experimental structure as \#1. Target II is an exception because the relative energy of its experimental structure increases, rather than decreases, with increasing accuracy. It is ranked as \#10 with PBE0+MBD. The structures ranked \#4-\#9 exhibit a variety of layered packing motifs, similar to the experimental structure. The structure consistently ranked as \#1 with all three methods was predicted for the first time using GAtor. It is a $Z^\prime$=2 structure with $P\bar{1}$ symmetry and a scaffold packing motif, whose lattice energy is 1.8 kJ/mol per molecule lower than the known experimental form. The \#2 structure, which also has a scaffold packing motif, and the \#3 structure with a zig-zag packing motif have been previously reported by others. Several of the low-lying putative structures of Target II have higher densities than the observed structure; therefore, it may be possible to crystallize them under high pressure conditions. This may motivate further experimental investigations of Target II.
Further computational studies considering finite temperature and pressure effects may provide additional insight into the relative stability of the putative low-energy structures identified here and the possibility of growing them experimentally.
Several best practices for the usage of GAtor have emerged from the results reported here. First, because the GA exhaustively explores regions of the configuration space represented in the initial pool (unless evolutionary niching is used), it is recommended to start GAtor from a carefully crafted initial pool, containing a diverse set of structures in all space groups appropriate for the molecule. Such an initial pool may be generated by Genarris \cite{2017Li} or by other means. Second, rather than running GAtor with predetermined settings for a large number of iterations, we recommend running GAtor with several different settings for a smaller number of iterations, and then combining the structures found in all searches for post-processing. As each system is unique and it is difficult to know \textit{a priori} which settings will be the most effective, running the GA with different settings increases the likelihood of success. Third, it is recommended to use evolutionary niching in at least one of the runs. Overall, the goal is to locate all the low-lying minima including those found in disconnected, hard to reach regions of the potential energy surface. For this reason, cluster-based fitness may be a useful tool for uniformly sampling the potential energy landscape and for overcoming initial pool biases and selection biases (genetic drift). In the future, we plan to implement increasingly sophisticated capabilities in GAtor to treat more complex systems. We expect GAtor to be a useful tool for the computational chemistry, materials science, and condensed matter communities.
\begin{acknowledgement}
Work at CMU was funded by the National Science Foundation (NSF) Division of Materials Research through grant DMR-1554428. An award of computer time was provided by the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.
\end{acknowledgement}
\begin{suppinfo}
The supporting information provides a comparison between the experimental structures predicted by GAtor and the published experimental forms, including RMS differences computed with the Mercury\cite{2008Macrae} software. The distribution of space groups for the different initial pools used is also included. For each target, CIF files and total energies of the structures used for re-ranking are provided.
\end{suppinfo}
\section{A Meta-Algorithm}
\label{sec:algo}
In this section, we propose a general upper-confidence bound (UCB) style strategy that utilizes the structure of the problem to converge to the best expert much faster than a naive UCB strategy that treats each expert as an arm of the bandit problem. One of the key observations in this framework is that rewards collected under one expert can give us valuable information about the mean under another expert, owing to the Bayesian Network factorization of the joint distribution of $X$, $V$, and $Y$. We propose two estimators for the mean rewards of different experts that leverage this information leakage through importance sampling. These estimators are defined in Section~\ref{sec:estimators}. We propose a meta-algorithm (Algorithm~\ref{alg:dUCB}) that is designed to use these estimators and the corresponding confidence intervals, to control regret.
\begin{algorithm}
\caption{D-UCB: Divergence based UCB for contextual bandits with stochastic experts}
\begin{algorithmic}[1]
\State For time step $t = 1$, observe context $x_1$ and choose a random expert $\pi \in \Pi$. Play an arm drawn from the conditional distribution $\pi(V \vert x_1)$.
\For {$t = 2,...,T$}
\State Observe context $x_t$
\State Let $k(t) = \argmax_{k} U_{k}(t-1) \triangleq \hat{\mu}_k(t-1) + s_k(t-1)$.
\State \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{Select an arm $v(t)$ from the distribution \\ $ \pi_{k(t)} (V \vert x_t)$.}
\State Observe the reward $Y(t)$.
\EndFor
\end{algorithmic}
\label{alg:dUCB}
\end{algorithm}
Here, $\hat{\mu}_k(t)$ denotes an estimate for the mean reward for expert $k$ at time $t$, while $s_k(t)$ denotes the upper confidence bound for the corresponding estimator at time $t$. We propose two estimators that utilize all the samples observed under various experts to provide an estimate for the mean reward under expert $k$.
The first estimator, denoted by $\hat{\mu}_k^{c}(t)$ (Section~\ref{sec:estimators}, Eq.~\eqref{eq:est1}), is a clipped importance sampling estimator inspired by~\cite{pmlr-v70-sen17a}. If this estimator is used, then $s_k(t)$ is set as in Eq.~\eqref{eq:ucb1}.
The second estimator, denoted by $\hat{\mu}_k^{m}(t)$ (Section~\ref{sec:estimators}, Eq.~\eqref{eq:est2}), is a median of means based importance sampling estimator. If this estimator is used, then $s_k(t)$ is set as in Eq.~\eqref{eq:ucb2}.
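As an illustration (not the analyzed algorithm), the selection loop of Algorithm~\ref{alg:dUCB} can be sketched in Python. Here a plain, unclipped importance-sampling mean over the pooled history stands in for the two estimators of Section~\ref{sec:estimators}, and a generic $\sqrt{2\log t/t}$ width stands in for the divergence-based confidence terms $s_k(t)$:

```python
import math
import random

def d_ucb(experts, env, T, rng):
    """Skeleton of the D-UCB selection loop.

    experts[k][x] is the distribution pi_k(.|x) over arms; env(rng) yields a
    (context, reward_fn) pair. As a stand-in for the clipped/median-of-means
    estimators, mu_k is estimated by plain importance sampling over the pooled
    history, and the confidence width is a generic sqrt(2 log t / t) term."""
    N = len(experts)
    history = []                 # (x, v, y, j): context, arm, reward, expert used
    pulls = [0] * N
    choices = []

    def is_mean(k):
        # Pools ALL samples via the importance-sampling identity.
        if not history:
            return 0.0
        w = [y * experts[k][x][v] / experts[j][x][v] for (x, v, y, j) in history]
        return sum(w) / len(w)

    for t in range(1, T + 1):
        x, reward_fn = env(rng)
        if t == 1:
            k = rng.randrange(N)             # first round: random expert
        else:
            k = max(range(N),
                    key=lambda i: is_mean(i) + math.sqrt(2.0 * math.log(t) / t))
        p = experts[k][x]
        v = rng.choices(range(len(p)), weights=p)[0]   # draw arm ~ pi_k(.|x)
        y = reward_fn(v)
        history.append((x, v, y, k))
        pulls[k] += 1
        choices.append(k)
    return choices, pulls
```

Because the pooled history informs every expert's estimate, the loop can identify the best expert even from rounds played under other experts.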
\subsection*{Conclusion}
We study the problem of contextual bandits with stochastic experts. We propose two UCB-style algorithms that use two different importance sampling estimators to leverage \textit{information leakage} between the stochastic experts. We provide instance-dependent regret guarantees for our UCB based algorithms. Our algorithms show strong empirical performance on real-world datasets. We believe that this paper introduces an interesting problem setting for studying contextual bandits, and opens up opportunities for future research that may include better regret bounds for the problem and an instance-dependent lower-bound.
\subsubsection*{Acknowledgment}
This work is partially supported by NSF SaTC 1704778, ARO W911NF-17-1-0359, and the US DoT supported D-STOP Tier 1 University Transportation Center.
\FloatBarrier
\section{Problem Setting and Definitions}
\label{sec:defs}
We consider a general contextual bandit setting where the task is to compete against a large class of stochastic experts. In the stochastic setting, the general contextual bandit problem with $K$ arms is defined as a sequential process for $T$ discrete time-steps~\cite{langford2008epoch}, where $T$ is the time-horizon of interest. At each time $t \in \{ 1,2,\cdots,T\}$ nature draws a vector $(x_t, r_1(t),...,r_K(t))$ from an unknown but fixed probability distribution. Here, $r_{i}(t) \in [0,1]$ is the reward of arm $i$. The context vector $x_t \in \mathcal{X}$ is revealed to the policy-designer, whose task is then to choose an arm out of the $K$ possibilities. Only the reward $r_{v(t)}(t)$ of the chosen arm $v(t)$ is then revealed to the policy-designer. We will use $r_{v(t)}$ in place of $r_{v(t)}(t)$ for notational convenience.
\begin{figure}[h]
\centering
\includegraphics[width=0.75\linewidth]{illustrate}
\caption{\small Bayesian Network denoting the joint distribution of the random variables at a given time-step, under our contextual bandit setting. $X$ denotes the context, $V$ denotes the chosen arm, while $Y$ denotes the reward from the chosen arm that also depends on the context observed. The distribution of the reward given the chosen arm and the context, and the marginal of the context remain fixed over all time slots. However, the conditional distribution of the chosen arm given the context is dependent on the stochastic expert at that time-step. }
\label{fig:illustrate}
\end{figure}
{\bf Stochastic Experts: }
We consider a class of stochastic experts $\Pi = \{\pi_1,\cdots, \pi_N \}$, where each $\pi_i$ is a conditional probability distribution $\pi_{V \vert X}(v \vert x)$, where $V \in [K]$ is the random variable denoting the arm chosen and $X$ is the context. We will use the shorthand $\pi_i(V \vert X)$ to denote the conditional distribution corresponding to expert $i$, for notational convenience. The observation model at each time step $t$ is as follows: {\it (i)} A context $x_t$ is observed. {\it (ii)} The policy-designer chooses a stochastic expert $\pi_{k(t)} \in \Pi$. {\it (iii)} An arm $v(t)$ is drawn from the probability distribution $\pi_{k(t)}(V \vert x_t)$ by the policy-designer. {\it (iv)} The stochastic reward $y_t = r_{v(t)}$ is revealed.
The joint distribution of the random variables $X,V,Y$ denoting the context, arm chosen and reward observed respectively at time $t$, can be modeled by the Bayesian Network shown in Fig.~\ref{fig:illustrate}. The joint distribution factorizes as follows,
\begin{align}
\label{eq:joint}
p(x,v,y) = p(y \vert v,x ) p(v \vert x)p(x)
\end{align}
where $p(y \vert v,x )$ (the reward distribution given the arm and the context) and $p(x)$ (the marginal distribution of the context) are determined by nature's distribution and are fixed for all time-steps $t = 1,2,...,T$. On the other hand, $p(v \vert x)$ (the distribution of the arm chosen given the context) depends on the expert selected at each round. At time $t$, $p(v \vert x) = \pi_{k(t)}(v \vert x)$, the conditional distribution encoded by the stochastic expert chosen at time $t$. We are now in a position to define the objective of the problem.
{\bf Regret: } The objective in our contextual bandit problem is to perform as well as the best expert in the class of experts. We will define $p_k(x,v,y) \triangleq p(y \vert v,x ) \pi_k(v \vert x)p(x)$ as the distribution of the corresponding random variables when the expert chosen is $\pi_k \in \Pi$. The expected reward of expert $k$ is now denoted by,
$\mu_k = \mathbb{E}_{p_k(x,v,y)}[Y],$ where $\mathbb{E}_{p(\cdot)}$ denotes expectation with respect to the distribution $p(\cdot)$. The best expert is given by $k^* = \argmax_{k \in [N]} \mu_k$. The objective is to minimize the \textit{regret} till time $T$, which is defined as $R(T) = \sum_{t = 1}^{T} \left(\mu^* - \mu_{k(t)} \right)$, where $\mu^* = \mu_{k^*}$. Note that this is analogous to the regret definition for the deterministic \textit{expert} setting~\cite{langford2008epoch}. Let us define $\Delta_{k} \triangleq \mu^* - \mu_k$ as the optimality gap in terms of expected reward, for expert $k$. Let $\pmb{\mu} \triangleq \{\mu_1,...,\mu_N \}$. Now we will define some divergence metrics that will be important in describing our algorithms and theoretical guarantees.
\subsection{Divergence Metrics}
\label{sec:divergence}
In this section, we will define some general divergence metrics which will be important in our analysis. Before we proceed, we will define a general class of information theoretic quantities, $f$-divergences.
\begin{definition}
Let $f(\cdot)$ be a non-negative convex function such that $f(1) = 0$.
For two joint distributions $p_{X,Y}(x,y)$ and $q_{X,Y}(x,y)$ (and the
associated conditionals), the conditional $f$-divergence $D_{f}(p_{X
\vert Y} \Vert q_{X \vert Y})$ is given by:
$D_{f}(p_{X \vert Y} \Vert q_{X \vert Y}) = \mathbb{E}_{q_{X,Y}} \left[f \left( \frac{p_{X \vert Y}(X \vert Y)}{q_{X \vert Y}(X \vert Y)} \right)\right].$
\end{definition}
Recall that $\pi_i$ is a conditional distribution of $V$ given $X$. Thus, $D_f(\pi_i \Vert
\pi_j)$ is the conditional $f$-divergence between the conditional distributions $\pi_i$ and $\pi_j.$ Note that in this definition the marginal distribution of $X$ is the marginal of $X$ given by nature's inherent distribution over the contexts. In this work we will be concerned with two specific $f$-divergence metrics that are defined as follows.
\begin{definition}
\label{def:mij}
($M_{ij}$ measure)~\cite{pmlr-v70-sen17a} Consider the function $f_1(x) = x \exp (x-1) - 1$. We define the following log-divergence measure: $M_{ij} = 1 + \log (1 + D_{f_1} (\pi_i \lVert \pi_j)),$ $\forall i,j \in [N].$
\end{definition}
The $M_{ij}$-measures will be crucial in analyzing one of our estimators (clipped estimator) defined in Section~\ref{sec:estimators}.
\begin{definition}
\label{def:sij}
($\sigma_{ij}$ measure) $D_{f_2} (\pi_i \Vert \pi_j)$ is known as the chi-square divergence between the respective conditional distributions, where $f_2(x) = x^2 - 1$. Let $\sigma^2_{ij} = 1 + D_{f_2} (\pi_i \Vert \pi_j)$.
\end{definition}
The $\sigma_{ij}$-measures are important in analyzing our second estimator (median of means) defined in Section~\ref{sec:estimators}.
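For finite context and arm sets, both measures can be computed by direct summation. The sketch below assumes the experts and the context marginal are given as plain Python dictionaries of probabilities (an illustrative representation, not part of the paper):

```python
import math

def conditional_f_divergence(f, p, q, px):
    """Exact D_f(p_{V|X} || q_{V|X}) for finite contexts and arms, with the
    expectation taken under q(v|x) * px(x), as in the definition above."""
    total = 0.0
    for x, pxv in px.items():
        for v, qv in enumerate(q[x]):
            if qv > 0:
                total += pxv * qv * f(p[x][v] / qv)
    return total

def f1(r):
    return r * math.exp(r - 1.0) - 1.0   # generator of the M_ij measure

def f2(r):
    return r * r - 1.0                   # generator of the chi-square divergence

def M(p, q, px):
    return 1.0 + math.log(1.0 + conditional_f_divergence(f1, p, q, px))

def sigma2(p, q, px):
    return 1.0 + conditional_f_divergence(f2, p, q, px)
```

Identical conditionals give $D_f = 0$, so $M_{ii} = 1$ and $\sigma^2_{ii} = 1$, matching the definitions.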
\section{Estimators and Confidence Bounds}
\label{sec:estimators}
In this section we define two estimators for estimating the mean rewards under a given expert. Both these estimators can effectively leverage the information leakage between samples collected under various experts, through importance sampling. One key observation that enables us in doing so is the following equation,
\begin{align}
\label{eq:infoleakage}
\mu_k = \mathbb{E}_{p_j(x,v,y)} \left[ Y \frac{\pi_k(V \vert X)}{\pi_j(V \vert X)}\right].
\end{align}
This has been termed \textit{information leakage} and has been leveraged before in the literature~\cite{pmlr-v70-sen17a, lattimore2016causal,bottou2013counterfactual} in best-arm identification settings. Recall that the subscript $p_j(x,v,y)$ denotes that the expectation is taken under the joint distribution in~\eqref{eq:joint}, where $p(v \vert x) = \pi_j(v \vert x)$, i.e., under the distribution imposed by expert $\pi_j$. Even from samples drawn under this distribution, we can therefore estimate the mean reward of expert $\pi_k$. The above equation is the motivation behind our estimators. Now, we will introduce our first estimator.
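The identity in Eq.~\eqref{eq:infoleakage} can be checked numerically: draw samples under expert $\pi_j$ only and reweight each reward by $\pi_k/\pi_j$. A minimal Monte Carlo sketch (the dictionary representation of the experts and the toy reward function are illustrative assumptions):

```python
import random

def estimate_mu_via_leakage(pi_k, pi_j, reward, px, n, rng):
    """Estimate mu_k from samples drawn under expert j only, by reweighting
    each reward with the ratio pi_k(V|X)/pi_j(V|X)."""
    contexts = list(px)
    weights = [px[x] for x in contexts]
    acc = 0.0
    for _ in range(n):
        x = rng.choices(contexts, weights=weights)[0]              # X ~ p(x)
        v = rng.choices(range(len(pi_j[x])), weights=pi_j[x])[0]   # V ~ pi_j(.|x)
        y = reward(x, v, rng)                                      # Y ~ p(y|v,x)
        acc += y * pi_k[x][v] / pi_j[x][v]
    return acc / n
```

The reweighted average converges to $\mu_k$ even though no sample was ever drawn under expert $k$.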
{\bf Clipped Estimator: } This estimator was introduced in~\cite{pmlr-v70-sen17a} in the context of a pure exploration problem. Here, we analyze this estimator in a cumulative regret setting, where the parameters of the estimator need to be adjusted differently. Let $n_i(t)$ denote the number of times expert $i$ has been invoked by Algorithm~\ref{alg:dUCB} till time $t$, for all $i \in [N]$. We define the fraction $\nu_i(t) \triangleq n_i(t)/t$. We will also define $\mathcal{T}_i(t)$ as the subset of time-steps among $\{1,..,t\}$, in which the expert $i$ was selected. Let $\hat{\mu}^c_k(t)$ be the estimate of the mean reward of expert $k$ from samples collected till time $t$. The estimator is given by,
\begin{align*}
\label{eq:est1}
\hat{\mu}^c_{k}(t) =& \frac{1}{Z_{k}(t)}\sum_{j = 1}^{N} \sum_{s \in
\mathcal{T}_j(t)} \frac{1}{M_{kj}}Y_j(s)\frac{\pi_k(V_j(s) \vert
X_j(s))}{\pi_j(V_j(s) \vert X_j(s))}
\times \mathds{1}\left\{ \frac{\pi_k(V_j(s) \vert X_j(s))}{\pi_j(V_j(s) \vert X_j(s))} \leq 2\log(2/\epsilon(t))M_{kj}\right\}. \numberthis
\end{align*}
Here, $A_j(s)$ is the value of the random variable $A$ at time $s$ drawn using expert $j$, where $A$ can be any of the random variables $X$, $Y$ or $V$. We set $Z_{k}(t) = \sum_{j} n_j(t)/M_{kj}$. $\epsilon(t)$ is an adjustable term which controls the bias-variance trade-off for the estimator.
{\bf Intuition:} The clipped estimator is a weighted average of the samples collected under different experts, where each sample is scaled by the importance ratio as suggested by~\eqref{eq:infoleakage}. Importance ratios larger than a clipper level are clipped; this introduces bias but decreases variance, and the clipper level is carefully chosen to trade off the two. Both the clipper levels and the weights depend on the divergence terms $M_{kj}$. When the divergence $M_{kj}$ is large, samples from expert $j$ are of little value for estimating the mean of expert $k$; therefore, a weight of $1/M_{kj}$ is applied. Similarly, the clipper level is set at $2\log(2/\epsilon(t))M_{kj}$ to restrict the additive bias to $\epsilon(t)$.
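A minimal sketch of the estimator in Eq.~\eqref{eq:est1}, with hypothetical names: \texttt{pi[i]} is assumed to return the conditional probability $\pi_i(v \vert x)$, \texttt{M} holds the divergences $M_{kj}$, and \texttt{samples} records which expert produced each observation.

```python
import math

def clipped_estimate(samples, pi, M, k, eps):
    """Clipped importance-sampling estimate of expert k's mean reward.

    samples: list of (j, x, v, y) tuples -- expert j was played on
             context x, arm v was drawn, and reward y was observed.
    pi:      pi[i](v, x) -> probability that expert i picks arm v given x.
    M:       divergence matrix with M[k][j] = M_{kj}.
    eps:     bias parameter epsilon(t); the clipper level for a sample
             from expert j is 2 * log(2/eps) * M[k][j].
    """
    n = [0] * len(M)        # n_j(t): number of pulls of each expert
    total = 0.0
    for j, x, v, y in samples:
        n[j] += 1
        ratio = pi[k](v, x) / pi[j](v, x)       # importance ratio
        level = 2 * math.log(2 / eps) * M[k][j]  # clipper level
        if ratio <= level:                       # drop clipped samples
            total += y * ratio / M[k][j]
    Z = sum(nj / Mkj for nj, Mkj in zip(n, M[k]))  # Z_k(t)
    return total / Z
```

When all experts are identical and all divergences equal one, every ratio is $1$ and the estimate reduces to the plain sample mean, as expected.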
The upper confidence term in Algorithm~\ref{alg:dUCB} for the estimator $\hat{\mu}^c_{k}(t)$ is chosen as,
\begin{align}
\label{eq:ucb1}
s^c_{k}(t) = \frac{3}{2}\beta(t)
\end{align}
at time $t$, where $\beta(t)$ is such that $\frac{\beta(t)}{\log (2/\beta(t))} = \frac{\sqrt{c_1 t \log t}}{Z_k(t)}.$ We set $c_1 = 16$ in our analysis.
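The defining equation for $\beta(t)$ has no closed form, but the map $\beta \mapsto \beta/\log(2/\beta)$ is increasing on $(0,2)$, so $\beta(t)$ can be computed numerically by bisection. A sketch (the name \texttt{beta\_of} is hypothetical; \texttt{C} stands for the right-hand side $\sqrt{c_1 t \log t}/Z_k(t)$):

```python
import math

def beta_of(C, lo=1e-12, hi=2 - 1e-9, iters=200):
    """Solve beta / log(2/beta) = C for beta by bisection.

    On (0, 2) the map b -> b/log(2/b) increases from 0 to infinity,
    so for any C > 0 there is a unique root in this interval.
    """
    f = lambda b: b / math.log(2 / b) - C
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid    # root lies to the right of mid
        else:
            hi = mid    # root lies to the left of mid
    return (lo + hi) / 2
```

The confidence term at time $t$ would then be $s^c_k(t) = \tfrac{3}{2}\beta(t)$ with $C = \sqrt{16\, t \log t}/Z_k(t)$.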
The upper confidence bound is derived using Lemma~\ref{lem:conf1}.
\begin{lemma}
\label{lem:conf1} Consider the estimator in Eq.~\eqref{eq:est1}. We have the following confidence bound at time $t$,
\begin{align*}
&\PP \left(\mu_k -\delta - \epsilon(t)/2 \leq \hat{\mu}^c_k(t) \leq \mu_k + \delta \right) \geq 1 - 2\exp \left( -\frac{\delta^{2}t}{8 (\log (2/\epsilon(t)))^2 } \left(\frac{Z_k(t)}{t} \right)^2\right).
\end{align*}
\end{lemma}
The lemma is implied by Theorem 4 in~\cite{pmlr-v70-sen17a}. We include the proof in Section~\ref{sec:clipped} in the appendix. The lemma shows that the clipped estimator can pool samples from all experts, in order to estimate the mean under expert $k$. The variance of the estimator depends on $Z_k(t)$, which depends on the log-divergences and number of times each expert has been invoked.
{\bf Median of Means Estimator: } Now we introduce our second estimator, which is based on the well-known median of means estimation technique. Median of means estimators are popular for statistical estimation when the underlying distributions are heavy-tailed~\cite{bubeck2013bandits}. The estimator for the mean under the $k^{th}$ expert at time $t$ is obtained through the following steps: ($i$) We divide the total samples into $l(t) = \floor{ c_2 \log (1/\delta(t))}$ groups, such that the fraction of samples from each expert is preserved. We choose $c_2 = 8$ for our analysis. Let us index the groups as $r = 1,2,\ldots,l(t)$. This means that there are at least $\floor{n_i(t)/l(t)}$ samples from expert $i$ in each group. ($ii$) We calculate the empirical mean of expert $k$ from the samples in each group through importance sampling. ($iii$) The median of these means is our estimator.
Now we set up some notation. Let $\mathcal{T}_{j}^{(r)} \subseteq \{1,2,...,t \}$ be the indices of the samples from expert $j$ that lie in group $r$. Let $W_k(r,t) = \sum_{i}n_i(r,t)/\sigma_{ki}$, where $n_i(r,t)$ is the number of samples from expert $i$ in group $r$. Let $n(r,t) = \sum_i n_i(r,t)$. Then the mean of expert $k$ estimated from group $r$ is given by,
\begin{align*}
\label{eq:momm}
&\hat{\mu}_{k}^{(r)}(t) = \frac{1}{W_k(r,t)}\sum_{j = 1}^{N} \sum_{s \in
\mathcal{T}^{(r)}_j} \frac{1}{\sigma_{kj}}Y_j(s)\frac{\pi_k(V_j(s) \vert
X_j(s))}{\pi_j(V_j(s) \vert X_j(s))}. \numberthis
\end{align*}
The median of means estimator for expert $k$ is then given by,
\begin{align}
\label{eq:est2}
\hat{\mu}^m_{k}(t) \triangleq \mathrm{median} \left(\hat{\mu}_{k}^{(1)}(t), \cdots, \hat{\mu}_{k}^{(l(t))}(t) \right).
\end{align}
{\bf Intuition:} The mean of every group is a weighted average of samples from each expert, rescaled by the importance ratios. This is similar to the clipped estimator in Eq.~\eqref{eq:est1}; here, however, the importance ratios are not clipped at a particular level. Instead, the bias-variance trade-off is controlled by taking the median of the means of the $l(t)$ groups, and the number of groups $l(t)$ needs to be set carefully in order to control this trade-off.
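Steps $(i)$--$(iii)$ above can be sketched as follows (hypothetical names; \texttt{pi[i]} is assumed to return $\pi_i(v \vert x)$, \texttt{sigma} holds the divergences $\sigma_{kj}$, and a simple round-robin split stands in for the fraction-preserving grouping):

```python
import statistics

def mom_estimate(samples, pi, sigma, k, n_groups):
    """Median-of-means importance-sampling estimate for expert k.

    samples:  list of (j, x, v, y) tuples (expert j played, reward y).
    pi:       pi[i](v, x) -> probability expert i picks arm v given x.
    sigma:    divergence matrix with sigma[k][j] = sigma_{kj}.
    n_groups: l(t), the number of groups the samples are split into.
    """
    means = []
    for r in range(n_groups):
        group = samples[r::n_groups]   # round-robin split keeps the
                                       # per-expert fractions roughly equal
        # W_k(r,t) = sum_i n_i(r,t)/sigma_{ki}
        W = sum(1.0 / sigma[k][j] for j, _, _, _ in group)
        # group mean via importance sampling, as in Eq. (eq:momm)
        total = sum(y * pi[k](v, x) / pi[j](v, x) / sigma[k][j]
                    for j, x, v, y in group)
        means.append(total / W)
    return statistics.median(means)
```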
The upper confidence bound used in conjunction with this estimator at time $t$ is given by,
\begin{align}
\label{eq:ucb2}
s^m_k(t) = \frac{1}{W_k(t)}\sqrt{\frac{c_3 \log (1/\delta(t))}{t}}
\end{align}
where $W_k(t) = \min_{r \in [l(t)]} W_k(r,t)/n(r,t)$ and $\delta(t)$ is set as $1/t^2$ in our algorithm. This choice is inspired by the following lemma.
\begin{lemma}
\label{lem:mom}
Let $\delta(t) \in (0,1)$. Then the estimator in~\eqref{eq:est2} has the following confidence bound,
\begin{align}
\PP \left( \vert \hat{\mu}^m_{k}(t) - \mu_k \vert \leq \frac{1}{W_k(t)}\sqrt{\frac{c_3 \log (1/\delta(t))}{t}} \right) \geq 1 - \delta(t).
\end{align}
\end{lemma}
We provide the proof of this lemma in Section~\ref{sec:mom} in the appendix. The constant $c_3$ is $64$.
\section{Introduction}
\label{sec:intro}
Modern machine learning applications like recommendation engines~\cite{li2011scene,sen2016contextual,bouneffouf2012contextual,li2010contextual}, computational advertising~\cite{tang2013automatic,bottou2013counterfactual}, and A/B testing in medicine~\cite{tekin2015discover,tekin2016adaptive} are inherently online. In these settings, the task is to take sequential decisions that are not only profitable but also enable the system to learn better in the future. For instance, in a computational advertising system, the task is to sequentially place advertisements on users' webpages with the dual objective of learning the preferences of the users and increasing the click-through rate on the fly. A key attribute of these systems is the well-known trade-off between \textit{exploration} (searching the space of possible decisions for better learning) and \textit{exploitation} (taking decisions that are more profitable). A principled method to capture this trade-off is the study of multi-armed bandit problems~\cite{bubeck2012regret}.
Bandit problems have been studied for several decades. The classic $K$-armed bandit problem dates back to~\cite{lai1985asymptotically}. In the stochastic setting, one is faced with the choice of pulling one arm during each time-slot among $K$ arms, where the $k$-th arm has mean reward $\mu_k$. The task is to accumulate a total reward as close as possible to a genie strategy that has prior knowledge of arm statistics and always selects the optimal arm in each time-slot. The expected difference between the rewards collected by the genie strategy and the online strategy is defined as the regret. The expected regret till time $T$, of the state of the art algorithms~\cite{bubeck2012regret,auer2010ucb,agrawal2012analysis,auer2002using} scales as $O((K/\Delta) \log T )$ when there is a constant gap in mean reward of at least $\Delta$ between the best arm and the rest.
In the presence of side information, a popular model is that of contextual bandits. In the stochastic setting, it is assumed that at each time-step nature draws $(x,r_1,...,r_K)$ from a fixed but unknown distribution. Here, $x \in \mathcal{X}$ represents the context vector, while $r_1,...,r_K$ are the rewards of the $K$-arms~\cite{langford2008epoch}. The context $x$ is revealed to the policy-designer, after which she decides to choose an arm $a \in \{1,2,...,K \}$. Then, the reward $r_a$ is revealed to the policy-designer. In the computational advertising example, the context can be thought of as the browsing history, age, gender etc. of a user arriving in the system, while $r_1,...,r_K$ are generated according to the probability of the user clicking on each of the $K$ advertisements. The task here is to learn a \textit{good} mapping from the space of contexts $\mathcal{X}$ to the space of arms $[K] = \{1,2,...,K\}$ such that when the decisions are taken according to that mapping, the mean reward observed is high.
The above problem is usually modeled in the literature as stochastic contextual bandits with \textit{experts}~\cite{agarwal2014taming,dudik2011efficient,langford2008epoch}. The task is to compete against the best \textit{expert} in a class of experts $\Pi = \{\pi_1,...,\pi_N\}$, where each expert $\pi \in \Pi$ is a function mapping $\mathcal{X} \rightarrow [K]$. The mean reward of an expert $\pi$ is defined as $ {\mathbb{E}} \left[ r_{\pi(X)} \right]$, where $X$ is the random variable denoting the context and the expectation is taken over the unknown distribution over $(x,r_1,...,r_K)$. The best expert is naturally defined as the expert with the highest mean reward. The expected difference in rewards of a genie policy that always chooses the best expert and the online algorithm employed by the policy-designer is defined as the regret. This problem has been well-studied in the literature, where a popular approach is to reduce the contextual bandit problem to supervised learning techniques through $\argmin$-oracles~\cite{beygelzimer2009offset}. This leads to powerful algorithms with instance-independent regret bounds of $\mathcal{O} \left(\sqrt{KT\mathrm{polylog}(N)} \right)$ at time $T$~\cite{agarwal2014taming,dudik2011efficient}. In this setting, the class of experts $\Pi$ can be thought of as a class of classifiers, each mapping a context to one of the $K$ arms.
We consider the problem of contextual bandits with \textit{stochastic experts}. Our problem setting is closely related to the traditional experts setting in stochastic bandits~\cite{agarwal2014taming,dudik2011efficient}, with a subtle distinction. We assume access to a class of \textit{stochastic experts} $\Pi = \{\pi_1,...,\pi_N\}$, which are \textit{not deterministic}.
Instead, each expert $\pi \in \Pi$ is a conditional probability distribution over the arms given a context. For an expert $\pi \in \Pi$, the conditional distribution is denoted by $\pi_{V \vert X}(v \vert x)$, where $V \in [K]$ is the random variable denoting the arm chosen and $X$ is the context. One key motivation is that this allows us to derive regret bounds in terms of \textit{closeness} of these soft experts quantified by divergence measures, rather than in terms of the total number of arms $K$. Another motivation is that the cost-sensitive classification oracles used to generate deterministic experts in a practical algorithm (in the traditional experts setting~\cite{agarwal2014taming}) can be calibrated using well-known techniques~\cite{cohen2004properties}, such that they provide reliable confidence scores for each arm. These confidence scores are precisely a probability distribution over the arms, which encodes an expert's soft belief about the best arm given a context.
As before, the task is to compete against the expert in the class with the highest mean reward. The expected reward of a stochastic expert $\pi$ is defined as ${\mathbb{E}}_{X,V \sim \pi(V \vert X)} \left[ r_{V}\right]$, i.e., the mean reward observed when the arm is drawn from the conditional distribution $\pi(V \vert X)$. We propose upper-confidence (UCB) style algorithms for the contextual bandits with stochastic experts problem. We prove \textit{instance-dependent} regret guarantees that scale as $\mathcal{O} \left(\mathcal{C} \left(N,\Delta\right) \log T\right)$, where $\mathcal{C} \left(N,\Delta\right)$ is an instance dependent term. We show that under some assumptions the term $\mathcal{C} \left(N,\Delta\right)$ scales as $\mathcal{O}(\log N/ \Delta)$, where $\Delta$ is the gap in mean reward between the best expert and the second best. In the next section, we list the main contributions of this paper.
\subsection{Main Contributions}
The contributions of this paper are threefold:
{\bf $(i)$ (Instance Dependent Regret Bounds):} We propose the contextual bandits with stochastic experts problem. We design two UCB based algorithms for this problem, based on two different importance sampling based estimators for estimating the mean of each expert. The key idea behind these estimators is that samples collected by deploying an expert can be used to estimate the mean reward under another expert. This \textit{information leakage} helps us achieve regret guarantees that scale sub-linearly in $N$, the number of experts. We analyze UCB based algorithms under two such estimators, the clipped estimator~\eqref{eq:est1} (introduced in~\cite{pmlr-v70-sen17a}) and the median of means estimator~\eqref{eq:est2} (proposed in this paper). The information leakage between any two experts in the first estimator is governed by a pairwise log-divergence measure (Def.~\ref{def:mij}); for the second estimator, the chi-square divergence (Def.~\ref{def:sij}) characterizes it.
We show that the regret of our UCB algorithm based on these two estimators scales as\footnote{Tighter regret bounds are derived in Theorems \ref{thm:r1} and \ref{thm:r2}. Here, we only state corollaries of those results that are easier to parse.}:
\[ \mathcal{O}\left( \frac{\lambda(\pmb{\mu}) \mathcal{M}}{\Delta} \log T \right) \]
Here, $\mathcal{M}$ is related to the largest pairwise divergence values under the two divergence measures used. $\Delta$ is the gap between the mean rewards of the optimal expert and the second best. $\lambda(\pmb{\mu})$ is a parameter that only depends on the gaps between the mean reward of the optimal expert and those of the various sub-optimal ones: it is a normalized sum of differences of squares of the gaps of adjacent sub-optimal experts, ordered by their gaps. Under the assumption that the sub-optimal gaps (except that of the second-best expert) are uniformly distributed in a bounded interval, we can show that the parameter $\lambda(\pmb{\mu})$ is $O(\log N)$. We define this parameter explicitly in Section \ref{sec:results} for all problem instances.
For the clipped estimator we show that $\mathcal{M}= M^2 \log^2 (1/\Delta)$ where $M$ is the largest pairwise log-divergence associated with the clipped estimator. For the median of means estimator, $\mathcal{M}= \sigma^2 $ where $\sigma^2$ is the largest pairwise chi-squared divergence.
Naively treating each expert as an arm would lead to a regret scaling of $\mathcal{O}(N \log T/ \Delta)$; however, this ignores information leakage. Existing instance-independent bounds for contextual bandits scale as $\sqrt{KT \mathrm{poly} \log(N) }$. Our problem-dependent bounds have a near-optimal dependence on $\Delta$ and do not depend on $K$, the number of arms. However, they depend on the divergence measures associated with the information leakage in the problem (the $M$ or $\sigma$ parameters). Beyond our analysis, we empirically show that this divergence-based approach rivals or outperforms very efficient heuristics for contextual bandits (like bagging) on real-world data sets.
{\bf $(ii)$ (Importance Sampling based Estimators):}
As mentioned before, the key components in our UCB based algorithm are two estimators that can utilize the samples collected under all the experts, in order to estimate the mean reward under any given expert. Both of these estimators are based on importance sampling techniques.
The first estimator that we use is the clipped estimator~\eqref{eq:est1}, which was introduced in~\cite{pmlr-v70-sen17a}. However, we modify the clipper level adaptively over the course of our UCB algorithm, in order to achieve our regret guarantees.
We also propose a novel median of means~\eqref{eq:est2} based importance sampling estimator for estimating the mean under all experts, utilizing the information leakage among them. To the best of our knowledge, importance sampling has not been used in conjunction with the median of means technique in the literature before. We provide novel confidence guarantees for this estimator, which depend on the chi-square divergences between the conditional distributions under the various experts. This may be of independent interest.
{\bf $(iii)$ (Empirical Validation):} We empirically validate our algorithm on three real world data-sets~\cite{frey1991letter,fehrman2017five,stream} against other state of the art contextual bandit algorithms~\cite{langford2008epoch,agarwal2014taming} implemented in Vowpal Wabbit~\cite{wabbit}. In our implementation, we use online training of cost-sensitive classification oracles~\cite{beygelzimer2009offset} to generate the class of stochastic experts. We show that our algorithms have better regret performance on these data-sets compared to the other algorithms.
\subsection{Main Contributions}
The contributions of this paper are three-folds:
{\bf $(i)$ (Importance Sampling based Estimators):}
The key components in our approach are two importance sampling based estimators for the mean rewards under all the experts. Both these estimators are based on the observation that samples collected under one expert can be reweighted by likelihood/importance ratios and averaged to provide an estimate for the mean reward under another expert. This sharing of information is termed as \textit{information leakage} and has been utilized before under various settings~\cite{lattimore2016causal,pmlr-v70-sen17a,bottou2013counterfactual}. The first estimator that we use is an adaptive variant of the well-known clipping technique, which was proposed in~\cite{pmlr-v70-sen17a}. The estimator is presented in Eq.~\eqref{eq:est1}. However, we carefully adapt the clipping threshold in an online manner, in order to achieve regret guarantees.
We also propose an importance sampling variant of the classical median of means estimator (see ~\cite{lugosi2017sub,bubeck2013bandits}). This estimator is also designed to utilize the samples collected under all experts together to estimate the mean reward under any given expert. We define the estimator in Eq.~\eqref{eq:est2}. To the best of our knowledge, importance sampling has not been used in conjunction with the median of means technique in the literature before. We provide novel confidence guarantees for this estimator which depends on chi-square divergences between the conditional distributions under the various experts. This may be of independent interest.
{\bf $(ii)$ (Instance Dependent Regret Bounds):} We propose the contextual bandits with stochastic experts problem. We design two UCB based algorithms for this problem, based on the two importance sampling based estimators mentioned above. We show that utilizing the \textit{information leakage} between the experts leads to regret guarantees that scale sub-linearly in $N$, the number of experts. The information leakage between any two experts in the first estimator is governed by a pairwise log-divergence measure (Def.~\ref{def:mij}). For the second estimator, chi-square divergences (Def.~\ref{def:sij}) characterize the leakage.
We show that the regret of our UCB algorithm based on these two estimators scales as \footnote{Tighter regret bounds are derived in Theorems \ref{thm:r1} and \ref{thm:r2}. Here, we only mention the Corollaries of our approach, that are easy to state.}: $ \mathcal{O}\left( \frac{\lambda(\pmb{\mu}) \mathcal{M}}{\Delta} \log T \right)$.
Here, $\mathcal{M}$ is related to the largest pairwise divergence values under the two divergence measures used. $\Delta$ is the gap between the mean rewards of the optimal expert and the second best. $\lambda(\pmb{\mu})$ is a parameter that only depends on the gaps between mean rewards of the optimum experts and various sub-optimal ones. It is a normalized sum of difference in squares of the gaps of adjacent sub-optimal experts ordered by their gaps. Under the assumption that the suboptimal gaps (except that of the second best arm) are uniformly distributed in a bounded interval, we can show that the parameter $\lambda(\pmb{\mu})$ is $O(\log N)$ in expectation. We define this parameter explicitly in Section \ref{sec:results}.
For the clipped estimator we show that $\mathcal{M}= M^2 \log^2 (1/\Delta)$ where $M$ is the largest pairwise log-divergence associated with the clipped estimator. For the median of means estimator, $\mathcal{M}= \sigma^2 $ where $\sigma^2$ is the largest pairwise chi-squared divergence.
Naively treating each expert as an arm would lead to a regret scaling of $\mathcal{O}(N \log T/ \Delta)$. However, this ignores information leakage. Existing instance-independent bounds for contextual bandits scale as $\sqrt{KT \mathrm{poly} \log(N) }$~\cite{agarwal2014taming}. Our problem dependent bounds have a near optimal dependence on $\Delta$ and does not depend on $K$, the numbers of arms. However, it depends on the divergence measure associated with the information leakage in the problem ($M$ or $\sigma$ parameters). Besides our analysis, we empirically show that this divergence based approach rivals or performs better than very efficient heuristics for contextual bandits (like bagging etc.) on real-world data sets.
{\bf $(iii)$ (Empirical Validation):} We empirically validate our algorithm on three real world data-sets~\cite{frey1991letter,fehrman2017five,stream} against other state of the art contextual bandit algorithms~\cite{langford2008epoch,agarwal2014taming} implemented in Vowpal Wabbit~\cite{wabbit}. In our implementation, we use online training of cost-sensitive classification oracles~\cite{beygelzimer2009offset} to generate the class of stochastic experts. We show that our algorithms have better regret performance on these data-sets compared to the other algorithms.
\section{Introduction}
\label{sec:intro}
Modern machine learning applications like recommendation engines~\cite{li2011scene,bouneffouf2012contextual,li2010contextual}, computational advertising~\cite{tang2013automatic,bottou2013counterfactual}, and A/B testing in medicine~\cite{tekin2015discover,tekin2016adaptive} are inherently online. In these settings the task is to take sequential decisions that are not only profitable but also enable the system to learn better in the future. For instance, in a computational advertising system, the task is to sequentially place advertisements on users' webpages with the dual objective of learning the preferences of the users and increasing the click-through rate on the fly. A key attribute of these systems is the well-known \textit{exploration} (searching the space of possible decisions for better learning) and \textit{exploitation} (taking decisions that are more profitable) trade-off. A principled method to capture this trade-off is the study of multi-armed bandit problems~\cite{bubeck2012regret}.
$K$-armed stochastic bandit problems have been studied for several decades.
These are formulated as a sequential process, where at each time step any one of the $K$-arms can be selected. Upon selection of the $k$-th arm, the arm returns a stochastic reward with an expected reward of $\mu_k$.
Starting from the work of \cite{lai1985asymptotically}, a major focus has been on \textit{regret}, which is the difference between the total reward accumulated by the \textit{genie} optimal policy (one that always selects the arm with the maximum expected reward) and that of the chosen online policy.
The current state-of-art algorithms achieve a regret
of $O((K/\Delta) \log T )$
\cite{bubeck2012regret,auer2010ucb,agrawal2012analysis,auer2002using},
which is order-wise optimal~\cite{lai1985asymptotically}. Here, $\Delta$ corresponds to the gap in expected reward between the best arm and the next best one.
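To make the index-policy idea concrete, the following is a minimal UCB1-style sketch on Bernoulli arms (the arm means, horizon, and function name are illustrative, not taken from this paper):

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """Minimal UCB1 sketch on K Bernoulli arms (illustrative parameters)."""
    rng = random.Random(seed)
    K = len(means)
    counts = [0] * K      # number of pulls per arm
    sums = [0.0] * K      # cumulative reward per arm
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= K:
            # play each arm once to initialize the empirical means
            arm = t - 1
        else:
            # index = empirical mean + exploration bonus
            arm = max(range(K),
                      key=lambda k: sums[k] / counts[k]
                      + math.sqrt(2.0 * math.log(t) / counts[k]))
        r = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += r
        total_reward += r
    return counts, total_reward
```

With a gap of $\Delta = 0.4$ between the two best arms, the sub-optimal arms end up being pulled only $O(\log T / \Delta^2)$ times each, which is the source of the $O((K/\Delta)\log T)$ regret scaling.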
Additional side information can be incorporated in this setting through the framework of contextual bandits. In the stochastic setting, it is assumed that at each time-step nature draws $(x,r_1,...,r_K)$ from a fixed but unknown distribution. Here, $x \in \mathcal{X}$ represents the context vector, while $r_1,...,r_K$ are the rewards of the $K$-arms~\cite{langford2008epoch}. The context $x$ is revealed to the policy-designer, after which she decides to choose an arm $a \in \{1,2,...,K \}$. Then, the reward $r_a$ is revealed to the policy-designer. In the computational advertising example, the context can be thought of as the browsing history, age, gender etc. of a user arriving in the system, while $r_1,...,r_K$ are generated according to the probability of the user clicking on each of the $K$ advertisements. The task here is to learn a \textit{good} mapping from the space of contexts $\mathcal{X}$ to the space of arms $[K] = \{1,2,...,K\}$ such that when the decisions are taken according to that mapping, the mean reward observed is high.
A popular model in the stochastic contextual bandits literature is the \textit{experts} setting~\cite{agarwal2014taming,dudik2011efficient,langford2008epoch}. The task is to compete against the best \textit{expert} in a class of experts $\Pi = \{\pi_1,...,\pi_N\}$, where each expert $\pi \in \Pi$ is a function mapping $\mathcal{X} \rightarrow [K]$. The mean reward of an expert $\pi$ is defined as $\mathbb{E} \left[ r_{\pi(X)} \right]$, where $X$ is the random variable denoting the context and the expectation is taken over the unknown distribution over $(x,r_1,...,r_K)$. The best expert is naturally defined as the expert with the highest mean reward. The expected difference in rewards of a genie policy that always chooses the best expert and the online algorithm employed by the policy-designer is defined as the regret. This problem has been well-studied in the literature, where a popular approach is to reduce the contextual bandit problem to supervised learning techniques through $\argmin$-oracles~\cite{beygelzimer2009offset}. This leads to powerful algorithms with instance-independent regret bounds of $\mathcal{O} \left(\sqrt{KT\mathrm{polylog}(N)} \right)$ at time $T$~\cite{agarwal2014taming,dudik2011efficient}.
In practice, the class of experts is generated online by training cost-sensitive classification oracles~\cite{agarwal2014taming,dudik2011efficient}. Once trained, the resulting classifiers/oracles can provide reliable confidence scores given a new context, especially if they are well-calibrated~\cite{cohen2004properties}. These confidence scores are effectively a $K$-dimensional probability vector, where the $k^{th}$ entry is the probability of the classifier/oracle choosing the $k^{th}$ arm as the best, given a context. Motivated by this observation, we propose a variation of the traditional experts setting, which we term contextual bandits with \textit{stochastic experts.} We assume access to a class of \textit{stochastic experts} $\Pi = \{\pi_1,...,\pi_N\}$, which are \textit{not deterministic}. Instead, each expert $\pi \in \Pi$ is a conditional probability distribution over the arms given a context. For an expert $\pi \in \Pi$, the conditional distribution is denoted by $\pi_{V \vert X}(v \vert x)$, where $V \in [K]$ is the random variable denoting the arm chosen and $X$ is the context. An additional benefit is that this setting allows us to derive regret bounds in terms of \textit{closeness} of these soft experts quantified by divergence measures, rather than in terms of the total number of arms $K$.
As before, the task is to compete against the expert in the class with the highest mean reward. The expected reward of a stochastic expert $\pi$ is defined as $\mathbb{E}_{X,V \sim \pi(V \vert X)} \left[ r_{V}\right]$, i.e., the mean reward observed when the arm is drawn from the conditional distribution $\pi(V \vert X)$. We propose upper-confidence (UCB) style algorithms for the contextual bandits with stochastic experts problem, which employ two importance sampling based estimators for the mean rewards under the various experts. We prove \textit{instance-dependent} regret guarantees for our algorithms. The main contributions of this paper are listed in the next section.
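As a concrete sketch of the stochastic-expert abstraction: each expert maps a context $x$ to a probability vector over the $K$ arms, and the played arm $V$ is sampled from $\pi(V \vert x)$. The two-arm experts `expert_a` and `expert_b` below are illustrative, not taken from the paper:

```python
import random

def draw_arm(expert, x, rng):
    """Sample an arm V from the conditional distribution pi(V | X = x).

    `expert` maps a context to a probability vector over the K arms
    (e.g. calibrated confidence scores of a trained classifier).
    """
    probs = expert(x)
    u, acc = rng.random(), 0.0
    for arm, p in enumerate(probs):
        acc += p
        if u < acc:
            return arm
    return len(probs) - 1  # guard against floating-point round-off

# Illustrative stochastic experts over K = 2 arms and contexts x in {0, 1}:
expert_a = lambda x: [0.8, 0.2] if x == 0 else [0.3, 0.7]
expert_b = lambda x: [0.5, 0.5]
```

The mean reward of an expert is then the expectation of the observed reward when arms are drawn this way, averaged over the context distribution.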
\subsection{Main Contributions}
The contributions of this paper are three-fold:
{\bf $(i)$ (Importance Sampling based Estimators):}
The key components in our approach are two importance sampling based estimators for the mean rewards under all the experts. Both estimators are based on the observation that samples collected under one expert can be reweighted by likelihood/importance ratios and averaged to provide an estimate of the mean reward under another expert. This sharing of information is termed \textit{information leakage} and has been utilized before in various settings~\cite{lattimore2016causal,pmlr-v70-sen17a,bottou2013counterfactual}. The first estimator that we use is an adaptive variant of the well-known clipping technique proposed in~\cite{pmlr-v70-sen17a}. The estimator is presented in Eq.~\eqref{eq:est1}. However, we carefully adapt the clipping threshold in an online manner in order to achieve our regret guarantees.
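A minimal sketch of a clipped importance-sampling estimate in this spirit; the sample-tuple format and the fixed threshold are simplifying assumptions, and the online adaptation of the clipping threshold described above is omitted:

```python
def clipped_is_estimate(samples, pi_k, threshold):
    """Clipped importance-sampling estimate of expert k's mean reward.

    `samples` are tuples (x, v, r, p_behavior): context, arm drawn,
    observed reward, and the probability the behavior expert assigned
    to that arm. Terms whose importance ratio pi_k(v|x)/p_behavior
    exceeds `threshold` are dropped (clipped to zero), trading a small
    bias for bounded variance.
    """
    total = 0.0
    for x, v, r, p_b in samples:
        ratio = pi_k(x)[v] / p_b
        if ratio <= threshold:
            total += r * ratio
    return total / len(samples)
```

A larger threshold reduces the clipping bias but lets heavy-tailed importance ratios inflate the variance; the paper's log-divergence measure controls this trade-off.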
We also propose an importance sampling variant of the classical median of means estimator (see~\cite{lugosi2017sub,bubeck2013bandits}). This estimator is also designed to utilize the samples collected under all experts together to estimate the mean reward under any given expert. We define the estimator in Eq.~\eqref{eq:est2}. To the best of our knowledge, importance sampling has not been used in conjunction with the median of means technique in the literature before. We provide novel confidence guarantees for this estimator which depend on the chi-square divergences between the conditional distributions under the various experts. This may be of independent interest.
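A minimal sketch of the median-of-means idea applied to importance-weighted samples; unlike the actual estimator of Eq.~\eqref{eq:est2}, this sketch omits the chi-square normalization $1/\sigma_{kj}^2$ and simply splits the reweighted samples into equal-size blocks:

```python
def median_of_means_is(samples, pi_k, n_blocks):
    """Median-of-means over importance-weighted sample blocks.

    Splits the reweighted samples into `n_blocks` groups, averages each
    group, and returns the median of the group means, which is robust to
    the heavy tails that importance ratios can introduce.
    `samples` are tuples (x, v, r, p_behavior) as in the clipped sketch.
    """
    weighted = [r * pi_k(x)[v] / p_b for (x, v, r, p_b) in samples]
    m = len(weighted) // n_blocks          # samples per block
    block_means = [sum(weighted[i * m:(i + 1) * m]) / m
                   for i in range(n_blocks)]
    block_means.sort()
    return block_means[n_blocks // 2]
```

The median step is what converts a constant-probability (e.g. 3/4) per-block guarantee into one that fails with probability exponentially small in the number of blocks.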
{\bf $(ii)$ (Instance Dependent Regret Bounds):} We propose the contextual bandits with stochastic experts problem. We design two UCB based algorithms for this problem, based on the two importance sampling based estimators mentioned above. We show that utilizing the \textit{information leakage} between the experts leads to regret guarantees that scale sub-linearly in $N$, the number of experts. The information leakage between any two experts in the first estimator is governed by a pairwise log-divergence measure (Def.~\ref{def:mij}). For the second estimator, chi-square divergences (Def.~\ref{def:sij}) characterize the leakage.
We show that the regret of our UCB algorithms based on these two estimators scales as \footnote{Tighter regret bounds are derived in Theorems \ref{thm:r1} and \ref{thm:r2}; here, we only state corollaries of those results that are easier to parse.}: $ \mathcal{O}\left( \frac{\lambda(\pmb{\mu}) \mathcal{M}}{\Delta} \log T \right)$.
Here, $\mathcal{M}$ is related to the largest pairwise divergence value under the two divergence measures used. $\Delta$ is the gap between the mean rewards of the optimal expert and the second best. $\lambda(\pmb{\mu})$ is a parameter that depends only on the gaps between the mean reward of the optimal expert and those of the various sub-optimal ones. It is a normalized sum of differences of the squared gaps of adjacent sub-optimal experts, ordered by their gaps. Under the assumption that the sub-optimal gaps (except that of the second best arm) are uniformly distributed in a bounded interval, we can show that the parameter $\lambda(\pmb{\mu})$ is $O(\log N)$ in expectation. We define this parameter explicitly in Section \ref{sec:results}.
For the clipped estimator we show that $\mathcal{M}= M^2 \log^2 (1/\Delta)$ where $M$ is the largest pairwise log-divergence associated with the clipped estimator. For the median of means estimator, $\mathcal{M}= \sigma^2 $ where $\sigma^2$ is the largest pairwise chi-squared divergence.
Naively treating each expert as an arm would lead to a regret scaling of $\mathcal{O}(N \log T/ \Delta)$. However, this ignores information leakage. Existing instance-independent bounds for contextual bandits scale as $\sqrt{KT \mathrm{poly} \log(N) }$~\cite{agarwal2014taming}. Our problem dependent bounds have a near optimal dependence on $\Delta$ and do not depend on $K$, the number of arms. However, they depend on the divergence measures associated with the information leakage in the problem (the $M$ or $\sigma$ parameters). Besides our analysis, we empirically show that this divergence based approach rivals or outperforms very efficient heuristics for contextual bandits (like bagging) on real-world data sets.
{\bf $(iii)$ (Empirical Validation):} We empirically validate our algorithms on three real-world data-sets~\cite{frey1991letter,fehrman2017five,stream} against other state-of-the-art contextual bandit algorithms~\cite{langford2008epoch,agarwal2014taming} implemented in Vowpal Wabbit~\cite{wabbit}. In our implementation, we use online training of cost-sensitive classification oracles~\cite{beygelzimer2009offset} to generate the class of stochastic experts. We show that our algorithms have better regret performance on these data-sets compared to the other algorithms.
\section{Median of Means Estimator}
\label{sec:mom}
The median of means estimator is popular for estimating statistics of heavy-tailed distributions~\cite{bubeck2013bandits,lugosi2017sub}. We shall see that the median of means based estimator in Eq.~\eqref{eq:est2} has good variance properties when the chi-square divergences (Assumption~\ref{assump2}) are bounded. Before proving Lemma~\ref{lem:mom}, we first establish some intermediate results.
\begin{lemma}
\label{lem:chebby}
Consider the quantity $\hat{\mu}^{r}_k(t)$ in Eq.~\eqref{eq:momm}. The variance of this quantity is upper bounded as follows:
\begin{align*}
\mathrm{Var}[\hat{\mu}^{r}_k(t)] \leq \frac{1}{m}\cdot\frac{1}{W_{k}(t)^2} \leq \frac{\sigma^2}{m}
\end{align*}
where $m = \floor{t/l(t)}$.
\end{lemma}
\begin{proof}
We have the following chain,
\begin{align*}
&\mathrm{Var}[\hat{\mu}^{r}_k(t)] = \frac{1}{W_{k}(r,t)^2} \sum_{j = 1}^{N} \sum_{s \in
\mathcal{T}^{(r)}_j} \frac{1}{\sigma_{kj}^2} \mathrm{Var} \left(Y_j(s) \times \frac{\pi_k(V_j(s) \vert
X(s))}{\pi_j(V_j(s) \vert X_j(s))} \right) \\
& \leq \frac{1}{W_{k}(r,t)^2} \sum_{j = 1}^{N} \sum_{s \in
\mathcal{T}^{(r)}_j} \frac{1}{\sigma_{kj}^2} \mathrm{Var} \left(\frac{\pi_k(V_j(s) \vert
X(s))}{\pi_j(V_j(s) \vert X_j(s))} \right) \\
& = \frac{m}{W_{k}(r,t)^2} \\
&= \frac{1}{m}\cdot\frac{1}{\left(\sum_{j = 1}^{N} \frac{n_j(r,t)}{n(r,t)\cdot\sigma_{kj}}\right)^2} \\
&\leq \frac{1}{m}\cdot\frac{1}{W_{k}(t)^2} \numberthis \label{eq:inter}
\end{align*}
\end{proof}
Now, we can apply Chebyshev's inequality to conclude that for all $r \in [l(t)]$,
\begin{align}
\label{eq:mmeans}
\PP \left( \vert\hat{\mu}^{r}_k(t) - \mu_k \vert \leq \frac{1}{W_k(t)} \sqrt{\frac{4}{m}}\right) \geq \frac{3}{4}.
\end{align}
Now we will prove Lemma~\ref{lem:mom}.
\begin{proof}[Proof of Lemma~\ref{lem:mom}]
In light of Eq.~\eqref{eq:mmeans}, the probability that the median is not within distance $ \frac{1}{W_k(t)} \sqrt{\frac{4}{m}}$ of $\mu_k$ is bounded as,
\begin{align*}
&\PP\left(\vert\hat{\mu}^m_k(t) - \mu_k \vert > \frac{1}{W_k(t)} \sqrt{\frac{4}{m}}\right) \\
&\leq \PP \left(\mathrm{Bin}(l(t),1/4) > l(t)/2 \right) \leq \delta(t).
\end{align*}
This concludes the proof.
\end{proof}
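The binomial tail above decays exponentially in the number of blocks: by Hoeffding's inequality, the probability that a Binomial$(l, 1/4)$ variable exceeds $l/2$ is at most $e^{-l/8}$. A quick numeric sanity check of this standard bound (the values of $l$ are illustrative):

```python
import math

def binom_tail(l, p, threshold):
    """Exact P(Bin(l, p) > threshold), via the binomial pmf."""
    return sum(math.comb(l, i) * p**i * (1 - p)**(l - i)
               for i in range(threshold + 1, l + 1))

# compare the exact tail P(Bin(l, 1/4) > l/2) against Hoeffding's exp(-l/8)
for l in (8, 16, 32):
    exact = binom_tail(l, 0.25, l // 2)
    assert exact <= math.exp(-l / 8.0)
```

Choosing the number of blocks $l(t)$ to grow logarithmically therefore makes the failure probability $\delta(t)$ polynomially small in $t$.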
We re-index the experts such that $0 = \Delta_{(1)} \leq \Delta_{(2)} \leq ... \leq \Delta_{(N)}$. Throughout this proof, $U_k(t)$, $\hat{\mu}_k(t)$ and $s_k(t)$ in Algorithm~\ref{alg:dUCB} are as defined in Equations~\eqref{eq:est2} and~\eqref{eq:ucb2}. We first prove lemmas analogous to Lemmas~\ref{lem:lbound} and \ref{lem:ubound}.
\begin{lemma}
\label{lem:ubound_mom} We have the following confidence bound at time $t$,
\begin{align*}
\PP \left( U_{k^*}(t) \leq \mu^* \right) \geq 1 - \frac{1}{t^2}.
\end{align*}
\end{lemma}
The proof follows directly from Lemma~\ref{lem:mom}.
\begin{lemma}
\label{lem:lbound_mom} We have the following confidence bound at time $t > \frac{256\sigma^2\log T}{\Delta_k^2}$,
\begin{align*}
\PP \left( U_{k}(t) < \mu^* \right) \geq 1 - \frac{1}{t^2}.
\end{align*}
\end{lemma}
\begin{proof}
We have the following chain,
\begin{align*}
\PP \left( U_{k}(t) > \mu^* \right) &= \PP \left(\hat{\mu}_k(t) > \mu^* - \frac{1}{W_k(t)} \sqrt{\frac{64 \log t}{t}} \right) \\
& \stackrel{(i)}{\leq} \PP \left(\hat{\mu}_k(t) > \mu^* - \frac{\Delta_k}{2}\right) \\
& \stackrel{(ii)}{\leq} \PP \left(\hat{\mu}_k(t) > \mu_k + \frac{\Delta_k}{2} \right) \\
& \stackrel{(iii)}{\leq} \frac{1}{t^2}.
\end{align*}
Here, $(i)$ follows from the fact that $\frac{1}{W_k(t)} \leq \sigma$ and $t > \frac{256\sigma^2\log T}{\Delta_k^2}$. $(ii)$ is by definition of $\Delta_k$. Finally the concentration bound in $(iii)$ follows from Lemma~\ref{lem:mom}.
\end{proof}
Note that Lemma~\ref{lem:lbound_mom} and \ref{lem:ubound_mom} together imply that,
\begin{align}
\label{eq:badevent2}
\PP \left(k(t) = k \right) \leq \frac{2}{t^2}
\end{align}
for $k \neq k^*$ and $t > \frac{256\sigma^2\log T}{\Delta_k^2}$.
\begin{proof}[Proof of Theorem~\ref{thm:r2}]
Let $T_k = \frac{256\sigma^2\log T}{\Delta_{(k)}^2}$ for $k = 2,...,N$. The regret of the algorithm can be bounded as,
\begin{align*}
&R(T) \leq \Delta_{(N)}T_{N} + \sum_{k = 0}^{N-3} \sum_{t = T_{N - k }}^{T_{N - k -1}} \left( \PP\left( k(t) \in \{(1),...,(N-k-1) \} \right) \Delta_{(N-k-1)} + \sum_{i = N-k}^{N} \Delta_{(i)} \PP\left( k(t) = (i) \right) \right) \\
&\leq \Delta_{(N)}T_{N} + \sum_{k = 0}^{N-3} \left( \Delta_{(N-k-1)} \left(T_{N - k -1} - T_{N - k} \right) + \sum_{t = T_{N - k}}^{T_{N - k -1}} \sum_{i = N-k}^{N} \frac{2\Delta_{(i)}}{t^2}\right) \numberthis \label{eq:uselater2} \\
& \leq \sum_{k = 0}^{N-3} \frac{512\sigma^2\log T}{\Delta_{(N -k - 1)}} \left(1 - \frac{\Delta^2_{(N - k -1)}}{\Delta^2_{(N - k)}} \right) + \Delta_{(N)}T_{N} + \frac{\pi^2}{3} \left(\sum_{i = 2}^{N} \Delta_{(i)} \right)\\
&=\frac{512\sigma^2\log T}{\Delta_{(N)}} + \frac{\pi^2}{3} \left(\sum_{i = 2}^{N} \Delta_{(i)} \right) +\sum_{k = 2}^{N-1}\frac{512\sigma^2\log T}{\Delta_{(k)}} \left(1 - \frac{\Delta^2_{(k)}}{\Delta^2_{(k+1)}} \right).
\end{align*}
\end{proof}
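The final equality above is just the re-indexing $j = N - k - 1$ of the double sum; the following sketch checks that identity numerically on illustrative gap values, with a constant $c$ standing in for $512\sigma^2 \log T$:

```python
def lhs(gaps, c):
    """c/gaps[N] plus the sum over k = 0..N-3 of
    c/gaps[N-k-1] * (1 - gaps[N-k-1]^2 / gaps[N-k]^2),
    with gaps[2..N] the sub-optimality gaps (indices 0, 1 are placeholders)."""
    N = len(gaps) - 1
    total = c / gaps[N]
    for k in range(0, N - 2):
        j = N - k - 1                      # the re-indexing j = N - k - 1
        total += (c / gaps[j]) * (1 - gaps[j]**2 / gaps[j + 1]**2)
    return total

def rhs(gaps, c):
    """c/gaps[N] plus the re-indexed sum over k = 2..N-1."""
    N = len(gaps) - 1
    total = c / gaps[N]
    for k in range(2, N):
        total += (c / gaps[k]) * (1 - gaps[k]**2 / gaps[k + 1]**2)
    return total
```

Both forms agree term by term, since $k = 0, \ldots, N-3$ maps bijectively onto $j = N-1, \ldots, 2$.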
\section{Instance Dependent Terms}
\label{sec:regret}
In this section we devote our attention to the instance dependent terms in Theorems~\ref{thm:r1} and \ref{thm:r2}. We will first prove Corollary~\ref{cor:simple}.
\begin{proof}[Proof of Corollary~\ref{cor:simple}]
We prove the statements for the two estimators separately.
$(i)$ Going back to Lemma~\ref{lem:lbound} in the proof of Theorem~\ref{thm:r1}, we get that,
\begin{align*}
\PP(U_k(t) < \mu^*) \geq 1 - \frac{1}{t^2}
\end{align*}
when $t > \frac{144M^2\log^2 (6/\Delta_{(2)}) \log T}{\Delta_k^2}$. This simply follows from the fact that $\Delta_{(2)}$ is the smallest gap. Therefore, the chain leading to Eq.~\eqref{eq:uselater} follows with the new definition of $T_k = \frac{144M^2\log^2 (6/\Delta_{(2)}) \log T}{\Delta_{(k)}^2}$. Hence, the regret of Algorithm~\ref{alg:dUCB} under estimator~\eqref{eq:est1} is bounded as follows:
\begin{align*}
&R(T) \leq \frac{144M^2\log^2 (6/\Delta_{(2)}) \log T}{\Delta_{(2)}} \left(1 + \sum_{k = 2}^{N-1} \left(1 - \frac{\Delta_{(k)}^2}{\Delta_{(k+1)}^2} \right) \right) + \frac{\pi^2}{3} \left(\sum_{i = 2}^{N} \Delta_{(i)} \right)\\
& = \frac{144\lambda(\pmb{\mu})M^2\log^2 (6/\Delta_{(2)} ) \log T}{\Delta_{(2)}} + \frac{\pi^2}{3} \left(\sum_{i = 2}^{N} \Delta_{(i)} \right). \numberthis \label{eq:r12}
\end{align*}
We can analyze the same terms in an alternate manner. From Eq.~\eqref{eq:uselater} in the proof of Theorem~\ref{thm:r1}, it follows that the regret of Algorithm~\ref{alg:dUCB} under the clipped estimator is bounded by,
\begin{align*}
R(T) \leq \Delta_{(N)}T_{2} + \frac{\pi^2}{3} \left( \sum_{i} \Delta_{(i)}\right).
\end{align*}
Using the definition of $T_{2}$ in~\eqref{eq:uselater} we obtain:
\begin{align*}
&R(T) \leq \frac{144M^2\log^2 (6/\Delta_{(2)}) \log T}{\Delta^2_{(2)}} + \frac{\pi^2}{3} \left(\sum_{i = 2}^{N} \Delta_{(i)} \right).
\end{align*}
Combining the above equation with~\eqref{eq:r12} we get the desired result.
$(ii)$ Theorem~\ref{thm:r2} immediately implies that
\begin{align*}
R(T) &\leq \frac{256\lambda(\pmb{\mu})\sigma^2 \log T}{\Delta_{(2)}} + \frac{\pi^2}{3} \left(\sum_{i = 2}^{N} \Delta_{(i)} \right), \numberthis \label{eq:r21}
\end{align*}
for the median of means estimator.
We can alternately analyze the regret as follows. From Eq.~\eqref{eq:uselater2} in the proof of Theorem~\ref{thm:r2}, it follows that the regret of Algorithm~\ref{alg:dUCB} under the median of means estimator is bounded by,
\begin{align*}
R(T) \leq \Delta_{(N)}T_{2} + \frac{\pi^2}{3} \left( \sum_{i} \Delta_{(i)}\right).
\end{align*}
Using the definition of $T_{2}$ in~\eqref{eq:uselater2} we obtain:
\begin{align*}
&R(T) \leq \frac{256\sigma^2 \log T}{\Delta^2_{(2)}} + \frac{\pi^2}{3} \left(\sum_{i = 2}^{N} \Delta_{(i)} \right).
\end{align*}
Combining the equations above we get the desired result.
\end{proof}
Now we will work under the assumption that the gaps in the means of the experts are generated according to the generative model in Corollary~\ref{lem:mu}.
\begin{proof}[Proof of Corollary~\ref{lem:mu}]
In light of Corollary~\ref{cor:simple}, we just need to prove that $\mathbb{E}_{p_{\Delta}}[\lambda(\pmb{\mu})] = O(\log N)$.
Now, we will assume that $\{\Delta_{(i)}\}$ for $i = 3,...,N$, are order statistics of $N-2$ i.i.d.\ uniform random variables over the interval $[\Delta_{(2)},1]$.
Note that by Jensen's we have the following:
\begin{align}
\label{eq:jensen}
1 - \mathbb{E} \left[ \frac{\Delta_{(k)}^2}{\Delta_{(k+1)}^2} \right] \leq 1 - \mathbb{E} \left[ \frac{\Delta_{(k)}}{\Delta_{(k+1)}} \right]^2.
\end{align}
Let $X = \Delta_{(k)}$ and $Y = \Delta_{(k+1)}$ for $k \geq 3$. The joint pdf of $X,Y$ is given by,
\begin{align*}
&f(x,y) = \frac{(N-2)!}{(k-1)!(N-3-k)!} \left(\frac{x - \Delta_{(2)}}{1 - \Delta_{(2)} } \right)^{k-1} \times \left(1 - \frac{y - \Delta_{(2)}}{1 - \Delta_{(2)} } \right)^{N -3 -k} \frac{1}{(1 - \Delta_{(2)} )^2}.
\end{align*}
Therefore, we have the following chain:
\begin{align*}
&\mathbb{E} \left[ \frac{X}{Y}\right] = \int_{y = \Delta_{(2)} }^{1} \int_{x = \Delta_{(2)}}^{y} \frac{x}{y} \frac{(N-2)!}{(k-1)!(N-3-k)!} \times \left(\frac{x - \Delta_{(2)}}{1 - \Delta_{(2)} } \right)^{k-1} \left(1 - \frac{y - \Delta_{(2)}}{1 - \Delta_{(2)} } \right)^{N -3 -k} \frac{1}{(1 - \Delta_{(2)} )^2} dx dy \\
&= \int_{b = 0 }^{1} \int_{a = 0}^{b} \frac{(1 - \Delta_{(2)})a+ \Delta_{(2)}}{(1 - \Delta_{(2)})b+ \Delta_{(2)}} \frac{(N-2)!}{(k-1)!(N-3-k)!} \times \left(a \right)^{k-1} \left(1 -b \right)^{N -3 -k} da db \\
&\geq \int_{b = 0 }^{1} \int_{a = 0}^{b} \frac{a}{b} \frac{(N-2)!}{(k-1)!(N-3-k)!} \times \left(a \right)^{k-1} \left(1 -b \right)^{N -3 -k} da db \\
& = \frac{k}{k+1}.
\end{align*}
Combining this with Eq.~\eqref{eq:jensen} yields
\begin{align*}
&{\mathbb{E}}_{p_{\Delta}} \left[ \sum_{k = 2}^{N-1} \left(1 - \frac{\Delta_{(k)}^2}{\Delta_{(k+1)}^2} \right) \right] \\
&\leq 1 + \sum_{k = 3}^{N-1} \left(1 - \frac{k^2}{(k+1)^2} \right) \\
&= 1 + \sum_{k = 3}^{N-1} \left(\frac{2k+1}{(k+1)^2} \right) \\
&\leq 1 + \sum_{k = 3}^{N-1} \frac{2}{k+1} \numberthis \label{eq:deltabound}\\
&\leq 1 + 2\log N.
\end{align*}
\end{proof}
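The $O(\log N)$ scaling of ${\mathbb{E}}_{p_{\Delta}}[\lambda(\pmb{\mu})]$ under this generative model is easy to check numerically. The following Python sketch (the choices of $\Delta_{(2)}$, trial count and seed are illustrative, not from the paper) draws gaps from the model and compares the empirical mean of $\lambda(\pmb{\mu})$ against the $2 + 2\log N$ bound implied by the proof above:

```python
import math
import random

def lam(gaps):
    """lambda(mu) = 1 + sum_{k=2}^{N-1} (1 - Delta_(k)^2 / Delta_(k+1)^2),
    where gaps = [Delta_(2), ..., Delta_(N)] (the optimal expert has gap 0
    and does not enter the sum)."""
    g = sorted(gaps)
    return 1.0 + sum(1.0 - g[i] ** 2 / g[i + 1] ** 2 for i in range(len(g) - 1))

def mean_lam(N, delta2=0.05, trials=1000, seed=0):
    """Empirical mean of lambda(mu) when Delta_(3), ..., Delta_(N) are the
    order statistics of N-2 i.i.d. uniforms on [delta2, 1] (the generative
    model of the corollary); lam() sorts internally, so raw draws suffice."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        gaps = [delta2] + [rng.uniform(delta2, 1.0) for _ in range(N - 2)]
        total += lam(gaps)
    return total / trials
```

Doubling $N$ should only add an additive constant to the mean, in line with logarithmic growth.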
\section{More on Empirical Results}
\label{sec:more_emp}
In this section we provide more details about our empirical results under the following sub-headings.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\linewidth]{stream_comp}
\caption{\small We plot the instance-dependent terms from D-UCB bounds (the term in Theorem~\ref{thm:r2} involving the gaps~\eqref{eq:instance1}) and that of UCB-1 bounds~\eqref{eq:instance2} as the number of experts grows in the stream analytics dataset. It can be observed that the instance dependent term from D-UCB grows at a much slower pace with the number of experts, and in fact stops increasing after a certain point. }
\label{fig:terms}
\end{figure}
{\bf Training of Stochastic Experts: } In Algorithm~\ref{alg:batched}, new experts are added before starting a new batch. These stochastic experts are classifying functions trained using cost-sensitive classification oracles on the data observed so far, following the ideas in~\cite{beygelzimer2009offset}. The key idea is to reduce the cost-sensitive classification problem to importance weighted classification, which can be solved using binary classifiers by assigning a weight to each sample. Suppose a context $x$ is observed and Algorithm~\ref{alg:batched} chooses an expert $\pi_i$ and draws an arm $a$ from the conditional distribution $\pi_i(.\vert x)$. Suppose the reward observed is $r(a)$. Then the training sample $(x,a)$ with a sample weight of $r(a)/\pi_i(a \vert x)$ is added to the dataset for training the next batch of experts. It has been shown that this importance weighting yields \textit{good} classification experts. These classifiers can provide confidence scores for arms given a context, and hence can serve as stochastic experts. $4$ different experts are added at the beginning of each batch, out of which three are trained using XgBoost as the base classifier while one is trained using logistic regression. Diversity is maintained among the added experts by training them on bootstrapped versions of the data observed so far, and also by selecting different hyper-parameters. Note that the parameter selection scheme is not tuned per dataset, but is held fixed for all three datasets.
{\bf Estimating Divergence Parameters: } Both our divergence metrics $M_{ij}$ and $\sigma_{ij}$ can be estimated from the data observed so far, during a run of Algorithm~\ref{alg:batched}. These divergences do not depend on the arm chosen, but only on the context distribution and the conditional distributions encoded by the experts. Therefore, they can be easily estimated from observed data. Suppose $n$ contexts $\{x_1,...,x_n \}$ have been observed so far. We are interested in estimating $\sigma_{ij}$, the chi-square divergence between $\pi_i$ and $\pi_j$. A natural estimator is the empirical mean $(1/n) \sum_{k = 1}^{n} D_{f_2} (\pi_i(.\vert x_k) \Vert \pi_j(.\vert x_k) )$. Note that the distribution over the arms $\pi_j(.\vert x_k)$ is nothing but the confidence scores observed through evaluation of the classifying oracle $\pi_j$ on the features/context $x_k$. In order to be robust, we use the median of means estimator instead of the simple empirical mean for estimating the divergences.
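To make this concrete, here is a minimal Python sketch of the procedure (the function names and the two-arm toy experts in the usage note are ours; the standard chi-square divergence $D_{f_2}(p \Vert q) = \sum_a p(a)^2/q(a) - 1$ is assumed):

```python
import statistics

def chi_square_div(p, q):
    """Chi-square divergence D_f2(p || q) = sum_a p(a)^2 / q(a) - 1
    between two distributions over arms, given as lists of probabilities."""
    return sum(pa ** 2 / qa for pa, qa in zip(p, q)) - 1.0

def median_of_means(values, n_blocks):
    """Split values into n_blocks groups and return the median of the
    per-group means; more robust to outliers than a plain empirical mean."""
    blocks = [values[b::n_blocks] for b in range(n_blocks)]
    return statistics.median(sum(blk) / len(blk) for blk in blocks)

def estimate_sigma(contexts, pi_i, pi_j, n_blocks=5):
    """Estimate sigma_ij from observed contexts; pi(x) returns the expert's
    conditional distribution over arms (its confidence scores) for context x."""
    divs = [chi_square_div(pi_i(x), pi_j(x)) for x in contexts]
    return median_of_means(divs, n_blocks)
```

For instance, with experts that always output $(0.5, 0.5)$ and $(0.25, 0.75)$ regardless of the context, every per-context divergence equals $1/3$, and so does the median-of-means estimate.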
{\bf Empirical Analysis of Instance Dependent Terms: } Here we empirically validate that the instance dependent terms in Theorems~\ref{thm:r1} and \ref{thm:r2} are indeed much smaller than the corresponding terms in the UCB-1~\cite{auer2002using} regret bounds, even in real problems where our generative assumptions do not hold. To showcase this, we plot the instance-dependent term in Theorem~\ref{thm:r2}, which is given by \[ \sum_{k = 2}^{N-1} \frac{1 }{\Delta_{(k)}} \left(1 - \frac{\Delta_{(k)}^2}{\Delta_{(k+1)}^2} \right) + \frac{1}{\Delta_{(N)}}, \numberthis \label{eq:instance1}\] along with the corresponding term in the UCB-1 bounds, given by \[ \sum_{k = 2}^{N} \frac{1}{\Delta_{(k)}},\numberthis \label{eq:instance2} \] as the number of stochastic experts grows in the stream dataset experiments of Section~\ref{sec:sims}. The true means of the experts have been estimated in hindsight over the whole dataset. The plot is shown in Fig.~\ref{fig:terms}. It can be observed that the term in the D-UCB bounds grows at a much slower pace, and in fact stops increasing with the number of experts after a certain point.
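Both quantities are simple functions of the gap vector. The sketch below (with hypothetical gap values) computes them and illustrates why the D-UCB term is never larger: each summand $(1/\Delta_{(k)})(1 - \Delta_{(k)}^2/\Delta_{(k+1)}^2)$ is at most $1/\Delta_{(k)}$.

```python
def ducb_term(gaps):
    """Instance-dependent term in the D-UCB bound (Theorem thm:r2);
    gaps = [Delta_(2), ..., Delta_(N)]."""
    g = sorted(gaps)
    tail = 1.0 / g[-1]
    return sum((1.0 / g[i]) * (1.0 - g[i] ** 2 / g[i + 1] ** 2)
               for i in range(len(g) - 1)) + tail

def ucb1_term(gaps):
    """Corresponding instance-dependent term in the UCB-1 bound."""
    return sum(1.0 / d for d in gaps)
```

Adding a near-duplicate expert (a repeated gap) leaves the D-UCB term essentially unchanged, since the corresponding ratio term vanishes, while the UCB-1 term keeps growing.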
\section{Theoretical Results}
\label{sec:results}
In this section, we provide \textit{instance dependent} regret guarantees for Algorithm~\ref{alg:dUCB} for the two estimators proposed: (a) the clipped estimator~\eqref{eq:est1} and (b) the median of means estimator~\eqref{eq:est2}. Let $\Delta = \min_{k \neq k^*} \Delta_k$ be the gap in expected reward between the optimal expert and the second best. Later in the section, we define a parameter $\lambda(\pmb{\mu})$ that depends only on the gaps between the expected rewards of the various experts and that of the optimal one.
When Algorithm~\ref{alg:dUCB} uses the clipped estimator, the regret scales as $\mathcal{O} (\lambda(\pmb{\mu}) M^2 \log^2(6/\Delta) \log T/\Delta )$. Similarly, with the median of means estimator, the regret scales as $\mathcal{O} (\lambda(\pmb{\mu}) \sigma^2 \log T/\Delta )$. Here, $M$ is the maximum log-divergence and $\sigma^2$ is the maximum chi-square divergence between any two experts.
When the gaps between the optimal expert and the sub-optimal ones are distributed uniformly at random in $[\Delta_{(2)}, 1]$ (with $\Delta_{(2)} > 0$), we show that the parameter $\lambda(\pmb{\mu})$ is at most $O(\log N)$ in expectation. In contrast, if the experts were used as separate arms, a naive application of the UCB-1~\cite{auer2002using} bounds would yield a regret scaling of $\mathcal{O} \left(\frac{N}{\Delta} \log T \right)$. This can be prohibitively large when the number of experts is large.
For ease of exposition of our results, let us re-index the experts using indices $\{(1),(2),...,(N) \}$ such that $0 = \Delta_{(1)} \leq \Delta_{(2)} \leq ... \leq \Delta_{(N)}$. The regret guarantees for our clipped estimator are provided under the following assumption.
\begin{assumption}
\label{assump1}
Assume the log-divergence terms~\eqref{def:mij} are bounded for all $i,j \in [N]$. Let $M = \max_{i,j} M_{ij}$.
\end{assumption}
We are now in a position to present one of our main theorems, which provides regret guarantees for Algorithm~\ref{alg:dUCB} using the estimator~\eqref{eq:est1}.
\begin{theorem}
\label{thm:r1}
Suppose Assumption~\ref{assump1} holds. Then the regret of Algorithm~\ref{alg:dUCB} at time $T$ using estimator~\eqref{eq:est1} is bounded as follows:
\begin{align*}
&R(T) \leq \frac{C_1M^2 \log^2 (6/\Delta _{(N)}) \log T}{\Delta_{(N)}} + \frac{\pi^2}{3} \left(\sum_{i = 2}^{N} \Delta_{(i)} \right) + \sum_{k = 2}^{N-1} \frac{C_1M^2 \log^2 (6/\Delta _{(k)}) \log T}{\Delta_{(k)}} \left(1 - \frac{\gamma(\Delta_{(k)})}{\gamma(\Delta_{(k+1)})} \right).
\end{align*}
Here, $C_1$ is a universal constant and $\gamma(x) = \frac{x^2}{\log ^2 (6/x)}$.
\end{theorem}
We defer the proof of Theorem~\ref{thm:r1} to Appendix~\ref{sec:clipped}. We now present Theorem~\ref{thm:r2} that provides regret guarantees for Algorithm~\ref{alg:dUCB} using the estimator~\eqref{eq:est2}. The theorem holds under the following assumption.
\begin{assumption}
\label{assump2}
Assume the chi-square-divergence terms~\eqref{def:sij} are bounded for all $i,j \in [N]$. Let $\sigma = \max_{i,j} \sigma_{ij}$.
\end{assumption}
\begin{theorem}
\label{thm:r2}
Suppose Assumption~\ref{assump2} holds. Then the regret of Algorithm~\ref{alg:dUCB} at time $T$ using estimator~\eqref{eq:est2} is bounded as follows:
\begin{align*}
&R(T) \leq \frac{C_2\sigma^2 \log T}{\Delta_{(N)}} + \sum_{k = 2}^{N-1} \frac{C_2\sigma^2 \log T}{\Delta_{(k)}} \left(1 - \frac{\Delta_{(k)}^2}{\Delta_{(k+1)}^2} \right) + \frac{\pi^2}{3} \left(\sum_{i = 2}^{N} \Delta_{(i)} \right).
\end{align*}
Here, $C_2$ is a universal constant.
\end{theorem}
The proof of Theorem~\ref{thm:r2} is deferred to Appendix~\ref{sec:mom}. We now delve deeper into the instance dependent terms in Theorems~\ref{thm:r1} and~\ref{thm:r2}; their proofs imply the following corollary.
\begin{corollary}
\label{cor:simple}
Let $\lambda(\pmb{\mu}) \triangleq 1 + \sum_{k = 2}^{N-1} \left(1 - \frac{\Delta_{(k)}^2}{\Delta_{(k+1)}^2} \right) $. We have the following regret bounds:
$(i)$ For Algorithm~\ref{alg:dUCB} with estimator~\eqref{eq:est1}, $R(T) \leq \mathcal{O} \left(\frac{M^2 \log^2 (6/\Delta _{(2)}) \log T}{\Delta_{(2)}} \min \left(\lambda(\pmb{\mu}) , \frac{1}{\Delta _{(2)}} \right) \right)$.
$(ii)$ Similarly for Algorithm~\ref{alg:dUCB} with estimator~\eqref{eq:est2}, $R(T) \leq \mathcal{O} \left(\frac{ \sigma^2 \log T}{\Delta _{(2)}} \min \left(\lambda(\pmb{\mu} ) , \frac{1}{\Delta _{(2)}} \right) \right)$.
\end{corollary}
Corollary~\ref{cor:simple} leads us to our next result: in Corollary~\ref{lem:mu} we show that when the gaps are uniformly distributed, $\lambda(\pmb{\mu})$ is $O(\log N)$ in expectation.
\begin{corollary}
\label{lem:mu}
Consider a generative model where $\Delta_{(3)}\leq...\leq\Delta_{(N)}$ are the order statistics of $N-2$ random variables drawn i.i.d. uniformly over the interval $[\Delta_{(2)},1]$. Let $p_{\Delta}$ denote the measure over these $\Delta$'s. Then we have the following:
($i$) For Algorithm~\ref{alg:dUCB} with estimator~\eqref{eq:est1}, ${\mathbb{E}}_{p_{\Delta}} \left[ R(T)\right] = \mathcal{O} \left(\frac{M^2 \log N \log^2(1/\Delta_{(2)}) \log T }{\Delta_{(2)}} \right)$.
($ii$) For Algorithm~\ref{alg:dUCB} with estimator~\eqref{eq:est2}, ${\mathbb{E}}_{p_{\Delta}} \left[ R(T)\right] = \mathcal{O} \left(\frac{\sigma^2 \log N \log T }{\Delta_{(2)}} \right)$.
\end{corollary}
\begin{remark}
Note that our guarantees do not have any term containing $K$, the number of arms. This dependence is implicitly captured in the divergence terms among the experts. In fact, when the number of arms $K$ is very large, we expect our divergence based algorithms to perform comparatively better than other algorithms whose guarantees explicitly depend on $K$. This phenomenon is observed in practice in our empirical validation on real-world datasets in Section~\ref{sec:sims}. We also show empirically that the term $\lambda(\pmb{\mu})$ grows very slowly with the number of experts on real-world datasets. This empirical result is included in Appendix~\ref{sec:more_emp}.
\end{remark}
\section{Related Work}
\label{sec:rwork}
Contextual bandits have been studied in the literature for several decades, starting from the simple setting of discrete contexts~\cite{bubeck2012regret}, through linear contextual bandits~\cite{chu2011contextual}, to the general experts setting~\cite{dudik2011efficient,agarwal2014taming,langford2008epoch,auer2002nonstochastic,beygelzimer2011contextual}. In this work, we focus on the experts setting. Contextual bandits with experts was first studied in the adversarial setting, where algorithms achieving the optimal regret scaling $\mathcal{O}(\sqrt{KT\log N})$ are known~\cite{auer2002nonstochastic}.
In this paper, we are more interested in the stochastic version of the problem, where the context and the rewards of the arms are generated from an unknown but fixed distribution. The first strategies to be explored in this setting were explore-then-commit and epsilon-greedy~\cite{langford2008epoch} style strategies, which achieve a regret scaling of $\mathcal{O}\left(\sqrt{K \log N} T^{2/3} \right)$ in the instance-independent case. Following this, there have been several efforts to design adaptive algorithms that achieve an $\mathcal{O}(\sqrt{KT\mathrm{polylog}(N) })$ instance-independent regret scaling; notable among these are~\cite{dudik2011efficient,agarwal2014taming}. These algorithms map the contextual bandit problem to supervised learning and assume access to cost-sensitive classification oracles. They have been heavily optimized in Vowpal Wabbit~\cite{wabbit}.
We study the contextual bandits with stochastic experts problem, where the experts are not deterministic functions mapping contexts to arms, but conditional distributions over the arms given a context. We show that we can achieve instance-dependent regret guarantees for this problem that can scale as $\mathcal{O}\left((\mathcal{M}\log N/\Delta) \log T\right)$ under some assumptions. Here, $\Delta$ is the gap between the mean rewards of the best and the second best expert, and $\mathcal{M}$ is a divergence term between the experts. Our algorithms are based on importance sampling estimators which leverage the information leakage among stochastic experts. We use an adaptive clipped importance sampling estimator for the mean rewards of the experts, which was introduced in~\cite{pmlr-v70-sen17a}. In~\cite{pmlr-v70-sen17a}, the estimator was studied in a best-arm/pure-exploration setting, while we study a cumulative regret problem where we need to adjust the parameters of the estimator in an online manner. In addition, we introduce an importance sampling based median of means style estimator that can leverage the information leakage among experts.
\section{Empirical Results}
\label{sec:sims}
\begin{figure*}
\centering
\subfloat[][]{\includegraphics[width = 0.29\linewidth]{drugnolegend.pdf}\label{fig:drug}} \hfill
\subfloat[][]{\includegraphics[width = 0.29\linewidth]{streamnolegend.pdf}\label{fig:stream}} \hfill
\subfloat[][]{\includegraphics[width = 0.29\linewidth]{letternolegend.pdf}\label{fig:letter}} \hfill
\subfloat[][]{\includegraphics[width = 0.10\linewidth]{legend.png}\label{fig:legend}}
\caption{ \small In all these plots, the progressive validation loss~\cite{agarwal2014taming} till time $T$ has been plotted as a function of time $T$. $(a)$ Performance of the algorithms on the Drug Consumption dataset~\cite{fehrman2017five}. $(b)$ Performance of the algorithms on the Stream Analytics dataset~\cite{stream}. $(c)$ Performance of the algorithms on the Letters dataset~\cite{frey1991letter}. $(d)$ Legend.}
\label{fig:sims_regret}
\end{figure*}
In this section, we empirically test our algorithms on three real-world multi-class classification datasets, against other state-of-the-art algorithms for contextual bandits with experts. Any multi-class classification dataset can be converted into a contextual bandit scenario, where the features are the contexts. At each time-step, the feature vector (context) of a sample point is revealed, following which the contextual bandit algorithm chooses one of the $K$ classes; the reward observed is $1$ if it is the correct class and $0$ otherwise. This is bandit feedback, as the correct class is never revealed if it is not chosen. This method has been widely used to benchmark contextual bandit algorithms~\cite{beygelzimer2011contextual,agarwal2014taming}, and is in fact implemented in Vowpal Wabbit~\cite{wabbit}.
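A minimal sketch of this replay protocol (function and variable names are ours, not from any library):

```python
import random

def replay_as_bandit(features, labels, choose_arm, seed=0):
    """Replay a multi-class dataset as a contextual bandit stream.
    choose_arm(x, rng) returns a class index; the learner observes
    reward 1 only if it picked the true label, which is otherwise
    never revealed (bandit feedback)."""
    rng = random.Random(seed)
    rewards = []
    for x, y in zip(features, labels):
        arm = choose_arm(x, rng)
        rewards.append(1 if arm == y else 0)
    return rewards
```

An oracle policy that always guesses the true class collects full reward; any bandit algorithm can be plugged in as `choose_arm` and scored by its cumulative reward (equivalently, its progressive validation loss).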
Our algorithm is run in batches. At the start of each batch, we add experts trained on prior data through cost-sensitive classification oracles~\cite{beygelzimer2009offset} and also update the divergence terms between experts, which are estimated from the data observed \textit{so far}. During each batch, Algorithm~\ref{alg:dUCB} is deployed with the current set of experts. The pseudo-code for this procedure is provided in Algorithm~\ref{alg:batched}.
\begin{algorithm}
\caption{Batched D-UCB with cost-sensitive classification experts}
\begin{algorithmic}[1]
\State Let $\Pi = \{\pi_1\}$, where $\pi_1$ is an expert that chooses arms uniformly at random. For time steps $t = 1$ to $3K$, choose an arm sampled from expert $\pi_1$. Set $t = 3K+1$.
\State Add experts to $\Pi$ trained on observed data and update divergences.
\While {$t \leq T$}
\For {$s = t$ to $t + \mathcal{O}(\sqrt{t})$}
\State Deploy Algorithm~\ref{alg:dUCB} with experts in $\Pi$.
\EndFor
\State Let $t = t + \mathcal{O}(\sqrt{t})$. Add experts to $\Pi$ trained on observed data and update divergences.
\EndWhile
\end{algorithmic}
\label{alg:batched}
\end{algorithm}
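The growing $\mathcal{O}(\sqrt{t})$ batch lengths of Algorithm~\ref{alg:batched} can be sketched as follows (a simplified Python rendering; the constant factor and the initial $3K$ warm-up phase are omitted for brevity):

```python
import math

def batch_boundaries(T, t0=1):
    """Partition steps t0..T into consecutive batches, where the batch
    starting at time t runs for about sqrt(t) steps. Algorithm alg:batched
    retrains experts and re-estimates divergences at each boundary."""
    t, ends = t0, []
    while t <= T:
        step = max(1, int(math.sqrt(t)))
        end = min(t + step - 1, T)
        ends.append(end)
        t = end + 1
    return ends
```

Because batch lengths grow like $\sqrt{t}$, only $\mathcal{O}(\sqrt{T})$ retraining rounds are needed over a horizon of $T$ steps.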
We use XgBoost~\cite{chen2016xgboost} and logistic regression in scikit-learn~\cite{buitinck2013api} with calibration as the base classifiers for our cost-sensitive oracles. Bootstrapping is used to generate different experts. At the start of each batch, $4$ new experts are added. The constants are set as $c_1 = 1, c_2 = 4$ and $c_3 = 2$ in practice. All the settings are held fixed over all three datasets, {\it without any parameter tuning}. We provide more details in Appendix~\ref{sec:more_emp}. In the appendix we also show that the gap dependent term in our theoretical bounds grows much slower than the corresponding term in the UCB-1 bounds (Fig.~\ref{fig:terms}) as the number of experts increases in the stream analytics dataset~\cite{stream}. An implementation of our algorithm can be found \href{https://github.com/rajatsen91/CB_StochasticExperts}{here}\footnote{https://github.com/rajatsen91/CB\_StochasticExperts}.
We compare against Vowpal Wabbit implementations of the following algorithms: $(i)$ $\epsilon$-greedy~\cite{langford2008epoch} - parameter set at '--epsilon 0.06'. $(ii)$ First (greedily selects the best expert) - parameter set at '--first 100'. $(iii)$ Online Cover~\cite{agarwal2014taming} - parameter set at '--cover 5'. $(iv)$ Bagging (simulates Thompson sampling through bagged classifiers) - parameter set at '--bag 7'.
{\bf Drug Consumption Data: } This dataset~\cite{fehrman2017five} is part of the UCI repository. It has data from $1885$ respondents with $32$-dimensional continuous features (contexts) and their history of drug use. There are $19$ drugs under study ($19$ arms). For each entry, if the bandit algorithm selects the drug most recently used, the reward observed is $1$; otherwise the reward is $0$. The performance of the algorithms is shown in Fig.~\ref{fig:drug}. We see that D-UCB (Algorithm~\ref{alg:batched}) with the median of means estimator clearly performs the best in terms of average loss, followed by D-UCB with the clipped estimator. D-UCB with median of means converges to an average loss of $0.4$ within $1885$ samples.
{\bf Stream Analytics Data: } This dataset~\cite{stream} has been collected using a stream analytics client. It has $10000$ samples with $100$-dimensional mixed features (contexts). There are $10$ classes ($10$ arms). For each entry, if the bandit algorithm selects the correct class, the reward observed is $1$; otherwise the reward is $0$. The performance of the algorithms is shown in Fig.~\ref{fig:stream}. On this dataset, bagging performs the best, closely followed by the two versions of D-UCB (Algorithm~\ref{alg:batched}). Bagging is a strong competitor empirically; however, it lacks theoretical guarantees. Bagging converges to an average loss of $8\%$, while D-UCB with median of means converges to an average loss of $10\%$.
{\bf Letters Data: } This dataset~\cite{frey1991letter} is a part of the UCI repository. It has $20000$ samples of hand-written English letters, each with $17$ hand-crafted visual features (contexts). There are $26$ classes ($26$ arms) corresponding to the $26$ letters. For each entry, if the bandit algorithm selects the correct letter, the reward observed is $1$; otherwise, a reward of $0$ is observed. The performance of the algorithms is shown in Fig.~\ref{fig:letter}. Both versions of D-UCB significantly outperform the others. The median of moments based version converges to an average loss of $0.62$, while the clipped version converges to an average loss of $0.68$.
\section{Clipped Estimator}
\label{sec:clipped}
As mentioned in Section~\ref{sec:estimators}, the motivating equation guiding the design of our estimators is Eq.~\eqref{eq:infoleakage}. This equation tells us that even when the statistics of the samples observed are governed by the distribution of $(X,V,Y)$ under expert $j$, we can infer the mean of expert $k$. Such observations were made in~\cite{lattimore2016causal,pmlr-v70-sen17a} in the context of best arm identification problems. Suppose we observe $t$ samples under expert $\pi_j$. Guided by Eq.~\eqref{eq:infoleakage}, one might come up with the following naive importance sampled estimator for the mean under expert $k$ ($\mu_k$):
\begin{align*}
\hat{\mu}_k = \frac{1}{t}\sum_{s = 1}^{t} Y_j(s)\frac{\pi_k(V_j(s) \vert
X_j(s))}{\pi_j(V_j(s) \vert X_j(s))}.
\end{align*}
However, it is not possible to derive good confidence intervals for the above estimator: even though the reward variable $Y$ is bounded, the reweighting term $\pi_k(V_j(s) \vert
X_j(s))/\pi_j(V_j(s) \vert X_j(s))$ can be unbounded and in some cases heavy-tailed. The key idea is to design robust estimators that have good variance properties. One approach to controlling the variance of such estimators is to clip the samples that are too large. This leads to the following clipped estimator~\cite{pmlr-v70-sen17a}:
\begin{align*}
\label{eq:clippedtwo}
\hat{\mu}_k = &\frac{1}{t}\sum_{s = 1}^{t} Y_j(s)\frac{\pi_k(V_j(s) \vert
X_j(s))}{\pi_j(V_j(s) \vert X_j(s))} \times \mathds{1} \left(\frac{\pi_k(V_j(s) \vert
X_j(s))}{\pi_j(V_j(s) \vert X_j(s))} \leq \eta_{kj} \right). \numberthis
\end{align*}
The clipping makes the estimator biased; however, it helps control the variance. The clipper level $\eta_{kj}$, which depends on the relationship between $\pi_k$ and $\pi_j$, needs to be set carefully to control the bias-variance trade-off. In~\cite{pmlr-v70-sen17a}, it was shown that if the log-divergence measure $M_{kj}$ (defined in~\eqref{def:mij}) is bounded, then a good choice is $\eta_{kj} = 2 \log(2/\epsilon) M_{kj}$ if we want an additive bias of at most $\epsilon (t)/2$ (Theorem 3 in~\cite{pmlr-v70-sen17a}).
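To make the construction concrete, here is a minimal NumPy sketch of the clipped estimator in Eq.~\eqref{eq:clippedtwo}; the function name and array-based interface are ours, and not part of any released implementation.

```python
import numpy as np

def clipped_estimate(y, p_k, p_j, eta):
    """Clipped importance-sampled estimate of expert k's mean reward
    from samples logged under expert j.

    y   : rewards Y_j(s) observed under expert j
    p_k : probabilities pi_k(V_j(s) | X_j(s))
    p_j : probabilities pi_j(V_j(s) | X_j(s))
    eta : clipper level eta_{kj}
    """
    y, p_k, p_j = map(np.asarray, (y, p_k, p_j))
    w = p_k / p_j                       # importance weights
    return np.mean(y * w * (w <= eta))  # terms with w > eta are clipped to 0
```

Choosing `eta` as $2\log(2/\epsilon)M_{kj}$, as in the text, keeps every summand bounded at the price of an additive bias of at most $\epsilon(t)/2$.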
This idea can be generalized to estimating the mean of expert $k$ while observing samples from all the other experts. This leads to the clipped estimator in Eq.~\eqref{eq:est1}. Lemma~\ref{lem:conf1} provides confidence guarantees for this estimator. The proof of this lemma follows from Theorem 4 in~\cite{pmlr-v70-sen17a}, but we include it here for completeness. In what follows, we will abbreviate ${\mathbb{E}}_{p_j(\cdot)}[\cdot]$ as ${\mathbb{E}}_j[\cdot]$. In this section, let $\hat{\mu}_k(t) = \hat{\mu}^c_k(t)$.
\begin{proof}[Proof of Lemma~\ref{lem:conf1}]
Note that from Lemma 3 in~\cite{pmlr-v70-sen17a} it follows that:
\begin{align}
\label{eq:means_all}
{\mathbb{E}}_j\left[\hat{\mu}_k(t) \right] \leq \mu_{k} \leq {\mathbb{E}}_j\left[\hat{\mu}_k(t) \right] + \frac{\epsilon(t)}{2}.
\end{align}
For the sake of analysis, let us consider the rescaled version $\bar{\mu}_{k}(t) = (Z_k(t)/t)\hat{\mu}_{k}(t)$ which can be written as:
\begin{align*}
\label{eq:scaledestimator}
&\bar{\mu}_{k}(t) = \frac{1}{t}\sum_{j = 0}^{N} \sum_{s \in \mathcal{T}_j} \frac{1}{M_{kj}}Y_j(s)\frac{\pi_k(V_j(s) \vert X_j(s))}{\pi_j(V_j(s) \vert X_j(s))} \times\mathds{1}\left\{ \frac{\pi_k(V_j(s) \vert X_j(s))}{\pi_j(V_j(s) \vert X_j(s))} \leq 2\log(2/\epsilon (t))M_{kj}\right\}. \numberthis
\end{align*}
Since $Y_j(s) \leq 1$, every random variable in the sum in (\ref{eq:scaledestimator}) is bounded by $2 \log (2/\epsilon (t))$.
Let $\bar{\mu}_k = \mathbb{E} [\bar{\mu}_{k}(t)]$.
Therefore, by the Chernoff bound, we have the following chain:
\begin{align}
\label{eq:fullchernoff}
& {\mathbb{P}} \left( \lvert \bar{\mu}_k(t) - \bar{\mu}_k \rvert \geq \delta \right) \leq 2\exp \left( -\frac{\delta^2t}{8 (\log (2/\epsilon (t)))^2}\right) \nonumber \\
&\implies {\mathbb{P}} \left( \lvert \bar{\mu}_k(t) \frac{t}{Z_k(t)} -\bar{\mu}_k \frac{t}{Z_k(t)} \rvert \geq \delta \frac{t}{Z_k(t)} \right) \leq 2\exp \left( -\frac{\delta^2t}{8 (\log (2/\epsilon (t)))^2}\right) \nonumber \\
&\implies {\mathbb{P}} \left( \lvert \hat{\mu}_k(t) - \hat{\mu}_k \rvert \geq \delta \frac{t}{Z_k(t)} \right) \leq 2\exp \left( -\frac{\delta^{2}t}{8 (\log (2/\epsilon (t)))^2}\right) \nonumber \\
&\implies {\mathbb{P}} \left( \lvert \hat{\mu}_k(t) - \hat{\mu}_k \rvert \geq \delta \right) \leq 2\exp \left( -\frac{\delta^{2}t}{8 (\log (2/\epsilon (t)))^2 } \left(\frac{Z_k(t)}{t} \right)^2\right)
\end{align}
Now we can combine Equations (\ref{eq:fullchernoff}) and (\ref{eq:means_all}) to obtain:
\begin{align*}
&{\mathbb{P}} \left(\mu_k -\delta - \epsilon (t)/2 \leq \hat{\mu}_k(t) \leq \mu_k + \delta \right) \geq 1 - 2\exp \left( -\frac{\delta^{2}t}{8 (\log (2/\epsilon (t)))^2 } \left(\frac{Z_k(t)}{t} \right)^2\right)
\end{align*}
\end{proof}
Now, we will prove Theorem~\ref{thm:r1}. Note that we will re-index the experts such that $0 = \Delta_{(1)} \leq \Delta_{(2)} \leq \dotsc \leq \Delta_{(N)}$. Throughout this proof, $U_k(t)$, $\hat{\mu}_k(t)$ and $s_k(t)$ in Algorithm~\ref{alg:dUCB} are defined as in Equations~\eqref{eq:est1} and~\eqref{eq:ucb1}. Before we proceed, let us prove some key lemmas.
First we prove that with high enough probability the upper confidence bound estimate for the optimal expert $k^*$ is greater than the true mean $\mu^*$.
\begin{lemma}
\label{lem:ubound} We have the following confidence bound at time $t$,
\begin{align*}
{\mathbb{P}} \left( U_{k^*}(t) \geq \mu^* \right) \geq 1 - \frac{1}{t^2}.
\end{align*}
\end{lemma}
\begin{proof}
We have the following chain,
\begin{align*}
&{\mathbb{P}} \left( U_{k^*}(t) \geq \mu^* \right) = {\mathbb{P}} \left( \hat{\mu}_{k^*}(t) \geq \mu^* - s_{k^*}(t) \right) = {\mathbb{P}} \left( \hat{\mu}_{k^*}(t) \geq \mu^* -\frac{3\beta(t)}{2} \right) \geq 1 - \frac{1}{t^{2}}.
\end{align*}
The last inequality is obtained by setting $\delta = \beta(t)$ and $\epsilon(t) = \beta(t)$ in Lemma~\ref{lem:conf1}.
\end{proof}
Next, we prove that for a large enough time $t$, the UCB estimate of the $k^{th}$ expert is less than $\mu^*$.
\begin{lemma}
\label{lem:lbound} We have the following confidence bound at time $t > \frac{144M^2 \log^2 (6/\Delta _k)\log T}{\Delta_k^2}$,
\begin{align*}
{\mathbb{P}} \left( U_{k}(t) < \mu^* \right) \geq 1 - \frac{1}{t^2}.
\end{align*}
\end{lemma}
\begin{proof}
We have the following chain,
\begin{align*}
{\mathbb{P}} \left( U_{k}(t) > \mu^* \right) &= {\mathbb{P}} \left(\hat{\mu}_k(t) > \mu^* - \frac{3 \beta(t)}{2} \right) \\
& \stackrel{(i)}{\leq} {\mathbb{P}} \left(\hat{\mu}_k(t) > \mu^* - \frac{\Delta_k}{2}\right) \\
& \stackrel{(ii)}{\leq} {\mathbb{P}} \left(\hat{\mu}_k(t) > \mu_k + \frac{\Delta_k}{2} \right) \\
& \stackrel{(iii)}{\leq} \frac{1}{t^2}.
\end{align*}
Here, $(i)$ follows from the fact that $\frac{Z_k(t)}{t} \geq 1/M$, $t > \frac{144M^2 \log^2 (6/\Delta _k)\log T}{\Delta_k^2}$, and the definition of $\beta(t)$; $(ii)$ follows from the definition of $\Delta_k$. Finally, the concentration bound in $(iii)$ follows from Lemma~\ref{lem:conf1}.
\end{proof}
Note that Lemmas~\ref{lem:ubound} and~\ref{lem:lbound} together imply,
\begin{align}
\label{eq:badevent}
{\mathbb{P}} \left(k(t) = k \right) \leq \frac{2}{t^2}
\end{align}
for $k \neq k^*$ and $t > \frac{144M^2 \log^2 (6/\Delta _k)\log T}{\Delta_k^2}$.
\begin{proof}[Proof of Theorem~\ref{thm:r1}]
Let $T_k = \frac{144M^2 \log^2 (6/\Delta _{(k)})\log T}{\Delta_{(k)}^2}$ for $k = 2,..,N$. The regret of the algorithm can be bounded as,
\begin{align*}
&R(T) \leq \Delta_{(N)}T_{N} + \sum_{k = 0}^{N-3} \sum_{t = T_{N - k }}^{T_{N - k -1}} \left( \Delta_{(N-k-1)} {\mathbb{P}}\left( k(t) \in \{(1),\dotsc,(N-k-1)\} \right) + \sum_{i = N-k}^{N} \Delta_{(i)} {\mathbb{P}}\left( k(t) = (i) \right) \right) \\
&\leq \Delta_{(N)}T_{N} + \sum_{k = 0}^{N-3} \left( \Delta_{(N-k-1)} \left(T_{N - k -1} - T_{N - k} \right) + \sum_{t = T_{N - k}}^{T_{N - k -1}} \sum_{i = N-k}^{N} \frac{2\Delta_{(i)}}{t^2}\right) \\ \numberthis \label{eq:uselater}
& \leq \sum_{k = 0}^{N-3} \frac{144M^2 \log^2 (6/\Delta _{(N - k - 1)})\log T}{\Delta_{(N - k -1)}} \left(1 - \frac{\gamma(\Delta_{(N - k -1)})}{\gamma(\Delta_{(N - k)})} \right) + \Delta_{(N)}T_{N} + \frac{\pi^2}{3} \left(\sum_{i = 2}^{N} \Delta_{(i)} \right)\\
&= \frac{144M^2 \log^2 (6/\Delta _{(N)}) \log T}{\Delta_{(N)}} + \frac{\pi^2}{3} \left(\sum_{i = 2}^{N} \Delta_{(i)} \right) +\sum_{k = 2}^{N-1} \frac{144M^2 \log^2 (6/\Delta _{(k)}) \log T}{\Delta_{(k)}} \left(1 - \frac{\gamma(\Delta_{(k)})}{\gamma(\Delta_{(k+1)})} \right).
\end{align*}
Here, $\gamma(x) = \frac{x^2}{\log ^2 (6/x)}$.
\end{proof}
\section{Introduction}
\label{sec:intro}
The diameter of a finite group $G$ with respect to a generating set $S$ is the graph diameter of the Cayley graph $\Gamma(G,S)$ of $G$ with respect to $S$. Consider the semidirect product of the two cyclic groups $\mathbb{Z}_m$ and $\mathbb{Z}_n$ given by the presentation
$$G_{m,n,k}:= \mathbb{Z}_m \ltimes_k \mathbb{Z}_n = \langle x,y \,| \,x^m=y^n=1, \, x^{-1} yx = y^k \rangle,$$ where
$\mathbb{Z}_m = \langle x \rangle$, $\mathbb{Z}_n = \langle y \rangle$, and $k$ is a multiplicative unit modulo $n$. We define the diameter of $G_{m,n,k}$ (in symbols $\text{diam}(G_{m,n,k})$) to be the graph diameter of
$\Gamma(G_{m,n,k},\{x,x^{-1},y,y^{-1}\}).$ The diameter of finite groups and their bounds have been widely studied, especially from the viewpoint of efficient communication networks (see~\cite{B1,C1} and the references therein). In particular, the networks arising from the Cayley graphs of groups in the subfamily $\{G_{ck,c^2{\ell}, c\ell+1}\}$, also known in computer science parlance as supertoroids, have been extensively analyzed~\cite{B1,D4,D3,LV1} for their properties. For example, in~\cite{D1,D2}, it was shown that for $c \geq 8$, $\text{diam}(G_{ck,c^2{\ell}, c\ell+1})= [ck/2]+[c\ell/2]$. However, to our knowledge, the diameter bounds for arbitrary groups in $\{G_{m,n,k}\}$ have not been studied. This is the main motivation behind undertaking such an analysis in this paper.
Since every element $g \in G_{m,n,k}$ can be expressed uniquely as $g = x^a y^b$, a path $P$ from $1$ to an element $g \in G_{m,n,k}$ takes the form $g = \prod_{i=1}^t x^{a_i} y^{b_i}$. Such a path is reduced if $a_i \not\equiv 0 \pmod{m}$, for $2 \leq i \leq t$, and $b_i \not\equiv 0\pmod{n}$, for $1 \leq i \leq t-1$. We define $t$ to be the \textit{syllable length} of the reduced path $P$ (as above), and $\sum_{i=1}^{t} |a_i| + |b_i|$ to be its \textit{length} $l(P)$. Denoting by $\mathcal{P}_g$ the collection of all reduced paths in $G$ from $1$ to $g$, we have
$\|x^a y^b\| = \min \{l(P): P \in \mathcal{P}_g \},$ where $\| \, \|$ is the usual word norm in $G_{m,n,k}$. Thus, the diameter of $G_{m,n,k}$ is given by
\[
\text{diam}(G_{m,n,k}) = {\mathrm {max}} \{ \|x^a y^b\|: 0 \leq a \leq m-1,\, 0 \leq b \leq n-1 \}.
\]
\noindent It is apparent that $[m/2] \leq \text{diam}(G_{m,n,k}) \leq [m/2] + [n/2]$. In reality, $\text{diam}(G_{m,n,k}) = [m/2] + \delta$, where $\delta$ is significantly smaller than $[n/2]$. For example, we can show that $\text{diam}(G_{60,61,2}) = 31$ (see Section~\ref{sec:compute_weights}). In order to obtain a better bound for $\text{diam}(G_{m,n,k})$, we begin by noting that
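Values such as $\text{diam}(G_{60,61,2}) = 31$ can be verified computationally by a breadth-first search over the Cayley graph. The following Python sketch (our own illustration, not part of the paper's argument) encodes each element in its normal form $x^a y^b$ as a pair $(a,b)$ and uses the relation $y x = x y^k$ to right-multiply by the generators:

```python
from collections import deque

def diam_metacyclic(m, n, k):
    """Graph diameter of Gamma(G_{m,n,k}, {x, x^{-1}, y, y^{-1}}), where
    G_{m,n,k} = Z_m ⋉_k Z_n and elements x^a y^b are stored as pairs (a, b)."""
    kinv = pow(k, -1, n)  # k is a unit mod n
    def neighbours(a, b):
        # From y x = x y^k we get (x^a y^b) x = x^{a+1} y^{bk}, and similarly
        # (x^a y^b) x^{-1} = x^{a-1} y^{b k^{-1}}.
        yield ((a + 1) % m, (b * k) % n)     # right-multiply by x
        yield ((a - 1) % m, (b * kinv) % n)  # right-multiply by x^{-1}
        yield (a, (b + 1) % n)               # right-multiply by y
        yield (a, (b - 1) % n)               # right-multiply by y^{-1}
    dist = {(0, 0): 0}
    queue = deque([(0, 0)])
    while queue:
        a, b = queue.popleft()
        for nb in neighbours(a, b):
            if nb not in dist:
                dist[nb] = dist[(a, b)] + 1
                queue.append(nb)
    return max(dist.values())
```

Here `pow(k, -1, n)` computes $k^{-1}$ in ${\mathbb Z}_n^{\times}$ (Python 3.8+); since Cayley graphs are vertex-transitive, the eccentricity of the identity equals the diameter.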
$$ \prod_{i=1}^t x^{a_i} y^{b_i} = x^{a_1 + \dotsc + a_t} y^{b_1 k^{a_2 + \dotsc + a_t} + \dotsc + b_{t-1} k^{a_t} + b_t}.$$
Consequently, the problem of computing $\| x^a y^b \|$ reduces to the following nonlinear optimization problem in the pair of rings $(\mathbb{Z}_m,\mathbb{Z}_n)$:
\[ \tag{$\dagger$} \begin{array}{rlr}
\text{minimize} & \displaystyle \sum_{i=1}^{t} |a_i| + |b_i| \, & \text{(in }\mathbb{Z}), \\
\text{subject to} & \displaystyle a_1+ \ldots+a_t \equiv a & \pmod{m}, \\ \\
\text{and} & b_1 k^{a_2 + \dotsc + a_t} + \dotsc + b_{t-1} k^{a_t} + b_t \equiv b & \pmod{n}.
\end{array} \]
In this paper, we reduce the problem $(\dagger)$ first to a combinatorial problem in $\mathbb{Z}_n$ (Propositions~\ref{prop:first_red_step} and~\ref{prop:second_red_step}), whose solution yields two parameters $\text{wt}(n,k;\alpha)$ and $\deg(n,k;\alpha)$ (Section~\ref{sec:comb_tools}), where $\alpha = \text{ord}(k)$. Using these parameters, we obtain our main result (Theorem~\ref{thm:main}), which gives a bound for $\text{diam}(G_{m,n,k})$.
\begin{theorem*}[Main theorem]
\label{thm:main}
Let $G_{m,n,k}$ be the split metacyclic group given by the presentation
\[
G_{m,n,k} = \langle x, y : x^m = 1 = y^{n}, x^{-1} yx = y^k \rangle,
\]
where $k$ has order $\alpha$ in the group ${\mathbb Z}^{\times}_n$ of units. If $\alpha$ is even and $k^{\alpha/2} \equiv -1 \pmod{n}$, then
\[
\text{diam}(G_{m,n,k}) \leq
\begin{cases}
[m/2] + \text{wt}(n, k; \alpha), &\mbox{if } \alpha \neq m, \\
[m/2] + \text{wt}(n, k; \alpha) + \deg(n,k;\alpha), &\mbox{if } \alpha = m.
\end{cases}
\]
\end{theorem*}
\noindent Based on our observations, we believe that $\text{diam}(G_{m,n,k}) \leq [m/2] + \text{wt}(n, k; \alpha)$ should hold true, irrespective of the conditions on $(m,n,k,\alpha)$. As a direct application of our main result, we obtain an upper bound for the diameter of an arbitrary metacyclic group (Corollary~\ref{cor:diam_bound_for_primes}).
In practice, it is difficult to compute $\text{wt}(n,k;\alpha)$, or provide a reasonable upper bound for it. Nevertheless, we derive the following result.
\begin{theorem*}
If $p$ is an odd prime and $k \in \mathbb{Z}_{p^n}^{\times}$ with $\text{ord}(k) = p^{n-1}(p-1)$, then
\[
{\frac {p + 4 + \epsilon(p)}{4}} + (n-2)p \leq \text{wt}(p^n, k; p^{n-1}(p-1)) \leq {\frac {p + 4 + \epsilon(p)}{4}} + (n-1)p,
\]
where $\epsilon(p) = +1$ if $p \equiv 1 \pmod{4}$, and $\epsilon(p) = -1$ otherwise.
\end{theorem*}
\noindent A similar bound is obtained for $p = 2$. These bounds indicate that the growth of $\text{wt}(n,k;\alpha)$ is linear in $n$.
\section{Some combinatorial tools}
\label{sec:comb_tools}
In this section, we build some combinatorial tools to estimate the diameter of the split metacyclic group. We will focus our attention on the finite ring ${\mathbb Z}_n$ for some $n \geq 3$, and consider a unit $k \in \mathbb{Z}_n^{\times}$. From here on, we will fix the notation that $\text{ord}(k) = \alpha \geq 2$.
\begin{definition}
\label{defn:omega_set}
Given a pair, ${\underline i} : i_1, i_2, \dotsc, i_r$ and ${\underline \lambda} : \lambda_1, \lambda_2, \dotsc, \lambda_r$, of sequences of integers such that
\[
\alpha - 1 \geq i_1 > i_2 > \dotsc > i_r \geq 0 \text{ and } \lambda_j \geq 0, \text{ for } 1 \leq j \leq r,
\]
we define
\[
\Omega ({\underline i}, {\underline \lambda}) = \{ b_1 k^{i_1} + \dotsc + b_{r-1} k^{i_{r-1}} + b_r k^{i_r} \pmod{n} \, :\, |b_i| \leq \lambda_i, \, 1 \leq i \leq r\}
\]
We will refer to $i_1$ as the \textit{degree} of the sequence ${\underline i}$, and to the smallest nonzero entry among the $i_j$, for $1 \leq j \leq r$, as its \textit{co-degree}; these are denoted by
$\text{deg}({\underline i})$ and $\text{codeg}({\underline i})$, respectively.
\end{definition}
\noindent Since $k \in {\mathbb Z}_n^{\times}$ we have $k {\mathbb Z}_n = {\mathbb Z}_n$, and hence for each sequence $\underline i$, there exists a finite sequence $\underline \lambda$ (mod $n$) such that $\Omega ({\underline i}, {\underline \lambda}) = {\mathbb Z}_n$. This leads us to the following definition.
\begin{definition}
Given a pair, $\underline{i}$ and $\underline{\lambda}$, of sequences as in Definition~\ref{defn:omega_set}, we define:
\begin{enumerate}[(i)]
\item The \textit{weight of $\Omega = \Omega ({\underline i}, {\underline \lambda})$} as
$$\text{wt}(\Omega) := \lambda_1 + \lambda_2 + \dotsc + \lambda_r.$$
\item The \textit{weight of $(n, k)$ with respect to ${\underline i}$} as
$$\text{wt}(n, k; {\underline i}) := \min \{ \text{wt}(\Omega ({\underline i}, {\underline \lambda})) : \Omega ({\underline i}, {\underline \lambda}) = {\mathbb Z}_n \}.$$
\item The \textit{weight of $(n, k)$ of level $r$} as
$$\text{wt}(n, k; r) := {\mathrm {min}} \{ \text{wt}(n, k; {\underline i}) : {\underline i} : \alpha - 1 \geq i_1 > i_2 > \dotsc > i_r \geq 0 \}.$$
\end{enumerate}
\end{definition}
\noindent From the definition of $\text{wt}(n, k; \alpha)$, it is clear that $\text{wt}(n, k; \alpha) = \text{wt}(n, k^{\prime}; \alpha)$, whenever $k$ and $k^{\prime}$ generate the same cyclic subgroup of ${\mathbb Z}^{\times}_n$. In our calculations, we will require sequences of the form
\[
\alpha - 1 \geq i_1 > i_2 > \dotsc > i_r = 0,
\]
which we call \textit{reduced sequences}.
\begin{definition}
Given a sequence ${\underline i}$ as in Definition~\ref{defn:omega_set}, we define the \textit{dual $I(\underline{i})$ of $\underline{i}$} by
\[
\alpha - 1 \geq j_1 = \alpha - (i_1 - i_2) > j_2 = \alpha - (i_1 - i_3) > \dotsc > j_{r-1} = \alpha - (i_1 - i_r) > j_r = 0
\]
\end{definition}
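As a quick mechanical illustration (ours, not part of the text), the dual can be computed directly from the definition:

```python
def dual(i_seq, alpha):
    """Dual I(i) of a sequence i_1 > i_2 > ... > i_r:
    j_t = alpha - (i_1 - i_{t+1}) for t < r, and j_r = 0."""
    i1 = i_seq[0]
    return [alpha - (i1 - it) for it in i_seq[1:]] + [0]
```

For a reduced input ($i_r = 0$), the output is again reduced, with the same length $r$, as Proposition~\ref{lem:dual_I} asserts.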
\noindent In the following proposition, we show that $I(\underline{i})$ is a reduced sequence of length $r$ having the same weight as ${\underline i}$.
\begin{proposition}
\label{lem:dual_I}
Consider sequences ${\underline i}$ and ${\underline \lambda}$ as in Definition~\ref{defn:omega_set} such that $\Omega ({\underline i}, {\underline \lambda}) = {\mathbb Z}_n$. Then
\begin{enumerate}[(i)]
\item $\Omega (I(\underline{i}), {\underline \lambda^{\prime}}) = {\mathbb Z}_n,$ where ${\underline \lambda^{\prime}}: \lambda_2, \lambda_3, \dotsc, \lambda_r, \lambda_1$, and
\item $\text{wt}(n, k; {\underline i}) = \text{wt}(n, k; I (\underline{i}))$.
\end{enumerate}
\end{proposition}
\begin{proof} Given any $b$ (mod $n$), there exist $b_1, \dotsc, b_r$ such that $|b_s| \leq \lambda_s$, for $1 \leq s \leq r$, and $b = b_1 k^{i_1} + b_2 k^{i_2} + \dotsc + b_{r-1} k^{i_{r-1}} + b_r k^{i_r}$. So, we have
\[
b k^{\alpha - i_1} = b_1 + b_2 k^{\alpha - (i_1 - i_2)} + \dotsc + b_r k^{\alpha - (i_1 - i_r)} \in \Omega ({\underline j}, {\underline \lambda^{\prime}}).
\]
Hence, $k^{\alpha - i_1} \Omega ({\underline i}, {\underline \lambda}) \subseteq \Omega (I({\underline i}), {\underline \lambda^{\prime}})$, and as $k$ is a unit, we have $k^{\alpha - i_1} \Omega ({\underline i}, {\underline \lambda}) = {\mathbb Z}_n$, which establishes (i).
For (ii), note that if $\text{wt}(n, k; {\underline i}) = \lambda$, then $\Omega ({\underline i}, {\underline \lambda}) = {\mathbb Z}_n$, for some sequence ${\underline \lambda}: \lambda_1, \lambda_2, \dotsc, \lambda_r$ with $\lambda = \lambda_1 + \lambda_2 + \dotsc + \lambda_r$ being the least possible value. As seen above, we have $\Omega ({\underline j}, {\underline \lambda^{\prime}}) = {\mathbb Z}_n$, and furthermore ${\underline \lambda^{\prime}}: \lambda_2, \lambda_3, \dotsc, \lambda_r, \lambda_1$ yields the same weight $\lambda$. Thus, we have $\text{wt}(n, k; I({\underline i})) \leq \text{wt}(n, k; {\underline i})$.
Suppose that $\mu = \text{wt}(n, k; I({\underline i})) < \text{wt}(n, k; {\underline i})$. Then there exists a sequence ${\underline \mu}: \mu_2, \dotsc, \mu_r, \mu_1$ such that $\Omega (I({\underline i}), {\underline \mu}) = {\mathbb Z}_n$. Multiplying by $k^{i_1}$, we get $\Omega ({\underline i}, {\underline \mu^{\prime}}) = {\mathbb Z}_n$, where ${\underline \mu^{\prime}} = (\mu_1, \mu_2, \dotsc, \mu_r)$. As this contradicts the minimality of $\text{wt}(n, k; {\underline i}) = \lambda$, (ii) follows.
\end{proof}
\noindent From Proposition~\ref{lem:dual_I}, it follows that it suffices to consider only reduced sequences while computing $\text{wt}(n, k; r)$.
\begin{remark}
For any length $r \leq \alpha$, we can see that $\text{wt}(n, k; r) \leq [n/2]$, by considering the sequence $\underline{\lambda}: \lambda_1,\ldots,\lambda_r$, where $\lambda_i = [n/2]$ for a fixed $i$, and $\lambda_j = 0$, for the indices $j \neq i$.
\end{remark}
\begin{definition}
Given a reduced sequence $\underline{i}:\alpha - 1 \geq i_1 > i_2 > \dotsc > i_r = 0$ of length $r$, we define the
\textit{codual J($\underline{i}$) of \underline{i}} as
\[
j_1 = \alpha - i_{r-1} > j_2 = i_1 - i_{r-1} > \dotsc > j_{r-1} = i_{r-2} - i_{r-1} > j_r = 0.
\]
\end{definition}
\noindent Note that the co-dual of a non-reduced sequence is a reduced sequence obtained by subtracting its smallest positive entry called its \textit{codegree}.
\begin{definition}
Given a sequence ${\underline i}: \alpha - 1 \geq i_1 > i_2 > \dotsc > i_r \geq 0$, a sequence ${\underline j} = \alpha - 1 \geq j_1 > j_2 > \dotsc > j_q \geq 0$ of length $q \geq r$ is said to be \textit{finer than ${\underline i}$} (in symbols ${\underline i} \preceq {\underline j}$), if it is obtained from $\underline{i}$ by adding one or more terms.
\end{definition}
\begin{remark}
\label{rem:poset}
Note that the relation $\preceq$ defines a partial order on the collection $${\mathcal P}(\alpha - 1) := \{\underline i \, : \,\alpha - 1 \geq i_1 > i_2 > \dotsc > i_r \geq 0\}.$$ We will also be interested in the subcollection ${\mathcal P}^{\prime}(\alpha - 1)$ of all reduced sequences, which is a natural subposet of ${\mathcal P}(\alpha - 1)$.
\end{remark}
\begin{proposition}
Consider the posets $({\mathcal P}(\alpha - 1), \preceq)$ and $({\mathcal P}^{\prime}(\alpha - 1), \preceq)$ as in Remark~\ref{rem:poset}. Then:
\begin{enumerate}[(i)]
\item For any two elements ${\underline i}, {\underline j} \in {\mathcal P}(\alpha - 1)$ (resp. ${\mathcal P}^{\prime}(\alpha - 1)$), there exists ${\underline \xi} \in {\mathcal P}(\alpha - 1)$ (resp. ${\mathcal P}^{\prime}(\alpha - 1)$) such that ${\underline i} \preceq {\underline \xi}$ and ${\underline j} \preceq {\underline \xi}$.
\item $({\mathcal P}^{\prime}(\alpha - 1), \preceq)$ has a maximal element $\Delta$, and a minimal element $\delta$ given by
\begin{eqnarray*}
\Delta & : & i_1 = \alpha - 1 > i_2 = \alpha - 2 > \dotsc > i_{\alpha - 1} = 1 > i_{\alpha} = 0, \text{ and} \\
\delta & : & \alpha - 1 > \beta_1 = 0
\end{eqnarray*}
of lengths $\alpha$ and $1$, respectively.
\item The map $\Psi : ({\mathcal P}(\alpha - 1), \preceq) \rightarrow ({\mathbb N}, \leq)$ defined by
\[
\Psi({\underline i}) := \text{wt}(n, k; {\underline i})
\]
is an order-reversing function, where $({\mathbb N}, \leq)$ is regarded as a linearly ordered poset with respect to the natural order.
\end{enumerate}
\end{proposition}
\begin{proof}
Given sequences ${\underline i}, {\underline j} \in {\mathcal P}(\alpha - 1)$ (resp. ${\mathcal P}^{\prime}(\alpha - 1)$), consider the sequence ${\underline \xi}$ obtained by taking the union of elements in ${\underline i}$ and ${\underline j}$, rearranged in decreasing order. Then, clearly ${\underline i} \preceq {\underline \xi}$ and ${\underline j} \preceq {\underline \xi}$, from which (i) and (ii) follow.
For showing (iii), consider sequences ${\underline i} \preceq {\underline j}$ with lengths $r < s$ such that $\text{wt}(n, k; {\underline i}) = \lambda$ is realized by a sequence ${\underline \lambda} = \lambda_1, \lambda_2, \dotsc, \lambda_r$. Define ${\underline \mu} = \mu_1, \dotsc, \mu_s$ by $\mu_t = \lambda_t$, if $j_t$ is an element in ${\underline i}$, and $\mu_t = 0$ otherwise. Then $\lambda = \mu_1 + \dotsc + \mu_s$ and $\Omega ({\underline j}, {\underline \mu}) = {\mathbb Z}_n$. Hence, we have that $\text{wt}(n, k; {\underline j}) \leq \lambda = \text{wt}(n, k; {\underline i})$, and (iii) follows.
\end{proof}
\begin{remark}
\label{rem:wt_and_Delta}
Clearly, $\text{wt}(n, k; \delta) = [n/2]$, and the only sequence of length $\alpha$ is the maximal element $\Delta$. Thus, $\text{wt}(n, k; \Delta) = \text{wt}(n, k; \alpha)$, which shows the importance of analyzing $\text{wt}(n, k; \Delta)$.
\end{remark}
\begin{definition}
A sequence ${\underline i} \in {\mathcal P}^{\prime}(\alpha -1)$ is called a \textit{minimal prime sequence realizing $\text{wt}(n,k;\alpha)$} if:
\begin{enumerate}[(i)]
\item ${\underline i}$ is a minimal element in ${\mathcal P}^{\prime}(\alpha -1)$, and
\item $\text{wt}(n, k; {\underline i}) = \text{wt}(n, k; \alpha)$.
\end{enumerate}
We denote the smallest possible degree of a minimal prime sequence realizing $\text{wt}(n,k;\alpha)$ by $\deg(n,k;\alpha)$.
\end{definition}
\noindent The following lemma shows that the operations $J$ and $I$ yield sequences that exhibit certain symmetries among minimal sequences.
\begin{lemma}
\begin{enumerate}[(i)]
\item Let ${\underline i}$ be a minimal prime sequence of largest degree, and ${\underline i^{\prime}}$ a minimal prime sequence of smallest co-degree, realizing $\text{wt}(n,k;\alpha)$. Then deg$({\underline i}) +$ codeg$({\underline i^{\prime}}) = \alpha$.
\item Let ${\underline i}$ be a minimal prime sequence of smallest degree, and ${\underline i^{\prime}}$ a minimal prime sequence of largest co-degree, realizing $\text{wt}(n,k;\alpha)$. Then deg$({\underline i}) +$ codeg$({\underline i^{\prime}}) = \alpha$.
\end{enumerate}
\end{lemma}
\begin{proof}
We will only prove (i), as the proof of (ii) is similar. Consider sequences
\[
i: \alpha - 1 \geq i_1 > i_2 > \dotsc > i_r = 0 \text{ and } i': \alpha - 1 \geq i^{\prime}_1 > i^{\prime}_2 > \dotsc > i^{\prime}_s = 0.
\]
Since deg$({\underline i}) = i_1$ and codeg$({\underline i^{\prime}}) = i^{\prime}_{s-1}$,
\[
I({\underline i^{\prime}}): j_1 = \alpha - i^{\prime}_{s-1} > j_2 = i^{\prime}_1 - i^{\prime}_{s-1} > \dotsc > j_{s-1} = i^{\prime}_{s-2} - i^{\prime}_{s-1} > j_s = 0
\]
is a minimal prime sequence. Moreover, as ${\underline i}$ has the largest degree among the minimal prime sequences, we have $i_1 \geq \alpha - i^{\prime}_{s-1}$, that is, $i_1 + i^{\prime}_{s-1} \geq \alpha$. For the reverse inequality, consider the minimal prime sequence
\[
J({\underline i}): \alpha - 1 \geq q_1 = \alpha - (i_1 - i_2) > q_2 = \alpha - (i_1 - i_3) > \dotsc > q_{r-1} = \alpha - i_1 > q_r = 0
\]
Since ${\underline i^{\prime}}$ has smallest co-degree, we have $i^{\prime}_{s-1} \leq \alpha - i_1$, and the inequality follows.
\end{proof}
\noindent The above lemma also shows that the operations $I$ and $J$ transform a minimal prime sequence of largest (resp. smallest) degree to a minimal prime sequence of smallest (resp. largest) co-degree. We conclude this section with some computations of weights for the case $(n,k) = (30, 7)$.
\begin{example}
When $(n, k) = (30, 7)$, $\text{ord}(7) = 4$ in ${\mathbb Z}^{\times}_{30} \cong \mathbb{Z}_2 \times \mathbb{Z}_4$. We consider the sequences $\underline{i_1}: 1,0$, $\underline{i_2}: 2,0$, and $\underline{i_3}: 3,0$. For each sequence $\underline{i_k}$, in the Table~\ref{tab:wts_omega} below, we list some possible choices of a sequence $\underline{\lambda} : \lambda_1,\lambda_2$ (as in Definition~\ref{defn:omega_set}), and the values of $\text{wt}(\Omega(\underline{i},\underline{\lambda}))$.
\end{example}
\begin{table}[h]
\begin{center}
\begin{tabular}{| c | l | c | l | }
\hline
$\underline{i}$ & $\lambda_1$ & $\lambda_2$ & $\text{wt}(\Omega(\underline{i}, \underline{\lambda}))$ \\ \hline
$\underline{i_1}$ & 0 & 15 & 15 \\ \hline
$\underline{i_1}$ & 1 & 8 & 9 \\ \hline
$\underline{i_1}$ & 2 & 3 & 5 \\ \hline
$\underline{i_1}$ & 3 & 3 & 6 \\ \hline
$\underline{i_1}$ & 4, 5 & 2 & 6, 7 \\ \hline
$\underline{i_1}$ & 6, $\dotsc$, 14 & 1 & 7, $\dotsc$, 15 \\ \hline
$\underline{i_1}$ & 15 & 0 & 15 \\ \hline
$\underline{i_2}$ & 0 & 15 & 15 \\ \hline
$\underline{i_2}$ & 1 & 5 & 6 \\ \hline
$\underline{i_2}$ & 2, 3 & 4 & 6, 7 \\ \hline
$\underline{i_2}$ & 4 & 2 & 6 \\ \hline
$\underline{i_2}$ & 5, $\dotsc$, 14 & 1 & 6, $\dotsc$, 15 \\ \hline
$\underline{i_2}$ & 15 & 0 & 15 \\ \hline
$\underline{i_3}$ & 0 & 15 & 15 \\ \hline
$\underline{i_3}$ & 1 & 6 & 7 \\ \hline
$\underline{i_3}$ & 2 & 4 & 6 \\ \hline
$\underline{i_3}$ & 3, $\dotsc$, 7 & 2 & 5, $\dotsc$, 9 \\ \hline
$\underline{i_3}$ & 8, $\dotsc$, 14 & 1 & 9, $\dotsc$, 15 \\ \hline
$\underline{i_3}$ & 15 & 0 & 15 \\ \hline
\end{tabular}
\caption{Some computations of $\text{wt}(\Omega(\underline{i}, \underline{\lambda}))$.}
\label{tab:wts_omega}
\end{center}
\end{table}
\noindent A direct calculation shows that $\text{wt}(30, 7; 4) = 5$, and hence $\underline{i_1}$ and $\underline{i_3}$ are minimal prime sequences that realize this weight.
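The entries of Table~\ref{tab:wts_omega} can be reproduced by brute force. The following sketch (function names ours) enumerates $\Omega(\underline{i}, \underline{\lambda})$ directly from Definition~\ref{defn:omega_set}:

```python
from itertools import product

def omega(n, k, i_seq, lam_seq):
    """The set Omega(i, lambda) = { sum_j b_j * k^{i_j} mod n : |b_j| <= lambda_j }."""
    vals = set()
    for bs in product(*(range(-l, l + 1) for l in lam_seq)):
        vals.add(sum(b * pow(k, i, n) for b, i in zip(bs, i_seq)) % n)
    return vals

def covers(n, k, i_seq, lam_seq):
    """True when Omega(i, lambda) = Z_n, i.e. the weight sum(lambda) suffices."""
    return omega(n, k, i_seq, lam_seq) == set(range(n))
```

For $(n,k) = (30,7)$ and $\underline{i_1}: 1,0$, the choice $\underline{\lambda} = (2,3)$ of weight $5$ covers ${\mathbb Z}_{30}$, while no choice of weight $4$ does, consistent with the table.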
\section{Reducing the optimization problem}
There are two key steps involved in solving our main optimization problem ($\dagger$). In the first step (first reduction), we restrict our optimization to the component ring $\mathbb{Z}_n.$ In the second step (second reduction), which we detail in the next section, we build on the results of the first step towards arriving at a solution to $(\dagger)$. We regard the elements of ${\mathbb Z}_n$ as formal sums
\[
w({\underline b}, {\underline c}) = b_1 k^{c_1} + b_2 k^{c_2} + \dotsc + b_t k^{c_t},
\]
where ${\underline b} = (b_1, b_2, \dotsc, b_t)$ and ${\underline c} = (c_1, c_2, \dotsc, c_t)$ are two arbitrary integer sequences. We will abuse notation by using the same expression of $w({\underline b}, {\underline c})$ while treating the sum as an element of ${\mathbb Z}_n$.
\begin{definition}
The formal sum $w({\underline b}, {\underline c})$ is called \textit{primal}, if the like powers of $k$ in $w({\underline b}, {\underline c})$ have exactly one nonzero coefficient $b_i$. We call the number $|b_1| + |b_2| + \dotsc + |b_t|$ as the \textit{absolute coefficient sum} of $w({\underline b}, {\underline c})$, which we denote by $\text{acs}(w({\underline b}, {\underline c}))$.
\end{definition}
\noindent For example, the sum $1 \cdot k^2 + 0 \cdot k^2 + 1 \cdot k + 1$ is primal, while $1 \cdot k^2 - 1 \cdot k^2 + 1 \cdot k + 1$ is not. Our first reduction step involves reducing the absolute coefficient sum of $w({\underline b}, {\underline c})$ without changing the value of $w({\underline b}, {\underline c})$ (mod $n$), with the powers $k^{c_1}, k^{c_2}, \dotsc, k^{c_t}$ retaining their multiplicities.
\begin{proposition}[First reduction step]
\label{prop:first_red_step}
Given a formal sum
\[
w({\underline b}, {\underline c}) = b_1 k^{c_1} + b_2 k^{c_2} + \dotsc + b_t k^{c_t},
\]
there exists a sequence ${\underline b^{\prime}}: b^{\prime}_1, b^{\prime}_2, \dotsc, b^{\prime}_t$ such that
\begin{enumerate}[(i)]
\item $w({\underline b^{\prime}}, {\underline c})$ is primal, as a formal element,
\item $w({\underline b^{\prime}}, {\underline c}) \equiv w({\underline b}, {\underline c}) \pmod{n}$,
\item $\text{acs}(w({\underline b^{\prime}}, {\underline c})) \leq$ $\text{acs}(w({\underline b}, {\underline c}))$, and
\item $\text{acs}(w({\underline b^{\prime}}, {\underline c})) \leq \text{wt}(n, k; r)$, where $r$ denotes the number of nonzero terms in the primal element $w({\underline b^{\prime}}, {\underline c})$.
\end{enumerate}
\end{proposition}
\begin{proof}
We first write
\[
w({\underline b}, {\underline c}) = b_1 k^{c_1} + b_2 k^{c_2} + \dotsc + b_t k^{c_t} = s_1 k^{i_1} + s_2 k^{i_2} + \dotsc + s_r k^{i_r},
\]
where $\alpha - 1 \geq i_1 > i_2 > \dotsc > i_r \geq 0$ with $r$ being the number of distinct powers of $k$ in the formal sum on the left, and $s_j = \sum_{c_q \equiv i_j \pmod{\alpha}} b_q$. Then any sequence ${\underline b^{\prime}}: b^{\prime}_1, b^{\prime}_2, \dotsc, b^{\prime}_t$ obtained by replacing exactly one of the elements in each collection $\{b_q : c_q \equiv i_j \pmod{\alpha}\}$ by $s_j$, and the remaining by $0$, satisfies conditions
(i) - (ii). We obtain (iii) by applying the triangle inequality to the expression for $s_j$. Finally, if $\text{acs}(w({\underline b^{\prime}}, {\underline c})) > \text{wt}(n, k; r)$, then by the definition of $\text{wt}(n, k; r)$, we may replace the sequence $s_1, \dotsc, s_r$ by a sequence $s^{\prime}_1, \dotsc, s^{\prime}_r$ so that
\[
s_1 k^{i_1} + s_2 k^{i_2} + \dotsc + s_r k^{i_r} \equiv s^{\prime}_1 k^{i_1} + s^{\prime}_2 k^{i_2} + \dotsc + s^{\prime}_r k^{i_r} \pmod{n},
\]
where $|s^{\prime}_1| + \dotsc + |s^{\prime}_r| \leq \text{wt}(n, k; r) < \text{acs}(w({\underline b^{\prime}}, {\underline c}))$. Thus, replacing the terms $s_q$ by $s^{\prime}_q$ and then reconstructing the sequence ${\underline b^{\prime}}$ yields (iv).
\end{proof}
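To make the first reduction step concrete, here is a small computational sketch in Python (the helper names are ours, not the paper's) that collapses a formal sum $\sum_i b_i k^{c_i}$ by grouping exponents modulo $\alpha = \mathrm{ord}(k)$, exactly as in the proof above.

```python
def multiplicative_order(k, n):
    """Order of the unit k in the group of units mod n (assumes gcd(k, n) == 1)."""
    j, acc = 1, k % n
    while acc != 1:
        acc = (acc * k) % n
        j += 1
    return j

def value_mod(b, c, n, k):
    """Value of the formal sum sum_i b[i] * k^c[i] modulo n."""
    return sum(bi * pow(k, ci, n) for bi, ci in zip(b, c)) % n

def first_reduction(b, c, n, k):
    """Collapse sum_i b[i]*k^c[i] into a primal sum s[0]*k^i[0] + ... + s[r-1]*k^i[r-1]
    with alpha > i[0] > ... > i[r-1] >= 0, preserving the value mod n."""
    alpha = multiplicative_order(k, n)
    buckets = {}
    for bi, ci in zip(b, c):
        buckets[ci % alpha] = buckets.get(ci % alpha, 0) + bi
    # drop zero buckets, sort exponents in decreasing order
    pairs = sorted(((i, s) for i, s in buckets.items() if s != 0), reverse=True)
    return [s for _, s in pairs], [i for i, _ in pairs]
```

By the triangle inequality, the absolute coefficient sum can only decrease under this collapse, which is item (iii) of the proposition.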
\noindent The preceding result underscores the significance of estimating $\text{wt}(n, k; \alpha)$. It is well known that when $p$ is an odd prime, for $n \geq 1$, we have ${\mathbb Z}^{\times}_{p^n} \cong \mathbb{Z}_{p^{n-1}(p-1)}$, and for $n \geq 2$, we have ${\mathbb Z}^{\times}_{2^n} \cong \mathbb{Z}_2 \times \mathbb{Z}_{2^{n-2}}$. For $n \geq 2$, let $\varphi_n$ denote the natural quotient ring homomorphism ${\mathbb Z}_{p^n} \rightarrow {\mathbb Z}_{p^{n-1}}$. If $k \in {\mathbb Z}^{\times}_{p^n}$ has order $p^{n-1}(p-1)$, then $\varphi_n(k)$ generates the cyclic group ${\mathbb Z}^{\times}_{p^{n-1}}$. Denoting the epimorphism $\varphi_n \vert_{{\mathbb Z}^{\times}_{p^n}}$ by
$\widetilde{\varphi}_n$, for an arbitrary unit $k \in {\mathbb Z}^{\times}_{p^n}$, we have
\[
{\mathrm {ord}}(\widetilde{\varphi}_n(k)) =
\begin{cases}
{\mathrm {ord}}(k)/p, &\mbox{if } p \mid {\mathrm {ord}}(k), \text{ and}\\
{\mathrm {ord}}(k), &\mbox{otherwise.}
\end{cases}
\]
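The displayed order formula is easy to check numerically. The following Python sketch (helper names are ours) verifies it for $p = 5$, $n = 3$ with the units $k = 2$ (order divisible by $p$) and $k = 57$ (order prime to $p$, since $57^2 \equiv -1 \pmod{125}$).

```python
def multiplicative_order(k, n):
    """Order of the unit k modulo n (assumes gcd(k, n) == 1)."""
    j, acc = 1, k % n
    while acc != 1:
        acc = (acc * k) % n
        j += 1
    return j

def predicted_order_downstairs(k, p, n):
    """Predicted order of phi_n(k) in Z_{p^(n-1)}^x, per the case formula above:
    ord(k)/p when p divides ord(k), and ord(k) otherwise."""
    m = multiplicative_order(k, p ** n)
    return m // p if m % p == 0 else m
```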
\noindent When $n > 1$, in the following theorem, we derive bounds for $\text{wt}(p^n, k; \text{ord}(k))$ in terms of $\text{wt}(p^{n-1}, \widetilde{\varphi}_n(k); \text{ord}(\widetilde{\varphi}_n(k)))$.
\begin{theorem}
\label{thm:wt_prime_power}
For a fixed prime $p$ and $n > 1$, consider $k \in {\mathbb Z}^{\times}_{p^n}$ with $\text{ord}(k) = m$, and $k_0 = \widetilde{\varphi}_n(k) \in {\mathbb Z}^{\times}_{p^{n-1}}$ with $\text{ord}(k_0) = m_0$. Then,
\[
\text{wt}(p^{n-1}, k_0; m_0) \leq \text{wt}(p^n, k; m) \leq
\begin{cases}
\text{wt}(p^{n-1}, k_0; m_0) + p, & \mbox{if } p \mid m, \text{ and} \\
\text{wt}(p^{n-1}, k_0; m_0) + [m/2], &\mbox{otherwise.}
\end{cases}
\]
\end{theorem}
\begin{proof} Clearly, $\text{wt}(p^{n-1}, k_0; m_0) \leq \text{wt}(p^n, k; m)$, since $\varphi_n$ is a ring epimorphism. Let $\lambda^{\prime} = \text{wt}(p^{n-1}, \widetilde{\varphi}_n(k); m_0)$ be realized by a sequence $\lambda^{\prime}_1, \dotsc, \lambda^{\prime}_{m_0}$ such that every $a^{\prime} \in {\mathbb Z}_{p^{n-1}}$ is expressed as
\[
a^{\prime} \equiv b^{\prime}_1 \widetilde{\varphi}_n(k)^{m_0 - 1} + b^{\prime}_2 \widetilde{\varphi}_n(k)^{m_0 - 2} + \dotsc + b^{\prime}_{m_0 - 1} \widetilde{\varphi}_n(k) + b^{\prime}_{m_0} \pmod{p^{n-1}}
\]
for integers $b^{\prime}_j$ with $|b^{\prime}_j| \leq \lambda^{\prime}_j$. Then every $a \in {\mathbb Z}_{p^n}$ can be expressed as
\begin{equation}
\label{eqn:lifting}
a \equiv b^{\prime}_1 k^{m_0 - 1} + b^{\prime}_2 k^{m_0 - 2} + \dotsc + b^{\prime}_{m_0 - 1} k + b^{\prime}_{m_0} + z_a \pmod{p^n},
\end{equation}
with $|b^{\prime}_j| \leq \lambda^{\prime}_j$, and $z_a \in {\mathrm {ker}}(\varphi_n)$, which depends on $a$. Note that,
\[
{\mathrm {ker}}(\varphi_n) = \{ \xi p^{n-1} : 0 \leq \xi \leq p-1 \} \text{ and } {\mathrm {ker}}(\widetilde{\varphi}_n) = \langle k^{m_0} \rangle \leq \langle k \rangle \leq {\mathbb Z}^{\times}_{p^n}.
\]
If $p \mid m$, then we have $m_0 = m/p$, and so
\[
\mathrm {ker}({\varphi}_n) = \{k^{\tau m_0}-1 : 1 \leq \tau \leq p-1\} \cup \{0\}.
\]
Hence, from equation~\eqref{eqn:lifting}, it follows that every $a \in \mathbb{Z}_{p^n}$ is represented as
$$a \equiv \sum_{\tau=1}^{p-1} b_{\tau} k^{\tau m_0} + \sum_{i=1}^{m_0-1} b_i'k^{m_0-i} + (b_{m_0}' + \mu) \pmod{p^n},$$
where $$\mu = \begin{cases}
-1, & \text{ if } (b_1, b_2,\ldots, b_{p-1}) = (0,\ldots,0,1,0,\ldots,0), \text{ and} \\
0, & \text{ if } (b_1, b_2,\ldots, b_{p-1}) = (0,\ldots,0).
\end{cases}$$
Thus, each of the coefficients in the formal sum above (considered in sequence) is bounded above by the terms of the non-negative sequence
\[
\underline{\lambda}: \underbrace{1, \dots, 1}_{p-1}, \lambda^{\prime}_1, \dotsc, \lambda^{\prime}_{m_0 - 1}, \lambda^{\prime}_{m_0} + 1.
\]
The result for this case now follows from the fact that the sum of the terms of $\underline{\lambda}$ is $\lambda'+p$.
If $p \nmid m$, then $m = m_0$, and the result follows immediately by noting that the coefficient of each term of the sequence $k^{m_0 - 1}, k^{m_0 - 2}, \dotsc, k, 1$ is bounded above by the terms of the non-negative sequence
\[
\lambda^{\prime}_1, \dotsc, \lambda^{\prime}_{m_0 - 1}, \lambda^{\prime}_{m_0} + [m/2].
\]
\end{proof}
\noindent We will now derive bounds for $\text{wt}(p,k;p-1)$, when $p$ is an odd prime.
\begin{theorem}
\label{thm:prime_wt_bounds}
Let $p$ be an odd prime, and let $k \in \mathbb{Z}_p^{\times}$ with $\text{ord}(k) = p-1$.
\begin{enumerate}[(i)]
\item If $p \equiv 1 \pmod{4}$, then $\text{wt}(p, k; p-1) \leq \frac {p+3}{4}$. Moreover, there exists a sequence $\underline{i}$ such that $\deg(\underline{i}) = \frac {p-5}{4}$ and $\text{wt}(p,k;\underline{i}) \leq \frac {p+3}{4}$.
\item If $p \equiv 3 \pmod{4}$, then $\text{wt}(p, k; p-1) \leq {\frac {p+5}{4}}$. Moreover, there exists a sequence $\underline{i}$ such that $\deg(\underline{i}) = \frac {p-3}{4}$ and $\text{wt}(p,k;\underline{i}) \leq \frac {p+5}{4}$.
\end{enumerate}
\end{theorem}
\begin{proof} We present a proof only for (i), as (ii) will follow from similar arguments. Consider the list of units $A = \{ k^{i_1}, k^{i_1 - 1}, \dotsc, k, 1 \}$, where $i_1 = {\frac {p-1}{4}} - 1$. Since $k^{\frac {p-1}{2}} \equiv -1 \pmod{p}$, the set
\[
A \cup -A = \{ k^j ~:~ 0 \leq j \leq {\frac {p-1}{4}} - 1 ~{\mathrm {or}}~ {\frac {p-1}{2}} \leq j \leq {\frac {p-1}{2}} + i_1 \}
\]
comprises exactly half of the elements of ${\mathbb Z}^{\times}_p$; that is, $|A \cup -A| = {\frac {p-1}{2}}$. Now let $k^{\tau} \not\in A \cup -A$. As $k^j$ ranges over $A \cup -A$, the differences $k^{\tau} - k^j$ form ${\frac {p-1}{2}}$ distinct elements, none of which equals $k^{\tau}$. Since there are only ${\frac {p-1}{2}} - 1$ elements outside $A \cup -A$ other than $k^{\tau}$, at least one of these differences must lie in $A \cup -A$. Hence, there exist $k^{j}, k^{j_1} \in A \cup -A$ such that $k^{\tau} = k^{j} + k^{j_1}$.
Suppose that $j = j_1$. Then write $k^{\tau} = k^j - (-k^j) = k^j - k^{j+ {\frac {p-1}{2}}}$. If $k^j \in A$, then $k^{j+ {\frac {p-1}{2}}} \in -A$. Otherwise, if $k^j \in -A$, then we have $k^{j+ {\frac {p-1}{2}}} \in A$, except when $j = {\frac {p-1}{2}}$. In this particular case, we have $k^{\tau} = -2$, which implies that $\Omega({\underline i}, {\underline \lambda}) = {\mathbb Z}_p$ where
\[
{\underline i} ~:~ i_1, i_1 - 1, \dotsc, 1, 0 \text{ and } \underline{\lambda}: 1, \dotsc, 1, 2.
\]
Consequently, $\text{wt}(p,k;p-1) \leq \sum_j \lambda_j = \frac{p-1}{4} + 1 = \frac{p+3}{4}$, and the realizing sequence satisfies $\deg(\underline{i}) = i_1 = \frac{p-5}{4}$.
\end{proof}
\begin{remark}
In the proof above, the bound on $\text{wt}(p, k; p-1)$ can be slightly improved when $2 \in A \cup -A$. Notice that the only difficulty was to represent $k^{\tau} = -2 \not\in A \cup -A$, and this situation does not arise when $2 \in A \cup -A$. Hence, we have ${\underline \lambda}: 1, \ldots, 1$, which implies
$$\text{wt}(p,k;p-1) \leq \begin{cases}
\frac{p-1}{4}, & \text{if } p \equiv 1 \pmod{4}, \text{ and} \\
\frac{p+1}{4}, & \text{if } p \equiv 3 \pmod{4}.
\end{cases}$$
\end{remark}
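Both Theorem~\ref{thm:prime_wt_bounds} and the improvement above are easy to verify exhaustively for small primes. As a sketch (Python, with our own helper name), take $p = 13$ and $k = 2$: here $2 \in A \cup -A$, and coefficients bounded by $\underline{\lambda}: 1, 1, 1$ on the powers $k^2, k, 1$ already cover all of $\mathbb{Z}_{13}$, in agreement with the entry for $(12, 13, 2)$ in Table~\ref{tab:primes_1mod4}.

```python
from itertools import product

def covers_all_residues(p, k, exponents, bounds):
    """True iff every residue mod p equals sum_j b_j * k^exponents[j] for some
    integer coefficients with |b_j| <= bounds[j]."""
    reachable = set()
    for coeffs in product(*(range(-L, L + 1) for L in bounds)):
        reachable.add(sum(b * pow(k, e, p) for b, e in zip(coeffs, exponents)) % p)
    return reachable == set(range(p))
```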
\noindent The following is a direct consequence of Theorem~\ref{thm:prime_wt_bounds}.
\begin{corollary}
\label{rem:bound_prime_power}
Let $p$ be a prime.
\begin{enumerate}[(i)]
\item If $p$ is odd and $k \in \mathbb{Z}_{p^n}^{\times}$ with $\text{ord}(k) = p^{n-1}(p-1)$, then
\[
{\frac {p + 4 + \epsilon(p)}{4}} + (n-2)p \leq \text{wt}(p^n, k; p^{n-1}(p-1)) \leq {\frac {p + 4 + \epsilon(p)}{4}} + (n-1)p,
\]
where $\epsilon(p) = +1$ if $p \equiv 1 \pmod{4}$, and $\epsilon(p) = -1$ otherwise.
\item If $p= 2$, $n \geq 4$, and $k \in \mathbb{Z}_{2^n}^{\times}$ with $\text{ord}(k) = 2^{n-2}$, then
\[
\gamma(k) + 2(n-2) \leq \text{wt}(2^n, k; 2^{n-2}) \leq \gamma(k) + 2(n-1),
\]
where $\gamma(k) = 4$, if $k^{2^{n-3}} \equiv -1 \pmod{2^n}$, and is $2$, otherwise.
\end{enumerate}
\end{corollary}
\noindent This shows that, as $n \rightarrow \infty$, the growth of $\text{wt}(p^n, k; p^{n-1}(p-1))$ is at most linear. Moreover, for a sequence $\underline{i}$ realizing the weight corresponding to a unit of maximum order in ${\mathbb Z}^{\times}_{p^n}$, we have:
\[
\deg(\underline{i}) \leq
\begin{cases}
{\frac {p-1}{4}} + (n-1)p - 1, &\mbox{if } p \equiv 1 \pmod{4}, \text{ and}\\
{\frac {p-3}{4}} + (n-1)p, &\mbox{if } p \equiv 3 \pmod{4}.
\end{cases}
\]
\noindent The final result in this section gives a bound on the degree of a minimal prime sequence realizing $\text{wt}(n,k;\alpha)$, when $\alpha$ is even and $k^{\alpha/2} \equiv -1 \pmod{n}$.
\begin{theorem}
\label{thm:bound_for_deg_nkalpha}
Let $k \in {\mathbb Z}^{\times}_n$ with $\text{ord}(k) = \alpha > 1$, where $\alpha$ is even and $k^{\alpha/2} \equiv -1 \pmod{n}$. Then $\deg(n,k;\alpha) \leq \alpha/2$.
\end{theorem}
\begin{proof} We know from Remark~\ref{rem:wt_and_Delta} that $\text{wt}(n,k;\Delta) = \text{wt}(n,k;\alpha)$, where $\Delta:\alpha-1, \dotsc, 1, 0$. For this $\Delta$, let $\text{wt}(n,k;\alpha)$ be realized by a sequence $\underline{\lambda}:\lambda_1, \ldots, \lambda_{\alpha}$, so that each element of $\Omega(\Delta,\underline{\lambda})$ has a representation of the form
\[
b_1 k^{\alpha - 1} + b_2 k^{\alpha - 2} + \dotsc + b_{\alpha - 1} k + b_{\alpha},
\]
where $|b_i| \leq \lambda_i$.
Replacing the powers $k^j$, for $\alpha - 1 \geq j \geq \alpha/2$, in this representation by $-k^{j - \alpha/2}$, we obtain an expression of the form
\[
(b_{\alpha/2 +1} - b_1) k^{\alpha/2 - 1} + (b_{\alpha/2 +2} - b_2) k^{\alpha/2 - 2} + \dotsc + (b_{\alpha - 1} - b_{\alpha/2 - 1}) k + (b_{\alpha} - b_{\alpha/2}),
\]
which represents an element of $\Omega(\underline{i}',\underline{\lambda}')$, where $\underline{i}': \alpha/2-1,\ldots,1,0$ and $\underline{\lambda}' :\lambda_{\alpha/2 +1} + \lambda_1, \dotsc, \lambda_{\alpha} + \lambda_{\alpha/2}$. The result now follows from the definition of a minimal prime sequence realizing $\text{wt}(n,k;\alpha)$.
\end{proof}
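The folding step in the proof is mechanical, and can be sketched as follows (Python, with our own function names): when $k^{\alpha/2} \equiv -1 \pmod{n}$, every power $k^j$ with $j \geq \alpha/2$ is replaced by $-k^{j - \alpha/2}$, halving the number of powers used while preserving the represented value.

```python
def fold(b, n, k, alpha):
    """Fold a coefficient list b (for the powers k^(alpha-1), ..., k, 1) onto the
    powers k^(alpha/2 - 1), ..., k, 1, using k^(alpha/2) = -1 (mod n)."""
    h = alpha // 2
    assert pow(k, h, n) == n - 1          # hypothesis: k^(alpha/2) = -1 (mod n)
    # b[j] multiplies k^(alpha-1-j); for j < h this equals -k^(h-1-j) mod n,
    # so it merges with b[h+j], which multiplies k^(h-1-j) directly.
    return [b[h + j] - b[j] for j in range(h)]

def value(b, n, k):
    """Value mod n of the sum b[0]*k^(t-1) + ... + b[t-1]*k^0."""
    t = len(b)
    return sum(bi * pow(k, t - 1 - i, n) for i, bi in enumerate(b)) % n
```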
\section{Estimating the diameter of split metacyclic groups}
Let $g = x^a y^b \in G_{m,n,k}$ be connected to $1$ by a fixed reduced path
\[
P : x^{a_1} y^{b_1} x^{a_2} y^{b_2} \dotsc x^{a_t} y^{b_t} = x^a y^b.
\]
Set $c_i := a_{i+1} + \dotsc + a_t$, for $1 \leq i \leq t-1$, and $c_t = 0$. In the notation developed in Section~\ref{sec:intro}, recall that the number $t$ is called the syllable of $P$ (which we denote by $\text{syl}(P)$), and the number $l(P) = \sum_{i=1}^t |a_i|+|b_i|$ is called the length of $P$. Also, we denote the collection of all reduced paths from $1$ to $g \in G_{m,n,k}$ by $\mathcal{P}_g$. In the following proposition, which is a direct consequence of Proposition~\ref{prop:first_red_step}, we show that the minimum number of syllables required to represent a path of shortest length is at most $\text{ord}(k)$.
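The quantities attached to a path can be computed mechanically. The following Python sketch (function names are ours) recovers the exponent sums $c_i$, the syllable count $\text{syl}(P)$, the length $l(P)$, and the endpoint $x^a y^b$ of a path in $G_{m,n,k}$, using the presentation's relation $y^b x^c = x^c y^{b k^c}$.

```python
def path_stats(a_list, b_list):
    """For P = x^(a_1) y^(b_1) ... x^(a_t) y^(b_t), return (c, syl, length) with
    c[i-1] = a_(i+1) + ... + a_t (so c[t-1] = 0), syl = t, length = sum |a_i|+|b_i|."""
    t = len(a_list)
    c = [sum(a_list[i + 1:]) for i in range(t)]
    length = sum(abs(a) for a in a_list) + sum(abs(b) for b in b_list)
    return c, t, length

def endpoint(a_list, b_list, m, n, k):
    """Normal form x^a y^b of the path's endpoint: a = sum a_i (mod m) and
    b = sum_i b_i k^(c_i) (mod n), i.e. the value of w(b, c) modulo n.
    Negative c_i are handled by Python's modular inverse (Python >= 3.8)."""
    c, _, _ = path_stats(a_list, b_list)
    a = sum(a_list) % m
    b = sum(bi * pow(k, ci, n) for bi, ci in zip(b_list, c)) % n
    return a, b
```

For instance, in $G_{12,13,2}$ the path $x^2 y\, x^{-1}\, x y^2$ has syllable count $3$, length $7$, and endpoint $x^2 y^3$.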
\begin{proposition}[Second reduction step]
\label{prop:second_red_step}
Let $P : \prod_{i=1}^t x^{a_i}y^{b_i}$ be a reduced path from $1$ to an element $g = x^a y^b \in G_{m,n,k}$. Suppose that $r$ is the number of distinct terms in the formal element
$w({\underline b}, {\underline c}) = \sum_{i=1}^t b_i k^{c_i}$.
\begin{enumerate}[(i)]
\item If $w({\underline b}, {\underline c})$ is not primal as a formal element, then there is another $P' \in \mathcal{P}_g$ such that $l(P') \leq l(P)$.
\item If $\text{acs}(w({\underline b}, {\underline c})) > \text{wt}(n, k; r)$, then $P$ cannot be a shortest path in $\mathcal{P}_g$.
\item If $P \in \mathcal{P}_g$ is a path of shortest length, then $\text{syl}(P) \leq \text{ord}(k).$
\end{enumerate}
\end{proposition}
\noindent This second reduction step implies that $|b_1| + \dotsc + |b_t| \leq \text{wt}(n, k; \alpha)$, which finally brings us to the main result in this paper.
\begin{theorem}[Main theorem]
\label{thm:main}
Let $G_{m,n,k}$ be the split metacyclic group given by the presentation
\[
G_{m,n,k} = \langle x, y : x^m = 1 = y^{n}, x^{-1} yx = y^k \rangle,
\]
where $k$ has order $\alpha$ in the group ${\mathbb Z}^{\times}_n$ of units. If $\alpha$ is even and $k^{\alpha/2} \equiv -1 \pmod{n}$, then
\[
\text{diam}(G_{m,n,k}) \leq
\begin{cases}
[m/2] + \text{wt}(n, k; \alpha), &\mbox{if } \alpha \neq m, \\
[m/2] + \text{wt}(n, k; \alpha) + \deg(n,k;\alpha), &\mbox{if } \alpha = m.
\end{cases}
\]
\end{theorem}
\begin{proof}
We wish to bound the length of a path from $1$ to an element $g = x^a y^b \in G$. Set $i_1 = \deg(n,k;\alpha)$, and assume without loss of generality that $-[m/2] \leq a \leq [m/2]$. We break our argument into three cases.
\begin{case} Assume that $i_1 \leq a \leq [m/2]$. Let ${\underline i}$ denote a minimal prime sequence with degree $i_1 = \deg(n,k;\alpha)$ given by
\[
\alpha - 1 \geq i_1 > i_2 > \dotsc > i_{t-1} > i_t = 0.
\]
First, we express $b \in {\mathbb Z}_n$ as
\[
b \equiv b_1 k^{i_1} + b_2 k^{i_2} + \dotsc + b_{t-1} k^{i_{t-1}} + b_t \pmod{n}. \tag{*}
\]
with $\sum_{j=1}^{t} |b_j| \leq \text{wt}(n, k; \alpha)$. Take $\xi = a - i_1 \geq 0$, and consider the path
\[
P : x^{\xi} y^{b_1} x^{i_1 - i_2} y^{b_2} x^{i_2 - i_3} y^{b_3} \dotsc x^{i_{t-2} - i_{t-1}} y^{b_{t-1}} x^{i_{t-1}} y^{b_t} \tag{**}
\]
Clearly, $P \in \mathcal{P}_g$, and since every exponent of $x$ in $P$ is nonnegative, we have
\[
l(P) = (a - i_1) + (i_1 - i_2) + \dotsc + (i_{t-2} - i_{t-1}) + i_{t-1} + \sum_{j=1}^{t} |b_j| \leq [m/2] + \text{wt}(n, k; \alpha),
\]
which proves the result for this case.
\end{case}
\begin{case} Assume that $-[m/2] \leq a \leq -i_1$. Note that for any path $P \in \mathcal{P}_g$, we have $P^{-1} \in \mathcal{P}_{g^{-1}}$ and $l(P) = l(P^{-1})$, where $P^{-1} = \prod_{i=0}^{t-1} y^{-b_{t-i}}x^{-a_{t-i}}$. Consider $b^{\prime} \in {\mathbb Z}_n$ such that $y^{-b} x^{-a} = x^{-a} y^{b^{\prime}}$. Since the exponent $-a$ of $x$ satisfies the condition of Case 1, the result for this case follows.
\end{case}
\begin{case} Finally, assume that $-i_1 < a < i_1$. As before, it suffices to assume that $0 \leq a \leq i_1$. Write $b$ as in the sum $(*)$ above, set $\xi = a - i_1 \leq 0$, and consider a path $P'$ of the form $(**)$. Clearly, $P' \in \mathcal{P}_g$, and every exponent of $x$ in $P'$, except possibly the first, is nonnegative. Hence, we have
\[
l(P') = (i_1 - a) + (i_1 - i_2) + (i_2 - i_3) + \dotsc + (i_{t-2} - i_{t-1}) + i_{t-1} + \sum_{j=1}^{t} |b_j| \leq -a + 2 i_1 + \text{wt}(n,k;\alpha).
\]
Applying Theorem~\ref{thm:bound_for_deg_nkalpha}, we get $2 i_1 \leq \alpha$, and hence
$$l(P') \leq 2 i_1 + \text{wt}(n,k;\alpha).$$ The result now follows from the fact that
$$2 i_1 \leq \begin{cases}
\alpha \leq [m/2], & \text{if } \alpha \text{ is a proper divisor of $m$, and} \\
\alpha/2 + i_1 = [m/2] + i_1, & \text{if } \alpha = m.
\end{cases}$$
\end{case}
\end{proof}
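The bound in Theorem~\ref{thm:main} can be compared against exact diameters computed by breadth-first search over the Cayley graph of $G_{m,n,k}$ (assuming, as the tables below suggest, the generating set $\{x^{\pm 1}, y^{\pm 1}\}$). A Python sketch, with our own function name:

```python
from collections import deque

def diameter_split_metacyclic(m, n, k):
    """Exact diameter of G_{m,n,k} = <x, y | x^m = y^n = 1, x^-1 y x = y^k>
    w.r.t. {x, x^-1, y, y^-1}, by BFS over normal forms x^a y^b."""
    kinv = pow(k, -1, n)                          # k is a unit mod n (Python >= 3.8)
    dist = {(0, 0): 0}
    queue = deque([(0, 0)])
    while queue:
        a, b = queue.popleft()
        # right multiplication: (x^a y^b) x = x^(a+1) y^(bk), and so on
        for nbr in (((a + 1) % m, (b * k) % n),
                    ((a - 1) % m, (b * kinv) % n),
                    (a, (b + 1) % n),
                    (a, (b - 1) % n)):
            if nbr not in dist:
                dist[nbr] = dist[(a, b)] + 1
                queue.append(nbr)
    assert len(dist) == m * n                     # every element was reached
    return max(dist.values())
```

For $(m, n, k) = (12, 13, 2)$ this returns $7$, matching Table~\ref{tab:primes_1mod4} and respecting the bound $[m/2] + \text{wt}(13, 2; 12) = 9$.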
\noindent Notice that Case 3 in the proof of Theorem~\ref{thm:main} used the fact that $2 i_1 \leq \alpha$. However, when $n$ is a prime $p \equiv 1 \pmod{4}$, we know that $2 i_1 \leq [\alpha / 2]$, which leads to a better bound. More generally, we have the following:
\begin{corollary}
\label{cor:diam_bound_for_primes}
Let $G_{m,n,k}$ be the split metacyclic group given by the presentation
\[
G_{m,n,k} = \langle x, y : x^m = 1 = y^{n}, x^{-1} yx = y^k \rangle,
\]
where $k$ has order $\alpha$ in the group ${\mathbb Z}^{\times}_n$ of units. If $\alpha$ is even with $k^{\alpha/2} \equiv -1 \pmod{n}$ and $2\deg(n,k;\alpha) \leq [\alpha/2]$, then
$$\text{diam}(G_{m,n,k}) \leq [m/2] + \text{wt}(n,k;\alpha).$$
\end{corollary}
\noindent The fact that every metacyclic group $G$ is a quotient of a split metacyclic group yields a bound for $\text{diam}(G)$.
\begin{corollary}
Let $G_{m_0,\ell,n,k}$ be an arbitrary metacyclic group given by the presentation
$$G_{m_0,\ell,n,k} = \langle x, y : x^{m_0} = y^{\ell}, \, y^{n} = 1, \, x^{-1} yx = y^k \rangle,$$
where $k^{m_0} \equiv 1 \pmod{n}$, $n \mid \ell(k-1)$ and $\ell \mid n$. Then
$$\text{diam}(G_{m_0,\ell,n,k}) \leq \text{diam}(G_{m_0n/\ell,n,k}) \leq \left[\frac{m_0n/\ell}{2}\right] + \text{wt}(m_0n/\ell,k;m_0).$$
\end{corollary}
\begin{proof}
First, we note that the condition $\ell \mid n$ entails no loss of generality in the above presentation (see~\cite[Lemma 2.1]{H1}). Clearly, there exists a natural surjection $G_{m_0n/\ell,n,k} \twoheadrightarrow G_{m_0,\ell,n,k}$, and the result follows.
\end{proof}
\section{Some explicit computations}
\label{sec:compute_weights}
When $n$ is a prime $p \equiv 1 \pmod{4}$ (resp. $\equiv 3 \pmod{4}$), we showed in Theorem~\ref{thm:prime_wt_bounds} that $\text{wt}(p, k; p-1) \leq \frac {p+3}{4}$ (resp. $\leq \frac {p+5}{4}$). Nevertheless, in practice (as we will see), the value of $\text{wt}(p, k; p-1)$ is often much smaller than these bounds.
In Tables~\ref{tab:primes_1mod4} and~\ref{tab:primes_3mod4} below, we list several computations of $\text{wt}(n, k; \alpha)$ for various primes $n$ and primitive roots $k \in \mathbb{Z}_n ^{\times}$. Further, for these values of $n$, we consider $m = n-1$, and indicate how the bound derived in Corollary~\ref{cor:diam_bound_for_primes} compares with the actual values of $\text{diam}(G_{m,n,k})$. These computations were made using software written in Mathematica 11~\cite{W1}.
\begin{table}[h]
\begin{center}
\small\addtolength{\tabcolsep}{-4pt}
\begin{tabular}{ | l | c | c | c | c | }
\hline
$(m, n, k)$ & $\text{wt}(n,k;\alpha)$ & ${\underline \lambda}$ realizing $\text{wt}(n,k;\alpha)$ & $\text{diam}(G_{m,n,k})$ & $[m/2] + \text{wt}(n,k;\alpha)$ \\ \hline
$(12, 13, 2)$ & $3$ & $1,1,1$ &$7$ & $9$ \\ \hline
$(16, 17, 3)$ & $3$ & $0, 1, 1, 1$ & $9$ & $11$ \\ \hline
$(28, 29, 2)$ & $4$ & $0, 0, 0, 1, 1, 1, 1$ & $15$ & $18$ \\ \hline
$(36, 37, 2)$ & $4$ & $0, 0, 0, 1, 0, 0, 1, 1, 1$ & $19$ & $22$ \\ \hline
$(40, 41, 6)$ & $4$ & $0, 0, 0, 0, 1, 0, 1, 0, 1, 1$ & $21$ & $24$ \\ \hline
$(52, 53, 2)$ & $4$ & $0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1$ & $27$ & $30$ \\ \hline
$(60, 61, 2)$ & $4$ & $0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1$ & $31$ & $34$ \\ \hline
\end{tabular}
\caption{Values of $\text{wt}(p,k;p-1)$, for some primes $p \equiv 1 \pmod{4}$.}
\label{tab:primes_1mod4}
\end{center}
\end{table}
\begin{table}[h]
\begin{center}
\small\addtolength{\tabcolsep}{-4pt}
\begin{tabular}{ | l | c | c | c | c | }
\hline
$(m, n, k)$ & $\text{wt}(n,k;\alpha)$ & ${\underline \lambda}$ realizing $\text{wt}(n,k;\alpha)$ & $\text{diam}(G_{m,n,k})$ & $[m/2] + \text{wt}(n,k;\alpha)$ \\ \hline
$(6, 7, 3)$ & $2$ & $0,1,1$ & $4$ & $5$ \\ \hline
$(10, 11, 2)$ & $3$ & $0,0,1,2$ & $6$ & $8$\\ \hline
$(18, 19, 2)$ & $3$ & $0, 1, 0, 0, 1, 1$ & $10$ & $12$\\ \hline
$(22, 23, 5)$ & $3$ & $1, 0, 0, 0, 0, 1, 1$ & $12$ & $14$\\ \hline
$(30, 31, 3)$ & $4$ & $0, 0, 0, 0, 0, 0, 1, 1, 2$ & $16$ & $19$\\ \hline
$(42, 43, 3)$ & $4$ & $0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2$ & $22$ & $25$\\ \hline
$(46, 47, 5)$ & $4$ & $0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1$ & $24$ & $27$ \\ \hline
$(58, 59, 2)$ & $4$ & $0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1$ & $30$ & $33$ \\ \hline
\end{tabular}
\caption{Values of $\text{wt}(p,k;p-1)$, for some primes $p \equiv 3 \pmod{4}$.}
\label{tab:primes_3mod4}
\end{center}
\end{table}
\bibliographystyle{plain}
\section{Introduction}
The Kepler circumbinary planets (CBPs) present a rich set of dynamical systems in close binary systems that resemble architectures around single stars. Soon after the first detection of Kepler-16b by \cite{Doyle2011}, theorists have sought to understand more fully the possible dynamics, evolution, and formation of these bodies \citep[e.g.,][]{Quarles2012,Meschiari2012,Kane2013,Rafikov2013,Dunhill2013}. Analysis of the Kepler data has uncovered more CBPs around a variety of stellar hosts, such as Kepler-34b and Kepler-35b \citep{Welsh2012} that orbit nearly sunlike stars or two confirmed planets in the same system, Kepler-47b \& Kepler-47c \citep{Orosz2012b}, whose stellar hosts have a nearly circular orbit. The Transiting Exoplanet Survey Satellite (TESS) is expected to observe $\sim$500,000 eclipsing binaries and allow for a substantial increase in the number of observed CBPs in the next few years {using the detection method outlined in \cite{Kostov2016}}.
The stability limits{, or smallest stable semimajor axis ratio,} of planets as test particles in binary systems have been identified \citep{Dvorak1986,Dvorak1989,Holman1999} assuming that the test bodies begin on nearly coplanar, circular orbits around their host stars. However, the eccentricities of the known CBPs cover a wide range and explore regions of parameter space where the stability formula by \cite{Holman1999} (hereafter HW99) may become inadequate. Additionally, the definition of a stability limit must be inherently ``fuzzy'' due to the overlap of mean motion resonances \citep{Chirikov1979,Wisdom1980,Mudryk2006,Deck2013}, and some regions of parameter space may be stable but inaccessible through processes within modern formation models. Some studies \citep{Doolin2011,Quarles2016,Li2016} have investigated how the stability changes when the planets are significantly inclined relative to the binary plane, or at least enough to prevent the CBP from transiting \citep{Li2016}.
The evolution of these systems has been studied prior to the discovery of the Kepler CBPs using N-body dynamical models dominated by planetesimals composed of a mixture of rock and ice \citep{Quintana2006} or hydrodynamical models dominated by gas with embedded planets \citep{Artymowicz1994,Artymowicz1996,Gunther2002,ValBorro2006,Pierens2007,Pierens2008,Marzari2009}. The actual systems do not resemble the Earth in composition: the known CBPs typically have masses between roughly those of Neptune and Jupiter and have volatile-rich compositions. As such, hydrodynamical models have been used to characterize the Kepler CBPs that incorporate interactions between the growing planetary core and a gaseous disk \citep{Paardekooper2012,Meschiari2014,Kley2014,Kley2015,Bromley2015}.
When comparing the Kepler CBPs with their respective stability limits (e.g., HW99), the CBP community has remarked on the closeness of these bodies to this ``inner boundary'' and there is only
a small probability that the pile up of planets near the
stability limit is due to selection bias \citep{Li2016}. {Although the terminology is similar, the closeness to the inner stability boundary should not be conflated with the observed 3-day pile up in the Hot Jupiters from RV observations as the underlying physical mechanisms are likely different. This closeness in CBP systems can be quantitatively defined by: (1) a ratio of semimajor axes ($a_p/a_{c,HW}$) relative to the stability limit by HW99, (2) a spike in the distribution of planetary semimajor axes (log scale), or (3) the dynamical fullness of each system, where a dynamically full system will not allow for additional planets to be placed between the observed planet and the inner stellar binary. In this paper, we use the third definition because the first does not account for spacing with respect to Hill spheres and the second is not currently applicable given the small number of known CBP systems. As a result, we seek to better understand the transition to stability for CBPs, improve the historical formalism for stability, and address whether the Kepler CBPs could host additional planets on interior orbits. In order for such planets to exist, they must presently be on mutually inclined orbits as to not transit and matching this observational constraint is beyond the scope of this work. }
Our methods, the initial conditions for our simulations, definitions for our stability analysis, and assumptions are summarized in Section \ref{sec:methods}. In Section \ref{results}, we present the results of our numerical simulations, a comparison with the Kepler CBPs (at a population and individual level), and a discussion of how this work can be applied to future observations with TESS. We provide the conclusions of our work and compare our results with previous studies in Section \ref{sec:conc}.
\section{Methodology} \label{sec:methods}
\subsection{Numerical Setup}
Our simulations use a modified scheme within the popular \texttt{mercury} integration package \citep{Chambers2002} that has been designed for the efficient simulation of circumbinary systems. This modification allows for the integration of the inner binary orbit and an outer planet at different timescales while preserving the symplectic nature of the integration method. As a result, we find the largest integration step for the planet to be $\sim$2.5\% of the planetary Keplerian period. Our numerical scheme stops the simulation when an instability event occurs, which we define as an intersection with the binary orbit or when the radial distance of the planet to the more massive star exceeds 10 AU. {Using 10 AU from the more massive star as a distance cutoff is justified because the planets begin with small semimajor axes, which rules out such a large apastron distance, and planets that reach this distance are likely to exceed the respective escape velocity.}
A majority of our simulations use ideal initial conditions for the binary orbit, where the binary semimajor axis is 1 AU and the total mass $(M_A+M_B)$ of the stellar components is 1 M$_\odot$. Our runs consider a range of binary mass ratios ($\mu = M_B/(M_A+M_B)$) from 0.01 -- 0.5 in steps of 0.01 and include one additional case, $\mu = 0.001$, for a total of 51 steps. The eccentricity of the binary orbit varies from 0.0 -- 0.80 in steps of 0.01. Most of our integrations begin the binary orbit at periastron ($\lambda_{bin} = 0^\circ$) because previous investigations (HW99) have shown this assumption to produce a conservative estimate for the stability limit. However, we do investigate a small subset of runs to quantify how beginning the binary at apastron ($\lambda=180^\circ$) would change our results.
\subsection{Coplanar, Circular Planetary Orbits} \label{sec:stab}
In order to compare with previous results (i.e., HW99), we perform integrations to determine a critical semimajor axis $a_c$ for a Jupiter-mass planet that begins on an initially coplanar, circular orbit. {We define the stability limit as the critical semimajor axis ratio $a_c$ in units of the binary semimajor axis $a_{bin}$ and measure it by the smallest planetary semimajor axis where a planet is stable for all choices of initial mean anomaly, or phase (i.e., the lower critical orbit, \cite{Dvorak1989}), relative to the host binary orbit.} This definition is motivated by our models of gas giant CBPs that employ migration from a larger distance through interactions with the disk \citep{Pierens2008,Paardekooper2012,Meschiari2014,Kley2014,Kley2015,Bromley2015}. Such studies have shown that gas disk migration around tight binaries can occur in a similar manner as in single star systems, where gas drag acts to circularize planetary orbits. After the gas disk dissipates, the binary excites the eccentricities of close-in exoplanets leading to scattering events or expulsion from the system \citep[e.g.,][]{Silsbee2015,Kley2015,Thebault2015,Vartanyan2016}.
In order to determine $a_c$ consistently, given the above definition, we perform simulations over a grid of orbital parameters, where the total simulation time per integration is 10$^5$ binary orbits. Our grid of planetary orbital parameters, for each combination of the stellar $\mu$ and $e_{bin}$, vary the semimajor axis ratio ($a_p/a_{bin}$) from 1.01 to 5.0 in steps of 0.01 and the initial planetary Mean anomaly from $0^\circ - 180^\circ$ in steps of $2^\circ$.
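For concreteness, the grid just described can be enumerated directly. The sketch below (plain Python, with variable names of our own choosing) shows that it comprises $51 \times 81 \times 400 \times 91$ individual integrations, consistent with the $\sim$150 million simulations reported in Section~\ref{results}.

```python
# Parameter grid from the text: 51 mass ratios (0.01..0.5 plus 0.001),
# 81 binary eccentricities, 400 semimajor axis ratios, 91 initial phases.
mu_vals = [0.001] + [i * 0.01 for i in range(1, 51)]      # binary mass ratios
e_bin_vals = [i * 0.01 for i in range(0, 81)]             # e_bin = 0.00 .. 0.80
a_ratio_vals = [1.01 + 0.01 * i for i in range(400)]      # a_p/a_bin = 1.01 .. 5.00
mean_anom_vals = list(range(0, 181, 2))                   # initial mean anomaly (deg)

n_runs = (len(mu_vals) * len(e_bin_vals)
          * len(a_ratio_vals) * len(mean_anom_vals))      # total integrations
```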
HW99 performed their simulations using a similar definition for the stability limit but only include 8 initial phases ($0^\circ - 315^\circ$) for each test particle. Our simulations take advantage of a symmetry with respect to the initial phase and increase the number of trial phases because some initial values can become unstable in between the $45^\circ$ increments employed by HW99. A higher resolution is necessary to ensure that our definition for the stability limit is reliable.
\subsection{Stability Limits using the Hill Radius}\label{sec:Hill}
For very small values of both $\mu$ and $e_{bin}$, our simulations approach conditions consistent with Hill stability \citep{Szebehely1981,Gladman1993}. This type of analysis identifies the gravitational radius of influence that the secondary mass has on the tertiary mass through the Hill radius $R_H = a(\mu/3)^{1/3}$.
Parameterization using the Hill radius has also been used in the stability of planetary systems around single stars, but has been modified slightly to include the average semimajor axis between adjacent pairs of planets and the mass interior to the outermost body \citep{Chambers1996}. Using this formalism, we measure the dynamical fullness of the system. The mathematical definitions of the mutual Hill Radius, $R_{H,m}$, and dynamical spacing, $\beta$, between the $k,k+1$ planets with identical mass, $m$, are:
\begin{align}
R_{H,m} &= \frac{1}{2}(a_k + a_{k+1})\left(\frac{2m}{3M_T}\right)^{1/3}, \;{\rm and} \\
\beta &= \frac{a_{k+1} - a_k}{R_{H,m}}, \label{eq:beta}
\end{align}
where $M_T = M_A + M_B + m$ and represents the total mass interior to the outermost body. Our analysis uses this representation to evaluate whether a planet of identical mass could be placed at $a_c$ in addition to the observed CBP. Others have investigated planet packing for CBPs and determined that values of $\beta = 5 - 7$ would represent the minimum dynamical spacing necessary near the stability limit \citep{Kratter2014,Andrade2017}. {We use this formalism to measure the dynamical fullness of each system, where dynamically full systems relative to the stability limit are potential evidence for a pile-up in the CBPs.}
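As a minimal numerical sketch of the mutual-Hill-radius formalism above (Python; the planet masses and semimajor axes below are illustrative choices of ours, not fitted values): two Jupiter-mass planets at 2 and 3 AU around a total 1 $M_\odot$ binary give $\beta \approx 4.6$, just below the $\beta = 5$ -- $7$ packing threshold quoted above.

```python
M_JUP = 9.543e-4   # Jupiter mass in solar masses (approximate)

def dynamical_spacing(a_in, a_out, m_planet, m_binary):
    """beta = (a_out - a_in) / R_H,mutual for two equal-mass planets, with
    M_T = M_A + M_B + m the mass interior to the outermost body (Eq. 2)."""
    m_total = m_binary + m_planet
    r_hill = 0.5 * (a_in + a_out) * (2.0 * m_planet / (3.0 * m_total)) ** (1.0 / 3.0)
    return (a_out - a_in) / r_hill
```

For example, `dynamical_spacing(2.0, 3.0, M_JUP, 1.0)` evaluates to roughly 4.6, so a second Jupiter-mass planet at 3 AU would sit inside the minimum spacing needed near the stability limit.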
\subsection{Effects of the Binary Orbit on the Stability Limit} \label{sec:binary}
We use numerical simulations to identify the stability limit for a Jupiter-mass planet in an initially circular, coplanar orbit around a range of binary parameters. The results of such simulations can vary with the assumed binary orbit. Therefore, we first identify the range of variation we can expect when the binary begins at either periastron ($\lambda_{bin}=0^\circ$) or apastron ($\lambda_{bin}=180^\circ$).
Figure \ref{fig:bin0} illustrates how the maximum eccentricity (color-coded) of the planet varies with respect to the initial semimajor axis ratio ($a_p/a_{bin}$) and planetary Mean Anomaly when the binary begins at periastron. Each subplot varies the binary stellar parameters ($\mu,e_{bin}$), where the respective values are given in the upper right corner. Additionally the stability limit $a_c$ is identified by a horizontal cyan line and the value is given in the lower left corner. The white space denotes regions of parameter space that are unstable on the timescale of $10^5$ binary orbits.
The stability limit in the top row of Fig. \ref{fig:bin0} increases as the binary eccentricity, $e_{bin}$, increases. There are also increases in $a_c$ as $\mu$ increases (i.e., starting from the top row and going down a given column). However, the largest value of $a_c$ does not occur at the combination of the largest $\mu$ and $e_{bin}$; rather, it occurs when $e_{bin}$ is large (0.5) and $\mu$ is modest (0.1 -- 0.3). Also, the stability islands, which depend on mean motion resonances, are symmetric about 180$^\circ$ in the planetary Mean Anomaly. \cite{Deck2013} found similar results when investigating first-order resonance overlap in close two planet systems around a single star, where a similar dynamical environment exists. The symmetry justifies our choice to investigate only from $0^\circ - 180^\circ$ in the initial planetary Mean Anomaly in the more computationally expensive portion of our study (See Section \ref{results}).
In contrast, Figure \ref{fig:bin180} demonstrates how the stability limit $a_c$ changes when the binary begins at apastron ($\lambda_{bin}=180^\circ$). Similar trends are present, and most changes in $a_c$ between respective subplots are small ($<0.1\,a_{bin}$). The most drastic change occurs when $\mu = 0.5$ and $e_{bin} = 0.5$, where the difference in $a_c$ is 0.37. In order to produce conservative (and possibly more reliable) results, we begin the binary at periastron for the simulations used in the rest of our analysis.
\section{Results and Discussion} \label{results}
\subsection{Stability Limits Revisited} \label{sec:limits}
We perform a multitude of simulations\footnote{The results of our simulations are publicly available on zenodo.org as a compressed tar archive. See Section \ref{sec:tools} for details.} ($\sim$150 million) to improve the accuracy of the stability limit, $a_c$, for CBPs. We determine $a_c$ for a given combination of binary parameters, $\mu$ and $e_{bin}$, using a grid of simulations (e.g., Figs. \ref{fig:bin0} \& \ref{fig:bin180}), where we limit the initial planetary Mean Anomaly to 180$^\circ$ (see Section \ref{sec:binary}).
From these results, we analyze how $a_c$ varies at a given binary eccentricity, $e_{bin}$, as a function of the binary mass ratio. Figure \ref{fig:ac_fits} shows these results, where the color-code represents the binary mass ratio, $\mu$; the smallest value, $\mu = 0.001$, is excluded because its curve is significantly flatter relative to the rest of the points. Across this broad range of mass ratio, the value of $a_c$ can typically vary by $\sim$0.5 -- 1.0 $a_{bin}$. This variation changes with the binary eccentricity, and the median value is not proportional to the mean value. We overplot the median value (black points) with error bars indicating the upper and lower extremes in Fig. \ref{fig:ac_fits} to illustrate how the stability limit is affected by the binary mass ratio, $\mu$.
Most of the variation in the lower bound occurs for $\mu < 0.1$. If points with $\mu < 0.1$ were excluded, then a polynomial function could approximate $a_c$ statistically. HW99 included results for $\mu = 0.1 - 0.5$ and determined a quadratic polynomial to be appropriate, although their cut on $\mu$ was intended to exclude a regime that can be modeled using Hill stability.
In order to make a fair comparison with HW99, we plot in Figure \ref{fig:ac_ebin} the median values of the stability limit, $a_c$, using error bars to indicate the total range (maximum/minimum values) at a given binary eccentricity $e_{bin}$. In addition to the data points (blue), we also include the curves using the respective coefficients from \cite{Dvorak1989} (solid black), HW99 (dashed black), and those determined from our simulations (solid red). The coefficients and uncertainties are provided in Table \ref{tab:Coeff_alt}, where the values for \cite{Dvorak1989} and HW99 are both quoted from HW99 due to the possible errors in labeling noted by HW99.
\begin{deluxetable*}{lcccccccc}
\tablecaption{Coefficients for the Critical Semimajor Axis Using $e_{bin}$ \label{tab:Coeff_alt}}
\tablecolumns{4}
\tablehead{\colhead{} & \colhead{$C_1$} & \colhead{$C_2$} & \colhead{$C_3$} }
\startdata
\cite{Dvorak1989} & 2.37 & 2.76 & -1.04 \\
HW99 & 2.278$^{+0.008}_{-0.008}$ & 3.824$^{+0.33}_{-0.33}$ & -1.71$^{+0.10}_{-0.10}$ \\
this work & 2.170$^{+0.017}_{-0.017}$ & 4.017$^{+0.10}_{-0.10}$ & -1.75$^{+0.14}_{-0.14}$ \\
\enddata
\tablecomments{The coefficients (and uncertainties) for $C_1$, $C_2$, and $C_3$ from previous studies are listed that use a quadratic fitting function ignoring $\mu$, $a_c/a_{bin} = C_1 + C_2e_{bin} + C_3e_{bin}^2$. We use the same function in this work but use the maximum, median, and minimum values of $a_c$ (e.g., Fig. \ref{fig:ac_ebin}).}
\end{deluxetable*}
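As a concrete illustration, the quadratic fitting function from Table \ref{tab:Coeff_alt} can be evaluated directly. The short Python sketch below uses this work's coefficients; it is a convenience for the reader and not part of the released tools.

```python
def a_crit_quadratic(e_bin, coeffs=(2.170, 4.017, -1.75)):
    """Critical semimajor axis ratio a_c/a_bin from the quadratic fit
    a_c/a_bin = C1 + C2*e_bin + C3*e_bin**2 (default: this work's
    coefficients; note the fit marginalizes over the mass ratio mu)."""
    c1, c2, c3 = coeffs
    return c1 + c2 * e_bin + c3 * e_bin ** 2
```

For a circular binary ($e_{bin}=0$) this reduces to $a_c = C_1\,a_{bin} \approx 2.17\,a_{bin}$, while for $e_{bin}=0.5$ the stability limit grows to $\approx 3.74\,a_{bin}$.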
Upon inspection of Fig. \ref{fig:ac_ebin} and Table \ref{tab:Coeff_alt}, we reaffirm the previous results, where most of our coefficients overlap (within errors) with those of HW99. However, both fits are applicable at a statistical distribution level and are not very accurate individually due to the effective marginalization over the binary mass ratio, $\mu$. In Fig. \ref{fig:ac_ebin}, we also mark the expected locations of the mean motion resonances the planet would encounter with the binary orbit{, which act to destabilize CBPs \citep{Mudryk2006}}. \cite{Doolin2011}, \cite{Quarles2016}, and others have shown through large parameter space studies that these resonances produce unstable gaps, and stability islands can exist at locations approximately half-way between the resonances.
Another method utilized by HW99 is to allow both the binary mass ratio, $\mu$, and the binary eccentricity, $e_{bin}$, to vary as quadratic functions. We take a similar approach (using all our data) and provide our results alongside those determined by HW99 in Table \ref{tab:Coeff}. The reduced chi-square statistic is provided using the HW99 coefficients as well as our own. We provide an additional fitting where we make the replacement $\mu \rightarrow \mu^{1/3}$, as motivated by stability studies using the Hill radius (i.e., planet packing around a single star). Both of our fittings produce a lower chi-square statistic than HW99, although our Fit 2 is likely to be biased in that a large portion of our simulations ($\sim$40\%) have a binary mass ratio within the Hill regime.
\begin{deluxetable*}{lcccccccc}
\tablecaption{Coefficients for the Critical Semimajor Axis \label{tab:Coeff}}
\tablecolumns{9}
\tablehead{\colhead{} & \colhead{$C_1$} & \colhead{$C_2$} & \colhead{$C_3$} & \colhead{$C_4$} & \colhead{$C_5$} & \colhead{$C_6$} & \colhead{$C_7$} & \colhead{$\chi^2_\nu$}}
\startdata
HW99 & 1.60$^{+0.04}_{-0.04}$ & 5.10$^{+0.05}_{-0.05}$ & -2.22$^{+0.11}_{-0.11}$ & 4.12$^{+0.09}_{-0.09}$ & -4.27$^{+0.17}_{-0.17}$ & -5.09$^{+0.11}_{-0.11}$ & 4.61$^{+0.36}_{-0.36}$ & 2015.97\tablenotemark{a}\\
Fit 1 & 1.48$^{+0.01}_{-0.01}$ & 3.92$^{+0.06}_{-0.06}$ & -1.41$^{+0.06}_{-0.06}$ & 5.14$^{+0.10}_{-0.10}$ & 0.33$^{+0.19}_{-0.19}$ & -7.95$^{+0.15}_{-0.15}$ & -4.89$^{+0.44}_{-0.44}$ & 876.25\\
Fit 2 & 0.93$^{+0.02}_{-0.02}$ & 2.67$^{+0.08}_{-0.08}$ & -0.25$^{+0.06}_{-0.06}$ & 3.72$^{+0.06}_{-0.06}$ & 2.25$^{+0.12}_{-0.12}$ & -2.72$^{+0.05}_{-0.05}$ & -4.17$^{+0.15}_{-0.15}$ & 450.55\tablenotemark{b}\\
\enddata
\tablecomments{The coefficients and uncertainties for $C_1 - C_7$ from HW99 are listed using the fitting formula, $a_c/a_{bin} = C_1 + C_2e_{bin} + C_3e_{bin}^2 + C_4\mu + C_5e_{bin}\mu + C_6\mu^2 + C_7e_{bin}^2\mu^2$. We perform two separate fits (Fit 1 and Fit 2) using all our data and list the resulting reduced chi-square value, $\chi^2_\nu$.}
\tablenotetext{a}{This value was calculated using the coefficients listed in HW99 and our larger dataset.}
\tablenotetext{b}{This fit modifies the equation where $\mu \rightarrow \mu^{1/3}$ in order to better match the form with the Hill radius when $e_{bin}$ and $\mu$ are small.}
\end{deluxetable*}
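For convenience, the two-parameter fitting formula in Table \ref{tab:Coeff} can also be evaluated directly. The Python sketch below encodes the coefficients for Fit 1 and Fit 2 (the latter with the $\mu \rightarrow \mu^{1/3}$ substitution); it is illustrative and not part of the released tools.

```python
# Coefficients C1..C7 from the table: Fit 1 uses mu directly,
# Fit 2 is evaluated with the substitution mu -> mu**(1/3).
FIT1 = (1.48, 3.92, -1.41, 5.14, 0.33, -7.95, -4.89)
FIT2 = (0.93, 2.67, -0.25, 3.72, 2.25, -2.72, -4.17)

def a_crit_full(e_bin, mu, coeffs=FIT1, hill=False):
    """a_c/a_bin = C1 + C2 e + C3 e^2 + C4 m + C5 e m + C6 m^2 + C7 e^2 m^2,
    where m = mu (Fit 1, HW99 form) or m = mu**(1/3) (Fit 2, Hill-motivated)."""
    m = mu ** (1.0 / 3.0) if hill else mu
    c1, c2, c3, c4, c5, c6, c7 = coeffs
    e = e_bin
    return (c1 + c2 * e + c3 * e ** 2 + c4 * m + c5 * e * m
            + c6 * m ** 2 + c7 * e ** 2 * m ** 2)
```

For example, a Kepler-16-like binary ($e_{bin} \approx 0.16$, $\mu \approx 0.23$) yields $a_c/a_{bin} \approx 2.8$ with Fit 1.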
Although we find good agreement statistically with HW99, the final result represents CBPs at the population level, and there are not enough detections made thus far to justify a completely statistical treatment. Therefore, we suggest a different approach, which is to think of a stability surface (i.e., two-dimensional) rather than a stability limit. In this interpretation, we can obtain much higher accuracy at an individual system level through grid interpolation of our results.
Figure \ref{fig:full_space} illustrates how our dataset can be used to make such a map\footnote{We provide python tools on GitHub to query our dataset and reproduce all of our figures. Specifically, there is a routine that returns $a_c$ through grid interpolation for a given combination of $\mu$ and $e_{bin}$. See Section \ref{sec:tools} for details.}. Each point is color-coded to the stability limit, $a_c$, determined through a smaller grid of simulations (e.g., Fig. \ref{fig:Kepler_ac}). Additionally, in Fig. \ref{fig:full_space}, we have over-plotted (white dots) the locations corresponding to the stellar parameters of the Kepler CBPs. The smallest value of $a_c$ is 1.31 and is located where one would expect. Interestingly, the largest value of $a_c$ is 4.49 and does not occur at the largest value of $\mu$ that we consider. HW99 also observed a similar feature ($a_c = 4.2 - 4.3$), but their range in $e_{bin}$ and resolution did not allow them to identify this location accurately.
\subsection{Comparison to the Kepler CBPs -- Populations} \label{sec:pop}
The Kepler mission has uncovered 9 CBP systems, whose stellar and planetary properties vary widely; comparing them statistically in terms of a stability limit may therefore not be reliable. \cite{Li2016} examined how the mutual inclination of CBPs relative to the binary orbital plane would affect the probability of observing a pile-up of the Kepler CBPs at the stability limit. Their study demonstrated that different conclusions can be drawn depending on whether Kepler-1647 is included in the sample of CBPs, and that more systems need to be observed in order to distinguish between their two scenarios.
As a result, we compare each of the Kepler CBPs at a system-by-system level using the values of the critical semimajor axis, $a_c$, determined in Section \ref{sec:limits}. We also note that our analysis represents a conservative estimate of the stability limit, as our determined limits for $a_c$ could decrease with an increased mutual inclination of the CBP \citep{Doolin2011,Li2016} or if the stellar binary begins closer to apastron rather than periastron (see Section \ref{sec:binary}). Table \ref{tab:Star_param} summarizes the observationally determined stellar masses and orbital parameters of each of the known Kepler CBPs.
\begin{deluxetable*}{lcccccccc}
\tablecaption{Stellar Parameters for the Kepler CBPs \label{tab:Star_param}}
\tablecolumns{9}
\tablehead{\colhead{} & \colhead{$M_A$ ($M_\odot$)} & \colhead{$M_B$ ($M_\odot$)} & \colhead{$\mu$} & \colhead{$a_{bin} (AU)$} & \colhead{$e_{bin}$} & \colhead{$\omega$ (deg.)} & \colhead{$MA$ (deg.)} & Ref.}
\startdata
Kepler-16 & 0.6897 & 0.20255 & 0.2270 & 0.22431 & 0.15944 & 263.464 & 188.888 & \cite{Doyle2011} \\
Kepler-34 & 1.0479 & 1.0208 & 0.4934 & 0.22882 & 0.52087 & 71.437 & 228.760 & \cite{Welsh2012} \\
Kepler-35 & 0.8877 & 0.8094 & 0.4769 & 0.17617 & 0.1421 & 89.1784 & 2.9021 & \cite{Welsh2012} \\
Kepler-38 & 0.949 & 0.249 & 0.208 & 0.1469 & 0.1032 & 268.68 & 181.32 & \cite{Orosz2012a} \\
Kepler-47 & 0.957 & 0.342 & 0.263 & 0.08145 & 0.0288 & 226.253 & 310.818 & \cite{Orosz2012b} \\
Kepler-64 & 1.528 & 0.408 & 0.211 & 0.1744 & 0.2117 & 219.7504 & 251.558 & \cite{Schwamb2013} \\
Kepler-413 & 0.820 & 0.5423 & 0.398 &0.10148 & 0.0365 & 279.54 & 169.5328 & \cite{Kostov2014} \\
Kepler-453 & 0.944 & 0.1951 & 0.171 &0.185319 & 0.0524 & 263.05 & 187.7059 & \cite{Welsh2015} \\
Kepler-1647 & 1.2207 & 0.9678 & 0.4422 &0.1276 & 0.1602 & 300.5442 & 139.0749 & \cite{Kostov2016} \\
\enddata
\tablecomments{The stellar parameters ($M_A$, $M_B$, $\mu$, $a_{bin}$, $e_{bin}$, $\omega$, and $MA$) of the Kepler CBPs are listed. The definitions of these orbital parameters carry their usual meaning {from the exoplanet literature}.}
\end{deluxetable*}
To measure the proximity of the Kepler CBPs to our determined stability limit, we first determine $a_c$ through a grid interpolation of Fig. \ref{fig:full_space} using the $\mu$ and $e_{bin}$ values given in Table \ref{tab:Star_param}. The result of this interpolation for each CBP is given in Table \ref{tab:pl_param}. The observed planetary semimajor axis $a_p$ is also provided along with a measure of the percentage difference between $a_p$ and $a_c$. The comparison using percent difference shows that some CBPs are much closer to $a_c$ than others, where the average difference between $a_p$ and $a_c$ is $\sim$42\%. We initially classify systems with a percent difference much lower than 42\% as residing at the stability limit, and those with a much higher value as not at the stability limit.
Another method for determining the proximity to the stability limit uses formalisms from planet packing studies \citep[e.g.,][]{Kratter2014} and requires the calculation of the mutual Hill radius, R$_{H,m}$ (see Section \ref{sec:Hill}). For this calculation, we propose that planets classified as residing at the stability limit should not allow an additional equal-mass planet to exist on an interior orbit at our determined $a_c$. Along with R$_{H,m}$, we also determine the dynamical spacing, $\beta_c$, between an equal-mass planet at $a_c$ and the observed planet at $a_p$ in Table \ref{tab:pl_param}.
\cite{Kratter2014} determined that stability is possible with $\beta_c = 5 - 7$, where we define in this analysis that $\beta_c \leq 7$ does not allow for an interior equal-mass planet to exist at $a_c$. Using this criterion, we find that 5 out of 9 CBP systems (55\%) do allow for an interior equal-mass planet. {However, the previous study \citep{Kratter2014} did not take the binary eccentricity into account and we perform a limited suite of N-body simulations to confirm the above estimate for the Kepler CBPs.}
{In these simulations, we introduce an equal-mass planet with a semimajor axis between $a_{bin}$ and $q_p (= a_p(1 - e_p))$ with steps of 0.001 AU. The binary can induce a forced eccentricity on the inner planet \citep[e.g.,][]{Mudryk2006}. As a result, we choose the initial eccentricity vectors of both planets to be aligned ($\omega = 0^\circ$) with the binary orbit and vary the magnitude of the eccentricity vector from 0.0 - 0.50 in steps of 0.01. We follow a similar relative phase setup from \cite{Gladman1993}, where each planet pair starts 180$^\circ$ out of phase from one another and the inner planet begins at periastron ($MA = 0^\circ$). In order to identify robust regions of stability, these simulations are integrated up to 500 million orbits for a planet at $a_c$ (see Table \ref{tab:pl_param} for values of $T_c$). The integration step is adjusted for each simulation at 2.5\% of the initial Keplerian period for the inner planet.}
{Our full N-body simulations justify our criterion, $\beta_c > 7$, for additional equal-mass planets to orbit stably within 5 of the Kepler CBP systems. Figure \ref{fig:beta_space} illustrates the relative distribution of the Kepler CBPs through their respective values of $\beta_c$ as concentric circles, where the origin denotes the location that is exactly at the stability limit. In this schematic, the dynamical separation, $\beta_c$, from the stability limit does \emph{not} appear to cluster at any particular value. If we choose the inner planet mass to be Earthlike, then our values of $\beta_c$ would increase by $\sim$2$^{1/3}$ and potentially allow for an additional planet in Kepler-35. We emphasize that we are \emph{not} confirming the existence of any planets interior to the observed Kepler CBPs. If such planets do exist, then they must be on sufficiently inclined orbits at the present epoch to have avoided detection.}
\begin{deluxetable*}{lcccccc}
\tablecaption{Stability Limits for the Kepler CBPs \label{tab:pl_param}}
\tablecolumns{7}
\tablehead{\colhead{} & \colhead{$a_{c}$ (AU)} & \colhead{$T_c$ (days)} & \colhead{$a_p$ (AU)} & \colhead{\% diff} & \colhead{R$_{H,m}$ (AU)} & \colhead{$\beta_{c}$} }
\startdata
Kepler-16b & 0.6050 & 182.0 & 0.7048 & 15.24 & 0.0405 & 2.4610 \\
Kepler-34b & 0.8118 & 185.7 & 1.0896 & 29.220 & 0.0387 & 7.1703 \\
Kepler-35b & 0.4795 & 93.09 & 0.6035 & 22.89 & 0.0196 & 6.3175 \\
Kepler-38b & 0.4328 & 95.02 & 0.4644 & 7.047 & 0.0264 & 1.1968 \\
Kepler-47b & 0.1848 & 25.46 & 0.2956 & 46.13 & 0.00820 & 13.519 \\
Kepler-64b & 0.5368 & 103.2 & 0.634 & 16.6 & 0.0327 & 2.9697\\
Kepler-413b & 0.2389 & 36.54 & 0.353 & 38.6 & 0.0136 & 8.3487 \\
Kepler-453b & 0.4184 & 92.62 & 0.7903 & 61.53 & 0.0184 & 20.152 \\
Kepler-1647b & 0.3497 & 51.06 & 2.72 & 154 & 0.117 & 20.275 \\
\enddata
\tablecomments{Calculated values of $a_c$, $T_c$, $\% \;{\rm diff}$, R$_{H,m}$ and $\beta_c$ are listed for each of the Kepler CBPs, where the $a_p$ values are drawn from the discovery papers (see Table \ref{tab:Star_param}). We use the definition of percent difference as $\% \;{\rm diff} = 2|a_p - a_c|/(a_p + a_c)$.}
\end{deluxetable*}
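The quantities in Table \ref{tab:pl_param} can be reproduced with a few lines of code. In the Python sketch below (illustrative only), the mutual Hill radius follows the standard convention $R_{H,m} = [(m_1+m_2)/(3M_*)]^{1/3}\,(a_1+a_2)/2$, with $M_*$ taken as the total binary mass; this convention is an assumption here, since the planet masses are not listed in the tables above.

```python
def pct_diff(a_p, a_c):
    """Percent difference, 2|a_p - a_c| / (a_p + a_c), expressed in percent."""
    return 200.0 * abs(a_p - a_c) / (a_p + a_c)

def mutual_hill_radius(m1, m2, a1, a2, m_star):
    """Mutual Hill radius R_{H,m}; planet masses m1, m2 in the same units
    as m_star, semimajor axes in AU. M_* is assumed to be the total
    binary mass (Gladman 1993 / Kratter & Shannon 2014 style convention)."""
    return ((m1 + m2) / (3.0 * m_star)) ** (1.0 / 3.0) * (a1 + a2) / 2.0

def beta_c(a_p, a_c, r_hm):
    """Dynamical spacing between an equal-mass planet at a_c and the
    observed planet at a_p, in units of the mutual Hill radius."""
    return (a_p - a_c) / r_hm
```

Plugging in the Kepler-16b row ($a_c = 0.6050$ AU, $a_p = 0.7048$ AU, and the tabulated R$_{H,m} = 0.0405$ AU) reproduces the listed \% diff of 15.24 and $\beta_c \approx 2.46$.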
\subsection{Comparison to the Kepler CBPs -- Individual Systems}
The Kepler CBPs are a snapshot of the larger population of CBPs, where we want to investigate them at a system-by-system level. We identify the ways that the initial conditions could alter our determination of the critical semimajor axis ratio, $a_c$, by choices in the: initial phase of the binary orbit, initial phase of the planetary orbit, or the initial eccentricity of the planetary orbit.
We examine the stability of systems \textit{similar} to the Kepler CBPs in the binary parameters ($\mu,e_{bin}$) using the results from our simulations in Fig. \ref{fig:full_space}. Figure \ref{fig:Kepler_ac} illustrates the variation of stability over $10^5$ binary orbits with respect to variations in the initial semimajor axis ratio and Mean Anomaly of the planetary orbit, when the binary begins at periastron ($\lambda_{bin}=0^\circ$). We emphasize these assumptions because the actual Kepler CBPs will likely not adhere to them and shifts in the initial phase may be necessary for 1:1 comparisons.
In Fig. \ref{fig:Kepler_ac}, most systems are not strongly dependent on the choice of the planetary Mean Anomaly (except Kepler-34, Kepler-38, and Kepler-64), and our definition of stability (see Section \ref{sec:stab}) appears to be robust when stability islands exist at specific ranges in the Mean Anomaly of the planetary orbit. The values of $a_c$ at these points are consistent (within $\sim$1\%) with those given in Table \ref{tab:pl_param}, after multiplying by $a_{bin}$, that were determined through a grid interpolation.
We go beyond our ideal setup (excluding Kepler-1647, see \cite{Kostov2016} for a stability map) that makes assumptions on the binary and planetary orbit. For this, we evaluate the variation of stability considering the actual host binary orbit (see Table \ref{tab:Star_param}) with a range of initial eccentricity (0 -- 0.5 in steps of 0.01) and semimajor axis ($a_{bin}$ -- 1.5 AU in steps of 0.001 AU) for a Jupiter-mass planet. The planet begins along the reference node so that $\omega = \Omega = MA = 0^\circ$. We plot the initial conditions that are stable for at least 100,000 years in Figure \ref{fig:Kepler_stab} using a color-code, the location (green dot) of the observed Kepler CBP parameters, and over-plot the approximate stability boundary for 3 methods: our Fit 1 (cyan, see Table \ref{tab:Coeff}), our interpolation (yellow, see Table \ref{tab:pl_param}), and HW99 (violet, see Table \ref{tab:Coeff}) using mean values where applicable. The stable initial conditions are color-coded based upon the range of eccentricity ($\Delta e = e_{max}-e_{min}$) a planet attains over the simulation time on a base-10 logarithmic scale. \cite{Ramos2015} and \cite{Giuppone2017} have used a similar metric because it highlights dynamical regions affected by resonant interactions, where our definition differs from theirs by a factor of 2 for clarity. Our results from Section \ref{sec:pop} investigating whether interior planets could be possible are also shown as gray squares, where those simulations used planet pairs more similar to the actual mass of the Kepler CBPs.
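The eccentricity-range diagnostic used for the color-coding can be stated compactly. The Python snippet below is a sketch of the metric as defined here, not the released analysis code.

```python
import math

def delta_e_metric(e_max, e_min):
    """Base-10 logarithm of the eccentricity range, Delta e = e_max - e_min,
    attained by the planet over the simulation; -inf marks an orbit whose
    eccentricity does not vary at all."""
    de = e_max - e_min
    return math.log10(de) if de > 0.0 else float("-inf")
```

Small (very negative) values of this metric pick out quiescent orbits, while values near zero flag strong, typically resonant, eccentricity excitation.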
For the stability boundary, we assume that a critical pericenter distance, $q_c$, exists for an eccentric orbit that corresponds approximately to the critical semimajor axis for a circular orbit \citep[i.e.,][]{Popova2016}. The mathematical expression that we use to approximate the critical eccentricity, $e_c$, is:
\begin{align}
e_c &= 0.8\left(1 - \frac{a_c}{a_p}\right),
\end{align}
where $a_c$ is the critical semimajor axis (in AU) derived via each method, and $a_p$ is the planetary semimajor axis in AU. The 0.8 factor in our equation is arbitrary, but we found that using this value consistently improves the fit of the upper boundary (high values of $e_p$) of stability for most cases. Our interpolation method (yellow curve) in Fig. \ref{fig:Kepler_stab} typically agrees well with the innermost stable circular orbit (within $\sim$1\%). The other 2 methods (Fit 1 \& HW99) also agree within their error limits, although a substantial fraction within the error range exists in a region of unstable parameter space.
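The mapping from a circular-orbit stability limit to an eccentric-orbit boundary can be sketched in a few lines (Python, illustrative only; the clamp at zero handles planets interior to $a_c$, a choice made here for convenience):

```python
def e_crit(a_c, a_p, factor=0.8):
    """Critical eccentricity e_c = factor * (1 - a_c/a_p), using the
    empirical factor of 0.8 adopted above; clamped at zero when a_p <= a_c."""
    return max(0.0, factor * (1.0 - a_c / a_p))
```

For Kepler-16b ($a_c = 0.6050$ AU, $a_p = 0.7048$ AU) this gives $e_c \approx 0.11$.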
\subsection{CBPs in Context: Observations with TESS}
With a sample of only 10 planets, one of which is already an outlier in terms of orbital separation (Kepler-1647b), interpretations of the available data--such as the proposed pile-up of CBPs at the dynamical stability limit--may be affected by observational bias. Increasing the sample by a factor of two would be very useful. Increasing it by a factor of ten would be fantastic and, with the help of the tools we develop here, would enable comprehensive statistical studies for or against the potential pile-up.
An order of magnitude increase in the number of known CBPs will indeed be possible with the TESS mission by using a novel method for CBP detection based on the occurrence of multiple transits during the same conjunction. This method has already been demonstrated for the case of two such transits of Kepler-1647b, where \cite{Kostov2016} estimated a planet period within 5\% of the true period by combining radial velocity measurements with transit timing--independently of the full photodynamical solution of the system. Similar transits can easily occur within the 30-day all-sky observing window of TESS.
TESS will observe the entire sky for at least 30 days, and continuously measure the brightness of $\sim$20 million stars (including $\sim$500,000 eclipsing binaries) brighter than $R\sim15$ with mmag precision \citep{Sullivan2015}. Based on the CBP results from Kepler, we expect the CBP yield of TESS to be a few hundred planets similar to Kepler's (Kostov et al in prep). The tools and methods we describe here will be directly applicable for both estimating the stability of each new CBP candidate during the initial detection phase, as well as for detailed dynamical investigations of the entire sample after the comprehensive photodynamical characterization of all planets. For example, our method would allow rapid identification ($<$1 second) of the likelihood that a candidate is a false positive--and thus immediately guide follow-up efforts--based on stability criteria and dynamical packing. In addition, if any TESS CBP system exhibits extra transits, not associated with either the binary or the detected planets, by applying the methodology presented here we will be able to rule out the orbital parameter space available to additional planets in the respective systems.
{\subsection{Numerical Tools for the Community} \label{sec:tools}
We perform a multitude of simulations into order to determine the most general and reliable stability limit given a set of binary parameters ($\mu$, $e_{bin}$). The results of these simulations are available through \texttt{GitHub}\footnote{\url{https://github.com/saturnaxis/CBP_stability}} and \texttt{Zenodo}\footnote{\url{http://doi.org/10.5281/zenodo.1174228}}. The \texttt{GitHub} repository contains scripts to identify the stability limit, $a_c$, and reproduce the figures contained in this paper using \texttt{Matplotlib} \citep{Hunter2007,Droettboom2016}. The determination of $a_c$ is not limited to the binary parameters used to make Fig. \ref{fig:full_space}, but can be interpolated using routines from \texttt{Scipy} \citep{Jones2001} in Python or other programming languages \citep{Press1992}.}
{The full dataset is available as a compressed tar archive on \texttt{Zenodo}. The archive contains text files that are delineated by the assumed binary parameters ($\mu$, $e_{bin}$) in the filenames. Python scripts to manipulate the dataset without extracting all the files are available in the \texttt{GitHub} repository. Each comma-delimited file in the archive lists the results of a given simulation, where the columns are the initial semimajor axis ratio, the initial planetary phase in degrees, the maximum planetary eccentricity attained, the minimum eccentricity attained, and the collision/escape time in years. For initial conditions that survived the full simulation time, a value of $10^5$ yr is reported in the final column.}
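As a dependency-free illustration of the grid-interpolation step (the released \texttt{GitHub} routines use \texttt{Scipy}), a bilinear lookup on a regular $(\mu, e_{bin})$ grid can be written as follows. The grid values below are placeholders for illustration, not numbers from the actual dataset.

```python
from bisect import bisect_right

def bilinear_ac(mu, e_bin, mu_grid, e_grid, ac):
    """Bilinearly interpolate the stability surface a_c(mu, e_bin);
    ac[i][j] holds a_c/a_bin at (mu_grid[i], e_grid[j]), grids ascending."""
    # Bracketing indices, clamped so (i, i+1) and (j, j+1) stay in range.
    i = min(max(bisect_right(mu_grid, mu) - 1, 0), len(mu_grid) - 2)
    j = min(max(bisect_right(e_grid, e_bin) - 1, 0), len(e_grid) - 2)
    tx = (mu - mu_grid[i]) / (mu_grid[i + 1] - mu_grid[i])
    ty = (e_bin - e_grid[j]) / (e_grid[j + 1] - e_grid[j])
    return ((1 - tx) * (1 - ty) * ac[i][j] + tx * (1 - ty) * ac[i + 1][j]
            + (1 - tx) * ty * ac[i][j + 1] + tx * ty * ac[i + 1][j + 1])

# Placeholder 2x2 grid (illustrative values only, not the dataset)
MU, EB = [0.1, 0.5], [0.0, 0.5]
AC = [[2.2, 3.6], [2.8, 3.9]]
```

Queries at grid nodes return the tabulated values exactly, while intermediate $(\mu, e_{bin})$ combinations are blended linearly in both directions.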
\section{Conclusions} \label{sec:conc}
The number of known circumbinary planets (CBPs) is currently small ($\sim$10), but the current methods to determine the proximity of these CBPs to the stability limit of their host stars are statistical (HW99). In this paper, we perform a multitude of numerical simulations ($\sim$150 million) to better understand the stability surface of CBPs as a function of stellar mass ratio, $\mu$, and eccentricity, $e_{bin}$. We provide open-source python software for the community to access and make use of our simulations, specifically a python script that can interpolate our results for the \textit{stability surface} for CBP candidates derived from photometric planet surveys.
Using our numerical tools, we devise a grid interpolation method that uses the stability surface to accurately characterize the inner limits of stability with respect to $\mu$ and $e_{bin}$ (see Figure \ref{fig:full_space}). We compare our derived stability limits to the previous study by \cite{Holman1999} for completeness and find good agreement, within errors. The reduced chi-square values of our fits are smaller than those of HW99, which is likely a result of the increased resolution. We find that replacing $\mu \rightarrow \mu^{1/3}$ provides a better fit due to the weak dependence on the stellar mass ratio \citep{Szebehely1981,Holman1999}. However, this result is likely biased due to the large number of simulations we performed ($\sim$40\%) where $\mu \lesssim 0.2$ and Hill stability would be more applicable. The largest values of the critical semimajor axis, $a_c$ in units of $a_{bin}$, occur for large binary eccentricity ($e_{bin}\sim 0.8$) and a more modest stellar mass ratio $\mu \sim 0.18$. {Recently, \cite{Lam2018} performed a similar study using machine learning through a deep neural network (DNN), where we find good agreement between the studies (typically within 5\%) when $\mu \gtrsim 0.05$ and much larger disagreement (up to $\sim$33\%) for smaller $\mu$. They did not train for $\mu < 0.05$, and thus one should not use their DNN on such systems (D. Kipping, private communication).}
We apply 3 different methods to estimate the stability limit and compare to numerical simulations that take a wide range of values in the initial semimajor axis and eccentricity of the planet into account. The derived stability limits for $a_c$ agree well (within $\sim$1\%) when considering either an ideal or more realistic architecture for the known Kepler CBPs. The derived limits for $a_c$ can also be generalized to include eccentric planetary orbits by considering a proportional critical eccentricity, $e_c$.
Our analysis also finds that {55\% of CBP systems from Kepler could host another equal-mass planet closer to $a_c$ on a coplanar orbit using numerical simulation and a planet packing framework \citep{Kratter2014}}. We consider this to be a conservative estimate because smaller values of $a_c$ are possible if the interior planet is highly misaligned (or even retrograde) relative to the binary orbital plane \citep{Doolin2011,Li2016}. As a result, we do \textit{not} find strong evidence for a pile-up near the stability limit for the Kepler CBP systems (see Table \ref{tab:pl_param}), especially considering the observing bias toward the discovery of small semimajor axis planets using conventional methods.
However, we do find that most ($\sim$90\%) of the Kepler CBP host binary eccentricities are $<$0.25 and have similar stability limits ($a_c = 2.3 - 3.1\,a_b$). {The dynamical spacing, $\beta_c$, is larger than 7 mutual Hill radii for the systems that could host an interior planet on a coplanar orbit, which indicates a need for more in-depth studies (Kepler-34, Kepler-413, Kepler-47, Kepler-453, \& Kepler-1647).} Although the sample of confirmed Kepler CBPs is limited, observations from TESS are expected to substantially increase the sample, where we can then identify more robustly any trends within the CBP population in relation to the stability surface.
\acknowledgments
{We thank the anonymous referee for providing helpful comments that improved the overall quality and clarity of the manuscript. The simulations presented here were performed using the OU Supercomputing Center for Education \& Research (OSCER) at the University of Oklahoma (OU). S.S. would like to thank Zdzislaw Musielak and Alex Weiss for their continued support in the exoplanetary research.}
\bibliographystyle{apj}
\section{Introduction\label{sec:Introduction}}
Intensely investigated in the last few decades, the multi-scale dynamical
process called \emph{aging} is widely observed in glassy systems subject
to a change of an external parameter, e.g. a thermal quench. While
spin-glasses~\cite{Fisher88a,Nordblad97,Vincent07,Guchhait15}, colloidal
suspensions~\cite{Hunter12}, vortices in superconductors~\cite{Nicodemi01},
magnetic nanoparticles in a ferrofluid~\cite{Jonsson00} and ecosystems
\cite{Becker14,Andersen16} may have little in common in terms of
microscopic variables and interactions, strong similarities emerge
in their aging phenomenology. For example, one point averages feature
a logarithmic time dependence~\cite{Amir12}, which entails an asymptotically vanishing rate
of change of the corresponding observables and clarifies
why aging systems deceptively appear to be in equilibrium for observation times shorter
than their age. Secondly, two-time averages such as correlation and
response functions often possess an approximate dependence on the single
scaling variable $t/t_{{\rm w}}$~\cite{Sibani10}. Interestingly,
this property is shared by the probability that a species is extant at times $t_{{\rm w}}$
and $t>t_{{\rm w}}$ in a model of biological evolution~\cite{Andersen16}.
Thermal relaxation models associate the multi-scaled nature of aging
processes to a hierarchy of metastable components of configuration
space~\cite{Palmer84,Hoffmann88,Sibani89}, often described as nested `valleys'
of an energy landscape. Local thermal equilibration is described in
terms of time dependent valley occupation probabilities~\cite{Sibani93},
which are controlled by transition rates over the available `passes'.
When applied to a hierarchical structure, such description gradually
coarsens over time as valleys of increasing size reach equilibrium.
That barrier crossings are connected to
record values in time series of sampled energies~\cite{Dall03,Boettcher05}
is a central point in record
dynamics (RD), a coarse-grained description of aging which uses the statistics of non-equilibrium events called \emph{quakes} to
describe aging in different settings~\cite{Anderson04,Sibani06,Sibani13,Sibani13a}.
In connection with spin-glasses, RD has produced predictions
describing Thermo-Remanent
Magnetization (TRM) data~\cite{Sibani06} and explaining their
observed \emph{sub-aging} behavior~\cite{Sibani10}, i.e. their deviation
from $t/t_{{\rm w}}$ scaling. In this work we explicitly check its basic assumptions
and use it
to provide a different perspective
on an iconic model of glassy behavior,
the Edwards-Anderson (EA) spin-glass~\cite{Edwards75}.
Usually more reliant on system-specific details than their more abstract
configuration-space counterparts, real-space models often build on
the properties of domains whose time dependent linear size $l(T,t)$
characterizes the aging process, see e.g.~\cite{Jonsson00,Berthier02}.
Independent of
the mechanism assumed for domain growth, degrees of freedom belonging
to the same domain are assumed to fluctuate around their thermal equilibrium
state, while those located in different domains have, for a fixed time scale, frozen relative
orientations. The functional form of $l(T,t)$ can be extracted from
simulational data using a four-point equilibrium correlation function~\cite{Berthier02}.
Specifically in the spin glass droplet model~\cite{Fisher88a}, domains
are defined in terms of projections onto
the two available ground states. Since the time growth of $l(T,t)$
minimizes the free energy by decreasing the domain wall length, the
droplet model views domain growth in a spin glass as homologous to
the scale-free coarsening process of a ferromagnet
at its critical temperature.
Note however that,
while the interior of a ferromagnetic domain only harbors
local excitations of the ground state, analyses of small short-ranged
spin glass systems~\cite{Sibani94} indicate that each domain accommodates
a multitude of metastable configurations. The same conclusion can be reached from a
more recent enumeration of all the metastable configurations of E-A models of different
linear sizes~\cite{Schnabel18}.
It thus seems questionable
that domain walls provide the main contribution to free energy
barriers in a spin glass. Finally, the droplet model leaves no room
for the temporally intermittent and spatially heterogeneous events
now recognized as key features of glassy dynamics~\cite{Schweizer07}.
From data analyses, real space length scales in aging systems
are linked to the
equilibrium correlation length of their metastable states, and recent numerical~\cite{Janus09,Janus17}
and experimental~\cite{Guchhait17,Zhai17} efforts utilize correlation
and response functions to describe the growth of
correlated domains.
Inspired by a recent model of colloidal aging~\cite{Boettcher11,Becker14a},
we use a different approach to identify growing real space structures in the E-A spin glass and
argue that these are the coarsening
variables controlling aging by linking them to
TRM data.
In models of dense colloids~\cite{Boettcher11,Becker14a} clusters of
contiguous particles, which gradually grow by accretion and suddenly
collapse through quakes, fulfill this dynamical role, while the microscopic particle motion
is only described
statistically through a size dependent cluster collapse rate. The
crucial assumption that this rate decreases exponentially with cluster
size, corresponding
to the likelihood of a spontaneous fluctuation of that size, reproduces
the available numerical and experimental evidence on dense hard sphere
colloids.
Pertinent RD predictions, including a logarithmic time growth of
the average cluster size,
are also obtained. A recent re-analysis~\cite{Robe16} of experimental
evidence shows that the quaking rate in dense colloidal suspensions
decreases as $1/t$, which is the basic claim from which RD predictions
flow. The experimental evidence was confirmed with molecular dynamics simulations of such a colloid~\cite{Robe18}.
To buttress our hypothesis, we analyze, as anticipated, the dynamics of
the E-A spin-glass~\cite{Edwards75},
a model with quenched randomness microscopically
very different from a dense colloid.
Its very well
studied behavior is usually associated with two competing
theoretical approaches~\cite{Fisher88a,Newman96,Contucci12}
which,
in spite of their differences, share conceptual roots in the equilibrium statistical
mechanics of critical phenomena. A unified description of
aging phenomenology requires, we believe,
a much stronger focus on the statistics
of the rare non-equilibrium events that drive the dynamics in the full range
of parameters, e.g. temperature or density, where aging is observed.
Our simulations show: \emph{i)} That the energy changes associated
with quakes stand out from the overwhelming majority of energy fluctuations.
\emph{ii)} That quakes are statistically uncorrelated and occur at
a rate which is constant in \emph{logarithmic time}, as predicted
by RD. \emph{iii)} That suitably defined clusters grow on average in proportion
to $\ln t$. The last result concurs with the behavior observed in~\cite{Boettcher11,Becker14a}
for a model of colloids.
Provided that the cluster size distribution is sufficiently peaked
around its mean, it also supports the latter model's hypothesis that clusters
are overturned at a rate exponentially decreasing with their size.
Last but not least, our analysis provides an approximate description of spin glass dynamics
in terms of flipping clusters
which is more complete than previously available
and covers the TRM decay behavior.
The rest of the paper is organized as follows: In Section~\ref{Model}
the E-A model definition is stated for the reader's convenience. In Section~\ref{Method} we summarize the
theoretical concepts used in our data analysis. Our numerical results are presented
in Section \ref{Results} and a real space coarse grained description
of the E-A spin glass dynamics is given in Section~\ref{Cluster_dynamics}. Finally, Section~\ref{Implications} highlights similarities
between our observed $T$ scaling of energy fluctuations and experimental memory and rejuvenation properties of
spin glasses.
Section~\ref{Conclusion} provides a summary and draws conclusions.
\section{Model}
\label{Model} We consider an Ising E-A spin glass~\cite{Edwards75}
placed on a cubic grid with linear size $L=20$ and periodic boundary
conditions. Each of the $2^{N}$ configurations is specified by the
value of $N=L^{3}$ dichotomic spins, and has, in zero magnetic field,
an energy given by
\begin{equation}
H(\sigma_{1},\sigma_{2},\ldots\sigma_{N})=\frac{1}{2}\sum_{i=1}^{N}\sum_{j\in{\mathcal{N}}(i)}J_{ij}\sigma_{i}\sigma_{j},\label{En_def}
\end{equation}
where $\sigma_{i}=\pm1$ and where ${\mathcal{N}}(i)$ denotes the
six nearest neighbors of spin $i$. For $j<i$, the $J_{ij}$s are
drawn independently from a Gaussian distribution with zero average
and unit variance. Finally, $J_{ij}=J_{ji}$ and $J_{ii}=0$. All
parameters are treated as dimensionless. This model has a phase transition from a paramagnetic
to a spin-glass phase at a critical temperature which in Ref.~\cite{Katzgraber06} is
estimated to be $T_{\rm c}=0.9508$. The same reference reviews the
different $T_{\rm c}$ estimates found in the literature.
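As an illustration, the energy of Eq.~\eqref{En_def} can be evaluated for a random configuration with a few lines of vectorized code. This is only a sketch (the array layout, coupling storage per positive lattice direction, and random seed are arbitrary choices, not part of the simulation code used here); storing one coupling array per positive direction counts each bond once, which absorbs the factor $1/2$ that compensates double counting in Eq.~\eqref{En_def}:

```python
import numpy as np

rng = np.random.default_rng(3)
L = 20
N = L ** 3

def ea_energy(spins, J):
    """Energy of Eq. (En_def): spins is an L x L x L array of +-1 values;
    J[d] holds the couplings to the neighbor in positive direction d
    (+x, +y, +z).  Counting each bond once matches the 1/2 factor that
    compensates double counting in the double sum."""
    e = 0.0
    for d in range(3):
        # periodic boundaries via np.roll along axis d
        e += np.sum(J[d] * spins * np.roll(spins, -1, axis=d))
    return e

# Gaussian couplings with zero mean and unit variance, J_ii = 0 implicitly
J = rng.normal(size=(3, L, L, L))
spins = rng.choice([-1, 1], size=(L, L, L))
E = ea_energy(spins, J)
```

For a random configuration the $3N$ bond energies are uncorrelated, so $E$ is of order $\sqrt{3N}$ rather than of order $N$.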
\section{Method of analysis}
\label{Method} Starting from a configuration previously
equilibrated at temperature $T_{0}=1.25$, the system is instantaneously
quenched at time $t=0$ down to $T<1$. The ensuing aging process
is then followed for five decades in time. For aging temperatures $T=.3,.4,.5,.6,.7,.75$
and $.8$, $512$ independent simulations are carried out and special
events, the quakes, are extracted from the trajectories thus obtained.
After defining a detection criterion
(see below), we check that quake events are uncorrelated and Poisson
distributed with an average proportional to $\ln t$. We then identify
clusters of spins that move in unison during the quakes, and from
those construct the average cluster size, $S_{{\rm Cl}}(t)$, as a
function of time.
{ The Waiting Time Method~\cite{Dall01} (WTM), a kinetic
MC algorithm which performs single spin flips with no rejections,
is used in all simulations.
Similarly to the more widely used Metropolis algorithm and its more recent variants, e.g.
parallel tempering~\cite{Katzgraber06a}, the WTM fulfils the detailed balance condition, and is by design guaranteed
to eventually sample the equilibrium distribution of the problem at hand.
Its performance in exploring the EA energy landscape at low $T$ was compared in Ref.~\cite{Boettcher05} to that
of Extremal Optimization~\cite{Boettcher01a}. These two very different
methods extracted the same geometrical features from the landscape, e.g. that
a record high energy barrier must be scaled in order to find a lower value of the lowest energy
seen `so far', or `best so far energy' $E_{\rm bsf}$, to which we shall return.
Being calculated
along the trajectories as
differences between the energy of the current state and the $E_{\rm bsf}$, the above barriers differ
conceptually from the overlap barriers investigated in Refs.~\cite{Berg00}, which describe displacement fluctuations
in thermal equilibrium.
In a jammed system such as an aging spin-glass, Metropolis executes a large number
of unsuccessful trials (and the acceptance rate drastically declines), which the WTM avoids by
rank-ordering the execution time of all possible moves and
then executing the one with the lowest execution time.
Specifically, flipping spin $i$ at energy cost $\delta_{i}$ is associated
to a waiting time $w_{i}$ and the intrinsic time variable $t$ (flipping time) of the WTM is a real positive
number which sums up, at any point of the simulation, the times spent
`waiting' for all previous flips.
Each waiting time is
drawn from an exponential distribution
with average
\begin{equation}
\langle w_{i}\rangle=\exp\left(\frac{\delta_{i}}{2T}\right).
\label{avWT_def}
\end{equation}
Hence, as long as its local environment
remains unchanged, the thermal flips of each spin are a memoryless Poisson
process with the above average. This seems a physically appealing
description of systems with many coupled degrees of freedom and implies that,
when a spin is reversed, only the waiting and flipping times
of that spin and its neighbors need to be recalculated, while all
others can stay put.
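In schematic form, one WTM step draws waiting times from the exponential distribution with the average of Eq.~\eqref{avWT_def}, executes the move with the lowest scheduled flipping time, and reschedules only the flipped spin and its neighbors. The following minimal Python sketch is illustrative only (the ring geometry, system size, temperature and seed are placeholders, not the simulation code used in this work):

```python
import numpy as np

rng = np.random.default_rng(0)

def schedule(j, spins, J, T, t):
    """Draw the next flipping time of spin j: exponential with the
    average of Eq. (avWT_def), added to the current intrinsic time t."""
    # energy cost of flipping j for H = (1/2) sum J s s (no minus sign)
    delta = -2.0 * spins[j] * (J[j] @ spins)
    return t + rng.exponential(np.exp(delta / (2.0 * T)))

def wtm_step(spins, J, T, t, flip_times):
    """Execute the move with the lowest scheduled flipping time, then
    reschedule only the flipped spin and its interacting neighbors."""
    i = int(np.argmin(flip_times))
    t = flip_times[i]                 # intrinsic time jumps to this flip
    spins[i] *= -1
    for j in [i, *np.nonzero(J[i])[0]]:
        flip_times[j] = schedule(j, spins, J, T, t)
    return t

# toy example: a ring of 8 spins with Gaussian couplings (placeholder geometry)
N, T = 8, 0.5
J = np.zeros((N, N))
for i in range(N):
    J[i, (i + 1) % N] = J[(i + 1) % N, i] = rng.normal()
spins = rng.choice([-1, 1], size=N)
t = 0.0
flip_times = np.array([schedule(j, spins, J, T, t) for j in range(N)])
for _ in range(50):
    t = wtm_step(spins, J, T, t, flip_times)
```

Note that the intrinsic time advances by the (real-valued) waiting time of each executed flip, so sub-sweep timescales are resolved automatically.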
Both the WTM and the Metropolis algorithm lack a physical time scale,
and their ability to empirically describe aging processes depends
on the temporal scale invariance of such processes, combined with the fact that both
methods seek the pseudo-equilibrium states in which aging systems dwell most of the time.
Once the Metropolis algorithm has had a chance to query every spin,
it flips a set of spins similar to that flipped by the WTM.
For times of the order of a MC sweep or larger,
the two methods are equivalent and our $t$ corresponds to the number of MC sweeps~\cite{Dall01}.
The sequence of flips is however clearly different,
since Metropolis chooses the `next' flip candidate at random, while each choice of the WTM can be influenced by
the last flip:
Equation~\eqref{avWT_def} implies that any negative `barrier' $\delta_{i}$
which arises after a move creates a locally unstable situation where the involved spins quickly flip. This process can iteratively
generate a series of negative $\delta_{i}$ values in a local neighborhood, triggering
event cascades whose
short duration allows one to
time-stamp quakes with high resolution.
The latter feature is important when assessing the temporal statistics
of the quakes.
Besides being computationally inefficient at low $T$, a Metropolis
algorithm would express `times' as integer
number of sweeps, which is at variance with time being a real
variable in a Poisson process. In contrast, the WTM readily resolves
sub-sweep timescales.
}
For short time intervals and at
low temperatures, the WTM dwells in real space neighborhoods
of local energy minima, and the sampled energy changes feature a previously
unnoticed temperature scaling which is
found
in most of our figures and explained in Section~\ref{Explanation}
in terms of the distribution of single flip energy changes available
near local energy minima.
\subsection*{Clusters and domains}
A local energy minimum configuration consists of disjoint groups of
contiguous spins, our clusters, whose orientation is either the same
as or opposite to that of one of the two ground states, if one neglects, as we presently do,
the spins on the cluster boundaries. Since each cluster
may contain sub-clusters of opposite orientation, a partially nested
structure is generated, reflecting the degree of hierarchical organization
of the system's configuration space~\cite{Sibani89,Sibani94}. The
situation is illustrated in Fig.~\ref{domains_fig}, using two dimensions
for graphical convenience. Excess energy relative to the ground state
stems from cluster interfaces and can be reduced in a thermally activated
process overturning gradually larger clusters. The free energy cost
of such reversals is mainly associated with barriers in the bulk of
each cluster, as we will explain below. In contrast, the cost of overturning a ferromagnetic
domain is mainly associated with the domain's interface.
Quickly reversible single spin flips similar to `in cage rattlings'
in a colloid are excluded from cluster configurations. Their long
term effects are subsumed into the statistics of the quakes which
provide the elementary moves, i.e. cluster flips, of the coarse-grained
dynamics we are about to describe. Since spins move together in a
quake, the final configurations of two successive quakes are compared,
all spins which changed orientation are identified and grouped into
clusters of spatially contiguous elements. Finally, clusters with
less than $5$ spins are discarded to minimize the risk of erroneously
counting reversible moves as part of a quake.
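The grouping of flipped spins into spatially contiguous clusters can be sketched as a breadth-first search over nearest neighbors on the periodic cubic grid. The helper below is hypothetical (only the periodic $L^3$ geometry and the minimum cluster size of $5$ spins are taken from the text):

```python
from collections import deque

def clusters(flipped, L, min_size=5):
    """Group flipped sites (a set of (x, y, z) tuples) on an L^3 periodic
    lattice into connected clusters; drop clusters smaller than min_size
    to avoid counting reversible single-spin moves as part of a quake."""
    remaining = set(flipped)
    out = []
    while remaining:
        seed = remaining.pop()
        queue, cluster = deque([seed]), {seed}
        while queue:
            x, y, z = queue.popleft()
            # the six nearest neighbors, with periodic boundary conditions
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nb = ((x + dx) % L, (y + dy) % L, (z + dz) % L)
                if nb in remaining:
                    remaining.remove(nb)
                    cluster.add(nb)
                    queue.append(nb)
        if len(cluster) >= min_size:
            out.append(cluster)
    return out

# toy input: a line of 6 flipped sites plus an isolated pair of 2 sites
flipped = {(0, 0, z) for z in range(6)} | {(10, 10, 10), (10, 10, 11)}
cls = clusters(flipped, L=20)
```

In this toy input only the six-site line survives the size cut; the isolated pair is discarded.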
\begin{figure}
\hfill{}\includegraphics[width=0.9\columnwidth]{domains}\hfill{}
\caption{\label{domains_fig} Depiction
of the domain hierarchy in a hyper-plane of a 3d-Edwards-Anderson
spin glass during the aging process. Each numbered area represents
spin clusters with the same configuration as one of the two ground
states of the E-A spin glass. With the exception of area $12$, which
has two colors, each cluster is surrounded by a region of the opposite
color and takes up this color when overturned by a quake. In
this picture, randomly fluctuating, isolated spins have been suppressed.
A quake event amounts to filling in one of the inner-most domains
through flipping all its spins, thereby coarsening the otherwise self-similar
spatial hierarchy of domains-within-domains.}
\end{figure}
\subsection*{Quake detection protocol}
\label{detection}
Observation of non-equilibrium
phenomena is fundamentally tied to choosing the correct time and length scales.
This certainly applies to the aging process. On very large scales
macroscopic variables seem to change in
a smooth and gradual manner. On
intermediate scales aging systems appear in a state of quasi-equilibrium
punctuated by increasingly
rare, intermittent quakes that significantly (i.e., irreversibly)
relax the system and lead to overall structural changes.
The importance of these events
for the progression of the aging process was highlighted in \cite{Sibani05a}
using a system-wide approach.
However, since quakes unfold almost
instantaneously on an intermediate time-scale, a more detailed
investigation is needed to explore the \emph{spatial} dynamics that facilitate
the quake. In the following we outline a protocol to zoom into
a narrower time-window, as illustrated in Fig.~\ref{fig:protocol},
where the quake's footprint is measured
from the difference between the configuration it generates and the one it
inherits from the previous quake, see
Fig.~\ref{domains_fig}. This contrasts with equivalent aging experiments
on structural glasses such as colloids, where spatial traces of quakes
are faint.
Our method of data analysis identifies quakes on the fly from an evolving
trajectory and treats them, approximately, as instantaneous events.
The identification process involves a number of computational choices,
which are all based
on the following assumptions: Using $\ln(t)$ rather than $t$ as
independent variable transforms the quakes into a memoryless Poisson
process. Accordingly, successive quakes are statistically independent,
and if $t_{k}$ is the time of occurrence of the $k^{{\rm th}}$
quake, the `logarithmic waiting times' $\Delta{\rm ln}_{k}=\ln(t_{k})-\ln(t_{k-1})=\ln(t_{k}/t_{k-1})$
are independent stochastic variables with the same exponential distribution.
Correspondingly, the logarithmic rate of quakes is constant.
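These assumptions are easily illustrated numerically: for a Poisson process that is homogeneous in $\ln t$, the log waiting times are independent and exponentially distributed, and the number of quakes grows linearly in $\ln t$. The sketch below uses an arbitrary placeholder rate $r_{\rm q}=1.5$ and works directly in the $\ln t$ domain (exponentiating the cumulative sum would overflow for long series):

```python
import numpy as np

rng = np.random.default_rng(1)
r_q = 1.5                                # assumed logarithmic quaking rate

# quake times generated as a Poisson process homogeneous in tau = ln(t):
# successive log waiting times are i.i.d. exponential with rate r_q
dln = rng.exponential(1.0 / r_q, size=20000)
ln_t = np.cumsum(dln)                    # ln(t_k) of the k-th quake

# the recovered log waiting times ln(t_k / t_{k-1}) are again exponential,
# so their inverse mean estimates the constant logarithmic rate
recovered = np.diff(ln_t)
rate_est = 1.0 / recovered.mean()

# and the number of quakes up to time t grows like r_q * ln(t)
n_up_to = int(np.searchsorted(ln_t, 10.0))   # quakes with ln(t) <= 10
```

With $2\times10^4$ events the estimated rate reproduces the input rate to within a few percent, and the quake count up to $\ln t = 10$ fluctuates around $r_{\rm q}\ln t = 15$.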
In Refs.~\cite{Sibani05a,Sibani07} energy differences were sampled
over time intervals of duration $\delta t$, chosen much smaller than
the system age but larger than the decay time of the energy autocorrelation
function. {On this intermediate
time-scale, intermittent
events were distinguished from equilibrium fluctuations based on their
correspondence to rare, negative and numerically large energy changes
without resolving the quake event itself. In our case, we provide precise
values for the onset
times of quakes by explicitly connecting them
to the extremal value of the `energy barrier' function discussed in
Refs.~\cite{Dall03,Boettcher05}. For that purpose, energy changes
in close proximity
of local energy minima are monitored by choosing $\delta t$ now
much \emph{shorter} than the energy autocorrelation decay time, such
that neither equilibrium fluctuations nor quakes can unfold within
a single $\delta t$. Energy changes measured within such a short
$\delta t$ without reference to barrier-height feature a perfect
normal distribution over many orders of magnitude, see Fig.~\ref{FL_stat}.
That the width of this distribution scales anomalously with temperature
confirms that the energy changes sampled are not equilibrium fluctuations.
\begin{figure}
\vspace{-5mm}
\hfill{}\includegraphics[bb=0bp 400bp 550bp 612bp,clip,width=1\columnwidth,height=0.4\columnwidth]{CoarseValleys}\hfill{}
\vspace{-7.5mm}
\caption{\label{fig:CoarseValleys} The instantaneous energy $E(t)$ of the system fluctuates widely while decaying slowly overall (left panel). The lowest energy $E_{{\rm bsf}}(t)=\min_{t}[E(t)]$, and the highest barrier $\max_{t}[E(t)-E_{{\rm bsf}}(t)] $
ever seen up to time $t$ are marked by $E$ and $B$, respectively. In Refs.~\cite{Dall03,Boettcher05}, intermediate records were stricken (crossed-out green letters) and the last $B$-record before the next $E$, or the last $E$-record before the next $B$
were kept to coarse-grain the states visited into "valleys" entered and exited at barrier-crossings $B_{i-1}$ and $B_i$ and to demarcate the catchment basin of the local minimum at $E_i$, as shown in the right panel. Here, we focus on the record-producing parts of the trajectory enclosed in the shaded boxes. In the lower box $E(t)$ begins to undercut the previous minimum, $E_{i-1}$, until $E_i$ is reached and in the
upper box it exceeds the previous barrier record (up-arrow) until $B_i$ is reached.}
\end{figure}
\begin{figure}
\hfill{}\includegraphics[bb=0bp 120bp 650bp 612bp,clip,width=1\columnwidth, height=0.6\columnwidth]{QuakeMeasureDown}\hfill{}
\vspace{-2mm}
\hfill{}\includegraphics[bb=0bp 120bp 650bp 612bp,clip,width=1\columnwidth,height=0.6\columnwidth]{QuakeMeasureUp}\hfill{}
\vspace{-5mm}
\caption{\label{fig:protocol} On-the-fly detection
of quakes while reaching new energy
minima $E_{i}$ (top panel) or barrier records $B_{i}$ (bottom
panel). Within the respective ranges (shaded boxes in insets), a progression
of new records, either of $E_{{\rm bsf}}(t)$ (top) or of $b(t)$ (bottom),
is reached through quakes. In top (bottom) panel, once the energy
signal reaches below (above) the previous record, a quake event commences,
marked by a colored horizontal line. To capture the footprint of such
a quake, we record the spin configuration at the end of those time-intervals
$\delta t$ that contain a record (vertical dashed lines). The spin orientation changes
between consecutive quakes provide the spatial
extent of the intervening quake. The sub-interval duration $\delta t$ used in
the simulation is $\delta t=0.999$.}
\end{figure}
In contrast, to capture an actual quake, we have to use a specific
trigger, described in Figs.~\ref{fig:CoarseValleys} and \ref{fig:protocol}. Following Refs.~\cite{Dall03,Boettcher05},
we consider the barrier function $b(t)=E(t)-E_{{\rm bsf}}(t)$, where
$E_{{\rm bsf}}(t)=\min_{t}[E(t)]$ is the lowest energy ever seen
up to time $t$. According to Ref.~\cite{Dall03}, the entry and
exit times of a trajectory in and out of a valley in the energy landscape
can be evinced from the sequence of configurations where $b(t)$ and
$E_{{\rm bsf}}(t)$ reach their maxima and minima, respectively.
As the description in Fig.~\ref{fig:CoarseValleys} demonstrates, the most recent barrier record $B_{i}$ only becomes recognized
as such when the next minimum is reached and, correspondingly, the
latest $E_{i}$ is certified as such only after $b(t)$ achieves a
new record. Thus, this classification scheme requires a priori knowledge
of the entire time series of energy values, which we want to avoid.
Furthermore, we do not only focus on exit and entry points of valleys
in configuration space, but wish to identify the spatially localized
non-equilibrium events which provide the path approaching $E_{i}$
and $B_{i}$, respectively marked by a shaded box in the insets of
Fig.~\ref{fig:CoarseValleys}.
Approaching $E_i$, $E(t)$ achieves a sequence of new records of
$E_{{\rm bsf}}(t)$ after the latest record barrier crossing. In turn, the function $b(t)$ reaches new records after
the latest minimum $E_{{\rm bsf}}(t)$ becomes fixed and $B_i$ is approached. Typical sequences
of $E(t)$ within those regimes are depicted in the main panels of
Fig. \ref{fig:protocol}. For either regime, we stipulate
that, if $E_{{\rm bsf}}(t)$ or $b(t)$ achieve a new record value
at $t=t_{r}$, a quake is unfolding. As soon as $t$ then reaches
the upper boundary of the sub-interval containing $t_{r}$, i.e.,
$t-\delta t\leq t_{r}<t$, that quake is deemed to have ended and
the system's configuration is saved. We then repeat this procedure
for the next record, until $E_{i}$ or $B_{i}$, respectively, is reached
and continue the process in valley $i+1$ at later times. From the energy
differences $\delta E_{q}(i),\;i=1,2\ldots N$ between the current
and the previously saved configurations one easily finds the total
energy change connected to the quake and the positions of the participating
spins. The statistical error in the procedure comes from unrelated
spins which flip and participating spins which flip twice.
The above detection scheme allows a precise assessment of quake times,
and does not use threshold values to discriminate quakes
from quasi-equilibrium thermal fluctuations. The arbitrary subdivision of the observation interval into
sub-intervals of length $\delta t$
determines when a quake ends, but has only a minor effect on the measured values of
inter-quake times, which are typically much longer than $\delta t$.
Finally, reaching the different energy records which define our quake detection technique
also requires tortuous paths, which are tantamount to entropic barriers. These are not
shown in Figs.~\ref{fig:CoarseValleys}-\ref{fig:protocol}, but are important for the dynamics,
as argued in Section~\ref{Explanation}.
To conclude, the WTM is ideally
suited for our measurements. It produces equivalent physical results
to random sequential MC, yet, WTM focuses more efficiently on the
few active spins that drive the dynamics. By ranking degrees of freedom
by their time for change, it targets exactly those spins connected
within a quake and is able to time-stamp quakes with high accuracy.
\section{Numerical results}
\label{Results}
{ After the initial quench $T=1.25 \rightarrow T<1$, the system
is aged up to time $t_{\rm w}=100$ without taking any data.
Data are taken in the interval $[t_{\rm w},10^5] $ which is subdivided into $10^5$
subintervals of duration $\delta t=0.999$. This duration is an upper bound for the temporal resolution of quake times, as explained in
the `Quake detection protocol' section above.
As mentioned, $512$ independent simulations are carried out for statistical
reasons, all starting from the same equilibrium configuration. }
The first two subsections below detail different types of simulational results, and the last subsection
rationalizes the $T$ scaling form used to collapse all our data. All quantities specified below are dimensionless.
\begin{center}
\begin{tabular}{|l||c|}
\hline
\multicolumn{2}{|c|}{Mathematical symbols used}\\ \hline
$T, t$ & Temperature and time \\ \hline
$\delta t$ & Short time interval \\ \hline
$\Delta$ & Energy change over $\delta t$ \\ \hline
$\Delta_{\rm q}$ & Quake induced energy change\\ \hline
$\Delta{\rm ln}$ & Logarithmic waiting time\\ \hline
$r_{\rm q}$ & Logarithmic quaking rate\\ \hline
$R_{\rm q}(t)$ & Quaking rate $=r_{\rm q}/t$\\ \hline
$r_{\rm cl}$ & Logarithmic rate of cluster growth\\ \hline
$n_{\rm q}(t)$ & Number of quakes up to time $t$\\ \hline
$\mathbf{F}_{A}(x)$ & PDF of stochastic variable $A$ \\ \hline
\end{tabular}
\end{center}
\subsection{Energy fluctuations PDFs}
Energy fluctuations sampled during isothermal aging at temperature $T$
have PDFs which change widely with $T$. As one would expect, the fluctuations are smaller
the lower the temperature. Interestingly, their scaling is not linear in $T$, as would be the case
when dealing with equilibrium energy fluctuations, but involves instead the power law $T^\alpha$, where
$\alpha=1.75$.
Let $T^{-\alpha}\Delta$ denote the scaled
energy changes (per spin) sampled at temperature $T$ over an interval of a
very short duration $\delta t=0.999$. The length of this interval, which is much shorter
than those considered in \cite{Sibani05a} and far too short to straddle
equilibrium-like energy fluctuations,
provides an upper bound for the duration of `instantaneous' quakes.
The seven estimated PDFs of
$T^{-\alpha}\Delta$,
sampled at seven different aging temperatures
$T=.3,.4,\ldots.7,.75$ and $.8$, are
plotted in Fig.~\ref{FL_stat} using a light color (yellow) and using, in order of increasing $T$,
squares, circles, diamonds, hexagrams, pentagrams, and down- and up-pointing
triangles as symbols. The dotted line is a fit of all these scaled PDFs to a Gaussian
of zero average. We note that the data collapse is excellent and that
the standard deviation of the Gaussian $\sigma_{G}\approx6.2\times10^{-3}$ is much smaller
than unity, the statistical spread of the coupling constants $J_{ij}$. This confirms
that the sampled energy changes are
strongly constrained, as expected.
Quake induced energy changes $\Delta_{{\rm q}}$ occur over
time intervals of varying length which
stretch from one quake to the next. Positive and negative values of
$\Delta_{\rm q}$ are associated with the system's energy increasing or
decreasing beyond its previous maximum or minimum, respectively. The
average effect of a quake is however an energy loss.
The empirical PDFs of $T^{-\alpha}\Delta_{{\rm q}}$ are shown using
the same symbols as for the Gaussian changes, but a darker color (red).
For negative values of the abscissa these PDFs feature the exponential
decay given by the fitted line, which is reminiscent of
the intermittent tail seen in~\cite{Sibani05a}.
In this case, the scaling with $T^{-\alpha}$ narrows but does not fully eliminate the spread of the
data.
\begin{figure}
\hfill \includegraphics[bb=20bp 170bp 570bp 600bp,clip,width=1\columnwidth]{Fl_pdf}\hfill{}
\caption{Seven
PDFs of energy fluctuations $\Delta$ collected at aging temperatures
$T=.3,.4,\ldots.7,.75$ and $.8$ are collapsed into a single Gaussian
PDF by the scaling $\Delta\rightarrow T^{-\alpha}\Delta,\;\alpha=1.75$, and plotted
using a logarithmic vertical scale.
The data plotted with yellow symbols are fitted by the Gaussian shown as a dotted line.
This Gaussian has average $\mu_G=0$ and standard deviation $\sigma_{G}\approx6.2\times10^{-3}$.
Data plotted with red symbols represent quake induced energy fluctuations $\Delta_{{\rm q}}$ and, for negative values of the abscissa,
have estimated probabilities close to the exponential PDF shown by the
line.}
\label{FL_stat}
\end{figure}
Isothermal aging was considered in~\cite{Dall03} for various spin-glass
models and the height of the energy barriers separating the neighboring
`valleys' illustrated in Fig.~\ref{fig:protocol} was studied at different temperatures. Those data were collapsed
by $T^{1.8}$ scaling, a result which seems in reasonable agreement
with our present findings and is likely to have the same origin.
\begin{figure}
\hfill{}\includegraphics[bb=20bp 180bp 560bp 600bp,clip,width=1\columnwidth]{Dlntpdf}\hfill{}
\hfill{}\includegraphics[bb=20bp 180bp 560bp 600bp,clip,width=1\columnwidth]{noquakes}\hfill{}
\caption{Upper panel. Symbols: PDF of scaled `logarithmic waiting times' $T^{-\alpha}\Delta{\rm ln}$, $\alpha=1.75$,
for the seven aging temperatures $T=.3,.4,\ldots.7,.75$ and $.8$.
Dotted line: fit to the exponential form $y(x)=.81e^{-1.57x}$. Insert:
the normalized autocorrelation function of the logarithmic waiting
times is very close to a Kronecker delta function $C_{\Delta{\rm ln}}(k)\approx\delta_{k,0}$.
The data shown are collected at $T=.3$, but similar behavior is observed
at the other investigated temperatures. Lower panel: the number
of quakes occurring up to time $t$ is plotted with a logarithmic
abscissa, for all $T$ values, with the steepest curve corresponding
to the lowest temperature. Insert: The quake rate, obtained as the
logarithmic slope of the curves shown in the main figure, is plotted
vs. $T^{-\alpha}$, where $\alpha=1.75$. The dotted line is a fit with slope $1.11$. }
\label{DLTS}
\end{figure}
Consider now the times of
occurrence $t'$ and $t$ of two successive quakes, $t>t'$, and
form the logarithmic time difference $\Delta{\rm ln}=\ln(t)-\ln(t')=\ln(t/t')>0$,
called, for short, the \emph{log waiting time}.
If quaking is a Poisson process in logarithmic time, the corresponding
PDF, $F_{{\Delta}{\rm ln}}(x)$, is given theoretically by
\begin{equation}
F_{\Delta{\rm ln}}(x)=r_{q}e^{-r_{q}x},\label{quaking_r}
\end{equation}
where $r_{q}$ is the constant logarithmic quaking rate. The applicability of equation~\eqref{quaking_r}
has already been tested in a number of different systems, including
spin-glasses~\cite{Sibani07}.
The upper panel of Fig.~\ref{DLTS} shows the empirical PDFs of our logarithmic
waiting times, sampled at different temperatures and collapsed through
the scaling $\Delta{\rm ln}\rightarrow T^{-\alpha}\Delta{\rm ln}$.
The resulting PDF is fitted by the expression
$F_{T^{-\alpha}\Delta{\rm ln}}(x)=.81e^{-1.57x}$, which
covers two decades of decay. Its mismatch with
the correctly normalized expression~\eqref{quaking_r}
stems from the systematic deviations from an exponential
decay visible for small $x$ values. These deviations arise in turn
from quakes which occur in rapid succession,
and produce values $\ln(t_{k}/t_{k-1})\approx 0$. The effect, which is
most pronounced at early times in the simulation, roughly doubles the assessed
number of quakes, and correspondingly lowers the fitted pre-factor from $\approx 1.6$
to $\approx 0.8$. It furthermore produces
non-zero correlation values in the series of log-waiting times
at $k=1$ and, to a lesser extent, $k=2$.
Treating closely spaced quakes as parts of the same dynamical
event leads to the corrected number of quakes $n_{{\rm q}}(t)$
occurring up to time $t$ which is shown
in the bottom
panel of the figure for seven different aging temperatures.
The steepest
curve corresponds to the lowest temperature. The red dotted lines
are linear fits of $n_{{\rm q}}(t)$ vs. $\ln t$, and the insert
shows that the logarithmic slope of the curves is well described by
the function $r_{{\rm q}}=1.11 T^{-1.75}$. We note that the logarithmic
quake rate as obtained from the exponent (not the pre-factor)
of the fit $y(x)=.81e^{-1.57x}$ is $r_{{\rm q}}=1.57T^{-1.75}$.
The two procedures followed to determine the quaking rate are thus mathematically but not numerically
equivalent: in the time domain they give the same $T^{-1.75}/t$ dependence of the quaking
rate, but with two different pre-factors. The procedure using the PDF of
the logarithmic waiting times seems preferable, due to better statistics.
Glossing over this procedural difference, we write
$r_{{\rm q}}=cT^{-1.75}$ where $c$ is a constant, and note that in our RD description the
number of quakes occurring in the interval $[0,t)$ is then a Poisson process
with average $\mu_{N}(t)=cT^{-\alpha}\ln(t)$.
Qualitatively, we see that lowering the temperature decreases the log-waiting times
and correspondingly increases the
quaking rate. The quakes involve, however, much smaller energy differences
at lower temperatures. Considering that $T^{-\alpha}\gg T^{-1}$,
we see that the strongest dynamical constraints are not provided by
energetic barriers. As detailed later, they are entropic in nature
and stem from the dearth of available low energy states close to local
energy minima. Finally, our numerical evidence fully confirms the idea
that quaking is a Poisson process whose average is proportional to the logarithm of time.
In other words, the transformation $t \rightarrow \ln t$ renders the aging dynamics
(log) time homogeneous and permits a greatly simplified mathematical
description.
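This log-Poisson picture can be checked with a short numerical sketch. The parameter values below (and the helper name) are illustrative assumptions for the demo, not output of the actual simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values for the constants discussed above (assumptions for
# this sketch, not fitted simulation data).
c, alpha, T = 1.6, 1.75, 0.5
r_q = c * T**(-alpha)              # logarithmic quake rate

def n_quakes(t_max, rate, rng):
    """Count quakes in [1, t_max) when the log-waiting times
    ln(t_k / t_{k-1}) are i.i.d. exponential with the given rate."""
    log_t, n = 0.0, 0
    log_t_max = np.log(t_max)
    while True:
        log_t += rng.exponential(1.0 / rate)
        if log_t >= log_t_max:
            return n
        n += 1

t_max = 1e6
counts = np.array([n_quakes(t_max, r_q, rng) for _ in range(2000)])

# The number of quakes up to t is then Poisson with mean c*T^(-alpha)*ln(t):
mu = r_q * np.log(t_max)
print(counts.mean(), counts.var(), mu)   # mean and variance both close to mu
```

The equality of mean and variance is the Poisson signature that the data analysis above tests for.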
\subsection{Growth and decay of real space clusters}
The mean cluster sizes shown in Fig.~\ref{Av_cls} are calculated
as follows: Spins reversed by a quake are grouped into one or more
spatially disjoint sets, each comprising adjacent spins. Each set
is a cluster, and a first average cluster size $\overline{C_{j}}(t)$
is computed as the arithmetic mean of the sizes of all clusters generated
at time $t$ during the $j^{{\rm th}}$
simulation.
In a second step, our data are temporally coarse-grained
by placing
logarithmically equidistant time points
$t_{1},t_{2}\ldots t_{n}$ within the chosen observation
interval, and by treating the quakes occurring in the same log-time bin as
simultaneous.
The averaged cluster size
$\tilde{S}_{{\rm cl}}(t_k)$ is then calculated as the arithmetic mean
of all the $\overline{C_{j}}(t)$s for which $t_{k-1}<t<t_{k+1}$.
This whole procedure is repeated for different
values of the aging temperature $T$. It follows that $\tilde{S}_{{\rm cl}}(t_k)$
is the average cluster size, conditional to a quake
happening near $t_{k}$. Multiplying the result by the corresponding
probability $r_{{\rm q}}$ yields the (unconditional) average
cluster size ${S}_{{\rm cl}}(t_k)$.
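The two-step averaging just described can be sketched as follows; the helper name `log_binned_mean` and the toy data are illustrative assumptions, not part of the simulation code:

```python
import numpy as np

def log_binned_mean(times, values, t_lo, t_hi, n_bins):
    """Average `values` (e.g. the per-quake means C_bar_j) over
    logarithmically equidistant time bins, treating quakes that fall
    in the same log-time bin as simultaneous."""
    edges = np.logspace(np.log10(t_lo), np.log10(t_hi), n_bins + 1)
    centers = np.sqrt(edges[:-1] * edges[1:])     # geometric bin centers t_k
    idx = np.digitize(times, edges) - 1
    means = np.full(n_bins, np.nan)
    for k in range(n_bins):
        sel = idx == k
        if sel.any():
            means[k] = values[sel].mean()
    return centers, means

# Toy data consistent with a cluster size growing as ln t
# (illustrative, not simulation output).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(1, 1e4, 500))
s = 2.0 * np.log(t) + rng.normal(0, 0.5, t.size)
tk, S_tilde = log_binned_mean(t, s, 1.0, 1e4, 20)
```

A linear fit of the binned means against $\ln t_k$ then recovers the logarithmic growth rate, as in the analysis of Fig.~\ref{Av_cls}.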
Figure~\ref{Av_cls} shows that
\begin{equation}
\begin{split}
\tilde{S}_{{\rm cl}}(t)&=r_{\rm cl}(T) \ln t=
c'T^{2\alpha}\ln t\Rightarrow \\
{S}_{{\rm cl}}(t)&=cc'T^{\alpha}\ln t, \label{MainR}
\end{split}
\end{equation}
where $c$ and $c'$ are positive constants.
The rate at which clusters
are overturned in real time, as opposed to logarithmic time, is
$R_{{\rm q}}(t)=r_{\rm q}/t=cT^{-\alpha}/t$.
Inserting $t=\exp(\frac{S_{\rm cl}T^{-\alpha}}{cc'})$ from Eq.~\eqref{MainR}, we then obtain
\begin{equation}
R_{{\rm q}}(t)=cT^{-\alpha}\exp(-\frac{S_{{\rm cl}}(t)T^{-\alpha}}{c'c}),\label{big}
\end{equation}
which provides the anticipated exponential relationship between the typical
cluster size and the rate at which clusters of that size are overturned.
Eq.~\eqref{big} does not prove that a specific cluster will be overturned
at a rate exponentially decreasing with its size, but is compatible
with that statement, if the spatial distribution of cluster sizes
is narrow.
\begin{figure}
\hfill{} \includegraphics[bb=20bp 180bp 560bp 600bp,clip,width=1\columnwidth]{mcls}\hfill{}
\caption{Main plot: the average cluster size vs. the logarithm of time. The
data set, from bottom to top, are obtained at aging temperatures $T=.3,.4,.5,.6,.7,.75$
and $.8$. The red lines are linear fits of the data vs. $\ln t$.
The insert shows the slope of the linear fits vs. $T^{2\alpha},\;\alpha=1.75.$ }
\label{Av_cls}
\end{figure}
\subsection{Origin of $T$ scaling}
\label{Explanation}
To rationalize the $T$ scaling of our data, we note that the conditional waiting time $W|x$
for a spin to carry out a move with energy change $x$
is exponentially distributed with average $e^{\frac{x}{2T}}$, see Eq.~\eqref{avWT_def}, i.e.
\begin{equation}
p_{{\rm W|x}}(t)=e^{\frac{-x}{2T}}\exp(-t\;e^{\frac{-x}{2T}}).
\label{wtpdf0}
\end{equation}
The scaled energy changes $T^{-\alpha} \Delta$
shown in Fig.~\ref{FL_stat} have a
Gaussian distribution indicating that $\Delta$ is a sum of several
independent terms, all sampled over
short time spans of order one. Consequently, the positive energy changes selected must be of order $x \approx T$, and
the negative ones are simply their reversals. Let $g(x)$ be the probability density that
an energy difference $x$
is associated to moves out of a given configuration.
If the configuration is a local energy minimum,
very few `freewheeling' spins are present and, for numerically small values of $x$, $g(x)$ is zero for
$x\le 0$ and increases with $x$ for $x>0$.
For configurations neighboring
a local energy minimum, negative $x$ are available corresponding to moves back to the minimum
and the form of $g$ is reversed.
Glossing over the difference between local energy minima and their neighbors, we now assume that
$g(x)\approx |x|^\beta$ for $\beta >0$ and, for $x\propto T$, find $\Delta \propto T^{1+\beta}$, which implies that
the $T$ dependence of the sampled energy differences
can be scaled away by scaling them with $T^{-\alpha}$, with $\alpha=\beta+1$.
Energy changes from one quake to the next are plotted in the same
figure, and have been similarly scaled. The $T^{-\alpha}$ scaling
does not fully collapse their PDFs as expected, since the time difference
between successive quakes is stochastic and typically much larger
than one. The result indicates however that a trajectory triggering a quake
mainly consists of a sequence of flips associated to small
and reversible energy changes with the `correct' $T$ scaling, rather
than fewer but larger energy changes associated to long waiting times.
In other words, entropic barriers play a large role in the dynamics.
Since, as we just argued, the overwhelming
majority of the moves are associated with small time changes,
the time between two quakes is a sum of a varying, but large
number of short waiting times and inherits their $T^{\alpha}$ dependence.
The number
of quakes preceding an arbitrary fixed time $t$ is then proportional
to $T^{-\alpha}$ as directly confirmed by the insert of the
lower panel of Fig.~\ref{DLTS}, and indirectly by its upper
panel, since the contents of the figures are mathematically equivalent.
\section{Spin clusters as dynamical variables}
\label{Cluster_dynamics}
The real space clusters discussed in the previous section are mesoscopic
objects which grow logarithmically in time.
In this mainly theoretical section, we use them as coarse-grained variables,
and show that their dynamics explains
the fit of TRM data
provided in~\cite{Sibani06}
as well as other features of these macroscopic data.
A table is included summarizing the notation used in this section.
\begin{center}
\begin{tabular}{ |l| | c| }
\hline
\multicolumn{2}{|c|}{Mathematical symbols in this section}\\ \hline
$\lambda_i$ & $i$'th eigenvalue in corr. decay \\ \hline
$w_i$ & weight of the corresponding term \\ \hline
$r_{\rm q}(s)$&logarithmic rate of quakes hitting cl. of size $s$ \\ \hline
$b$&logarithmic rate of quakes per spin \\ \hline
$\kappa_s(t)$&no. of quakes hitting cl. of size $s$ in $[0,t)$ \\ \hline
$p(s)$ & prob. that a cl. of size $s$ flips when hit \\ \hline
$n_{\rm cl}(s,t)$ &no. of clusters of size $s$ present at time $t$\\ \hline
$\mu_s(t_{\rm w},t)$ & average no. of hits to cl. of size $s$ in $[t_{\rm w},t)$\\ \hline
$\mu_s$ & same as above\\ \hline
\end{tabular}
\end{center}
Adapting Eq.~(5) of Ref.~\cite{Sibani06}, TRM data are described by the following equation:
\begin{equation}
M_{{\rm TRM}}(t,t_{{\rm w}})=A_0\left(\frac{t}{t_{{\rm w}}}\right)^{\lambda_{0}(T)}\!\!\!\!\!\! +A_1\left(\frac{t}{t_{{\rm w}}}\right)^{\lambda_{1}(T)}\!\!\!\!\!\!+A_2\left(\frac{t}{t_{{\rm w}}}\right)^{\lambda_{2}(T)}\label{from_SRK0},
\end{equation}
where the pre-factors $A_i$ are positive and the exponents $\lambda_i$ are negative quantities.
Using that $\lambda_0$ is numerically very small, one further expands the first power-law, obtaining
\begin{equation}
M_{{\rm TRM}}(t,t_{{\rm w}})=A_0+a \ln (\frac{t}{t_{{\rm w}}} )+A_1\left(\frac{t}{t_{{\rm w}}}\right)^{\lambda_{1}(T)}\!\!\!\!\!\!+A_2\left(\frac{t}{t_{{\rm w}}}\right)^{\lambda_{2}(T)}\label{from_SRK},
\end{equation}
where $a=\lambda_0A_0\approx -1$ is independent of temperature in the available data range. Furthermore
$\lambda_{1}(T)$
and $\lambda_{2}(T)$ are weakly decreasing functions of $T$, with ranges close
to $-1$ and $-6$, respectively.
Clearly, the logarithmic approximation to the first power-law eventually fails as $t/t_{\rm w}\rightarrow \infty$.
However, for the data range analyzed in~\cite{Sibani06} the logarithmic term is dominant and the two remaining power-law terms
only provide fast decaying transients.
Since the gauge transformation $\sigma_{i}\rightarrow\sigma_{i}(t_{{\rm w}})\sigma_{i},\;J_{ij}\rightarrow\sigma_{i}(t_{{\rm w}})\sigma_{j}(t_{{\rm w}})J_{ij}$
maps the Thermoremanent Magnetization (TRM) into the correlation function
$C(t_{{\rm w}},t)=\sum_{i}\langle\sigma_{i}(t_{{\rm w}})\sigma_{i}(t)\rangle$,
modulo multiplicative constants, the two functions carry, for our purposes, equivalent
information, and will be used interchangeably in the discussion.
Equation \eqref{from_SRK} was justified in \cite{Sibani06} by the RD assumption that
aging is log-time homogeneous and by then applying a standard eigenfunction expansion~\cite{VanKampen06}
for the magnetization autocorrelation function, alias TRM, namely
\begin{equation}
C(t,t_{{\rm w}})\propto\sum_{i}w_{i}\exp(\lambda_{i}\ln(t/t_{{\rm w}}))=\sum_{i}w_{i} \left( \frac{t}{t_{\rm w}}\right)^{\lambda_{i}},
\label{anC}
\end{equation}
where $w_{i}\ge0$ and $\lambda_{i}\le0$.
In view of the limited
accessible range of $\ln(t/t_{{\rm w}})$, most modes in Eq.~\eqref{anC}
will either be frozen or have decayed to zero, leaving only a few active
terms with an observable time dependence,
precisely as assumed in \eqref{from_SRK}.
The approach leading to Eq.~\eqref{anC} implicitly describes
the effects of the quakes by
an unspecified master equation, with time replaced by its logarithm.
As a consequence, the exponential decays seen in many relaxation processes
are replaced by power-laws, with no connection to a critical behavior.
Continuing along this line, we now construct the relevant master equation and relate
its eigenvalues $\lambda_i$ to real space
properties uncovered in our numerical investigation.
Specifically, we shall use that \emph{i)} quakes are statistically independent events
inducing cluster flips, and that \emph{ii)} they constitute a Poisson process. Spatial
extensiveness then follows, so the rate of quakes hitting a sub-system, e.g. a cluster,
is proportional to the volume of the latter.
Some of the following arguments rest on unproven hypotheses. In particular,
given that a quake hits a cluster of size $s$, the latter is assumed to flip with probability $p(s)$,
a decreasing function of $s$, parametrised by
\begin{equation}
p(s)=a_{0}+a_{1}s^{-1}+a_{2}s^{-2},
\label{flip_prob}
\end{equation}
where all three coefficients are positive. Further below, we argue that $a_0=a_1=0$.
Let $\kappa_s(t)$ denote the number
of quakes hitting a cluster of size $s$ and $n_{\rm cl}(s,t)$
the number of such clusters present at time $t$.
Finally, $s_{\rm min}$ and $s_{\rm max}$ denote the sizes of the smallest and the largest
clusters in the system.
The range of cluster sizes is constrained by
the condition $\sum_{s=s_{\rm min}}^{s_{\rm max}}s\; n_{\rm cl}(s,t) =L^{3}$.
Finally, the total number of quakes hitting the system between $t_{\rm w}$ and $t$
is $n_{\rm q}(t)=\sum_{s=s_{\rm min}}^{s_{\rm max}} \kappa_s(t)$.
Even though the $\kappa_s(t)$ presumably share the $T^{-1.75}$
temperature dependence of $n_{\rm q}(t)$, the $T$ dependence of $p(s)$ is unknown,
as is that of the cluster distribution decay, which depends on
the products $\kappa_s(t) p(s)$, see Eq.~\eqref{C_an2}.
We therefore gloss over $T$ dependences, but note that, in order to produce
exponents with a weak $T$ dependence~\cite{Sibani06}, $p(s)$ should increase with
$T$ to counteract the strong decrease of the $\kappa_s(t)$. In other words, as the
temperature decreases the number of quakes increases but their dynamical effect
is reduced.
As illustrated in Fig.~\ref{domains_fig},
flipping a cluster, e.g. cluster 8, eliminates all
the sub-clusters present in its interior, in this case, cluster 1.
To simplify our treatment, this possibility is eliminated
by assuming that clusters are flipped
in order of increasing size. This is reasonable if, as we
shall argue, the logarithmic rate of cluster flipping decreases with
cluster size. Secondly, changes in the size of a
cluster induced by sub-clusters flipping in the cluster's interior
are neglected.
These assumptions assign a dynamical significance to
the hierarchy of cluster sizes present at $t=t_{\rm w}$ and allow
clusters of different sizes to develop independently of each other.
Having neglected
the possibility that clusters flip in the `wrong' sequence, a
cluster which flips contributes with its own size to the decay of the correlation
function. Furthermore, standard
arguments then imply that the number $n_{\rm cl}(s,t)$ of clusters of size $s$ decays exponentially
in $\kappa_s(t)$.
The correlation function and, equivalently,
the TRM, are given by
\begin{equation}
C(t_{\rm w},t)\propto
\left\langle \sum_{s=s_{\rm min}}^{s_{\rm max} } s n_{\rm cl}(s,t_{\rm w})\exp(-p({s})\kappa_s(t))\right\rangle,
\label{C_an2}
\end{equation}
where the constant ensuring the initial normalization has been
omitted and the average $\langle \ldots \rangle$ is performed over
the distribution of each $\kappa_s(t)$.
The $\kappa_s(t)$ are independent Poisson variables with expectation values
\begin{equation}
\mu_{s}(t_{{\rm w}},t)=r_{{\rm q}}(s)\ln(t/t_{{\rm w}}),
\end{equation}
where $r_{{\rm q}}(s)$ is the logarithmic rate of quakes impinging
on a cluster of size $s$. The extensivity of the quaking rates implies $r_{{\rm q}}(s)=bs$
where $b$, a positive constant, is the logarithmic quake rate per spin.
As a consistency check, note that
\begin{equation}
\sum_s r_{\rm q}(s)n_{\rm cl}(s,t)=b \sum_s s n_{\rm cl}(s,t)=b L^3 =r_{\rm q},
\end{equation}
the logarithmic quake rate for the whole system.
Each term of Eq.~\eqref{C_an2} can be averaged independently using
\begin{equation}
\langle \exp(-p({s})\kappa_s(t)) \rangle=e^{-\mu_{s}(t_{{\rm w}},t)} \sum_{j=0}^\infty \frac{\mu_{s}(t_{{\rm w}},t)^j}{j!}e^{-p(s)j},
\end{equation}
which evaluates to
\begin{equation}
\langle \exp(-p({s})\kappa_s(t)) \rangle= \exp(-\mu_{s}(1-e^{-p(s)})).
\end{equation}
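This is the standard Poisson generating-function identity; a quick numerical check, with arbitrary illustrative values for $\mu_s$ and $p(s)$, confirms both the exact result and the first-order expansion used next:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, p = 3.0, 0.4            # illustrative values for mu_s and p(s)

# Sample the Poisson variable kappa_s and average exp(-p*kappa) directly.
kappa = rng.poisson(mu, size=200_000)
empirical = np.exp(-p * kappa).mean()

# Exact generating-function result and its first-order expansion in p.
analytic = np.exp(-mu * (1.0 - np.exp(-p)))
first_order = np.exp(-mu * p)

print(empirical, analytic, first_order)
```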
Expanding $e^{-p(s)}$ to first order, we finally obtain the
contribution
\begin{equation}
\langle \exp(-p({s})\kappa_s(t)) \rangle \approx\exp(-\mu_{s}p(s))=\left(\frac{t}{t_{{\rm w}}}\right)^{-bsp(s)}
\end{equation}
to the average correlation function.
Summarizing,
\begin{equation}
C(t_{\rm w},t)\propto
\sum_{s=s_{\rm min}}^{s_{\rm max} } s n_{\rm cl}(s,t_{\rm w})\left(\frac{t}{t_{{\rm w}}}\right)^{-bsp(s)},
\label{C_an3a}
\end{equation}
which has the same structure as Eq.~\eqref{anC}, with
the weight $w_i$ replaced by the volume fraction $s n_{\rm cl}(s,t_{\rm w})$
occupied by clusters of size $s$ at time $t_{\rm w}$
and the eigenvalue $\lambda_i$ replaced by
$\lambda_s=-b s p(s)=r_{\rm q}(s) p(s)$, the flipping rate of clusters of size $s$.
Noting that Eq.~\eqref{flip_prob} entails
$\lambda_s=-b(a_0 s + a_1 + a_2 s^{-1})$,
we set $a_0=0$ on physical grounds, since
the largest clusters would otherwise contribute to the fastest
decay of the correlation function.
The first non-zero term then produces a power-law decay factor, $(t/t_{\rm w})^{-a_1 b}$,
while the next term gives a whole family of power laws with different
decay exponents, corresponding to the cluster size values
initially represented in the system.
To regain the form given in Eq.~\eqref{from_SRK0} we set $a_1=0$ and
obtain a sum of power-laws with exponents of decreasing magnitude
\begin{equation}
C(t_{\rm w},t)\propto
\sum_{s=s_{\rm min}}^{s_{\rm max} } s n_{\rm cl}(s,t_{\rm w})
\left( \frac{t}{t_{\rm w}}\right)^{-a_2 b/s}.
\label{C_an3}
\end{equation}
Exponents corresponding to
sufficiently large clusters
will, to first order in $ -a_2 b\, s^{-1} \ln(t/t_{\rm w})$, all contribute to the constant and logarithmic terms
$ A_0+ a \ln(t/t_{\rm w})$ seen in Eq.~\eqref{from_SRK}. In summary, the
general form of the time dependence of the TRM data given in Eq.~\eqref{from_SRK} is accounted
for by our qualitative arguments, provided that a quake flips clusters of size $s$ with probability $p(s)=a_2 s^{-2}$.
The (mainly) logarithmic decrease of the TRM data is explained using our EA model analysis in terms of large clusters
associated with power-law terms with very small exponents, which can be suitably expanded.
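That a group of power laws with small exponents is numerically indistinguishable from a logarithmic decay over an accessible time window can be illustrated with a short sketch; the cluster-size distribution and the value of $a_2 b$ below are assumptions for the demo:

```python
import numpy as np

rng = np.random.default_rng(3)
a2b = 0.05                                  # illustrative value of a_2 * b
sizes = rng.integers(50, 500, 200)          # large clusters only
weights = sizes / sizes.sum()               # normalized volume fractions s*n_cl

x = np.linspace(0.0, 6.0, 100)              # x = ln(t / t_w)
# Weighted sum of power laws (t/t_w)^(-a2b/s) = exp(-(a2b/s) * x):
C = np.array([(weights * np.exp(-(a2b / sizes) * xi)).sum() for xi in x])

# First-order expansion predicts C ~ 1 - a2b * <1/s> * ln(t/t_w):
slope_pred = -a2b * (weights / sizes).sum()
slope_fit = np.polyfit(x, C, 1)[0]
print(slope_fit, slope_pred)
```

Over the plotted window the fitted slope matches the first-order prediction, i.e. the sum of slow power laws decays linearly in $\ln(t/t_{\rm w})$.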
A different interpretation~\cite{Guchhait15} of the same data uses the presence of crystallites of different sizes
each size associated to an energy barrier and attributes the logarithmic decay of the TRM to a wide
distribution of these barriers. Even though the E-A spin-glass lacks any crystallites, the presence
of clusters of different sizes means that expanding the power-laws with small exponents in Eq.~\eqref{C_an3}
yields, once the fast terms corresponding to small clusters have decayed,
\begin{equation}
M(t_{\rm w},t)\propto
A_0- a \ln \left( \frac{t}{t_{\rm w}}\right),
\label{C_an4}
\end{equation}
where $a\propto a_2 b$.
This expression concurs with the analysis of Ref.~\cite{Sibani06}, based on the measurements of Ref.~\cite{Rodriguez03}
if $a_2 b $ is independent or nearly independent
of $T$. Recalling that $b$
is the number of quakes per unit volume and per unit (log) time, an educated guess is $b \propto T^{-1.75}$,
in which case the probability that a cluster of size $s$ flips when hit by a quake should be $ p(s)=a_2/s \propto T^{1.75}/s$.
Note however that
the $T$ dependence of the pre-factor of the
logarithmic decay is linear in Ref.\cite{Guchhait15}.
Most commonly denoted by $t$ in the literature, the `observation
time' elapsed after $t_{{\rm w}}$ is, in our notation, denoted by
$t_{{\rm obs}}\stackrel{{\rm def}}{=}t-t_{{\rm w}}$. Interesting
geometrical features of the spin glass phase, such as the size of
correlated domains~\cite{Joh99,Janus17}, are associated to the `relaxation
rate' $S_{\rm R}(t_{{\rm obs}},t_{\rm w})$, defined as the derivative of the TRM with
respect to $\ln t_{{\rm obs}}$~\cite{Nordblad97}, and in particular to its broad maximum
at $t_{{\rm obs}}\approx t_{{\rm w}}$. To see the origin of the latter,
we derive the relaxation rate from Eq.~\eqref{anC} as
\begin{equation}
S_{\rm R}(t_{{\rm obs}}/t_{{\rm w}})\propto\frac{t_{{\rm obs}}}{t_{{\rm w}}}
\sum_{s}|\lambda_{s}|w_{s}\left(\frac{t_{{\rm obs}}+t_{{\rm w}}}{t_{{\rm w}}}\right)^{\lambda_{s}-1},
\label{rel_rate}
\end{equation}
which is the product of an
increasing pre-factor $\frac{t_{{\rm obs}}}{t_{{\rm w}}}$ and a sum
of decreasing terms $\left(\frac{t_{{\rm obs}}+t_{{\rm w}}}{t_{{\rm w}}}\right)^{\lambda_{s}-1}$.
Each of these terms has a maximum at $t_{\rm obs}/t_{\rm w}=-1/\lambda_s$, and, together, they give rise to
the broad maximum near $t=t_{\rm w}$ experimentally observed for the relaxation rate~\cite{Nordblad97}.
Using $\lambda_s=- a_2 b/s$,
and recalling that
$w_s =s n_{\rm cl}(s,t_{\rm w})$, we find that the relaxation
rate for the value $t_{\rm obs}= 2 t_{\rm w}$ commonly used in the literature is
\begin{equation}
S_{\rm R}(2)\propto \sum_{s=s_{\rm min}}^{s_{\rm max}} n_{\rm cl}(s,t_{\rm w})3^{-a_2 b/s} \propto
\langle 3^{-a_2 b/s} \rangle,
\label{rr_max}
\end{equation}
where the brackets denote an average over the size distribution of clusters
present at $t=t_{\rm w}$.
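The location of the single-mode maximum quoted above, $t_{\rm obs}/t_{\rm w}=-1/\lambda_s$, is easily verified numerically; the value of $\lambda_s$ used here is an arbitrary illustration:

```python
import numpy as np

lam = -0.8                                  # illustrative lambda_s < 0
r = np.linspace(0.01, 10, 100_000)          # r = t_obs / t_w

# A single term of Eq. (rel_rate): prefactor r times the decaying power law.
S = r * (1.0 + r)**(lam - 1.0)

r_max = r[np.argmax(S)]
print(r_max, -1.0 / lam)    # maximum located at t_obs/t_w = -1/lambda_s
```

Setting the derivative of $r(1+r)^{\lambda-1}$ to zero gives $1+r\lambda=0$, i.e. $r=-1/\lambda$, which the grid search reproduces.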
Importantly, Eqs.~\eqref{rel_rate} and \eqref{rr_max} show that
the relaxation rate and its maximum both gauge the characteristic size of the
clusters, or domains, present in the system at time $t_{\rm w}$.
\section{Implications of $T^{1.75}$ scaling}
\label{Implications}
The $T^{1.75}$ dependence of energy changes characterizing
isothermal trajectories at different temperatures (see
Fig.~\ref{FL_stat}) implies that the barriers separating the
parts of configuration space where these trajectories unfold
are not easily surmounted by the
thermal ${\mathcal O}(T)$ fluctuations available in quasi-equilibrium states.
To the best of the authors' knowledge, this anomalous scaling has not been noticed
in other numerical simulations, except for a brief mention in Ref.~\cite{Dall03}, where a slightly different
scaling exponent was found.
However, as we argue below, the behavior fits and partly explains the rejuvenation and memory effects experimentally
seen in spin-glasses~\cite{Jonason98,Mathieu10} under a change of temperature protocol.
In~\cite{Jonason98}, the imaginary part of the magnetic susceptibility is measured
at high frequency, $\omega > 1/t_{\rm w}$, while the system is cooled at constant rate through a range of low temperatures.
As such, this protocol produces an out of phase (pseudo-)equilibrium magnetic susceptibility $\chi ''(\omega,T)$,
which is utilized as a reference or master curve. Importantly, the cooling process is halted at temperature $T_1$ and
the system is allowed to age isothermally
for several hours, leading to a decrease, or `dip', of the susceptibility away from the master
curve. When cooling is resumed, the measurements soon return to that curve, a rejuvenation effect implying that states
seen during the aging process at $T_1$ have little influence on those seen at other temperatures. Furthermore,
a second aging stop at a lower temperature $T_2$ produces a second dip.
The striking memory behavior
of the system is revealed when the system, continuously re-heated without any aging stops, re-traces the dips of the susceptibility
previously created at $T_1$ and $T_2$.
Similar rejuvenation and memory behavior is observed in TRM traces~\cite{Mathieu10}.
These experiments
show that aging trajectories at different, not too close, temperatures are dynamically disconnected.
Our numerical data point, as anticipated, in the same direction and offer at the same time an explanation of the rejuvenation
part of the experimental findings.
\section{Summary \& Discussion}
\label{Conclusion}
{
This work's main focus is to buttress Record Dynamics (RD)~\cite{Anderson04,Sibani06,Sibani13,Sibani13a,Robe16}
as a general method
to coarse-grain aging processes,
by analyzing numerical simulations from
a model with quenched randomness.
Spin glasses are iconic systems,
where a wealth of fascinating phenomena illustrating central aspects of complexity
have been experimentally uncovered
(see ~\cite{Nordblad97,Vincent07} and references therein), and the
E-A model was an obvious choice.
For historical reasons, traditional interpretations
of both numerical and experimental spin-glass data rely on adaptations of
equilibrium concepts, e.g. critical behavior and other properties of either~\cite{Vincent07}
the Parisi solution~\cite{Parisi83}
of the mean field Sherrington-Kirkpatrick model~\cite{Sherrington75},
or~\cite{Nordblad97} the real space description of the E-A model~\cite{Edwards75}
proposed by
Fisher and Huse~\cite{Fisher88a}.
Since RD relies on the statistical properties of non-equilibrium events, the
picture emerging from our investigations unsurprisingly differs in some respects
from more established descriptions.
RD tacitly assumes the existence of a hierarchy of free energy barriers
in configuration space~\cite{Sibani13a,Robe16} which, however,
bears no direct relation to mean-field spin-glass models and
rests on general arguments of dynamical nature~\cite{Simon62,Hoffmann88},
exemplified by
a coarse-grained discrete toy-model of `valleys within valleys', i.e. thermal hopping
on a tree structure~\cite{Sibani89}.
A connection between
the ultrametrically organized pure states~\cite{Parisi83}
of the SK model, which are intrinsically stable equilibrium objects,
and the metastable states
of real spin glasses requires a degree of funambulism. The needed tight-rope~\cite{Ginzburg86}
is provided in Ref.~\cite{Lederman91}, where the spin-glass configuration space
is depicted as a hierarchically organized set of metastable states.
Treating an aging spin glass as a critical ferromagnet in disguise
is, we argued, a dubious undertaking on two counts: \emph{i)} Even though the energy difference between two metastable states is
associated to a domain wall, the dynamical barriers that hinder a reversal of the domain orientation
are not. They are instead associated to the
interior of the domain. \emph{ii)} While the dynamics of a 3D spin glass looks critical
when $T_{\rm c}$ is approached from above, once below $T_{\rm c}$ thermal equilibration is chimerical and the physical relevance of
the critical temperature is moot.
Some descriptions, see e.g.~\cite{Vincent95}, model aging dynamics as a
random walk in a configuration space fraught with traps
whose exit times feature
a long tailed distribution~\cite{Bouchaud92} of unspecified origin. For a detailed
discussion of continuous time random walks
and `weak ergodicity breaking' vs RD, we refer to~\cite{Sibani13}. Here we just note
that RD traps all have a finite depth, i.e. a finite average exit time,
but are typically visited in order of increasing depth. Last but not least, the quake, i.e. jump, statistics in RD is
predicted from configuration space properties, rather than
simply assumed.
Keeping our focus in mind, the experimental results discussed in some detail~\cite{Sibani06,Guchhait15,Jonason98,Mathieu10}
are all directly connected to our findings. Furthermore,
variants of the E-A model, e.g. binary coupling distributions, are not discussed. Considering
RD's broad applicability, it seems plausible that such models would yield qualitatively similar
results. Some technical adjustments would however be needed for our definition of clusters,
as the ground state is degenerate beyond
a global inversion symmetry.
In a spin glass context, RD has been used to describe TRM experiments~\cite{Sibani06} and
numerical heat exchange data~\cite{Sibani05a}. In the present investigation,
quakes are operationally defined by associating them to record values of a
suitably defined `energy barrier' function sampled during the simulations, as
graphically illustrated by Fig.~\ref{fig:protocol}. That
these quakes are a Poisson process whose average grows with the logarithm of time
is explicitly verified in Fig.~\ref{DLTS}, which confirms the basic assumption on
which RD relies.
Neglecting easily reversed single spin excitations produces the coarse-grained picture
we use, where
every low temperature configuration
appears as a collection of adjacent spin clusters, each
oriented as one of the two ground states of the E-A model.
Clusters are identified from simulational data as groups of spins
which change direction during a quake while
keeping their relative orientations unchanged.
On average, the size of spin clusters overturned at time $t$ grows as $\ln t$ and the rate at which
a cluster is overturned decreases exponentially with its size. This relation subsumes the effect of both
entropy and energy barriers and establishes a connection
with our model of dense colloids~\cite{Boettcher11,Becker14a}.
A so far unnoticed property of the E-A model seen in Figs.~\ref{FL_stat} and \ref{DLTS}
is that aging data, e.g. energy differences and logarithmic waiting
times, collected at different (low) temperatures
can be collapsed by scaling them with $T^{-1.75}$.
This property is explained with the form $g(x) \propto |x|^{3/4}$ which, for $x \approx 0$
is assumed to describe the energy changes associated to moves to and from a local energy
minimum configuration.
In the simulations, the WTM dynamics
dwells near local energy minima, where it repeatedly samples this type
of energy fluctuation.
We argue that the dearth of available moves with a low associated energy change can
explain the
rejuvenation part of memory and rejuvenation experiments~\cite{Jonason98,Mathieu10}:
Simply put, states explored during isothermal aging at different temperatures are
separated by large dynamical barriers of entropic nature, and these
barriers are not easily overcome by thermal equilibrium fluctuations, which scale linearly with $T$.
Finally, an approximate real-space analytical description
is developed using growing clusters as mesoscopic dynamical variables.
Important elements
are that the logarithmic rate of quakes is an extensive and time independent quantity
and that, given that a cluster is hit by a quake, it flips with a probability inversely proportional
to its size. Unlike the first assumption, the second is only supported \emph{a posteriori} by the formula it produces,
which empirically
describes TRM decay data~\cite{Sibani06}.
Importantly, the power-law terms vanish fairly rapidly and the remaining logarithmic decay, which formally arises by expanding a possibly large group
of power-laws with small exponents, has a pre-factor which is $T$ independent,
as in the experimental data analysis of~\cite{Sibani06} but in contrast with the formula given in~\cite{Guchhait15}.
A similar behavior~\cite{Nicodemi01,Oliveira05} is seen in the temperature independence
of the magnetic creep rate of high $T_{\rm c}$
superconductors.
By focussing on non-equilibrium quakes and their statistics,
several real space implications are brought forth of the hierarchical
energy landscape organization which RD relies on, and a clear relation emerges
between configuration and real space pictures of spin-glass dynamics, namely that
increasingly large scales in Hamming and Euclidean distance become relevant as increasing dynamical barriers are
overcome.
}
\section{Introduction}
\label{sec:intro}
In this article we develop a theory of Lagrangian distributions on asymptotically Euclidean manifolds.
Lagrangian distributions were defined by H\"ormander \cite{HormanderFIO} as a tool to obtain a global calculus of Fourier integral operators.
The latter are widely applied, e.g. in the study of partial differential equations \cite{DH}, spectral theory \cite{DG}, index theory \cite{BaeStro} and mathematical physics \cite{GuSt}.
Motivating examples for the necessity of studying Lagrangian distributions on asymptotically Euclidean spaces include fundamental solutions to the Klein-Gordon equation, which exhibit Lagrangian behavior ``at infinity'', see \cite{CoSc2}, as well as simple or multi-layers which arise when solving partial differential equations along infinite boundaries or Cauchy hypersurfaces, see \cite{Cordes}.
In local coordinates, a classical Lagrangian distribution $u$ on a manifold $X$ is given by an oscillatory integral of the form
\begin{equation}
\label{eq:osciintproto}
I_\varphi(a)=\int_{{\RR^s}} e^{i\varphi(x,\theta)}a(x,\theta)\,\dd\theta,
\end{equation}
for some symbol $a\in S^m({\RR^d}\times{\RR^s})$ and a phase function $\varphi$ on a subset of ${\mathbb{R}}^d\times{\mathbb{R}}^s$ bounded in $x$. A class of oscillatory integrals on Euclidean spaces, the local model for our theory, was studied in \cite{CoSc}.
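A guiding example, standard in the smooth setting and recalled here only for orientation, is the Dirac distribution at the origin,
\[
\delta_0(x)=(2\pi)^{-d}\int_{{\RR^d}} e^{i x\cdot\theta}\,\dd\theta,
\]
an oscillatory integral with phase function $\varphi(x,\theta)=x\cdot\theta$ and constant amplitude $a\equiv(2\pi)^{-d}\in S^0$. Here the stationary set $\{\dd_\theta\varphi=0\}=\{x=0\}$ maps, via $(x,\theta)\mapsto(x,\dd_x\varphi)$, onto the fiber $\Lambda=T_0^*{\RR^d}$, the Lagrangian submanifold associated to $\delta_0$.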
The key feature of the theory of Lagrangian distributions is that each such distribution is globally associated to a Lagrangian submanifold $\Lambda\subset T^*X$ and that its leading order behavior can be invariantly described by its principal symbol which is a section in a line bundle on $\Lambda$.
In this article, we prove that the situation on asymptotically Euclidean manifolds is similar, but with a more delicate structure ``at infinity''. To make this precise, we work within the framework of scattering geometry, developed in \cite{Melrose1,MZ}, see also \cite{HV,WZ}. In the article, we provide an extensive introduction to this theory and add to it a class of naturally arising morphisms, the \emph{scattering maps}. We note that the scattering manifolds may also be seen as Lie manifolds, and in this way our theory complements recent advances in the theory of Lagrangian distributions and Fourier integral operators on such singular spaces (via groupoid techniques), see \cite{Lescure}.
The prototype of a scattering geometry is the Euclidean space ${\RR^d}$, identified with a ball under radial compactification. For this setting, a fitting theory of Lagrangian submanifolds on ${\RR^d}$ was developed in \cite{CoSc2}. As a first step, we adapt this to general scattering manifolds with boundary $X=X^o\cup \partial X$, the boundary being viewed as infinity. On such manifolds, the environment for microlocalization is then the compactified scattering cotangent bundle ${}^\scat \,\overline{T}^*X$, a manifold with corners of codimension $2$, together with its boundary $\mathcal{W}=\partial\,{}^\scat \,\overline{T}^*X$.
This boundary may be seen as a stratified space, and the two boundary faces of ${}^\scat \,\overline{T}^*X$, which intersect in the corner, inherit a type of contact structure.
The geometric objects of study in our theory are then Legendrian submanifolds of the boundary faces of $\mathcal{W}$ which intersect the corner and arise as the boundary of some Lagrangian submanifold in the interior, together with smooth (distribution) densities thereon.
The link with Lagrangian distributions is now as follows. We prove that, despite the singular geometry, any Lagrangian submanifold $\Lambda \subset \mathcal{W}$ locally admits a parametrization through some phase function $\varphi$, via a generalization of the map
\[{\lambda_\varphi}:\mathcal{C}_\varphi\rightarrow\Lambda_\varphi\quad (x,\theta)\mapsto \big(x,\dd_x\varphi(x,\theta)\big),\]
where $\mathcal{C}_\varphi=(\dd_\theta\varphi)^{-1}\{0\}$.
For each such phase function, a Lagrangian distribution can be expressed locally as an oscillatory integral as in \eqref{eq:osciintproto}.
Up to Maslov factors and some density identifications, the restriction of $a(x,\theta)$ to $\mathcal{C}_\varphi$ yields the (principal) symbol $\sigma(u)$ of $u$ and is interpreted as a (density valued) function on $\Lambda$ by identification via ${\lambda_\varphi}$.
Indeed, the main theorem characterizing the principal symbol is the following:
\begin{thm*}
Let $\Lambda$ be a ${\mathrm{sc}}$-Lagrangian on $X$. Then there exists a surjective principal symbol map
%
\begin{equation*}
j^\Lambda_{m_e,m_\psi}\colon I^{m_e,m_\psi}(X,\Lambda) \to {\mathscr{C}^\infty}(\Lambda, M_\Lambda\otimes\Omega^{1/2}),
\end{equation*}
%
where $M_\Lambda$ is the Maslov bundle and $\Omega^{1/2}$ denotes the half-density bundle over $\Lambda$.
Moreover, its null space is $I^{m_e-1,m_\psi-1}(X,\Lambda)$ and
we have the short exact sequence
%
\[
0 \longrightarrow
I^{m_e-1,m_\psi-1}(X,\Lambda) \longrightarrow
I^{m_e,m_\psi}(X,\Lambda) \xrightarrow{j^\Lambda_{m_e,m_\psi}}
{\mathscr{C}^\infty}(\Lambda,M_\Lambda\otimes\Omega^{1/2})
\longrightarrow 0.
\]
Equivalently,
\[I^{m_e,m_\psi}(X,\Lambda) / I^{m_e-1,m_\psi-1}(X,\Lambda) \simeq {\mathscr{C}^\infty}(\Lambda, M_\Lambda\otimes\Omega^{1/2}).\]
%
%
%
\end{thm*}
Summarizing, our results show that the theory of Lagrangian distributions, classically studied either locally or on compact manifolds, may be generalized to a theory of Lagrangian distributions on Euclidean spaces or manifolds with boundary, hence a much wider class of geometries. It is formulated in a way that makes it easily transferable to other singular geometries as well as manifolds with corners, see \cite{Melrosemwc}.
The paper is organized as follows.
In Section \ref{sec:prelim} we give an introduction to scattering geometry.
In particular, we discuss the natural class of maps between scattering manifolds, compactification and scattering amplitudes.
In Section \ref{sec:phaseandlag} we define the Lagrangian submanifolds and phase functions that arise in our theory.
In Section \ref{sec:exchphase} we discuss the techniques of classifying phase functions which parametrize the same Lagrangian submanifold.
In Section \ref{sec:Lagdist} we define the Lagrangian distributions in this setting, starting from oscillatory integrals, and study their transformation properties.
Finally, in Section \ref{sec:symb}, we define the principal symbol of Lagrangian distributions and prove its invariance.
\subsection*{Acknowledgements}
The second author was supported by the DFG GRK-1463.
The third author has been partially supported by the University of Turin ``I@UniTO'' project ``Fourier integral operators, symplectic geometry and analysis on noncompact manifolds'' (Resp. S. Coriasco).
We wish to thank J. Wunsch for useful discussions.
\section{Preliminary definitions}\label{sec:prelim}
In the following, we will recall some elements of the geometric theory known as ``scattering geometry'', cf. \cite{Melrose1,Melrose2,MZ,WZ}. To start with, we need to recall some groundwork on the analysis on manifolds with corners, for which we adopt the definition of \cite{MelroseAPS,Melrosemwc}, cf. also \cite{MO} and \cite{Joyce} for a discussion of the different notions of manifolds with corners in the literature.
\subsection{Manifolds with corners and scattering geometry}
\label{sec:scat}
We recall the following extrinsic definition of a (smooth) manifold with (embedded) corners.
\subsubsection*{Manifolds with corners and ${\mathscr{C}^\infty}$-functions}
Let $X$ be a paracompact Hausdorff space. As in the case of manifolds without boundary, a manifold with corners is defined in terms of local charts. A $d$-dimensional chart with corners (of codimension $k$) on $X$ is a pair $(U,\phi)$, where $U$ is an open subset of $[0,\infty)^k\times {\mathbb{R}}^{d-k}$ for some $0\leq k \leq d$, and $\phi : U \to \phi(U)\subset X$ is a homeomorphism. If $k = 1$ we call $(U,\phi)$ a chart with boundary.
As usual, we define compatibility between charts and an atlas of charts and therefore obtain a definition of manifolds with boundary and manifolds with corners (abbreviated mwb and mwc, respectively, in the following).
For every manifold with corners $X$ of dimension $d$ there exists a $d$-dimen\-sional ${\mathscr{C}^\infty}$-manifold $\wt{X}$ without boundary
with $X\subset\wt{X}$, and the interior $X^o$ of $X$ is open in $\wt{X}$ and non-empty when $d>0$.
We denote by ${\mathscr{C}^\infty}(X)$ the space of the restrictions of the elements of ${\mathscr{C}^\infty}(\wt{X})$ to $X$. The tangent space $TX$ and differentials of maps $f:X\rightarrow Y$, $Tf:TX\rightarrow TY$, between manifolds with corners $X, Y$, are obtained as restrictions of the corresponding objects on $\wt{X}$ and $\wt{Y}$.
We always assume $X$ to be compact and assume that there is a finite collection of ${\mathscr{C}^\infty}$-functions on $\wt{X}$, $\{\rho_i\}_{i\in I}$, called boundary defining functions (abbreviated bdf), such that $X=\bigcap_{i\in I}\{p\in \wt{X}\,|\, \rho_i(p)\geq 0\}$, and at every point where $\rho_j=0$ for every $j\in J\subset I$, the differentials of these $\rho_j$ are required to be linearly independent. In particular, $\dd\rho_j\neq 0$ when $\rho_j=0$. We also always assume to be working in local coordinates of the form
$\mathbf{x}\colon p\mapsto (\rho_1,\dots,\rho_k,x_1,\dots,x_{d-k})(p)$, where $k$ is the number of boundary defining functions\footnote{Note that the $\rho_j$ cannot always be chosen as coordinates at interior points, since their differentials may vanish in the interior. As is customary, we disregard this minor technical inconvenience in order to allow for an easier consistent notation, and think of the $\rho_j$ as replaced by other admissible coordinate functions there.}. %
\begin{rem}\label{rem:joycedef}
Joyce calls this notion a (compact) \emph{manifold with embedded corners} (cf. Remark 2.11 in \cite{Joyce}).
By Proposition 2.15 in \cite{Joyce}, we see that, locally, a boundary defining function always exists, and the property that all corners are embedded ensures that a global boundary defining function exists.
Most of the time the actual choice of boundary defining function is not relevant (cf. Proposition 2.15).
\end{rem}
Let $p\in X$. Then the depth of $p$, $\mathrm{depth}(p)$, is the number of independent boundary defining functions vanishing at $p$, which coincides with the co-dimension of the boundary stratum in which $p$ is contained. We recall that for $j\in\{0,\dots,d\}$ one sets
$\partial_jX=\{p\in X\,|\,\mathrm{depth}(p)= j\}.$
In particular, $X^o=\partial_0 X$ and $\partial X=\bigcup_{j>0} \partial_j X$. We note that, as such, the boundary of a mwc is not a mwc itself, but rather a topological manifold. Nevertheless, it is possible to define smooth functions on $\partial X$ as the set of restrictions of smooth functions on $X$ to $\partial X$.
Given a relatively open subset $U$ of a manifold with corners $X$, we say that $U$ is \emph{interior} if $\overline{U}\cap\partial X=\emptyset$. Otherwise, we always assume that $U$ contains all interior points of the boundary $\overline{U}\cap \partial X$ and call $U$ a \emph{boundary neighbourhood}.
We will write $f\in{\mathscr{C}^\infty}(U)$ if and only if there is an extension $\wt{f}\in{\mathscr{C}^\infty}(X)$ that coincides with $f$ on $U$. The space $\rho_1^{-m_1}\cdots\rho_k^{-m_k}{\mathscr{C}^\infty}(U)$ is the space of functions $h\in{\mathscr{C}^\infty}(U^o)$ such that $\rho_1^{m_1}\cdots\rho_k^{m_k}h$ extends to an element of ${\mathscr{C}^\infty}(U)$.
The class of mwcs that interests us is that of (products of) fiber bundles where both the base and the fiber are allowed to be a compact manifold with boundary. The archetype of such a mwc is the product of two mwbs. Indeed, if $X$ and $Y$ are mwbs, $B=X\times Y$ is a mwc. We write $\mathcal{B}=\partial B$ and we have (adopting the notation of \cite{CoSc2,ES})
\begin{align*}
\mathcal{B}&=\underbrace{(\partial X\times Y^o) \cup (X^o\times \partial Y)}_{=\partial_1 B}\cup \underbrace{(\partial X\times \partial Y)}_{=\partial_2 B}=: \B^e \cup \B^\psi\cup \B^{\psi e}.
\end{align*}
We now describe the basics of scattering geometry, cf. \cite{Melrose1,Melrose2,MZ,WZ}. We first recall the guiding example.
\begin{defn}[Radial compactification of ${\RR^d}$]
Pick any diffeomorphism $\iota:{\RR^d}\rightarrow ({\BB^d})^o$ that, for $|x|>3$, is given by
\[\iota:x\mapsto \frac{x}{|x|}\left(1-\frac{1}{|x|}\right).\]
Then its inverse is given, for $|y|\geq \frac{2}{3}$, by
\[\iota^{-1}:y\mapsto \frac{y}{|y|}(1-|y|)^{-1}.\]
The map $\iota$ is called the \emph{radial compactification map}. We may hence view ${\RR^d}$ as the interior of the mwb ${\BB^d}$ and call $\partial {\BB^d}$ ``infinity''.
Denote by $\bdf{x}$ a smooth function ${\RR^d}\rightarrow (0,\infty)$ that, for $|x|>3$, is given by $x\mapsto |x|$. Then $(\iota^{-1})^*\bdf{x}^{-1}$ is a boundary defining function on ${\BB^d}$ (and we view $\bdf{x}^{-1}$ as a boundary defining function on ${\RR^d}$). Indeed, for $|y|>2/3$ it is given by
$y\mapsto 1-|y|=:\rho_Y$.
\end{defn}
\begin{rem}
\label{rem:compequiv}
In scattering geometry, the explicit choice of compactification of ${\RR^d}$ often differs from ours, see \cite{MZ}. Write $\jap{x}=\sqrt{1+|x|^2}$ for $x\in{\RR^d}$ and define
\[x\mapsto \left(\frac{1}{\jap{x}},\frac{x}{\jap{x}}\right)=:\left(\wt{\rho_Y},\wt{y}\right).\]
This maps ${\RR^d}$ into the interior of the half-sphere with positive first component, and $\wt{\rho_Y}$ and $d-1$ of the $\wt{y} = \wt{\rho_Y}\cdot x$ functions may be chosen as local coordinates. Because of the following computation, both compactifications are equivalent, meaning they yield diffeomorphic manifolds. In fact, for $|x|>3$, we may write
\[\jap{x}^{-1}=\bdf{x}^{-1}\frac{1}{\sqrt{1+\bdf{x}^{-2}}}, \qquad \bdf{x}^{-1}=\jap{x}^{-1}\frac{1}{\sqrt{1-\jap{x}^{-2}}}.\]
Hence, $\jap{x}^{-1}$ and $\bdf{x}^{-1}$ yield equivalent boundary defining functions on ${\RR^d}$.
\end{rem}
\begin{defn}[Scattering vector fields on mwbs]
Let $X$ be a mwb with bdf $\rho$. Consider the space $ {}^{b}\mathcal{V}(X)$ of vector fields tangential to $\partial X$.
Then ${}^\scat \,\mathcal{V}(X)$ is the space $\rho\, {}^{b}\mathcal{V}(X)$.
Near any point with $\rho=0$, the vector fields
$\{\rho^2\partial_\rho,\ \rho\partial_{x_j}\}$
generate ${}^\scat \,\mathcal{V}(X)$. In particular, ${}^\scat \,\mathcal{V}(X)$ contains vector fields supported in $X^o$.
By the Serre-Swan theorem, there exists a ${\mathscr{C}^\infty}$-vector bundle ${}^\scat \,T X$ such that ${}^\scat \,\mathcal{V}(X)$ are its ${\mathscr{C}^\infty}$-sections.
We have a natural inclusion map ${}^\scat \,T X\hookrightarrow TX$. Note that $\{\rho^2\partial_\rho,\ \rho\partial_{x_j}\}$ are, as elements of ${}^\scat \,T_pX$, non-vanishing at boundary points $p\in \partial X$ despite $\rho=0$.\\
The inclusion reverses for the dual bundles $ T^*X\hookrightarrow {}^\scat \,T^*X$. In coordinates, we denote the dual elements to $\{\rho^2\partial_\rho,\ \rho\partial_{x_j}\}$ by $\left\{\frac{\dd\rho}{\rho^2},\frac{\dd x_j}{\rho}\right\}$, and these span the sections of ${}^\scat \,T^*X$ near the boundary.
We now consider the \textit{compactified scattering cotangent bundle} ${}^\scat \,\overline{T}^*X$, which is the fiber-wise radial compactification of ${}^\scat \,T^*X$, a compact manifold with corners.
The newly formed fiber boundary may be identified with a rescaling of the cosphere bundle, called ${}^\scat \,S^*X$.
The boundary of the resulting mwc $W={}^\scat \,\overline{T}^*X$, which we denote\footnote{This is a slight change of notation compared to \cite{Melrose1}, where it is denoted $C_{\mathrm{sc}}$.} by $\mathcal{W}$, splits into three components: the boundary faces
$$\Wt^e:={}^\scat \,T^*_{\partial X}X,\qquad \Wt^\psi:={}^\scat \,S^*_{X^o}X,\qquad \Wt^{\psi e}:={}^\scat \,S^*_{\partial X}X.$$
This geometric situation (with $X$ identified as the zero section) near the boundary is summarised in Figure \ref{fig:Wt} (cf. \cite{CoSc2,MZ}).
\begin{figure}[htb!]
\begin{center}
\begin{tikzpicture}[scale=1.2]
\node (A) at (1.5,3.3) {$\Wt^\psi$};
\node (B) at (3.4,1.5) {$\Wt^e$};
\node (C) at (3.3,3.2) {$\Wt^{\psi e}$};
\node (D) at (1.2,1.2) {$X^o$};
\node (E) at (0.2,2) {${}^\scat \,\overline{T}^*X$};
\draw (1,3) -- (3,3);
\draw[dotted] (0,3) -- (1,3);
\draw (3,0) -- (3,3);
\draw[opacity=0.75] (1,1) -- (3,1);
\draw[opacity=0.75,dotted] (3,0) -- (3,-1);
\draw[opacity=0.75,dotted] (0,1) -- (1,1);
\draw[->,thick] (3.5,0.6) -- (3.05,0.9);
\node at (3.6,0.4) {$\partial X$};
\end{tikzpicture}
\caption{The boundary faces and corner of ${}^\scat \,\overline{T}^*X$}
\label{fig:Wt}
\end{center}
\end{figure}
The exterior derivative $\dd$ lifts to a well-defined scattering differential ${}^\scat \dd$ on the scattering geometric structure.
In coordinates, with $\rho$ a local boundary defining function, we write
\begin{align}
\label{def:scddef}
{}^\scat \dd f=\rho^2\partial_\rho f\,\frac{\dd\rho}{\rho^2}+\sum_{j=1}^{d-1}\rho\partial_{x_j} f\,\frac{\dd x_j}{\rho}.
\end{align}
Note that for $f\in {\mathscr{C}^\infty}(X)$, this means that as a section of ${}^\scat \,T^*X$, ${}^\scat \dd f$ necessarily vanishes on the boundary. In fact, we may extend ${}^\scat \dd$ to the space $\rho^{-1}{\mathscr{C}^\infty}(X)$ and obtain a map
$${}^\scat \dd:\rho^{-1}{\mathscr{C}^\infty}(X)\longrightarrow {}^\scat\, \varTheta(X)=\Gamma({}^\scat \,T^*X).$$
That is, in local coordinates near the boundary,
$${}^\scat \dd(\rho^{-1}f) = \rho^{-1}\, {}^\scat \dd f - f \frac{\dd\rho}{\rho^2}=(-f+\rho\partial_\rho f)\,\frac{\dd\rho}{\rho^2}+\sum_{j=1}^{d-1}\partial_{x_j} f\,\frac{\dd x_j}{\rho}.$$
\end{defn}
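For illustration (a routine check, not needed in the sequel), applying the extended differential to the function $\rho^{-1}\in\rho^{-1}{\mathscr{C}^\infty}(X)$ itself, that is, taking $f\equiv 1$ above, gives
\begin{equation*}
{}^\scat \dd(\rho^{-1})=-\frac{\dd\rho}{\rho^2},
\end{equation*}
a section of ${}^\scat \,T^*X$ which is non-vanishing up to the boundary. On ${\BB^d}$ with $\rho=\bdf{x}^{-1}$, this is precisely the differential $\dd r$ of the radial function $r=|x|$ on ${\RR^d}$, written in the scattering coframe.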
\begin{rem}
We note that $\rho^{-1}{\mathscr{C}^\infty}(X)$ and similarly defined spaces are independent of the actual choice of boundary defining function $\rho$ (cf. Remark \ref{rem:joycedef}).
\end{rem}
\begin{ex}
Outside a compact neighbourhood of the origin, polar coordinates provide an isomorphism ${\RR^d}\cong{\mathbb{R}}_+\times {\SSS^{d-1}}$. The vector fields $\partial_r$ and $\frac{1}{r}\partial_{x_j}$, $x_j$ being coordinates on ${\SSS^{d-1}}$, correspond (up to a sign) under radial inversion $\rho=\frac{1}{r}$ to $\rho^2\partial_\rho$ and $\rho\partial_{x_j}$. Hence, scattering vector fields on ${\BB^d}$ arise as the image of the vector fields of bounded length on ${\RR^d}$ under radial compactification.
\end{ex}
\begin{defn}
A \emph{scattering manifold} (also called asymptotically Euclidean manifold) is a compact manifold with boundary $(X, \rho)$ whose interior is equipped with a Riemannian metric $g$ which, in a tubular neighbourhood of the boundary, takes the form
$$g=\frac{{(\dd \rho)}^{\otimes 2}}{\rho^4}+\frac{g_\partial}{\rho^2}$$
where $g_\partial\in{\mathscr{C}^\infty}(X,\mathrm{Sym}^2 T^*X)$ restricts to a metric on $\partial X$.
\end{defn}
Any mwb may be equipped with a scattering metric.
\begin{ex}
In polar coordinates, the metric on ${\RR^d}\setminus\{0\}$ can be written as
$$g=(\dd r)^{\otimes 2}+r^2 g_{{\SSS^{d-1}}}.$$
Pulled back to ${\BB^d}$ using $\iota$, that is $r=(1-|y|)^{-1}=\rho^{-1}$ near the boundary, this becomes
$$g_{{\BB^d}}=\frac{(\dd \rho)^{\otimes 2}}{\rho^4}+\frac{g_{{\SSS^{d-1}}}}{\rho^2}.$$
\end{ex}
\begin{defn}[Scattering vector fields on product type manifolds]
For a product $B=X\times Y$, with $(X,\rho_X)$ and $(Y,\rho_Y)$ mwbs, we may introduce ${}^\scat \,\mathcal{V}(B)$ as $\rho_X\rho_Y ( {}^{b}\mathcal{V}(B))$.
Near a corner point the resulting bundle ${}^\scat \,T B$ is hence generated, if
$\mathbf{x}=(\rho_X,x)$ and $\mathbf{y}=(\rho_Y,y)$ are local coordinates on $X$ and $Y$ respectively, by
$$\rho_X^2\rho_Y\partial_{\rho_X},\ \rho_X\rho_Y\partial_{x_j},\ \rho_X\rho_Y^2\partial_{\rho_Y},\ \rho_X\rho_Y\partial_{y_k}.$$
The space ${}^\scat \,\mathcal{V}(B)$ splits into horizontal and vertical vector fields\footnote{Consider the projection ${\mathrm{pr}}_X:B\rightarrow X$. Then $v\in{}^\scat \,\mathcal{V}(B)$ satisfies $v\in{}^\scat \,\mathcal{V}^X(B)$ if $v({\mathrm{pr}}_X^*f)=0$ for all $f\in{\mathscr{C}^\infty}(X)$. The set ${}^\scat \, \mathcal{V}^Y(B)$ is defined in analogy.},
${}^\scat \,\mathcal{V}^X(B)$ and ${}^\scat \, \mathcal{V}^Y(B)$, respectively, and we define
${}^\scat\, \varTheta^X(B)$ as the set of (scattering) 1-forms $w \in {}^\scat\, \varTheta^1(B)$ such that
$w(v) = 0$ for all $v \in {}^\scat \, \mathcal{V}^Y(B)$.
Given a complete set of coordinates $\mathbf{x}=(\rho_X, x)$,
$\mathbf{y}=(\rho_Y, y)$ on $X$ and $Y$, respectively,
we see that ${}^\scat\, \varTheta^X(B)$ is the set of sections generated by
\[\frac{\dd\rho_X}{\rho_X^2\rho_Y}, \frac{\dd x_j}{\rho_X\rho_Y}.\]
The underlying vector bundle will be denoted by ${}^\scat H^X B$.
Similarly, we define ${}^\scat\, \varTheta^Y(B)$ and ${}^\scat H^Y B$.
It is important to note that we have the following ``rescaling identifications'':
\begin{equation}
\label{eq:rescal}
\begin{aligned}
{}^\scat\, \varTheta^X(B)\ni \frac{\dd \rho_X}{\rho_X^2 \rho_Y}&\,\longleftrightarrow\, \rho_Y^{-1}\frac{\dd\rho_X}{\rho_X^2}\in \rho_Y^{-1}{\mathscr{C}^\infty}(Y,{}^\scat\, \varTheta(X)),
\\
{}^\scat\, \varTheta^X(B)\ni \frac{\dd x_j}{\rho_X\rho_Y}&\,\longleftrightarrow\, \rho_Y^{-1}\frac{\dd x_j}{\rho_X}\in \rho_Y^{-1}{\mathscr{C}^\infty}(Y,{}^\scat\, \varTheta(X)).
\end{aligned}
\end{equation}
Again, we may define the scattering exterior differential
${}^\scat \dd$, induced by the usual exterior differential $\dd$, and extend it to a map
\[{}^\scat \dd : \rho_X^{-1}\rho_Y^{-1}{\mathscr{C}^\infty}(B)\longrightarrow {}^\scat\, \varTheta(B).\]
In terms of the scattering differentials on $X$ and $Y$ we may decompose ${}^\scat \dd$ as ${}^\scat \dd={}^\scat \dd_X+{}^\scat \dd_Y$, where
\begin{align*}
{}^\scat \dd_X : \rho_X^{-1}\rho_Y^{-1} {\mathscr{C}^\infty}(B) \to {}^\scat\, \varTheta^X(B),\\
{}^\scat \dd_Y : \rho_X^{-1}\rho_Y^{-1} {\mathscr{C}^\infty}(B) \to {}^\scat\, \varTheta^Y(B).
\end{align*}
\end{defn}
\subsection{Amplitudes}
\begin{defn}[Amplitudes of product-type]
Let $B$ be a mwc and $\{\rho_j\}_{j=1,\dots,k}$ a complete set of bdfs. Then a function $a$ is called an amplitude of order $m\in{\mathbb{R}}^{k}$ if
$$a\in \rho_1^{-m_1} \cdots \rho_k^{-m_k}{\mathscr{C}^\infty}(B).$$
For an open subset $U$ of $B$, a \emph{locally defined} amplitude of product type is an element of $\rho_1^{-m_1}\cdots\rho_k^{-m_k}{\mathscr{C}^\infty}(\overline{U})$.
For $p\in\partial B$ we call $a$ \emph{elliptic at $p$} if $(\rho^{m_1}_1\cdots\rho_k^{m_k}a)(p)\neq 0$.
We write
$${\dot{\mathscr{C}}^\infty_0}(B):=\bigcap_{m\in{\mathbb{R}}^k}\rho_1^{-m_1}\cdots\rho_k^{-m_k}{\mathscr{C}^\infty}(B)$$
for the smooth functions vanishing to infinite order at the boundary.
For $p\in\partial B$ we call $a$ \emph{rapidly decaying at $p$} if there exists a neighbourhood $U$ of $p$ such that $a$ vanishes of infinite order on $U\cap \partial B$, that is, $a \in {\dot{\mathscr{C}}^\infty_0}(\overline U)$.
\end{defn}
We now study the leading boundary behavior of these amplitudes. For simplicity, we only consider $B=X\times Y$ for mwbs $X$ and $Y$.
\begin{defn}
\label{def:princsymbol}
Let $a \in \rho_X^{-m_e}\rho_Y^{-m_\psi} {\mathscr{C}^\infty}(B)$ and write $a = \rho_X^{-m_e}\rho_Y^{-m_\psi} f$ for some $f \in {\mathscr{C}^\infty}(B)$.
Given a coordinate neighbourhood $U$ of a point $p\in\mathcal{B}^\bullet$, we define symbols $\sigma^\bullet(a)$ of $a$ on $U$ by
\begin{align*}
\begin{cases}
\sigma^e(a)(\mathbf{x},\mathbf{y})=\rho_X^{-m_e}\rho_Y^{-m_\psi}f(0,x,\mathbf{y}), &p\in\mathcal{B}^e\cup\mathcal{B}^{\psi e} \\
\sigma^\psi(a)(\mathbf{x},\mathbf{y})=\rho_X^{-m_e}\rho_Y^{-m_\psi}f(\mathbf{x},0,y), &p\in\mathcal{B}^\psi\cup\mathcal{B}^{\psi e}\\
\sigma^{\psi e}(a)(\mathbf{x},\mathbf{y})=\rho_X^{-m_e}\rho_Y^{-m_\psi}f(0,x,0,y), &p\in\mathcal{B}^{\psi e}.
\end{cases}
\end{align*}
The tuple $(\sigma^\psi(a),\sigma^e(a), \sigma^{\psi e}(a))$ is denoted by $\sigma(a)$ and called the \emph{principal symbol}.
\end{defn}
Fix $\epsilon>0$ so small that $\rho_X$ and $\rho_Y$ can be chosen as coordinates on $B$ wherever $\rho_X<\epsilon$ and $\rho_Y<\epsilon$, respectively.
We choose a cut-off function $\chi \in {\mathscr{C}^\infty}({\mathbb{R}})$ such that $\chi(t) = 0$ for $t > \epsilon/2$ and $\chi(t) = 1$ for $t < \epsilon/4$.
\begin{defn}\label{def:princpart}
For any $a \in \rho_X^{-m_e}\rho_Y^{-m_\psi} {\mathscr{C}^\infty}(B)$ the amplitude
%
\begin{align*}
a_p(\mathbf{x},\mathbf{y}) = \chi(\rho_X) \sigma^e(a)(\mathbf{x},\mathbf{y}) + \chi(\rho_Y) \sigma^\psi(a)(\mathbf{x},\mathbf{y}) - \chi(\rho_X)\chi(\rho_Y) \sigma^{\psi e}(a)(\mathbf{x},\mathbf{y})
\end{align*}
%
is called the \emph{principal part} of $a$.
\end{defn}
While $a_p$ does depend on the choice of $\chi$, its leading boundary asymptotics do not. By Taylor expansion of $f$, we obtain:
\begin{lem}
\label{lem:princpart}
The principal part $a_p$ of $a$ satisfies $ a - a_p \in \rho_X^{-m_e+1}\rho_Y^{-m_\psi+1} {\mathscr{C}^\infty}(B).$
\end{lem}
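The Taylor argument can be sketched as follows. If $g\in{\mathscr{C}^\infty}(B)$, then
\begin{equation*}
g(\mathbf{x},\mathbf{y})-g(0,x,\mathbf{y})-g(\mathbf{x},0,y)+g(0,x,0,y)\in \rho_X\rho_Y\,{\mathscr{C}^\infty}(B),
\end{equation*}
since the left-hand side is smooth and vanishes on $\{\rho_X=0\}$ as well as on $\{\rho_Y=0\}$. Applying this to $g=\rho_X^{m_e}\rho_Y^{m_\psi}a$ and taking into account that the cut-offs equal $1$ near the respective boundary faces (away from the faces, $\rho_X$ and $\rho_Y$ are bounded from below, so the membership is automatic there) yields the assertion of Lemma \ref{lem:princpart}.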
\begin{ex}[Classical ${\mathrm{SG}}$-symbols]
Let $B={\BB^d}\times{\BB^s}$, where ${\BB^d}$ and ${\BB^s}$ are the radial compactifications of ${\RR^d}$ and ${\RR^s}$.
The space of so-called classical ${\mathrm{SG}}$-symbols, $\mathrm{SG}_\mathrm{cl}^{m_e,m_\psi}({\RR^d}\times{\RR^s})$, is that of $a\in{\mathscr{C}^\infty}({\RR^d}\times{\RR^s})$ such that
$(\iota^{-1}\times\iota^{-1})^*a\in \rho_X^{-m_e} \rho_Y^{-m_\psi}{\mathscr{C}^\infty}(B)$. These symbols are then precisely those that satisfy the estimates
\begin{equation}
\label{eq:SGest}
\left|\partial_x^{\alpha}\partial_\theta^{\beta} a(x,\theta)\right|\lesssim \jap{x}^{m_e-|\alpha|}\jap{\theta}^{m_\psi-|\beta|}
\end{equation}
and admit a polyhomogeneous expansion, see \cite{ES,Melrose1,WZ}; the principal symbol of $a$ corresponds to its homogeneous coefficients, see \cite[Chap. 8.2]{ES}.
\end{ex}
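A concrete instance, included as a sanity check: for the standard weight $a(x,\theta)=\jap{x}^{m_e}\jap{\theta}^{m_\psi}$ we have $a\in\mathrm{SG}_\mathrm{cl}^{m_e,m_\psi}({\RR^d}\times{\RR^s})$. Indeed, by Remark \ref{rem:compequiv}, $\jap{x}^{-1}$ pulls back to a boundary defining function equivalent to $\rho_X$ (and similarly in $\theta$), so that $(\iota^{-1}\times\iota^{-1})^*a\in \rho_X^{-m_e}\rho_Y^{-m_\psi}{\mathscr{C}^\infty}(B)$, while the estimates \eqref{eq:SGest} reduce to the elementary bounds
\begin{equation*}
\left|\partial_x^{\alpha}\jap{x}^{m_e}\right|\lesssim \jap{x}^{m_e-|\alpha|},\qquad
\left|\partial_\theta^{\beta}\jap{\theta}^{m_\psi}\right|\lesssim \jap{\theta}^{m_\psi-|\beta|}.
\end{equation*}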
We will need to consider density-valued amplitudes and integrate amplitudes on mwbs. For this, we introduce the scattering $\sigma$-density bundles, cf. \cite{Melrose1}, where ${}^\scat\, \Omega^{\sigma}(X)=\rho^{-\sigma(d+1)}\Omega^\sigma(X)$ in terms of the usual $\sigma$-density bundle. Note that ${}^\scat\, \Omega^{\sigma}$ does not depend on the choice of boundary defining function.
\begin{ex}\label{ex:mugdensity}
Under the radial compactification, the canonical Lebesgue integration density on ${\mathbb{R}}^d$, $\dd x \in \Omega^1({\mathbb{R}}^d)$, is mapped to $\iota_*\dd x \in {}^\scat\, \Omega^1({\mathbb{B}}^d)$.
In particular, we obtain $\iota_*\dd x = \rho^{-(d+1)}\dd\rho\,\dd{\SSS^{d-1}}$. More generally, if $(X,g)$ is a scattering manifold, then the metric induces a canonical scattering volume $1$-density $\mu_g$.
\end{ex}
Since the density bundle is a line bundle, any choice of scattering density provides a section of it and allows for an identification of scattering densities on $X$ and ${\mathscr{C}^\infty}$-functions.
We denote the set of all smooth sections of the bundle ${}^\scat\, \Omega^\sigma(X)$ by ${\mathscr{C}^\infty}(X,{}^\scat\, \Omega^\sigma(X))$, and
the tempered distribution densities $({\dot{\mathscr{C}}^\infty_0})'(X, {}^\scat\, \Omega^\sigma(X))$ are the continuous linear functionals on ${\dot{\mathscr{C}}^\infty_0}(X, {}^\scat\, \Omega^{1-\sigma}(X))$.
\begin{lem}\label{lem:intdensity}
Let $X$ be a mwb and $Y$ a manifold without boundary. Then, integration over $Y$ induces a map
\begin{align*}
\int_Y : {\mathscr{C}^\infty_{c}}(X\times Y, {}^\scat\, \Omega^1(X\times Y)) \longrightarrow \rho_X^{-\dim Y} {\mathscr{C}^\infty_{c}}(X,{}^\scat\, \Omega^1(X)).
\end{align*}
\end{lem}
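In local coordinates near $\partial X$, Lemma \ref{lem:intdensity} amounts to the following computation. Writing $d=\dim X$, $s=\dim Y$ and $\mu=u\,\rho_X^{-(d+s+1)}\,|\dd\rho_X\,\dd x\,\dd y|$ with $u\in{\mathscr{C}^\infty_{c}}$, in accordance with ${}^\scat\, \Omega^{1}=\rho^{-(\dim+1)}\Omega^{1}$ above, we obtain
\begin{equation*}
\int_Y \mu=\rho_X^{-s}\left(\int_Y u(\rho_X,x,y)\,\dd y\right)\rho_X^{-(d+1)}\,|\dd\rho_X\,\dd x|\in\rho_X^{-s}\,{\mathscr{C}^\infty_{c}}(X,{}^\scat\, \Omega^1(X)),
\end{equation*}
the factor $\rho_X^{-s}$ accounting for the different scattering density weights on $X\times Y$ and on $X$.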
\begin{rem}\label{rem:pushforward}
More generally, let $X,Y$ be mwbs and $Z$ a manifold without boundary.
Consider a differentiable fibration $f : X \to Y$ with typical fiber $Z$.
For every scattering density $\mu \in {\mathscr{C}^\infty}(X,{}^\scat\, \Omega^1(X))$ the pushforward
\[f_* \mu \in \rho_Y^{-\dim Z} {\mathscr{C}^\infty_{c}}(Y, {}^\scat\, \Omega^1(Y))\]
is defined locally by integration along the fiber.
Let $(U, \psi)$ be a trivializing neighborhood of the fiber bundle,
that is, $U \subset Y$ open, $\psi : f^{-1}(U) \to U \times Z$ a diffeomorphism and $f|_{f^{-1}(U)} = {\mathrm{pr}}_U \circ \psi$.
Assume without loss of generality that $\mu$ is supported in $f^{-1}(U)$.
Then set
\[f_* \mu = \int_Z \psi_*\mu.\]
\end{rem}
\subsection{Scattering maps}
We now introduce and characterize the class of maps whose pull-backs preserve amplitudes of product type. They are a special case of interior $b$-maps in the sense of \cite{MelroseAPS}, and humbly mimicking Melrose's naming conventions we call them ${\mathrm{sc}}$-maps. We first introduce them on manifolds with boundary and then generalize to manifolds with higher corner degeneracy, such as products of mwcs.
\begin{defn}[${\mathrm{sc}}$-maps on mwb]
Let $Y$ and $Z$ be mwbs and let $\Psi:Y\rightarrow Z$ be a smooth map. Then $\Psi$ is called an ${\mathrm{sc}}$-map if for any $m\in{\mathbb{R}}$ and $a\in \rho_Z^{-m}{\mathscr{C}^\infty}(Z)$ it holds that:
\begin{enumerate}
\item $\Psi^*a\in \rho_Y^{-m}{\mathscr{C}^\infty}(Y)$;
\item if $p\in \Psi(Y)$ with $p=\Psi(q)$ and $(\rho_Z^{m} a)(p)> 0$, then $(\rho_Y^{m} \Psi^*a)(q)> 0$.
\end{enumerate}
\end{defn}
\begin{rem}\label{rem:inward}
In particular, $\Psi$ maps the boundary of $Y$ into that of $Z$.
It also follows that $T\Psi$ maps inward pointing vectors at the boundary (meaning vectors with strictly positive $\partial_\rho$-component) to inward pointing vectors at the corresponding points. Indeed, with $h$ as in Lemma \ref{lem:SGmapcoord} below, the $\partial_{\rho_Z}$-component of $\Psi_*\partial_{\rho_Y}$ at the boundary equals $\partial_{\rho_Y}(\Psi^*\rho_Z)=h>0$.
\end{rem}
\begin{rem}\label{rem:SGmapcomp}
It is obvious that the composition of two ${\mathrm{sc}}$-maps is again an ${\mathrm{sc}}$-map.
\end{rem}
It is straightforward to adapt this definition to that of a local ${\mathrm{sc}}$-map by replacing $Y$ and $Z$ with open subsets.
\begin{lem}[${\mathrm{sc}}$-maps in coordinates]
\label{lem:SGmapcoord}
Let $Y$ and $Z$ be mwbs, $U\subset Y$ and $V\subset Z$ open subsets.
A smooth map $\Psi:U\rightarrow V$ is a local ${\mathrm{sc}}$-map if and only if for the boundary defining functions on $Y$ and $Z$, $\rho_Y$ and $\rho_Z$, respectively, we have
\begin{equation}
\label{eq:locscmap}
\Psi^*\rho_Z=\rho_Yh\text{ for some }h\in{\mathscr{C}^\infty}(Y)\text{ with }h> 0.
\end{equation}
\end{lem}
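To illustrate the sufficiency of \eqref{eq:locscmap}, write $a=\rho_Z^{-m}f$ with $f\in{\mathscr{C}^\infty}(Z)$. Then
\begin{equation*}
\Psi^*a=(\Psi^*\rho_Z)^{-m}\,\Psi^*f=\rho_Y^{-m}\left(h^{-m}\,\Psi^*f\right),
\end{equation*}
and $h^{-m}\,\Psi^*f\in{\mathscr{C}^\infty}(Y)$ since $h>0$; moreover, $(\rho_Y^{m}\Psi^*a)(q)=h^{-m}(q)f(\Psi(q))$ is positive whenever $f(\Psi(q))$ is, so both defining properties of an ${\mathrm{sc}}$-map hold.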
Hence, any local diffeomorphism of mwbs is a local scattering map. Moreover:
\begin{lem}\label{lem:SGmapproj}
Let $X, Z$ be mwbs.
Given any open, bounded set $U \subset {\RR^d}$, define the projection ${\mathrm{pr}}_Z : Z \times U \to Z, (z,y) \mapsto z$.
Then $\mathrm{id}_X \times {\mathrm{pr}}_Z$ is an ${\mathrm{sc}}$-map.
\end{lem}
We now investigate the action of pull-backs by ${\mathrm{sc}}$-maps on the
objects introduced above. The following assertions can be verified in local coordinates.
\begin{lem}\label{lem:scdpullback}
%
Let $Y$ and $Z$ be mwbs, $U\subset Y$ and $V\subset Z$ open subsets.
Let $\Psi:U\rightarrow V$ be a local ${\mathrm{sc}}$-map. Then,
the following properties hold true.
\begin{itemize}
%
\item $\Psi^*$ yields a map $\rho_Z^m\,{}^\scat\, \varTheta^k(V)\rightarrow \rho_Y^m\,{}^\scat\, \varTheta^k(U)$ for any $m\in {\mathbb{R}}$ and $k\in {\mathbb{N}}$. Moreover, for $\theta\in\rho_{Z}^{m} \,{}^\scat\, \varTheta^k(V)$, we have
${}^\scat \dd (\Psi^*\theta) = \Psi^*({}^\scat \dd \theta)$.
\item $\Psi^*$ yields a map ${}^\scat\, \Omega^\sigma(V)\rightarrow {}^\scat\, \Omega^\sigma(U)$ for any $\sigma\in[0,1]$.
\item The map $T^*\Psi:T^*V\rightarrow T^*U$ lifts to a map ${}^\scat \,\overline{T}^*\Psi:{}^\scat \,\overline{T}^*V\rightarrow {}^\scat \,\overline{T}^*U$. In local coordinates, away from fiber-infinity, ${}^\scat \,\overline{T}^*\Psi$ is given by
$$(\Psi(\mathbf{y}),\boldsymbol{\zeta})\mapsto \big(\mathbf{y},\iota({}^t(J \Psi)(\iota^{-1}\boldsymbol{\zeta}))\big),$$
%
wherein $J\Psi$ is the Jacobian of $\Psi$ at $\mathbf{y}.$
The extension to fiber-infinity is obtained by taking interior limits $|\zeta|^{-1}\rightarrow 0$.
\end{itemize}
%
\end{lem}
We observe that ${\mathrm{sc}}$-maps provide a natural class of maps between scattering manifolds.
\begin{cor}
Suppose $Y$ is a mwb, $(Z,\rho_Z,g)$ a scattering manifold, and $\Psi:Y\rightarrow Z$
an ${\mathrm{sc}}$-map which is an immersion. Then $(Y,\Psi^*\rho_Z,\Psi^*g)$ is a scattering manifold.
\end{cor}
\begin{proof}
We first observe that $\Psi^*\rho_Z$ is a boundary defining function on $Y$. Indeed,
\begin{equation}
\label{eq:scTident}
\dd\Psi^*\rho_Z=h\,\dd\rho_Y+\rho_Y \dd h.
\end{equation}
This implies that, at the boundary, $\dd\Psi^*\rho_Z=h\,\dd\rho_Y\neq 0$. The scattering metric on $Z$ pulls back to
$$\Psi^*g=\Psi^*\frac{(\dd \rho_Z)^{\otimes 2}}{\rho_Z^4}+\Psi^*\frac{g_\partial}{\rho_Z^2}=\frac{(\dd \Psi^*\rho_Z)^{\otimes 2}}{(\Psi^*\rho_Z)^4}+\frac{\Psi^*g_\partial}{(\Psi^*\rho_Z)^2},$$
which is again a scattering metric.
\end{proof}
\begin{cor}
\label{cor:realizeonball}
Any scattering manifold $Y$ of dimension $s$ is locally diffeomorphic to ${\BB^s}$. Moreover, any scattering density on $Y$ can locally be written as the pull-back of one on ${\BB^s}$.
\end{cor}
We now extend the notion of ${\mathrm{sc}}$-map to manifolds with corners.
\begin{defn}[${\mathrm{sc}}$-maps on mwc]
Let $Y$ and $Z$ be mwcs. Then, a smooth map $\Psi:Y\rightarrow Z$ is a local ${\mathrm{sc}}$-map for some complete sets of local bdfs $\{\rho_{Y_i}\}_{i\in I}$ and $\{\rho_{Z_i}\}_{i\in I}$ if:
$$\text{For all }i\in I\text{ we have }\Psi^*\rho_{Z_i}=\rho_{Y_i} h_i\text{ for some }h_{i}\in{\mathscr{C}^\infty}(Y)\text{ with }h_{i}> 0.$$
\end{defn}
\begin{rem}
In particular, $\Psi$ maps the boundary of $Y$ into that of $Z$.
As mentioned before, ${\mathrm{sc}}$-maps are special cases of $b$-maps. In fact, they are those \emph{interior} $b$-maps that are smooth maps in the sense of \cite{Joyce}. The only difference with the smooth maps in \cite{Joyce} is that, therein, $\Psi^*\rho_{Z_i}\equiv 0$ is allowed.
\end{rem}
\begin{ex}
In particular, if $\Psi_1:Y_1\rightarrow Z_1$ and $\Psi_2:Y_2\rightarrow Z_2$ are ${\mathrm{sc}}$-maps on mwbs, then $\Psi_1\times \Psi_2:Y_1\times Y_2\rightarrow Z_1\times Z_2$ is an ${\mathrm{sc}}$-map on the resulting product mwc.
\end{ex}
\begin{rem}
Note that we fix the ordering of the boundary defining functions. This is important, in particular, when considering ${\mathrm{sc}}$-maps between products $X\times Y\rightarrow X\times Z$ or of the form $X\times Y\rightarrow{}^\scat \,\overline{T}^*X$. Most of the time, the choice of bdfs will be clear from the context.
\end{rem}
Note that, on a mwb, it is possible to extend any map $\partial X\rightarrow \partial X$, $x\mapsto x'$, to a scattering map by setting $(\rho_X,x)\mapsto (\rho_X,x')$ in a collar neighbourhood of $\partial X$ given by $X\cong [0,\epsilon)\times \partial X$. The following proposition grants us the ability to extend scattering maps from a corner into the interior.
\begin{prop}
\label{prop:cornerdiffeo}
Let $B_1=X_1\times Y_1$ and $B_2=X_2\times Y_2$ be products of mwbs. Let $\Psi^e$, $\Psi^\psi$ be two (local) scattering maps near a point $p\in\B^{\psi e}_1$,
\begin{align*}
\Psi^e:\B^e_1\longrightarrow \B^e_2 \quad\text{ and }\quad
\Psi^\psi: \B^\psi_1\longrightarrow \B^\psi_2
\end{align*}
such that $\Psi^e=\Psi^\psi$ when restricted to $\B^{\psi e}_1$. Then there exists a (local) scattering map $\Psi$ on a neighbourhood $U\subset B_1$ of $p$ with $\Psi^\bullet=\Psi|_{\mathcal{B}^\bullet}$ such that
\begin{equation}
\label{eq:strictscproperty}
\partial_{\rho_{X_1}}\Psi^*\rho_{Y_2}=\partial_{\rho_{Y_1}}\Psi^*\rho_{X_2}=0\quad \text{on }\mathcal{B}_1.
\end{equation}
If $\Psi^e$ and $\Psi^\psi$ are local diffeomorphisms near $p$ (in their respective boundary faces), then $\Psi$ is a local diffeomorphism near $p$.
\end{prop}
\begin{proof}
This is Whitney's extension theorem for smooth functions, applied to the system of functions (and their derivatives)
\begin{align*}
(\Psi^e)^*x,(\Psi^e)^*y,(\Psi^e)^*\rho_Y &\qquad\textrm{on }\B^e_1,\\
(\Psi^\psi)^*\rho_X,(\Psi^\psi)^*x,(\Psi^\psi)^*y &\qquad\textrm{on }\B^\psi_1,
\end{align*}
together with the conditions \eqref{eq:strictscproperty} and
\begin{align*}
D_{x,y}\Psi^*\rho_{Y_2}=0\quad \text{on }\mathcal{B}_1^\psi,\\
D_{x,y}\Psi^*\rho_{X_2}=0\quad \text{on }\mathcal{B}_1^e.
\end{align*}
Note that, if $\Psi^e$ and $\Psi^\psi$ are local diffeomorphisms at $p$, the differential of $\Psi$ is an invertible block matrix, and hence $\Psi$ is a local diffeomorphism.
\end{proof}
\begin{lem}\label{lem:trafoprinc}
Consider a ${\mathrm{sc}}$-map $\Psi : X \times Y \to X \times Y$ of product form $\Psi = \Psi_X \times \Psi_Y$, with ${\mathrm{sc}}$-maps on $X,Y$, $\Psi_X$ and $\Psi_Y$, respectively.
Assume $a \in \rho_Y^{-m_\psi} \rho_X^{-m_e} {\mathscr{C}^\infty}(X\times Y)$. With the notation of Definition \ref{def:princsymbol} and \ref{def:princpart}, we have:
\begin{align*}
\sigma^\psi(\Psi^*a) - \Psi^*(\sigma^\psi a) &\in \rho_Y^{-m_\psi + 1} \rho_X^{-m_e}{\mathscr{C}^\infty},\\
\sigma^e(\Psi^*a) - \Psi^*(\sigma^e a) &\in \rho_Y^{-m_\psi}\rho_X^{-m_e + 1} {\mathscr{C}^\infty},\\
(\Psi^*a)_p - \Psi^*(a_p) &\in \rho_Y^{-m_\psi + 1} \rho_X^{-m_e + 1}{\mathscr{C}^\infty}.
\end{align*}
\end{lem}
\begin{proof}
We will only prove the first identity, the others follow by similar arguments.
Write $(\Psi^*\rho_X)(\mathbf{x})=\rho_X h_X(\mathbf{x})$ and $(\Psi^*\rho_Y)(\mathbf{y})=\rho_Y h_Y(\mathbf{y})$. If $a = \rho_X^{-m_e} \rho_Y^{-m_\psi} f$ then
\begin{align*}
(\Psi^*a)(\mathbf{x},\mathbf{y}) = \rho_X^{-m_e} \rho_Y^{-m_\psi} h_X^{-m_e}(\mathbf{x}) h_Y^{-m_\psi}(\mathbf{y}) (\Psi^*f)(\mathbf{x}, \mathbf{y}).
\end{align*}
This implies
\begin{align*}
\sigma^{\psi}(\Psi^*a)(\mathbf{x},\mathbf{y}) &= \rho_X^{-m_e} \rho_Y^{-m_\psi} h_X^{-m_e}(\mathbf{x}) h_Y^{-m_\psi}(0,y) (\Psi^*f)(\mathbf{x}, 0, y),\\
\Psi^*(\sigma^{\psi}a)(\mathbf{x},\mathbf{y}) &= \rho_X^{-m_e} \rho_Y^{-m_\psi} h_X^{-m_e}(\mathbf{x}) h_Y^{-m_\psi}(\mathbf{y}) (\Psi^*f)(\mathbf{x}, 0, y).
\end{align*}
Using Taylor's theorem, we obtain that $h_Y^{-m_\psi}(\mathbf{y}) - h_Y^{-m_\psi}(0,y) \in \rho_Y {\mathscr{C}^\infty}(X\times Y)$, and therefore
$\sigma^\psi(\Psi^*a) - \Psi^*(\sigma^\psi a) \in \rho_Y^{-m_\psi + 1} \rho_X^{-m_e}{\mathscr{C}^\infty}(X\times Y)$, as claimed.
\end{proof}
\begin{cor}
The principal part of $a\in \rho_Y^{-m_\psi} \rho_X^{-m_e} {\mathscr{C}^\infty}(X\times Y)$ is well-defined as an element of
\[
\rho_X^{-m_e} \rho_Y^{-m_\psi} {\mathscr{C}^\infty}(X\times Y) / \rho_X^{-m_e+1} \rho_Y^{-m_\psi+1} {\mathscr{C}^\infty}(X\times Y),
\]
and does not depend on the choice of boundary-defining functions $\rho_X,\rho_Y$ on $X,Y$.
\end{cor}
\begin{rem}
Note that the space
\[\rho_X^{-m_e} \rho_Y^{-m_\psi} {\mathscr{C}^\infty}(X\times Y) / \rho_X^{-m_e+1} \rho_Y^{-m_\psi+1} {\mathscr{C}^\infty}(X\times Y)\]
can be identified with ${\mathscr{C}^\infty}(\partial(X\times Y))$, which identifies our notion of principal symbol with that of \cite[Section 6.4]{Melrose2}.
\end{rem}
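\begin{ex}
As an elementary consistency check of Lemma \ref{lem:trafoprinc} (not needed in the sequel), take $a=\rho_X^{-m_e}\rho_Y^{-m_\psi}$, that is $f\equiv 1$, and $\Psi=\Psi_X\times\Psi_Y$ with $\Psi^*\rho_X=\rho_X h_X$ and $\Psi^*\rho_Y=\rho_Y h_Y$. Then
\begin{align*}
\Psi^*a&=\rho_X^{-m_e}\rho_Y^{-m_\psi}h_X^{-m_e}(\mathbf{x})h_Y^{-m_\psi}(\mathbf{y}),\\
\sigma^\psi(\Psi^*a)-\Psi^*(\sigma^\psi a)&=\rho_X^{-m_e}\rho_Y^{-m_\psi}h_X^{-m_e}(\mathbf{x})\big(h_Y^{-m_\psi}(0,y)-h_Y^{-m_\psi}(\mathbf{y})\big),
\end{align*}
and the last factor vanishes at $\rho_Y=0$, so the difference indeed lies in $\rho_Y^{-m_\psi+1}\rho_X^{-m_e}{\mathscr{C}^\infty}(X\times Y)$.
\end{ex}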
The following lemma is one of the main technical tools in this article.
We have already observed that the local model of a scattering manifold near the boundary is the radial compactification of ${\RR^d}$. We now show that scattering maps arise naturally as the composition of vector-valued amplitudes with the radial compactification. Furthermore, we clarify the relation between the total derivative and the scattering differential under compactification.
\begin{lem}
\label{lem:horror}
Let $Y$ be a mwb.
Let $f\in \rho_Y^{-1}{\mathscr{C}^\infty}(Y,{\RR^d})$ with $\rho_Y|f|\neq 0$ on $\partial Y$.\footnote{This means $\rho_Y f$ is the restriction to $Y^o$ of an element of $g\in{\mathscr{C}^\infty}(Y,{\RR^d})$ with $g\neq 0$ on $\partial Y$.} Then,
$\Psi=\iota\circ f$ extends to a local ${\mathrm{sc}}$-map $Y\rightarrow {\BB^d}$. Moreover, the matrix of coefficients of
\[{}^\scat \dd f = \begin{pmatrix}
{}^\scat \dd f_1\\
\vdots\\
{}^\scat \dd f_d
\end{pmatrix}\]
has the same rank as the differential $T\Psi$ of $\Psi$.
\end{lem}
\begin{proof}
Since $\iota$ is a diffeomorphism, $\iota \circ f$ is a smooth map on the region where $\rho_Y>\varepsilon$, for any $\varepsilon>0$, and we may thus restrict our attention to a neighbourhood of $\partial Y$ where $\rho_Y |f|$ is everywhere non-vanishing.
As usual, we pick a suitable collar neighbourhood of product type such that locally $Y=[0,\varepsilon)\times \partial Y$, and we write $\dim(Y)=s$ and
$\mathbf{y}=(\rho_Y,y)$ for the coordinates. There we need to compute $\Psi^*\rho_{Z}$.
Write
$ f(\rho_Y, y) = \rho^{-1}_Y h(\rho_Y,y)$
for $h\in {\mathscr{C}^\infty}(Y,{\RR^d})$ with $h(0,y)\neq 0$ for all $(0,y)\in\partial Y$. Since $\rho_Y$ is assumed sufficiently small,
$|f(\mathbf{y})|=\rho_Y^{-1}|h(\mathbf{y})|$ may be assumed sufficiently large and hence
$$\Psi(\mathbf{y})=(\iota\circ f)(\mathbf{y})=\frac{f(\mathbf{y})}{|f(\mathbf{y})|}\left(1-\frac{1}{|f(\mathbf{y})|}\right)=\frac{h(\mathbf{y})}{|h(\mathbf{y})|}\left(1-\frac{\rho_Y}{|h(\mathbf{y})|}\right).$$
In this form, $\Psi$ clearly extends up to the boundary.
The boundary defining function on ${\BB^d}$ is, in this coordinate patch,
$\rho_Z=1-|x|$. Thus,
$$\Psi^*\rho_Z=\frac{1}{|f(\mathbf{y})|}=\rho_Y \frac{1}{\rho_Y|f(\mathbf{y})|}.$$
By assumption, $\rho_Y|f(\mathbf{y})|=|h(\mathbf{y})|$ is smooth and non-vanishing, which proves that $\Psi$ is an ${\mathrm{sc}}$-map.
For the second half of the statement we first observe that, since $\iota$ is a diffeomorphism ${\RR^d}\rightarrow ({\BB^d})^o$ and ${}^\scat \dd$ coincides, up to a rescaling by a non-vanishing factor, with the usual differential in the interior, we may restrict our attention to the boundary $\partial Y$. Then we compute
\begin{align*}
{}^\scat \dd f(\mathbf{y}) &=
\rho_Y^2\partial_{\rho_Y} f(\mathbf{y})\,\frac{\dd\rho_Y}{\rho_Y^2}+\sum_{j=1}^{s-1}\rho_Y\partial_{y_j} f(\mathbf{y})\,\frac{\dd y_j}{\rho_Y}\\
&=(-h(\mathbf{y})+ \rho_Y \partial_{\rho_Y}h(\mathbf{y}))\,\frac{\dd\rho_Y}{\rho_Y^2} + \sum_{j=1}^{s-1}\partial_{y_j} h(\mathbf{y}) \,\frac{\dd y_j}{\rho_Y}.
\end{align*}
We identify ${}^\scat \dd f$ with its $(d\times s)$-dimensional block matrix of coefficients
$$\begin{pmatrix}
-h(\mathbf{y})+ \rho_Y \partial_{\rho_Y}h(\mathbf{y}) & (\partial_{y_j} h(\mathbf{y}))_{j=1,\dots,s-1}
\end{pmatrix}.$$
At the boundary $\rho_Y=0$ we obtain
\begin{align}
\label{eq:scdhorror}
\begin{pmatrix}
-h & (\partial_{y_j} h)_{j=1,\dots,s-1}
\end{pmatrix}\!(0,y).
\end{align}
We want to compare the rank of \eqref{eq:scdhorror} with that of the differential of $\Psi$ at the point $(0,y)\in\partial Y$.
As shown above, the function $\Psi$ is given, at an arbitrary point $\mathbf{y}=(\rho_Y,y)$ close enough to $\partial Y$, by
\begin{align*}
\frac{h(\mathbf{y})}{|h(\mathbf{y})|}\left(1-\frac{\rho_Y}{|h(\mathbf{y})|}\right),
\end{align*}
whose differential at $(0,y)$ is the block matrix
\begin{align}\label{eq:JPsi}
T \Psi(0, y) =
\begin{pmatrix}
-\frac{h}{|h|^2}+\partial_{\rho_Y}\frac{h}{|h|} & \left(\partial_{y_j} \frac{h}{|h|}\right)_{j=1,\dots,s-1}
\end{pmatrix}\!(0,y).
\end{align}
Now observe that, since they are derivatives of unit vectors,
$\partial_{y_j} \frac{h}{|h|}$ and $\partial_{\rho_Y} \frac{h}{|h|}$ are orthogonal to $h$, which is itself non-zero.\footnote{Recall that,
in fact, $|v(t)|=1 \Leftrightarrow v(t)\cdot v(t) = 1\Rightarrow 2v(t)\cdot v^\prime(t)=0\Leftrightarrow v(t) \perp v^\prime(t)$.}
Therefore, the rank of $T\Psi(0,y)$ equals that of the block matrix
\begin{align}\label{eq:JPsimod}
\begin{pmatrix}
-h &
\left(\partial_{y_j} \frac{h}{|h|}\right)_{j=1,\dots,s-1}
\end{pmatrix}\!(0,y).
\end{align}
Finally, we have that
$$\partial_{y_j} h= \partial_{y_j} \left(|h|\frac{h}{|h|}\right)=\underbrace{|h| \partial_{y_j} \frac{h}{|h|}}_{\text{collinear to }{\partial_{y_j} \frac{h}{|h|}}} + \underbrace{\frac{(h\cdot\partial_{y_j} h)}{|h|^2} h}_{\text{collinear to }h} .$$
This means that the null spaces (and hence the ranks) of \eqref{eq:scdhorror} and \eqref{eq:JPsimod} coincide.
\end{proof}
\begin{ex}
The simplest example of a map to which Lemma \ref{lem:horror} applies is given by $f=\iota^{-1}:({\BB^d})^o\rightarrow {\RR^d}$.
\end{ex}
\begin{rem}
Recall (cf. \cite[App. C.3]{Hormander3}) that the intersection of two ${\mathscr{C}^\infty}$-sub\-mani\-folds $Y$ and $Z$ of a ${\mathscr{C}^\infty}$-manifold $X$ is \emph{clean} with excess $e\in{\NNz_0}$ if $Y\cap Z$ is a ${\mathscr{C}^\infty}$-submanifold of $X$ satisfying
\begin{align*}
T_x(Y\cap Z)&=T_xY\cap T_x Z,\qquad \forall x\in Y\cap Z,\\
\codim(Y)+\codim(Z)&=\codim(Y\cap Z)+e.
\end{align*}
\end{rem}
\begin{ex}\label{ex:embdball}
Let $X$ be a mwb and $a \in \rho_X^{-m_e} \rho_{{\mathbb{B}}^s}^{-m_\psi} {\mathscr{C}^\infty}(X \times {\mathbb{B}}^s)$. In this example, we extend $a$ to a local symbol on a suitable subset of $X \times {\mathbb{B}}^{s+1}$.
We view ${\mathbb{B}}^{s+1}$ as embedded in ${\mathbb{R}}^{s+1}$ with coordinates $(y_1,\dots,y_s,\tilde{y})$.
Define
\[\jmath : {\mathbb{B}}^{s+1} \to {\mathbb{B}}^s \times (-1,1), \qquad (y,\tilde{y}) \mapsto \left(\frac{y}{\sqrt{1 - \tilde{y}^2}}, \tilde{y}\right),\]
where $y = (y_1, \dotsc, y_s)$.
For every $\varepsilon \in (0,1)$, we obtain coordinates on
$$U = \jmath^{-1}\left\{{\mathbb{B}}^s \times (-\varepsilon, \varepsilon)\right\} = {\mathbb{B}}^{s+1} \cap \{|\tilde{y}| < \varepsilon\},$$
cf. Figure \ref{fig:fiberball}.
We note that $U$ is a fibration with base ${\mathbb{B}}^s$ and fiber $(-\varepsilon, \varepsilon)$.
\begin{figure}[!ht]
\begin{center}
\begin{tikzpicture}[
scale=0.7,
MyPoints/.style={draw=black,fill=white,thick},
]
\draw (-3,0) -- (3,0) node[below, midway]{$\mathbb{B}^s$};
\foreach \sss in {-3,-2.7,...,3}{
\draw[lightgray,domain=-0.9:0.9,smooth,variable=\y] plot ({\sss*sqrt(1-\y*\y)},{3*\y});
};
\draw (0,0) circle (3);
\draw[lightgray,dashed] (-{3*sqrt(0.19)},2.7) -- ({3*sqrt(0.19)},2.7);
\draw[lightgray,dashed] (-{3*sqrt(0.19)},-2.7) -- ({3*sqrt(0.19)},-2.7) node[black,below=3pt]{$\mathbb{B}^{s+1}$};
\draw[->] (3.5,0) -- (4.5,0) node[above, midway]{$\jmath$};
\begin{scope}[shift={(8,0)}]
\foreach \sss in {0,0.3,0.6,...,3}{
\draw[-,lightgray] ({\sss},-2.7)--({\sss},2.7);
\draw[-,lightgray] ({-\sss},-2.7)--(-{\sss},2.7);}
\draw (-3,0) -- (3,0) node[below, midway]{$\mathbb{B}^s$};
\draw[dashed,lightgray] (-3,2.7) -- (3,2.7);
\draw (3,2.7) -- (3,-2.7) node[black,below=3pt]{$\mathbb{B}^s \times (-\varepsilon, \varepsilon)$};
\draw[dashed,lightgray] (3,-2.7) -- (-3,-2.7);
\draw (-3,-2.7) -- (-3,2.7);
\end{scope}
\end{tikzpicture}
\caption{The action of $\jmath$ visualized}
\label{fig:fiberball}
\end{center}
\end{figure}
We verify that $\jmath$ is a ${\mathrm{sc}}$-map. For this we now view ${\mathbb{B}}^s\times(-\varepsilon,\varepsilon)$ as a (non-compact) manifold with
boundary\footnote{This means we view ${\mathbb{B}}^s\times(-\varepsilon,\varepsilon)$ as embedded in the manifold with boundary ${\mathbb{B}}^s\times{\mathbb{S}}^1$, which can be embedded in ${\mathbb{S}}^s\times{\mathbb{S}}^1$.
For higher dimension, we embed $(-\varepsilon, \varepsilon)^r \hookrightarrow \mathbb{T}^r$.}
with boundary defining function $\rho_Z=1-[y]$. Observe that near the boundary we have
\begin{align*}
\jmath^*\rho_Z &= 1-\frac{[y]}{\sqrt{1-\tilde{y}^2}}\\
&=(1-\sqrt{[y]^2+\tilde{y}^2})\cdot \frac{1}{ \sqrt{1-\tilde{y}^2} } \cdot \frac{\sqrt{1 - \tilde{y}^2} - [y]}{1 - \sqrt{\tilde{y}^2 + [y]^2}}\\
&=\rho_{{\mathbb{B}}^{s+1}}h.
\end{align*}
Since $|\tilde{y}| < \varepsilon$, $h$ is positive and in ${\mathscr{C}^\infty}(U)$. Hence $\jmath$ is an ${\mathrm{sc}}$-map.
As usual, we may perform the same construction fiber-wise on a fiber bundle by considering local product decompositions to obtain a local ${\mathrm{sc}}$-map. Namely, let $X$ be an arbitrary mwb. Then $\Psi = \mathrm{id}_X \times \jmath$ is again a ${\mathrm{sc}}$-map on the product $X\times \big({\BB^s}\times(-\varepsilon,\varepsilon)\big)$. Using Lemma \ref{lem:SGmapproj} and Remark \ref{rem:SGmapcomp}, we see that $\tilde{\Psi} = (\mathrm{id}_X \times {\mathrm{pr}}_{{\mathbb{B}}^s}) \circ \Psi : X \times U \to X \times {\mathbb{B}}^s$
is a ${\mathrm{sc}}$-map. Hence, $\tilde{\Psi}^* a \in \rho_X^{-m_e} \rho_{{\mathbb{B}}^{s+1}}^{-m_\psi} {\mathscr{C}^\infty}(X \times U)$.
\end{ex}
\section{Phase functions and Lagrangian submanifolds}
\label{sec:phaseandlag}
\subsection{Clean phase functions}
\begin{defn}[Phase functions]
Let $X$ and $Y$ be mwbs, $B=X\times Y$. Let $U$ be an open subset in $B$. Then, a real valued $\varphi\in \rho_X^{-1}\rho_Y^{-1}{\mathscr{C}^\infty}(U)$ is a \emph{local} (${\mathrm{sc}}$-\-)\allowbreak phase function if it is the restriction of some $\widetilde{\varphi}\in \rho_X^{-1}\rho_Y^{-1}{\mathscr{C}^\infty}(B)$ to $U$ such that ${}^\scat \dd\tilde{\varphi}(p)\neq 0$ for all $p\in\overline{\B^\psi}\cap\overline{\partial U}$.
If $U=B$, that is, $\varphi\in \rho_X^{-1}\rho_Y^{-1}{\mathscr{C}^\infty}(B)$ with ${}^\scat \dd\varphi(p)\neq 0$ for all $p\in\overline{\mathcal{B}^\psi}$, we call $\varphi$ a \emph{global} ${\mathrm{sc}}$-phase function.
\end{defn}
\begin{rem}
Phrased differently, if $U$ is an interior open set, $\varphi$ is just a smooth function.
In the non-trivial case of $U$ being a boundary neighbourhood, the above definition means that, for every $p\in\partial B$ in the $\psi$- or $\psi e$-component of the boundary of $U$, there exists an element $\zeta \in{}^\scat \,\mathcal{V}(B)$ such that $\zeta (\varphi)$ is elliptic at $p$, meaning $\zeta (\varphi)\in{\mathscr{C}^\infty}(X\times Y)$ satisfies $\big(\zeta\varphi\big)(p)\neq 0$. It is, by compactness, bounded away from zero at the possible limit points in $\overline{\partial U}$. In the following, we usually do not write $\widetilde{\varphi}$ but simply identify $\widetilde{\varphi}$ and $\varphi$ at these limit points.
\end{rem}
\begin{ex}[${\mathrm{SG}}$-phase functions]
If $B={\BB^d}\times{\BB^s}$, such $\varphi$ correspond to so-called (classical) ${\mathrm{SG}}$-phase functions on ${\RR^d}\times{\RR^s}$, cf. \cite{CoSc,CoSc2}, but with a relaxed condition as $\|x\|\rightarrow \infty$. Indeed, in light of the ${\mathrm{SG}}$-estimates \eqref{eq:SGest}, the previous definition translates to
\begin{equation}
\label{eq:SGphaseineq}
|\jap{x}^{-1} \nabla_\theta\varphi|^2+|\jap{\theta}^{-1}\nabla_x\varphi|^2\geq C\quad \text{for} \quad |\theta|\gg 0.
\end{equation}
The relationship between these and ``standard'' phase functions which are homogeneous in $\theta$ is discussed in \cite{CoSc2}. Examples of ${\mathrm{SG}}$-phase functions are the standard Fourier phase $x\cdot \theta$ on ${\mathbb{R}}^d_x\times{\mathbb{R}}^d_\theta$ and $x_0\jap{\theta}-x\cdot \theta$ on
${\mathbb{R}}_{x_0,x}^{d+1}\times{\mathbb{R}}^d_\theta$.
\end{ex}
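\begin{ex}
As a routine verification of \eqref{eq:SGphaseineq} in the simplest case, consider the Fourier phase $\varphi(x,\theta)=x\cdot\theta$. Then $\nabla_\theta\varphi=x$ and $\nabla_x\varphi=\theta$, so
$$|\jap{x}^{-1}\nabla_\theta\varphi|^2+|\jap{\theta}^{-1}\nabla_x\varphi|^2\geq \frac{|\theta|^2}{1+|\theta|^2}\geq\frac12\quad\text{for }|\theta|\geq 1,$$
and \eqref{eq:SGphaseineq} holds with $C=\tfrac12$.
\end{ex}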
\begin{defn}[The set of critical points]
Let $B=X\times Y$, $\varphi\in \rho_X^{-1}\rho_Y^{-1}{\mathscr{C}^\infty}(B)$ a (local) phase function. A point $p\in B$ (in the domain of $\varphi$) is called a \emph{critical point} of $\varphi$ if ${}^\scat \dd_Y\varphi(p)=0$, that is, if
$\zeta(\varphi)(p)=0$ for every $\zeta \in{}^\scat \, \mathcal{V}^Y(B).$
We define
\begin{equation}
C_\varphi=\{p\in B\,|\, {}^\scat \dd_Y\varphi(p) = 0 \}.
\end{equation}
We set $\mathcal{C}_\varphi=C_\varphi\cap \mathcal{B}$ and specify
$$\mathcal{C}_\varphi^\bullet = \mathcal{C}_\varphi\cap \mathcal{B}^\bullet \quad \text{for} \quad \bullet \in\{e,\psi,\psi e\}. $$
\end{defn}
We now adapt the usual definition of a \emph{clean} phase function from the classical setting to the case with boundary.
\begin{defn}[Clean phase functions]
\label{def:cleanphase}
A phase function $\varphi$
is called \emph{clean} if the following conditions hold:
\begin{itemize}
\item[1.)] there exists a neighbourhood $U\subset B$ of $\partial B$ such that $C_\varphi\cap U$ is a manifold with corners with $\partial C_\varphi\subset \partial B$;
\item[2.)] the tangent space $T_pC_\varphi$ is at every point $p$ given by those vectors $v\in T_p B$ such that $v(\zeta(\varphi))=0$ for all $\zeta\in {}^\scat \, \mathcal{V}^Y$, that is, $T({}^\scat \dd_Y\varphi)v=0$;
\item[3.)] the intersections $\mathcal{C}_\varphi^\bullet=C_\varphi\cap\mathcal{B}^\bullet$ are clean.
\end{itemize}
\end{defn}
The last condition is equivalent to the existence of $w \in T_{\mathcal{C}_\varphi^\bullet}\mathcal{C}_\varphi^\bullet$ such that
\begin{align}\label{eq:cleanbdry}
(T{}^\scat \dd_Y\varphi)(w + \partial_{\rho_\bullet}) = 0.
\end{align}
This means that, for some $w$ tangent to $\mathcal{B}^\bullet$, we have $w + \partial_{\rho_\bullet} \in T_{\mathcal{C}_\varphi^\bullet} \mathcal{C}_\varphi$. Here, $\rho_\bullet$ is a bdf of $\mathcal{B}^\bullet$. We now discuss the implications of these conditions.
\begin{lem}
\label{lem:Cpprops}
Let $\varphi$ be a clean phase function. Then either we are in the ``non-corner crossing case'' $1a.)$ or in the ``corner crossing case'' $1b.)$, namely,
\begin{enumerate}[label=1\alph*.)]
\item both $\mathcal{C}_\varphi^e$ and $\mathcal{C}_\varphi^\psi$ are closed manifolds (without boundary) and $\mathcal{C}_\varphi^{\psi e}=\emptyset$;
\item $\mathcal{C}_\varphi$ consists of two components, $\overline{\mathcal{C}_\varphi^e}$ and $\overline{\mathcal{C}_\varphi^\psi}$, which are both submanifolds (with boundary) of the same dimension $\dim(C_\varphi)-1$, with common boundary $\mathcal{C}_\varphi^{\psi e}=\partial \overline{\mathcal{C}_\varphi^e}=\partial \overline{\mathcal{C}_\varphi^\psi}$ contained in the corner $\mathcal{B}^{\psi e}$ of $\mathcal{B}$. The intersection of $\overline{\mathcal{C}_\varphi^e}$ and $\overline{\mathcal{C}_\varphi^\psi}$ in $\mathcal{C}_\varphi^{\psi e}$ is again clean.
\end{enumerate}
In both cases, the differential of ${}^\scat \dd_Y\varphi:B\rightarrow {}^\scat \,T^*B$, viewed as a map $T({}^\scat \dd_Y\varphi):TB\rightarrow T({}^\scat \,T^*B)$, characterizes $T\mathcal{C}_\varphi^\bullet$:
\begin{enumerate}[label=\arabic*.)]
\setcounter{enumi}{1}
\item \label{it:Cpprops2} The tangent space of $\overline{\mathcal{C}_\varphi^e}$ and $\overline{\mathcal{C}_\varphi^\psi}$ at each point $p$ is given by those vectors $v\in T\mathcal{B}^\bullet$ such that $v(\zeta(\varphi))=0$ for all $\zeta\in {}^\scat \, \mathcal{V}^Y$, that is $T({}^\scat \dd_Y\varphi)v=0$.
\end{enumerate}
\end{lem}
By condition $2.)$ of Definition \ref{def:cleanphase}, we have
$\dim(\ker(T({}^\scat \dd_Y\varphi)))=\dim C_\varphi$. Hence,
the restrictions of $T({}^\scat \dd_Y\varphi)$ to the individual boundary components of $B$ on $\mathcal{C}_\varphi$ are of constant rank. Namely,
\[
\rk(T({}^\scat \dd_Y\varphi))=
\begin{cases}
s-e & \text{on $C_\varphi^o$},
\\
s-e-1 & \text{on $\mathcal{C}_\varphi^\psi$ and $\mathcal{C}_\varphi^e$},
\\
s-e-2 & \text{on $\mathcal{C}_\varphi^{\psi e}$},
\end{cases}
\]
for some fixed number $e$, called the excess of $\varphi$, which is given by
$$e=\dim C_\varphi - d.$$
\begin{rem}
Conversely, if the rank of $T({}^\scat \dd_Y\varphi)$ is constant \emph{in a neighborhood} of each critical point of ${}^\scat \dd_Y\varphi$, then $\varphi$ is clean by the constant rank theorem. In case $e=0$, $\varphi$ is called \emph{non-degenerate}, and the two characterizations coincide. The corresponding case of ${\mathrm{SG}}$-phase functions (on ${\RR^d}$) was studied in \cite{CoSc2}.
\end{rem}
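\begin{ex}
As an elementary interior illustration, take again the Fourier phase $\varphi(x,\theta)=x\cdot\theta$ on ${\mathbb{R}}^d_x\times{\mathbb{R}}^d_\theta$, viewed as an ${\mathrm{SG}}$-phase function as above. Away from the boundary, ${}^\scat \dd_Y\varphi=0$ reduces to $\nabla_\theta\varphi=x=0$, so
$$C_\varphi\cap B^o=\{(0,\theta)\,|\,\theta\in{\RR^d}\},\qquad e=\dim C_\varphi-d=0,$$
and the differential of $\nabla_\theta\varphi$ in all variables, the block matrix $\begin{pmatrix}\mathrm{I}_d & 0\end{pmatrix}$, has constant rank $d=s-e$ (here $s=d$). Hence $\varphi$ is non-degenerate.
\end{ex}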
\subsection{The associated Lagrangian}
In the classical local theory without boundary on subsets of ${\RR^d}\times({\mathbb{R}}^s\setminus\{0\})$, see \cite[Chapter XXI.2]{Hormander3},
the set of critical points $\mathcal{C}_\varphi$ is realized as an immersed Lagrangian in $T^*{\RR^d}$ by the map
$(x,\theta)\mapsto (x,\varphi_x^\prime(x,\theta))$.
In the present setting, the situation is more complicated.
Following \cite{CoSc2}, we define an analogous map ${\lambda_\varphi}$ on the mwc $B=X\times Y$ into ${}^\scat \,\overline{T}^*X$.
For that, we consider the following sequence of maps: Using the ``rescaling identifications'' \eqref{eq:rescal}, we may view $(\mathbf{x},\mathbf{y})\rightarrow {}^\scat \dd_X\varphi(\mathbf{x},\mathbf{y})$ as a map in
$\rho_Y^{-1}{\mathscr{C}^\infty}(Y,{}^\scat\, \varTheta(X))$. Since ${}^\scat\, \varTheta(X)$ are the sections of ${}^\scat \,\overline{T}^*X$, composing with the radial compactification yields, in view
of Lemma \ref{lem:horror}, a map into the compactified fibers of ${}^\scat \,\overline{T}^*X$.
\begin{defn}
The map $\lambda_\varphi: B\rightarrow {}^\scat \,\overline{T}^*X$ is defined by
$$(\mathbf{x},\mathbf{y})\mapsto \big(\mathbf{x},\iota({}^\scat \dd_X\varphi(\mathbf{x},\mathbf{y}))\big).$$
\end{defn}
\begin{lem}
\label{lem:lpsct}
There is a neighbourhood $U\subset B$ of $\mathcal{C}_\varphi$ such that ${\lambda_\varphi}: U\rightarrow {}^\scat \,\overline{T}^*X$ is a local ${\mathrm{sc}}$-map.
\end{lem}
\begin{proof}
We write $\mathbf{x}=(\rho_X,x)$ and $\mathbf{y}=(\rho_Y,y)$ for coordinates in $B$, and $\mathbf{x}$, $\boldsymbol{\xi}=(\rho_\Xi,\xi)$ for coordinates in ${}^\scat \,\overline{T}^*X$. Since $\lambda_\varphi$ is the identity in the first set of variables, we have
$\lambda_\varphi^*\mathbf{x}=\mathbf{x}.$
In the second set of variables, $\lambda_\varphi$ acts as $\iota\,\circ\,{}^\scat \dd_X\varphi$, with ${}^\scat \dd_X\varphi\in\rho_Y^{-1}{\mathscr{C}^\infty}(Y,{}^\scat\, \varTheta(X))$. Notice that on $\mathcal{C}_\varphi^\psi\cup\mathcal{C}_\varphi^{\psi e}$, we have ${}^\scat \dd_X\varphi(\mathbf{x},\mathbf{y})\neq 0$, since ${}^\scat \dd \varphi\neq 0$ on $\B^\psi\cup\B^{\psi e}$ and ${}^\scat \dd_Y\varphi=0$ on $\mathcal{C}_\varphi$. Hence, due to compactness, we may find a neighbourhood of $\mathcal{C}_\varphi^\psi\cup\mathcal{C}_\varphi^{\psi e}$ on which ${}^\scat \dd_X\varphi(\mathbf{x},\mathbf{y})\neq 0$.
Writing $\varphi=\rho_X^{-1}\rho_Y^{-1}f$ for $f\in{\mathscr{C}^\infty}(X\times Y)$, this means
$$(-f+\rho_X\partial_{\rho_X}f) \frac{\dd \rho_X}{\rho_X^2\rho_Y} + \sum_{j=1}^{d-1}\partial_{x_j}f\frac{\dd x_j}{\rho_X\rho_Y} \neq 0.$$
Rescaling and viewing ${}^\scat \dd_X \varphi$ as a map in $\rho_Y^{-1}{\mathscr{C}^\infty}(Y,{}^\scat\, \varTheta(X))$, we express ${}^\scat \dd_X\varphi$ as
\begin{equation}
\label{eq:scdxexpl}
{}^\scat \dd_X\varphi=\rho_Y^{-1} \left((-f+\rho_X\partial_{\rho_X}f) \frac{\dd \rho_X}{\rho_X^2} + \sum_{j=1}^{d-1}\partial_{x_j}f\frac{\dd x_j}{\rho_X} \right).
\end{equation}
Composing with $\iota$, we are therefore in the situation of Lemma \ref{lem:horror}, up to additional smooth dependence on the $X$-variables, and conclude that $\lambda_\varphi$ is a local ${\mathrm{sc}}$-map.
On $\mathcal{C}_\varphi^e$, away from $\mathcal{C}_\varphi^{\psi e}$, we have that $\rho_Y\neq 0$ and correspondingly ${}^\scat \dd_X\varphi(\mathbf{x},\mathbf{y})$ stays bounded. Since $\iota$ maps bounded arguments into the interior,
we find ${\lambda_\varphi}^*\rho_\Xi\neq 0$. Since ${\lambda_\varphi}$ is smooth, ${\lambda_\varphi}$ is an ${\mathrm{sc}}$-map.
\end{proof}
In particular, $\iota({}^\scat \dd_X\varphi(\mathbf{x},\mathbf{y}))$ maps boundary points with $\rho_Y=0$ to boundary points of the fiber, that is to $\Wt^\psi\cup \Wt^{\psi e}$.
\begin{defn}
We define $L_\varphi={\lambda_\varphi}(C_\varphi)$ and $\Lambda_\varphi:={\lambda_\varphi}(\mathcal{C}_\varphi)$. We further write $\Lambda_\varphi^\bullet$ for ${\lambda_\varphi}(\mathcal{C}_\varphi^\bullet)\subset \mathcal{W}^\bullet$ for $\bullet\in\{e,\psi,\psi e\}$. We say that $\varphi$ parametrizes $L_\varphi$ and $\Lambda_\varphi$.
\end{defn}
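\begin{ex}
Continuing the interior computation for the Fourier phase $\varphi(x,\theta)=x\cdot\theta$ (again a routine check), we have $\nabla_x\varphi=\theta$, so that on $C_\varphi\cap B^o=\{x=0\}$,
$$\lambda_\varphi(0,\theta)=\big(0,\iota(\theta)\big),\qquad L_\varphi=\{(0,\iota(\theta))\,|\,\theta\in{\RR^d}\},$$
which is the fiber-wise compactification of the conormal space $T^*_0{\RR^d}$ of the origin. Its closure meets the boundary face $\Wt^\psi$ in the sphere $\Lambda_\varphi^\psi={\lambda_\varphi}(\mathcal{C}_\varphi^\psi)$ over the origin; in this sense $\varphi$ parametrizes the compactified Lagrangian $T^*_0{\RR^d}$.
\end{ex}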
\begin{thm}
\label{thm:lpsubm}
The map ${\lambda_\varphi}: \mathcal{C}_\varphi \rightarrow {}^\scat \,\overline{T}^*X$ is of constant rank $d$. Its image $L_\varphi$ as well as the boundary and corner faces $\Lambda_\varphi^\bullet={\lambda_\varphi}(\mathcal{C}_\varphi^\bullet)$ are immersed manifolds of dimension $\dim\Lambda_\varphi^\bullet=\dim\mathcal{C}_\varphi^\bullet-e$. Furthermore, ${\lambda_\varphi}:\mathcal{C}_\varphi\rightarrow \Lambda_\varphi$ is a submersion.
\end{thm}
The proof is inspired by that of Lemma 2.3.2 in \cite{Duistermaat} (adapted to clean phase functions), but much more involved, due to the presence of the compactification. We treat this new phenomenon by carefully applying Lemma~\ref{lem:horror}.
\begin{proof}
We obtain the rank of $T{\lambda_\varphi}$ for ${\lambda_\varphi}: \mathcal{C}_\varphi \rightarrow {}^\scat \,\overline{T}^*X$ by computing the dimension of its null space.
Let $v = \delta\rho_X \cdot \partial_{\rho_X} + \delta x\cdot \partial_x + \delta\rho_Y\cdot \partial_{\rho_Y} + \delta y\cdot \partial_y$ be a vector at a point
$p = (\rho_X,x,\rho_Y,y) \in \mathcal{C}_\varphi$. For the moment, we assume $\rho_Y>0$. We write ${\lambda_\varphi} = (\mathrm{id} \times \iota) \circ {\ell_\varphi}$ with
\begin{align*}
{\ell_\varphi} : X\times Y^o \rightarrow {}^\scat \,T^*X\qquad
(x,y) \mapsto (x,{}^\scat \dd_X \varphi(x,y)).
\end{align*}
Assume that $T{\ell_\varphi}(p)v = 0$ and $v \in T_p \mathcal{C}_\varphi$.
The condition $T{\ell_\varphi}(p)v = 0$ implies that $\delta\rho_X = 0$ and $\delta x = 0$.
Let $\tilde{v} = \delta\rho_Y\cdot \partial_{\rho_Y} + \delta y\cdot \partial_y$. Hence the assumptions are reduced to
\begin{equation}\label{eq:VscdYX}
\begin{aligned}
\tilde{v} {}^\scat \dd_X \varphi(p) &= 0,\\
\tilde{v} {}^\scat \dd_Y \varphi(p) &= 0,
\end{aligned}
\end{equation}
where $\tilde{v}$ is interpreted as acting on the coefficient functions of the differentials.
In coordinates, these coefficient functions are given by
\begin{align*}
{}^\scat \dd_X \varphi(p) = \rho_Y^{-1}(-f + \rho_X\partial_{\rho_X} f, \partial_x f)(p), \qquad
{}^\scat \dd_Y \varphi(p) = (-f + \rho_Y \partial_{\rho_Y} f, \partial_y f)(p).
\end{align*}
On $\mathcal{C}_\varphi$, where $-f + \rho_Y \partial_{\rho_Y} f = 0$ and $\partial_y f = 0$ hold true, it is easily seen that \eqref{eq:VscdYX} is
equivalent to
\begin{align}\label{mat:scdX}
\begin{pmatrix}
\rho_X\rho_Y^{-2} (\rho_Y \partial_{\rho_Y} - 1) \partial_{\rho_X} f & \rho_X\rho_Y^{-1} \partial_{\rho_X}\partial_y f\\
\rho_Y^{-2}(\rho_Y \partial_{\rho_Y} - 1) \partial_x f & \rho_Y^{-1}\partial_x\partial_y f\\
\rho_Y \partial_{\rho_Y}\partial_{\rho_Y} f & \rho_Y\partial_{\rho_Y}\partial_y f\\
\partial_{\rho_Y}\partial_y f & \partial_y \partial_y f
\end{pmatrix}
\begin{pmatrix}
\delta \rho_Y\\
\delta y
\end{pmatrix}
= 0.
\end{align}
The cleanness condition translates to the dimension of the nullspace of $T{}^\scat \dd_Y \varphi$ being constantly $e$. We identify $T{}^\scat \dd_Y\varphi$ with the matrix
\begin{align}\label{mat:clean}
J =
\begin{pmatrix}
(\rho_Y \partial_{\rho_Y}-1) \partial_{\rho_X}f & \partial_y \partial_{\rho_X} f\\
(\rho_Y \partial_{\rho_Y}-1) \partial_x f & \partial_y\partial_x f\\
\rho_Y\partial_{\rho_Y}\partial_{\rho_Y} f & \partial_y\partial_{\rho_Y} f\\
\rho_Y \partial_{\rho_Y}\partial_y f & \partial_y\partial_y f
\end{pmatrix}.
\end{align}
The matrices appearing in \eqref{mat:scdX} and \eqref{mat:clean} are related by
\begin{align*}
J
=
\begin{pmatrix}
\rho_Y\rho_X^{-1} & 0 & 0 & 0\\
0 & \rho_Y & 0 & 0\\
0 & 0 & \rho_Y^{-1} & 0\\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
\rho_X\rho_Y^{-2} (\rho_Y \partial_{\rho_Y} - 1) \partial_{\rho_X} f & \rho_X\rho_Y^{-1} \partial_{\rho_X}\partial_y f\\
\rho_Y^{-2}(\rho_Y \partial_{\rho_Y} - 1) \partial_x f & \rho_Y^{-1}\partial_x\partial_y f\\
\rho_Y \partial_{\rho_Y}\partial_{\rho_Y} f & \rho_Y\partial_{\rho_Y}\partial_y f\\
\partial_{\rho_Y}\partial_y f & \partial_y \partial_y f
\end{pmatrix}
\begin{pmatrix}
\rho_Y & 0\\
0 & 1
\end{pmatrix}.
\end{align*}
This proves that \eqref{eq:VscdYX} is equivalent to $v \in \ker T{}^\scat \dd_Y \varphi$ under our assumptions $\rho_Y > 0$ and $\rho_X > 0$, and the rank of
${\ell_\varphi}$ is given by
\begin{align*}
\rk {\ell_\varphi} &= \dim T_p \mathcal{C}_\varphi - \dim \ker T{}^\scat \dd_Y \varphi
= (d + e) - e=d.
\end{align*}
Now assume that $\rho_X = 0$. We see that the first row of \eqref{mat:scdX} vanishes identically,
but we have the additional condition \eqref{eq:cleanbdry}, implying that, at $\rho_X=0$, the first row of \eqref{mat:clean} depends linearly on the other rows.
Therefore, the rank of ${\ell_\varphi}$ is still $d$ at points with $\rho_X=0$. The composition with $\mathrm{id} \times \iota$ changes nothing for $\rho_Y > 0$, since $\iota$ is a diffeomorphism there.
To perform the limit $\rho_Y \rightarrow 0$, we have to examine carefully the effect of the presence of the
compactification $\iota$, in the spirit of the proof of Lemma \ref{lem:horror}.
For $v \in T_p \mathcal{C}_\varphi$ such that $T{\lambda_\varphi}(p)v = 0$, that is, as above, of the form
\[v = \delta\rho_Y\cdot \partial_{\rho_Y} + \delta y\cdot \partial_y,\]
we now obtain the set of equations
\begin{equation}\label{eq:iotaVscdYX}
\begin{aligned}
v \big(\iota\,{}^\scat \dd_X \varphi\big)(p) &= 0,\\
v {}^\scat \dd_Y \varphi(p) &= 0,
\end{aligned}
\end{equation}
which are equivalent to the set of equations
\begin{align}\label{mat:iotaVscd}
\begin{pmatrix}
\partial_{\rho_Y} \iota {}^\scat \dd_X \varphi & \partial_y \iota {}^\scat \dd_X \varphi\\
\partial_{\rho_Y} \partial_y f & \partial_y \partial_y f
\end{pmatrix}
\begin{pmatrix}
\delta \rho_Y\\
\delta y
\end{pmatrix}
= 0.
\end{align}
We need to compare the rank of the coefficient matrix in \eqref{mat:iotaVscd} with that of $T{}^\scat \dd_Y \varphi$ at points of the form $(\rho_X,x,0,y)$.
For this purpose, we go through a series of ``reductions'', along the lines of the proof of Lemma \ref{lem:horror}, to simplify the comparison.
First, we can identify ${}^\scat \dd_X\varphi$ with
\[
\rho_Y^{-1}\begin{pmatrix}- f+\rho_X\partial_{\rho_X}f \\ \partial_x f \end{pmatrix}=:\rho_Y^{-1} h.
\]
Note that $h\neq 0$ near $\overline{\mathcal{C}_\varphi^\psi}$, since $\varphi$ is a phase function.
As in the proof of Lemma \ref{lem:horror}, the evaluation at $(\rho_X,x,0,y)$ then gives
\begin{align}
\label{eq:matma}
\begin{pmatrix}
\partial_{\rho_Y} \iota {}^\scat \dd_X \varphi & \partial_y \iota {}^\scat \dd_X \varphi\\
\partial_{\rho_Y} \partial_y f & \partial_y \partial_y f
\end{pmatrix}= \begin{pmatrix}
-\frac{h}{|h|^2}+\partial_{\rho_Y}\frac{h}{|h|} & \partial_{y} \frac{h}{|h|} \\
\partial_{\rho_Y} \partial_y f & \partial_y \partial_y f
\end{pmatrix}.
\end{align}
Since all derivatives of $\frac{h}{|h|}$ are orthogonal to $\frac{h}{|h|}$ and $h\neq 0$, the rank of the matrix \eqref{eq:matma} equals the one of
\begin{align}
\label{eq:matmat}
\begin{pmatrix}
-\frac{h}{|h|^2} & \partial_y \frac{h}{|h|}\\
0 & \partial_y \partial_y f
\end{pmatrix}.
\end{align}
In fact, in \eqref{eq:matma}, as well as in \eqref{eq:matmat}, the first column is linearly independent of the others.
Now we write
$$\partial_{y_j} \frac{h}{|h|}=\frac{1}{|h|}\partial_{y_j} h- \underbrace{\frac{(h\cdot\partial_{y_j} h)}{|h|^3} h}_{\text{collinear to }h},$$
and remove the collinear summands, which again does not change the rank of the matrix \eqref{eq:matmat}. Therefore, the rank of \eqref{eq:matma} is the same as that of
\begin{align}
\label{eq:matmat2}
\begin{pmatrix}
-\frac{h}{|h|^2} & \frac{1}{|h|}\partial_y h\\
0 & \partial_y \partial_y f
\end{pmatrix}.
\end{align}
Multiplying the first $d$ rows and the first column of \eqref{eq:matmat2} by the non-vanishing factor $|h|$ again leaves the rank unchanged, and we can look at
\begin{align}
\label{eq:matmat3}
\begin{pmatrix}
-h & \partial_y h\\
0 & \partial_y \partial_y f
\end{pmatrix}=\begin{pmatrix}
f-\rho_X\partial_{\rho_X}f & -\partial_yf + \rho_X\partial_y\partial_{\rho_X} f\\
-\partial_x f & \partial_y\partial_x f\\
0 & \partial_y \partial_y f
\end{pmatrix}.
\end{align}
On $\mathcal{C}_\varphi$ at $\rho_Y=0$ this equals
\begin{align}
\label{eq:matmat4}
\begin{pmatrix}
-\rho_X\partial_{\rho_X}f & \rho_X\partial_y\partial_{\rho_X} f\\
-\partial_x f & \partial_y\partial_x f\\
0 & \partial_y \partial_y f
\end{pmatrix}.
\end{align}
Finally, we observe that the dimension of the null space of \eqref{eq:matmat4} is,
by cleanness of $\varphi$ (in particular by \eqref{eq:cleanbdry} applied to $\mathcal{C}_\varphi^\psi$ or $\mathcal{C}_\varphi^{\psi e}$), the same as the one of
\begin{align}
\label{eq:cleandiff}
\begin{pmatrix}
- \partial_{\rho_X}f & \partial_y \partial_{\rho_X} f\\
- \partial_x f & \partial_y\partial_x f\\
0 & \partial_y\partial_{\rho_Y} f\\
0 & \partial_y\partial_y f
\end{pmatrix} = T{}^\scat \dd_Y \varphi|_{\mathcal{C}_\varphi^\psi},
\end{align}
namely $e$.
Therefore, the rank of ${\lambda_\varphi}$ equals $d=(d+e)-e$ near $\mathcal{C}_\varphi$, which concludes the proof.
\end{proof}
\begin{lem}
\label{lem:lpfibration}
The map ${\lambda_\varphi}: C_\varphi\rightarrow L_\varphi$ is a local fibration and the fiber is everywhere a smooth manifold without boundary.
\end{lem}
\begin{proof}
Since ${\lambda_\varphi}$ is locally an ${\mathrm{sc}}$-map, $T{\lambda_\varphi}$ maps the set of vectors at the boundary that are inwards pointing into itself, see Remark \ref{rem:inward}. Therefore ${\lambda_\varphi}$ is a so-called ``tame'' submersion in the sense of \cite[Lemma 1.3]{Nistor}. As such, it is a local fibration and the fiber is a manifold without boundary.
\end{proof}
\subsection{Symplectic properties of the associated Lagrangian}
\label{sec:symp}
As in the classical theory, $L_\varphi$ is an immersed Lagrangian submanifold, and its boundary faces $\Lambda^\bullet$ are immersed Legendrian submanifolds. Let us briefly recall these concepts. For more information, the reader is referred to \cite{CoSc2,MZ,HV}.
As a cotangent space, $T^*X^o$ carries a natural symplectic $2$-form $\omega$ induced by the canonical $1$-form $\alpha\in{\mathscr{C}^\infty}(T^*X^o,T^*(T^*X^o))$ as $\omega=\dd\alpha$. This $1$-form can be recovered from $\omega$ by setting $\alpha=\varrho^\psi\lrcorner\,\,\omega$ for the radial vector field $\varrho^\psi$ on $T^*X^o$, which is given by $\varrho^\psi=\xi\cdot\partial_{\xi}$ in canonical coordinates. \\
We now write $(\mathbf{x},\boldsymbol{\xi})=(\rho_X,x,\rho_\Xi,\xi)$ for the coordinates in the mwc ${}^\scat \,\overline{T}^*X$ which are obtained from the rescaled canonical coordinates under radial compactification in the fiber, cf. \cite{MZ}. Then $\varrho^\psi$ corresponds to $\rho_\Xi\partial_{\rho_\Xi}$ on $(\overline{T}^*X)^o$. For the purpose of scattering geometry, it is natural to rescale further and define, on $T^*({}^\scat \,\overline{T}^*X)^o$,
$$\alpha^\psi:=\rho_\Xi^2\partial_{\rho_\Xi}\lrcorner\,\omega.$$
There exists another form of interest, namely
\begin{align*}
\alpha^e:=\rho_X^2\partial_{\rho_X}\lrcorner\,\omega.
\end{align*}
We now extend these forms to $T^*({}^\scat \,\overline{T}^*X)$ and define the boundary restrictions of $\alpha^\bullet$.
Observe that, while their explicit form depends on the choice of bdfs, the induced contact structure at the boundary does not; see the following Lemma \ref{lem:alphaext}.
\begin{lem}\label{lem:alphaext}
The forms $\alpha^\bullet$ extend to $1$-forms on $\mathcal{W}^\bullet$, denoted by the same letter. The induced contact structures do not depend on the choice of bdfs.
\end{lem}
\begin{ex}
On $T^*{\RR^d}\cong {\RR^d}\times{\RR^d}$, with canonical coordinates $(x,\xi)$, the vector fields $\varrho^\psi$ and $\varrho^e$ are given by $\varrho^\psi=\xi\cdot\partial_\xi$ and $\varrho^e=x\cdot\partial_x$. The symplectic $2$-form is $\sum_j\dd\xi_j\wedge\dd x_j$ and hence
$$\varrho^\psi\lrcorner\,\omega=\xi\cdot \dd x\quad \text{and}\quad\varrho^e\lrcorner\,\omega=-x\cdot \dd \xi.$$
Obviously, the coefficients of these forms diverge as $[\xi]\rightarrow \infty$ and $[x]\rightarrow \infty$. The rescaled forms ``at the boundary at infinity'' then correspond to
$$\alpha^\psi=\frac{\xi}{[\xi]}\cdot \dd x\quad \text{and}\quad \alpha^e=-\frac{x}{[x]}\cdot \dd \xi.$$
After a choice of coordinates near the respective boundaries, this is the general local geometric situation.
\end{ex}
We are now in the position to formulate the symplectic properties of $\Lambda_\varphi$, cf. \cite{CoSc}. Recall that a submanifold $N$ of a symplectic manifold $(M,\omega)$ is Lagrangian if $\omega|_{TN}=0$ and a submanifold $N$ of a contact manifold $(M,\alpha)$ is Legendrian if $\alpha|_{TN}=0$.
\begin{prop}
The immersed manifolds defined in Theorem \ref{thm:lpsubm} satisfy:
\begin{itemize}
\item[1.)] $L_\varphi^o$ is an immersed Lagrangian submanifold with respect to the $2$-form $\omega$ on $({}^\scat \,\overline{T}^* X)^o\cong T^*X$;
\item[2.)] ${\Lambda_\varphi^\psi}$ is Legendrian with respect to the canonical $1$-form $\alpha^\psi$ on $\Wt^\psi\cong S^*(X^o)$;
\item[3.)] ${\Lambda_\varphi^e}$ is Legendrian with respect to the $1$-form $\alpha^e$ on $\Wt^e\cong T^*_{\partial X}X$.
\end{itemize}
\end{prop}
We take this as the definition of an ${\mathrm{sc}}$-Lagrangian, cf. \cite{CoSc2}.
\begin{defn}[${\mathrm{sc}}$-Lagrangians]\label{def:scLagr}
Let $\Lambda:=\overline{\Lambda^\psi}\cup\overline{\Lambda^e}\subset \mathcal{W}$. $\Lambda$ is called an ${\mathrm{sc}}$-Lagrangian if:
\begin{itemize}
\item[1.)] $\Lambda^\psi=\Lambda\cap\Wt^\psi$ is Legendrian with respect to the canonical $1$-form $\alpha^\psi$ on $\Wt^\psi = {}^\scat \,S^*_{X^o}X$;
\item[2.)] $\Lambda^e=\Lambda\cap\Wt^e$ is Legendrian with respect to the $1$-form $\alpha^e$ on $\Wt^e = {}^\scat \,T^*_{\partial X}X$;
\item[3.)] $\overline{\Lambda^\psi}$ has a boundary if and only if $\overline{\Lambda^e}$ has a boundary, and, in this case,
$$\Lambda^{\psi e}:=\partial \overline{\Lambda^\psi}=\partial \overline{\Lambda^e}=\overline{\Lambda^\psi}\cap\partial \overline{\Lambda^e},$$
with clean intersection.
\end{itemize}
\end{defn}
Figure \ref{fig:Lpintersect}, which is taken from \cite{CoSc2}, summarizes, schematically, the relative positions of ${\Lambda_\varphi^e}$ and ${\Lambda_\varphi^\psi}$ near the corner in $\mathcal{W}$.
\begin{figure}[ht!]
\begin{center}
\begin{tikzpicture}[scale=1.2]
\node (A) at (2.5,3.5) {$\Wt^\psi$};
\node (B) at (6,0.5) {$\Wt^e$};
\node (C) at (6.7,2.7) {$\Wt^{\psi e}$};
\node (D) at (3.2,1) {$\Lambda^{\psi e}$};
\draw[opacity=0.5,->] (A) -- (3.2,2.8);
\draw[opacity=0.5,->] (B) -- (5.5,1.5);
\draw[opacity=0.5,->] (C) -- (5.6,2.8);
\draw[opacity=0.5,->] (D) -- (3.95,1.95);
\draw[opacity=0.75] (3,1.5) -- (6,3);
\draw[dotted,opacity=0.5] (1.5,2.25) -- (2.25,1.875);
\draw[dotted,opacity=0.75] (2.25,1.875) -- (3,1.5) -- (3,0);
\draw[dotted,opacity=0.75] (4.5,3.75) -- (5.25,3.375);
\draw[dotted,opacity=0.75] (5.25,3.375) -- (6,3) -- (6,1.5);
\fill[opacity=0.04] (3,0) -- (3,1.5) -- (6,3) -- (6,1.5);
\fill[opacity=0.01] (1.5,2.25) -- (3,1.5) -- (6,3) -- (4.5,3.75);
\draw[->,ultra thick] (4,2) -- (5,2.5) node [below] {$\quad x,\xi$};
\draw[->,ultra thick] (4,2) -- (3,2.5) node [left] {$\rho_X$};
\draw[->,ultra thick] (4,2) -- (4,1) node [left, below] {$\rho_\Xi$};
\draw[thick] (4,2) .. controls (4,1.1) and (4.8,1.3) .. (4.7,0.85) node [right] {$\Lambda^e$};%
\draw[thick] (4,2) .. controls (3,2.5) and (4,3) .. (3.5,3.25) node [right] {$\ \Lambda^\psi$};%
\end{tikzpicture}
\caption{Intersection of $\Lambda^\psi\subset\Wt^\psi$ and $\Lambda^e\subset\Wt^e$ at the corner $\Wt^{\psi e}$}
\label{fig:Lpintersect}
\end{center}
\end{figure}
We may take the analysis one step further in order to stress the Legendrian character of the boundary components near the corner and to reveal the symplectic properties of $\Lambda^{\psi e}$ by blow-up. For the sake of brevity here, we move this analysis to the appendix, Section \ref{sec:blowup}.
We may sum up our previous analysis by stating the next Theorem \ref{thm:imphlagr}.
\begin{thm}\label{thm:imphlagr}
For a clean phase function $\varphi$, the image $\Lambda_\varphi$ under ${\lambda_\varphi}$ of $\mathcal{C}_\varphi$ is an immersed ${\mathrm{sc}}$-Lagrangian.
\end{thm}
\begin{defn}
We say that an ${\mathrm{sc}}$-Lagrangian $\Lambda$ is locally parametrized by a phase function $\varphi$ if, over the domain of definition of $\varphi$, we have $\Lambda=\Lambda_\varphi$.
\end{defn}
In particular, if $\Lambda$ is locally parametrized by a phase function, then it is admissible. Conversely, we have the following result, cf. \cite{CoSc2}.
\begin{prop}
\label{prop:locpar}
If $\Lambda$ is an ${\mathrm{sc}}$-Lagrangian, then it is locally parametrizable by a clean phase function $\varphi$, that is $\Lambda^\bullet\cap U^\bullet=\Lambda_\varphi^\bullet\cap U^\bullet$ for some open $U\subset \mathcal{W}^\bullet$. In particular, $\Lambda$ arises as the boundary of some Lagrangian submanifold $L_\varphi$ of ${}^\scat \,\overline{T}^* X$.
\end{prop}
\begin{rem}
The proof of Proposition \ref{prop:locpar} in \cite{CoSc2} is based on concrete parametrizations in ${\RR^d}\times{\RR^d}$. It applies here nonetheless, since any $d$-dimensional manifold with boundary $X$ can be locally modelled by ${\BB^d}$. Hence, ${}^\scat \,\overline{T}^*X$ can be locally modelled by ${\BB^d}\times{\BB^d}$ and thus, under inverse radial compactification (applied to both factors), by ${\RR^d}\times{\RR^d}$.
Note that in \cite{CoSc2} we imposed additional conditions, namely
\begin{equation}
\label{eq:nonzerosec}
\Lambda^e\cap (\partial X\times\iota(\{0\}))=\emptyset,
\end{equation}
and that $x\cdot \xi=0$ in local canonical coordinates on $\Lambda^{\psi e}$, since this is always true for a parametrized Lagrangian (see \eqref{eq:conormbi} below). However, condition \eqref{eq:nonzerosec} is equivalent to the stronger assumption that ${}^\scat \dd\varphi\neq 0$ also on $\B^e$, which we do not impose here. The assumption $x\cdot \xi=0$, in turn, is superfluous, since it already follows from the symplectic assumptions on $\Lambda^{\psi e}$, as we now show.
Assume that both $\xi\cdot \dd x\equiv 0$ and $-x\cdot \dd \xi\equiv 0$ on a bi-conic submanifold $L$ of ${\mathbb{R}}^d\times{\mathbb{R}}^d$. Then $\dd(x\cdot\xi)=0$ on $L$, so $x\cdot\xi$ is locally constant there. Since $L$ is bi-conic, $x\cdot\xi$ is multiplied by $st$ under the scalings $(x,\xi)\mapsto(sx,t\xi)$, which preserve $L$, so this constant must vanish. This shows that $x\cdot \xi=0$ is indeed automatically fulfilled.
This corresponds to the fact that, for the bi-homogeneous principal symbol of a phase function $\varphi^{\psi e}$, we have, when
$\nabla_\theta\varphi(x,\theta)=0$, that (cf. \cite{CoSc2})
\begin{equation}
\label{eq:conormbi}
\langle x,\nabla_x\varphi(x,\theta)\rangle=\varphi(x,\theta)=\langle \theta,\nabla_\theta\varphi(x,\theta)\rangle=0,
\end{equation}
where we have used Euler's identity for homogeneous functions twice.
\end{rem}
\subsection{Scattering conormal bundles}
\label{sec:conorm}
In this section, we consider the simple example of a scattering conormal bundle. Consider a $k$-dimensional submanifold $X'\subset X$ which intersects the boundary of $X$ cleanly or not at all (called $p$-submanifold in \cite{Melrosemwc}). In the following, we assume an intersection with the boundary.
Then there exist local coordinates $(\rho_X,x^\prime,x'')$ such that $X'$ is locally given by
$$X'=\{(\rho_X,x^\prime,x'')\mid \rho_X\geq 0, x^\prime=0\in{\mathbb{R}}^{d-k}, x''\in{\mathbb{R}}^{k-1}\}.$$
We can now consider the compactified scattering conormal ${}^\scat \,\overline{T}^*X'\subset{}^\scat \,\overline{T}^*_{X'}X$. The boundary faces of ${}^\scat \,\overline{T}^*X'$ constitute an ${\mathrm{sc}}$-Lagrangian.
In fact, write $X=\iota({\RR^d})$, so that $X'$ corresponds to a subspace of ${\RR^d}$ of the form
$$X'=\{(x^\prime,x'')\mid x'=0\in{\mathbb{R}}^{d-k}, x''\in{\mathbb{R}}^{k}\}.$$
We can then introduce $Y=\iota({\mathbb{R}}^{d-k})$ and $\phi(x,y)=x'\cdot y$ on ${\RR^d}\times{\mathbb{R}}^{d-k}$, which is an ${\mathrm{SG}}$-phase function, taking into account \eqref{eq:SGphaseineq}. The true phase function on $X\times Y$ is then $(\iota^{-1}\times\iota^{-1})^*\phi$. One then computes $C_\varphi=X'\times Y$ and $\Lambda_\varphi={}^\scat \,\overline{T}^*X'$.
Indeed, in the Euclidean setting, $\Lambda_\varphi$ corresponds to the three conic manifolds
\begin{align*}
{\Lambda_\varphi^e}&=\{(0,x'',\xi^\prime,0)\}\subset ({\RRd\setminus\{0\}})\times{\RR^d}\\
{\Lambda_\varphi^{\psi e}}&=\{(0,x'',\xi^\prime,0)\}\subset ({\RRd\setminus\{0\}})\times({\RRd\setminus\{0\}})\\
{\Lambda_\varphi^\psi}&=\{(0,x'',\xi^\prime,0)\}\subset {\RR^d}\times({\RRd\setminus\{0\}})
\end{align*}
which have the claimed symplectic properties. Compactification of the ${\RR^d}$-components and projection of the conic $({\RRd\setminus\{0\}})$-component to the corresponding sphere then yields the compactified notions in ${}^\scat \,\overline{T}^*X$.
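These identifications can be verified by a direct computation in the Euclidean model: for $\phi(x,y)=x^\prime\cdot y$,
\[
\nabla_y\phi(x,y)=x^\prime\quad\text{and}\quad \nabla_x\phi(x,y)=(y,0),
\]
so the stationary set is $\{x^\prime=0\}\times{\mathbb{R}}^{d-k}=X'\times Y$, and the associated Lagrangian consists of the points $\big((0,x''),(y,0)\big)$, recovering the three conic manifolds above with $\xi^\prime=y$.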
\section{Phase functions which parametrize the same Lagrangian}
\label{sec:exchphase}
In this section, we adapt the classical techniques for exchanging the phase function locally parametrizing a given Lagrangian, see \cite[Chapter 8.1]{Treves}, to the setting with boundary. Since $\Lambda_\varphi$, not $L_\varphi$, is our true object of interest, we say that two phase functions $\varphi_i$, $i=1,2$, locally parametrize the same Lagrangian at $p_0\in\mathcal{W}$ if $\Lambda_{\varphi_1}=\Lambda_{\varphi_2}$ in a small (relatively) open neighbourhood of $p_0$ in the respective boundary faces.
Our first observation is the following:
\begin{lem}
\label{lem:phaseplussmooth}
If $\varphi\in\rho_X^{-1}\rho_{{\mathbb{B}}^s}^{-1}{\mathscr{C}^\infty}(X \times {\mathbb{B}}^s)$ is a local phase function and $r\in {\mathscr{C}^\infty}(X \times {\mathbb{B}}^s)$, then $\varphi+r$ is still a local phase function and it parametrizes the same Lagrangian as $\varphi$.
\end{lem}
\begin{proof}
Since $r\in{\mathscr{C}^\infty}(X \times {\mathbb{B}}^s)$, ${}^\scat \dd r=0$ when restricted to the boundary. Therefore, $\varphi+r$ is still a local phase function. For the same reason, $\mathcal{C}_{\varphi}=\mathcal{C}_{\varphi+r}$. Finally, we have
$$\lambda_{\varphi+r}(\mathbf{x},\mathbf{y})=(\mathbf{x},\iota({}^\scat \dd_X(\varphi + r))).$$
Computing ${}^\scat \dd_X(\varphi + r)$ in coordinates, see \eqref{eq:scdxexpl},
$$
{}^\scat \dd_X(\varphi+r)=\rho_Y^{-1} \left((-f+\rho_X\partial_{\rho_X}f+\rho_Y\rho_X^2\partial_{\rho_X}r) \frac{\dd \rho_X}{\rho_X^2} +
\sum_{j=1}^{d-1}(\partial_{x_j}f+\rho_Y\rho_X\partial_{x_j}r)\frac{\dd x_j}{\rho_X} \right),
$$
we observe that, at $\rho_X=0$, the contribution from $r$ vanishes. The same is true in the limit $\rho_Y\rightarrow 0$ under application of $\iota$, see also Lemma \ref{lem:horror}.
\end{proof}
\subsection{Increasing fiber variables}
Given a clean phase function $\varphi \in \rho_X^{-1}\rho_{{\mathbb{B}}^s}^{-1}C^\infty(X \times {\mathbb{B}}^s)$ with excess $e$, define $\widetilde{\psi} \in \rho_X^{-1} \rho_{{\mathbb{B}}^s}^{-1} C^\infty(X \times {\mathbb{B}}^s \times (-\varepsilon, \varepsilon))$ as follows:
\[\widetilde{\psi}(\mathbf{x},\mathbf{y}, \tilde y) = \varphi(\mathbf{x}, \mathbf{y}) + \frac{\tilde y^2}{\rho_X \rho_{{\mathbb{B}}^s}}.\]
We see that ${}^\scat \dd\widetilde{\psi} \neq 0$ when ${}^\scat \dd\varphi\neq 0$, and ${}^\scat \dd_{{\mathbb{B}}^s\times(-\varepsilon,\varepsilon)} \widetilde{\psi} = 0$ if and only if $\tilde y = 0$ and ${}^\scat \dd_{{\mathbb{B}}^s} \varphi = 0$.
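Writing $\rho=\rho_{{\mathbb{B}}^s}$, the stationarity in the new variable can be checked directly: $\tilde y$ is an interior variable of the fiber, so (consistently with the scattering structure used above) the corresponding rescaled derivative may be taken as $\rho_X\rho\,\partial_{\tilde y}$, and
\[
\rho_X\rho\,\partial_{\tilde y}\widetilde{\psi}=\rho_X\rho\cdot\frac{2\tilde y}{\rho_X\rho}=2\tilde y,
\]
which vanishes if and only if $\tilde y=0$.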
Thus,
$$C_{\widetilde{\psi}} = \left\{(\mathbf{x},\mathbf{y}, 0)\mid (\mathbf{x},\mathbf{y}) \in C_\varphi\right\},$$
which implies that the excess is not changed, and $\Lambda_{\widetilde{\psi}} = \Lambda_{\varphi}$. Summing up, $\widetilde{\psi}$ is a local clean phase function in $s+1$ fiber variables with the same excess $e$ as $\varphi$, (locally) parametrizing the same Lagrangian as $\varphi$.
This construction may once again be moved to balls, by using Example \ref{ex:embdball} and setting $\psi = \Psi^*\widetilde{\psi}$. Then $\psi \in \rho_X^{-1}\rho_{{\mathbb{B}}^{s+1}}^{-1}C^\infty(X \times U)$.
Using the fact that ${}^\scat \dd \psi = \Psi^*({}^\scat \dd \widetilde{\psi})$, we see that $\psi$ is a clean phase function parametrizing $\Lambda_{\varphi}$ with excess $e$.
Again, $X\times{\mathbb{B}}^s$ can be exchanged by any relatively open subset, hence starting with local phase functions.
\subsection{Reduction of the fiber variables}\label{subs:fbred}
Starting again from a clean phase function $\varphi \in \rho_X^{-1}\rho_{{\mathbb{B}}^s}^{-1}C^\infty(X \times {\mathbb{B}}^s)$ with excess $e$, we now construct a (local) phase function $\psi$ in the smallest possible number of phase variables (without changing the excess) which (locally) parametrizes the same Lagrangian.
The argument is similar to the classical one, but extra attention needs to be paid to what happens near points with $\rho_Y=0$; namely, we never seek to eliminate $\rho_Y$ as a parameter.
\begin{rem}
In the classical theory, meaning for homogeneous phase functions, it is possible to reduce the number of fiber variables under the assumption that the matrix $\partial^2_{\theta\theta}\varphi(x,\theta)$ has rank $r>0$ on $C_\varphi$.
However, since a classical phase function $\varphi$ is homogeneous in $\theta$, it holds that $\theta\cdot \nabla_\theta\varphi=\varphi$ and hence the second radial derivative is automatically zero on $C_\varphi$. Furthermore, the radial variable can always be chosen to parametrize $\Lambda_\varphi$.
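In formulas: since $\nabla_\theta\varphi$ is positively homogeneous of degree $0$ in $\theta$, Euler's identity applied to each of its components yields
\[
\partial^2_{\theta\theta}\varphi(x,\theta)\,\theta=0,
\]
so the radial direction always lies in the kernel of $\partial^2_{\theta\theta}\varphi$, and the rank $r$ is at most the number of fiber variables minus one.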
\end{rem}
We proceed as in the proof of Theorem \ref{thm:lpsubm}. We first recall that, for $p_0\in C_\varphi$, writing
$\varphi=\rho_Y^{-1}\rho_X^{-1}f$ with $f\in C^\infty(X \times {\mathbb{B}}^s)$,
we have there
\begin{equation}\label{eq:scDyphi}
0={}^\scat \dd_Y\varphi=\left(-f+\rho_Y\partial_{\rho_Y}f,\partial_{y_k}f \right).
\end{equation}
We then identify $T_Y{}^\scat \dd_Y\varphi$ in coordinates with the matrix
\begin{equation}
\label{eq:DyDyphi}
J_Y\varphi = \begin{pmatrix}
\rho_Y\partial_{\rho_Y}^2f & -\partial_{y_j}f+\rho_Y \partial_{y_j}\partial_{\rho_Y} f \\
\partial_{\rho_Y}\partial_{y_k} f & \partial_{y_j}\partial_{y_k} f
\end{pmatrix}.
\end{equation}
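For instance, the $(1,1)$-entry results from differentiating the first component of \eqref{eq:scDyphi}:
\[
\partial_{\rho_Y}\big(-f+\rho_Y\partial_{\rho_Y}f\big)=-\partial_{\rho_Y}f+\partial_{\rho_Y}f+\rho_Y\partial_{\rho_Y}^2f=\rho_Y\partial_{\rho_Y}^2f.
\]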
We see, using \eqref{eq:scDyphi}, that on $\mathcal{C}_\varphi^\psi \subset \{\rho_Y = 0\}$ this becomes
\begin{equation}
\label{eq:Tyscdphipsi}
J_Y\varphi\big|_{\mathcal{C}_\varphi^\psi}= \begin{pmatrix}
0 & 0 \\
\partial_{\rho_Y}\partial_{y_k} f & \partial_{y_j}\partial_{y_k} f
\end{pmatrix}.
\end{equation}
Therefore, the rank of this matrix is at most $s-1$. Indeed, we observe that, by \eqref{eq:cleanbdry}, at $\rho_Y=0$ we have $\dd\rho_Y\neq 0$ on $TC_\varphi^\psi$ and hence we can always choose $\rho_Y$ as a parameter to locally describe $C^\psi_\varphi$.
\begin{rem}
By the same argument, $\rho_X$ can be chosen
as a parameter close to $\B^e$, while, close to $\B^{\psi e}$,
both $\rho_X$ and $\rho_Y$ can be chosen as parameters to represent
$C_\varphi$.
\end{rem}
We now seek to reduce the remaining set of variables under the assumption that
\begin{equation}
\label{eq:DyDyass0}
\text{The matrix }\big(\partial_{y_j}\partial_{y_k} \rho_X\rho_Y\varphi\big)_{jk} \text{ has rank }r>0\text{ at }p_0\in\mathcal{C}_\varphi^\psi\cup\mathcal{C}_\varphi^{\psi e}.
\end{equation}
Since at points where $\rho_Y\neq 0$ the variable $\rho_Y$ behaves like all other variables, the same restriction does not hold near a point $p\in \mathcal{C}_\varphi^e$. Here, we simply assume that
\begin{equation}
\label{eq:DyDyass1}
\text{The matrix } T_Y{}^\scat \dd_Y\varphi \text{ has rank }r>0\text{ at }p_0\in\mathcal{C}_\varphi^e.
\end{equation}
Since, at points of $\mathcal{C}_\varphi$ and up to multiplication of one row by $\rho_Y>0$, \eqref{eq:DyDyphi} is the Hessian of $f$ (with respect to $\mathbf{y}$), this is equivalent to
$\mathrm{rk}(H_Y f)=r>0$.
The two conditions may be summarized into one. Namely, consider the scattering Hessian (with respect to the $\mathbf{y}$-variables) of $\varphi$
\begin{equation}
\begin{aligned}
{}^\scat H_Y\varphi&=\begin{pmatrix}
\rho_Y^2\rho_X\partial_{\rho_Y}\rho_Y^2\rho_X\partial_{\rho_Y}\varphi & \rho_Y\rho_X\partial_{y_j}\rho_Y^2\rho_X\partial_{\rho_Y}\varphi \\
\rho_Y^2\rho_X\partial_{\rho_Y}\rho_Y\rho_X\partial_{y_k}\varphi & \rho_Y\rho_X\partial_{y_j}\rho_Y\rho_X\partial_{y_k}\varphi
\end{pmatrix}
\\
&= \rho_Y\rho_X\begin{pmatrix}
\rho_Y^2\partial_{\rho_Y}^2f & -\partial_{y_j}f+\rho_Y \partial_{y_j}\partial_{\rho_Y} f \\
\rho_Y\partial_{\rho_Y}\partial_{y_k} f & \partial_{y_j}\partial_{y_k} f
\end{pmatrix}.
\end{aligned}
\end{equation}
Then $\rho_Y^{-1}\rho_X^{-1}\,{}^\scat H_Y\varphi$ becomes, at a point in $\mathcal{C}_\varphi$:
\begin{align*}
\rho_Y^{-1}\rho_X^{-1}{}^\scat H_Y\varphi&=\begin{pmatrix}
0 & 0 \\
0 & \partial_{y_j}\partial_{y_k} f
\end{pmatrix},\quad\text{ if }p_0\in\mathcal{C}_\varphi^\psi\cup\mathcal{C}_\varphi^{\psi e}; \\
\rho_Y^{-1}\rho_X^{-1}\,^{\mathrm{sc}\hspace{-2pt}} H_Y\varphi&=\begin{pmatrix}
\rho_Y^2\partial_{\rho_Y}^2f & \rho_Y \partial_{y_j}\partial_{\rho_Y} f \\
\rho_Y\partial_{\rho_Y}\partial_{y_k} f & \partial_{y_j}\partial_{y_k} f
\end{pmatrix},\quad\text{ if }p_0\in\mathcal{C}_\varphi^e.
\end{align*}
Notice that we can factorize these matrices as
\begin{equation}
\label{eq:scatHessfactor}
\begin{pmatrix} \rho_Y & 0 \\
0 & \mathbbm{1} \end{pmatrix}
\begin{pmatrix}
\partial_{\rho_Y}^2f & \partial_{y_j}\partial_{\rho_Y} f \\
\partial_{\rho_Y}\partial_{y_k} f & \partial_{y_j}\partial_{y_k} f
\end{pmatrix}
\begin{pmatrix} \rho_Y & 0 \\
0 & \mathbbm{1} \end{pmatrix},
\end{equation}
the rank of which therefore is, for $\rho_Y\neq 0$, that of the standard Hessian of $f$, $H_Y f$. Therefore, our assumption may be expressed as:
\begin{equation}
\label{eq:DyDyass2}
\text{The matrix } \rho_Y^{-1}\rho_X^{-1}\,^{\mathrm{sc}\hspace{-2pt}} H_Y\varphi \text{ has rank }r>0\text{ at }p_0\in\mathcal{C}_\varphi.
\end{equation}
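Indeed, multiplying out \eqref{eq:scatHessfactor} gives
\[
\begin{pmatrix} \rho_Y & 0 \\ 0 & \mathbbm{1} \end{pmatrix}
\begin{pmatrix}
\partial_{\rho_Y}^2f & \partial_{y_j}\partial_{\rho_Y} f \\
\partial_{\rho_Y}\partial_{y_k} f & \partial_{y_j}\partial_{y_k} f
\end{pmatrix}
\begin{pmatrix} \rho_Y & 0 \\ 0 & \mathbbm{1} \end{pmatrix}
=\begin{pmatrix}
\rho_Y^2\partial_{\rho_Y}^2f & \rho_Y\partial_{y_j}\partial_{\rho_Y} f \\
\rho_Y\partial_{\rho_Y}\partial_{y_k} f & \partial_{y_j}\partial_{y_k} f
\end{pmatrix},
\]
which is precisely the expression of $\rho_Y^{-1}\rho_X^{-1}\,^{\mathrm{sc}\hspace{-2pt}} H_Y\varphi$ at $p_0\in\mathcal{C}_\varphi^e$ displayed above.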
We may now proceed as in the standard theory and introduce a splitting of variables $\mathbf{y}=(\mathbf{y}^\prime,\mathbf{y}^{\prime \prime})$
such that $(\partial_{\mathbf{y}^{\prime\prime}}\partial_{\mathbf{y}^{\prime\prime}}f)_{jk}$ is an invertible $r\times r$ matrix. We can then apply the implicit function theorem to
$$0={}^\scat \dd_Y\varphi=\left(-f+\rho_Y\partial_{\rho_Y}f,\partial_{y_k}f \right)$$
at $p_0$. We obtain a map from an open neighbourhood of $p_0$,
$$k:(\mathbf{x},\mathbf{y}^{\prime})\mapsto \big(\mathbf{x},\mathbf{y}^\prime,\mathbf{y}^{\prime\prime}(\mathbf{x},\mathbf{y}^\prime)\big),$$
such that $C_\varphi$ and the range of $k$ locally coincide. Note that $k$ is a scattering map, since $\rho_Y$ is always one of the $\mathbf{y}^\prime$ near the $\psi$-face.
Then $\varphi_{\red}=\varphi\circ k$ is a clean local phase function in $s-r$ fiber variables with excess $e$, and $k$ provides a local isomorphism $C_{\varphi_{\red}}\rightarrow C_\varphi$. Furthermore, at stationary points $p_0$ and $k(p_0)$, we have that $\iota({}^\scat \dd_X \varphi_{\red})=\iota({}^\scat \dd_X \varphi)$, since ${}^\scat \dd_Y\varphi=0$ there.
Hence, $\varphi_{\red}$ locally parametrizes the same Lagrangian as $\varphi$.
\begin{rem}
Note that, after applying a change of coordinates in the $\mathbf{y}$ variables,
$\varphi_{\red}$ may be assumed to be defined on ${\BB^d}\times{\mathbb{B}}^{s-r}$, see also Lemma \ref{lem:CLFstarff} below.
\end{rem}
Summing up, we can formulate the next Proposition \ref{prop:fiberred}.
\begin{prop}
\label{prop:fiberred}
Let $\varphi\in\rho_Y^{-1}\rho_X^{-1}{\mathscr{C}^\infty}(X\times{\mathbb{B}}^s)$ be a local clean phase function of excess $e$. Assume
$$\rho_Y^{-1}\rho_X^{-1}\,^{\mathrm{sc}\hspace{-2pt}} H_Y\varphi\text{ has rank }r>0\text{ at a stationary boundary point }p_0\in\mathcal{C}_\varphi.$$
We may then define a local clean phase function $\varphi_{\red}\in\rho_Y^{-1}\rho_X^{-1}{\mathscr{C}^\infty}(X\times{\mathbb{B}}^{s-r})$ of excess $e$ parametrizing the same Lagrangian.
\end{prop}
We mention that, locally, the minimal number of fiber variables $y$ that a clean phase function of excess $e$ locally parametrizing $L_\varphi$ has to possess is $$s_{\mathrm{min}}=d+e-n,$$ where $n$ is the (local) number of independent $x$ variables on $L_\varphi$. This follows from a simple dimension argument: the dimension of $L_\varphi$ is $d$, that of $C_\varphi$ is $d+e$, and the one of the projection to $x$ of $C_\varphi$ coincides with that of $L_\varphi$. Note that, by cleanness of the intersection $\mathcal{C}_\varphi\cap\B^\psi$, near $\Lambda^\psi$ we have $s_{\mathrm{min}}>0.$
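As an illustration, consider the scattering conormal bundle of Section \ref{sec:conorm}: the phase function $\phi(x,y)=x^\prime\cdot y$ has $s=d-k$ fiber variables and is non-degenerate, so $e=0$, and locally $n=k$ of the $x$ variables (namely $x''$) parametrize $L_\phi$. Hence
\[
s_{\mathrm{min}}=d+e-n=d+0-k=d-k=s,
\]
so in this example the number of fiber variables cannot be reduced further.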
\subsection{Increasing the excess}
Given a (local) clean phase function $\varphi \in \rho_X^{-1}\rho_{{\mathbb{B}}^s}^{-1}C^\infty(X \times {\mathbb{B}}^s)$ with excess $e$, define $\psi:={\mathrm{pr}}_{X\times{\mathbb{B}}^s}^*\varphi$ on $X\times({\mathbb{B}}^s\times(-\varepsilon,\varepsilon))$, viewing ${\mathbb{B}}^s\times(-\varepsilon,\varepsilon)$ as an open subset of ${\mathbb{B}}^s\times{\mathbb{S}}^1$, which is a manifold with boundary whose boundary defining function may be chosen as ${\mathrm{pr}}_{{\mathbb{B}}^s}^*\rho_{{\mathbb{B}}^s}$. In particular we have, with the obvious identifications,
$${}^\scat \dd_{{\mathbb{B}}^s\times(-\varepsilon,\varepsilon)}\psi={\mathrm{pr}}_{X\times{\mathbb{B}}^s}^*\left({}^\scat \dd_{{\mathbb{B}}^s}\varphi\right).$$
Then $C_\psi=C_\varphi\times(-\varepsilon,\varepsilon)$ and hence $\dim(C_\psi^\bullet)=\dim(C_\varphi^\bullet)+1.$ Furthermore, $\lambda_\psi={\mathrm{pr}}_{X\times{\mathbb{B}}^s}^*{\lambda_\varphi}$ and $\Lambda_\varphi=\Lambda_\psi$. Summing up, $\psi$ is a local clean phase function in $s+1$ fiber variables with excess $e+1$, (locally) parametrizing the same Lagrangian as $\varphi$.
As before, we may choose to keep working on balls by invoking the construction from Example \ref{ex:embdball} and replacing $\psi$ with
$$\Psi^*\psi=\widetilde{\Psi}^*\varphi \in \rho_X^{-1}\rho_{{\mathbb{B}}^{s+1}}^{-1}{\mathscr{C}^\infty}(X \times U).$$
In this way, since $\Psi$ is a diffeomorphism, $\psi$ becomes a clean phase function with excess $e+1$ defined on a relatively open subset of $X\times{\mathbb{B}}^{s+1}$ and similarly we may raise the excess by any natural number.
\begin{ex}
The standard Fourier phase on ${\mathbb{R}}\times{\mathbb{R}}$, $\varphi(x,\xi)=x\cdot \xi$, cannot be seen as an ${\mathrm{SG}}$-phase on all of ${\mathbb{R}}\times{\mathbb{R}}^2$ by setting $\psi(x,\xi,\eta)=x\cdot\xi$. Indeed,
\begin{align}
\label{eq:SGphasextend}
\jap{x}^2|\nabla_x \psi|^2+\jap{(\xi,\eta)}^2|\nabla_{\xi,\eta}\psi|^2&=(1+x^2)\xi^2+(1+\xi^2+\eta^2)x^2\\
\notag &=\jap{x}^2\jap{\xi}^2+x^2\xi^2+x^2\eta^2-1.
\end{align}
For $\xi=0$ and $x=0$, this vanishes for every $\eta$, whereas it should be bounded from below by $c(1+|\eta|)^2$ as $\eta\rightarrow\infty$ if $\psi$ were an ${\mathrm{SG}}$-phase function, given \eqref{eq:SGphaseineq}.
Reviewing Example \ref{ex:embdball}, the ray $\xi=0$, $x=0$ and $\eta\neq 0$ corresponds precisely to the poles in Figure \ref{fig:fiberball} which were cut off. Indeed, \eqref{eq:SGphasextend} is bounded from below by a multiple of $\jap{x}^2\jap{(\xi,\eta)}^2$ in any neighbourhood where $\frac{|\xi|}{|\eta|}>c$, and hence $\psi$ is a local phase function on such sets.
\end{ex}
\subsection{Elimination of excess}
\label{sec:phaseexelim}
Assume now that $\varphi$ is a phase function on $X \times {\mathbb{B}}^s$ with excess $e$ and that at some point $p_0=(\rho_{X,0},x_0,\rho_{Y,0},y_0)\in \mathcal{C}_\varphi$ we have ${\lambda_\varphi}(p_0)=(\rho_{X,0},x_0,\rho_{\Xi,0},\xi_0)$. Then, by Lemma \ref{lem:lpfibration}, the preimage of $(\rho_{X,0},x_0,\rho_{\Xi,0},\xi_0)$ under ${\lambda_\varphi}$, meaning the fiber in $\mathcal{C}_\varphi$ through $p_0$, is an $e$-dimensional smooth submanifold. Locally, since ${\lambda_\varphi}$ is a submersion we may, by \cite[Prop. 5.1]{Joyce}, reduce to the case of a projection, that is, we may find a splitting $y=(y^\prime,y^{\prime\prime})$ near $p_0$ such that ${\lambda_\varphi}$ does not depend on $y^{\prime\prime}$. Then,
$$\tilde{\varphi}(\rho_{X},x,\rho_{Y},y^\prime):=\varphi(\rho_{X},x,\rho_{Y},y^\prime,y^{\prime\prime}_0)$$
defines a phase function without excess (i.e., a non-degenerate phase function) that parametrizes the same Lagrangian as $\varphi$. As usual, we may again reduce to the case of a ball and hence replace $\varphi$ by a phase function on an open subset of $X\times{\mathbb{B}}^{s-e}$.
\subsection{Equivalence of phase functions}
We will now discuss the changes of phase function under a change of coordinates and which phase functions can be considered equivalent. We first check how the stationary points of a phase function transform under changes by local diffeomorphisms.
\begin{lem}\label{lem:CLFstarff}
Let $X_1$, $Y_1$, $X_2$, $Y_2$ be mwbs, set $B_i=X_i\times Y_i$, $i\in\{1,2\}$, and let $\varphi \in \rho_{X_2}^{-1}\rho_{Y_2}^{-1}C^\infty(B_2)$ be a (local) phase function. Assume $g:X_1\rightarrow X_2$, $h:Y_1\rightarrow Y_2$ to be diffeomorphisms, and set $F=g\times h$. Then, $F^*\varphi\in \rho_{X_1}^{-1}\rho_{Y_1}^{-1}C^\infty(B_1)$ is a (local) phase function with the same excess as $\varphi$, and we have
\begin{align*}
C_{F^*\varphi}&=\big\{(\mathbf{x}_1,\mathbf{y}_1)\in B_1\,|\,F(\mathbf{x}_1,\mathbf{y}_1)
\in C_\varphi \big\},\quad
L_{F^*\varphi}=({}^\scat \,\overline{T}^*g)(L_\varphi).
\end{align*}
\end{lem}
\begin{rem}
%
This means that, where the boundary defining function $\rho_{\Xi_1}$ of ${}^\scat \,\overline{T}^* X_1$ does not vanish,
$L_{F^*\varphi}$ can be computed as
%
\[
L_{F^*\varphi}=
\big\{\big(\mathbf{x}_1, \,\iota({}^t (Jg)\, \iota^{-1}(\boldsymbol{\xi}_1))\big)\in
{}^\scat \,\overline{T}^*X_1\mid (g(\mathbf{x}_1), \boldsymbol{\xi}_1)\in L_\varphi\big\}.
\]
%
As $\rho_\Xi\rightarrow 0$, $\Lambda^\psi_{F^*\varphi}$ is obtained by taking interior limits, see also Lemma \ref{lem:horror}.
\end{rem}
\begin{proof}[Proof of Lemma \ref{lem:CLFstarff}]
%
%
%
The result for $C_\varphi$ follows immediately from
the first assertion in Lemma \ref{lem:scdpullback}.
%
The statement for $L_\varphi$ then follows by writing
%
\begin{equation}\label{eq:Lfpullback}
\lambda_{F^*\varphi}(\mathbf{x}_1,\mathbf{y}_1)=
({}^\scat \,\overline{T}^*g)(\lambda_\varphi(\mathbf{x}_2,\mathbf{y}_2))
\end{equation}
%
near a point
$(\mathbf{x}_1,\mathbf{y}_1) \in (C_{F^*\varphi})^o$ such that
$(\mathbf{x}_2,\mathbf{y}_2)=(g(\mathbf{x}_1),h(\mathbf{y}_1))$. Indeed, at
these stationary points,
${}^\scat \dd_X F^*\varphi=F^*({}^\scat \dd_X\varphi)$,
since there ${}^\scat \dd_Y\varphi=0$.
Since equality \eqref{eq:Lfpullback} holds in the interior,
the result at the boundary faces can be
obtained as interior limits (see also Lemma \ref{lem:lpsct}).
\end{proof}
%
\begin{rem}
The diffeomorphism $g\times h$ may be replaced by a single diffeomorphism $F:X_1\times Y_1\rightarrow X_2\times Y_2$ locally of product type near the boundary faces of $X_2\times Y_2$, i.e., a (local) diffeomorphism that is
a fibered-map at the boundary.
\end{rem}
We now define in which sense two phase functions may be considered equivalent.
\begin{defn}\label{def:phequiv}
Let $X$, $Y_1$, $Y_2$ be mwbs, $B_i=X\times Y_i$. Let
$\varphi_i\in\rho_{X}^{-1}\rho_{Y_i}^{-1}C^\infty(B_i)$.
We say that $\varphi_1$ and $\varphi_2$ are equivalent at a pair of boundary points
$(\mathbf{x}^0,\mathbf{y}_1^0)\in\mathcal{B}_1$ and $(\mathbf{x}^0,\mathbf{y}_2^0)\in\mathcal{B}_2$ if there exists a local diffeomorphism
$F:X\times Y_2\rightarrow X\times Y_1$ of the form $F=\mathrm{id}\times g$ with $g(\mathbf{x}^0,\mathbf{y}_2^0)=\mathbf{y}_1^0$ such that the following two conditions are met:
%
\begin{equation}
\label{eq:eqv1}
F^*\varphi_1-\varphi_2\text{ is smooth in a neighbourhood $U$ of }(\mathbf{x}^0,\mathbf{y}_2^0),
\end{equation}
%
\begin{equation}
\label{eq:eqv2}
\rho_X\rho_{Y_2} \left(F^*\varphi_1-\varphi_2\right) \text{ restricted to } \mathcal{C}_{\varphi_2}\cap \partial U \text{ vanishes to second order.}
\end{equation}
%
\end{defn}
\begin{lem}
\label{lem:equivlag}
Equivalent phase functions parametrize the same Lagrangian, that is, $\Lambda_{\varphi_1}=\Lambda_{\varphi_2}$, and we have $\mathcal{C}_{F^*\varphi_1}=\mathcal{C}_{\varphi_2}$.
\end{lem}
\begin{proof}
This follows from Lemmas \ref{lem:phaseplussmooth} and \ref{lem:CLFstarff}.
\end{proof}
We now associate to any local phase function its \emph{principal phase part}, which corresponds in the ${\mathrm{SG}}$-case to the leading homogeneous components of $\varphi$. From the fact that the principal part of Definition \ref{def:princpart} is obtained from the boundary restrictions of $\varphi$, we observe, using $F=\mathrm{id}\times\mathrm{id}$ and Lemma \ref{lem:princpart}:
\begin{lem}\label{lem:phpequiv}
A local phase function $\varphi$ and its principal part $\varphi_p$ are equivalent.
\end{lem}
\begin{rem}
In particular, each phase function is locally equivalent at the $e$- and $\psi$-face, respectively, to a homogeneous (w.r.t. $\rho_X$ or $\rho_Y$) phase function, after a choice of collar decomposition. In general, this is not true near the corner $\B^{\psi e}$.
\end{rem}
Since the difference in condition \eqref{eq:eqv2} is restricted to the boundary, it does not constrain the behavior of $F^*\varphi_1-\varphi_2$ in the direction transversal to the boundary, e.g. $\partial_{\rho_X}\rho_X\rho_{Y_2}(F^*\varphi_1-\varphi_2)$ at $\mathcal{C}_{\varphi_2}^e$. The following lemma states the transformation behavior of this directional derivative.
\begin{lem}
Let $X,Y_1,Y_2$ be mwbs and let $F : X\times Y_2 \to X\times Y_1$ be a ${\mathrm{sc}}$-map of the form $F = \mathrm{id} \times \Psi$. Set $h = \rho_{Y_2}^{-1} F^*\rho_{Y_1}$.
Consider a clean phase function $\varphi$ on $X \times Y_1$. Write $f = \rho_X \rho_{Y_2} \varphi$.
Then we have the following transformation laws:
\begin{align*}
hF^*\partial_{\rho_{Y_1}} \rho_X^{-1}f &= \partial_{\rho_{Y_2}} F^*\rho_X^{-1}f, \quad\text{ on } F^*\mathcal{C}_{\varphi}^\psi,\\
F^*\rho_{Y_1}^{-1}\partial_{\rho_X} f &= \partial_{\rho_X} F^*\rho_{Y_1}^{-1}f, \quad\text{ on } F^*\mathcal{C}_{\varphi}^e.
\end{align*}
\end{lem}
\begin{proof}
On $F^*\mathcal{C}_\varphi^\psi$, we have that
\begin{align*}
\partial_{\rho_{Y_2}} F^*f &= hF^*\partial_{\rho_{Y_1}}f + F^*(\partial_{y_1} f)\,\partial_{\rho_{Y_2}}F^*y_1= hF^*\partial_{\rho_{Y_1}}f,
\end{align*}
where we have used $\partial_{y_1} f = 0$ on $F^*\mathcal{C}_\varphi^\psi$.
This proves the first equality.
On $F^*\mathcal{C}_\varphi^e$, we compute
\begin{align*}
\partial_{\rho_X} F^*\rho_{Y_1}^{-1}f &= F^*\rho_{Y_1}^{-1}\partial_{\rho_X} f + F^*(\partial_{\rho_{Y_1}} \rho_{Y_1}^{-1}f)\, \partial_{\rho_X} F^*\rho_{Y_1} + F^*(\rho_{Y_1}^{-1}\partial_{y_1} f)\, \partial_{\rho_X} F^*y_1\\
&= \rho_{Y_2}^{-1}h^{-1}F^*\partial_{\rho_X} f.
\end{align*}
Therein, we used that $\partial_{y_1}f = 0$ and $\partial_{\rho_{Y_1}} \rho_{Y_1}^{-1} f = 0$ on $\mathcal{C}_{\varphi}$.
\end{proof}
\begin{rem}\label{rem:strictness}
The previous lemma, combined with Lemma \ref{lem:phpequiv}, will imply that, away from the corner, any phase function can be replaced by an equivalent phase function without radial derivative (at $\mathcal{C}_\varphi$) and the vanishing of this derivative at $\mathcal{C}_\varphi$ is preserved under application of scattering maps.
This corresponds to the fact that, in the classical theory, one can always choose a homogeneous phase function. The (non-homogeneous) lower-order terms which arise in transformations can be absorbed into the amplitude.
\end{rem}
The rest of this section will be dedicated to establishing a necessary and sufficient criterion for the local equivalence of phase functions.
\begin{lem}\label{lem:arrange}
Let $X$, $Y_1$, $Y_2$ be mwbs such that $\dim(Y_1)=\dim(Y_2)$, and set $B_i=X\times Y_i$, $i\in\{1,2\}$. Let $\varphi_i \in \rho_{X}^{-1}\rho_{Y_i}^{-1}C^\infty(B_i)$ be phase functions which have the same excess, and assume that there exist $p^0_i=(\mathbf{x}^0,\mathbf{y}^0_i)\in\mathcal{C}_{\varphi_i}$, $i\in\{1,2\}$, such that
\begin{align*}
\lambda_{\varphi_1}(\mathbf{x}^0,\mathbf{y}^0_1)&=\lambda_{\varphi_2}(\mathbf{x}^0,\mathbf{y}^0_2),
\end{align*}
and, close to $(\mathbf{x}^0,\mathbf{y}^0_i)$, $i\in\{1,2\}$, both phases parametrize the same Lagrangian $\Lambda$, i.e., locally $\Lambda=\Lambda_{\varphi_i}$,
$i\in\{1,2\}$.
Then, there exists a local diffeomorphism $F\colon B_2\to B_1$ of the
form $F=\mathrm{id}\times g$ with $F(\mathbf{x}^0,\mathbf{y}^0_2)=(\mathbf{x}^0,\mathbf{y}^0_1)$, such that
$F^*\varphi_1=\rho_X^{-1}\rho_{Y_2}^{-1}\widetilde{f}_1$
with $\mathcal{C}_{F^*\varphi_1}=\mathcal{C}_{\varphi_2}$,
locally. Moreover, locally near $(\mathbf{x}^0,\mathbf{y}^0_2)$,
\begin{equation}\label{eq:prsymbeq}
(f_2-\widetilde{f}_1)|_{\mathcal{B}_2}
\text{ vanishes of second order at any point of
$\mathcal{C}_{\varphi_2}$.
}
\end{equation}
\end{lem}
\begin{rem}
%
Notice that \eqref{eq:prsymbeq} means that the principal
part of $F^*\varphi_1$ and $\varphi_2$ in Lemma \ref{lem:arrange}
coincide on $\mathcal{C}_{\varphi_2}$. %
\end{rem}
\begin{proof}[Proof of Lemma \ref{lem:arrange}]
Since $\lambda_{\varphi_i}$ are local fibrations from
$\mathcal{C}_{\varphi_i}$ to $\Lambda_{\varphi_i}$, $i\in\{1,2\}$,
and $\Lambda_{\varphi_1}=\Lambda_{\varphi_2}=\Lambda$,
there is a local fibered diffeomorphism $F\colon B_2\to B_1$
of the form $F=\mathrm{id}\times g$, defined
locally near $(\mathbf{x}^0,\mathbf{y}^0_1)=F(\mathbf{x}^0,\mathbf{y}^0_2)$,
such that the following diagram is
commutative.
%
\vspace{-0.5cm}
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=2.5em,column sep=1.5em,minimum width=2em,minimum height=7mm,
text depth=0.5ex,
text height=2ex,
inner xsep=1pt,
outer sep=1pt]
{
\ & \Lambda & \ \\
\mathcal{C}_{\varphi_2} & & \mathcal{C}_{\varphi_1} \\
};
\path[->]
(m-2-1) edge node [left] {$\lambda_{\varphi_2}$}(m-1-2);
\path[->]
(m-2-3) edge node [right] {$\lambda_{\varphi_1}$}(m-1-2);
\path[->]
(m-2-1) edge node [above] {$\exists F$}(m-2-3);
\end{tikzpicture}
\end{center}
\vspace{-0.5cm}
Note that $F$ is not uniquely determined, not even on $\mathcal{C}_{\varphi_2}$ when the phases are merely clean and not non-degenerate.
After application of $F$, we may assume that $Y_1=Y_2=:Y$, $\mathbf{y}_1^0=\mathbf{y}_2^0=:\mathbf{y}^0$ and,
locally, $\mathcal{C}_{\varphi_1}=\mathcal{C}_{\varphi_2}=:\mathcal{C}_\varphi$.
We now show that the restriction of $f_1-f_2$ to a relative
neighbourhood of $(\mathbf{x}^0,\mathbf{y}^0)$ in $\mathcal{C}_\varphi$ vanishes of
second order. Recall that, since ${}^\scat \dd_Y\varphi_1={}^\scat \dd_Y\varphi_2=0$,
for any $p=(\mathbf{x},\mathbf{y})\in\mathcal{C}_\varphi$ we have
%
\begin{equation}\label{eq:scdmapphfncs}
\begin{pmatrix}\rho_Y\partial_{\rho_Y}f_1-f_1
&
\partial_{y_k}f_1
\end{pmatrix}
=
\begin{pmatrix}\rho_Y\partial_{\rho_Y}f_2-f_2
&
\partial_{y_k}f_2
\end{pmatrix}=0.
\end{equation}
%
Furthermore, since $\varphi_1$ and $\varphi_2$ parametrize the same
Lagrangian, we also have
$\lambda_{\varphi_1}(p)=\lambda_{\varphi_2}(p)$, that is,
$\iota({}^\scat \dd_X\varphi_1(p))=\iota({}^\scat \dd_X\varphi_2(p))$.
We treat separately the cases $p\in\mathcal{C}_\varphi^e$ and $p\in\mathcal{C}_\varphi^\psi\cup\mathcal{C}_\varphi^{\psi e}$.
If $p\in\mathcal{C}_\varphi^e$, we then find
%
\begin{equation}\label{eq:scdxequal}
\iota((\rho_Y^{-1}(\rho_X\partial_{\rho_X}f_1(p)-f_1(p)),
\rho_Y^{-1}\partial_{x_k}f_1(p)))
=
\iota((\rho_Y^{-1}(\rho_X\partial_{\rho_X}f_2(p)-f_2(p)),
\rho_Y^{-1}\partial_{x_k}f_2(p))).
\end{equation}
%
Since $\rho_Y\not=0$ on $\mathcal{C}_\varphi^e$,
and $\iota$ is a diffeomorphism on the interior, this implies
%
\[
f_1(p)=f_2(p), \quad \partial_{x_k}f_1(p)=\partial_{x_k}f_2(p),
\; k = 1, \dots, d-1.
\]
%
Combining this with \eqref{eq:scdmapphfncs}, we further obtain
%
\[
\partial_{\rho_Y}f_1(p)=\partial_{\rho_Y}f_2(p),
\quad
\partial_{y_k}f_1(p)=\partial_{y_k}f_2(p), \;
k=1, \dots, s-1.
\]
%
Since $(x,\mathbf{y})$ is a complete set of variables on $\B^e$, we can
conclude that $f_1-f_2$ vanishes of second order along $\mathcal{C}_\varphi^e$.
If $p\in\mathcal{C}_\varphi^\psi$ or $p\in\mathcal{C}_\varphi^{\psi e}$, \eqref{eq:scdmapphfncs} implies that
%
\[
f_1(p)=f_2(p)=0, \quad
\partial_{y_k}f_1(p)=\partial_{y_k}f_2(p), \; k=1,\dots, s-1.
\]
%
We have to evaluate \eqref{eq:scdxequal} as a limit $\rho_Y\to0^+$,
using, as in Lemma \ref{lem:horror},
$\iota(z)=\frac{z}{|z|}(1-\frac{1}{|z|})$. We obtain that, with
%
\[
v_1=(\rho_X\partial_{\rho_X}f_1, \partial_{x_k}f_1),
\quad
v_2=(\rho_X\partial_{\rho_X}f_2, \partial_{x_k}f_2),
\]
%
$\frac{v_1}{\|v_1\|}=\frac{v_2}{\|v_2\|}$, but not necessarily
$v_1=v_2$, which would complete the proof. We now modify
$F$ in order to achieve $v_1=v_2$. Notice that, since $\varphi_1$ and $\varphi_2$ are phase functions,
we have $v_1\not=0$ at $\mathcal{C}_\varphi$. We can therefore scale $\varphi_1$
by means of the local diffeomorphism (near $\mathcal{C}_\varphi$)
%
\[
\widetilde{F}\colon(\rho_Y, y)\mapsto
(\rho_Y \, r(\rho_X,x,\rho_Y,y), y),
\]
%
where $r(\rho_X,x,\rho_Y,y)=\frac{\|v_1\|}{\|v_2\|}$. Notice that,
by our previous computations, $r|_{\mathcal{C}_\varphi^e\cup\mathcal{C}_\varphi^{\psi e}}=1$,
and $\widetilde{F}$
is the identity for $\rho_Y=0$. Therefore, by Lemma \ref{lem:CLFstarff},
%
\[
\mathcal{C}_{\widetilde{F}^*\varphi_1}=\mathcal{C}_{\varphi_1},
\;\text{ and }\;
\Lambda_{\widetilde{F}^*\varphi_1}=\Lambda_{\varphi_1}.
\]
%
By definition, for $\widetilde{F}^*\varphi_1$ we have
%
\[
\widetilde{f}_1:=\rho_X\rho_Y\widetilde{F}^*\varphi_1=
\frac{\|v_2\|}{\|v_1\|}\,(\widetilde{F}^*f_1).
\]
%
Therefore,
%
\[
(\rho_X\partial_{\rho_X}\widetilde{f}_1, \partial_{x_k}\widetilde{f}_1)
=
\frac{\|v_2\|}{\|v_1\|}\cdot
(\rho_X\widetilde{F}^*(\partial_{\rho_X}{f}_1), \widetilde{F}^*(\partial_{x_k}{f}_1))
=:\widetilde{v}_1,
\]
%
since the derivatives acting on $r$ produce a $\rho_Y$ factor, and
then vanish along $\mathcal{C}_\varphi^\psi$. Hence, $\widetilde{v}_1=v_2$, which completes
the proof.
%
\end{proof}
\begin{rem}
%
The additional computations in the proof of the previous lemma near the face $\mathcal{C}_\varphi^\psi$ correspond to the fact that, classically, $x\cdot\theta$ and $x\cdot(2\theta)$ both parametrize
%
\[
\Lambda=\left\{(0,\xi)\mid \xi\in{\RR^d}\setminus\{0\}\right\}.
\]
%
In fact, we observe from the same proof that we may choose
the norm of $(\rho_X\partial_{\rho_X}f_1,\partial_{x_k}f_1)$
at any point of ${\Lambda_\varphi^\psi}$ without changing $\Lambda_\varphi$.
%
\end{rem}
\begin{thm}[Equivalence of phase functions]
\label{thm:equivphase}
Let $X$, $Y_1$, $Y_2$ be mwbs such that $\dim(Y_1)=\dim(Y_2)$, and set $B_i=X\times Y_i$, $i\in\{1,2\}$. Let $\varphi_i \in \rho_{X}^{-1}\rho_{Y_i}^{-1}C^\infty(B_i)$, $i\in\{1,2\}$,
be phase functions which have the same excess, and assume that there exist $(\mathbf{x}^0,\mathbf{y}^0_i)\in\mathcal{C}_{\varphi_i}$, $i\in\{1,2\}$, such that
\begin{align*}
\lambda_{\varphi_1}(\mathbf{x}^0,\mathbf{y}^0_1)&=\lambda_{\varphi_2}(\mathbf{x}^0,\mathbf{y}^0_2),
\end{align*}
and, close to $(\mathbf{x}^0,\mathbf{y}^0_i)$, $i\in\{1,2\}$, both phase functions parametrize the same Lagrangian $\Lambda$, i.e., locally $\Lambda=\Lambda_{\varphi_i}$,
$i\in\{1,2\}$. Then, $\varphi_1$ and $\varphi_2$ are equivalent at $(\mathbf{x}^0,\mathbf{y}^0_1)$
and $(\mathbf{x}^0,\mathbf{y}^0_2)$ if and only if
\begin{equation}\label{eq:sgncond}
\mathrm{sgn}\left(\,\rho_{Y_1}^{-1}\rho_X^{-1}\,{}^\scat H_{Y_1}\varphi_1\right)
=\mathrm{sgn}\left(\rho_{Y_2}^{-1}\rho_X^{-1}\, {}^\scat H_{Y_2}\varphi_2\right).
\end{equation}
\end{thm}
\begin{rem}
\label{rem:scHess}
Before we go into the details of the proof, we recall the expression in coordinates for
the Hessian appearing in condition \eqref{eq:sgncond}. By \eqref{eq:scatHessfactor} we have, writing
$\varphi=\rho_X^{-1}\rho_Y^{-1}f$,
$$
\rho_{Y}^{-1}\rho_X^{-1} \, {}^\scat H_Y\varphi=
\begin{pmatrix} \rho_Y & 0 \\
0 & \mathbbm{1} \end{pmatrix}
\begin{pmatrix}
\partial_{\rho_Y}^2f & \partial_{y_j}\partial_{\rho_Y} f \\
\partial_{\rho_Y}\partial_{y_k} f & \partial_{y_j}\partial_{y_k} f
\end{pmatrix}
\begin{pmatrix} \rho_Y & 0 \\
0 & \mathbbm{1} \end{pmatrix}.
$$
Hence, for $\rho_Y\neq 0$, the signature of this matrix is that of $H_Y f$, whereas for $\rho_Y=0$ it is that of the Hessian of $f$ \emph{restricted to $\rho_Y=0$}, that is,
only with respect to the boundary variables,
$\left(\partial_{y_j}\partial_{y_k} f(0,y)\right)_{jk}$.
\end{rem}
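\begin{rem}
A simple illustration of this discontinuity of the signature at the boundary (with a hypothetical model function, not arising from a specific phase above) is the following. Take one fiber variable and $f(\rho_Y,y)=y^2-\rho_Y^2$; then
$$
\begin{pmatrix} \rho_Y & 0 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} -2 & 0 \\ 0 & 2 \end{pmatrix}
\begin{pmatrix} \rho_Y & 0 \\ 0 & 1 \end{pmatrix}
=
\begin{pmatrix} -2\rho_Y^2 & 0 \\ 0 & 2 \end{pmatrix},
$$
which has signature $0$ for $\rho_Y\neq 0$, while at $\rho_Y=0$ the signature is $1$, namely that of $\partial_y^2 f(0,y)=2$.
\end{rem}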
\begin{proof}[Proof of Theorem \ref{thm:equivphase}]
%
%
We first prove that condition \eqref{eq:sgncond} is necessary.
In view of Lemma \ref{lem:equivlag}, we only need to compare ${}^\scat H_{Y_1}\varphi_1$ and ${}^\scat H_{Y_2}\varphi_2$ by writing
%
\begin{equation}
{}^\scat H_{Y_2}\varphi_2={}^\scat H_{Y_2}F^*\varphi_1+{}^\scat H_{Y_2}(\varphi_2-F^*\varphi_1).
\end{equation}
%
We write $r=(\varphi_2-F^*\varphi_1)$, which, by assumption,
satisfies $r\in{\mathscr{C}^\infty}(X\times Y_2)$.
Therefore, $\rho_{Y_2}^{-1}\rho_X^{-1}\,{}^\scat H_{Y_2}r$
vanishes at the boundary.
Indeed, in local coordinates we have
%
$$
\rho_{Y_2}^{-1}\rho_{X}^{-1}\,{}^\scat H_{Y_2} r=\begin{pmatrix}
\rho_{Y_2}\rho_X \partial_{\rho_{Y_2}}\rho_{Y_2}^2\partial_{\rho_{Y_2}}r & \rho_{Y_2}\rho_X\partial_{\rho_{Y_2}}\rho_{Y_2}\partial_{y_j}r \\
\rho_{Y_2}\rho_X\partial_{\rho_{Y_2}}\rho_{Y_2}\partial_{y_k}r & \rho_{Y_2}\rho_X\partial_{y_j}\partial_{y_k}r
\end{pmatrix}.
$$
Thus, we have, at the boundary,
\begin{equation}\label{eq:sgneq}
\mathrm{sgn}\left(\,\rho_{Y_2}^{-1}\rho_X^{-1} \, {}^\scat H_{Y_2}F^*\varphi_1\right)
=\mathrm{sgn}\left(\rho_{Y_2}^{-1}\rho_X^{-1} \,{}^\scat H_{Y_2}\varphi_2\right).
\end{equation}
By computing these Hessians in coordinates at corresponding stationary points, using \eqref{eq:scatHessfactor}, this implies \eqref{eq:sgncond}.
For the sufficiency of \eqref{eq:sgncond}, we assume that the reader is familiar with the theorem on the equivalence of phase functions in the usual homogeneous setting, see \cite[Prop. 4.1.3]{Treves}, and briefly sketch how the argument goes through with little modification.
By Lemma \ref{lem:arrange} we may assume $Y_1=Y_2$. Note that equivalence is achieved for $\varphi_i=\rho_X^{-1}\rho_Y^{-1}f_i$ if the $f_i$ agree on the boundary. The condition on ${}^\scat H_Y\varphi_i$
means precisely that the signatures of the Hessians of the $f_i$ in the tangential derivatives agree in the interior, and that the signatures of the Hessians of the restrictions of the $f_i$ to $\rho_Y=0$ agree as well, see Remark \ref{rem:scHess}. As such, we may use the same techniques as in the classical situation to construct a diffeomorphism on the boundary which transforms the restriction of $f_1$ into that of $f_2$, cf. also \cite{CoSc2}. This diffeomorphism is then extended by means of Proposition \ref{prop:cornerdiffeo} into the interior. For the sake of brevity, we omit the details here.
%
%
\end{proof}
\begin{rem}
Note that near $(\mathbf{x}^0,\mathbf{y}^0)\in\mathcal{C}_\varphi^\psi$, we can also invoke the classical equivalence theorem directly. We need to find a transformation
$$F:(\mathbf{x},0,y)\mapsto (\mathbf{x},0,\tilde{y}(\mathbf{x},y))$$
such that $F^*\varphi_1=\varphi_2$. For $\lambda>0$ we set $\phi_j(\mathbf{x},\lambda,y)=\lambda f_j(\mathbf{x},0,y)$, $j\in\{1,2\}$. Then the $\phi_j$ are equivalent \emph{phase functions in the usual homogeneous sense} on $X\times ({\mathbb{R}}_+\times Y)$. Indeed, evaluating $\dd\phi_j$ and ${}^\scat \dd\varphi_j$ in coordinates, we see that $\dd\phi_j\neq 0$, and $\phi_j$ is manifestly homogeneous. Furthermore, the signatures of $H_Y\phi_j$ are the same as those of ${}^\scat H_Y\varphi_j$. Since the $f_j$ are equal up to second order, the $\phi_j$ are equivalent in the usual sense and there exists a $\lambda$-homogeneous $G: (\mathbf{x},\lambda,y)\mapsto (\mathbf{x},\lambda,\tilde{y}(\lambda,\mathbf{x},y))$ such that $G^*\phi_1=\phi_2$. Setting $F=G|_{\lambda=1}$ and possibly applying a scaling, as in the proof of Lemma \ref{lem:arrange}, concludes the argument for $(\mathbf{x}^0,\mathbf{y}^0)\in\mathcal{C}_\varphi^\psi$.
\end{rem}
\section{Lagrangian distributions}
\label{sec:Lagdist}
In this section, we will address the class of Lagrangian distributions on scattering manifolds.
First, we introduce oscillatory integrals associated with a phase function and show that they are well-defined in the usual sense.
Then, we define Lagrangian distributions as a locally finite sum of oscillatory integrals, where the phase function parametrizes a
Lagrangian submanifold.
Using the results from the previous section, we are able to reduce the number of fiber-variables to a minimum
and see that the order of the Lagrangian distribution is well-defined independently of the dimension of the fiber.
\subsection{Oscillatory integrals associated with a phase function}
\begin{defn}
Let $Y$ be a mwb. For the remainder of this section, $m_\varepsilon$, $\varepsilon\in(0,1]$,
denotes a family of functions $m_\varepsilon \in{\dot{\mathscr{C}}^\infty_0}(Y)$ such that for all $k\in{\NNz_0}$ and $\alpha \in {\mathbb{N}}_0^{d-1}$ there exists $C_{k,\alpha}>0$, independent of $\varepsilon$, such that
\begin{equation}
\label{eq:approxone}
\left|(\rho_Y^2\partial_{\rho_Y})^k (\rho_Y\partial_y)^\alpha m_\varepsilon(\mathbf{y})\right| \leq C_{k,\alpha}\,\rho_Y^{k + |\alpha|},
\end{equation}
and such that, for all $\mathbf{y} \in Y^o$, we have $m_\varepsilon(\mathbf{y}) \to 1$ as $\varepsilon \to 0.$
\end{defn}
\begin{rem}
We make the observation that \eqref{eq:approxone} does not depend on the choice of bdf and is preserved under pullbacks by ${\mathrm{sc}}$-maps.
It is possible to find such a family on any manifold with boundary. In fact, any choice of tubular neighbourhood $U$ of $\partial Y$ such that $U\cong [0,\delta)\times \partial Y$ with coordinates $(\rho_Y,y)$ introduces a dilation in the first variable.
Take a function $\chi \in {\mathscr{C}^\infty_{c}}[0,\infty)$ such that $\chi(x) = 1$ on $[0,\delta]$.
Then set $m_\varepsilon=1$ on $Y\setminus U$ and
\[m_\varepsilon(\rho_Y,y)=\begin{cases}
\chi(\varepsilon\rho_Y^{-1}) & \text{if }\varepsilon\rho_Y^{-1}> \delta/2,\\
1 & \text{otherwise}.
\end{cases}\]
\end{rem}
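\begin{rem}
As a consistency check (a sketch, in the notation of the previous remark), the estimate \eqref{eq:approxone} with $k=1$, $\alpha=0$ can be verified directly for the model family $m_\varepsilon(\rho_Y,y)=\chi(\varepsilon\rho_Y^{-1})$: indeed,
\[
(\rho_Y^2\partial_{\rho_Y})\,\chi(\varepsilon\rho_Y^{-1})
=-\varepsilon\,\chi'(\varepsilon\rho_Y^{-1})
=-\rho_Y\cdot(\varepsilon\rho_Y^{-1})\,\chi'(\varepsilon\rho_Y^{-1}),
\]
and, since $\chi'$ has compact support, $t\mapsto t\chi'(t)$ is bounded, so that $\left|(\rho_Y^2\partial_{\rho_Y})m_\varepsilon\right|\leq C_{1,0}\,\rho_Y$ with $C_{1,0}=\sup_{t}|t\,\chi'(t)|$, uniformly in $\varepsilon$. The higher order estimates follow analogously.
\end{rem}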
\begin{defn}
Consider $X$, $Y$ mwbs, $U\subset X\times Y$ an open subset, $\varphi\in\rho_X^{-1}\rho_Y^{-1}{\mathscr{C}^\infty}(U)$ a phase function
and $a\in\rho_X^{-m_e}\rho_Y^{-m_\psi}{\mathscr{C}^\infty}(X\times Y, {}^\scat\, \Omega^{1/2}(X) \times {}^\scat\, \Omega^1(Y))$ an amplitude supported in $U$.
Then $I_\varphi(a)\in({\dot{\mathscr{C}}^\infty_0})^\prime(X,{}^\scat\, \Omega^{1/2}(X))$ is defined as the distributional $1/2$-density acting on $f\in{\dot{\mathscr{C}}^\infty_0}(X,{}^\scat\, \Omega^{1/2}(X))$ by
\begin{equation}
\label{eq:oscidef}
\langle I_\varphi(a),f\rangle:= \lim_{\varepsilon\searrow 0} \iint_{X\times Y} \left(e^{i\varphi} a \cdot (f \otimes m_\varepsilon)\right).
\end{equation}
\end{defn}
\begin{rem}
If $X$ and $Y$ are equipped with a scattering metric, we have a canonical identification of functions and $1$-densities provided by the volume form.
Therefore, we can freely choose whether to view functions and distributions as matching (distributional) $1$-, $0$- or $\frac{1}{2}$-densities.
\end{rem}
\begin{rem}
When $X={\BB^d}$ and $Y={\BB^s}$, these oscillatory integrals correspond, under (inverse) radial compactification, to the tempered oscillatory integrals analyzed in \cite{CoSc2,Schulz}.
\end{rem}
\begin{lem}\label{lem:osciwelldef}
The expression \eqref{eq:oscidef} yields a well-defined tempered distribution (density) on $X$.
In particular, it is independent of the choice of $m_\varepsilon$.
\end{lem}
\begin{proof}
Assume, without loss of generality, that we have a fixed scattering metric and we can identify scattering densities and functions.
Let $U \subset X \times Y=: B$ be an open neighborhood of the boundary $\mathcal{B}^\psi$ such that ${}^\scat \dd \varphi \not = 0$ on $U$.
On $X\times Y \setminus U$, the dominated convergence theorem implies that \eqref{eq:oscidef} is well-defined: the integrand $u_\varepsilon = e^{i\varphi} a (f \otimes m_\varepsilon)$ converges pointwise and is dominated by $|a\cdot f|$, which is integrable on the region where $\rho_Y$ is bounded from below.
On $U$, as in the classical theory, we can define a first-order scattering differential operator $L \in \mathrm{Diff}^1_{{\mathrm{sc}}}(U)$ with the property that $Le^{i\varphi} = e^{i\varphi}$. By Proposition 1 from \cite{Melrose1}, we see that $L^t \in \mathrm{Diff}^1_{{\mathrm{sc}}}(U)$.
Using repeated integration by parts and \eqref{eq:approxone}, we are able to increase the order in $\rho_X$ and $\rho_Y$ to arbitrary powers, and an application of the dominated convergence theorem then finishes the proof.
\end{proof}
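\begin{rem}
A possible choice of the operator $L$ in the previous proof, in analogy with the classical construction and sketched here under the assumption that a metric and a local frame of ${}^\scat T^*(X\times Y)$ have been fixed, is
\[
L=\frac{1}{|{}^\scat \dd\varphi|^2}\,\sum_k ({}^\scat \dd\varphi)_k\,(-i)\,({}^\scat \dd)_k.
\]
Since $\varphi$ is real-valued, $({}^\scat \dd)_k e^{i\varphi}=i\,({}^\scat \dd\varphi)_k\, e^{i\varphi}$, so indeed $Le^{i\varphi}=e^{i\varphi}$, and the coefficients of $L$ are smooth on $U$, where ${}^\scat \dd\varphi\neq 0$ up to the boundary.
\end{rem}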
After an arbitrary choice of scattering metrics, we may locally identify $(X,g_X)$ and $(Y,g_Y)$ with subsets of ${\mathbb{B}}^d$ and ${\mathbb{B}}^s$, respectively. Then, using some explicit local isomorphism $\Psi=\Psi_X\times\Psi_Y$, we can identify densities with functions using the induced measures $\mu_X$ and $\mu_Y$. After use of a partition of unity, we may locally express \eqref{eq:oscidef} as
\begin{align}\label{eq:oscilocdef}
\langle I_\varphi(a),f\rangle= \lim_{\varepsilon\searrow 0} \iint_{{\BB^d}\times {\BB^s}} \Psi^*\left(e^{i\varphi(\rho_X,x,\rho_Y,y)} a(\rho_X,x,\rho_Y,y) m_\varepsilon(\rho_Y,y) f(\rho_X,x)\right)\\
\label{eq:oscilocdefbis}
= \lim_{\varepsilon\searrow 0} \iint_{{\BB^d}\times {\BB^s}} e^{i\Psi^*\varphi(\rho_X,x,\rho_Y,y)} \wt m_\varepsilon(\rho_Y,y) \wt a(\rho_X,x,\rho_Y,y) \wt f(\rho_X,x) \dd \mu_{{\BB^d}} \dd \mu_{{\BB^s}},
\end{align}
where $\wt f=\Psi^*f |\dd \mu_{\BB^d}|^{-1/2}$ and $\wt a\in \rho_{\BB^d}^{-m_e}\rho_{\BB^s}^{-m_\psi}{\mathscr{C}^\infty}({\BB^d}\times {\BB^s})$ satisfies $\wt a \wt f \dd \mu_{{\BB^d}} \dd \mu_{{\BB^s}}=a f$.
Summing up, we may always transform so as to work locally on ${\BB^d}\times{\BB^s}$, where in local coordinates we deal with usual oscillatory integrals.
Since \eqref{eq:oscidef} does not depend on the choice of $m_\varepsilon$, as is usual we drop it from the notation and write, \emph{in the sense of oscillatory integrals},
\begin{equation}\label{eq:osciformdef}
I_\varphi(a):= \int_{Y} e^{i\varphi}a.
\end{equation}
\subsubsection{Singularities of oscillatory integrals}
Recall that there is a notion of wavefront-set adapted to the pseudo-differential scattering calculus, called the scattering wavefront-set, cf. \cite{Cordes,Melrose1,CoMa}.
\begin{defn}
Let $u\in({\dot{\mathscr{C}}^\infty_0})^\prime(X,{}^\scat\, \Omega^{1/2})$.
A point $z_0 \in \mathcal{W}=\partial\big({}^\scat \,\overline{T}^*X\big)$ is not in the scattering wavefront-set, and we write $z_0 \notin \mathrm{WF}_\sct(u)$,
if there exists a scattering pseudo-differential operator $A$ whose symbol is elliptic at $z_0$ such that $Au\in{\dot{\mathscr{C}}^\infty_0}(X,{}^\scat\, \Omega^{1/2}).$
\end{defn}
\begin{prop}
\label{prop:WFosci}
For the oscillatory integral in \eqref{eq:oscidef}, we have
$$\mathrm{WF}_\sct(I_\varphi(a))\subseteq \Lambda_\varphi.$$
Furthermore, if $z\in \Lambda_\varphi$ and $a$ is rapidly decaying near $\lambda_\varphi^{-1}(z)$, then $z\notin \mathrm{WF}_\sct(I_\varphi(a))$.
\end{prop}
\begin{rem}
\label{rem:css}
The (${\mathrm{sc}}$-)singular support of $u$ is defined as follows:
a point $p_0\in X$ is contained in $\mathrm{singsupp}_{\mathrm{sc}\hspace{-2pt}}(u)$ if and only if for every $f\in {\mathscr{C}^\infty}(X)$ with $f(p_0)=1$ we have $fu\notin {\dot{\mathscr{C}}^\infty_0}(X)$.
As for the classical wavefront-set and singular support, we have that ${\mathrm{pr}}_1(\mathrm{WF}_\sct(u))=\mathrm{singsupp}_{\mathrm{sc}\hspace{-2pt}}(u)$.
Thus, in particular, if $a$ is rapidly decaying near $\mathcal{C}_\varphi$, then $I_\varphi(a)\in {\dot{\mathscr{C}}^\infty_0}(X)$.
\end{rem}
We refer the reader to \cite{CoSc,Schulz} for the details of this analysis of the wavefront-sets. The proof is carried out as in the classical setting: first, a characterization of $\mathrm{WF}_\sct$ in terms of cut-offs and the Fourier transform is achieved, and then one estimates $\Fu I_\varphi(a)$ in coordinates.
Proposition \ref{prop:WFosci} gives another insight into why we consider
$\Lambda_\varphi$, and not $L_\varphi$, as the true object of interest associated with a phase function. In fact, considering \eqref{eq:oscidef} once more, we see that we may modify the phase function and the amplitude in the integral by any real-valued function $\psi\in{\mathscr{C}^\infty}(X\times Y)$, writing
$$e^{i\varphi} a=e^{i(\varphi+\psi)} \left(e^{-i\psi}a\right).$$
Then $e^{-i\psi}a\in \rho_X^{-m_e}\rho_Y^{-m_\psi}{\mathscr{C}^\infty}(X\times Y)$, and hence it is still an amplitude, and $\varphi+\psi$ is a new local
phase function. Now, while in general $L_\varphi\neq L_{\varphi+\psi}$, we have $\Lambda_\varphi= \Lambda_{\varphi+\psi}$, by Lemma \ref{lem:phaseplussmooth}.
This underlines that only $\Lambda_\varphi$ and not $L_\varphi$ can be associated with $I_\varphi(a)$ in an intrinsic way.
Nevertheless, it is often convenient to have $L_\varphi$ available during the proofs.
\subsection{Definition of Lagrangian distributions}
The class of oscillatory integrals associated with a Lagrangian is -- as in the classical theory -- not a good distribution space, since in general it is not possible to find a single global phase function to parametrize $\Lambda$. Instead, we introduce the following class of Lagrangian distributions. Note that, by our previous findings, we may always reduce an oscillatory integral on $X\times Y$ to a finite sum of oscillatory integrals over $X\times{\mathbb{B}}^s$ for $s=\dim(Y)$.
\begin{defn}[${\mathrm{sc}}$-Lagrangian distributions]\label{def:Lagdist}
Let $X$ be a mwb, $\Lambda\subset \partial{}^\scat \,\overline{T}^*X$ a ${\mathrm{sc}}$-Lagrangian. Then, $I^{m_e,m_\psi}(X,\Lambda)$, $(m_e,m_\psi)\in {\mathbb{R}}^2$,
denotes the space of distributions that can be written as a finite sum of (local) oscillatory integrals as in \eqref{eq:osciformdef}, whose phase functions are clean and locally parametrize $\Lambda$, plus an element of ${\dot{\mathscr{C}}^\infty_0}(X)$. More precisely, $u\in I^{m_e,m_\psi}(X,\Lambda)$ if, modulo a remainder in ${\dot{\mathscr{C}}^\infty_0}(X)$,
\begin{equation}
\label{eq:Lagdistdef}
u=\sum_{j=1}^N \int_{Y_j} e^{i\varphi_j}a_j,
\end{equation}
where for $j=1,\dots,N$:
\begin{enumerate}
\item[1.)] $Y_j$ is a mwb of dimension $s_j$;
\item[2.)] $\varphi_j\in \rho_{Y_j}^{-1}\rho_X^{-1}{\mathscr{C}^\infty}(X\times Y_j)$ is a local clean phase function with excess $e_j$, defined on an open neighbourhood of the support of $a_j$,
which locally parametrizes $\Lambda$;
\item[3.)] $a_j\in \rho_{Y_j}^{-m_{\psi,j}}\rho_X^{-m_{e,j}}{\mathscr{C}^\infty}\big(X\times Y_j, {}^\scat\, \Omega^{1/2}(X) \times {}^\scat\, \Omega^1(Y_j)\big)$ with
\[
(m_{\psi,j},m_{e,j})=\left(m_\psi+\frac{d}{4}-\frac{s_j}{2}-\frac{e_j}{2},m_e-\frac{d}{4}+\frac{s_j}{2}-\frac{e_j}{2}\right).
\]
\end{enumerate}
We also set
\begin{align*}
I^{-\infty,-\infty}(X,\Lambda) &= \bigcap_{(m_e,m_\psi)\in{\mathbb{R}}^2} I^{m_e,m_\psi}(X,\Lambda),
\\
I(X,\Lambda) =I^{+\infty,+\infty}(X,\Lambda)&= \bigcup_{(m_e,m_\psi)\in{\mathbb{R}}^2} I^{m_e,m_\psi}(X,\Lambda).
\end{align*}
\end{defn}
\begin{rem}
The reason for the choice of the $a_j$ in the scattering amplitude density spaces of order $(m_{e,j}, m_{\psi,j})$
will be explained in Section \ref{ssec:order}.
\end{rem}
The next result follows from Proposition \ref{prop:WFosci}.
\begin{prop}
Let $\Lambda\subset \partial\, {}^\scat \,\overline{T}^*X$ be a ${\mathrm{sc}}$-Lagrangian, and $u\in I(X,\Lambda)$. Then $\mathrm{WF}_\sct(u)\subseteq \Lambda.$
\end{prop}
As in the classical case, the class of Lagrangian distributions contains the globally regular functions (cf. Treves~\cite[Chapter VIII.3.2]{Treves}):
\begin{lem}\label{lem:smoothpart}
Let $\Lambda\subset \partial\, {}^\scat \,\overline{T}^*X$ be a ${\mathrm{sc}}$-Lagrangian. Then
\begin{equation}
{\dot{\mathscr{C}}^\infty_0}(X,{}^\scat\, \Omega^{1/2}(X)) = I^{-\infty,-\infty}(X,\Lambda).
\end{equation}
\end{lem}
\begin{proof}
We first prove the inclusion ``$\supseteq$''. Choose a finite covering of ${}^\scat \,\overline{T}^*X$ with open sets $\{X_j\}_{j=1}^N$ such that there exists a clean phase function $\varphi_j$ on each $X_j$
parametrizing $\Lambda \cap {}^\scat \,\overline{T}^*X_j$, $j=1,\dots,N$. Let $\{g_j\}_{j=1}^N$ be a smooth partition of unity subordinate to this covering. We view $X_j$ as a subset
of $X \times {\mathbb{B}}^d$, $j=1,\dots,N$.
Let $\chi \in {\dot{\mathscr{C}}^\infty_0}({\mathbb{B}}^d, {}^\scat\, \Omega^1({\mathbb{B}}^d))$ be such that $\int \chi = 1$. For any $f \in {\dot{\mathscr{C}}^\infty_0}(X,{}^\scat\, \Omega^{1/2}(X))$ we set
\begin{align*}
a_j = e^{-i\varphi_j}g_j \cdot (f \otimes \chi),\quad
f_j = \int_{{\mathbb{B}}^d} e^{i\varphi_j} a_j, \quad j=1,\dots,N.
\end{align*}
We see that
\[a_j \in {\dot{\mathscr{C}}^\infty_0}(X\times {\mathbb{B}}^d, {}^\scat\, \Omega^{1/2}(X) \times {}^\scat\, \Omega^1({\mathbb{B}}^d)), \quad j=1,\dots,N,\]
and, summing up,
\begin{align*}
\sum_{j=1}^N f_j(x) &= \int_{{\mathbb{B}}^d} \left(\sum_{j=1}^N g_j(x,y)\right) \cdot (f(x) \otimes \chi(y))= f(x).
\end{align*}
The inclusion ``$\subseteq$'' is obtained by differentiation under the integral sign.
\end{proof}
\subsection{Examples}
We have the following examples of (scattering) Lagrangian distributions.
\begin{enumerate}
\item Standard Lagrangian distributions of compact support, \cite{HormanderFIO,Hormander4}, in particular Lagrangian distributions on compact manifolds $X$ without boundary, are scattering Lagrangian distributions, using the identification
\begin{align*}
\textrm{Fiber-conic sets in }T^*X\setminus\{0\}\longleftrightarrow \textrm{Sets in }S^*X
\stackrel{\textrm{rescaling}}{\longleftrightarrow} \textrm{Sets in }\Wt^\psi.
\end{align*}
\item Legendrian distributions of \cite{MZ}. Here, the distributions are smooth functions whose singularities at the boundary are of Legendrian type, that is, contained in $\Wt^e$.
\item Conormal distributions, meaning the distributions where the Lagrangian, see Section \ref{sec:conorm}, is $\partial\big({}^\scat \,\overline{T}^*X'\big)$ for a ($k$-dimensional) $p$-submanifold $X'\subset X$. These distributions correspond, under compactification of base and fiber, to the oscillatory integrals given in local (pre-compactified) Euclidean coordinates by
$$u(x',x'')=\int e^{ix'\xi} a(x,\xi)\,\dd \xi, \qquad a(x,\xi)\in{\mathrm{SG}}_{\mathrm{cl}}^{m_e,m_\psi}({\RR^d}\times{\mathbb{R}}^{d-k}).$$
A prototypical example is given by (derivatives of) $\delta_{0}(x')\otimes 1$. These arise as (simple or multiple) layers when solving partial differential equations along infinite boundaries or Cauchy surfaces.
\item Examples of scattering Lagrangian distributions which are of none of the previous types arise in the parametrix construction to hyperbolic equations on unbounded spaces, for example the two-point function for the Klein-Gordon equation. For a discussion of this example consider \cite{CoSc2}.
\end{enumerate}
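As a consistency check for the orders in Definition \ref{def:Lagdist} (a sketch, glossing over the density factors and the compactification), consider the conormal example $u=\delta_0$ on ${\RR^d}$, written as
\[
\delta_0(x)=(2\pi)^{-d}\int_{{\RR^d}} e^{ix\xi}\,\dd \xi,
\]
with amplitude $a=(2\pi)^{-d}\in{\mathrm{SG}}^{0,0}_{\mathrm{cl}}$, a non-degenerate phase, that is $e_j=0$, and $s_j=d$ fiber variables. The conditions in 3.) of Definition \ref{def:Lagdist} then yield $m_\psi+\frac{d}{4}-\frac{d}{2}=0$ and $m_e-\frac{d}{4}+\frac{d}{2}=0$, that is, $(m_e,m_\psi)=\left(-\frac{d}{4},\frac{d}{4}\right)$; the value $m_\psi=\frac{d}{4}$ is consistent with the classical order of $\delta_0$ as a Lagrangian distribution.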
\begin{rem}
Note that, at this stage, the kernels of pseudo-differential operators on $X\times X$ are \emph{not} scattering conormal distributions associated with the diagonal $\Delta\subset X\times X$ when $X$ is a manifold with boundary. In fact, in this case $X\times X$ is a manifold with corners. Furthermore, $\Delta\subset X\times X$ does not hit the corner $\partial X\times\partial X$ in a clean way, that is,
$\Delta\subset X\times X$ is not a $p$-submanifold. Similarly, the phase function associated to the ${\mathrm{SG}}$-phase $(x-y)\xi\in{\mathrm{SG}}^{1,1}_{\mathrm{cl}}({\mathbb{R}}^{2d}\times{\RR^d})$ is not clean.
However, the formulation of the theory developed in this paper admits a natural extension to manifolds with corners. The geometric obstruction of $\Delta\subset X\times X$ -- or more generally the graphs of (scattering) canonical transformations -- not being a $p$-submanifold can be overcome by lifting the analysis to a blow-up space, see \cite{MZ,Melroseb}. We postpone this theory of compositions of canonical relations and calculus of scattering Fourier integral operators to a subsequent paper.
\end{rem}
\subsection{Transformations of oscillatory integrals}
In Section \ref{sec:exchphase} we have seen several procedures that allow
us to switch from one phase function to others parametrizing the same Lagrangian. We will now exploit these to transform oscillatory integrals into ``standard form''. In the sequel, we will always assume, by means of
a partition of unity, that the support of the amplitude is suitably small.
\subsubsection{Transformation behavior and equivalent phase functions}
\label{sec:moves}
Now we reconsider \eqref{eq:oscilocdef}, to express the transformation
behavior of the oscillatory integrals under fiber-preserving
diffeomorphisms. With the chosen notation and a local
phase function $\varphi_1$, we have
\begin{equation}\label{eq:oscintsimpl}
I_{\varphi_1}(a)= \int_{Y_1} e^{i\varphi_1}a=\int_{Y_2} e^{iF^*\varphi_1}F^*a=I_{F^*\varphi_1}(F^*a)
\end{equation}
for any diffeomorphism $F:X\times Y_2\rightarrow X\times Y_1$ of the form $F=\mathrm{id}\times g$.
Assume that $\varphi_2$ is equivalent to $\varphi_1$ by $F$,
see Definition \ref{def:phequiv}. After the transformation,
we rewrite \eqref{eq:oscintsimpl} as
\begin{equation}
\int_{Y_2} e^{i\varphi_2}e^{i(F^*\varphi_1-\varphi_2)}F^*a.
\end{equation}
Now, since $F^*\varphi_1-\varphi_2$ is smooth up to the boundary,
the same holds for $e^{i(F^*\varphi_1-\varphi_2)}$ and this factor can be seen as part of the amplitude.
Therefore, we may write
\begin{equation}\label{eq:oscintequiv}
I_{\varphi_1}(a)=I_{\varphi_2}
\big((F^*a)\,\exp(i(F^*\varphi_1-\varphi_2))\big).
\end{equation}
In particular, near any boundary point of the domain of definition,
we can express $I_\varphi(a)$ using the principal part $\varphi_p$ of $\varphi$
introduced in Definition \ref{def:princpart}, namely as
\begin{equation}\label{eq:oscintstd}
I_{\varphi_p}(\widetilde{a}), \text{ with }
\widetilde{a}=a\,\exp\big(i(\varphi-\varphi_p)\big).
\end{equation}
By Lemma \ref{lem:phpequiv}, $\varphi - \varphi_p \in {\mathscr{C}^\infty}$ and thus $\widetilde{a} \in \rho_X^{-m_e}\rho_Y^{-m_\psi} {\mathscr{C}^\infty}(B)$.
In the following constructions, we always assume that $\varphi$ is replaced by its principal part, cf. Remark \ref{rem:strictness}.
\subsubsection{Reduction of the fiber}\label{ssbs:redfbr}
We will now analyze the change of boundary behavior under a reduction of fiber variables near $p_0\in\operatorname{supp}(a)\cap\mathcal{C}_\varphi$. To this end, we assume that
$$\rho_Y^{-1}\rho_X^{-1}\,^{\mathrm{sc}\hspace{-2pt}} H_Y\varphi\text{ has rank }r>0\text{ at }p_0\in\mathcal{C}_\varphi.$$
We assume, as explained above, that the oscillatory integral
is in the form \eqref{eq:oscintstd}, namely, $\varphi$ is replaced
by its principal phase part. We observe that, at the boundary point $p_0$,
$$\rk(\rho_Y^{-1}\rho_X^{-1}\,^{\mathrm{sc}\hspace{-2pt}} H_Y\varphi)=\rk(\rho_Y^{-1}\rho_X^{-1}\,^{\mathrm{sc}\hspace{-2pt}} H_Y\sigma(\varphi_p)).$$
By Proposition \ref{prop:fiberred}, we can define a local phase function
$\varphi_{\red}$ parametrizing the same Lagrangian as
$\varphi$. In particular, after a change of coordinates by a scattering map,
we can assume $(\mathbf{x},\mathbf{y})\in X\times{\mathbb{B}}^{s-r}\times(-\varepsilon,\varepsilon)^{r}$,
and $\varphi_{\red}$ is given by
\[
\varphi_{\red}(\mathbf{x},\rho_Y,y^\prime)=\varphi(\mathbf{x},\rho_Y,y^\prime,0),
\]
where $\rho_Y=\rho_{{\mathbb{B}}^{s-r}}$ is the boundary defining function on ${\mathbb{B}}^{s-r}$ and on ${\mathbb{B}}^{s-r}\times(-\varepsilon,\varepsilon)^r$.
We introduce
\begin{equation}\label{eq:wtp}
\widetilde{\varphi}(\mathbf{x},\mathbf{y})=\varphi_{\red}(\mathbf{x},\rho_Y,y^\prime)+
\frac{1}{2}\rho_X^{-1}\rho_Y^{-1}Q(y^{\prime\prime}),
\end{equation}
where $Q$ is a non-degenerate quadratic form with the same signature as
$\partial_{y^{\prime\prime}}\partial_{y^{\prime\prime}}f$ at $p_0$.
Then, by Theorem \ref{thm:equivphase},
$\varphi$ is equivalent to $\widetilde{\varphi}$ by a local
diffeomorphism $F=\mathrm{id}\times g$.
Note that $\varphi_\red$ is equal to its principal part, because we assumed that $\varphi$ is replaced by $\varphi_p$.
We may assume that $a$ is supported in an arbitrarily small neighbourhood of the stationary points of $\varphi$. Indeed, we may achieve this for a general amplitude $a$ by applying a cut-off in $y^{\prime\prime}$ and writing $a=\phi a + (1-\phi) a$. The oscillatory integral with amplitude $(1-\phi)a$ produces a term in ${\dot{\mathscr{C}}^\infty_0}(X, \Omega^{1/2}(X))$, by Remark \ref{rem:css}.
Therefore, choosing the support of $a$ small enough, we may perform the change of variables by the local diffeomorphism $F$ as in \eqref{eq:oscintequiv}. We write, motivated by Lemma \ref{lem:intdensity} and Example \ref{ex:embdball},
$$a_\red(\mathbf{x},\wby)\,\frac{|\dd \wby''|}{\rho_{\wtY}^{r} \cdot [h(\mathbf{x},\wby)]^r}=(F^*a)(\mathbf{x},\wby),$$
which is assumed to be supported in some compact subset of $(-\varepsilon,\varepsilon)^r$. Then $I_\varphi(a)$ is transformed into
$I_{\varphi_{\red}}(b)$
where
\begin{equation}
b(\mathbf{x},\rho_{Y},y^\prime)=\rho_{Y}^{-r}\int_{(-\varepsilon,\varepsilon)^r} e^{\frac{i}{2}\rho_X^{-1}\rho_{Y}^{-1}Q(y^{\prime\prime})}
\Big(e^{i(F^*\varphi(\mathbf{x},\mathbf{y})-\widetilde{\varphi}(\mathbf{x},\mathbf{y}))}\,a_\red(\mathbf{x},\mathbf{y})\Big)\dd y^{\prime\prime}. \label{eq:fiberredsymb}
\end{equation}
We claim that
$b(\mathbf{x},\rho_{Y},y^\prime)$
is again a (density valued) amplitude. First, it is clear that $b$ decays rapidly at $(\mathbf{x},\rho_{Y},y^\prime)$ if $a$ decays rapidly at $(\mathbf{x},\rho_{Y},y^\prime,0)$. In particular, $b$ is smooth away from $\mathcal{B}$.
We now apply the stationary phase lemma \cite[Lem. 7.7.3]{Hormander1} to \eqref{eq:fiberredsymb}, which yields the asymptotic expansion, as $\rho_Y\rho_X\rightarrow 0$,
\begin{multline}
\label{eq:princred1}
b(\mathbf{x},\rho_Y,y^\prime)= \rho_X^{r/2}\rho_Y^{-r/2}|\det Q|^{-1/2} e^{\frac{i}{4}\pi \mathrm{sgn}(Q)} e^{i(F^*\varphi(\mathbf{x},\rho_Y,y^\prime,0)-\widetilde{\varphi}(\mathbf{x},\rho_Y,y^\prime,0))} a_\red(\mathbf{x},\rho_Y,y^\prime,0) \\
+{\mathcal{O}}\big(\rho_Y^{-m_\psi-\frac{r}{2}+1}\rho_X^{-m_e+\frac{r}{2}+1}\big).
\end{multline}
Similar asymptotics hold for all derivatives of $b$.
We may hence view $b$ as a (density valued) amplitude of order
\begin{equation}\label{eq:order-fiber}
(m_e',m_\psi')=\left(m_e-\frac{r}{2},m_\psi+\frac{r}{2}\right).
\end{equation}
By Remark \ref{rem:strictness} we see that, away from the corner, $F^*\varphi-\widetilde{\varphi}$ vanishes at $\mathcal{C}_\varphi$. Therefore, the principal part of $b$ does not depend on $\varphi$.
Hence, by comparison of principal parts, cf. Lemma \ref{lem:princpart}, \eqref{eq:princred1} reduces to
\begin{equation}
\label{eq:princred}
b(\mathbf{x},\rho_Y,y^\prime)\sim \rho_X^{r/2}\rho_Y^{-r/2}|\det Q|^{-1/2} e^{\frac{i}{4}\pi \mathrm{sgn}(Q)} a_\red(\mathbf{x},\rho_Y,y^\prime,0)
\end{equation}
modulo terms of lower order.
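As a consistency check of \eqref{eq:order-fiber} (a sketch of the bookkeeping, under the convention that an amplitude of order $(m_e,m_\psi)$ is $\mathcal{O}(\rho_X^{-m_e}\rho_Y^{-m_\psi})$): since $a_\red=\mathcal{O}(\rho_X^{-m_e}\rho_Y^{-m_\psi})$, the leading term in \eqref{eq:princred} satisfies
\[
\rho_X^{r/2}\rho_Y^{-r/2}\,a_\red(\mathbf{x},\rho_Y,y^\prime,0)
=\mathcal{O}\big(\rho_X^{-(m_e-r/2)}\,\rho_Y^{-(m_\psi+r/2)}\big),
\]
which is precisely the growth allowed for an amplitude of order $(m_e-\frac{r}{2},m_\psi+\frac{r}{2})$, while the remainder in \eqref{eq:princred1} carries one additional full factor of $\rho_X\rho_Y$.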
\subsubsection{Elimination of excess}\label{subss:elexcess}
Assume now that $\varphi$ is a clean phase function of excess $e>0$. Near some point in $\mathcal{C}_\varphi$, as described in Section \ref{sec:phaseexelim}, we may make the following geometric assumptions after application of some diffeomorphism $F$: We assume that $Y={\mathbb{B}}^{s-e}\times(-\epsilon,\epsilon)^e$ and that the fibers of $\mathcal{C}_\varphi\rightarrow \Lambda_\varphi$ are given by constant $(\mathbf{x},\rho_Y,y')$ and arbitrary $y''$.
We proceed as in \cite{Treves} and define
\begin{equation}\label{eq:wtredex}
\wt{\varphi}(\rho_{X},x,\rho_{Y},y^\prime):=\varphi(\rho_{X},x,\rho_{Y},y^\prime,0).
\end{equation}
We observe that for any fixed $y^{\prime\prime}$ the phase function $\phi(y^{\prime\prime})$, defined as
\begin{equation}\label{eq:phiypp}
[\phi(y^{\prime\prime})](\mathbf{x},\rho_{Y},y^\prime)=\varphi(\mathbf{x},\rho_{Y},y^\prime,y^{\prime\prime}),
\end{equation}
is equivalent to $\wtp$. Indeed, since $\partial_{y''}{}^\scat \dd_Y\varphi=0$, the Hessian ${}^\scat H_Y\phi(y^{\prime\prime})$
has the same signature as ${}^\scat H_{{\mathbb{B}}^{s-e}}\wt\varphi$, and both phase functions parametrize the same Lagrangian with the same number $(s-e)$ of phase variables.
Therefore, Theorem \ref{thm:equivphase} guarantees the existence of a family of diffeomorphisms $G(y^{\prime\prime}):(\mathbf{x},\rho_Y,y')\mapsto (\mathbf{x},g(\mathbf{x},\rho_Y,y',y^{\prime\prime}))$
such that, defining $ \wt{G}\colon (\mathbf{x},\mathbf{y})=(\mathbf{x},\rho_Y,y^\prime,y^{\prime\prime})\mapsto(\mathbf{x}, g(\mathbf{x},\rho_Y,y',y^{\prime\prime}), y^{\prime\prime})$,
\begin{equation}\label{eq:diffeoG}
\wt{G}^*\varphi-\wt{\varphi}
\end{equation}
is smooth everywhere, and vanishes on $\cC_\wtp$ away from the corner by Remark~\ref{rem:strictness}.
Then we may express $I_\varphi(a)$ as $I_{\wt\varphi}(b)$, where
\begin{equation}\label{eq:excesssymb}
b(\mathbf{x},\rho_Y,y^\prime)=\rho_Y^{-e} \int_{(-\varepsilon,\varepsilon)^e} e^{i(\wt{G}^*\varphi-\wt{\varphi})(\mathbf{x},\rho_Y,y^\prime,y^{\prime\prime})} (\wt{G}^*a)_{\red}(\mathbf{x},\rho_Y,y^\prime,y^{\prime\prime})\,\dd y''
\end{equation}
and
$$ (\wt{G}^*a)_{\red}(\mathbf{x},\mathbf{y})\,\frac{|\dd y''|}{\rho_{\wtY}^{e} \cdot [h(\mathbf{x},\mathbf{y})]^e}=(\wt{G}^*a)(\mathbf{x},\mathbf{y}).$$
Since $\wt{G}^*\varphi-\wt\varphi$ is smooth, $b$ is again an amplitude of order
\begin{equation}\label{eq:order-excess}
(\tilde{m}_e,\tilde{m}_\psi)=\left(m_e,m_\psi+e\right).
\end{equation}
Notice that at points in $\mathcal{C}_\varphi$ away from the corner, $\wt{G}^*\varphi-\wt{\varphi}$ vanishes and hence \eqref{eq:excesssymb} reduces to
\begin{equation}\label{eq:excesssymbbis}
b(\mathbf{x},\rho_Y,y^\prime)=\rho_Y^{-e} \int_{(-\varepsilon,\varepsilon)^e} (\wt{G}^*a)_{\red}(\mathbf{x},\rho_Y,y^\prime,y^{\prime\prime})\,\dd y''.
\end{equation}
\subsection{The order of a Lagrangian distribution}\label{ssec:order}
We will now obtain a definition of the order of $I_\varphi(a)$ which is invariant under all three steps
described above.
\begin{lem}
The numbers $\mu_\psi = m_\psi + s/2 + e/2$ and $\mu_e = m_e - s/2 +e/2$ remain constant under reduction of fiber-variables and elimination of excess.
\end{lem}
\begin{proof}
Consider a Lagrangian distribution $A = I_\varphi(a)$, where $a$ has order $(m_e, m_\psi)$ and $\dim Y = s$, with excess $e$ and $r$ reducible fiber variables.
After the reduction of fiber, we obtain an amplitude $a'$ with order $m_e' = m_e -r/2, m_\psi' = m_\psi + r/2$ (cf. \eqref{eq:order-fiber}), with excess $e' = e$ and number of fiber variables $s' = s - r$.
The elimination of excess yields an amplitude $a^\#$ with order $m_e^\# = m_e, m_\psi^\# = m_\psi + e$ (cf. \eqref{eq:order-excess}), excess $e^\# = 0$ and $s^\# = s - e$.
It is now straightforward to check that
\begin{alignat*}{2}
m_\psi + s/2 + e/2 &= m_\psi' + s'/2 + e/2 & &= m_\psi^\# + s^\#/2+e^\#/2,\\
m_e - s/2 + e/2 &= m_e' - s'/2 + e/2 & &= m_e^\# - s^\# / 2+e^\#/2.
\end{alignat*}
\end{proof}
This shows that the tuple $(\mu_\psi, \mu_e)$ can be used to define the order of a Lagrangian distribution.
We still have the freedom to add arbitrary constants to both orders.
In order to choose these constants, we compare our class of Lagrangian distributions with H\"ormander's Lagrangian distributions and the Legendrian distributions of Melrose--Zworski~\cite{MZ}.
First, consider the delta distribution $\delta_0$, which lies in the H\"ormander class $I^{d/4}$ and has $\mu_\psi = d/2$. Therefore, we choose $m_\psi = \mu_\psi - d/4$ to obtain the same $\psi$-order for $\delta_0$.
Similarly, the constant function is a Legendrian distribution of order $-d/4$ and $\mu_e = 0$, and therefore we choose $m_e = \mu_e + d/4$.
Note that we use the opposite sign convention for the $m_e$-order than in \cite{MZ}.
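These choices can be verified on the model examples (a sketch; we use the standard Fourier representation of $\delta_0$ on ${\mathbb{R}}^d$ and regard the constant function as an oscillatory integral without phase variables):
\begin{align*}
\delta_0=(2\pi)^{-d}\int_{{\mathbb{R}}^d} e^{ix\xi}\,\dd\xi:\quad & s=d,\ e=0,\ m_\psi=0\ \Longrightarrow\ \mu_\psi=\frac{d}{2},\ \mu_\psi-\frac{d}{4}=\frac{d}{4},\\
u\equiv 1:\quad & s=0,\ e=0,\ m_e=0\ \Longrightarrow\ \mu_e=0,\ \mu_e+\frac{d}{4}=\frac{d}{4},
\end{align*}
recovering the H\"ormander order $d/4$ for $\delta_0$ and, up to the sign convention just mentioned, the Melrose--Zworski order $-d/4$ for the constant function.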
\section{The principal symbol of a Lagrangian distribution}\label{sec:symb}
We will now define the principal symbol map $j^\Lambda_{m_e,m_\psi}$ on $I^{m_e,m_\psi}(X,\Lambda)$. Similarly to the classical theory,
it takes values in a suitable (density) bundle on $\Lambda$. This is consistent with the notion of the principal symbol map $j_{m_e,m_\psi}$ for scattering
operators, see \cite{Melrose1,Melrose2}, as well as with that of the principal part of classical ${\mathrm{SG}}$ symbols, see \cite{ES, Schulz},
both of which provide smooth objects defined on $\mathcal{W}=\partial{}^\scat \,\overline{T}^*X\supset\Lambda$. We adapt the construction in \cite{Treves} (see also \cite{Hormander4,HormanderFIO}), starting from the simplest case of local non-degenerate phase functions parametrizing $\Lambda$, up to the general case of local clean phase functions.
Let $\Lambda\subset\mathcal{W}$ be an ${\mathrm{sc}}$-Lagrangian, which on $B=X\times Y$
is locally parametrized by a local non-degenerate phase function
$\varphi\in \rho_{Y}^{-1}\rho_X^{-1}{\mathscr{C}^\infty}(U)$, $U\subset B$.
Let $a\in \rho_{Y}^{-m_{\psi}}\rho_X^{-m_{e}}{\mathscr{C}^\infty}\big(X\times Y, {}^\scat\, \Omega^{1/2}(X)\otimes {}^\scat\, \Omega^{1}(Y)\big)$ be
supported in $U$, and let $I_\varphi(a)$ be a (micro-)local representation of $u\in I^{m_e,m_\psi}(X,\Lambda)$ as a single oscillatory integral.
We now fix a $1$-density $\mu_X$ on $X$. Any choice of $1$-density $\mu_Y$ on $Y$ then trivializes the line bundle ${}^\scat\, \Omega^{1/2}(X)\otimes{}^\scat\, \Omega^{1}(Y)$, so that any element of ${\mathscr{C}^\infty}(X\times Y, {}^\scat\, \Omega^{1/2}(X)\otimes{}^\scat\, \Omega^{1}(Y))$ is given by a multiple of $\rho_X^{-(d+1)/2}\rho_Y^{-s-1}\sqrt{\mu_X}\otimes\mu_Y$.
Any choice of coordinates $(\rho_Y,y)$ in $Y$ allows us to express $\mu_Y$ locally as $\frac{\partial \mu_Y}{\partial (\rho_Y,y)}\,\dd\rho_Y\dd y$, meaning as having a smooth density factor with respect to the (local) Lebesgue measure. As such, we rewrite the amplitude $a\in\rho_Y^{-m_\psi}\rho_X^{-m_e}{\mathscr{C}^\infty}(X\times Y, {}^\scat\, \Omega^{1/2}(X)\otimes{}^\scat\, \Omega^{1}(Y))$ in any choice of local coordinates as
\begin{align}
\label{eq:canampl}
\rho_Y^{m_\psi}\rho_X^{m_e}a(\mathbf{x},\mathbf{y})&=\ap(\mathbf{x},\mathbf{y})\,\rho_X^{-(d+1)/2}\rho_Y^{-s-1}\sqrt{\mu_X}\otimes\dd\rho_Y\dd y
\end{align}
for $\ap\in {\mathscr{C}^\infty}(X\times Y)$.
\subsection{Non-degenerate equivalent phase functions}\label{subs:ndg}
As above (cf. \eqref{eq:scdxexpl}), when $U$ is a neighbourhood of a point close to the boundary $\mathcal{B}$, we can there identify ${}^\scat \dd_Y\varphi$ with the map,
\[
(\mathbf{x},\mathbf{y})\mapsto\Phi(\mathbf{x},\mathbf{y})=\big(-f(\mathbf{x},\mathbf{y})+\rho_Y\partial_{\rho_Y}f(\mathbf{x},\mathbf{y}) \quad \partial_yf(\mathbf{x},\mathbf{y})\big) \in{\mathbb{R}}^s,
\]
locally well-defined on a neighbourhood of $C_\varphi$ within $U$.
In view of the non-degeneracy of $\varphi$, $\Phi$ has a surjective differential, so that we can consider the pullback of distributions $d_\varphi=\Phi^*\delta$, with $\delta=\delta_0\in\cD^\prime({\mathbb{R}}^s)$
the Dirac distribution concentrated at the origin of ${\mathbb{R}}^s$ (cf. \cite[Ch. VI]{Hormander1}). More explicitly, choosing functions $(t_1, \dots, t_d)=:t$, which
restrict to a local coordinate system (up to the boundary) on $C_\varphi$, the pull-back $d_\varphi$ can be expressed locally as the density
\[
d_\varphi=\left| \det\frac{\partial(t,\Phi)}{\partial(\mathbf{x}, \mathbf{y} )}\right|^{-1}\dd t = \Delta_\varphi(t)\, \dd t.
\]
Consider another local non-degenerate
phase function $\wtp$ parametrizing $\Lambda$,
defined on an open subset $\wtU\subset X\times \wtY$, such that $\wtp=F^*\varphi$, with a (local, fibered)
diffeomorphism $F=\mathrm{id}\times g\colon X\times\wtY\to X\times Y$.
Since $F$ is an ${\mathrm{sc}}$-map, there exists a function $h \in {\mathscr{C}^\infty}(X\times \wtY)$ such that $(F^*\rho_Y)(\mathbf{x},\wby)=\rho_{\wtY}\cdot h(\mathbf{x},\wby)$.
As above, we identify ${}^\scat \dd_Y\wt\varphi$ with the map $\wtP$ and define $d_\wtp$ and $\Delta_\wtp(\widetilde{t})$ in terms of the functions ${\wt t}_j=F^*t_j$, which are
local coordinates on $C_\wtp$, provided $\wtU$ is small enough.
In the sequel, we show how objects defined in these two choices $(t,\varphi)$ and $(\wt t,\wt \varphi)$ are related. For that, we implicitly assume all objects evaluated at corresponding points $(\mathbf{x},\mathbf{y})\in C_\varphi$ (parametrized by $t$) and $(\mathbf{x},\wt \mathbf{y})=F(\mathbf{x},\mathbf{y})\in C_{\wtp}$ (parametrized by $\wt t$).
\begin{lem}\label{lem:trDelta}
%
The functions $\Delta_\wtp(\widetilde{t})$ and $\Delta_\varphi(t)$ are related by
%
\[
\Delta_\wtp(\widetilde{t}) =
h(\mathbf{x},\wby)^{s+1} \left| \det\frac{\partial g(\mathbf{x},\wby)}{\partial \wby }\right|^{-2} \, \Delta_\varphi(t(\widetilde{t})).
\]
%
\end{lem}
\begin{proof}[Proof of Lemma \ref{lem:trDelta}]
By direct computation, $\wtP$ and $\Phi$ are related by a matrix $M_{\Phi\wtP}$ via
\begin{equation}\label{eq:MPtP}
\wtP(\mathbf{x},\wby)= \Phi(F(\mathbf{x},\wby)) \cdot M_{\Phi\wtP}(\mathbf{x},\wby),
\end{equation}
where
\begin{align*}
M_{\Phi\wtP}(\mathbf{x},\wby)&=
\begin{pmatrix}
[h(\mathbf{x},\wby)]^{-2} \dfrac{\partial\rho_Y}{\partial\rho_{\wtY}}(\mathbf{x},\wby)
&
[h(\mathbf{x},\wby)]^{-2} \rho_{\wtY}^{-1}\dfrac{\partial\rho_Y}{\partial \wty}(\mathbf{x},\wby)
\\
[h(\mathbf{x},\wby)]^{-1} \rho_{\wtY} \dfrac{\partial y}{\partial\rho_{\wtY}}(\mathbf{x},\wby) \rule{0mm}{9mm}
&
[h(\mathbf{x},\wby)]^{-1} \dfrac{\partial y}{\partial \wty}(\mathbf{x},\wby)
\end{pmatrix}
\end{align*}
and
\[
|\det M_{\Phi\wtP}(\mathbf{x},\wby)|=h(\mathbf{x},\wby)^{-s-1}\cdot\left| \det \frac{\partial g(\mathbf{x},\wby)}{\partial \wby} \right|.
\]
Differentiating \eqref{eq:MPtP}, we obtain, using that $\wtP(\mathbf{x},\wby)=\Phi(F(\mathbf{x},\wby))=0$ on $C_{\widetilde{\varphi}}$,
\begin{equation}\label{eq:wpJ}
\begin{aligned}
\frac{\partial\wtP}{\partial (\mathbf{x},\wby)}(\mathbf{x},\wby)
&={{}^t}\!M_{\Phi\wtP}(\mathbf{x},\wby) \cdot \frac{\partial(\Phi(F(\mathbf{x},\wby)))}{\partial(\mathbf{x},\wby)}
\\
&={{}^t}\!M_{\Phi\wtP}(\mathbf{x},\wby) \cdot \left[\frac{\partial\Phi}{\partial(\mathbf{x},\mathbf{y})}(F(\mathbf{x},\wby))\right]\cdot\frac{\partial F}{\partial(\mathbf{x},\wby)}(\mathbf{x},\wby).
\end{aligned}
\end{equation}
Furthermore, we have
\[
\frac{\partial\wtl}{\partial(\mathbf{x},\wby)}(\mathbf{x},\wby)=\left[\frac{\partial t}{\partial(\mathbf{x},\mathbf{y})}(F(\mathbf{x},\wby))\right]\cdot\frac{\partial F}{\partial(\mathbf{x},\wby)}(\mathbf{x},\wby).
\]
Summing up, we find
\begin{equation}\label{eq:wtwpJ}
\begin{aligned}
\frac{\partial(\wtl,\wtP)}{\partial(\mathbf{x}, \wby)}(\mathbf{x}, \wby) &=
\mathrm{diag}(\mathbbm{1}_{d}, {{}^t}\!M_{\Phi\wtP}(\mathbf{x},\wby)) \cdot \left[ \frac{\partial(t,{\Phi})}{\partial(\mathbf{x}, {\mathbf{y}})}(F(\mathbf{x},\wby))\right] \cdot
\frac{\partial F}{\partial (\mathbf{x},\wby)}(\mathbf{x},\wby),
\end{aligned}
\end{equation}
which in turn implies, using $F=\mathrm{id}\times g$,
\begin{equation*}
\Delta_{\wtp}(\widetilde{t}) =
\left| \det\frac{\partial(\wtl,\wtP)}{\partial(\mathbf{x}, \wby)}(\mathbf{x}, \wby) \right|^{-1}
= [h(\mathbf{x},\wby)]^{s+1} \left| \det \frac{\partial g(\mathbf{x},\wby)}{\partial \wby} \right|^{-2} \Delta_\varphi(t(\widetilde{t})),
\end{equation*}
as claimed.
\end{proof}
We define
\begin{equation}\label{eq:gamma}
w_\varphi=(\rho_X^{-m_e}\rho_Y^{-m_\psi-(s+1)/2} \ap)|_{C_\varphi} \cdot \sqrt{|d_\varphi|},
\end{equation}
with $\ap$ given in \eqref{eq:canampl}, which is a half-density on (the interior of) $C_\varphi$.
To define $w_\wtp$ accordingly, we check that $I_\varphi(a)$ transforms under the action of $F$ as
\begin{align*}
\int_Y e^{i\varphi}a&=\int_{\wtY} e^{i(F^*\varphi)(\mathbf{x},\wby)}
F^*\!\!\left[\rho_X^{-m_e}\rho_{Y}^{-m_\psi}\ap\,\rho_X^{-(d+1)/2}\rho_Y^{-s-1}\sqrt{\mu_X}\otimes \dd\rho_Y \dd y\right](\mathbf{x},\wby)
\\
&= \int_{\wtY} e^{i\wtp(\mathbf{x},\wby)}\rho_X^{-m_e}\rho_{\wtY}^{-m_\psi}\wtap(\mathbf{x},\wby)\,
(\rho_X^{-(d+1)/2}\rho_{\wtY}^{-s-1}\sqrt{\mu_X}\otimes \dd\rho_\wtY \dd\wty),
\end{align*}
where
\begin{equation}\label{eq:trap}
\wtap(\mathbf{x},\wby)=\ap(F(\mathbf{x},\wby)) h(\mathbf{x},\wby)^{-m_\psi-s-1} \left| \det \frac{\partial g(\mathbf{x},\wby)}{\partial \wby} \right|.
\end{equation}
We define, coherently with \eqref{eq:gamma}, $w_\wtp = \rho_X^{-m_e}\rho_{\wtY}^{-m_\psi-(s+1)/2}\wtap\sqrt{|d_{\wtp}|}$.
\begin{lem}\label{lem:wphi}
The half-densities $w_\wtp$ and $w_\varphi$ are related by
\begin{align*}
w_\wtp =
F^*w_{\varphi}
\end{align*}
in (the interior of) $C_{\wtp}$.
\end{lem}
\begin{proof}
We obtain from \eqref{eq:trap} and Lemma \ref{lem:trDelta} that
\begin{align*}
\wtap(\mathbf{x},\wby)\left|\Delta_\wtp(\widetilde{t})\right|^{1/2} =\ap(F(\mathbf{x},\wby)) h(\mathbf{x},\wby)^{-m_\psi-(s+1)/2} \left| \Delta_\varphi(t(\widetilde{t}))\right|^{1/2}.
\end{align*}
Then, using the local coordinates $t$ and $\wt t=F^*t$ introduced above, on $C_\wtp$ we find
\begin{align*}
w_\wtp &= F^*\hspace*{-3pt}\left(\rho_X^{-m_e}\rho_Y^{-m_\psi-(s+1)/2} \ap\right) \left| \Delta_\varphi(t(\widetilde{t}))\right|^{1/2} \sqrt{\left|\dd \widetilde{t}\right|}\\
&=
F^*\hspace*{-3pt}\left(\rho_X^{-m_e}\rho_Y^{-m_\psi-(s+1)/2} \ap
\left|\Delta_\varphi(t)\right|^{1/2}
\sqrt{|\dd t|}\right)=
F^*w_{\varphi}.
\end{align*}
\end{proof}
As a half-density valued amplitude, $w_\varphi$ is
of order $(m_e,m_\psi-(s+1)/2)$, as shown by the computations above.
In accordance with the definition of the principal part (cf. Definition \ref{def:princpart}), we set
\[
\wpp=\left.\left(\ap \cdot \sqrt{|d_\varphi|}\right)\right|_{\mathcal{C}_\varphi}.
\]
As seen above, $\wpp$ transforms to
$\wtpp$ under the pull-back via
$F$. Since $\lambda_\varphi$ is a local diffeomorphism $C_\varphi\to \Lambda_\varphi$, we can also consider
\[
\alpha_\varphi=(\lambda_\varphi)_*(\wpp),
\]
which yields a local half-density on $\Lambda_\varphi$. The fact that, for the two
equivalent phase functions $\varphi$ and $\wtp$, we have
$\lambda_\wtp=\lambda_{\varphi}\circ F$, together with
the transformation properties of $\wpp$, shows
that
\[
\alpha_\wtp=\alpha_{\varphi}=\alpha,
\]
that is, $\alpha_\wtp$ and $\alpha_\varphi$ are equivalent local
representations of a half-density $\alpha$ defined on $\Lambda$,
in the local parametrizations $\Lambda_\wtp$ and $\Lambda_\varphi$,
respectively.
We now prove that the same holds true if $\wtp$ is merely a non-degenerate phase function equivalent to $\varphi$ in the sense of Definition \ref{def:phequiv}.
First, if we repeat the construction of $\sqrt{|d_\wtp|}$ described above, all the computations remain valid modulo terms, generated by $\wtP$,
which contain an extra factor $\rho_X\rho_{\wtY}$. This is due to
\begin{align*}
F^*\varphi-\wtp&\in{\mathscr{C}^\infty}(\wtU)
\\
&\Leftrightarrow
\rho_X^{-1}\rho_{\wtY}^{-1}
\widetilde{f}(\mathbf{x},\wby)
=\rho_X^{-1}\rho_{\wtY}^{-1}h(\mathbf{x},\wby)^{-1}
(F^*f)(\mathbf{x},\wby)+\chi(\mathbf{x},\wby),
\quad \chi\in{\mathscr{C}^\infty}(\wtU),
\\
&\Leftrightarrow
\widetilde{f}(\mathbf{x},\wby)
=h(\mathbf{x},\wby)^{-1}(F^*f)(\mathbf{x},\wby)
+\rho_X\rho_{\wtY} \chi(\mathbf{x},\wby),
\quad \chi\in{\mathscr{C}^\infty}(\wtU).
\end{align*}
Then, by rescaling $w_{\wtp}$ through multiplication by $\rho_X^{m_e}\rho_{\wtY}^{m_\psi+(s+1)/2}$ and then restricting to $\mathcal{C}_\wtp$,
such additional terms vanish identically.
Moreover, by Lemma \ref{lem:phpequiv} and Remark \ref{rem:strictness},
we know that, in a neighbourhood $\wtU$ of any point in the interior of
$\mathcal{C}_\wtp^e$ or $\mathcal{C}_\wtp^\psi$, which does not intersect
$\mathcal{C}_\wtp^{\psi e}$, it can be assumed, after passage to the principal parts, that $\wtp=F^*\varphi$ on $\mathcal{C}_\wtp\cap\partial \wtU$, see Section \ref{sec:moves}. It follows that the factor
$\exp(i(F^*\varphi-\wtp))$, appearing in $\wtap$ (cf. \eqref{eq:oscintequiv}), also disappears, away from the corner, when
restricting to the faces $\mathcal{C}_\wtp^e$ or $\mathcal{C}_\wtp^\psi$.
Finally, we observe that $\wpp$ and $\wtpp$ are obtained
as restrictions of smooth objects on $X\times Y$ and $X\times \wt Y$ to their respective boundaries. As such, their transformation behavior extends, by continuity, to the corner as well,
producing smooth objects on $\mathcal{C}_\varphi$ and $\mathcal{C}_\wtp$.
By push-forward
through $\lambda_\wtp$ and ${\lambda_\varphi}$, we find again that
$\alpha_\wtp=\alpha_\varphi=\alpha$ locally on
$\Lambda_\wtp=\Lambda_\varphi=\Lambda$.
\subsection{Non-degenerate phase functions, reduction of the fiber}\label{subs:ndgfbred}
We now consider a $\varphi$ such that reduction of fiber variables, see Section \ref{subs:fbred}, is possible. By the argument in Section \ref{ssbs:redfbr}, we may then write $I_\varphi(a)=I_{\pred}(b)$
with $b$ from \eqref{eq:fiberredsymb}.
We now compare $\alpha_\varphi$ to the analogously defined half-density $\beta_\pred$. We can
replace the phase function $\varphi$ by the equivalent phase function given in \eqref{eq:wtp}, and this does not affect $\alpha_\varphi$. Hence we may assume that $\varphi$ is of the form $\varphi(\mathbf{x},\mathbf{y})=\pred(\mathbf{x},\mathbf{y}')+\frac{1}{2}\rho_X^{-1}\rho_Y^{-1} \langle Qy'', y''\rangle$.
As such, we assume, in this splitting of coordinates, $C_\varphi\subset\{(\mathbf{x},\mathbf{y}',0)\}$.
We find:
\begin{lem}
\label{lem:dtransfnondeg}
Under the identification $C_{\pred}\times\{0\}=C_\varphi$, we have
\begin{equation*}
\sqrt{|d_{\varphi}|}=|\det Q|^{-\frac{1}{2}} \sqrt{|d_{\pred}|}.
\end{equation*}
\end{lem}
\begin{proof}
We compute
\begin{align*}
\Phi(\mathbf{x},\mathbf{y})
&=\big(-\fred(\mathbf{x},\byp)+\rho_Y\partial_{\rho_Y}\fred(\mathbf{x},\byp)\quad\partial_{y^\prime}\fred(\mathbf{x},\byp)\quad 0\big)\\
&\quad+\big(-\dfrac{1}{2}\langle Qy'', y''\rangle\quad 0 \quad Q\ypp\big)\\
&=:(\Pred(\mathbf{x},\byp)\quad0)
+\left(\Psi(\ypp)\quad Q\ypp\right)\in{\mathbb{R}}^{s-r}\times{\mathbb{R}}^{r}.
\end{align*}
Therefore,
\begin{align}
\nonumber
\frac{\partial(t,\Phi)}{\partial(\mathbf{x},\mathbf{y})}(\mathbf{x},\mathbf{y})&=
\begin{pmatrix}
\dfrac{\partial t}{\partial\mathbf{x}}(\mathbf{x},\mathbf{y}) & \dfrac{\partial t}{\partial\byp}(\mathbf{x},\mathbf{y}) & \dfrac{\partial t}{\partial \ypp}(\mathbf{x},\mathbf{y})
\\
\rule{0mm}{9mm}\dfrac{\partial\Pred}{\partial\mathbf{x}}(\mathbf{x},\byp) & \dfrac{\partial\Pred}{\partial\byp}(\mathbf{x},\byp) & \dfrac{\partial \Psi}{\partial \ypp}(\ypp)
\\
\rule{0mm}{6mm}0 & 0 & Q
\end{pmatrix}.
\end{align}
Consequently,
\begin{align*}
\sqrt{|d_{\varphi}|}&= \left|\det\frac{\partial(t,\Phi)}{\partial(\mathbf{x},\mathbf{y})}\right|^{-1/2}_{C_\varphi}
\sqrt{|dt|}
\\
&=\left|\det\frac{\partial (t,\Pred)}{\partial(\mathbf{x},\byp)}\right|^{-\frac{1}{2}}_{C_{\pred}}\cdot|\det Q|^{-\frac{1}{2}}
\sqrt{|dt|}
\\
&=|\det Q|^{-\frac{1}{2}} \sqrt{|d_{\pred}|}.
\end{align*}
\end{proof}
Notice that\footnote{Observe that $\mathfrak{a}_{\red}$ is obtained by splitting off the density and weight factors in two steps.} $\mathfrak{a}=\mathfrak{a}_{\red}$. We compute, by \eqref{eq:princred1}, modulo amplitudes of lower order,
\begin{multline}
\label{eq:orderb}
b(\mathbf{x},\byp)= \rho_X^{-m_e+r/2}\rho_Y^{-m_\psi-r/2} |\det Q|^{-1/2} e^{i\frac{\pi}{4} \mathrm{sgn}(Q)} \ap(\mathbf{x},\byp,0)
\sqrt{\mu_X} (\rho_Y^{-(s-r+1)/2}|d\byp|).
\end{multline}
We observe that $b$ is an amplitude of order $(m_e-r/2, m_\psi+r/2)$ and find
\begin{align*}
\bp(\mathbf{x},\byp)&=|\det Q|^{-1/2} e^{i\frac{\pi}{4} \mathrm{sgn}(Q)} \ap(\mathbf{x},\byp,0) + {\mathcal{O}}\big(\rho_X \rho_Y\big),
\end{align*}
which implies, using Lemma \ref{lem:dtransfnondeg},
\begin{align*}
\wredp&=\left.\left( \bp(\mathbf{x},\byp) \sqrt{|d_{\pred}|} \right)\right|_{\mathcal{C}_\pred}
\\
&=e^{i\frac{\pi}{4} \mathrm{sgn}(Q)}\left.\left( \mathfrak{a}(\mathbf{x},\mathbf{y}) \sqrt{|d_\varphi|}\right)
\right|_{\mathcal{C}_\varphi}
\\
&=e^{i\frac{\pi}{4} \mathrm{sgn}(Q)}\wpp.
\end{align*}
This, in turn, finally gives
\[
\beta_{\pred}=(\lambda_{\pred})_*(\wredp)=e^{i\frac{\pi}{4} \mathrm{sgn}(Q)}\cdot(\lambda_\varphi)_*(\wpp)=e^{i\frac{\pi}{4} \mathrm{sgn}(Q)}\cdot\alpha_\varphi.
\]
\subsection{Clean phase functions, elimination of the excess}\label{subs:clnphf}
We now proceed with the last reduction step, namely, we consider a clean phase function and eliminate its excess. As in Section \ref{subss:elexcess}, we assume $Y={\mathbb{B}}^{s-e}\times(-\epsilon,\epsilon)^e$ with the fibers of $\mathcal{C}_\varphi\rightarrow \Lambda_\varphi$ given by constant $(\mathbf{x},\rho_Y,y')$ and arbitrary
$y''\in(-\epsilon,\epsilon)^e$.
Switching to the phase function $\wt{\varphi}$ in \eqref{eq:wtredex}, we may write $I_\varphi(a)=I_{\wt\varphi}(b)$ with $b$ defined in \eqref{eq:excesssymb}. We apply the
construction of the previous section, and obtain
the half-density $\beta_{\wt{\varphi}}=(\lambda_{\wtp})_*\left(\bp\cdot \sqrt{|d_\wtp|}\right)_{\mathcal{C}_\wtp}$ from the data $(\wtp,b)$.
Alternatively, we may study the parameter dependent family of oscillatory integrals
$I_{\phi(y^{\prime\prime})}(a(y^{\prime\prime}))$ with phase functions $\phi(y^{\prime\prime})$ defined in \eqref{eq:phiypp}
and amplitudes
\[
a(y^{\prime\prime})\colon(\mathbf{x},\rho_Y,y^\prime)\mapsto \rho_Y^{-e}\,a(\mathbf{x},\rho_Y,y^\prime,y^{\prime\prime})=\rho_Y^{-e}\,a(\mathbf{x},\mathbf{y}),
\]
with corresponding principal parts $\ap(y^{\prime\prime})$. Since $\phi(y^{\prime\prime})$ is non-degenerate, we can define the
parameter dependent family of half-densities on $\Lambda$
$$
\alpha_\phi(y^{\prime\prime})=(\lambda_{\phi(y^{\prime\prime})})_*\left(\ap(y^{\prime\prime}) \cdot \sqrt{|d_{\phi(y^{\prime\prime})}|}\right)_{\mathcal{C}_{\phi(y^{\prime\prime})}},
$$
and finally set
\begin{equation}\label{eq:prsymbexc}
\gamma_\wtp=\int_{(-\varepsilon,\varepsilon)^e}\alpha_\phi(y^{\prime\prime})\,dy^{\prime\prime}.
\end{equation}
\begin{prop}
The half-densities on $\Lambda_{\wtp}=\Lambda_{\varphi}=\Lambda$ given by $\gamma_\wtp$ and $\beta_\wtp$ coincide.
\end{prop}
\begin{proof}
We consider the smooth family of diffeomorphisms $G(y^{\prime\prime})=\mathrm{id}\times g(y^{\prime\prime})$, depending on the parameter $y^{\prime\prime}$,
involved in $\wt{G}$ from \eqref{eq:diffeoG}. Assuming that the amplitudes $a(y^{\prime\prime})$ are supported away from the corner points, we can suppose, as above, that
$G(y^{\prime\prime})^*\phi(y^{\prime\prime}) - \wtp=0.$
We now compute, using Lemma \ref{lem:CLFstarff} and the expression \eqref{eq:excesssymb}, together with the transformation properties of $\wpp$,
\begin{align*}
\left(\bp_\wtp\cdot \sqrt{|d_\wtp|}\right)(\mathbf{x},\rho_Y,y^\prime)|_{\cC_\wtp} &=
\bp_\wtp(\mathbf{x},\rho_Y,y^\prime)|_{\cC_\wtp}\left| \det\frac{\partial(\widetilde{t},\wtP)}{\partial(\mathbf{x}, \mathbf{y}^\prime )}\right|^{-\frac{1}{2}}_{\cC_\wtp}\sqrt{|d\widetilde{t}|}
\\
(\eqref{eq:trap} \Rightarrow) \qquad &=\int_{(-\varepsilon,\varepsilon)^e}\hspace*{-12pt}
\ap(G(\mathbf{x},\mathbf{y}))|_{\cC_\wtp}\,
\left|\det\frac{\partial g}{\partial\mathbf{y}^\prime}(\mathbf{x},\mathbf{y})\right|_{\cC_\wtp} \hspace*{-2pt} [h(\mathbf{x},\mathbf{y})]_{\cC_\wtp}^{-m_\psi-s-1}\times
\\
&\phantom{=\int_{(-\varepsilon,\varepsilon)^e}}
\times\left| \det\frac{\partial(\widetilde{t},\wtP)}{\partial(\mathbf{x}, \mathbf{y}^\prime )}\right|^{-\frac{1}{2}}_{\cC_\wtp}\sqrt{|d\widetilde{t}|}\,d y^{\prime\prime}
\\
(\text{Lemma }\ref{lem:trDelta} \Rightarrow) \qquad &=\int_{(-\varepsilon,\varepsilon)^e}\hspace*{-12pt}
G(y^{\prime\prime})^*\hspace*{-3pt}\left[\ap(\mathbf{x},\mathbf{y})|_{\cC_{\phi(y^{\prime\prime})}}\hspace*{-3pt}
\left| \det\frac{\partial(t,\Phi(y^{\prime\prime}))}{\partial(\mathbf{x}, \mathbf{y}^\prime )}\right|^{-\frac{1}{2}}_{\cC_{\phi(y^{\prime\prime})}}\hspace*{-18pt}\sqrt{|dt|}\right]\hspace*{-2pt}d y^{\prime\prime}
\\
(\text{Def. of }d_{\phi(y^{\prime\prime})}\Rightarrow) \qquad &=\int_{(-\varepsilon,\varepsilon)^e}\hspace*{-12pt}
G(y^{\prime\prime})^*\hspace*{-3pt}\left[\left(\ap(y^{\prime\prime})\cdot \sqrt{|d_{\phi(y^{\prime\prime})}|}\right)(\mathbf{x},\rho_Y,y^\prime)\right]_{\cC_{\phi(y^{\prime\prime})}}\hspace*{-3pt}d y^{\prime\prime}.
\end{align*}
Applying $(\lambda_\wtp)_*$ to the left-hand side, we obtain $\beta_{\wtp}$.
To apply $(\lambda_\wtp)_*$ to the right-hand side, we first recall that $\wtp$ and $\phi(y^{\prime\prime})$ are equivalent by $G(y^{\prime\prime})$. Using again Lemma \ref{lem:CLFstarff}
(see also Lemma \ref{lem:arrange}), this implies
\begin{equation}\label{eq:wtpphequiv}
\lambda_\wtp=\lambda_{\phi(y^{\prime\prime})}\circ G(y^{\prime\prime})\Rightarrow(\lambda_\wtp)_*=(\lambda_{\phi(y^{\prime\prime})})_*\circ G(y^{\prime\prime})_* .
\end{equation}
Since $\lambda_\wtp$ does not depend on $y^{\prime\prime}$, we can take it inside the integral and use \eqref{eq:wtpphequiv}, finally obtaining
\begin{align*}
\beta_\wtp&=
(\lambda_\wtp)_*\left[\int_{(-\varepsilon,\varepsilon)^e}
G(y^{\prime\prime})^*\hspace*{-3pt}\left[\left(\ap(y^{\prime\prime})\cdot \sqrt{|d_{\phi(y^{\prime\prime})}|}\right)\right]_{\cC_{\phi(y^{\prime\prime})}}\hspace*{-3pt}d y^{\prime\prime}\right]
\\
&=\int_{(-\varepsilon,\varepsilon)^e}(\lambda_{\phi(y^{\prime\prime})})_*\circ G(y^{\prime\prime})_*\circ
G(y^{\prime\prime})^*\hspace*{-3pt}\left[\left(\ap(y^{\prime\prime})\cdot \sqrt{|d_{\phi(y^{\prime\prime})}|}\right)\right]_{\cC_{\phi(y^{\prime\prime})}}\hspace*{-3pt}
d y^{\prime\prime}
\\
&=\int_{(-\varepsilon,\varepsilon)^e}(\lambda_{\phi(y^{\prime\prime})})_*
\hspace*{-3pt}\left[\left(\ap(y^{\prime\prime})\cdot \sqrt{|d_{\phi(y^{\prime\prime})}|}\right)\right]_{\cC_{\phi(y^{\prime\prime})}}\hspace*{-3pt}
d y^{\prime\prime}=\int_{(-\varepsilon,\varepsilon)^e}\alpha_\phi(y^{\prime\prime})\,dy^{\prime\prime}=\gamma_\wtp.
\end{align*}
Extension to the corner points as in the previous subsections proves the claim.
\end{proof}
We already showed that the half-density $\alpha$ associated with $I_\varphi(a)$ is invariant under a change of equivalent non-degenerate phase functions.
Together with the argument above, this also shows that the half-density $\gamma$ associated with
$I_\varphi(a)$ remains the same under a change of equivalent phase functions that are clean with the same excess.
\subsection{Principal symbol and principal symbol map} Let $u\in I^{m_e,m_\psi}(X,\Lambda)$. Consider any local representation of $u$, as introduced in Definition \ref{def:Lagdist},
with clean phase function $\varphi$ with excess $e$ associated with $\Lambda$ and $a$ some local symbol density.
The arguments in the previous subsections show how to associate with these data a half-density $\gamma$,
defined on $\Lambda$. We also showed that switching to an equivalent phase function, as well as eliminating the excess, does not change $\gamma$. The reduction of the fiber variables replaces $\gamma$
with $\gamma^\prime$ such that
\[
\gamma^\prime=e^{i\frac{\pi}{4}\mathrm{sgn}(Q)}\,\gamma,
\]
with $Q$ from \eqref{eq:wtp}. Let $\widetilde{\gamma}$ be the half-density defined by an integral representation $I_\wtp(\widetilde{a})$, with another
phase function $\widetilde{\varphi}$ associated with $\Lambda$. Then, similarly to \cite{Treves}, in general we have
\begin{equation}\label{eq:diffsigma}
\widetilde{\gamma}=e^{i(\sigma-\widetilde{\sigma})\frac{\pi}{4}}\,\gamma,
\end{equation}
where $\sigma=\mathrm{sgn}\left(\,\rho_{Y}^{-1}\rho_X^{-1}\,{}^\scat H_{Y}\varphi\right)$, and $\widetilde{\sigma}=\mathrm{sgn}\left(\,\rho_{\wtY}^{-1}\rho_X^{-1}\,{}^\scat H_{\wtY}\wtp\right)$.
Denote by $\widetilde{r}$ the number of fiber variables of $\wtp$, by $\widetilde{s}$ the dimension of $\wtY$, and by $\widetilde{e}$ the excess of $\wtp$, and define the integer
\[
\kappa=\frac{1}{2}(\sigma-\widetilde{\sigma}-s+\widetilde{s}+e-\widetilde{e}).
\]
Then, \eqref{eq:diffsigma} is equivalent to
\begin{equation}\label{eq:coherence}
i^\kappa e^{i(s-e)\frac{\pi}{4}}\,\gamma = e^{i(\widetilde{s}-\widetilde{e})\frac{\pi}{4}}\,\widetilde{\gamma}.
\end{equation}
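Indeed, writing $i^\kappa=e^{i\kappa\frac{\pi}{2}}$ and inserting the definition of $\kappa$, a direct computation gives
\begin{align*}
i^\kappa e^{i(s-e)\frac{\pi}{4}}\,\gamma
&= e^{i(\sigma-\widetilde{\sigma}-s+\widetilde{s}+e-\widetilde{e})\frac{\pi}{4}}\, e^{i(s-e)\frac{\pi}{4}}\,\gamma
= e^{i(\widetilde{s}-\widetilde{e})\frac{\pi}{4}}\, e^{i(\sigma-\widetilde{\sigma})\frac{\pi}{4}}\,\gamma\\
&= e^{i(\widetilde{s}-\widetilde{e})\frac{\pi}{4}}\,\widetilde{\gamma},
\end{align*}
where the last equality is \eqref{eq:diffsigma}.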
We are then led to the following definition of the principal symbol map.
\begin{defn}\label{def:prsymb}
Let $u\in I^{m_e,m_\psi}(X,\Lambda)$. We define $\mathscr{I}(u)=\{(Y_j,\varphi_j)\}$ as the collection of manifolds and associated clean phase functions $(Y_j,\varphi_j)$
locally parametrizing $\Lambda$, giving rise to local representations of $u$ in the form $I_{\varphi_j}(a_j)$.
With each pair $(Y,\varphi)\in\mathscr{I}(u)$ we associate the half-density $\gamma$, as described in Subsection \ref{subs:clnphf},
in such a manner that, for any other element $(\wtY,\wtp)\in\mathscr{I}(u)$, we have
the coherence relation \eqref{eq:coherence} in $\lambda_\varphi(Y)\cap\lambda_\wtp(\wtY)$. We call the collection of half-densities $\{\gamma_j\}$, each one associated
with $(Y_j,\varphi_j)\in\mathscr{I}(u)$, the \emph{principal symbol of $u$}, and write $j^\Lambda_{m_e,m_\psi}(u)=\{\gamma_j\}$.
\end{defn}
By an argument completely similar to the one in \cite{Treves}, we can prove the following result.
\begin{thm}
%
Let $\Lambda$ be a ${\mathrm{sc}}$-Lagrangian on $X$. Then, the map
%
\begin{equation}\label{eq:prsymbmap}
j^\Lambda_{m_e,m_\psi}\colon I^{m_e,m_\psi}(X,\Lambda)\ni u\mapsto \{\gamma_j\}
\end{equation}
%
given in Definition \ref{def:prsymb} is surjective.
Moreover, the null space of the map \eqref{eq:prsymbmap} is $I^{m_e-1,m_\psi-1}(X,\Lambda)$, and thus
\eqref{eq:prsymbmap} defines a bijection
%
\[
\text{classes in } I^{m_e,m_\psi}(X,\Lambda) / I^{m_e-1,m_\psi-1}(X,\Lambda) \mapsto \{\gamma_j\}.
\]
%
The image space of $j^\Lambda_{m_e,m_\psi}$ can be seen as ${\mathscr{C}^\infty}(\Lambda, M_\Lambda\otimes\Omega^{1/2})$, where $M_\Lambda$ is the Maslov bundle over $\Lambda$.
\end{thm}
\section{Introduction}
\label{sec:intro}
Transiting hot Jupiters (giant planets with periods $P<10$ d) have been efficiently
detected by several ground- and space-based surveys \citep[e.g.,][]{bakos:2004,pollacco:2006,
borucki:2010,bakos:2013}. This great number of discoveries has been key for constraining
theories of their formation, structure and evolution \citep[for a recent review see][]{dawson:2018}, but several unsolved
theoretical challenges have emerged from these observations as well.
For example, the specific source of the inflated radius of highly irradiated hot Jupiters
is a topic of active research. While several mechanisms have been proposed
\citep[for a review see][]{spiegel:2012},
their validation is not straightforward because in most cases the structural
composition (i.e. heavy element content) of these planets is not known, and therefore the
problem becomes degenerate.
Another long-standing theoretical challenge is the actual existence of these
massive planets at short orbital separations, because most theoretical models of
formation require that Jovian planets are formed beyond the snow line where solid
embryos are efficiently accreted \citep{rafikov:2006}. While some orbital displacement
of the planet due to exchange of angular momentum with a gaseous proto-planetary disc
is expected to happen, it is not clear that this type of interaction can account for
the currently known population of giant planets with semi-major axes shorter than 1~AU.
One particular challenge is that a significant fraction of hot Jupiter systems have been
found to have large spin-orbit angles, which are not expected to arise in a gentle disc
migration scenario \citep[for a review see][]{winn:2015}.
While high eccentricity migration models predict the existence of highly misaligned
spin-orbit angles, a direct comparison between the model predictions and the obliquity distribution of hot Jupiters can be affected by the possible realignment of the
outer layers of the star due to tidal and/or magnetic interactions \citep{dawson:2014,li:2016}.
Transiting warm Jupiters (giant planets with periods $P>10$ d) are valuable systems
in the above mentioned context.
Due to their relatively long planet-star separations ($a \gtrsim 0.1$ AU), the internal structure
of warm Jupiters is not significantly affected by the tidal, magnetic and/or radiative
mechanisms that can strongly alter hot Jupiters. For this reason, theoretical models can be used
to investigate the internal composition of giant planets and how this depends on the global properties of the system (i.e., stellar mass, [Fe/H], multiplicity) in a more straightforward fashion.
Along the same lines, given that for warm Jupiters the planet-star interaction is in general not
strong enough to realign the outer layers of the star, they are better suited systems
to test the predictions of high eccentricity migration models by studying the distribution
of spin-orbit angles \citep{petrovich:2016}.
Unfortunately, the detection of warm Jupiters around bright stars is hindered by
strong detection biases. The transit probability is proportional
to $a^{-1}$, and additionally the duty cycle required to discover transiting planets
with periods longer than 10 days is usually too high for typical ground-based
surveys, which are the ones that have made the most significant contribution to the
population of transiting giant planets with precisely determined masses and radii.
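The geometric bias mentioned above can be made concrete with a small sketch (the numbers below are illustrative assumptions, not values from this work): for a circular orbit, the probability that a planet transits is approximately $R_\star/a$.

```python
AU_IN_SOLAR_RADII = 215.032  # 1 AU expressed in solar radii

def transit_probability(r_star_rsun: float, a_au: float) -> float:
    """Geometric transit probability ~ R_star / a for a circular orbit."""
    return r_star_rsun / (a_au * AU_IN_SOLAR_RADII)

# A hot Jupiter at a = 0.05 AU versus a warm Jupiter at a = 0.10 AU,
# both around a solar-radius star:
p_hot = transit_probability(1.0, 0.05)   # ~0.093 (9.3%)
p_warm = transit_probability(1.0, 0.10)  # ~0.047 (4.7%)
```

The probability halves when the separation doubles, and on top of this the longer period demands a much longer observing baseline to accumulate enough transits, which is why ground-based surveys preferentially find hot Jupiters.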
One workaround to this problem is to build longitudinal networks of identical telescopes
to counteract the diurnal cycle \citep[e.g. HATSouth,][]{bakos:2013}. This configuration
has allowed the detection of planets with periods as long as 16 days \citep{brahm:2016:hs17}.
Another solution is the use of space-based telescopes. Due to its precise and continuous $\approx$2-month observations per field, the Kepler K2 mission \citep{howell:2014}
is able to detect warm Jupiters \citep{smith:2017, shporer:2017}.
Additionally, it has an increased probability of detecting this type of system
around bright stars compared to the original \textit{Kepler} mission, because it surveys a
larger area of the sky.
In this study we present the discovery of a warm Saturn around a bright star with the \textit{Kepler} telescope.
This discovery was made in the context of the
K2CL collaboration, which uses spectroscopic facilities located in Chile to confirm and characterize transiting planets from K2 \citep{brahm:2016:k2,espinoza:2017:k2,jones:2017,soto:2018}. The structure of the paper is as follows. In \S~\ref{sec:obs} we present the
photometric and spectroscopic observations that allowed the discovery of EPIC247098361b,
in \S~\ref{sec:ana} we derive the planetary and stellar parameters, and we discuss our findings in \S~\ref{sec:disc}.
\section[]{Observations}
\label{sec:obs}
\subsection{Kepler K2}
\label{sec:k2}
EPIC247098361 was observed by the Kepler K2 mission between March and May 2017, during the
monitoring of campaign 13. The photometric data were reduced from pixel-level products using the EVEREST algorithm \citep{EVEREST1,EVEREST2}. Long-term trends in the data were corrected with a Gaussian-process regression. Transiting planet detection was performed by running the
Box-fitting Least Squares algorithm \citep{BLS} on the processed light curves.
With this procedure we identified an $11.17$ day periodic signal, with a depth
consistent with that of a giant planet transiting a main sequence star.
The (detrended) K2 photometry for this target star is shown in Figure~\ref{fig:photometry}.
Due to the clear box-shaped transits and the brightness of the star, EPIC247098361
was selected as a high priority target for spectroscopic follow-up observations.
\begin{figure*}
\includegraphics[width=2\columnwidth]{photometry.pdf}
\caption{Detrended K2 photometry of EPIC 247098361. The transits of our planet candidate have been identified by red marks below each event.}
\label{fig:photometry}
\end{figure*}
\subsection{Spectroscopic Observations}
\label{sec:spec}
High resolution spectroscopic observations are required to characterize the
host star, identify possible false positive scenarios, and to confirm the
planetary nature of the transiting companion via mass determination from the
radial velocity signal. The spectroscopic facilities that were used in this
work are summarized in Table~1, along with the main
properties of the observations.
\begin{table*}
\centering
\begin{minipage}{180mm}
\caption{Summary of spectroscopic observations for EPIC247098361.}
\begin{tabular}{@{}ccccccc@{}}
\hline
Instrument & UT Date(s) & N Spec. & Resolution & S$/$N range & $\gamma_{RV}$ [km s$^{-1}$] & RV Precision [m s$^{-1}$] \\
\hline
Coralie / 1.2m Euler/Swiss & 2017 Oct 31 -- Nov 2 & 3 &60000 & 27 -- 44 & 22.327 & 10 \\
FEROS / 2.2m MPG & 2017 Oct 03 -- 2018 Jan 28 & 18 & 50000 & 106 -- 167 & 22.398 & 7 \\
HARPS / 3.6m ESO & 2017 Nov 01 -- 2017 Nov 08 & 8 & 115000 & 36 -- 50 & 22.416 & 6 \\
\hline
\end{tabular}
\end{minipage}
\label{tab:specobs}
\end{table*}
We obtained three spectra of EPIC247098361 with the Coralie spectrograph \citep{CORALIE} mounted
on the 1.2m Euler/Swiss telescope located at the ESO La Silla observatory.
Observations were obtained on three consecutive nights (2017 October 31 to November 2), and
they were acquired with the simultaneous calibration mode \citep{baranne:96}
where a secondary fiber is illuminated by a Fabry-Perot etalon in order to
trace the instrumental velocity drift produced by the changes in the environmental
conditions of the instrument enclosure.
Coralie data was processed and analyzed with the CERES automated package \citep{jordan:2014,brahm:2017:ceres}. On top of the reduction and optimal
extraction of the spectra, CERES delivers precision radial velocity and
bisector span measurements by using the cross-correlation technique, and
an initial estimate of the atmospheric parameters by comparing the reduced
spectra with a grid of synthetic models \citep{coelho:2005}.
These three spectra allowed us to conclude that EPIC247098361 is a dwarf star
($\log(g)\approx 4.2$) with an effective temperature of T$_{eff}\approx 5900$ K.
Additionally, there was no evidence of additional stellar components in the
spectra that could be linked to blended eclipsing binary scenarios, and the
radial velocity measurements rejected the presence of large velocity variations
caused by a stellar-mass orbital companion. These properties motivated intensive follow-up of EPIC247098361, and we proceeded to obtain spectra with more capable facilities.
We obtained 18 spectra of EPIC247098361 between October 2017 and January 2018
with the FEROS spectrograph \citep{kaufer:99} mounted on the MPG 2.2m telescope,
and another eight spectra of the same target in November 2017 with the HARPS
spectrograph \citep{mayor:2003} mounted on the ESO 3.6m telescope. Both facilities
are located at the ESO La Silla Observatory. The FEROS observations were
performed with the simultaneous calibration mode where a Thorium-Argon lamp
illuminates a second fiber during the science observations.
Given that the nightly instrumental drift of the HARPS spectrograph is
significantly smaller than the expected radial velocity variation produced
by a giant planet, the secondary fiber of this instrument was not used to
trace the velocity drift. Reductions and analysis of FEROS and HARPS spectra
were performed with the CERES automated package. The radial velocity
and bisector span measurements are presented in Table~4, and the
radial velocity curve is plotted in Figure~\ref{rvs-t}.
As can be seen in this figure, the velocities obtained with the three
instruments are consistent with the radial velocity variation produced by a
giant planet with an eccentric orbit.
Additionally, no significant correlation was detected between the radial
velocities and bisector span measurements, as can be seen in
the radial velocity vs. bisector span (BIS) scatter plot on Figure~\ref{rvs-bs}.
We computed the distribution for the Pearson correlation coefficient between
the radial velocities and bisector span measurements, finding that it lies between
$-0.13$ and $0.65$ at the 95\% confidence level, and is therefore consistent with
no correlation. These spectroscopic observations allowed us to confirm that
the transit-like signal observed in the K2 data is produced by a planetary
mass companion.
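The quoted confidence interval for the Pearson coefficient can be obtained with a percentile bootstrap. A minimal sketch of the procedure, using synthetic uncorrelated RV- and BIS-like values in place of the actual measurements listed in Table~4 (all numbers here are illustrative assumptions):

```python
import math
import random

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def bootstrap_ci(x, y, n_boot=2000, level=0.95, seed=1):
    """Percentile bootstrap confidence interval for Pearson's r."""
    rng = random.Random(seed)
    n = len(x)
    rs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        rs.append(pearson_r([x[i] for i in idx], [y[i] for i in idx]))
    rs.sort()
    lo = rs[int(n_boot * (1.0 - level) / 2.0)]
    hi = rs[int(n_boot * (1.0 + level) / 2.0) - 1]
    return lo, hi

# Synthetic, uncorrelated RV- and BIS-like residuals (29 points, as many
# as the spectra in Table 1; illustrative only):
rng = random.Random(0)
rv = [rng.gauss(0.0, 30.0) for _ in range(29)]   # m/s
bis = [rng.gauss(0.0, 20.0) for _ in range(29)]  # m/s
low, high = bootstrap_ci(rv, bis)
# An interval that straddles zero is consistent with no RV-BIS correlation,
# i.e. with a planetary rather than blend origin of the RV signal.
```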
\begin{figure*}
\includegraphics[width=\textwidth]{ws.pdf}
\caption{ Radial velocity (RV) curve obtained with FEROS (red), Coralie (green) and HARPS (blue). The black line corresponds to the Keplerian model with the posterior parameters found in \S~\ref{sec:ana}. }
\label{rvs-t}
\end{figure*}
\begin{figure}
\includegraphics[width=\columnwidth]{CL009-13-bs-rv-correl.pdf}
\caption{ Radial velocity (RV) versus bisector span (BIS) scatter plot using data from our spectroscopic observations of EPIC247098361. No significant correlation was found. }
\label{rvs-bs}
\end{figure}
\section{Analysis}
\label{sec:ana}
\subsection{Stellar parameters}
\label{sec:stellar-parameters}
We used the co-added FEROS spectra to estimate the stellar atmospheric parameters of EPIC247098361 by
using the ZASPE code \citep{brahm:2015,brahm:2016:zaspe}. ZASPE determines T$_{eff}$, $\log(g)$, [Fe/H], and $v\sin(i)$ by
comparing the observed spectra to synthetic ones in the spectral regions most sensitive to changes in those parameters.
Additionally, reliable uncertainties are obtained from the data by performing Monte Carlo simulations that take into account the systematic mismatches between data and models.
Using this procedure we obtain the following parameters:
T$_{eff}$ = 6020 $\pm$ 83 K, $\log(g)$ = 4.22 dex, [Fe/H] = 0.04 dex, and $v\sin(i)$ = 4.0 km s$^{-1}$, which are consistent with the initial estimates provided by CERES.
EPIC247098361 was observed by GAIA, and its parallax is reported in DR1 \citep[$p$ = 7.69 $\pm$ 0.27 $mas$,][]{gaia:2016, gaia:2016:dr1}. We used this parallax value coupled to the
reported magnitudes in different bandpasses to estimate the stellar radius, following an approach similar to that of \citet{barragan:2017}.
Specifically, we used the \texttt{BT-Settl-CIFIST} spectral models from \citet{baraffe:2015}, interpolated in \ensuremath{T_{\rm eff}} and $\log(g)$,
to generate a synthetic spectral energy distribution (SED) consistent with the atmospheric parameters of EPIC247098361.
We then integrated the SED in different spectral regions to generate synthetic magnitudes that were weighted by the corresponding
transmission functions of the passband filters.
The synthetic SED along with the observed flux density in the different filters are plotted in Figure \ref{sed}.
These synthetic magnitudes were used to infer the stellar radius (R$_{\star}$) and the extinction factor (A$_V$) by comparing them to the observed
magnitudes, after correcting for the geometric dilution of the stellar flux with distance. Specifically, the data we fit were the band luminosities obtained
by multiplying the observed flux density $F_{\rm obs}^{\lambda_i}$ in each passband filter ($\lambda_i$) by the square of the distance inferred from the GAIA parallax ($D$):
\begin{equation}
L_{\rm obs} = 4 \pi F_{\rm obs}^{\lambda_i} D^{2} .
\end{equation}
The adopted model was
\begin{equation}
L_{\rm mod}= 4 \pi F_{\rm syn}^{\lambda_i} R_{\star}^{2}\, 10^{-A(\lambda_i)/2.5},
\end{equation}
where $F_{\rm syn}^{\lambda_i}$ is the synthetic flux density at the different passband filters, and $A(\lambda_i)$ is the
wavelength-dependent extinction factor, which we take to be a function of the visual extinction ($A_V$) as described in \citet{cardelli:89}.
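Inverting these two relations for a single passband gives $R_{\star} = D\,\sqrt{F_{\rm obs}^{\lambda_i}/F_{\rm syn}^{\lambda_i}}\;10^{A(\lambda_i)/5}$, adopting the standard magnitude form of the extinction term. A minimal numerical sketch (the flux ratio below is a hypothetical value chosen for illustration, not a measurement):

```python
import math

PC_IN_RSUN = 4.435e7  # 1 parsec expressed in solar radii

def radius_from_flux(f_obs, f_syn_surface, parallax_mas, a_mag):
    """Invert F_obs * D^2 = F_syn * R^2 * 10^(-A/2.5) for one passband:
       R = D * sqrt(F_obs / F_syn) * 10^(A/5).
    Fluxes may be in any common units; returns R in solar radii."""
    d_rsun = (1000.0 / parallax_mas) * PC_IN_RSUN  # distance in solar radii
    return d_rsun * math.sqrt(f_obs / f_syn_surface) * 10.0 ** (a_mag / 5.0)

# Hypothetical observed-to-surface flux ratio, with the GAIA parallax of
# 7.69 mas and no extinction:
r_star = radius_from_flux(4.0e-20, 1.0, 7.69, 0.0)  # ~1.15 R_sun
```

In practice one such constraint per filter is combined in the likelihood, with $A_V$ as a shared free parameter.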
We used the \texttt{emcee} \texttt{Python} package \citep{emcee:2013} to sample the posterior distribution of R$_{\star}$ and $A_V$.
Figure~\ref{rstar} shows the posterior distribution for the parameters. The estimated stellar radius from the parallax measurement was coupled to the
estimated T$_{eff}$ to obtain the mass and evolutionary stage of the star by using the Yonsei-Yale isochrones \citep{yi:2001}. Figure~\ref{iso} shows
the isochrones in the T$_{eff}$--R$_{\star}$ plane for different ages, with the values for EPIC247098361 indicated with a blue cross. This analysis allowed us
to obtain a more precise estimate for the stellar $\log(g)$ than the value obtained with ZASPE. This new $\log(g)$ value was held fixed in a new ZASPE iteration,
which was followed by a new estimate of the stellar radius and a new comparison with the theoretical isochrones. After this final iteration,
the stellar $\log(g)$ value converged to 4.389 $\pm$ 0.017, and the other stellar properties to the values listed in Table~2.
We found that EPIC247098361 is a late F-dwarf star (M$_{\star}$=1.192 $\pm$ 0.025 M$_{\odot}$, R$_{\star}$=1.161 $\pm$ 0.022 R$_{\odot}$)
and that it is slightly metal rich ([Fe/H]=0.1 $\pm$ 0.04).
\begin{table}
\centering
\caption{Stellar properties and parameters for EPIC247098361.}
\label{tab:stellar}
\begin{tabular}{@{}lcc@{}}
\hline
Parameter & Value & Method / Source \\
\hline
\\
Names & EPIC247098361 & -- \\
RA & 04:55:03.96 & -- \\
DEC & 18:39:16.33 & -- \\
Parallax [$mas$] & 7.69 $\pm$ 0.27 & GAIA\\
\hline
\\
$K_p$ (mag) & 9.789 & EPIC\\
B (mag) &10.469 $\pm$ 0.029 & APASS\\
g (mag) &10.286 $\pm$ 0.184 & APASS\\
V (mag) & 9.899 $\pm$ 0.039 & APASS\\
r (mag) & 9.749 $\pm$ 0.033 & APASS\\
i (mag) & 9.663 $\pm$ 0.011 & APASS \\
J (mag) & 8.739 $\pm$ 0.025 & 2MASS\\
H (mag) & 8.480 $\pm$ 0.011 & 2MASS\\
Ks (mag) & 8.434 $\pm$ 0.014& 2MASS\\
W1 (mag) & 8.380 $\pm$0.024& WISE\\
W2 (mag) & 8.419 $\pm$ 0.019& WISE\\
W3 (mag) & 8.391 $\pm$ 0.027& WISE\\
\hline
\\
T$_{eff}$ [K] & 6154 $\pm$ 60 & ZASPE \\
log(g) [dex] & 4.379 $\pm$ 0.017 & ZASPE \\
$[$Fe/H] [dex] & 0.10 $\pm$ 0.04 & ZASPE \\
$v \sin{i}$ [km s$^{-1}$] & 4.16 $\pm$ 0.28 & ZASPE \\
\hline
\\
M$_\star$ [M$_\odot$] & 1.192$_{-0.024}^{+0.025}$ & ZASPE + GAIA + YY \\
R$_\star$ [R$_\odot$] & 1.161$_{-0.021}^{+0.023}$ & ZASPE + GAIA \\
L$_\star$ [L$_\odot$] & 1.718$_{-0.086}^{+0.101}$ & ZASPE + GAIA + YY\\
Age [Gyr] & 1.26$_{-0.74}^{+0.71}$ & ZASPE + GAIA + YY \\
A$_V$ & 0.129$_{-0.062}^{+0.065}$ & ZASPE + GAIA \\
\hline
\end{tabular}
\end{table}
\begin{figure*}
\includegraphics[width=\textwidth]{CL009-13_sed.pdf}
\caption{Spectral energy distribution of the \texttt{BT-Settl-CIFIST} model with atmospheric parameters similar to EPIC247098361. The observed
flux densities for the different passband filters are identified as red circles.}
\label{sed}
\end{figure*}
\begin{figure}
\includegraphics[width=\columnwidth]{CL009-13_rstar_av.png}
\caption{Triangle plot for the posterior distributions of R$_{\star}$ and $A_V$ obtained from the observed magnitudes and GAIA parallax.}
\label{rstar}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{CL009-13_iso.pdf}
\caption{Yonsei-Yale isochrones for the metallicity of EPIC247098361 in the T$_{eff}$--R$_{\star}$ plane. From left to right the isochrones correspond to 0.1, 1, 3, 5, 7, and 9 Gyr.
The position of EPIC247098361 in this plane is shown with a blue cross.}
\label{iso}
\end{figure}
\subsection{Global modelling}
We performed a joint analysis of the \textit{Kepler} K2 data and follow-up radial velocities in order to determine the transit and orbital parameters of the
planetary system. For this purpose we used the \texttt{exonailer} code which is described in detail in \citet{espinoza:2016:exo}.
Briefly, we model the transit light curves using the \texttt{batman}
package \citep{kreidberg:2015} and we fit them with the resampling
method described in \citet{kipping:2013} in order to account for the smearing effect of the K2 long-cadence light curves. Following the results of \citet{EJ:2015}, we fit for the limb-darkening coefficients simultaneously with the other transit parameters, and followed \citet{espinoza:2016:lds}
to select the quadratic limb-darkening as the optimal
law to use for the case of EPIC247098361, as this law provides the lowest expected mean-squared error
in the planet-to-star radius ratio. The limb-darkening coefficients were fit using the uninformative sampling technique of \citet{Kipping:LDs}. A photometric jitter term was also included in the fit of the photometry, in order to empirically estimate the noise of the light curves. The radial velocities are modelled with the \texttt{radvel} package \citep{fulton:2018}, where we consider a
different systemic velocity and jitter factor for each instrument. Additionally, we
consider the eccentricity and argument of periastron passage as free parameters with uniform priors (an eccentric fit to the whole dataset is preferred over a circular model, with $\Delta \textnormal{BIC}=14$ in favor of the eccentric fit), and put a prior on $a/R_*$ using the value obtained by our procedures described in \S~\ref{sec:stellar-parameters}, which gave $a/R_* = 19.30 \pm 0.35$, a more precise value than the one obtainable from the transit light curve alone. The priors and posteriors of our
modelling are listed in Table~3. The transit and radial velocity models
generated from the posterior distribution are presented in Figures \ref{exonailerlc} and \ref{exonailerrv}, along with the observed data.
We used these transit and orbital parameters to obtain
the physical parameters of the planet by using the stellar properties obtained in the
previous subsections. We found that EPIC247098361b has a Saturn-like mass of
M$_P$ = 0.397 $\pm$ 0.037 M$_J$, a Jupiter-like radius of R$_P$ = 1.000 $\pm$
0.020 R$_J$, and an equilibrium temperature of T$_{eq}$ = 991 $\pm$ 12 K.
Additionally, we found that the planet has a significantly eccentric orbit, with $e$ = 0.258 $\pm$ 0.025.
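These derived quantities can be checked against the posteriors with the standard Keplerian relations: the minimum mass follows from the RV semi-amplitude via $K = 28.4329\,\mathrm{m\,s^{-1}}\,(M_p\sin i/M_J)(1-e^2)^{-1/2}(M_\star/M_\odot)^{-2/3}(P/\mathrm{yr})^{-1/3}$, and the equilibrium temperature from $T_{eq}=T_{eff}(1-A_B)^{1/4}\sqrt{R_\star/2a}$ (zero Bond albedo and full heat redistribution assumed). A quick numerical check using the posterior medians from the tables:

```python
import math

MJUP_KFACTOR = 28.4329  # K in m/s for 1 M_J, 1 M_sun, P = 1 yr, e = 0

def planet_msini_mjup(k_ms, p_days, m_star_msun, ecc):
    """Minimum planet mass (M_J) from the RV semi-amplitude."""
    p_yr = p_days / 365.25
    return (k_ms / MJUP_KFACTOR) * math.sqrt(1.0 - ecc ** 2) \
        * m_star_msun ** (2.0 / 3.0) * p_yr ** (1.0 / 3.0)

def equilibrium_temperature(t_eff_k, a_over_rstar, bond_albedo=0.0):
    """T_eq = T_eff * (1 - A_B)^(1/4) * sqrt(R_star / (2 a))."""
    return t_eff_k * (1.0 - bond_albedo) ** 0.25 * math.sqrt(0.5 / a_over_rstar)

# Posterior medians quoted in the tables above:
msini = planet_msini_mjup(33.42, 11.168454, 1.192, 0.258)  # ~0.40 M_J
teq = equilibrium_temperature(6154.0, 19.25)               # ~992 K
```

Both values agree with the quoted M$_P$ = 0.397 $\pm$ 0.037 M$_J$ (the orbit is nearly edge-on, so $\sin i\approx 1$) and T$_{eq}$ = 991 $\pm$ 12 K.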
\begin{table*}
\centering
\begin{center}
\caption{Transit, orbital, and physical parameters of EPIC247098361b. In the priors, $N(\mu,\sigma)$ stands for a normal distribution with mean $\mu$ and standard deviation $\sigma$, $U(a,b)$ for a uniform distribution between $a$ and $b$, and $J(a,b)$ for a Jeffreys prior defined between $a$ and $b$.}
\label{tab:planet}
\begin{threeparttable}
\begin{tabular}{@{}lcc@{}}
\hline
Parameter & Prior & Value \\
\hline
Light-curve parameters \\
P (days) & $N(11.169,0.1)$ & 11.168454 $\pm$ 0.000023\\
T$_0$ (days) & $N(2457825.350,0.1)$ & 2457825.349782 $\pm$ 0.000093\\
R$_P$/R$_{\star}$ & $U(0.001,0.2)$ & 0.08868$^{+0.00044}_{-0.00042}$\\
$a/$R$_{\star}$ & $N(19.30,0.35)$ & 19.25$^{+0.27}_{-0.31}$\\
$i$ (deg) & $U(70,90)$ & 89.14$^{+0.13}_{-0.11}$ \\
q$_1$ & $U(0,1)$ & 0.417$^{+0.038}_{-0.037}$\\
q$_2$ & $U(0,1)$ & 0.318$^{+0.029}_{-0.028}$\\
$\sigma_w$ (ppm) & $J(10,5000)$ & 51.68$^{+0.68}_{-0.64}$\\
\hline
RV parameters\\
K (m s$^{-1}$) & $N(35,100)$ & $33.42^{+3.12}_{-3.02}$\\
$e$ & $U(0,1)$ & 0.258 $\pm$ 0.025 \\
$\omega$ (deg) & $U(0,360)$ & 207 $^{+3.6}_{-3.8} $\\
$\gamma_{coralie}$ (km s$^{-1}$) & $N(22.35,0.05)$ & 22.3394$^{+0.0087}_{-0.0092} $ \\
$\gamma_{feros}$ (km s$^{-1}$) & $N(22.40,0.05)$ & 22.3965$^{+0.0023}_{-0.0022} $\\
$\gamma_{harps}$ (km s$^{-1}$) & $N(22.40,0.05)$ & 22.3917$^{+0.0029}_{-0.0030} $ \\
$\sigma_{coralie}$ (km s$^{-1}$) & $J(10^{-4},0.1)$ & 0.0011$^{+0.0013}_{-0.0010} $ \\
$\sigma_{feros}$ (km s$^{-1}$) & $J(10^{-4},0.1)$ & 0.0029$^{+0.0042}_{-0.0025} $ \\
$\sigma_{harps}$ (km s$^{-1}$) & $J(10^{-4},0.1)$ & 0.0008$^{+0.0024}_{-0.0006} $ \\
\hline
Derived parameters\\
M$_P$ (M$_{J}$) & -- & 0.397 $\pm$ 0.037 \\
R$_P$ (R$_J$) & -- & 1.000$_{-0.020}^{+0.019}$ \\
$\langle$T$_{eq} \rangle$$^{a}$ (K) & -- & 1030 $\pm$ 15 \\
$a$ (AU)& -- & 0.10355$_{-0.00076}^{+0.00078}$ \\
\hline
\end{tabular}
\begin{tablenotes}
\item[$a$] Time-averaged equilibrium temperature using equation 16 of \citet{mendez:2017}.
\end{tablenotes}
\end{threeparttable}
\end{center}
\end{table*}
\begin{figure}
\includegraphics[width=\columnwidth]{CL009-13_transit.pdf}
\caption{ The top panel shows the phase-folded Kepler K2 photometry (black points) around the time of transit for EPIC247098361b,
along with the model constructed with the derived parameters of \texttt{exonailer} (blue line). The bottom panel shows the corresponding residuals.}
\label{exonailerlc}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{CL009-13_rv.pdf}
\caption{ The top panel presents the radial velocities (colored circles) obtained with the three spectrographs as a function of the orbital phase.
The RV model with the derived orbital parameters for EPIC247098361b is also plotted (blue line). The bottom panel shows the residuals obtained
for these radial velocity measurements.
}
\label{exonailerrv}
\end{figure}
\subsection{Rotational modulation and search of additional transits}
A search for additional transits was performed on the photometry using the BLS algorithm \citep{BLS} with the transits of EPIC 247098361b masked out. No significant signals were found, which puts a limit of $\approx 1.5R_\oplus$ on any transiting companion with a period shorter than $\approx 38$ days. Additionally, a Generalized Lomb-Scargle periodogram \citep{zk:2009} was run in order to search for any periodic signals in the data, but the only periods that stood out were at $1.04$ and $0.74$ days; these are most likely instrumental, as the phased data at those periods do not show any significant, physically interpretable signal. No secondary eclipses or phase curve modulations were found in the data, which is expected given that the secondary eclipse amplitude due to reflected light would be at most $(R_p/a)^2=21 \pm 0.71$ ppm, significantly below the photometric precision of 51 ppm.
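The reflected-light bound quoted above follows directly from the fitted geometry; a sketch of the arithmetic, assuming the most optimistic case of a geometric albedo of one:

```python
# Posterior medians from the global fit above:
rp_over_rstar = 0.08868
a_over_rstar = 19.25

# Maximum reflected-light secondary-eclipse depth, (Rp/a)^2:
amp_ppm = (rp_over_rstar / a_over_rstar) ** 2 * 1e6  # ~21 ppm

# Compare with the fitted photometric jitter sigma_w ~ 52 ppm:
detectable = amp_ppm > 51.7  # False: the signal sits below the noise floor
```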
\section{Discussion}
\label{sec:disc}
\begin{figure*}
\includegraphics[width=\textwidth]{obs.pdf}
\caption{V magnitude as a function of orbital period for the population
of transiting planets with masses and radii measured with a precision
better than 20\%. The size of the points represents the
transit depth, while the color is related to the radial velocity
semi-amplitude. EPIC247098361b (inside the black square) lies
in a sparsely populated region and is one of the few giant planets with
P$>$10d and V$<$10.}
\label{vmag}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{teq.pdf}
\caption{Population of transiting giant planets with radii and masses
determined to better than 20\% precision, plotted in the
T$_{eq}$--R$_{\star}$ plane. The green vertical bar
indicates the transition region: planets with
lower equilibrium temperatures have structures that are not significantly
affected by the inflation mechanism of hot Jupiters.
EPIC247098361b (inside the black square) lies just inside this
transition region.}
\label{teqrp}
\end{figure*}
EPIC247098361b is compared with the full population of transiting planets
with radii and masses determined to better than 20\% precision in Figures
\ref{vmag} and \ref{teqrp}. Due to its orbital period of $P=11.2$ d, EPIC247098361b lies in
a relatively sparsely populated region of parameter space.
Its time averaged equilibrium temperature of 1030 K lies just in the
transition region where the mechanism responsible for inflating the
radii of hot Jupiters stops playing a significant role
\citep{kovacs:2010,demory:2011}. Additionally, its current planet-to-star
separation at pericenter is large enough that the effects that
tidal and/or magnetic interactions can have on the structure and orbital
evolution of the system are expected to be small \citep{dawson:2014}.
EPIC247098361b is remarkably similar to WASP-117b \citep{lendl:2014}.
Both have Saturn-like masses, Jupiter-like radii, eccentricities
close to 0.3, orbital periods slightly longer than 10 days, and late
F-type host stars. WASP-117b has a slightly less metal-rich host star
than EPIC247098361 and its density is lower. Both systems are excellent
targets for performing detailed follow-up observations to further
understand the structure and evolution of giant planets that are
not affected by proximity effects. According to TEPCat \citep{tepcat}, there are only
$\approx 20$ other well characterised transiting giant planets with periods
longer than 10 days. EPIC247098361b ($V=9.9$) stands out as the
system with the brightest host star after HD~17156 \citep[$V=8.2$,][]{barbieri:2007}
and HD~80606 \citep[$V=9.1$,][]{moutou:2009}.
\subsection{Structure}
Due to the moderately low insolation levels received from its parent star,
the internal structure of EPIC247098361b can be studied by comparing its
measured mass and radius with the predictions of theoretical models.
We used the \citet{fortney:2007} models of planetary structure and evolution to
determine the mass of a possible central rocky core. These simple models assume
that all the solid material is concentrated in this core, which is likely a
simplification of the problem but serves as an illustration of the possible
internal composition of the planet. We find that, given the evolutionary
status of the EPIC247098361 system, the measured mass and radius of the planet
are consistent with having a relatively massive central core of M$_{core}$ = 20
$\pm$ 7 M$_{\oplus}$. This value is also consistent with the relation found by
\citet{thorngren:2016} between the mass of the planet and the total mass in
heavy elements. This relation was obtained using the properties of the known
transiting warm giant planets and more realistic models in which the solids are
also mixed in an H/He dominated envelope.
The prediction for the heavy element mass of EPIC247098361b is M$_Z$=33$\pm$12 M$_{\oplus}$,
where in this case only 10 M$_{\oplus}$ of solids are located in the central core,
and a large amount of this material is required to be distributed in the planet
envelope to reproduce the observed radius of EPIC247098361b.
These properties are consistent with the core accretion model of planet formation \citep{pollack:96}
in which the planet starts a runaway accretion of gaseous material as soon as
the solid embryo reaches a mass of 10 M$_{\oplus}$. In this process,
the planet keeps accreting rocky and icy planetesimals that have been decoupled
from the gaseous disc.
EPIC247098361b is a suitable system on which to use envelope-enriched
models to get an independent estimate of M$_Z$. If put in the context
of the full population of transiting warm giant planets, this estimate
of M$_Z$ can be used to probe for correlations with other physical and
orbital properties. Specifically, \citet{miller:2011} found a tentative
correlation between M$_Z$ and the stellar [Fe/H], which was later
called into question by \citet{thorngren:2016} using a larger sample
of systems and new structural models. Nonetheless, the parameters used
in the \citet{thorngren:2016} study were not obtained following a
homogeneous procedure, and additionally $\approx 25\%$ of their sample consisted
of weakly irradiated systems with orbital periods of $P<5$ days, whose structure
can suffer from other proximity effects (e.g. tidal, magnetic).
The combination of Gaia parallaxes, allowing a homogeneous characterization of
the host stars, coupled to new discoveries of transiting giant planets with
$P>10$ days by new ground-based (e.g. HATPI\footnote{https://hatpi.org/}),
and space-based missions \citep[e.g. TESS,][]{tess}, will be fundamental for linking the
inferred heavy element content of the planets with the global properties of
the systems.
\subsection{Migration}
While the current eccentricity of
EPIC247098361b is too low to produce significant
migration by tidal friction \citep{jackson:2008}, it can still
be migrating if the system is being affected by secular gravitational
interactions produced by a third distant body
\citep{kozai:62,lidov:62,li:2014,petrovich:2015}. These interactions produce
periodic variations of the eccentricity and inclination of the system, where
the interior planet migrates by tidal friction during the very high eccentricity stages, but most of the time the planet exhibits
moderate eccentricities. \citet{petrovich:2016} presented a model in which only 20\% of the warm Jupiter population
is migrating through this process. While this conclusion was reached from the eccentricity distribution of planets discovered with radial velocities,
a stricter test will need a study of the distribution of spin-orbit angles of warm Jupiters. The number of warm
Jupiters with measured spin-orbit angles is still low (only 10 studied systems according to TEPCat), and EPIC247098361b is a well
suited target for measuring this angle through the Rossiter-McLaughlin effect.
\subsection{Possible follow-up observations}
The bright host star coupled to the nearly equatorial declination of the system makes EPIC247098361
one of the most promising warm giant planets to perform detailed follow-up observations using Northern
and Southern facilities.
EPIC247098361b is a well suited system for studying the atmospheres of giant planets with moderately low irradiation.
While its expected transmission signal of $\delta_{trans}\approx450$ ppm is small compared to that of typical hot Jupiter systems, transmission spectra have been measured for
systems with $\delta_{trans}<500$ ppm and transit depths similar to that of EPIC247098361b. The system is especially interesting for atmospheric studies because warm Saturns like EPIC247098361b, given their metal enrichment, have been predicted to have low C/O ratios compared to those of their host stars \citep{Espinoza:2017}, a picture that has also been predicted for their hotter counterparts by population synthesis models \citep{mordasini:2016}; this system is thus an excellent one with which to put that picture to the test. In addition, the eccentricity of the system is interesting because different temperature regimes may be at play during transit and secondary eclipse, providing an interesting laboratory for exoplanet atmosphere models.
The EPIC247098361 system is also an ideal target for measuring the spin-orbit angle through the
Rossiter-McLaughlin effect. The estimated $v\sin{i}=4.2$ km s$^{-1}$ coupled to the planet-to-star size ratio
would produce an anomalous radial velocity signal with a semi-amplitude of $\approx$ 35 m s$^{-1}$ for an aligned orbit,
which is similar to the orbital semi-amplitude of the system, and which can be measured by numerous
spectroscopic facilities.
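The quoted semi-amplitude is consistent with the standard order-of-magnitude estimate $\Delta V_{RM} \approx (R_p/R_\star)^2\, v\sin{i}$; the sketch below illustrates this, where the planet-to-star radius ratio of $\approx 0.09$ is an assumed illustrative value (the text quotes only the resulting amplitude):

```python
def rm_semi_amplitude(radius_ratio, vsini_kms):
    """Order-of-magnitude Rossiter-McLaughlin semi-amplitude in m/s for an
    aligned, central transit; limb-darkening and impact-parameter factors
    of order unity are neglected."""
    return radius_ratio**2 * vsini_kms * 1000.0

# Assumed (hypothetical) radius ratio ~0.09 with the measured v sin i:
print(round(rm_semi_amplitude(0.09, 4.2)))  # 34, i.e. roughly the quoted 35 m/s
```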
Warm Jupiters have been proposed to have a significantly larger number of companions than
hot Jupiter systems \citep{huang:2016}. The presence of outer planetary-mass companions is also
required for the migration of inner planets through secular gravitational interactions \citep{dong:2014, petrovich:2016}.
EPIC247098361 can be the target of long term radial velocity follow-up observations to detect additional velocity signals
or trends that can be associated with distant companions. Finally, EPIC247098361 is a well suited
system to search for transit timing variations because its transits can be observed with relatively
small aperture telescopes from Southern and Northern facilities.
\section*{Acknowledgments}
R.B.\ gratefully acknowledges support by the
Ministry of Economy, Development, and Tourism's Millennium
Science Initiative through grant IC120009, awarded to The
Millennium Institute of Astrophysics (MAS).
A.J.\ acknowledges support from FONDECYT project 1171208,
BASAL CATA PFB-06, and project IC120009 ``Millennium Institute
of Astrophysics (MAS)" of the Millennium Science Initiative,
Chilean Ministry of Economy.
A.J.\ warmly thanks the Max-Planck-Institut f\"ur Astronomie for the hospitality during a sabbatical year where part of this work was done.
M.R.D.\ acknowledges support by CONICYT-PFCHA/Doctorado Nacional-21140646, Chile.
J.S.J.\ acknowledges support by FONDECYT project 1161218 and partial support by BASAL CATA PFB-06.
This paper includes data collected by the K2 mission. Funding
for the K2 mission is provided by the NASA Science Mission directorate.
Based on observations collected at the European Organisation for Astronomical
Research in the Southern Hemisphere under ESO programme 0100.C-0487(A).
\bibliographystyle{mnras}
\newcommand{\Section}[2]{\section{#2\label{sec:#1}}}
\newcommand{\Subsection}[2]{\subsection{#2\label{sec:#1}}}
\renewcommand{\sec}[1]{{s}ection \ref{sec:#1}}
\newenvironment{enum}
{\begin{list}{\makebox[\labelwidth][l]{(\arabic{enumi})}}{\usecounter{enumi}}
\setcounter{enumi}{\value{equation}}}
{\setcounter{equation}{\value{enumi}} \end{list}}
\newcommand{\meti}[2]{\item #2 \label{eqn:#1}}
\newcommand{\abbrevEnvir}{
\expandafter\newcommand\expandafter{\csname bi\endcsname}{\begin{itemize}}
\expandafter\newcommand\expandafter{\csname ei\endcsname}{\end{itemize}}
\expandafter\newcommand\expandafter{\csname be\endcsname}{\begin{enumerate}}
\expandafter\newcommand\expandafter{\csname ee\endcsname}{\end{enumerate}}
\expandafter\newcommand\expandafter{\csname bc\endcsname}{\begin{center}}
\expandafter\newcommand\expandafter{\csname ec\endcsname}{\end{center}}
}
\newcommand{\colon}{\colon}
\newcommand{\mathbb{N}}{\mathbb{N}}
\renewcommand{\Re}{\mathbb{R}}
\author[F.~Borhani]{Fatemeh Borhani}
\email{fatemeborhani@gmail.com}
\author[E.~J.~Green]{Edward J.~Green}
\address{Department of Economics, Pennsylvania State University}
\email{eug2@psu.edu}
\newcommand{\hcf}{the Human Capital
Foundation, (\url{http://www.hcfoundation.ru/en/}) and
particularly Andrey P.\ Vavilov, for research support through the
Center for the Study of Auctions, Procurements, and Competition
Policy (CAPCP, \url{http://capcp.psu.edu/}) at the Pennsylvania
State University.\ }
\title[Identifying cognitive bias]
{Identifying the occurrence or non-occurrence\\ of cognitive bias in
situations\\ resembling the \uppercase{M}onty \uppercase{H}all problem}
\date{2018.02.10}
\keywords{inference, heuristic reasoning, cognitive bias, Bertrand's
box paradox, Monty Hall problem}
\thanks{JEL Subject classes: D01, D03, D81}
\thanks{The authors thank Nageeb Ali for discussion and comments that
have enhanced both the substance and exposition of the
paper. Borhani's research was partly conducted as a visiting faculty
member of the University of Pittsburgh.}
\newcommand{\text{\ and\ }}{\text{\ and\ }}
\newcommand{is twinned to\ }{is twinned to\ }
\newcommand{twinned}{twinned}
\newcommand{\twinnedp\ }{twinned\ }
\abbrevEnvir
\newcommand{\hide}[1]{\relax}
\renewcommand{\Pr}{\pi}
\newcommand{\len}[1]{\ell(#1)}
\newcommand{\mathsf{E}}{\mathsf{E}}
\newcommand{\! \oslash \!}{\! \oslash \!}
\newcommand{\branches}[1]{\mathfrak{B}_{#1}}
\newcommand{\avoid}{\theta}
\newcommand{\point}{\eta}
\newcommand{\br}{{\mathfrak b}}
\newcommand{\x}{\xi}
\newcommand{\level}{r}
\newcommand{\rho}{\rho}
\newcommand{\Psi}{\Psi}
\newcommand{\str}[2]{#1^{#2}}
\newcommand{\asse}[1]{\subseteq^{\scriptstyle{#1}}\!\!}
\newcommand{\aseq}[1]{=^{\scriptstyle{#1}}\!\!}
\newcommand{\asneq}[1]{\neq^{\scriptstyle{#1}}\!\!}
\newcommand{\asss}[1]{\subset^{\scriptstyle{#1}}\!\!}
\newcommand\B{\mathcal{B}}
\newcommand\BB{\mathbb{B}}
\newcommand\BBB{\mathfrak{B}}
\newcommand\C{\mathcal{C}}
\newcommand\D{\mathcal{D}}
\newcommand\E{\mathcal{E}}
\newcommand\EE{\mathbb{E}}
\newcommand\EEE{\mathfrak{E}}
\newcommand\F{\mathcal{F}}
\newcommand\II{\mathbb{I}}
\renewcommand\O{\mathcal{O}}
\renewcommand\P{\mathcal{P}}
\newcommand\R{\mathcal{R}}
\renewcommand\S{\mathcal{S}}
\newcommand\T{\mathcal{T}}
\newcommand\U{\mathcal{U}}
\newcommand\X{\mathbb{X}}
\newcommand\Z{\mathbb{Z}}
\newcommand{\field}{\mathfrak{F}}
\newcommand{\sigf}{\mathfrak{C}}
\DeclareMathOperator{\wms}{\sigma}
\DeclareMathOperator{\wmt}{\sigma^*}
\DeclareMathOperator{\sms}{\Sigma}
\DeclareMathOperator{\smt}{\Sigma^*}
\DeclareMathOperator{\immms}{\Sigma_1}
\DeclareMathOperator{\immmt}{\Sigma_1\hspace{-4pt}\vphantom{\Sigma}^*}
\DeclareMathOperator{\eqs}{\overset \sigma =}
\DeclareMathOperator{\eqt}{\overset {\phantom{.}_*} =}
\newcommand{\nothing}{\mathsf{r}}
\newcommand{\incompat}[1]{\perp^{\scriptstyle{#1}}}
\DeclareMathOperator{\incompatt}{\perp\hspace{-5pt}{\scriptstyle *}}
\newcommand{\eqcl}[1]{[#1]}
\newcommand{\rightrightarrows}{\rightrightarrows}
\newcommand{\inv}[1]{#1^{-1}}
\newcommand{{\mathfrak e}}{{\mathfrak e}}
\newcommand{{\mathfrak e^*}}{{\mathfrak e^*}}
\newcommand{{\mathfrak c}}{{\mathfrak c}}
\newcommand{{\mathcal Y}}{{\mathcal Y}}
\newcommand{R}{R}
\newcommand{\immed}[1]{Y(#1)}
\newcommand{\I}[1]{I_{#1}}
\newcommand{\tau}{\tau}
\newcommand{\zeta}{\zeta}
\newcommand{\eta}{\eta}
\newcommand{\mpty}{{\mathit{E}}}
\newcommand{\textrm{`Empty'}}{\textrm{`Empty'}}
\newcommand{{\mathit{F}}}{{\mathit{F}}}
\newcommand{\textrm{`Full'}}{\textrm{`Full'}}
\newcommand{{\mathit{G}}}{{\mathit{G}}}
\newcommand{{\mathit{H}}}{{\mathit{H}}}
\newcommand{\mathfrak{P}}{\mathfrak{P}}
\newcommand{\mathfrak{p}}{\mathfrak{p}}
\newcommand{$\E$-$P\/$}{$\E$-$P\/$}
\newcommand{\mathbf{0}}{\mathbf{0}}
\newcommand{\mathbf{1}}{\mathbf{1}}
\newcommand{\mathsf{A}}{\mathsf{A}}
\renewcommand{\a}{\mathsf{a}}
\renewcommand{\d}{\mathsf{d}}
\newcommand{\mathsf{w}}{\mathsf{w}}
\newcommand{\mathsf{e}}{\mathsf{e}}
\newcommand{\mathsf{f}}{\mathsf{f}}
\newcommand{\mathsf{h}}{\mathsf{h}}
\newcommand{\setbrac}[1]{ \left\{ #1 \right\} }
\begin{document}
\begin{abstract}
People reason heuristically in situations resembling
inferential puzzles such as Bertrand's box paradox and the
Monty Hall problem. The practical significance of that fact
for economic decision making is uncertain because a departure
from sound reasoning may, but does not necessarily, result in
a ``cognitively biased'' outcome different from what sound
reasoning would have produced. Criteria are derived here,
applicable to both experimental and non-experimental
situations, for heuristic reasoning in an inferential-puzzle
situations to result, or not to result, in cognitively
bias. In some situations, neither of these criteria is
satisfied, and whether or not agents' posterior probability
assessments or choices are cognitively biased cannot be
determined.
\end{abstract}
\maketitle
\Section{1}{Introduction}
People use heuristic reasoning in decision situations, and thus
potentially make ``cognitively biased'' decisions that deviate from
what they would have done if they had reasoned soundly. This article
concerns conditions under which a particular type of heuristic
Bayesian inference will, or will not, deviate from sound inference in
a situation, and provides examples of plans (that is, patterns of
evidence-based choices) that result from sound inference in some
situations, and from heuristic inference in others, while being
demonstrably inconsistent with the other sort of reasoning. This
explicit concern with demonstrability (with identifiability, in
statistical or econometric parlance), rather than with the simple
occurrence or non-occurrence of cognitive bias, may distinguish the
present research from behavioral-economics research as a
whole.
The paradigmatic situation to be studied is the ``box
paradox'' formulated by
\citet[p.\ 2]{Bertrand-1889}. \citet{Gardner-1959} and others
have subsequently formulated isomorphic
problems. \citet{BarhillelFalk-1982} recognized and elucidated
the significance of those problems for cognitive psychology.
\citet{ShimojoIchikawa-1989} conducted a pioneering
cognitive-psychology experiment to understand better the logic
of the heuristic reasoning by which people analyze such
situations. \citet{Selvin-1972a}, inspired by an eponymous
television producer's adaptation of such a situation for
entertainment, formulated the ``Monty Hall
problem''.\footnote{Mark Feldman has mentioned to the authors
that the Monty Hall problem is isomorphic to the situation
of ``restricted choice'' in the game of bridge.} The
distinctive feature of this problem is that a person is
required to make a utility-maximizing choice among a set of
alternate gambles, rather than to express a numerical
probability judgment. That is, it is a ``behavioralistic'' (in
the sense of \citet{Savage-1972}) version of the box
paradox. \citet{GranbergBrown-1995}, followed by
\citet{Friedman-1998} and others, have used that problem as
the basis for an experimental protocol.
The experiments that have just been mentioned are designed to
show a detectable, observable outcome from which an
unobservable cause can be inferred. The outcome is an
incorrect probability assessment or a biased decision, and the
cause is the subject's use of heuristic reasoning rather than
of sound reasoning. The import of the experiments is that,
even though the outcomes of heuristic reasoning are typically
not detectable by casual observation of non-experimental
situations (but, rather, require an insightfully designed
experimental protocol to become apparent), heuristic reasoning
is presumably endemic in everyday situations.
Heuristic reasoning is called `heuristic' for a reason:
in some, albeit not all, cases in which it is employed, it
providentially leads to correct or approximately correct
conclusions. Thus, that people endemically employ heuristic
inductive logic does not necessarily imply that faulty
posterior-probability assessments or misguided choices are
endemic.\footnote{Contemplating heuristic reasoning broadly,
some researchers (such as \citet{Simon-1955}) have been
inclined to believe that such cognitive bias is endemic in
fact, and that experimental situations are exceptional only
in point of the bias being demonstrable. Others (such as
\citet{Friedman-1953}) have leaned toward the view that
providential outcomes are normal, and that experimental
situations are exceptional because cognitive bias occurs at
all.} The program of this article is to examine, in the
specific context of situations resembling the box paradox and
the Monty Hall problem, what are the characteristics of
situations in which outcomes (posterior-probability
assessments or choices) are, or are not, informative about
whether cognitive bias has occurred. That is, the goal is to
distinguish among three types of situation.
\begin{itemize}
\item[\emph{Type 1}]{In some situations, including
experiments, some outcomes may be observed that demonstrably
reflect heuristic reasoning and are inconsistent with sound
reasoning. That is, persons (or \emph{agents\/}) making
those choices exhibit cognitive bias.}
\item[\emph{Type 2}]{There may also be situations in which
heuristic reasoning will lead demonstrably to the same outcome
as sound reasoning would have produced, given identical
probability beliefs regarding potentially observable
events. That is, even if the agent is reasoning
heuristically, no cognitive bias will result from it.}
\item[\emph{Type 3}]{Finally, there may be situations in which
some observable outcome can be imputed to heuristic
reasoning by making one set of assumptions about the agent's
beliefs (and about utilities of available alternatives, if
the outcome is a choice or decision), but different
assumptions about the agent lead to the conclusion that the
same outcome has resulted from sound reasoning. That is,
given that outcome, although there is cognitive bias if the
first set of assumptions is correct, the bias is not
demonstrable because the alternate set of assumptions
cannot be ruled out.}
\end{itemize}
Sections \secref{2} and \secref{3} are devoted to formalizing
a broad class of situations resembling the box paradox and
Monty Hall problem, and to articulating what it means for
heuristic reasoning to be justified by sound reasoning in such
a situation. \uppercase\prp{1} (in \sec{4}) provides a criterion for
posterior beliefs reached by heuristic reasoning to be
justifiable by sound reasoning, if specific prior beliefs are
imputed to the agent. But the criterion does not rule out the
possibility that those beliefs are cognitively biased outcomes
of different prior beliefs. That is, a situation that meets
the criterion might be of either type 2 or type 3. The
trichotomy of situations is studied further in sections
\secref{5} and \secref{6}. \uppercase\prp{2} (in \sec{6}) provides
conditions that are sufficient (and, under an auxiliary
assumption, necessary) for a situation to be of one or another
of the three types. \uppercase\sec{7} concerns the
``behavioralistic'' framework, in which outcomes of reasoning
are taken to be choices rather than posterior-probability
assessments. This framework invokes more parsimonious
assumptions about what is observable to a researcher than the
``verbalistic'' framework of sections \secref{3}--\secref{6}
makes. Not surprisingly, it becomes more difficult to infer
from outcomes how an agent has reasoned. Nonetheless,
\xmpl{4} exhibits a pattern of choices that can only arise as
an outcome of sound reasoning, while \xmpl{6} exhibits a
pattern that can only result from heuristic reasoning (and
thus is cognitively biased).
\Section{2}{An example of heuristic inference}
Here we formulate, and analyze in ad hoc terms, an example of the sort
of heuristic inference that is the subject of this
article.\footnote{The example is formulated to avoid some features of
the Monty Hall problem that \citet{GranbergBrown-1995} and
\citet{Friedman-1998} have identified as being related to other
biases---involving revisions of choices and breaking of indifference
among alternatives---to which some subjects' performance might be
imputed. The relationship between the example and
the Monty Hall problem will be examined in \sec{7.3}.} It
illustrates the type of bias that is routinely observed in the
performance of subjects in cognitive-science
experiments.\footnote{Early studies, such as those of
\citet{GranbergBrown-1995} and \citet{Friedman-1998} established
that about 90\% of subjects initially give biased responses that
would be derived from the heuristic analysis to be specified below,
and that roughly 50\% of subjects persist in giving those responses
after many repetitions of the problem. Subsequent studies, such as
the one by \citet{KlugerFriedman-2010} and those that they cite,
establish that some experimental treatments can reduce the incidence
of cognitively biased responses, but not dramatically so.}
\Subsection{2.1}{The broken-fuel-gauge (BFG) problem}
Your car has a broken fuel gauge. It always shows either \textrm{`Full'}\ or
\textrm{`Empty'}. When the tank is more than $70\%$ full, the gauge always shows
\textrm{`Full'}. When the tank is less than $30\%$ full, the gauge always
shows \textrm{`Empty'}. In between, the gauge might be in either state.
You have been on vacation---away from your car---for a month. You no
longer recall how far you drove after last having filled the
tank. Before reading the gauge, your beliefs about the amount of fuel
in the tank correspond to a uniform distribution.
When you look, the gauge shows \textrm{`Empty'}. What is now your degree of
belief that the tank is at most $30\%$ full? In the notation of
probability theory, what is $P[\text{Tank is\ }\! \le 30\%
\text{\ full} \mid \text{\textrm{`Empty'}}]$?
\Subsection{2.2}{Heuristic analysis}
Let $F_x$ denote the event that the tank is at most $x\%$ full. Then $P(F_x)
= x/100$.
The \emph{heuristic analysis} of the BFG problem is based on the
assumption that the gauge showing \textrm{`Empty'}\ corresponds to $F_{70}$. Of
course, there are other assumptions that an agent might conceivably
substitute for the more complex and subtle assumption that sound
reasoning would require, but this particular assumption is one that
succeeds in accounting for the way that experimental subjects tend to
respond to such situations.
To an agent who reasons heuristically, then, a particular
configuration of the gauge denotes the set of states of nature
in which the gauge can possibly be in that configuration. If
you reason heuristically, then, when asked what is $P[F_{30}
\vert \text{\textrm{`Empty'}}]$, you interpret that conditional
probability as being $P[F_{30} \vert F_{70}]$. By Bayes's
rule,
\display{1}{P[F_{30} \vert F_{70}] = \frac{P(F_{30} \cap
F_{70})}{P(F_{70})} = \frac{P(F_{30})}{P(F_{70})} = \frac{3}{7}}
\Subsection{2.3}{Sound analysis}
A conceptually correct, or \emph{sound,} analysis of the BFG problem
proceeds according to the logic articulated by
\citet{Harsanyi-1967}. This analysis emphasizes that your having
observed the tank to show \textrm{`Empty'}\ is a fact about you, rather than
being \emph{per se} a fact about the tank or its contents.
Whether the gauge shows \textrm{`Empty'}\ or \textrm{`Full'}\ determines your
\emph{type.}\footnote{When this example is formalized below, a third
type---corresponding to you not having yet observed the gauge (and
thus holding your prior beliefs)---will be added. That change will
not affect the present calculation.}
Your type is random, from an \emph{ex ante} point of
view. This randomness is modeled as a type-valued function
$\tau \colon \Phi \to \{ \textrm{`Empty'}, \textrm{`Full'} \}$, where $\Phi$ is the
set of states of the world. The gauge showing \textrm{`Empty'}\ (and
you observing that fact) corresponds to the event that the
state of the world is in $\inv \tau(\textrm{`Empty'})$. Thus, in
contrast to the heuristic analysis, $P[F_{30} \vert
\text{\textrm{`Empty'}}]$ means $P[F_{30} \vert \inv \tau(\textrm{`Empty'})]$.
\display{2}{P[F_{30} \vert \inv \tau(\textrm{`Empty'})] = \frac{P(F_{30} \cap
\inv \tau(\textrm{`Empty'}))}{P(\inv \tau(\textrm{`Empty'}))}
= \frac{P(F_{30})}{P(\inv \tau(\textrm{`Empty'}))}}
Note that $\inv \tau(\textrm{`Empty'}) = F_{30} \cup (\inv
\tau(\textrm{`Empty'}) \cap (F_{70} \setminus F_{30}))$. Implicit in
the specification that ``The gauge might show either
\textrm{`Empty'}\ or \textrm{`Full'}\ in event $F_{70}$,'' is the idea that
$P(\inv \tau(\textrm{`Empty'}) \cap (F_{70} \setminus F_{30})) <
P(F_{70} \setminus F_{30})$.\footnote{This idea is formalized
for heuristic reasoning in the third clause of condition
\eqn{7} below. By condition \eqn{20}, the idea extends to
sound reasoning also.} Therefore,
\display{3}{P[F_{30} \vert \inv \tau(\textrm{`Empty'})] > \frac{P(F_{30})}{P(F_{70})} = \frac{3}{7}}
In conclusion, comparing \eqn{1} and \eqn{3} shows that the sound
analysis yields a higher answer than the heuristic analysis does to
the question about your posterior belief that the tank is truly near
empty after having observed \textrm{`Empty'}.
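The inequality in \eqn{3} is easy to verify numerically. In the sketch below, the probability $q$ that the gauge shows \textrm{`Empty'}\ on the ambiguous region $F_{70} \setminus F_{30}$ is an assumed free parameter (the text only requires $q < 1$); every admissible choice yields a sound posterior strictly above the heuristic $3/7$:

```python
from fractions import Fraction

def sound_posterior(q):
    """P[F_30 | tau^{-1}('Empty')] under the uniform prior, where q is the
    (assumed) probability that the gauge shows 'Empty' on F_70 \\ F_30."""
    p_f30 = Fraction(3, 10)          # P(F_30)
    p_mid = Fraction(4, 10)          # P(F_70 \ F_30)
    p_empty = p_f30 + q * p_mid      # P(tau^{-1}('Empty'))
    return p_f30 / p_empty

print(sound_posterior(Fraction(1, 2)))  # 3/5, strictly greater than 3/7
```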
\Section{3}{Models of evidence and of beliefs}
Two types of structures will be defined in this section, and how they
apply to the BFG problem will be explained. A \emph{model
of evidence} formalizes heuristic Bayesian inference. A \emph{model
of beliefs} formalizes sound Bayesian inference. According to a
model of beliefs, the agent reasons introspectively about the grounds
for his/her beliefs. That is, the agent asks, what determines the
relationship of its own cognitive state to the objective facts about
the world? An agent whose reasoning is represented by a model of
evidence, is not introspecting. Rather, the agent is thinking solely
in terms of objective events. Within each framework, the agent revises
beliefs (that is, subjective probabilities) according to Bayes's rule.
The question to be addressed is under what conditions a model of
evidence reflects---that is, is justified by---a model of beliefs.
\Subsection{3.1}{Model of evidence} A model of evidence is a structure,
$(\Omega, \O, P, \E)$, where
\begin{enum}
\meti{4}
{$\Omega$ comprises the \emph{states of nature}}
\meti{5}
{$\O$ is a $\sigma\/$-field of \emph{objective events} on $\Omega$}
\meti{6}
{$P \colon \O \to [0,1]$ is a countably additive probability measure}
\meti{7}
{$\E \subseteq \O$ comprises the \emph{evidential events.}}
\bi
\item
$\E$ is countable
\item
$B \in \E \implies P(B) > 0$
\item
$[B \in \E \text{\ and\ } C \in \E \text{\ and\ } C \not \subseteq B] \implies
P(C \setminus B) > 0$
\item
$\Omega \in \E$
\item
$\bigcup (\E \setminus \setbrac{\Omega}) = \Omega$
\ei
\end{enum}
Clearly an agent requires no evidence to be certain
that $\omega \in \Omega$. It will be convenient to have a
notation for the non-trivial evidential events, that is, for
those in $\E \setminus \setbrac{\Omega}$. Define
\display{8}{\E' = \E \setminus \setbrac{\Omega}}
The assumptions made in condition \eqn{7} reflect the focus of
this article. Notably the assumptions that $P(B) > 0$ and that
$P(C \setminus B) > 0$ simplify the analysis of the specific
cognitive bias studied here, to which subtle questions that
arise concerning conditioning on events of prior probability
zero have no apparent relevance. That is, although questions
regarding how to extend conditional probability to
conditioning events of prior probability zero are crucial for
some issues in game theory, they are arcane in the context of
belief revision and choice by a single agent.
It is assumed that $\E$ is countable because, otherwise, the condition that $P(B) > 0$
for every $B \in \E$ would be impossible to satisfy.\footnote{It would be possible to
define a more general structure that would not require $\E$ to be
countable, analogously to the way that full-support probability
distributions on continuously distributed random variables are defined
in probability theory, but to do so would greatly complicate the
mathematical arguments to be made here without making a corresponding
conceptual gain.}
The assumption that $\Omega \in \E$ is a convention that will
play a role in defining what it means for a model of beliefs
to justify a model of evidence. $\E'$ is actually the set of
entities that formalizes the intuitive notion of a non-trivial
evidential event. In principle, there might be some state of
the world for which no corroborating evidence could possibly
be found. That is, conceivably $\bigcup \E' \neq \Omega$. A
condition, \emph{balancedness,} will be defined in \sec{4.1},
that will fail if $P(\bigcup \E') < 1$. \uppercase\prp{1} will assert
that balancedness of a model of evidence is a necessary and
sufficient condition for there to be some model of beliefs
that justifies it. Thus it is known that, if the definition of
a model of evidence were relaxed to permit $P(\bigcup
\E') < 1$, then such a model would represent a situation of type
1 (in the taxonomy of the introduction). But, rather than
complicate the exposition of \prp{1} and other results by
explicit consideration of that possibility of un-corroborable
states of the world (and to avoid arcane complications of
dealing with probability-zero events), it has been stipulated
that $\bigcup \E' = \Omega$.
As usual in Bayesian decision theory, the probability space, $(\Omega,
\O, P)$ models an agent's prior beliefs. The events in $\E'$
model observations that the agent might make, on the
basis of which evidence the agent would form posterior beliefs. Those
beliefs are formed by conditionalization, where conditional
probability,\\ $P \colon \O \times \E \to [0,1]$ is defined as
usual:\footnote{Throughout this article, $A$ should be interpreted to
range over all of $\O$, and $B$ to range only over $\E$, absent a
statement to the contrary.}
\display{9}{P[A | B] = \frac{P(A \cap B)}{P(B)}}
Let's see how the heuristic analysis of the BFG problem is formalized
as a model of evidence. The description of the problem in \sec{2.2} is
made more simple here, by assuming that there are just three states of
nature, $\Omega = \{ e,h,f \}$. State $e$ represents the situation
that the fuel tank is nearly empty ($0 \le x < 30$ in the setting of
\sec{2.2}); $h$ represents the situation that the fuel tank is half
full ($30 \le x < 70$); and $f$ represents the situation that the
fuel tank is nearly full ($70 \le x \le 100$).
\exmpl{1}{Define $(\Omega, \O, P, \E)$ as follows. $\Omega = \{ e,h,f
\}$ and $\O = 2^\Omega$. By additivity, $P$ is defined by the
probabilities of singleton events in $\O$. Specify that $P( \setbrac{e} )
= P( \setbrac{f} ) = 0.3$ and that $P( \setbrac{h} ) = 0.4$. There are two
non-trivial evidential events: $\mpty = \{ e, h \}$ and ${\mathit{F}} = \{
h,f \}$. $\E = \{ \Omega, \mpty, {\mathit{F}} \}$.}
Note that, corresponding to \eqn{1} in \sec{2.2}, $P[\setbrac{e} | \mpty] =
3/7$.
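Readers who wish to check such computations mechanically may find the following sketch useful. It is illustrative Python, not part of the formal development: it encodes definition \eqn{9} for finite spaces and reproduces the posterior just computed.

```python
from fractions import Fraction

# Prior beliefs of Example 1 over the states of nature e, h, f
P = {'e': Fraction(3, 10), 'h': Fraction(4, 10), 'f': Fraction(3, 10)}

def prob(event):
    """P(event), for an event given as a set of states of nature."""
    return sum(P[w] for w in event)

def cond(A, B):
    """Conditional probability P[A | B], as in (9)."""
    return prob(A & B) / prob(B)

Empty, Full = {'e', 'h'}, {'h', 'f'}
print(cond({'e'}, Empty))   # 3/7, as noted after Example 1
```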
\Subsection{3.2}{Model of beliefs}
\citet{Harsanyi-1967} introduced a structure that he called a beliefs
space, which consists of a probability space, $(\Phi, \B, Q)$ and a
\emph{type mapping,} $\tau \colon \Phi \to T$, where $T$ is an abstract
set. The elements of $\Phi$ are \emph{states of the world} and the
elements of $T$ are \emph{types} of the agent. The types in Harsanyi's
structure, \emph{per se,} are nothing but arbitrary labels. What is
meaningful are the inverse images, $\inv \tau(t)$, of the
types. These \emph{information sets} partition the states of the
world. In the event that an agent is of type $t$, then the agent's
posterior belief regarding an event, $C$, is $Q[C|\inv\tau(t)]$,
where $Q \colon \B \times \inv\tau(T) \to [0,1]$ is defined
analogously to \eqn{9}.
A model of beliefs, $(\Phi, \B, Q, \Omega, \E, \tau)$, is a slight variant of
a beliefs space.
\begin{enum}
\meti{10}
{$\Omega$ comprises the \emph{states of nature.} $\E \subset 2^\Omega
\setminus \setbrac{\emptyset}$ is countable. $\Omega = \bigcup \E' \in \E$.}
\meti{11}
{$\Phi$ comprises the
\emph{states of the world.} $\B$ is a $\sigma\/$-field on $\Phi$ and
$Q \colon \B \to [0,1]$ is a countably additive probability measure.}
\meti{12}
{The \emph{type function,} $\tau \colon \Phi \to \E'$, maps
$\Phi$ \emph{onto} $\E'$. $\Omega$ is called the \emph{prior-beliefs
type,} and elements of $\E'$ are called \emph{posterior-beliefs
types.}\footnote{This nomenclature corresponds to standard
terminology in decision theory regarding a single agent. In game
theory, probability assessments conditioned on a player's type are
generally called \emph{interim beliefs.}}}
\meti{13} {For every $B \in \E'$, $\inv\tau(B) \in \B$ and $0 < Q(\inv\tau(B)) < 1$.}
\end{enum}
It will be convenient to extend $\inv\tau$, the inverse
correspondence of $\tau$, to a correspondence, $\beta \colon \E \to
\B$.
\display{14}{\phi \in \beta(B) \iff [B = \Omega \text{\ or\ }
\tau(\phi) = B]}
It is a trivial formal change of Harsanyi's framework to specify
that the agent's types are sets of states of nature rather than being
arbitrary labels, and to introduce a new type that is not realized in
any state of the world. Nonetheless, this modification enables a
model of evidence and a beliefs space to be compared explicitly as
representations of Bayesian inference. Before framing a systematic
comparison in the next section, let's see how a model of beliefs
contrasts with a model of evidence as a representation of inference in
the BFG problem. The intuition behind the formalization of the BFG
problem as a model of beliefs is as follows. $\Omega$, $P$, and $\E$
are as in \xmpl{1}. Type $\Omega$ represents the agent's prior
beliefs, while types $\mpty$ and ${\mathit{F}}$ represent posterior beliefs
after having observed the gauge to show \textrm{`Empty'}\ and
\textrm{`Full'}\ respectively. Conditionally on
the state of nature being $e$ or $f$, if the agent's type is not
$\Omega$, then it must be $\mpty$ or ${\mathit{F}}$ respectively. However, if
the state of nature is $h$ and the agent's type is not $\Omega$, then
the type may be either $\mpty$ or ${\mathit{F}}$. Assume that, conditionally on the
state of nature being $h$ and the agent's type not being $\Omega$, the
other two types are equally probable.
\exmpl{2}{Let $(\Omega, \O, P, \E)$ be as in \xmpl{1}. $\Phi = \Omega
\times \E' = \{ e,h,f \} \times \{ \mpty, {\mathit{F}} \}$. $\B =
2^\Phi$. $Q$ is specified according to the following table. Each
cell of the table is a probability. The column
labeled `$\Omega$' shows marginal probabilities of $Q$ on
$\Omega$. The cell in the row labeled `$\omega\/$' and the column
labeled `$B\/$', for $B \in \E'$, shows the probability of the
corresponding state of the world, $(\omega,B)$.
\display{15}{ \begin{tabular}[c]{|c||c|c|c|}
\hline
$Q(\omega,B)$ & $\Omega$ & $\mpty$ & ${\mathit{F}}$\\
\hline \hline
$e$ & .3 & .3 & 0\\
\hline
$h$ & .4 & .2 & .2\\
\hline
$f$ & .3 & 0 & .3\\
\hline
\end{tabular} }
\hide{
$Q( \setbrac{e} \times \{ \mpty \} ) = Q( \setbrac{f} \times \{
{\mathit{F}} \} ) = 0.3$. $Q( \setbrac{e} \times \{ {\mathit{F}} \} ) = Q( \setbrac{f}
\times \{ \mpty \} ) = 0$. $Q( \setbrac{h} \times \{ \mpty \} ) = Q( \{
h \} \times \{ {\mathit{F}} \} ) = 0.2 = P( \setbrac{h} )/2$.
}
The type function is defined by $\tau(\omega,B) = B$ for all $\omega
\in \Omega$ and $B \in \E'$. Note that $Q(A \times \E') = P(A)$ for
all $A \in \O$, where $P$ is the probability measure constructed in
\xmpl{1}.}
If $A$ is a set of states of nature, then the event that the state of
nature is in $A$ is $A \times \E' \in \B$. Thus, as explained above,
the posterior probability held by an agent of type $B \in \E'$ that the
state of nature is in $A$ is $Q[A \times \E' | \beta(B)]$. In
particular, taking $A = \{e \}$ and $B = \mpty$, the agent's posterior
probability that the state of nature is $e$ is $3/5$. Since it has
been shown that $P[\setbrac{e} | \mpty] = 3/7$ in \xmpl{1}, the formal
representations of the BFG problem via a model of evidence and a model
of beliefs reproduce the overall conclusion, inequality \eqn{3}, of \sec{2.2}.
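These computations can be checked mechanically. The following Python sketch (illustrative, not part of the formal development) writes out the table \eqn{15} and verifies both that the marginal of $Q$ on $\Omega$ agrees with $P$ and that the posterior probability of $\setbrac{e}$ for an agent of type $\mpty$ is $3/5$.

```python
from fractions import Fraction

# Example 2: the table (15), written as a dict over states of the
# world (omega, type)
F = Fraction
Q = {('e', 'Empty'): F(3, 10), ('e', 'Full'): F(0),
     ('h', 'Empty'): F(2, 10), ('h', 'Full'): F(2, 10),
     ('f', 'Empty'): F(0),     ('f', 'Full'): F(3, 10)}

def q(event):
    """Q(event), for an event given as a set of states of the world."""
    return sum(p for phi, p in Q.items() if phi in event)

alpha = lambda A: {(w, t) for w in A for t in ('Empty', 'Full')}
beta = lambda t: {phi for phi in Q if phi[1] == t}

# marginal of Q on Omega agrees with P of Example 1
assert q(alpha({'e'})) == F(3, 10) and q(alpha({'h'})) == F(4, 10)

# posterior of an agent of type `Empty` that the state of nature is e
posterior = q(alpha({'e'}) & beta('Empty')) / q(beta('Empty'))
print(posterior)   # 3/5, versus P[{e} | Empty] = 3/7
```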
\Subsection{3.3}{Reflection/justification}
In the context of comparing examples \rsltref{1}{xmpl} and
\rsltref{2}{xmpl}, it was just suggested that $A \times \E'$ is the
event in $\B$ that is associated with a set, $A \in \O$, of states of
nature. In fact, this association defines an isomorphism, $A \mapsto A
\times \E'$, of $\O$ with a sub $\sigma\/$-field of $\B$ in the
example. Capitalizing on this idea, the relationship between a model
of evidence and a model of beliefs that was implicitly defined in that
discussion can be stated in an explicit and general way.
In order to make this generalization, an isomorphism of probability
spaces must be defined in a slightly more permissive way than the
obvious one. In the preceding paragraph, `isomorphism' was used in
the obvious way, to denote a mapping that preserves Boolean
relationships among sets. However, in general---when the situation
does not have the convenient feature that the domain of the
isomorphism is a Cartesian factor of its range---such an exact
relationship may not hold. Rather, if $(\Xi, \C, R)$ and $(\Psi, \D,
S)$ are probability spaces, then define $\alpha \colon \C \to \D$ to be
a \emph{measure isomorphism} from $(\Xi, \C, R)$ to $(\Psi, \D, S)$ iff
the following conditions hold.\footnote{Definition \eqn{16} is tantamount to
the definition of a measure isomorphism given by
\citet[p.\ 202]{Sikorski-1969}.}
\display{16}{\begin{aligned}
&\text{For every $C \in \C$,} \enspace S(\alpha(C)) = R(C) \\
&\text{For every countable $\F
\subseteq \C$,} \enspace S \left(\ \alpha \left( \bigcup \F \right)
\triangle \bigcup \{ \alpha(B) \mid B \in \F \} \right) = 0\\
&\text{For every $D \in \D$, there exists $C \in \C$ such
that} \enspace S(D \triangle \alpha(C)) = 0
\end{aligned}}
Throughout the remainder of this
article, \emph{isomorphism} refers to a measure isomorphism. When the
full specifications of $(\Xi, \C, R)$ and $(\Psi, \D, S)$ are clear
from context, $\alpha$ will be called an isomorphism from $\C$ to $\D$.
Let $\Omega \in \E \subseteq 2^\Omega \setminus \{ \emptyset \}$.
A model of beliefs, $(\Phi, \B, Q, \Omega, \E, \tau)$, \emph{conforms} to a
model of evidence, $(\Omega, \O, P, \E)$, via $\alpha \colon \O \to \B$ iff
\begin{enum}
\meti{17}
{For some sub $\sigma\/$-field, $\C$, of $\B$,
$\alpha \colon \O \to \B$ is a measure isomorphism from $(\Omega, \O,
P)$ to $(\Phi, \C, Q \upharpoonright \C)$, and}
\meti{18}
{For all $B \in \E$, $\beta(B) \subseteq \alpha(B)$}
\end{enum}
A model of evidence, $(\Omega, \O, P, \E)$, \emph{reflects} a model of
beliefs, $(\Phi, \B, Q, \Omega, \E, \tau)$, (equivalently, the model of
beliefs \emph{justifies} the model of evidence) via $\alpha \colon \O
\to \B$ under the following conditions.\footnote{Subsequently, unless
specific information about $\alpha$ is important to a discussion,
``via $\alpha$'' will sometimes not be stated explicitly, although
conditions stated in terms of $\alpha$ may be invoked.}
\begin{enum}
\meti{19}
{$(\Phi, \B, Q, \Omega, \E, \tau)$, conforms to $(\Omega, \O, P, \E)$, via
$\alpha$, and}
\meti{20}
{For all $A \in \O$ and $B \in \E$, $Q[\alpha(A) \vert \beta(B)] = P[A
\vert B]$}
\end{enum}
Note that, when this relationship holds, there is a clear reason to
view the agent's types as being evidential events rather than mere
abstract labels. The agent's type is the most specific objective event
(not only the most specific evidential event) that the agent believes
to obtain almost surely. That is, for any $A \in \O$ and $B \in \E$,
$Q(\beta(B) \setminus \alpha(B)) = 0$ and, if $P(A) < P(B)$, then
$Q(\beta(B) \setminus \alpha(A)) > 0$.
Condition \eqn{20} is central to this article, because it formalizes
what it means for cognitive bias \emph{not} to occur. An agent is
envisioned to have authentic probability beliefs, either fully
articulated or inchoate, that are represented by a model of beliefs,
$(\Phi, \B, Q, \Omega, \E, \tau)$. With respect to the events in some sub
$\sigma\/$-field of $\B$, at least, the agent's beliefs are envisioned
to be fully articulated. Specifically it is envisioned that there is a
model of evidence, $(\Omega, \O, P, \E)$, such that $(\Phi, \B, Q, \Omega, \E,
\tau)$ conforms to $(\Omega, \O, P, \E)$ via some measure
isomorphism, $\alpha$, and that the agent's authentic probability
beliefs about events in the image of $\O$ under $\alpha$ are fully
articulated. That is, condition \eqn{19} is satisfied. What determines
whether or not $(\Omega, \O, P, \E)$ reflects $(\Phi, \B, Q, \Omega, \E,
\tau)$, is condition \eqn{20}. If \eqn{20} is not satisfied, then
$(\Omega, \O, P, \E)$ does not reflect $(\Phi, \B, Q, \Omega, \E,
\tau)$. Intuitively that is the case in which, if the agent reasons
heuristically according to the model of evidence, $(\Omega, \O, P,
\E)$, then the agent exhibits cognitive bias relative to sound
inference from authentic beliefs (that is, relative to inference based
soundly on the model of beliefs, $(\Phi, \B, Q, \Omega, \E, \tau)\,\strut$).
\claim{3}{The model of evidence in \xmpl{1} does not reflect any model
of beliefs. The model of beliefs in \xmpl{2} does not justify any
model of evidence.}
\begin{proof}
Suppose that $\alpha \colon \O \to \B$ satisfies conditions \eqn{17}
and \eqn{19}, and that either $(\Omega, \O, P, \E)$ is the model of
evidence in \xmpl{1} or else $(\Phi, \B, Q, \Omega, \E, \tau)$ is the model of
beliefs in \xmpl{2}. In either case, $P(\mpty) = P({\mathit{F}}) = 7/10$ and
$P( \setbrac{h} ) = 4/10$, so $P[ \setbrac{h} | \mpty] = P[ \setbrac{h} | {\mathit{F}}] =
4/7$. Since $(\Phi, \B, Q, \Omega, \E, \tau)$ is a model of beliefs, $0 <
\min \{ Q(\beta(\mpty)), Q(\beta({\mathit{F}})) \}$ and $\{ \beta(\mpty),
\beta({\mathit{F}}) \}$ partitions $\Phi$. Therefore $\{ \alpha( \setbrac{h} )
\cap \beta(\mpty), \alpha( \setbrac{h} ) \cap\beta({\mathit{F}}) \}$ partitions
$\alpha( \setbrac{h} )$.
A contradiction will be derived from the assumption that condition
\eqn{20} is also satisfied (that is, that $(\Omega, \O,
P, \E)$ reflects $(\Phi, \B, Q, \Omega, \E, \tau)$). Since $P[ \setbrac{h}
| \mpty] = P[ \setbrac{h} | {\mathit{F}}] = 4/7$, condition \eqn{20} requires
that $Q(\alpha( \setbrac{h} ) \cap \beta(\mpty)) = 4Q(\beta(\mpty))/7$ and
that $Q(\alpha( \setbrac{h} ) \cap \beta({\mathit{F}})) = 4Q(\beta({\mathit{F}}))/7$.
Summing these two equations over the partition of $\alpha( \setbrac{h} )$,
and invoking \eqn{17},
\display{21}{\begin{gathered}
P( \setbrac{h} ) = Q(\alpha( \setbrac{h} )) = Q(\alpha( \setbrac{h} ) \cap
\beta(\mpty)) + Q(\alpha( \setbrac{h} ) \cap \beta({\mathit{F}}))\\
= \frac{4}{7} \left( Q(\beta(\mpty)) + Q(\beta({\mathit{F}})) \right) = \frac{4}{7}
\end{gathered}}
This contradicts $P( \setbrac{h} ) = 4/10$, so condition \eqn{20} cannot hold.
\end{proof}
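The arithmetic behind the preceding claim can also be checked mechanically. In the sketch below (illustrative Python, not part of the formal development), whatever mass $q = Q(\beta(\mpty)) \in (0,1)$ a conforming model of beliefs assigns to the type $\mpty$, condition \eqn{20} would force $Q(\alpha( \setbrac{h} )) = (4/7)q + (4/7)(1-q) = 4/7$, which is incompatible with $Q(\alpha( \setbrac{h} )) = P( \setbrac{h} ) = 4/10$.

```python
from fractions import Fraction

F = Fraction
P_h = F(4, 10)            # P({h}) in Example 1
post = F(4, 7)            # P[{h} | Empty] = P[{h} | Full] = 4/7

# Whatever mass q in (0,1) the type `Empty` receives, condition (20)
# forces Q(alpha({h})) = post*q + post*(1 - q) = post, never P({h}).
for q in (F(k, 10) for k in range(1, 10)):
    forced = post * q + post * (1 - q)
    assert forced == F(4, 7) and forced != P_h
print("condition (20) is unsatisfiable here")
```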
A fact about conformity, to be used later in the proof of \prp{2},
is stated and proved now.
\lema{1}{For every model of evidence, there is a conforming model of
beliefs.}
\begin{proof}
Consider a model of evidence, $(\Omega, \O, P, \E)$. Let $\langle B_s
\rangle_{s \in S}$ enumerate $\E'$. ($S = \{ 1,\dotsc, n \}$ if $\E'$
has $n$ elements, and $S = \setbrac{1,2,3\dots}$ if $\E'$ is
infinite.) Let $t = \sum_{s \in S} 2^{-s}$. Define $\Phi = \Omega
\times \E'$ and $\B = \Sigma(\O \times 2^{\E'})$.\footnote{If $\C
\subseteq 2^\Phi$, then $\Sigma(\C)$ is the smallest $\sigma\/$-field
containing $\C$.} Begin to define $Q$ by $Q(A \times B_s) = 2^{-s}
P(A)/t$. This definition extends by countable additivity to $\Sigma(\O
\times 2^{\E'})$.\footnote{Specifically, this extension is a measure
by Caratheodory's theorem. (Cf.\ \citet[theorem
10.23]{AliprantisBorder-2006}).} Define $\mu \colon \Omega \to S$
by $\mu(\omega) = \min \setbrac{ s \mid \omega \in B_s}$, and define
$\tau \colon \Phi \to \E'$ by
\display{22}{\tau(\omega, B) = \begin{cases}
B & \text{if\ } \omega \in B\\
B_{\mu(\omega)} & \text{if\ } \omega \notin B
\end{cases}}
Define $\alpha \colon \O \to \B$ by $\alpha(A) = A \times \E'$.
It is routinely verified that $(\Phi, \B, Q, \Omega, \E, \tau)$ is a model
of beliefs that conforms to $(\Omega, \O, P, \E)$ via $\alpha$.
\end{proof}
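For a finite model of evidence, the construction in the proof of \lem{1} can be carried out explicitly. The sketch below (illustrative Python; the model of evidence is that of \xmpl{1}, with the enumeration $B_1 = \mpty$, $B_2 = {\mathit{F}}$) builds $Q$ and $\tau$ as in the proof and checks the two conformity requirements that can be verified pointwise: $\alpha(A) = A \times \E'$ preserves measure, and every state of the world has a type containing its state of nature, so that \eqn{18} holds.

```python
from fractions import Fraction

F = Fraction
P = {'e': F(3, 10), 'h': F(4, 10), 'f': F(3, 10)}
E = [frozenset('eh'), frozenset('hf')]        # enumerates E': B_1, B_2
t = sum(F(1, 2) ** s for s in range(1, len(E) + 1))

# Q on Phi = Omega x E', following the proof: Q(A x {B_s}) = 2^{-s} P(A)/t
Q = {(w, B): (F(1, 2) ** s) * P[w] / t
     for w in P for s, B in enumerate(E, 1)}

def mu(w):
    """Smallest index s such that w is in B_s."""
    return min(s for s, B in enumerate(E, 1) if w in B)

def tau(w, B):
    """The type function of (22)."""
    return B if w in B else E[mu(w) - 1]

# Conformity: alpha(A) = A x E' preserves measure, and each state of
# the world (w, B) has a type containing w, so beta(B) lies in alpha(B).
for w in P:
    assert sum(Q[(w, B)] for B in E) == P[w]
for (w, B) in Q:
    assert w in tau(w, B)
print("conforms")
```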
\Section{4}{When does some model of beliefs justify a model of evidence?}
In this section, a necessary and sufficient condition will be derived
for a model of evidence to reflect some model of beliefs.
To set the stage, let us point out a feature of the BFG problem that seems to
be conducive for cognitive bias to occur. Consider the model of
beliefs presented in \xmpl{2}. There are only 2 posterior-beliefs types:
$\mpty = \{ e,h \}$ and ${\mathit{F}} = \{ h,f \}$. The image of $\setbrac{h}$
under $\alpha$ is split between $\beta(\mpty)$ and $\beta({\mathit{F}})$.
Its probability mass is correspondingly split in the model of beliefs.
In contrast, up to $Q\/$-null sets, $\alpha( \setbrac{e} ) \subseteq \beta(\mpty) \text{\ and\ } \alpha(
\setbrac{f} ) \subseteq \beta({\mathit{F}})$. The probability mass of these
states of nature therefore is not split.
Reflection is impossible because probability mass of some states of
nature, but not others, must be split. This situation results from a
particular state of nature being in both $\mpty$ and ${\mathit{F}}$, while
others are only in one of them. A state of nature that belongs to more
evidential events than others do is underweighted, relative to those
others, by $Q$ within the image under $\beta$ of each of the
evidential events to which it belongs.
Clearly this problem of underweighting cannot occur if $\E'$ is a
partition of $\Omega$. The following example shows that, to avoid
underweighting, it is not necessary for $\E'$ to be a partition, or
even for there to exist a partition of $\Omega$ by elements of
$\E'$. A condition that the example does satisfy, and that will be
generalized below, is that each state of nature belongs to the same
number (3, in the example) of evidential events.
\exmpl{3}{Define $(\Omega, \O, P, \E)$ by setting $\Omega = \{ 0,1,2
\}$, $\O = 2^\Omega$, and $P(\{ \omega \}) = 1/3$, and by defining
$\E'$ to be the set of two-element subsets of $\Omega$. For each
$\omega \in \Omega$, define $\omega' \equiv \omega+1 \pmod 3$ and
$\omega'' \equiv \omega-1 \pmod 3$, and define $E_\omega = \{
\omega, \omega' \}$.\footnote{The states of nature and the
evidential events in \protect\xmpl{1} can be embedded in this
structure by assigning $e \mapsto 0$, $h \mapsto 1$, $f \mapsto
2$, $\mpty \mapsto E_0$, and ${\mathit{F}} \mapsto E_1$.} Define $\Psi =
\{ 0,\dotsc,5 \}$, $\C = 2^\Psi$, and define $R( \{ \psi\} ) = 1/6$
for each $\psi$. Let $(\Phi, \B, Q)$ be the product of $(\Omega, \O,
P)$ and $(\Psi, \C, R)$. Using the unique representation of $\psi =
2j+k$ (with $0 \le j \le 2$, $0 \le k \le 1$), define $\tau$ by
\display{23}{\begin{gathered}
\tau(\omega, \psi) = E_i \text{, where\ } i =
\begin{cases}
\omega, \text{\ if\ } k = 0\\
\omega'', \text{\ if\ } k = 1
\end{cases} \end{gathered}}
}
There is no partition of $\Omega$ by elements of $\E'$. Nonetheless,
defining $\alpha(A) = A \times \Psi$, $(\Phi, \B, Q, \Omega, \E, \tau)$
justifies $(\Omega, \O, P, \E)$.
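That \xmpl{3} is justified by the displayed model of beliefs can be verified exhaustively, since all of the spaces are finite. The following sketch (illustrative Python, not part of the formal development) checks condition \eqn{20} for every $A \in \O$ and every posterior-beliefs type.

```python
from fractions import Fraction
from itertools import product, chain, combinations

F = Fraction
Omega, Psi = [0, 1, 2], list(range(6))
E = [frozenset({i, (i + 1) % 3}) for i in range(3)]   # E_0, E_1, E_2
Q = F(1, 18)                 # the product measure is uniform on 3*6 points

def tau(w, psi):
    """The type function of (23): only the parity k of psi matters."""
    return E[w] if psi % 2 == 0 else E[(w - 1) % 3]

def q(pred):
    """Q of the event {(w, psi) | pred(w, psi)}."""
    return sum(Q for w, psi in product(Omega, Psi) if pred(w, psi))

subsets = chain.from_iterable(combinations(Omega, n) for n in range(4))
for A in map(set, subsets):
    for B in E:
        # condition (20): Q[alpha(A) | beta(B)] = P[A | B]
        lhs = q(lambda w, psi: w in A and tau(w, psi) == B) / \
              q(lambda w, psi: tau(w, psi) == B)
        rhs = F(len(A & B), len(B))          # P[A | B], since P is uniform
        assert lhs == rhs
print("Example 3: the model of beliefs justifies the model of evidence")
```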
\Subsection{4.1}{Balancedness defined}
Define $(\Omega, \O, P, \E)$ to be \emph{balanced} iff, for
some $\theta$,
\display{24}{\theta \colon \E' \to \left( 0,1 \right] \quad \text{\ and\ } \quad P \left( \left\{
\omega \mid \sum_{\omega \in B \in \E'} \theta(B) = 1 \right\} \right) = 1}
Call $\theta$ a \emph{balancing function} (for $P$ and $\E$).
Note that, with $\chi_B \colon \Omega \to \setbrac{0,1}$ being the indicator
function of $B$, \eqn{24} is equivalent to
\display{25}{{\theta \colon \E' \to \left( 0,1 \right] \quad \text{\ and\ } \quad P \left( \left\{
\omega \mid \sum_{B \in \E'} \theta(B) \chi_B(\omega) = 1 \right\} \right) = 1}
}
By setting $\theta(E_0) = \theta(E_1) =
\theta(E_2) = 1/2$, it is seen that the model of evidence in \xmpl{3}
is balanced. In contrast, for the model of evidence in \xmpl{1}, if
$\theta \colon \E' \to \left( 0,1 \right]$, then $\sum_{h \in B} \theta(B) -
\sum_{e \in B} \theta(B) = \theta({\mathit{F}}) > 0$, so either $\sum_{h \in
B} \theta(B) > 1$ or $\sum_{e \in B} \theta(B) < 1$ and therefore
\eqn{24} cannot hold. In each of these two examples, then, the model of
evidence being balanced is equivalent to it reflecting some model
of beliefs.
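For finite models of evidence with full-support priors, condition \eqn{24} reduces to a finite system of linear equations, which can be checked directly. The sketch below (illustrative Python) confirms the two assertions just made: $\theta \equiv 1/2$ balances \xmpl{3}, while no balancing function exists for \xmpl{1}, because the states $e$ and $f$ force $\theta(\mpty) = \theta({\mathit{F}}) = 1$.

```python
from fractions import Fraction

F = Fraction

def is_balancing(theta, states, events):
    """Check condition (24) for a finite model with full-support P."""
    return (all(0 < theta[B] <= 1 for B in events) and
            all(sum(theta[B] for B in events if w in B) == 1
                for w in states))

# Example 3 is balanced: theta = 1/2 on each two-element event
E3 = [frozenset({i, (i + 1) % 3}) for i in range(3)]
theta3 = {B: F(1, 2) for B in E3}
assert is_balancing(theta3, [0, 1, 2], E3)

# Example 1 is not balanced: state e forces theta[Empty] = 1 and state
# f forces theta[Full] = 1, but then state h receives weight 2, not 1.
Empty, Full = frozenset('eh'), frozenset('hf')
theta1 = {Empty: F(1), Full: F(1)}      # the only candidate
assert not is_balancing(theta1, 'ehf', [Empty, Full])
print("Example 3 balanced; Example 1 not")
```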
\Subsection{4.2}{Balancedness and the justifiability of a model of
evidence}
The following proposition follows immediately from the two lemmas that
are subsequently proved.
\propo{1}{A model of evidence is balanced if, and only if, it reflects
some model of beliefs.}
\lema{2}{A model of evidence is balanced if some model of beliefs
justifies it.}
\begin{proof}
Suppose that $(\Phi, \B, Q, \Omega, \E, \tau)$ justifies $(\Omega, \O, P,
\E)$. By conditions \eqn{12}, \eqn{13} and \eqn{14}, $\setbrac{
\beta(B) \mid B \in \E'}$ is a measurable partition of $\Phi$. By
\eqn{16}, \eqn{17} and \eqn{20}, $Q(\alpha(A) \cap \beta(B)) =
P[A|B] \, Q(\beta(B))$.
Define $\theta \colon \E' \to \left( 0,1 \right]$ by
\display{26}{\theta(B) = \frac{Q(\beta(B))}{P(B)}}
Then
\display{27}{\begin{gathered}
P(A) = Q(\alpha(A)) = \sum_{B \in \E'} Q(\alpha(A) \cap \beta(B)) =
\sum_{B \in \E'} P[A|B] \, Q(\beta(B))\\
=\sum_{B \in \E'} \int_A \frac{\chi_B}{P(B)} Q(\beta(B)) \, dP
= \int_A \sum_{B \in \E'} \theta(B)
\chi_B \, dP
\end{gathered}}
Given that \eqn{27} holds for all $A \in \O$, condition \eqn{25} is
satisfied, so equation \eqn{26} defines a balancing function.
\end{proof}
\lema{3}{If a model of evidence is balanced, then some model
of beliefs justifies it.}
\begin{proof}
Let $\theta$ be a balancing function for a model of evidence,
$(\Omega, \O, P, \E)$. A model of beliefs, $(\Phi, \B, Q, \Omega, \E, \tau)$,
that justifies $(\Omega, \O, P, \E)$ is now constructed.
Let $\Psi = \left[ 0,1 \right)$ and let $\C$ and $R$ be the
$\sigma\/$-field of Borel sets on $\Psi$ and the Lebesgue measure.
Specify that $\Phi = \Omega \times \Psi$, $\B = \Sigma(\O \times
\C)$, and $Q = P \times R$.\ Define an isomorphism, $\alpha \colon \O
\to \B$, by
\display{28}{\alpha(A) = A \times \Psi}
Let $\langle B_s \rangle_{s \in S}$ enumerate $\E'$. For $n \in \{ 0
\} \cup S$, define
\display{29}{g_0(\omega) = 0 \text{\ and\ } g_{n+1}(\omega) = g_n(\omega) +
\theta(B_{n+1}) \chi_{B_{n+1}}(\omega)}
If $\langle x_s \rangle_{s \in S}$ is a sequence of numbers, then
define $\lim_{s \to \max S} x_s = x_{\max S}$ if $S$ is finite and
$\lim_{s \to \max S} x_s = \lim_{s \to \infty} x_s$ if $S$ is infinite.
Define $N = \{ \omega \mid \lim_{s \to \max S} g_s(\omega) \neq 1 \}$.
Since $\theta$ is a balancing function,
\display{30}{P(N) = 0}
Define $\tau \colon \Phi \to \E'$ by
\display{31}{\tau(\omega, \psi) = B_s \iff \begin{cases}
g_{s-1}(\omega) \le \psi < g_s(\omega) &\text{if\ } \omega \notin N\\
s = 1 &\text{if\ } \omega \in N
\end{cases}}
From this definition, it follows that
\display{32}{\begin{gathered}
\{ (\omega, \psi) \mid \omega \in B_s \text{\ and\ }
g_{s-1}(\omega) \le \psi < g_s(\omega) \} \subseteq
\beta(B_s)\\ \subseteq
\{ (\omega, \psi) \mid \omega \in B_s \text{\ and\ }
g_{s-1}(\omega) \le \psi < g_s(\omega) \} \cup (N \times \Psi)
\end{gathered}}
Then, from Fubini's theorem and \eqn{30} and \eqn{32}, it follows
that, for all $A \in \O$,
\display{33}{\begin{aligned}
Q(\alpha(A) \cap \beta(B_s)) &=
\int_{A \cap B_s} \int_{g_{s-1}(\omega)}^{g_s(\omega)} 1 \, dR \, dP \\
&= \int_{A \cap B_s} g_s(\omega) - g_{s-1}(\omega) \, dP =
\theta(B_s) P(A \cap B_s)
\end{aligned}}
Conditions \eqn{30} and \eqn{32} and \eqn{33} imply that
\display{34}{Q(\beta(B_s)) = Q(\alpha(B_s) \cap \beta(B_s)) = \theta(B_s) P(B_s)}
so, for $A \in \O$ and $B \in \E$,
\display{35}{Q[\alpha(A) | \beta(B)] = \frac{Q(\alpha(A) \cap
\beta(B))}{Q(\beta(B))} = \frac{\theta(B)P(A \cap B)}{\theta(B)P(B)} = P[A | B]}
That is, $(\Phi, \B, Q, \Omega, \E, \tau)$ justifies $(\Omega, \O, P, \E)$.
\end{proof}
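Because the functions $g_s$ of \eqn{29} are step functions, the measures appearing in \eqn{33}--\eqn{35} can be computed exactly from the lengths of the intervals $[g_{s-1}(\omega), g_s(\omega))$. The sketch below (illustrative Python, not part of the formal development) carries out the construction for the balanced model of evidence of \xmpl{3} and verifies the cross-multiplied form of \eqn{35}.

```python
from fractions import Fraction
from itertools import chain, combinations

F = Fraction

# Example 3 with its balancing function theta = 1/2 (see Section 4.1)
P = {w: F(1, 3) for w in range(3)}
E = [frozenset({i, (i + 1) % 3}) for i in range(3)]    # enumerates E'
theta = {B: F(1, 2) for B in E}

def g(n, w):
    """Partial sums of (29): g_0 = 0, g_n adds theta(B_n)*chi_{B_n}."""
    return sum(theta[B] for B in E[:n] if w in B)

def fiber(s, w):
    """Lebesgue length of the omega-fiber of beta(B_s), per (31)."""
    return g(s, w) - g(s - 1, w)

def Q(A, s):
    """Q(alpha(A) & beta(B_s)), computed from the interval lengths."""
    return sum(P[w] * fiber(s, w) for w in A)

subsets = chain.from_iterable(combinations(range(3), n) for n in range(4))
for A in map(set, subsets):
    for s, B in enumerate(E, 1):
        # equation (35), cross-multiplied: Q[alpha(A)|beta(B_s)] = P[A|B_s]
        assert Q(A, s) * sum(P[w] for w in B) == \
               Q(B, s) * sum(P[w] for w in A & B)
print("Lemma 3 construction verified on Example 3")
```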
\Section{5}{Situations}
An undefined term, \emph{situation}, played an important role in the
introductory discussion of alternate views about the import of
psychological experiments for economics. In this section, building on
the framework introduced in the preceding section, a situation will be
formally defined.
In the introduction, it was mentioned that
\citet{ShimojoIchikawa-1989} used a situation isomorphic to Bertrand's
box paradox as the basis for an experiment to exhibit subjects'
cognitive bias. In that situation, there are three states of
nature. Shimojo and Ichikawa stipulated that, given the way that the
situation was described to subjects in the experimental protocol,
their prior beliefs would be that each state of nature has probability
$1/3$. (That is, those would be the subjects' probability assessments
after having received the description of the situation, but before
having observed evidence that would be presented in the course of the
experiment.) However, those researchers did not make any assumption
regarding a subject's beliefs about the correlation between the state
of nature and the evidence that would be observed. They did not need
to make any such assumption, because most subjects reported posterior
probability assessments that were inconsistent with \emph{any} model
of beliefs corresponding to the stipulated prior probability beliefs
about states of nature. That is, the outcomes of the experiment were
generated by a situation of type {1} according to the trichotomy
presented in the introduction.
The idea of a ``model of beliefs corresponding to the stipulated prior
probability beliefs about states of nature'' is formalized by the
definition of conformity. Shimojo and Ichikawa's assumptions about
subjects' beliefs regarding the states of nature can be represented as
a model of evidence. As has been discussed following the definition of
reflection in the previous section, their discussion of their
experiment presupposed that each subject had authentic probability
beliefs about the state of the world that could be represented as some
model of beliefs or other, but they did not pretend to know anything
about that model beyond the fact that it conformed to the stipulated
model of evidence. The general form of Shimojo and Ichikawa's way of
thinking about their experiment is captured by the following
definition. That is, a situation is a structure that formally
describes a researcher's assumptions regarding both the observable and
unobservable aspects of what that researcher assumes to be a
subject's authentic prior-probability beliefs. Specifically, the
researcher assumes everything that is common to all of the models of
beliefs in the situation.
A \emph{situation} is a structure, $((\Omega, \O, P, \E), \S)$,
comprising a model of evidence and a nonempty set, $\S$, of ordered pairs. Each
element of $\S$ is of the form $((\Phi, \B, Q, \Omega, \E, \tau), \alpha)$, where
$(\Phi, \B, Q, \Omega, \E, \tau)$ is a model of beliefs that conforms to
$(\Omega, \O, P, \E)$ via $\alpha \colon \O \to \B$. Where no confusion
will result from abuse of notation, `$\S\/$' will be used to name
the situation. Also, a statement such as ``Some model of beliefs in
$\S$ justifies $(\Omega, \O, P, \E)$'' should be understood as
``For some $((\Phi, \B, Q, \Omega, \E, \tau), \alpha) \in \S$,
$(\Phi, \B, Q, \Omega, \E, \tau)$ justifies $(\Omega, \O, P, \E)$ via $\alpha$.''
As Shimojo and Ichikawa have done, a researcher may assume nothing at
all about a subject's beliefs, except that those beliefs conform to
the model of evidence that is communicated in the experimental
protocol. That situation is represented by $((\Omega, \O, P, \E),
\S)$, where $(\Omega, \O, P, \E)$ is communicated in the protocol and
$\S$ comprises \emph{all\/} of the pairs, $((\Phi, \B, Q, \Omega, \E, \tau),
\alpha)$, such that $(\Phi, \B, Q, \Omega, \E, \tau)$ conforms to $(\Omega,
\O, P, \E)$ via $\alpha$. Such a situation will be called \emph{full}.
\Section{6}{Characterizing the types of situation}
In terms of the definition of a situation just given, the trichotomy
discussed in the introduction is formalized by
\display{36}{((\Omega, \O, P, \E), \S) \text{\ is of type\ }
\begin{cases}
{1} &\text{if no model of beliefs in $\S$}\\
&\strut\qquad \text{justifies $(\Omega, \O, P, \E)$}\\
{2} &\text{if every model of beliefs in $\S$}\\
&\strut\qquad \text{justifies $(\Omega, \O, P,
\E)$}\\
{3} &\text{otherwise}
\end{cases}}
If $(\Omega, \O, P, \E)$ is a model of evidence, then define $\E'$ to
be an \emph{almost sure partition} iff, for every pair of distinct
elements, $C$ and $D$, of $\E'$, $P(C \cap D) = 0$.
\propo{2}{Let $((\Omega, \O, P, \E), \S)$ be a situation. Then
\begin{enum}
\meti{37}
{\strut\qquad If situation $\S$ is not balanced, then it is of type 1.}
\meti{38}
{\strut\qquad If situation $\S$ is full and of type 1, then it is not balanced.}
\meti{39}
{\strut\qquad If $\E'$ is an almost sure partition, then situation $\S$ is of type 2.}
\meti{40}
{\strut\qquad If situation $\S$ is full and of type 2, then $\E'$ is an
almost sure partition.}
\meti{41}
{\strut\qquad There exist situations of each of the three types.}
\end{enum}
}
\begin{proof}Consider each of the claims.
\noindent[\eqnref{37}]\indent
{This follows from \prp{1}.}
\noindent[\eqnref{38}]\indent
Equivalently, if situation $\S$ is full and balanced, then it is not
of type 1. Assume the antecedent. Because the situation is balanced, its
model of evidence reflects some model of beliefs, by \prp{1}. Because
the situation is full, that
model of beliefs is in $\S$. That is, the situation is not of type 1.
\noindent[\eqnref{39}]\indent
Suppose that $\E'$ is an almost sure partition and that $((\Phi, \B, Q,
\Omega, \E, \tau), \alpha) \in \S$. Since $(\Phi, \B, Q, \Omega, \E, \tau)$ conforms
to $(\Omega, \O, P, \E)$, condition \eqn{18} holds. Together with the
fact that $\E'$ is an almost sure partition, \eqn{18} implies that, for
every $B \in \E'$, $Q(\alpha(B) \triangle \beta(B)) = 0$, and hence
that $Q[\alpha(A) | \beta(B)] = Q[\alpha(A) | \alpha(B)] = P[A | B]$ for
all $A \in \O$ and $B \in \E$. That is, $(\Omega, \O, P, \E)$ reflects
$(\Phi, \B, Q, \Omega, \E, \tau)$. Therefore the situation is of type 2.
\noindent[\eqnref{40}]\indent
Equivalently, if the situation is full and $\E'$ is not an almost sure
partition, then the situation is not of type 2. That is, in that case,
there is some model of beliefs, $(\Phi, \B, Q, \Omega, \E, \tau)$, in $\S$
that does not justify $(\Omega, \O, P, \E)$. Since $\E'$ is not an
almost sure partition, there are two distinct elements of $\E'$, $C$
and $D$, such that $P(C \cap D) > 0$.
If the situation is not balanced, then it is of type 1 by
\eqn{37}, so assume that it is balanced. Let $\theta$ be a
balancing function. A model of beliefs that conforms to
$(\Omega, \O, P, \E)$, but that does not justify $(\Omega, \O,
P, \E)$, will be constructed. To do so, let $\Phi$, $\B$,
$Q$, $\tau$, and the enumeration of $\E'$, $\langle B_s
\rangle_{s \in S}$, be as in the proof of \lem{3}. (Again,
suppose that $S = \{ 1,\dotsc, n \}$ if $\E'$ has $n$ elements
and that $S = \setbrac{1,2,3\dots}$ if $\E'$ is infinite.)
Without loss of generality, assume that $P(B_1 \cap B_2) > 0$
and that $B_2 \not \subseteq B_1$. By \eqn{7}, $P(B_2
\setminus B_1) > 0$. Since the situation is full, the model
of beliefs to be constructed is in $\S$, so the situation
cannot be of type 2.
To construct the model of beliefs, a type function, $\sigma
\colon \Phi \to \E'$, will be constructed. Begin by noting that
$Q(\Omega \times [0, 1/2)) = 1/2$. On $\Omega \times [0,
1/2)$, $\sigma$ will satisfy $\sigma(\omega, \psi) =
\tau(\omega, 2\psi)$. This specification ensures that, for
each $B \in \E'$, $0 < Q(\sigma^{-1}(B)) < 1$ as required
by condition \eqn{13} of the definition of a model of
beliefs. On $\Omega \times [1/2,1)$, define $\sigma$ to
make $\sigma^{-1}(B_1)$ as large as condition \eqn{18} will
permit.
Specifically, define $\mu(\omega) = \min \setbrac{ s \mid
\omega \in B_s}$, and define $\sigma$ by
\display{42}{\sigma(\omega, \psi) = \begin{cases} \tau(\omega,
2\psi) & \text{if\ } \psi < 1/2\\
B_{\mu(\omega)} & \text{otherwise}
\end{cases}}
With $\beta(B) = \tau^{-1}(B)$ and $\gamma(B) =
\sigma^{-1}(B)$, note that
\display{43}{\begin{aligned}
Q(\gamma(B_2)) &= Q(\gamma(B_2) \cap (\Omega \times \left[
0,1/2 \right))) + Q(\gamma(B_2) \cap
(\Omega \times \left[ 1/2,1 \right)))\\
&= Q(\gamma(B_2) \cap (\Omega \times \left[
0,1/2 \right))) + Q((B_2 \setminus B_1) \times
\left[ 1/2,1 \right))\\
& > Q(\gamma(B_2) \cap (\Omega \times \left[ 0,1/2
\right)))
\end{aligned}}
and
\display{44}{\alpha(B_1 \cap B_2) \cap \gamma(B_2) =
\alpha(B_1 \cap B_2) \cap (\gamma(B_2) \cap (\Omega \times
\left[ 0,1/2 \right)))}
Also note that, for all $A \in \O$ and $B \in \E'$,
%
\display{45}{Q(\alpha(A) \cap (\gamma(B) \cap (\Omega \times
\left[ 0,1/2 \right)))) = \frac{Q(\alpha(A) \cap \beta(B))}{2}}
%
%
(and specifically, taking $A = \Omega$, $Q(\gamma(B) \cap (\Omega \times
\left[ 0,1/2 \right))) = Q(\beta(B))/2$).
Then
\display{46}{\begin{aligned}
Q[\alpha(B_1 \cap B_2) | \gamma(B_2)] &=
\frac{Q(\alpha(B_1 \cap B_2) \cap
\gamma(B_2))}{Q(\gamma(B_2))}\\
&< \frac{Q(\alpha(B_1 \cap B_2) \cap (\gamma(B_2) \cap
(\Omega \times \left[ 0,1/2 \right))))} {Q(\gamma(B_2)
\cap (\Omega \times \left[ 0,1/2 \right)))}\\
&= \frac{Q(\alpha(B_1 \cap B_2) \cap
\beta(B_2))}{Q(\beta(B_2))}\\
&= Q[\alpha(B_1 \cap B_2) | \beta(B_2)]
\end{aligned}}
By the construction of $(\Phi, \B, Q, \Omega, \E, \tau)$ in
\lem{3}, $Q[\alpha(B_1 \cap B_2) | \beta(B_2)] = P[B_1 \cap
B_2 | B_2]$. Therefore, by \eqn{46}, $Q[\alpha(B_1 \cap B_2)
| \gamma(B_2)] < P[B_1 \cap B_2 | B_2]$. That is, $(\Phi,
\B, Q, \Omega, \E, \sigma)$ conforms to $(\Omega, \O, P, \E)$
but it does not justify $(\Omega, \O, P, \E)$. Since situation
$\S$ is full, $(\Phi, \B, Q, \Omega, \E, \sigma) \in
\S$. Therefore, situation $\S$ is not of type 2.
\noindent[\eqnref{41}]\indent
By \lem{1}, there is a situation, and hence a full situation,
corresponding to every model of evidence. There are models of evidence
that are not balanced, so, by \eqn{37}, there are full situations of
type 1. There are models of evidence such that $\E'$ is an almost sure
partition, so, by \eqn{39}, there are situations of type
2. The model of evidence in \xmpl{3} is balanced, but $\E'$
is not an almost sure partition, so, by \eqn{38} and \eqn{40}, the
full situation corresponding to that model of evidence is of type 3.
\end{proof}
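For small finite examples, the classification \eqn{36} of a full situation can be computed from \prp{2}: such a situation is of type 1 iff its model of evidence is not balanced, of type 2 iff $\E'$ is an almost sure partition, and of type 3 otherwise. The sketch below (illustrative Python; the balancedness test is a brute-force search over a coarse grid of candidate $\theta$ values, adequate for these tiny examples but not a general decision method) classifies the full situations generated by \xmpl{1} and \xmpl{3}.

```python
from fractions import Fraction
from itertools import product

F = Fraction

def almost_sure_partition(P, events):
    """Pairwise empty intersections; with full-support P this is
    equivalent to pairwise P-null intersections."""
    return all(not (C & D) for C in events for D in events if C != D)

def balanced(P, events):
    """Brute-force check of (24) over a coarse grid of theta values."""
    grid = [F(k, 4) for k in range(1, 5)]       # 1/4, 1/2, 3/4, 1
    for thetas in product(grid, repeat=len(events)):
        theta = dict(zip(events, thetas))
        if all(sum(theta[B] for B in events if w in B) == 1 for w in P):
            return True
    return False

def full_situation_type(P, events):
    """The classification (36) for a full situation, via Proposition 2."""
    if not balanced(P, events):
        return 1
    if almost_sure_partition(P, events):
        return 2
    return 3

Ex1 = ({'e': F(3, 10), 'h': F(4, 10), 'f': F(3, 10)},
       [frozenset('eh'), frozenset('hf')])
Ex3 = ({w: F(1, 3) for w in range(3)},
       [frozenset({i, (i + 1) % 3}) for i in range(3)])
print(full_situation_type(*Ex1), full_situation_type(*Ex3))  # 1 3
```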
\Section{7}{Conditioning expected utility on evidence and beliefs}
\citet{ShimojoIchikawa-1989} elicited subjects' reports of their
posterior beliefs. They analyzed that data under the assumptions that
(a) their experimental protocol induced specific prior beliefs that
the researchers intended subjects to hold, and (b) subjects were
capable of reporting precise numerical subjective probabilities of
events and were willing to report those probabilities truthfully.
When those assumptions do not hold, another approach must be taken.
One such approach, inspired by the characterization of subjective
utility provided by \citet{Ramsey-1931} and \citet{Savage-1972}, is to
infer subjects' prior and posterior probability measures from data
regarding their choices among alternatives offered in the experiment.
Experimenters studying the Monty Hall problem, such as those
of \citet{GranbergBrown-1995} and \citet{Friedman-1998}, have adopted
a hybrid approach. They have assumed subjects to hold particular prior
probabilities, but have inferred posterior probabilities from observed
choices. In principle, though, some of the reasons to prefer choice-based
imputation of posterior probabilities to subjects should apply to
prior probabilities also. Subjects could be given opportunities to
make choices both before and after having received evidence, with the
former choices revealing information about subjects' prior beliefs and
the latter ones revealing information about posterior beliefs.
Such a thoroughly behavioralistic protocol will be considered now. The
notion of a plan, to be defined momentarily, will play a cognate role
to that of a situation in preceding sections. Essentially, subjects'
choices will be treated as statistics of prior and posterior
beliefs. To observe a statistic of a probability distribution is less
informative than to observe the distribution
directly. Correspondingly, the precise characterization of the various
types of situation in \prp{2} will not have a counterpart
here. Nonetheless, \xmpl{4} will exhibit a plan that can only be
chosen by a subject who reasons according to a model of beliefs,
while \xmpl{6} will exhibit a plan that can only be
chosen by a subject who reasons according to a model of evidence.
\Subsection{7.1}{Plans and conditional expected utility}
Consider an agent who may choose from a set, $\mathsf{A}$, of
alternatives. Suppose that the set of feasible alternatives does not
depend on the state of nature. Let $\E$ be a set of evidential
events. (As specified in \sec{3.1}, $\Omega \in \E \subseteq \O
\setminus \{ \emptyset \} \subseteq 2^\Omega$.) Intuitively, a plan
is a correspondence that assigns a non-empty set of alternatives to
each evidential event.
A question that it would be typical to pose in decision theory is:
what are the conditions under which a plan, $\zeta \colon \E \rightrightarrows \mathsf{A}$,
may represent an agent's choices according to maximization of
conditional expected utility? That is, when can $\E$ be associated
with a probability space, and can state-contingent utilities be
imputed to the various alternatives, such that for each $B$,
$\zeta(B)$ is the set of alternatives that maximize expected utility
conditional on $B$?
However, regarding the decisions of experimental subjects and of other
agents, there are really two questions. The probability space with
respect to which conditional probabilities are formed might be taken
to be either a model of evidence or else a model of beliefs. If a
model of evidence, then the agent conditions on the event itself. If a
model of beliefs, then the agent conditions on the distinct event,
$\beta(B)$, that the evidential event, $B$, is the agent's type.
The two questions are formulated explicitly as follows.
\begin{enum}
\renewcommand{\theequation}{\arabic{enumi}.\arabic{equation}} \setcounter{equation}{0}
\meti{47}
{Under what conditions on $(\Omega, \O, \E)$ and $\zeta$ do there exist a
probability space, $(\Psi, \C, P)$, an isomorphism $\alpha \colon \O
\to \C$, and a set of (bounded, $\O\/$-measurable,
state-contingent) utility functions, $\langle u_a \colon \Psi \to
\Re \rangle_{a \in \mathsf{A}}$, such that, for all $a \in \mathsf{A}$ and $B \in \E$,
{ \setlength{\parindent}{2pt} \indent\parbox{10cm}{
\display{47.1}{\int_{\alpha(B)} u_a \, dP = \max_{b \in \mathsf{A}}
\int_{\alpha(B)} u_b \, dP \iff a \in \zeta(B)}}
}\\
That is, under what conditions do there exist a model of evidence,\\
$(\Psi, \C, P, \alpha(\E))$, and a set of utility functions that
rationalize $\zeta$?}
\setcounter{equation}{0}
\meti{48}
{Under what conditions on $\E$ and
$\zeta$ do there exist a model of beliefs,\\ $(\Phi, \B, Q, \Omega, \E,
\tau)$ and a set of bounded utility functions,
$\langle v_a \colon \Phi \to \Re \rangle_{a \in \mathsf{A}}$, such that, for
all $a \in \mathsf{A}$ and $B \in \E$,
{ \setlength{\parindent}{2pt} \indent\parbox{10cm}{
\display{48.1}{\int_{\beta(B)} v_a \, dQ = \max_{b \in \mathsf{A}}
\int_{\beta(B)} v_b \, dQ \iff a \in \zeta(B)}
}}
}
\end{enum}
\setcounter{equation}{\value{enumi}}
\renewcommand{\theequation}{\arabic{equation}}
A model of evidence that satisfies the condition stated in \eqn{47}
\emph{rationalizes $\zeta$ by evidence}. A model of beliefs that
satisfies the condition stated in \eqn{48} \emph{rationalizes $\zeta$
by beliefs}. A plan that is rationalized by some model is called
\emph{rational} with respect to that type of model.
The definition of rationalization by a model of beliefs shows why the
prior-beliefs type, $\Omega$, is needed although it is not in the
range of the type function. If the definition of $\E$ were
amended so that $\Omega \notin \E$, then any plan could be
rationalized by beliefs.
The reason is that, since $\setbrac{\beta(B) \mid B \in \E'}$
is a partition of $\Phi$, functions $v_a$ can be defined by
\display{49}{v_a(\phi) = \begin{cases}
1 & \text{if\ } a \in \zeta(\tau(\phi))\\
0 & \text{otherwise}
\end{cases}}
However, because the definition of rationality with respect to beliefs
requires that \eqn{48} must be satisfied also by $B=\Omega$, \eqn{49}
does not automatically define utility functions that rationalize
$\zeta$. This observation reflects the basic principle that the force of
Bayesian decision theory comes from the relationship between choices
based on prior versus posterior beliefs, not solely on relationships
among choices conditioned on alternate posterior beliefs.
\uppercase\sec{7.2} concerns an example of a plan that is
rational with respect to beliefs, but not with respect to
evidence. \uppercase\sec{7.3} concerns an example of a plan
that is rational with respect to evidence, but not with
respect to beliefs. In fact, this example formalizes the Monty
Hall problem that has been studied in various experiments
cited earlier.
\Subsection{7.2}{Rationality with respect to beliefs does not imply
rationality with respect to evidence}
In this section, first a model of beliefs will be constructed and will
be shown to rationalize a plan. Then, it will be shown that no model
of evidence can rationalize that plan.
\exmpl{4}{ The model of beliefs closely resembles \xmpl{2}. It is based
on a model of evidence that differs from \xmpl{1} by the addition of
a new evidential event, ${\mathit{H}}$. Thus, let $\Omega = \{e,h,f \}$ and
$\E = \{ \Omega, \mpty, {\mathit{H}}, {\mathit{F}} \}$, where $\mpty = \{ e,h \}$,
${\mathit{F}} = \{ h,f \}$, and ${\mathit{H}} = \setbrac{h}$. Besides the addition of
${\mathit{H}}$ to $\E$, the other change of the current example from
\xmpl{2} is to put greater weight on $h$ than is specified in that
earlier example. The set of states of the world, $\Phi$, of the model of beliefs
will be $\Omega \times \E'$, and prior beliefs will be specified so
that $Q( \setbrac{h} \times \E') = 3/5$ and $Q( \setbrac{f} \times \E' )= Q(
\setbrac{e} \times \E' ) = 1/5$.
The specification of the model of beliefs is completed by taking $\B =
2^\Phi$, and $\tau(\omega, B) = B$, and by fully specifying $Q$
according to the following table. The cells are interpreted as in
table \eqn{15}.
\display{50}{ \begin{tabular}[c]{|c||c|c|c|c|}
\hline
$Q(\omega,B)$ & $\Omega$ & $\mpty$ & ${\mathit{H}}$ & ${\mathit{F}}$\\
\hline \hline
$e$ & .2 & .2 & 0 & 0\\
\hline
$h$ & .6 & .1 & .4 & .1\\
\hline
$f$ & .2 & 0 & 0 & .2\\
\hline
\end{tabular} }
Since $\tau(\omega, B) = B$, $\beta(B) = \Omega \times \setbrac{B}$ for each $B \in \E'$,
while $\beta(\Omega) = \Phi$. Specify $\mathsf{A}$
by $\mathsf{A} = \{ \mathsf{w}, \d \}$. Specify that $\zeta(\Omega) = \zeta({\mathit{H}}) =
\{ \mathsf{w} \}$ and $\zeta(\mpty) = \zeta({\mathit{F}}) = \{ \d \}$.}
\claim{11}{The plan specified in \xmpl{4} is rational with respect to beliefs, but
not with respect to evidence.}
\begin{proof}
It will be shown that there are utility functions that, together with
the model of beliefs constructed in \xmpl{4}, rationalize the plan
$\zeta$ specified there. However, $\zeta$ is not rational with respect to evidence.
Intuitively, $\mathsf{w}$ is supposed to be
wagering that the state of nature is $h$ and $\d$ is supposed to be declining to
wager. Formally suppose that, for all $B \in \E'$, $v_\mathsf{w}(h,B) = 10$ and
$v_\mathsf{w}(e,B) = v_\mathsf{w}(f,B) = -10$ and that, for all $\omega \in \Omega$
and $B \in \E'$, $v_\d(\omega, B) = 0$. Then
\display{51}{\begin{gathered}
\int_{\beta(\Omega)} v_\mathsf{w} - v_\d \; dQ = 2 \quad \text{and} \quad
\int_{\beta({\mathit{H}})} v_\mathsf{w} - v_\d \; dQ = 4 \quad \text{and}\\
\int_{\beta(\mpty)} v_\d - v_\mathsf{w} \; dQ = \int_{\beta({\mathit{F}})} v_\d - v_\mathsf{w}
\; dQ = 1
\end{gathered}}
so $\zeta$ is rational with respect to beliefs.
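The values in \eqn{51} can be checked mechanically. The following sketch (a hypothetical encoding: the label \texttt{N} stands for the event $\mpty = \setbrac{e,h}$) sums the utilities over the cells of table \eqn{50}, reading $\beta(B)$ as the event that the agent's type is $B$ and $\beta(\Omega)$ as all of $\Phi$:

```python
# Mechanical check of the expected-utility values in (51), using the
# cell probabilities of table (50).  States of the world are pairs
# (omega, B); 'N' is shorthand for the evidential event {e, h}.
Q = {('e', 'N'): .2, ('e', 'H'): .0, ('e', 'F'): .0,
     ('h', 'N'): .1, ('h', 'H'): .4, ('h', 'F'): .1,
     ('f', 'N'): .0, ('f', 'H'): .0, ('f', 'F'): .2}
v_w = {'e': -10, 'h': 10, 'f': -10}   # wager that the state of nature is h
v_d = {'e': 0, 'h': 0, 'f': 0}        # decline to wager

def integral(g, cells):
    return sum(g[omega] * Q[(omega, B)] for (omega, B) in cells)

diff = {omega: v_w[omega] - v_d[omega] for omega in 'ehf'}
neg = {omega: -diff[omega] for omega in 'ehf'}
beta = lambda B: [cell for cell in Q if cell[1] == B]   # type-B states
phi = list(Q)                                           # beta(Omega) = Phi

assert abs(integral(diff, phi) - 2) < 1e-9          # matches (51)
assert abs(integral(diff, beta('H')) - 4) < 1e-9
assert abs(integral(neg, beta('N')) - 1) < 1e-9
assert abs(integral(neg, beta('F')) - 1) < 1e-9
```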
A contradiction will be obtained from supposing that some isomorphism,
$\alpha \colon \O \to \C$, model of evidence, $(\Psi, \C, R,
\alpha(\E))$, and set of utility functions, $\langle u_a \colon \Psi
\to \Re \rangle_{a \in \mathsf{A}}$, rationalize $\zeta$
\display{52}{\begin{aligned}
\zeta(\mpty) = \{ \d \},\text{\ so\ } &\int_{\alpha(\mpty)} u_\d - u_\mathsf{w} \, dR > 0\\
\zeta({\mathit{H}}) = \{ \mathsf{w} \},\text{\ so\ } &\int_{\alpha({\mathit{H}})} u_\d - u_\mathsf{w} \, dR < 0\\
\text{thus\ } &\int_{\alpha(\mpty) \setminus \alpha({\mathit{H}})} u_\d - u_\mathsf{w} \, dR > 0\\
\zeta({\mathit{F}}) = \{ \d \},\text{\ so\ } &\int_{\alpha({\mathit{F}})} u_\d - u_\mathsf{w} \, dR > 0\\
\int_{\alpha(\Omega)} u_\d - u_\mathsf{w} \, dR = &\int_{\alpha({\mathit{F}})} u_\d - u_\mathsf{w} \, dR +
\int_{\alpha(\mpty) \setminus \alpha({\mathit{H}})} u_\d - u_\mathsf{w} \, dR > 0
\end{aligned}}
But, that $\int_{\alpha(\Omega)} u_\d - u_\mathsf{w} \, dR > 0$ and $\zeta(\Omega) = \{
\mathsf{w} \}$ contradicts condition \eqn{47.1} for $\alpha$, $(\Psi, \C, R,
\alpha(\E))$, and $\langle u_a \rangle_{a \in \mathsf{A}}$ to rationalize $\zeta$.
\end{proof}
\Subsection{7.3}{Rationality with respect to evidence does not
imply rationality with respect to beliefs}
The BFG problem is in what \citet{Savage-1972} called the
``verbalistic'' tradition, while the Monty Hall (MH) problem is in the
``behavioralistic'' tradition. That is, the BFG problem is formulated
in terms of eliciting first-person reports of an agent's probability
assessments, while the MH problem is formulated in terms of acquiring
evidence about the pattern of the agent's practically significant
decisions. In a situation where it is possible to observe an agent's
choices but not to query the agent about probability assessments, or
where it is thought that an agent's responses to such queries will
either over- or under-state the agent's capacity to act in conformity
to expected-utility maximization, the MH problem could be the more
advantageous one to consider.
Of course, the BFG problem can be reformulated in a behavioralistic
framework. This will be done now. It will be shown that the plan that
corresponds naturally to heuristic reasoning is rational with respect
to beliefs, as well as with respect to evidence. Thus, in the
situations just envisioned, the BFG problem cannot be used to design
an experiment, the outcome of which could rule out the possibility
that an agent reasons soundly according to a model of beliefs. The
plan that corresponds naturally to heuristic reasoning in the MH
problem is defined from the same evidential events and alternatives as
is the previous plan. That plan is rationalized by the model of
evidence presented in \xmpl{1}, amended so that each prior probability
equals $1/3$, along with the same utility
functions by which the heuristic plan for the BFG problem is
rationalized. However, it will be shown that the heuristic plan for
the MH problem is not rational with respect to beliefs.
Consider the behavioralistic formulation of the BFG problem.
\exmpl{5}{Specify $\Omega$, $\O$, and $\E$ as in \xmpl{1}, and let
$\mathsf{A} = \{ \mathsf{e}, \mathsf{h}, \mathsf{f} \}$. Specify that $\zeta(\Omega) = \zeta(\mpty) =
\zeta({\mathit{F}}) = \{ \mathsf{h} \}$.}
Define $\a \colon \Omega \to \mathsf{A}$ by $\a(e)
= \mathsf{e}$, $\a(h) = \mathsf{h}$, and $\a(f) = \mathsf{f}$. For $\omega \in \Omega$ and $b
\in \mathsf{A}$, specify that
\display{53}{u_b(\omega) = \begin{cases}
1 &\text{if\ } b = \a(\omega)\\
0 &\text{if\ } b \neq \a(\omega)
\end{cases}}
In \xmpl{1}, $P( \setbrac{h} ) = 2/5$ and $P( \setbrac{e} ) = P( \setbrac{f} ) =
3/10$. Consequently\\ $P[ \setbrac{h} | \mpty ] = P[ \setbrac{h} | {\mathit{F}} ] =
4/7$. Thus the model of evidence in \xmpl{1}, together with the
utility functions defined in \eqn{53}, rationalize $\zeta$ with
respect to evidence. That is, on the intuitive understanding of the
alternatives that was suggested above, $\zeta$ is the plan that
corresponds naturally to heuristic reasoning in the BFG problem.
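These conditional probabilities, and the fact that $\mathsf{h}$ maximizes conditional expected utility under \eqn{53} given every evidential event, can be checked directly. A sketch, with the label \texttt{N} again abbreviating $\mpty = \setbrac{e,h}$:

```python
from fractions import Fraction as F

P = {'e': F(3, 10), 'h': F(2, 5), 'f': F(3, 10)}   # prior of Example 1
events = {'Omega': {'e', 'h', 'f'}, 'N': {'e', 'h'}, 'F': {'h', 'f'}}

def cond(state, B):
    """Posterior probability P[{state} | B]."""
    mass = sum(P[w] for w in events[B])
    return P[state] / mass if state in events[B] else F(0)

assert cond('h', 'N') == F(4, 7) and cond('h', 'F') == F(4, 7)

# With u_b(omega) = 1 iff b names the true state (53), the conditional
# expected utility of alternative b given B is cond(state named by b, B),
# so h is the unique maximizer for every evidential event:
for B in events:
    assert max('ehf', key=lambda s: cond(s, B)) == 'h'
    assert all(cond(s, B) < cond('h', B) for s in 'ef')
```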
Plan $\zeta$ is also rational with respect to beliefs. One way of
showing that is to appeal to the model of beliefs constructed in \xmpl{2},
and to specify that, for all $\phi \in \Phi$, $v_\mathsf{e}(\phi) =
v_\mathsf{f}(\phi) = 0$ and $v_\mathsf{h}(\phi) = 1$. Rationality with respect to
beliefs can also be shown by defining $v_b(\omega, B) = u_b(\omega)$
and by modifying $Q$ from \eqn{15} to the following specification, which
assigns very high probability to the event that $h$ is the state of nature.
\display{54}{ \begin{tabular}[c]{|c||c|c|c|}
\hline
$Q(\omega,B)$ & $\Omega$ & $\mpty$ & ${\mathit{F}}$\\
\hline \hline
$e$ & .1 & .1 & 0\\
\hline
$h$ & .8 & .4 & .4\\
\hline
$f$ & .1 & 0 & .1\\
\hline
\end{tabular} }
This second way has the feature that the alternatives continue to be
given their intuitive interpretations according to the assignment of
utilities, rather than $\mathsf{h}$ being treated as a dominant alternative.
In contrast, the way that subjects are understood to reason in Monty
Hall experiments suggests a plan that is rational with respect to
evidence, but is not rational with respect to
beliefs.\footnote{\citet[pp.\ 711, 712]{GranbergBrown-1995}
hypothesize that their subjects reason heuristically as in \xmpl{1},
in a setting tantamount to that example except
that the prior probability of each state of nature is $1/3$. This prior
makes the two alternatives that are consistent with the subject's type
equal to one another in expected utility. Subjects cannot express
indifference, given the forced-choice protocol of the
experiment. Granberg and Brown suggest that ``inertia'' or some
other tie-breaking consideration accounts for subjects' expressed
choices, implying that those choices represent a single-valued
selection from an underlying plan that is a multi-valued
correspondence. They hypothesize that some subjects may, in fact, be
randomizing between the two alternatives.} In its original form, the
MH problem involves an agent making a provisional choice, and
subsequently having an opportunity to revise that choice. The
specification of $\E'$ in \xmpl{1}, which is incorporated in the
following example, corresponds to the revised-choice stage of the MH
problem that would follow the agent having made $\mathsf{h}$ as the
provisional choice.
\exmpl{6}{The example is identical to \xmpl{5}, except that
\display{55}{\zeta(\Omega) = \mathsf{A} \qquad \zeta(\mpty) = \{ \mathsf{e}, \mathsf{h} \}
\qquad \zeta({\mathit{F}}) = \{ \mathsf{h}, \mathsf{f} \} }
}
\claim{14}{The plan specified by \eqn{55} in \xmpl{6} is rational
with respect to evidence, but not with respect to beliefs.}
\begin{proof}
Specify a model of evidence according to \xmpl{6}, along with the
specification that $P( \setbrac{e} ) = P( \setbrac{h} ) = P( \setbrac{f} ) =
1/3$. This model and the utility functions defined by \eqn{53} rationalize $\zeta$.
Now it will be shown by contradiction that there do not exist a model
of beliefs, $(\Phi, \B, Q, \Omega, \E, \tau)$ and utility functions $\langle
v_b \rangle_{b \in \mathsf{A}}$ that rationalize $\zeta$.
By \eqn{48.1}, since $\mathsf{h} \in \zeta(\Omega) \cap \zeta(\mpty)$
and $\mathsf{f} \in \zeta(\Omega) \setminus \zeta(\mpty)$,
\display{56}{\int_{\beta(\Omega)} v_\mathsf{h} \, dQ = \int_{\beta(\Omega)} v_\mathsf{f}
\, dQ}
and
\display{57}{\int_{\beta(\mpty)} v_\mathsf{h} \, dQ > \int_{\beta(\mpty)} v_\mathsf{f}
\, dQ}
Since $\beta(\Omega) = \Phi$, \eqn{56} implies that
\display{58}{\int_\Phi v_\mathsf{h} \, dQ = \int_\Phi v_\mathsf{f} \, dQ}
Because $\E' = \{ \mpty,{\mathit{F}} \}$, $\{ \beta(\mpty), \beta({\mathit{F}}) \}$
is a partition of $\Phi$. Therefore, \eqn{57} and \eqn{58} imply that
\display{59}{\int_{\beta({\mathit{F}})} v_\mathsf{f} \, dQ > \int_{\beta({\mathit{F}})} v_\mathsf{h} \, dQ}
But, given condition \eqn{48.1}, inequality \eqn{59} contradicts a clause of
assumption \eqn{55}, that $\mathsf{h} \in \zeta({\mathit{F}})$.
\end{proof}
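The evidence half of the claim can also be verified mechanically: with the uniform prior, the conditional expected utilities induced by \eqn{53} tie exactly as \eqn{55} prescribes. A sketch, with the label \texttt{N} abbreviating $\mpty$ (the impossibility half, of course, is not checkable by a single example):

```python
from fractions import Fraction as F

P = {'e': F(1, 3), 'h': F(1, 3), 'f': F(1, 3)}     # uniform prior
events = {'Omega': {'e', 'h', 'f'}, 'N': {'e', 'h'}, 'F': {'h', 'f'}}

def cond_eu(b, B):
    # E[u_b | B] with u_b(omega) = 1 iff b names omega, as in (53)
    mass = sum(P[w] for w in events[B])
    return P[b] / mass if b in events[B] else F(0)

def zeta(B):
    # Alternatives maximizing conditional expected utility given B
    best = max(cond_eu(b, B) for b in 'ehf')
    return {b for b in 'ehf' if cond_eu(b, B) == best}

assert zeta('Omega') == {'e', 'h', 'f'}            # matches (55)
assert zeta('N') == {'e', 'h'}
assert zeta('F') == {'h', 'f'}
```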
\section{Introduction}
Many scientific approaches that are related to the treatment of real world phenomena rely on the computation of integrals on high-dimensional domains which often cannot be treated analytically. Examples include physics \cite{binderbook}, computational finance \cite{Glasserman:2003}, econometrics \cite{monfortbook} and machine learning \cite{avron2016quasi,ijcai2017-207,rahimi}. In this paper, we aim for efficient and stable numerical methods to approximately
compute the integral
\[
I_d(f) \,:=\, \int_{[0,1]^d} f(\bsx) \,\mathrm{d}\bsx
\]
and give reliable error guarantees for a class $F_d$ of $d$-variate functions. In fact, we are particularly interested
in the worst-case error
\begin{equation}\label{eNFd}
e(n,F_d) := \sup\limits_{\|f\|_{F_d} \leq 1} |I_d(f) - Q_n^d(f)|,
\end{equation}
for special cubature formulas of type
\begin{equation}\label{eq:Q}
Q^d_n(f) \,=\, \frac{1}{n}\sum_{\boldsymbol{k} \in\Z^d}\, f(\bm{A}_n \boldsymbol{k}),
\end{equation}
where $\bm{A}_n:=n^{-1/d} \bm{A}$ is a suitable $d\times d$-matrix with $\det(\bm{A})=1$. This type of
cubature rule has a long history going back to the 1970s, see Frolov \cite{Fr76}. In \eqref{eq:Q} the function $f$
is assumed to be supported on a bounded domain $\Omega$ such that only finitely many summands contribute to the sum. Frolov
noticed that the property
\begin{equation}\label{Nm}
\mathrm{Nm} (\bm{A}) \coloneqq
\inf_{\bm{k}\in\Z^d\setminus\{0\}} \Big|\prod_{i=1}^d (\bm{Ak})_i\Big| > 0
\end{equation}
guarantees an optimal asymptotic worst-case behavior of \eqref{eNFd} with respect to functions
with $L_p$-bounded mixed derivative of order $r \in \N$ supported in $[0,1]^d$. In this context, optimality
means that the worst-case error \eqref{eNFd} cannot be improved in the order sense by any other cubature
formula using the same number of points. Note that in case $|\det \bm{A}| = 1$ it can be shown that
\begin{equation} \label{eqn_scalingVolume}
n^{-1}\abs{\{\boldsymbol{k}\colon \bm{A}_n \boldsymbol{k}\in\Omega\}}\to 1
\end{equation}
for every set $\Omega$ with (Lebesgue) volume $1$ \cite{Sk94}.
Frolov showed that the set of matrices satisfying \eqref{Nm} is not empty. Moreover, he gave a
rather sophisticated number theoretic construction with a lot of potential for numerical analysis, as we will see in this
paper. Starting with the irreducible (over $\mathbb{Q}$)
polynomial
\begin{equation} \label{eqn_classical_polynomial}
P_d(x) = \prod_{j=1}^d (x-2j+1)-1 = \prod_{i=1}^d (x-\xi_i)
\end{equation}
he defined the Vandermonde matrix
\begin{equation}\label{f21}
\bm{V} = \left(\begin{array}{cccc}
1 & \xi_1 & \cdots & \xi_1^{d-1}\\
1 & \xi_2 & \cdots & \xi_2^{d-1}\\
\vdots & \vdots & \ddots & \vdots\\
1 & \xi_{d} & \cdots & \xi_d^{d-1}
\end{array}\right)\,\in\mbox{GL}_d(\R)\,.
\end{equation}
One reason for the increasing interest in Frolov's cubature rule is certainly the fact that once a good matrix
satisfying \eqref{Nm} is fixed, the integration nodes are simply given as the rescaled image of the integer lattice $\Z^d$
under the matrix $\bm{V}$. The method is therefore comparably \emph{simple}. Another striking aspect is a property which is
sometimes called \emph{universality}. The method \eqref{eq:Q} is not designed for a specific class of functions
$F_d$, as is often the case for the commonly used quasi-Monte Carlo methods based on digital nets. In other words, we do not need
to incorporate any a priori knowledge about the integrand (e.g. mixed or isotropic regularity etc.).
In this paper we are interested in an efficient implementation and the numerical performance of different
Frolov type cubature methods for functions on $[0,1]^d$. First of all, this requires the efficient
enumeration of Frolov lattice nodes in axis parallel boxes. It turned out that this is a
highly non-trivial task which has been already considered by several authors
\cite{Kac16}, \cite{KaOeUl17}, \cite{SuYo16} including three of the
present ones.
With a naive approach one may need to touch many more integer lattice points $\boldsymbol{k}\in \Z^d$ (overhead)
to check whether $\bm{A}\boldsymbol{k} \in [0,1]^d$. This increases the runtime of an enumeration algorithm drastically in high
dimensions. Here, the chosen irreducible polynomial for \eqref{f21} has a significant effect. In \cite{KaOeUl17} the
authors observed that for $d=2^m$ Chebyshev polynomials lead to an orthogonal lattice and an equivalent (orthogonal)
lattice representation matrix with entries smaller than two in modulus. By exploiting rotational
symmetry properties the mentioned overhead can be reduced and the enumeration procedure is less costly.
This observation already indicated that the choice of the polynomials in \eqref{f21} is crucial. Unfortunately,
Chebyshev polynomials and corresponding Vandermonde matrices \eqref{f21} only provide \eqref{Nm} if $d = 2^m$. This
has been shown for instance in Temlyakov \cite{tem93}. The question remains how to fill the gaps. The classical Frolov
polynomials are inappropriate in two respects. First, their roots spread over the range $[-d,d]$ such that \eqref{f21} gets
highly ill-conditioned. Second, although the lattice satisfies \eqref{Nm}, the points are not really ``space filling'',
meaning that they accumulate around a lower-dimensional manifold. This has a severe numerical impact on the
worst-case error. In fact, the asymptotic rate of convergence is optimal but the preasymptotic behavior is useless for
any practical purposes.
One of the main contributions of the paper is the list of new improved generating polynomials given in Section 3 below.
We give polynomials which are optimized according to the mentioned issues in dimensions $d=1,...,10$, especially with a
narrow distribution of their roots. As already mentioned above, Chebyshev polynomials themselves are not irreducible if $d$ is not a power of
two. However, they may provide admissible factors. This is the main idea of the construction and works if $d \in
\{2,3,4,5,6,8,9,10\}$. As for the case $d=7$ a brute force search led to a polynomial with roots in $(-2.25,1.75)$.
Due to the mentioned \emph{universality} of Frolov's cubature rule, it is enough to fix the matrix and the corresponding
lattice once and for all. In fact, the point construction does not depend on the respective framework. Therefore, it makes sense
to generate the lattice points in a preprocessing step and make them available for practitioners.
Our enumeration algorithm is similar to the one in \cite{KaOeUl17} and extends to non-orthogonal lattices
by exploiting a $QR$-factorization, see Section 4. Based on the above list of
polynomials we generated a database of Frolov lattice nodes for dimensions up to $d=10$ and $N \approx 10^6$
points. The points
are available for download and direct use on the website
\begin{center}
\texttt{http://wissrech.ins.uni-bonn.de/research/software/frolov/}
\end{center}
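For illustration, a brute-force enumeration (deliberately naive, exhibiting exactly the overhead that the $QR$-based algorithm of Section 4 avoids) can be sketched as follows; it bounds the relevant integer vectors via the preimages of the corners of the target box:

```python
import itertools

import numpy as np

def frolov_points(A, n, d):
    """Enumerate the points of n^(-1/d) A Z^d lying in [0,1]^d by a naive
    bounding-box search over the integer vectors k (assumes det A = 1)."""
    An = n ** (-1.0 / d) * A
    inv = np.linalg.inv(An)
    corners = np.array(list(itertools.product([0.0, 1.0], repeat=d)))
    pre = corners @ inv.T                 # preimages of the cube's corners
    lo = np.floor(pre.min(axis=0)).astype(int)
    hi = np.ceil(pre.max(axis=0)).astype(int)
    pts = [An @ np.array(k)
           for k in itertools.product(*(range(a, b + 1)
                                        for a, b in zip(lo, hi)))]
    return np.array([x for x in pts if np.all((0.0 <= x) & (x <= 1.0))])

V = np.array([[1.0, np.sqrt(2.0)], [1.0, -np.sqrt(2.0)]])  # d = 2 Vandermonde
A = V / abs(np.linalg.det(V)) ** 0.5      # rescale so that |det A| = 1
pts = frolov_points(A, 64, 2)
```

In accordance with \eqref{eqn_scalingVolume}, the number of returned points is close to $n$; the search box, however, contains many more candidates than points, which is the overhead discussed above.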
Having generated the cubature points we are now able to test the
performance of various Frolov methods for functions with bounded mixed (weak) derivative, i.e.,
\begin{equation}\label{mixed}
\langle f^{({\boldsymbol{r}})}, f^{({\boldsymbol{r}})}\rangle_{L_2} = \|f^{({\boldsymbol{r}})}\|_2^2 < \infty\,,
\end{equation}
where ${\boldsymbol{r}} = (r_1,...,r_d) \in \N^d$ is a smoothness vector with integer components satisfying
\begin{equation}\label{sm_vector}
r=r_1 = ... =r_\nu < r_{\nu+1} \leq r_{\nu+2} \leq ... \leq r_d\,.
\end{equation}
A natural assumption, see
\eqref{eq:Q}, is the restriction to functions $f$ supported inside the unit cube $\Omega = [0,1]^d$ satisfying
\eqref{mixed}. In this case the semi-norm \eqref{mixed} becomes a norm and the corresponding space a Hilbert space
which will be denoted with $\mathring{H}^{\mathbf{r}}_{\text{mix}}$\,.
The nowadays
well-known worst-case error
\begin{equation}\label{f22}
e(n,\mathring{H}^{\mathbf{r}}_{\text{mix}}) \asymp n^{-r}(\log n)^{(\nu-1)r},
\end{equation}
has been established in many classical papers \cite{Fr76}, \cite{Du92,Du97}, \cite{tem93}, see
also the more recent papers \cite{MU16} and \cite{UlUl16}\,. Note that we encounter
another aspect of the \emph{universality}
property for this particular framework of \emph{anisotropic mixed smoothness}. When using for instance a sparse
grid approach (see e.g. Appendix A) for the numerical integration one has to know which direction is ``rough'' in the above sense to
adapt the sparse grid accordingly.
In fact, one samples more points in rough directions and fewer points in smoother directions. Frolov's method does
not need this a priori information and behaves according to the optimal rate of convergence given in \eqref{f22}.
We will again provide a streamlined and self-contained proof in Section 6, making explicit the dependence of the constants on the dimension $d$, since the rate of convergence given by \eqref{f22} completely hides this
dependence. In fact, in case of one minimal smoothness component in \eqref{sm_vector} even the logarithm disappears completely and we
have a pure polynomial rate as in the univariate setting. In Theorem \ref{thm:theor-error} below
we give a worst-case error bound which shows the influence of the dimension $d$. In addition, the result illustrates
how the lattice invariants, like the polynomial discriminant $D_P$ and the $\ell_\infty$-diameter of the smallest
fundamental cell enter the error estimates.
Since $\mathring{H}^{\mathbf{r}}_{\text{mix}}$ is embedded into the space of continuous functions a reproducing
kernel exists \cite{aronszajn}. We use the approach of Wahba \cite{Wah90} as a starting point
to derive its reproducing kernel.
Together with a standard correction procedure, cf. \cite[Lem. 3, Thm.\ 11]{berlinet}, we derive an explicit
formula given in Theorem \ref{repr_kernel} and \eqref{f7_2} below. The reproducing kernel is then used
to compute the exact worst-case errors, which equal the norm of the error functional, i.e., of its
Riesz representer, and can therefore be computed exactly.
This approach allows us to gain insight into the true behavior of the constants that are involved in the bounds for the integration error and are usually only estimated.
Let us emphasize once again that we simulate the worst-case error with respect to a whole function class rather than testing the algorithm on a single prototype test function.
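The underlying computation takes the standard RKHS form: the squared worst-case error of the equal-weight rule \eqref{eq:Q} in a space with reproducing kernel $K$ equals $\int\!\!\int K \,\mathrm{d}\bsx\,\mathrm{d}\bsy - \frac{2}{n}\sum_i \int K(\bsx_i,\bsy)\,\mathrm{d}\bsy + \frac{1}{n^2}\sum_{i,j} K(\bsx_i,\bsx_j)$. A generic sketch, illustrated here with the classical univariate kernel $K(x,y)=\min(x,y)$ rather than the kernel of Theorem \ref{repr_kernel}:

```python
import math

def worst_case_error(points, kernel, kernel_int, kernel_double_int):
    """Exact worst-case error of Q(f) = (1/n) sum_i f(x_i) for I(f) over
    the unit ball of the RKHS with kernel `kernel`; `kernel_int(x)` must
    return the integral of K(x, .) and `kernel_double_int` the double
    integral of K, both in closed form."""
    n = len(points)
    e2 = (kernel_double_int
          - 2.0 / n * sum(kernel_int(x) for x in points)
          + sum(kernel(x, y) for x in points for y in points) / n ** 2)
    return math.sqrt(max(e2, 0.0))

# Univariate sanity check with K(x, y) = min(x, y), where
# int_0^1 K(x, y) dy = x - x^2/2 and the double integral is 1/3;
# the single-node midpoint rule then has worst-case error 1/sqrt(12).
e = worst_case_error([0.5], min, lambda x: x - x * x / 2, 1.0 / 3.0)
assert abs(e - 1 / math.sqrt(12)) < 1e-12
```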
Finally, in Section 7 we show the results of several numerical experiments. In the first part of the experiment section
we compare different well-known methods for numerical integration in the reproducing kernel Hilbert space framework
which we established in Sections 5 and 6. In particular, we compare Frolov lattices based on different generating polynomials,
the classical Frolov polynomials and the improved polynomials from Section 3. As one would expect, the numerical
behavior of the respective worst-case errors differs significantly for small $n$. Whereas the improved polynomials lead to a
rather satisfactory error decay, the classical method is numerically completely useless as the dimension increases.
Interestingly, in case $d=2$ Frolov lattices according to the golden ratio polynomial compete with the Fibonacci lattice rule.
We also compare Frolov lattices and sparse grids with respect to the numerical performance. Note that the
sparse grid cubature method represents a further method which is able to benefit from higher (mixed) smoothness.
However, it is well known \cite{DuUl15} that sparse grids show a worse behavior in the logarithm
compared to Frolov lattices. Our experiments validate this theoretical fact. Among the considered methods (sparse grids, quasi-Monte Carlo) Frolov lattices
behave best in our setting. In addition, Frolov lattices do not have to be adapted to the present anisotropy when
considering anisotropic mixed smoothness. When considering one minimal smoothness component \eqref{sm_vector}
we observe the same pure polynomial rate in different dimensions, only the constant differs.
Note that this effect would also be present for sparse grids
adapted to the smoothness vector, which one has to know in advance.
\textbf{Notation.} As usual $\N$ denotes the natural numbers,
$\Z$ denotes the integers,
and $\R$ the real numbers.
The letter $d$ is always reserved for the underlying dimension in $\R^d, \Z^d$ etc. We denote
with $\bsx \cdot \bsy$
the usual Euclidean inner product in $\R^d$.
For $0<p\leq \infty$ we denote with $|\cdot |_p$ and $\|\cdot \|_p$ the ($d$-dimensional)
discrete $\ell_p$-norm and the continuous $L_p$-norm on $\R^d$, respectively,
where $B_p^d$ denotes the respective unit ball in $\R^d$.
The function $(\cdot)_+$ is given by $\max\{\cdot,0\}$.
With $\mathcal{F}$ we denote the Fourier transform given by $\mathcal{F} f(\boldsymbol{\xi}):=\int_{\R^d} f(\bsx)\exp(-2\pi i \bsx\cdot \boldsymbol{\xi})\,\mathrm{d}\bsx$ for
a function $f\in L_1(\R^d)$ and $\boldsymbol{\xi} \in \R^d$. For two sequences of real numbers $a_n$ and $b_n$ we will write
$a_n \lesssim b_n$ if there exists a constant $c>0$ such that
$a_n \leq c\,b_n$ for all $n$. We will write $a_n \asymp b_n$ if
$a_n \lesssim b_n$ and $b_n \lesssim a_n$. With $\mathrm{GL}_d:=\mathrm{GL}_d(\mathbb{R})$ we
denote the group of invertible matrices over $\R$, whereas $\mathrm{SO}_d:=\mathrm{SO}_d(\R)$
denotes the group of orthogonal matrices over $\R$ with unit determinant. With $\mathrm{SL}_d(\mathbb{Z})$ we
denote the group of invertible matrices over $\Z$ with unit determinant.
The notation $D:=\mbox{diag}(x_1,...,x_d)$ with $\bsx = (x_1,...,x_d) \in \R^d$
refers to the diagonal matrix $D \in \R^{d\times d}$ with $\bsx$ at the diagonal.
With $\mathrm{gcd}(a,b)$ we denote the greatest common divisor of two positive integers $a,b$.
And finally, by $\mathbb{Z}[x]$ we denote the ring of polynomials with integer coefficients.
Although we consider different generating matrices for admissible lattices
in the forthcoming, we do not specify the matrix in the denotation $Q^d_n$.
This is, because we will fix, for every dimension $d$ under consideration,
a matrix that is optimal in a sense that will be explained later.
To be precise, for a given dimension $d$, the matrix $\bm{A}$ will be a multiple of
the Vandermonde matrix as defined in Theorem~\ref{construction} with the
specific polynomials (and roots) as given in Table~\ref{PolTable}.
\section{Admissible lattices and their representation} \label{sec:Admissible}
For a matrix $\bm{T}\in\mbox{GL}_d(\R)$, we call $\{\bm{Tk}:\bm{k}\in \Z^d\} = \bm{T}(\Z^d)$
a (full-rank) lattice with lattice representation matrix $\bm{T}$.
For a matrix $\bm{U}\in\mbox{SL}_d(\Z)$, the matrices $\bm{T}$ and $\bm{TU}$ generate the same lattice,
and it can easily be shown that all possible lattice representations of $\bm{T}(\Z^d)$ are given this way.
Therefore, it makes sense to define the determinant of a lattice $\bm{T}(\Z^d)$ as $|\det \bm{T}|$.
We want to mention that for a given lattice,
it is often preferred to have a lattice representation matrix $\bm{T} = (\bm{t}_1|\cdots|\bm{t}_d) \in \mbox{GL}_d(\R)$
with column vectors $\bm{t}_1,\ldots,\bm{t}_d\in\R^d$ that are small with respect to some norm,
cf. Figure \ref{fig_equiv_lattice}.
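As a small numerical illustration of this invariance (the concrete matrices are our own toy example, not part of the construction), the following Python snippet checks that $\bm{T}$ and $\bm{TU}$ with $\bm{U}\in\mathrm{SL}_2(\Z)$ have the same determinant in absolute value, and that every lattice point of one representation is a lattice point of the other:

```python
import numpy as np

# Toy example: T represents a lattice, U is an integer matrix with
# det U = 1, so T and T @ U represent the same lattice.
T = np.array([[1.0, np.sqrt(2.0)],
              [1.0, -np.sqrt(2.0)]])
U = np.array([[1.0, 1.0],
              [0.0, 1.0]])
T2 = T @ U

# |det T| is invariant under the change of representation:
print(abs(np.linalg.det(T)), abs(np.linalg.det(T2)))

# every point T k is also a point T2 m with the integer vector m = U^{-1} k:
k = np.array([3.0, -2.0])
m = np.linalg.solve(T2, T @ k)
print(np.allclose(m, np.round(m)))  # True
```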
\begin{figure}[t]
\centering
\includegraphics[width=0.48\linewidth]{fig/equivalent_lattice_representations-figure0.pdf}
\includegraphics[width=0.48\linewidth]{fig/equivalent_lattice_representations-figure1.pdf}
\caption{Equivalent lattice representations within the unit cube $\Omega=\left[-1/2, 1/2\right]^2$.
The highlighted lattice elements are the column vectors of the corresponding lattice representation matrix.}
\label{fig_equiv_lattice}
\end{figure}
Crucial for the performance of the Frolov cubature formula \eqref{eq:Q} will be the notion of \emph{admissibility}
which is settled in the following definition.
\begin{definition}[Admissible lattice]
A lattice $\bm{T}(\Z^d)$ is called admissible if
\begin{equation}\label{adm}
\mathrm{Nm} (\bm{T}) \coloneqq
\inf_{\bm{k}\in\Z^d\setminus\{0\}} \Big|\prod_{i=1}^d (\bm{Tk})_i\Big| > 0
\end{equation}
holds true.
\end{definition}
\noindent Figure \ref{fig_lattice_hc} illustrates this property.
In fact, all lattice points different from $0$ lie outside a hyperbolic
cross with `radius' $\mathrm{Nm}(\bm{T})$.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\linewidth]{fig/figure_lattice_and_hyperbolic_cross-figure0.pdf}
\caption{Admissible lattice and hyperbolic cross.} \label{fig_lattice_hc}
\end{figure}
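The quantity $\mathrm{Nm}(\bm{T})$ can be estimated by brute force over a finite box of integer vectors (a toy illustration of the definition, not used in the actual computations; all names are ours). For the lattice of $P(x) = x^2 - 2$ the product $(\bm{Vk})_1(\bm{Vk})_2 = k_1^2 - 2k_2^2$ is a nonzero integer for $\bm{k}\neq 0$, so the estimate equals $1$:

```python
import itertools
import numpy as np

def truncated_Nm(T, radius):
    """Estimate Nm(T) from above: minimize |prod_i (Tk)_i| over all
    nonzero integer vectors k with |k_j| <= radius."""
    d = T.shape[0]
    best = np.inf
    for k in itertools.product(range(-radius, radius + 1), repeat=d):
        if all(kj == 0 for kj in k):
            continue
        val = abs(np.prod(T @ np.array(k, dtype=float)))
        best = min(best, val)
    return best

# Lattice from P(x) = x^2 - 2 with roots +-sqrt(2):
V = np.array([[1.0, np.sqrt(2.0)],
              [1.0, -np.sqrt(2.0)]])
# (Vk)_1 (Vk)_2 = k1^2 - 2 k2^2 is a nonzero integer, hence Nm(V) = 1:
print(truncated_Nm(V, 20))
```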
Our construction of choice for admissible lattices is given by the following procedure.
\begin{proposition}\label{construction}
Let $P(x)$ be a polynomial of degree $d$ satisfying
\begin{itemize}
\item $P$ has integer coefficients,
\item $P$ has leading coefficient $1$,
\item $P$ is irreducible over $\Q$,
\item $P$ has $d$ different real roots $\xi_1, \ldots , \xi_d$\,.
\end{itemize}
The Vandermonde matrix
\begin{equation}\label{Vanderm}
\bm{V} = \left(\begin{array}{cccc}
1 & \xi_1 & \cdots & \xi_1^{d-1}\\
1 & \xi_2 & \cdots & \xi_2^{d-1}\\
\vdots & \vdots & \ddots & \vdots\\
1 & \xi_{d} & \cdots & \xi_d^{d-1}
\end{array}\right)\,\in\mbox{GL}_d(\R)
\end{equation}
generates an admissible lattice $\bm{V}(\Z^d)$ with $\mathrm{Nm}(\bm{V}) = 1$.
Its determinant equals the polynomial discriminant $D_P$ of $P$:
\begin{equation}\label{det_Vandermonde}
|\det \bm{V}| = \prod_{k<l}|\xi_k-\xi_l| = D_P\,.
\end{equation}
Moreover, it holds
\begin{equation}\label{DualityOfNorm}
\mathrm{Nm}(\bm{V}^{-\top}) = |\det \bm{V}|^{-2} = D_P^{-2}\,.
\end{equation}
\end{proposition}
The necessary prerequisites on $P$ can be reformulated with concepts of algebraic number theory:
$P$ is the minimal polynomial of an algebraic integer of order $d$.
For the proof of this statement we refer to \cite{Kac16},
or \cite{GrLekk87} and \cite{Marcus} for a thorough introduction into the theory of algebraic integers.
The quantity \eqref{DualityOfNorm}
has a direct impact on the convergence behavior of the Frolov cubature formula
and we therefore are interested in polynomials which maximize this quantity for a fixed $d$,
i.e. have a small (or the smallest) polynomial discriminant $D_P$, cf. Figure \ref{2dLattices}.
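The identity \eqref{det_Vandermonde} is easy to check numerically; for the polynomial $P(x) = x^2 - x - 1$ from Figure \ref{2dLattices} one obtains $|\det\bm{V}| = \sqrt{5}$. The following Python snippet is our own illustration (the helper name is ours):

```python
import numpy as np

def vandermonde(roots):
    """Vandermonde matrix with entries V[i, j] = roots[i]**j,
    as in Proposition construction (0-based column index)."""
    xi = np.asarray(roots, dtype=float)
    return np.vander(xi, N=len(xi), increasing=True)

# P(x) = x^2 - x - 1 with roots (1 +- sqrt(5))/2:
roots = np.roots([1.0, -1.0, -1.0])
V = vandermonde(roots)

# |det V| = prod_{k<l} |xi_k - xi_l| = sqrt(5), the value D_P in Table 1:
print(abs(np.linalg.det(V)), np.sqrt(5.0))
```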
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/grid17.pdf}
\caption{ \small{$P(x) = x^2-17$, \\$D_P = 2\sqrt{17}$}}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/grid8.pdf}
\caption{ \small{$P(x) = x^2-8$, \\$D_P = 2\sqrt{8}$}}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/grid3.pdf}
\caption{ \small{$P(x) = x^2-3$, \\$D_P = 2\sqrt{3}$}}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/gridU.pdf}
\caption{ \small{$P(x) = x^2-x-1$, \\$D_P = \sqrt{5}$}}
\end{subfigure}
\caption{Lattices corresponding to different polynomials for $d=2$.
A small discriminant correlates with a good distribution of lattice points.}
\label{2dLattices}
\end{figure}
Using Proposition \ref{construction}, we obtain the lattice $\bm{V}(\Z^d)$ represented by a Vandermonde matrix $\bm{V}$.
From a numerical point of view, such matrices pose two problems:
First, they have large column vectors and therefore a large condition number,
and second, their entries are of the form $\bm{V}_{ij} = \xi_i^{j-1}$,
whose computation becomes unstable for increasing $j$.
However, we can bypass these problems using special polynomials, which will be discussed in the next section.
\begin{lemma}\label{LatRep}
Let $P$ be a polynomial which satisfies the prerequisites in Proposition \ref{construction},
and has roots $\xi_1,\ldots,\xi_d$ which lie in $(-2, 2)$.
Furthermore, let $\omega_1,\ldots,\omega_d\in (-1,1)$ be defined via the equation
\begin{equation*}
2\cos(\pi\omega_k) = \xi_k\,,\quad k=1,\ldots,d\,.
\end{equation*}
The lattice $\bm{V}(\Z^d)$ generated by the associated Vandermonde matrix
\begin{equation*}
\bm{V} = \left(\begin{array}{cccc}
1 & \xi_1 & \cdots & \xi_1^{d-1}\\
1 & \xi_2 & \cdots & \xi_2^{d-1}\\
\vdots & \vdots & \ddots & \vdots\\
1 & \xi_{d} & \cdots & \xi_d^{d-1}
\end{array}\right)\,\in\mbox{GL}_d(\R)
\end{equation*}
is also generated by the matrix $\bm{T}$ with
\begin{equation*}
\bm{T}_{kl} =
\begin{cases}
1&
l=1\,,\\
2\cos\left(\pi (l-1)\omega_k\right)&
l=2,\dots, d\,.
\end{cases}
\end{equation*}
\end{lemma}
The resulting matrix $\bm{T}$ has entries in $(-2,2)$ that can be calculated in a numerically stable way,
which is optimal for our purposes.
The proof is a straightforward application of Euler's identity
and can be found in \cite{Kac16, KaOeUl17}.
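As a quick numerical check of Lemma \ref{LatRep} (an illustration in our own notation), the following snippet verifies for $d=3$ and $P(x)=x^3+x^2-2x-1$, whose roots are $\xi_k = 2\cos(2\pi k/7)$, that $\bm{S} = \bm{V}^{-1}\bm{T}$ is an integer matrix with determinant $\pm 1$, so both matrices indeed represent the same lattice:

```python
import numpy as np

# d = 3, P(x) = x^3 + x^2 - 2x - 1, roots xi_k = 2 cos(pi * omega_k)
# with omega_k = 2k/7:
d = 3
omega = 2.0 * np.arange(1, d + 1) / 7.0
xi = 2.0 * np.cos(np.pi * omega)

V = np.vander(xi, N=d, increasing=True)              # V[k, l] = xi_k^l
T = np.array([[1.0 if l == 0 else 2.0 * np.cos(np.pi * l * w)
               for l in range(d)] for w in omega])   # cosine representation

# T = V S with an integer matrix S, det S = +-1:
S = np.linalg.solve(V, T)
print(np.round(S).astype(int))
print(abs(np.linalg.det(S)))
```

(For this polynomial one finds $\bm{S}$ equal to the identity up to the single column operation $\bm{t}_3 = \bm{v}_3 - 2\bm{v}_1$, reflecting $2\cos(2\theta) = (2\cos\theta)^2 - 2$.)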
Finally,
we specify our choice of the matrices $\bm{A}$ and $\bm{A}_n$ used in Frolov's cubature formula \eqref{eq:Q} as
\begin{equation}
\bm{A} = |\det \bm{T}|^{-1/d}\bm{T}, \quad \bm{A}_n = n^{-1/d}\bm{A}
\end{equation}
with $\bm{T}$ as in Lemma \ref{LatRep}.
\section{Improved generating polynomials} \label{sec:polynomials}
In this section, we consider polynomials which can be used to create admissible lattices.
We will call such polynomials \emph{admissible},
i.e. a $d$-th order polynomial $P$ is admissible if it satisfies the prerequisites of Proposition \ref{construction}.
At the end of this section,
we provide a list of admissible polynomials with small discriminant for $d=2,\ldots,10$.
The study of Chebyshev polynomials of the first and second kind
provides us with a wide range of admissible polynomials.
Their most important features are their real and pairwise different roots,
as well as the narrow distribution thereof.
It is also fortunate that their decomposition into irreducible factors is well-understood
and can be stated explicitly, see \cite{RTW98}.
\begin{definition}
The Chebyshev Polynomials of the first kind $T_d(x)$ are defined recursively via
\begin{eqnarray*}
T_0(x) &=& 1,\\ T_1(x) &=& x,\\ T_d(x) &=& 2xT_{d-1}(x) - T_{d-2}(x),\quad d\geq 2\,.
\end{eqnarray*}
The Chebyshev Polynomials of the second kind $U_d(x)$ are defined recursively via
\begin{eqnarray*}
U_0(x) &=& 1,\\ U_1(x) &=& 2x,\\ U_d(x) &=& 2xU_{d-1}(x) - U_{d-2}(x),\quad d\geq 2\,.
\end{eqnarray*}
\end{definition}
\begin{lemma}
The Chebyshev polynomial $T_d(x)$ has the roots
\begin{equation*}
\cos\left(\frac{\pi (2k-1)}{2d}\right),\quad k=1,\dots, d\,.
\end{equation*}
The Chebyshev polynomial $U_d(x)$ has the roots
\begin{equation*}
\cos\left(\frac{\pi k}{d+1}\right),\quad k=1,\dots, d\,.
\end{equation*}
\end{lemma}
The polynomials $T_d(x)$ and $U_d(x)$ are not admissible since they do not have leading coefficient $1$.
But they can be scaled appropriately to achieve this.
\begin{lemma}
The scaled Chebyshev polynomials ${\widetilde{T}}_d(x) = 2T_d(x/2)$
and ${\widetilde{U}}_d(x) = U_d(x/2)$ have leading coefficient $1$ and belong to $\Z[x]$.
The scaled Chebyshev polynomial ${\widetilde{T}}_d(x)$ has the roots
\begin{equation*}
t_{d,k} = 2\cos\left(\frac{\pi (2k-1)}{2d}\right),\quad k=1,\dots, d\,.
\end{equation*}
The scaled Chebyshev polynomial ${\widetilde{U}}_d(x)$ has the roots
\begin{equation*}
u_{d,k} = 2\cos\left(\frac{\pi k}{d+1}\right),\quad k=1,\dots, d\,.
\end{equation*}
\end{lemma}
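Since $\widetilde{U}_d(x) = U_d(x/2)$, the recursion for $U_d$ translates directly into the monic recursion $\widetilde{U}_d(x) = x\,\widetilde{U}_{d-1}(x) - \widetilde{U}_{d-2}(x)$. The following Python sketch (our own illustration) computes the integer coefficients this way and checks the root formula for $d=4$:

```python
import numpy as np

def scaled_chebyshev_U(d):
    """Coefficients (highest degree first) of ~U_d(x) = U_d(x/2),
    via the monic recursion ~U_d = x ~U_{d-1} - ~U_{d-2}."""
    p_prev, p = np.array([1]), np.array([1, 0])   # ~U_0 = 1, ~U_1 = x
    if d == 0:
        return p_prev
    for _ in range(d - 1):
        # multiply p by x (append a zero) and subtract p_prev:
        p_prev, p = p, np.append(p, 0) - np.pad(p_prev, (2, 0))
    return p

# ~U_4(x) = x^4 - 3x^2 + 1: monic with integer coefficients
print(scaled_chebyshev_U(4))

# and its roots are u_{4,k} = 2 cos(pi k / 5), k = 1, ..., 4:
roots = np.sort(np.roots(scaled_chebyshev_U(4)))
expected = np.sort(2.0 * np.cos(np.pi * np.arange(1, 5) / 5.0))
print(np.allclose(roots, expected))
```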
From this lemma it follows directly that irreducible factors of ${\widetilde{T}}_d(x)$
and ${\widetilde{U}}_d(x)$ are admissible and have roots that lie in $(-2,2)$.
As already stated above, \cite{RTW98} lists the complete decomposition of Chebyshev Polynomials into irreducible factors,
which we reformulate for the scaled versions in the next lemma.
\begin{lemma}\label{factorization}
For a fixed $d>1$, we have
\begin{equation*}
{\widetilde{T}}_d(x) = \prod_h D_{d,h}(x)\,,
\end{equation*}
where $h\leq d$ runs through all odd positive divisors of $d$ and
\begin{equation*}
D_{d,h}(x) = \prod_{\substack{k=1 \\ \gcd(2k-1,d)=h}}^d(x-t_{d,k})
\end{equation*}
are all irreducible. It also holds
\begin{equation*}
{\widetilde{U}}_d(x) = \prod_h E_{d,h}(x)\,,
\end{equation*}
where $h\leq d$ runs through all positive divisors of $2d+2$ and
\begin{equation*}
E_{d,h}(x) = \prod_{\substack{k=1\\\gcd(k,2d+2)=h}}^d(x-u_{d,k})
\end{equation*}
are all irreducible.
\end{lemma}
It has been shown in \cite{tem93} that ${\widetilde{T}}_d(x)$ is irreducible for $d=2^m, m\in\N$
and that the corresponding lattice is orthogonal \cite{KaOeUl17}.
However, in this paper we are more interested in the irreducible factors of ${\widetilde{U}}_d(x)$,
mainly for two reasons.
First, it can be easily seen that the irreducible factors of ${\widetilde{T}}_d(x)$ have paired roots, i.e.
\begin{equation*}
D_{d,h}(t_{d,k}) = 0 \Rightarrow D_{d,h}(t_{d,d-k+1}) = 0\,.
\end{equation*}
This means that either $D_{d,h}(x) = x$ or $D_{d,h}(x)$ is a polynomial of even degree,
limiting the usefulness to our purposes.
Second, it appears to be the case that the discriminant of ${\widetilde{U}}_d(x)$
is smaller than the discriminant of ${\widetilde{T}}_d(x)$,
which makes the factors of ${\widetilde{U}}_d(x)$ more attractive to us.
The following lemma is a consequence of Lemma \ref{factorization}.
\begin{sidewaystable}
\centering
\begin{tabular}{|c|c|l|c|}
\hline
dimension $d$ & notation & polynomial \& roots & discriminant $D_P$\\
\hline
\multirow{2}{*}{2} & \multirow{2}{*}{$E_{4,2}(x)$} & $x^2+x-1$ & \multirow{2}{*}{$2.24$} \\
&&$2\cos\left(\pi \frac{2i}{2d+1} \right),\, i=1,\ldots,d$&\\
\hline
\multirow{2}{*}{3} & \multirow{2}{*}{$E_{6,2}(x)$} & $x^3+x^2-2x-1$ & \multirow{2}{*}{$7$} \\
&&$2\cos\left(\pi \frac{2i}{2d+1} \right),\, i=1,\ldots,d$&\\
\hline
\multirow{2}{*}{4} & \multirow{2}{*}{$E_{14,2}(x)$} & $x^4-x^3-4x^2+4x+1$ & \multirow{2}{*}{$33.54$} \\
&&$2\cos\left(\pi \frac{2}{15}\right),2\cos\left(\pi \frac{4}{15}\right),2\cos\left(\pi \frac{8}{15}\right),2\cos\left(\pi \frac{14}{15}\right)$&\\
\hline
\multirow{2}{*}{5} & \multirow{2}{*}{$E_{10,2}(x)$} & $x^5+x^4-4 x^3-3 x^2+3 x+1$ & \multirow{2}{*}{$121$} \\
&&$2\cos\left(\pi \frac{2i}{2d+1} \right),\, i=1,\ldots,d$&\\
\hline
\multirow{2}{*}{6} & \multirow{2}{*}{$E_{12,2}(x)$} & $x^6+x^5-5 x^4-4 x^3+6 x^2+3 x-1$ & \multirow{2}{*}{$609.34$} \\
&&$2\cos\left(\pi \frac{2i}{2d+1} \right),\, i=1,\ldots,d$&\\
\hline
\multirow{2}{*}{7} & \multirow{2}{*}{$P_7(x)$} & $x^7+x^6-6x^5-4x^4+10x^3+4x^2-4x-1$ & \multirow{2}{*}{$4487.14$} \\
&&no explicit formula available&\\
\hline
\multirow{2}{*}{8} & \multirow{2}{*}{$E_{16,2}(x)$} & $x^8+x^7-7 x^6-6 x^5+15 x^4+10 x^3-10 x^2-4 x+1$ & \multirow{2}{*}{$20256.8$} \\
&&$2\cos\left(\pi \frac{2i}{2d+1} \right),\, i=1,\ldots,d$&\\
\hline
\multirow{2}{*}{9} & \multirow{2}{*}{$E_{18,2}(x)$} & $x^9+x^8-8x^7-7x^6+21x^5+15x^4-20x^3-10x^2+5x+1$ & \multirow{2}{*}{$130321$} \\
&&$2\cos\left(\pi \frac{2i}{2d+1} \right),\, i=1,\ldots,d$&\\
\hline
\multirow{3}{*}{10} & \multirow{3}{*}{$E_{24,2}(x)$} & $x^{10}-10x^8+35x^6+x^5-50x^4-5x^3+25x^2+5x-1$ & \multirow{3}{*}{$873464$} \\
&&$2\cos\left(\pi \frac{2}{25}\right),2\cos\left(\pi \frac{4}{25}\right),2\cos\left(\pi \frac{6}{25}\right),2\cos\left(\pi \frac{8}{25}\right),2\cos\left(\pi \frac{12}{25}\right),$&\\
&&$2\cos\left(\pi \frac{14}{25}\right),2\cos\left(\pi \frac{16}{25}\right),2\cos\left(\pi \frac{18}{25}\right),2\cos\left(\pi \frac{22}{25}\right),2\cos\left(\pi \frac{24}{25}\right)$&\\
\hline
\end{tabular}
\caption{Admissible polynomials with small discriminants for $d=2,\ldots, 10$.}
\label{PolTable}
\end{sidewaystable}
\begin{lemma}
Let $d>1$. If $p=2d+1$ is a prime, the $d$th-order polynomial
\begin{equation*}
E_{2d,2}(x) = \prod_{k=1}^d (x-u_{2d,2k})
\end{equation*}
is admissible.
\end{lemma}
\begin{proof}
Consider the factorization of ${\widetilde{U}}_{2d}(x)$. We have $2(2d)+2=4d+2=2p$,
therefore we have for $1\leq k\leq 2d$
\begin{equation*}
\gcd(k,2(2d)+2) =
\begin{cases}
1 & k\text{ odd} \\
2 & k\text{ even} \,.
\end{cases}
\end{equation*}
This implies that ${\widetilde{U}}_{2d}(x)=E_{2d,1}(x)E_{2d,2}(x)$, where both factors are of order $d$.
\end{proof}
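The factor $E_{2d,2}$ can be written down numerically from its roots; the following Python snippet (our own illustration) recovers the integer coefficient vectors listed in Table \ref{PolTable} for $d=2,3,5$:

```python
import numpy as np

def E_2d_2(d):
    """Monic polynomial with roots u_{2d,2k} = 2 cos(2 pi k / (2d+1)),
    k = 1, ..., d (the factor E_{2d,2} when 2d+1 is prime)."""
    roots = 2.0 * np.cos(2.0 * np.pi * np.arange(1, d + 1) / (2 * d + 1))
    # np.poly builds the monic polynomial from its roots; the exact
    # coefficients are integers, so rounding removes the float error:
    return np.round(np.poly(roots)).astype(int)

print(E_2d_2(2))   # x^2 + x - 1
print(E_2d_2(3))   # x^3 + x^2 - 2x - 1
print(E_2d_2(5))   # x^5 + x^4 - 4x^3 - 3x^2 + 3x + 1
```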
This simple rule covers the cases $d\in\{2,3,5,6,8,9\}$.
For the cases $d=4$ and $d=10$ we also found good factors.
\begin{lemma}
The polynomial $E_{14,2}(x)$ is of order $4$ and admissible
and the polynomial $E_{24,2}(x)$ is of order $10$ and admissible.
\end{lemma}
\begin{proof}
Both polynomials are admissible by definition; it remains to compute their order.
We first consider $E_{14,2}(x)$.
Here, $2\cdot 14 +2 = 30$, and for $1\leq k \leq 14$ one has $\gcd(k, 30) = 2$
if and only if $k\in\{2,4,8,14\}$.
Therefore, $E_{14,2}(x)$ is a polynomial of order $4$.
Now consider $E_{24,2}(x)$.
We have $2\cdot 24 + 2 = 50$, and for $1\leq k \leq 24$ one has $\gcd(k,50) = 2$
if and only if $k\in\{2,4,6,8,12,14,16,18,22,24\}$.
Therefore, $E_{24,2}(x)$ is a polynomial of order $10$.
\end{proof}
Unfortunately, the case $d=7$ is not covered by the factorizations of the polynomials ${\widetilde{U}}_d(x)$ and ${\widetilde{T}}_d(x)$.
However, using a numerical brute force approach, we found the following polynomial.
\begin{lemma}
The polynomial
\begin{equation*}
P_7(x) = x^7 + x^6 - 6x^5 - 4x^4 + 10x^3 + 4x^2 - 4x - 1
\end{equation*}
is of order 7 and admissible.
\end{lemma}
\begin{proof}
We have to prove that $P_7(x)$ is irreducible over $\Q$.
It has leading coefficient $1$ and coefficients in $\Z$,
therefore it is irreducible over $\Q$ if and only if it is irreducible over $\Z$.
Here, we consider irreducibility over $\mathbb{F}_2$,
which is a sufficient condition for irreducibility over $\Z$.
In $\mathbb{F}_2$, one has
\begin{equation*}
P_7(x) \equiv x^7 + x^6 + 1\,.
\end{equation*}
Assume that this polynomial is reducible.
Because it has no roots in $\mathbb{F}_2$,
it would have to contain a factor of degree less than $4$ which also has no root in $\mathbb{F}_2$.
The possible candidates are therefore $x^2+x+1$, $x^3+x+1$ and $x^3+x^2+1$.
Doing a polynomial division with these three polynomials,
one finds that
\begin{align*}
x^7+ x^6 + 1 &\equiv (x^2+x+1)(x^5+x^3+x^2+1) &+ x \\
&\equiv (x^3+x+1)(x^4+x^3+x^2) &+ x^2 + 1 \\
&\equiv (x^3+x^2+1)(x^4+x+1) &+ x^2 + x
\end{align*}
and we have a contradiction.
Therefore, $P_7(x)$ is irreducible over $\mathbb{F}_2$, and subsequently also over $\Q$.
\end{proof}
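The case distinction in the proof can also be verified exhaustively by machine. The following Python snippet (our own illustration, encoding polynomials in $\mathbb{F}_2[x]$ as bitmasks) confirms that $x^7+x^6+1$ admits no factorization into monic factors of positive degree over $\mathbb{F}_2$:

```python
def poly_mod2_mul(a, b):
    """Multiply two F_2[x] polynomials given as bitmasks
    (bit i = coefficient of x^i); carry-less multiplication."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

# P_7 mod 2 is x^7 + x^6 + 1, i.e. the bitmask 0b11000001:
target = 0b11000001

# A reducible degree-7 polynomial must have a (monic) factor of
# degree 1, 2 or 3; enumerate all candidate factorizations:
reducible = any(
    poly_mod2_mul(f, g) == target
    for deg_f in (1, 2, 3)
    for f in range(1 << deg_f, 1 << (deg_f + 1))        # monic, deg = deg_f
    for g in range(1 << (7 - deg_f), 1 << (8 - deg_f))  # monic, deg = 7 - deg_f
)
print(not reducible)  # True: x^7 + x^6 + 1 is irreducible over F_2
```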
Even though Lemma \ref{LatRep} is not applicable for this polynomial because its roots lie in $(-2.25, 1.75)$,
they still lie close to each other,
which results in a good polynomial discriminant.
Regarding the lattice representation issue in the $d=7$ case,
one has to compute the Vandermonde matrix $\bm{V}$
explicitly (using an arbitrary precision data type to avoid stability issues)
and find a good lattice representation matrix $\bm{T}$
by means of a lattice reduction algorithm, see for instance \cite{Lenstra1982}.
This completes our list of polynomials used for the dimensions $2\leq d\leq 10$.
We attach Table \ref{PolTable} collecting all polynomials and useful information.
\section{Efficient enumeration of Frolov lattices in $d$-cubes} \label{sec:enumeration}
\SetKwFunction{assemble}{assemble}
\SetKwProg{Fn}{Function}{}{}
\begin{algorithm}[t]
\textbf{Input:}\\
Integration domain $\Omega = [-1/2, 1/2]^d$,\\
Lattice representation matrix $\bm{T}=\bm{QR}$\\
\hrulefill\\
\textbf{set} $\mathcal{N} = \emptyset$\\
\textbf{set} $\bm{m} = (0,\ldots,0)^\top$\\
\textbf{run} \assemble $(\mathcal{N}, d, \bm{m})$\\
\hrulefill\\
\Fn{\assemble $(\mathcal{N}, j, \bm{m})$}{
\If{$j\geq 2$}{
Determine the set $K_j =
\{{k}_j \in\Z\,\colon\,g_j(0,\ldots,0,{k}_j,{m}_{j+1},\ldots,{m}_d)
\leq \frac{d}{4} - \sum_{i=j+1}^d g_i(\bm{m}) \}$\\
\ForAll{$k_j\in K_j$}{
\textbf{set} $m_j = k_j$\\
\assemble $(\mathcal{N}, j-1, \bm{m})$
}
\textbf{set} $m_j = 0$\\
}
\If{$j=1$}{
Determine the set
$K_1 = \{k_1\in\Z\,\colon\,g_1(k_1,m_{2},\ldots,m_d)
\leq \frac{d}{4} - \sum_{i=2}^d g_i(\bm{m}) \}$\\
\ForAll{$k_1\in K_1$}{
\textbf{set} $m_1 = k_1$\\
\If{$\bm{Tm}\in\Omega$}{
\textbf{set} $\mathcal{N} = \mathcal{N}\cup \{\bm{Tm}\}$
}
}
\textbf{set} $m_1 = 0$\\
}
}
\hrulefill\\
\textbf{Output}: Set of lattice points $\mathcal{N}$
\caption{Assembly of the set $\mathcal{N} = \Omega\cap\bm{T}(\Z^d)$.} \label{QRassemble}
\end{algorithm}
In this section we present an enumeration algorithm
to determine the set of integration points for the Frolov cubature formula.
The approach is similar to the one in \cite{KaOeUl17} for orthogonal lattices;
here, we generalize the method to arbitrary lattices.
\subsection{Enumeration of non-orthogonal Frolov lattices}
We fix the integration domain $\Omega = [-1/2, 1/2]^d$ and a lattice $\bm{T}(\Z^d)$
with lattice representation matrix $\bm{T}$.
We are interested in the discrete set
\begin{equation*}
\mathcal{N} = \Omega\cap\bm{T}(\Z^d) = \{\bm{Tk}\in\Omega \,\colon \, \bm{k}\in\Z^d\}\,.
\end{equation*}
Our strategy is to consider a slightly larger set $\mathcal{B} \supset \mathcal{N}$
which allows for explicit enumeration in a straightforward way.
We choose
\begin{equation*}
\mathcal{B} = B_{\sqrt{d}/2}(0)\cap\bm{T}(\Z^d)
= \left\{\bm{Tk} \,\colon\, \|\bm{Tk}\|_2^2\leq \frac{d}{4}, \bm{k}\in\Z^d\right\}\,.
\end{equation*}
Using the matrix decomposition
\begin{equation*}
\bm{T} = \bm{QR}\,,
\end{equation*}
where $\bm{Q}$ is an orthogonal matrix and $\bm{R}$ is an upper triangular matrix,
we can rewrite this set as
\begin{equation*}
\mathcal{B} = \left\{\bm{Tk} \,\colon\, \|\bm{Rk}\|_2^2\leq \frac{d}{4}, \bm{k}\in\Z^d\right\}\,.
\end{equation*}
The function $\|\bm{R}\cdot\|_2^2$ can be split up into additive parts
\begin{eqnarray*}
\|\bm{Rk}\|_2^2 &=& \sum_{i=1}^d g_i(\bm{k})\\
g_i(\bm{k}) &=& (\bm{Rk})_i^2,\quad i=1,\ldots,d\,,
\end{eqnarray*}
and from the upper triangular structure of $\bm{R}$ it follows that
$g_j(\bm{k})$ only depends on the components $k_j,\ldots,k_d$.
For an integer vector $\bm{k}$ we therefore have
\begin{equation}
\|\bm{Rk}\|_2^2 \leq \frac{d}{4}
\Longleftrightarrow g_j(\bm{k}) \leq \frac{d}{4} - \sum_{i=j+1}^d g_i(\bm{k})\,,\quad j=1,\ldots,d\,.
\end{equation}
Fixing the coordinates $k_{j+1},\ldots,k_d$ results in explicitly solvable inequalities for $k_j$,
since the right hand side is constant and the left hand side is a quadratic function in $k_j$.
Therefore, the set $\mathcal{N}$ can be assembled with Algorithm \ref{QRassemble}.
This algorithm iterates over all elements of $\mathcal{B}$;
its complexity is therefore of the order
\begin{equation*}
\mathrm{vol}_d\left(B_{\sqrt{d}/2}(0)\right)/|\det \bm{T}| \asymp 2^d \cdot |\mathcal{N}|\,.
\end{equation*}
This estimate certainly holds if the sets $K_j$ appearing in the algorithm are all nonempty,
which should be the case for a lattice with a small determinant and a good choice of its representation matrix.
The exponential dependence on $d$ is of minor importance here:
once the Frolov integration points are computed and stored,
they can be reused for numerical integration.
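For illustration, Algorithm \ref{QRassemble} can be transcribed into a few lines of Python (a sketch with our own naming and test lattice, not the implementation used for the experiments below):

```python
import numpy as np

def frolov_points(T, eps=1e-12):
    """Enumerate N = Omega ∩ T(Z^d) for Omega = [-1/2, 1/2]^d,
    following the recursive scheme of Algorithm 1 via T = QR."""
    d = T.shape[0]
    R = np.linalg.qr(T, mode='r')     # upper triangular, ||Tk||_2 = ||Rk||_2
    points, m = [], np.zeros(d)

    def assemble(j, budget):
        # budget = d/4 - sum_{i>j} g_i(m);  g_j(m) = (R[j,j] m_j + c)^2,
        # so |R[j,j] m_j + c| <= sqrt(budget) gives an interval for m_j.
        c = R[j, j + 1:] @ m[j + 1:]
        r = np.sqrt(max(budget, 0.0))
        e1, e2 = (-r - c) / R[j, j], (r - c) / R[j, j]
        lo = int(np.ceil(min(e1, e2) - eps))
        hi = int(np.floor(max(e1, e2) + eps))
        for kj in range(lo, hi + 1):
            m[j] = kj
            if j > 0:
                assemble(j - 1, budget - (R[j, j] * kj + c) ** 2)
            else:
                x = T @ m
                if np.all(np.abs(x) <= 0.5 + eps):  # final check Tm in Omega
                    points.append(x)
        m[j] = 0.0

    assemble(d - 1, d / 4.0)
    return np.array(points)

# Example: the Fibonacci lattice (P(x) = x^2 - x - 1), scaled so that
# |det A_n| = 1/64, i.e. approximately 64 points in the unit cube:
V = np.vander(np.roots([1.0, -1.0, -1.0]), N=2, increasing=True)
A_n = V / np.sqrt(64.0 * abs(np.linalg.det(V)))
pts = frolov_points(A_n)
print(len(pts))  # close to 64
```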
\subsection{Numerical results}
In Table \ref{tab_times} the running times for the enumeration of the Frolov lattice points in $[0,1]^d$ with Algorithm \ref{QRassemble} are provided for dimensions $d \in \{2,3, \ldots, 9\}$.
Firstly, we observe that the number of points $N$ converges to the scaling factor $n$ as $n$ becomes large, cf. \eqref{eqn_scalingVolume}.
Moreover, one can observe the \emph{linear runtime} of the algorithm in terms of the number of points $N$: if the number of points $N$ is quadrupled, then the time required to assemble these $4N$ points is also approximately quadrupled.
However, comparing the runtimes for small $d$ and large $d$, it is apparent that a dimension-dependent constant is involved.
This is analogous to the orthogonal setting for $d = 2^k, k \in \N$, as it was treated in \cite{KaOeUl17}.
The resulting point sets for dimension $d \in \{2,3,\ldots, 10\}$ are available for download at \\
\texttt{http://wissrech.ins.uni-bonn.de/research/software/frolov/}.
\begin{table}[t]
\begin{tabular}{|l|l|l|l|}
\hline
Dim. $d$ & Scaling $n$ & Points $N$& Time (s)\\
\hline
2 & 1024 & 1023 & 4.4e-05\\
2 & 4096 & 4093 & 0.000158\\
2 & 16384 & 16387 & 0.00053\\
2 & 65536 & 65533 & 0.002117\\
2 & 262144 & 262147 & 0.00823\\
2 & 1048576 & 1048575 & 0.096369\\
\hline
3 & 1024 & 1021 & 0.000105\\
3 & 4096 & 4093 & 0.000341\\
3 & 16384 & 16387 & 0.001213\\
3 & 65536 & 65537 & 0.004547\\
3 & 262144 & 262149 & 0.017474\\
3 & 1048576 & 1048581 & 0.114605\\
\hline
4 & 1024 & 1023 & 0.00024\\
4 & 4096 & 4103 & 0.000805\\
4 & 16384 & 16395 & 0.002844\\
4 & 65536 & 65551 & 0.010464\\
4 & 262144 & 262155 & 0.038923\\
4 & 1048576 & 1048579 & 0.248508\\
\hline
5 & 1024 & 1021 & 0.00061\\
5 & 4096 & 4093 & 0.002072\\
5 & 16384 & 16359 & 0.007013\\
5 & 65536 & 65533 & 0.025019\\
5 & 262144 & 262141 & 0.129366\\
5 & 1048576 & 1048591 & 0.473579\\
\hline
\end{tabular}
\quad \, \quad
\begin{tabular}{|l|l|l|l|}
\hline
Dim. $d$ & Scaling $n$ & Points $N$& Time (s) \\
\hline
6 & 1024 & 1005 & 0.00146\\
6 & 4096 & 4087 & 0.004961\\
6 & 16384 & 16401 & 0.016533\\
6 & 65536 & 65513 & 0.059226\\
6 & 262144 & 262161 & 0.241978\\
6 & 1048576 & 1048585 & 0.943112\\
\hline
7 & 1024 & 1009 & 0.003208\\
7 & 4096 & 4099 & 0.011418\\
7 & 16384 & 16383 & 0.039014\\
7 & 65536 & 65531 & 0.13972\\
7 & 262144 & 262117 & 0.513067\\
7 & 1048576 & 1048573 & 2.0007\\
\hline
8 & 1024 & 1029 & 0.007961\\
8 & 4096 & 4051 & 0.025833\\
8 & 16384 & 16441 & 0.094269\\
8 & 65536 & 65539 & 0.329561\\
8 & 262144 & 262207 & 1.20636\\
8 & 1048576 & 1048767 & 4.59066\\
\hline
9 & 1024 & 997 & 0.017742\\
9 & 4096 & 4035 & 0.066017\\
9 & 16384 & 16517 & 0.223132\\
9 & 65536 & 65557 & 0.76848\\
9 & 262144 & 262107 & 2.77068\\
9 & 1048576 & 1048631 & 10.4136\\
\hline
\end{tabular}
\caption{Running times for the assembly of Frolov cubature points in $[0,1]^d$.} \label{tab_times}
\end{table}
\section{Compactly supported functions with bounded mixed derivative in $L_2$} \label{sec:space}
\subsection{Characterization of the space}
We denote with $S(\R^d)$ the usual Schwartz space. Let $\mathbf{r} = (r_1,...,r_d) \in \N^d$ be a smoothness vector
with integer components. Then we define the norm
$$
\|\varphi\|^2_{H^{\mathbf{r}}_{\text{mix}}} := \sum\limits_{e \subset [d]}
\Big\|\Big(\prod\limits_{i\in e} \frac{\partial^{r_i}}{\partial x_i^{r_i}}\Big)\varphi\Big\|^2_2
$$
where $\|\cdot\|_2$ denotes the $L_2(\R^d)$-norm. Since the summand for $e=\emptyset$ equals $\|\varphi\|^2_2$, this is indeed a norm, and it is clearly induced by an inner product.
By Plancherel's theorem together with well-known properties of the Fourier transform, see \eqref{FT} below,
we may rewrite
\begin{equation}\label{potential}
\begin{split}
\|\varphi\|_{H^{\mathbf{r}}_{\text{mix}}} &= \Big\|\mathcal{F}^{-1}\Big[
\Big(\prod\limits_{i=1}^d (1+|2\pi \xi_i|^{2r_i})\Big)^{1/2} \mathcal{F}\varphi(\boldsymbol{\xi})\Big]\Big\|_2\\
&= \Big\|\Big(\prod\limits_{i=1}^d (1+|2\pi \xi_i|^{2r_i})\Big)^{1/2}\mathcal{F}\varphi(\boldsymbol{\xi})\Big\|_2
= \|v_{{\boldsymbol{r}}}(\boldsymbol{\xi})\mathcal{F}\varphi\|_2\,,
\end{split}
\end{equation}
where we define
\begin{equation}
v_{\mathbf{r}}(\bsx):= \Big(\prod\limits_{i=1}^d (1+|2\pi x_i|^{2r_i})\Big)^{1/2}\,.
\end{equation}
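The passage from the sum over subsets $e\subset[d]$ to the product in \eqref{potential} rests on the elementary identity $\sum_{e\subset[d]}\prod_{i\in e} a_i = \prod_{i=1}^d(1+a_i)$ for $a_i\geq 0$. A quick numerical sanity check (our own illustration, with arbitrary stand-in values $a_i$):

```python
import itertools
import math

# Stand-ins for the values |2 pi xi_i|^{2 r_i}:
a = [0.3, 1.7, 2.5]
d = len(a)

# Left-hand side: sum over all subsets e of {0, ..., d-1}
# of the product of the a_i with i in e (empty product = 1):
subset_sum = sum(
    math.prod(a[i] for i in e)
    for size in range(d + 1)
    for e in itertools.combinations(range(d), size)
)
# Right-hand side: the product form used in the Fourier weight v_r:
product = math.prod(1 + ai for ai in a)
print(abs(subset_sum - product) < 1e-9)  # True
```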
Let now $\Omega$ be a bounded domain in $\R^d$. We denote with $C^{\infty}_0(\Omega)$ the space of all infinitely
differentiable (real-valued) functions $\varphi:\R^d \to \R$ with $\operatorname{supp} \varphi \subset \Omega$.
Finally, we define the space
\begin{equation}\label{f1}
\mathring{H}^{\mathbf{r}}_{\text{mix}}(\overline{\Omega}) := \overline{C_0^{\infty}(\Omega)}^{\|\cdot\|_{H^{\mathbf{r}}_{\text{mix}}}}
\end{equation}
by completion with respect to the norm $\|\cdot\|_{H^{\mathbf{r}}_{\text{mix}}}$\,. As a consequence we get that
$\mathring{H}^{\mathbf{r}}_{\text{mix}}(\overline{\Omega})$ is a Hilbert space which consists
of $r_i-1$ times continuously differentiable functions (mixed in each component) on $\R^d$
which vanish on $\R^d\setminus \Omega$\,.
We will now consider a more specific situation. Let $\Omega = (0,1)^d$. Then it holds
\begin{equation}\label{tensor}
\mathring{H}^{\mathbf{r}}_{\text{mix}}:=\mathring{H}^{\mathbf{r}}_{\text{mix}}([0,1]^d) = \mathring{H}^{r_1}([0,1]) \otimes \cdots \otimes \mathring{H}^{r_d}([0,1])
\end{equation}
in the sense of tensor products of Hilbert spaces, where $\mathring{H}^{r_i} = \mathring{H}^{r_i}([0,1])$ denotes the univariate version of the above
defined spaces. Functions in this class satisfy a left and a right boundary condition, namely
$f^{(j)}(0) = f^{(j)}(1) = 0$ for $j=0,...,r-1$.
The first assertion in the following lemma is a direct consequence of Taylor's theorem and the homogeneous
boundary condition of the function and all its derivatives. The second one follows from (i) together
with H\"older's inequality.
\begin{lemma}\label{lem51} Let ${\boldsymbol{r}} \in \N^d$.
{\em (i)} Every function $\varphi \in C_0^{\infty}((0,1)^d)$ admits the following
representation
$$
\varphi(x_1,...,x_d) = \int_0^1 \cdots \int_0^1 \varphi^{({\boldsymbol{r}})}(t_1,...,t_d)\prod\limits_{i=1}^d
\frac{(x_i-t_i)^{r_i-1}_+}{(r_i-1)!}\,dt_1...dt_d\,.
$$
{\em (ii)} Let $e \subset [d]$. Then
$$
\Big\|\Big(\prod\limits_{i\in e} \frac{\partial^{r_i}}{\partial x_i^{r_i}}\Big)\varphi\Big\|^2_2
\leq \|\varphi^{({\boldsymbol{r}})}\|^2_2\prod\limits_{i\in e} \frac{1}{[(r_i-1)!]^2(2r_i-1)2r_i}
$$
and therefore
$$
\|\varphi\|^2_{H^{{\boldsymbol{r}}}_{\text{mix}}} \leq \|\varphi^{({\boldsymbol{r}})}\|^2_2\sum\limits_{e \subset [d]}
\prod\limits_{i\in e} \frac{1}{[(r_i-1)!]^2(2r_i-1)2r_i} \,.
$$
\end{lemma}
\begin{remark} {\em (a)} Note that the assertions in Lemma \ref{lem51} hold true for
any function $\varphi \in S(\R^d)$ with $\operatorname{supp} \varphi \subset \R_+^d$\,. We only need zero
boundary values at $0$.\\
{\em (b)} The previous lemma shows that the semi-norm $\|\cdot\|_{\mr{H}^{{\boldsymbol{r}}}_{\text{mix}}}$
induced by the bilinear form
\begin{equation}\label{f2}
\langle \varphi, \psi \rangle_{\mr{H}^{{\boldsymbol{r}}}_{\text{mix}}}:= \int_{[0,1]^d}
\varphi^{({\boldsymbol{r}})}(\bsx)\psi^{({\boldsymbol{r}})}(\bsx)\,d\bsx
\end{equation}
is actually a norm on $C^{\infty}_0((0,1)^d)$ since the bilinear form is positive definite as a consequence of (ii).
Hence, we could have also used this semi-norm for
the completion in \eqref{f1}. As it turns out, Lemma \ref{lem51} and \eqref{f2} are actually
the key to derive the reproducing kernel for the space $\mathring{H}^{{\boldsymbol{r}}}_{\text{mix}}$.\\
{\em (c)} We have an explicit upper
bound for the norm equivalence constant in (ii). Suppose that we have a constant
smoothness vector ${\boldsymbol{r}} = (r,...,r)$ with $r \in \N$\,. Then it holds
\begin{equation}\label{f12}
\begin{split}
\sum\limits_{e \subset [d]}
\prod\limits_{i\in e} \frac{1}{[(r_i-1)!]^2(2r_i-1)2r_i} &= \sum\limits_{i=0}^d
\binom{d}{i} \Big(\frac{1}{[(r-1)!]^2(2r-1)2r}\Big)^i\\
&= \Big(1+\frac{1}{[(r-1)!]^2(2r-1)2r}\Big)^d\,.
\end{split}
\end{equation}
Hence, if $r=1$ the constant is bounded by $(3/2)^d$, in case $r=2$ we have $(13/12)^d$ and in case $r=3$
already $(121/120)^d$\,.
\end{remark}
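The per-dimension factor in \eqref{f12} can be recomputed exactly, e.g.\ in rational arithmetic; the following small script is only a sanity check and not part of the construction.

```python
from fractions import Fraction
from math import factorial

def per_dim_factor(r):
    # per-dimension factor 1 + 1/([(r-1)!]^2 (2r-1) 2r) from (f12)
    return 1 + Fraction(1, factorial(r - 1) ** 2 * (2 * r - 1) * 2 * r)
```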
\subsection{The reproducing kernel of $\mathring{H}^{\mathbf{r}}_\mix$} \label{sec:repro_kernel}
In the sequel we will identify the space $\mathring{H}^{\mathbf{r}}_{\text{mix}}$ as
a reproducing kernel Hilbert space. We are
looking for a kernel function $\mr{K}^{{\boldsymbol{r}}}_d(\bsx,\bsy)$ such that for every
$f\in \mathring{H}^{\mathbf{r}}_{\text{mix}}$
$$
\langle f(\cdot), \mr{K}^{{\boldsymbol{r}}}_d(\bsx,\cdot)\rangle_{\mr{H}^{{\boldsymbol{r}}}_{\text{mix}}} = f(\bsx)\quad,\quad \bsx \in [0,1]^d\,.
$$
To this end, we may derive the reproducing kernels of the univariate spaces $\mathring{H}^{r_i}$. The reproducing kernel of the tensor product space \eqref{tensor} is then given by the point-wise product of the univariate kernels
\begin{equation}
\mathring{K}^{{\boldsymbol{r}}}_{d}(\bsx, \bsy) = \prod_{\ell=1}^d \mathring{K}^{r_\ell}_{1}(x_\ell, y_\ell) .
\end{equation}
Therefore, the problem of computing $\mathring{K}_{d}^{\boldsymbol{r}}(\bsx, \bsy)$ is reduced to the construction of the univariate kernels $\mathring{K}^{r_\ell}_{1}: [0,1] \times [0,1] \to \R$.
Let us first recall a general fact for Hilbert spaces and orthogonal sums.
To this end, let $U:=\mathrm{span}\{u_0,\ldots,u_{r-1}\} \subset \cH$ be an $r$-dimensional subspace of a
Hilbert space $\cH$. In terms of the (not necessarily orthogonal) basis $u_0,\ldots,u_{r-1}$, the orthogonal projection $P_U: \cH \to U$ is given by
\begin{equation} \label{eqn_orth_projection}
P_U(f)(x) = \sum_{j=0}^{r-1} \left( \sum_{k=0}^{r-1} G^{-1}_{j,k} \cdot \langle f,u_k\rangle_{\cH} \right) u_j ,
\end{equation}
where $\mathbf{G} = \left( \langle u_j, u_k \rangle_{\cH} \right)_{j,k=0}^{r-1} \in \R^{r \times r}$ denotes the Gramian matrix.
Moreover, the projection onto the orthogonal complement $U^\perp = \cH \ominus U$ is $P_{U^\perp}f = (\mathrm{Id} - P_U)f$.
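As a concrete illustration of \eqref{eqn_orth_projection}, the sketch below projects $f(x)=x^2$ onto the span of the monomials $1,x$ in $L_2([0,1])$; the choice of space and basis is ours and only serves as an example.

```python
import numpy as np

# Gramian of the (non-orthogonal) basis u_j(x) = x^j, j = 0, ..., r-1,
# in L2([0,1]): <x^j, x^k> = 1/(j + k + 1)
r = 2
G = np.array([[1.0 / (j + k + 1) for k in range(r)] for j in range(r)])

def project(f_inner):
    # coefficient vector c with P_U f = sum_j c_j x^j, i.e. c = G^{-1} <f, u>
    return np.linalg.solve(G, f_inner)

# example: f(x) = x^2 with <x^2, 1> = 1/3 and <x^2, x> = 1/4;
# the best L2 approximation by a linear polynomial is x - 1/6
c = project(np.array([1.0 / 3.0, 1.0 / 4.0]))
```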
The next lemma provides the necessary utilities to compute the reproducing kernel of closed subspaces
that are defined via homogeneous boundary conditions.
\begin{lemma}\label{modify}
Let $\cH_K$ be a RKHS with kernel $K: [0,1] \times [0,1] \to \R$.
Assuming that $K(x,\cdot)$ is $r$ times weakly differentiable,
let $u_{j} := K^{(0,j)}(\cdot, 1) :=
\frac{\partial^j}{\partial y^j} K(\cdot, y)_{|y=1}$ for $j=0,\ldots,r-1$ and $U =
\mathrm{span}\{u_0,\ldots, u_{r-1}\}$. Then it holds that
\begin{enumerate}
\item[(i)] For $j=0,\ldots,r-1$, the Riesz representer of the functional $f \mapsto f^{(j)}(1)$ in
$\cH_K$ is given by $u_{j}$, i.e.
\[
\langle f, u_{j} \rangle_{\cH_K} = f^{(j)}(1) \quad \text{ for all } f \in \cH_K .
\]
\item[(ii)] The reproducing kernel $K_{U^\perp}$ of $U^\perp \subset \cH_K$, i.e. the orthogonal complement of $U$ in $\cH_K$,
is given by
\begin{equation}\label{f6}
K_{U^\perp}(x,y) = P_{U^\perp} K(\cdot, y)(x) = K(x,y) - \sum_{j=0}^{r-1}
\sum_{k=0}^{r-1} G^{-1}_{j,k} u_j(x) u_k(y) .
\end{equation}
\item[(iii)] It holds that
$$
U^\perp = \{f \in \cH_K: f^{(j)}(1)=0, j=0,\ldots,r-1 \} .
$$
\end{enumerate}
\begin{proof}
(i) is \cite[Lem. 10]{berlinet} for the linear functional $f \mapsto f^{(j)}(1)$ and
(ii) follows by applying \cite[Thm. 11]{berlinet} to \eqref{eqn_orth_projection}.
Finally, regarding (iii) we note that, by (i), $f \in U^\perp$ holds if and only if
$$\langle f, u_j \rangle_{\cH_K} = \langle f, K^{(0,j)}(\cdot, 1) \rangle_{\cH_K} = f^{(j)}(1) = 0 \quad\text{for } j=0,\ldots,r-1 .$$
\end{proof}
\end{lemma}
We want to apply this machinery to $\mr{H}^r$ with $r\in \N$. The observation in Lemma \ref{lem51} together
with \eqref{f2} suggests using the
approach of Wahba \cite[1.2]{Wah90} as a starting point. Let us define the
kernel function
\begin{equation}\label{ker}
K^r_1(x,y) := \int_0^1 \frac{(x-t)^{r-1}_+}{(r-1)!}\cdot \frac{(y-t)^{r-1}_+}{(r-1)!}\,dt\quad,\quad x,y\in [0,1]\,.
\end{equation}
Then it is immediately clear from Lemma \ref{lem51}(i) (and a straightforward density argument) that
$$
f(x) = \langle f(\cdot), K_1^r(x,\cdot) \rangle_{\mr{H}^{r}}\quad, \quad x\in [0,1]\,.
$$
Indeed, recall that the inner product $\langle\cdot,\cdot \rangle_{\mr{H}^r}$ stems from \eqref{f2} and that
$$
(K^r_1)^{(0,r)}(x,y) = \frac{(x-y)^{r-1}_+}{(r-1)!}\,.
$$
It is possible to give an explicit formula for \eqref{ker} by using that
\begin{equation}\label{f4}
K^r_1(x,y) = \int_0^{\min\{x,y\}} \frac{(x-t)^{r-1}}{(r-1)!}\cdot \frac{(y-t)^{r-1}}{(r-1)!}\,dt\,.
\end{equation}
Interpreting this as a Taylor remainder term we find
\begin{equation}\label{f5}
K^r_1(x,y) = \frac{(-1)^r}{(2r-1)!}\Big[\sum\limits_{k=r}^{2r-1}\binom{2r-1}{k}(-\min\{x,y\})^k\max\{x,y\}^{2r-1-k}\Big]\,.
\end{equation}
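The closed form \eqref{f5} can be checked against a direct discretization of the defining integral \eqref{ker}; the midpoint rule below is an ad hoc choice for this sanity check.

```python
import numpy as np
from math import comb, factorial

def K_closed(r, x, y):
    # closed form (f5)
    m, M = min(x, y), max(x, y)
    s = sum(comb(2 * r - 1, k) * (-m) ** k * M ** (2 * r - 1 - k)
            for k in range(r, 2 * r))
    return (-1) ** r * s / factorial(2 * r - 1)

def K_integral(r, x, y, n=100_000):
    # midpoint-rule discretization of the defining integral (ker)
    t = (np.arange(n) + 0.5) / n
    ind = t < min(x, y)  # the integrand vanishes for t >= min(x, y)
    vals = np.where(ind, (x - t) ** (r - 1) * (y - t) ** (r - 1), 0.0)
    return vals.mean() / factorial(r - 1) ** 2
```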
However, $\mr{H}^r$ is only a closed subspace of $\mathcal{H}_{K_1^r}$, since the functions
$f \in \mathcal{H}_{K_1^r}$ may lack the right boundary condition $f^{(j)}(1) = 0$
for $j=0,...,r-1$, whereas the left boundary condition $f^{(j)}(0) = 0$ for $j=0,...,r-1$ comes for free by construction.
Let us now apply the construction from Lemma \ref{modify} to $K^r_1$ to construct a
reproducing kernel $\mr{K}^r_1$ for the
closed subspace $\mr{H}^r$\,.
\begin{figure}[t]
\includegraphics[width=0.49\linewidth]{fig/3dplots-figure0.pdf}
\,
\includegraphics[width=0.49\linewidth]{fig/3dplots-figure1.pdf}
\caption{Plots of the kernel $\mathring{K}^r_1: [0,1] \times [0,1] \to \R$ with smoothness $r=1$ (left) and smoothness $r=2$ (right).} \label{fig_kernel_plot}
\end{figure}
First we compute the functions $u_j(\cdot) = (K_1^r)^{(0,j)}(\cdot,1)$ for $j = 0,...,r-1$
explicitly. Using again the formula \eqref{ker} we find
\begin{equation}\label{f7_1}
\begin{split}
u_j(x) &= \Big(\frac{d}{dy}\Big)^j\int_0^{\min\{x,y\}} \frac{(x-t)^{r-1}_+}{(r-1)!}\cdot \frac{(y-t)^{r-1}_+}{(r-1)!}\,dt{\bigg \vert}_{y=1}\\
&=\int_0^x \frac{(x-t)^{r-1}_+}{(r-1)!}\cdot \frac{(1-t)^{r-1-j}}{(r-1-j)!}\,dt\,,
\end{split}
\end{equation}
where we used the Leibniz rule for differentiation under the integral sign.
Similarly to \eqref{f4} above, we interpret this as a Taylor remainder term for a specific polynomial.
It is not hard to verify that this polynomial
is given by
\begin{equation}\label{f7_b}
u_j(x) = \frac{(-1)^r}{(2r-1-j)!} \Big[\sum\limits_{k=r}^{2r-1-j}
\binom{2r-1-j}{k}(-x)^k\Big]\,\quad,\quad j=0,...,r-1\,.
\end{equation}
Looking at the functions $u_j$, $j=0,...,r-1$, we see immediately that
$\{x^r,...,x^{2r-1}\}$ is a basis of their span. Hence we may use the system
$\tilde{u}_j(x) = x^{j+r}/(j+r)!$ in \eqref{f6}\,. This gives the following representation for the
kernel $\mathring{K}^r_{1}(x,y)$, namely
\begin{equation}\label{f7_2}
\mathring{K}^r_{1}(x,y) = K_1^r(x,y) - \sum_{j=0}^{r-1} \sum_{k=0}^{r-1} \frac{G^{-1}_{j,k}}{(j+r)!(k+r)!}
x^{j+r} y^{k+r}\,,
\end{equation}
where $K_1^r(x,y)$ is given by \eqref{f5} and
$$\mathbf{G} = \Big(\frac{1}{j! k!(j+k+1)}\Big)_{\substack{j=0,...,r-1\\k=0,...,r-1}}\,.$$
Let us give two examples. Putting $r=1$ in \eqref{f7_2} we have
$$
\mr{K}^1_1(x,y) = \min\{x,y\} - xy\quad,\quad x,y \in [0,1]\,.
$$
Furthermore, in case $r=2$ we obtain
$$
\mr{K}^2_1(x,y) = K^2_1(x,y) - x^2y^2+x^2y^3/2+x^3y^2/2-x^3y^3/3\,,
$$
where
$$
K^2_1(x,y) = \frac{1}{2}\min\{x,y\}^2\max\{x,y\}-\frac{1}{6}\min\{x,y\}^3\,.
$$
For $r=1,2,3$ we obtain the associated inverse Gramian matrices
$$
(\mathbf{G}^1)^{-1} = \left(\begin{matrix}
1
\end{matrix}\right)\quad,\quad
(\mathbf{G}^2)^{-1} = \left(\begin{matrix}
4 & -6\\
-6 & 12
\end{matrix}\right)\quad,\quad
(\mathbf{G}^3)^{-1} = \left(\begin{matrix}
9& -36 & 60\\
-36 & 192 & -360\\
60 & -360 & 720
\end{matrix}\right)\,.
$$
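These inverses can be reproduced numerically; note that $\mathbf{G}$ is a Hilbert-type matrix and becomes severely ill-conditioned for larger $r$, so exact rational arithmetic is preferable there.

```python
import numpy as np
from math import factorial

def gram_inverse(r):
    # G_{j,k} = 1/(j! k! (j+k+1)), j, k = 0, ..., r-1
    G = np.array([[1.0 / (factorial(j) * factorial(k) * (j + k + 1))
                   for k in range(r)] for j in range(r)])
    return np.linalg.inv(G)
```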
In the case $d=1$ the kernels for $r=1$ and $r=2$ are depicted in Figure \ref{fig_kernel_plot}. The smoothness can be observed along the
diagonal $x = y$, where the kernel for $r=1$ exhibits a kink.
Regarding the multivariate kernel, we have arrived at the following result.
\begin{theorem}\label{repr_kernel}
Given a smoothness vector $\mathbf{r} = (r_1, r_2,\ldots,r_d) \in \N^d$, the reproducing kernel of the tensor
product space $\mathring{H}^{\mathbf{r}}_\mix = \mathring{H}^{r_1} \otimes \cdots \otimes \mathring{H}^{r_d}$ is given by
\begin{align}
\mathring{K}_{d}^{\mathbf{r}}(\bsx, \bsy) & = \prod_{\ell=1}^d \mathring{K}_{1}^{r_\ell}(x_\ell, y_\ell) \\
& = \prod_{\ell=1}^d \Big(K_{1}^{r_\ell}(x_\ell, y_\ell) -
\sum_{j=0}^{r_\ell-1} \sum_{k=0}^{r_\ell-1} (\mathbf{G}^{r_\ell})^{-1}_{j,k} (K_{1}^{r_\ell})^{(0,j)}(x_\ell,1)
\, (K_{1}^{r_\ell})^{(0,k)}(y_\ell,1) \Big),
\end{align}
where $u_j (x_\ell) = (K_{1}^{r_\ell})^{(0,j)}(x_\ell,1)$ are given in \eqref{f7_b}
and $(\mathbf{G}^{r_\ell})^{-1}$ are the inverse Gramian matrices from \eqref{f7_2}.
\end{theorem}
The explicit expression for the reproducing kernel of $\mathring{H}^{\mathbf{r}}_\mix$ allows us to
compute the norms of arbitrary bounded linear functionals $L \in (\mathring{H}^{\mathbf{r}}_\mix)^\star$, since it holds
\begin{equation} \label{eqn_functional_norm}
\|L\|_{(\mathring{H}^{\mathbf{r}}_\mix)^\star} = \sup_{\|f\|_{ \mathring{H}^{\mathbf{r}}_\mix }
\leq 1} |L(f)| = \sqrt{ L^{(\bsx)} L^{(\bsy)} \mathring{K}_{d}^{\mathbf{r}}(\bsx, \bsy) }.
\end{equation}
The right-hand side involves the application of the functional
$L$ to both components of the kernel. We will use this in Section
\ref{sec:numerics} for the simulation of worst-case integration errors which can be
rewritten as norms of certain functionals \eqref{eqn_wce_formula} involving the integration functional
$L(f) = I_d(f) = \int_{[0,1]^d} f(\bsx) \, \,\mathrm{d} \bsx$. In the sequel we compute
this norm and the corresponding Riesz representer. We have
\begin{equation} \label{eqn_initial_error}
\|I_d\|^2_{(\mathring{H}^{\mathbf{r}}_\mix)^\star} = \sup_{\|f\|_{ \mathring{H}^{\mathbf{r}}_\mix } \leq 1} |I_d(f)|^2 =
\prod_{\ell=1}^d \left( \int_0^1 \int_0^1 \mathring{K}_{1}^{r_\ell}(x_\ell, y_\ell) \, \,\mathrm{d} x_\ell \,\mathrm{d} y_\ell \right)
\end{equation}
where
\begin{align*}
\int_0^1 \int_0^1 \mathring{K}_{1}^{r}(x, y) \, \,\mathrm{d} x \,\mathrm{d} y = & \int_0^1 \int_0^1 K_1^r(x,y) - \sum_{j=0}^{r-1} \sum_{k=0}^{r-1} \frac{G^{-1}_{j,k}}{(j+r)!(k+r)!}
x^{j+r} y^{k+r} \, \,\mathrm{d} x \,\mathrm{d} y \\
= & \int_0^1 \int_0^1 K_1^r(x,y) \, \,\mathrm{d} x \,\mathrm{d} y - \sum_{j=0}^{r-1} \sum_{k=0}^{r-1} \frac{G^{-1}_{j,k}}{(j+r + 1)!(k+r + 1)!} \\
= & \frac{1}{ (r!)^2 (2r+1)} - \sum_{j=0}^{r-1} \sum_{k=0}^{r-1} \frac{G^{-1}_{j,k}}{(j+r + 1)!(k+r + 1)!}\,.
\end{align*}
The last identity follows from the representation \eqref{f4} and
\begin{align*}
\int_0^1 \int_0^1 K_1^r(x,y) \, \,\mathrm{d} x \,\mathrm{d} y & = \int_0^1 \left( \int_0^1 \int_0^1 \frac{(x-t)^{r-1}_+}{(r-1)!} \frac{(y-t)^{r-1}_+}{(r-1)!} \, \,\mathrm{d} x \,\mathrm{d} y \right) \,\mathrm{d} t \\
& = \int_0^1 \left( \int_t^1 \frac{(x-t)^{r-1}}{(r-1)!} \int_t^1 \frac{(y-t)^{r-1}}{(r-1)!} \, \,\mathrm{d} x \,\mathrm{d} y \right) \,\mathrm{d} t \\
& = \int_0^1 \left( \frac{(1-t)^r}{r!} \frac{(1-t)^r}{r!} \right) \,\mathrm{d} t = \int_0^1 \frac{(1-t)^{2r}}{(r!)^2} \ \,\mathrm{d} t \\
& = \frac{1}{(r!)^2 (2r+1) } .
\end{align*}
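Combining the two computations, the univariate initial error can be evaluated in exact arithmetic; the sketch below inverts the Gramian with a small Gauss--Jordan routine over the rationals.

```python
from fractions import Fraction
from math import factorial

def invert(M):
    # Gauss-Jordan inversion in exact rational arithmetic
    n = len(M)
    A = [[M[i][j] for j in range(n)] + [Fraction(int(i == j)) for j in range(n)]
         for i in range(n)]
    for c in range(n):
        p = next(i for i in range(c, n) if A[i][c] != 0)
        A[c], A[p] = A[p], A[c]
        A[c] = [a / A[c][c] for a in A[c]]
        for i in range(n):
            if i != c and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[c])]
    return [row[n:] for row in A]

def initial_error_1d(r):
    # exact value of the double integral of the kernel over the unit square
    G = [[Fraction(1, factorial(j) * factorial(k) * (j + k + 1))
          for k in range(r)] for j in range(r)]
    Gi = invert(G)
    val = Fraction(1, factorial(r) ** 2 * (2 * r + 1))
    for j in range(r):
        for k in range(r):
            val -= Gi[j][k] * Fraction(1, factorial(j + r + 1) * factorial(k + r + 1))
    return val
```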
For the Riesz representer of $f \mapsto \int_{[0,1]^d} f(\bsx) \,\mathrm{d}\bsx = \langle f, R_{I_d} \rangle_{\mr{H}^{{\boldsymbol{r}}}_{\text{mix}}}$ it holds
$$
R_{I_d}(\bsy) = \int_{[0,1]^d} \mathring{K}_{d}^{\mathbf{r}}(\bsx, \bsy)\,\,\mathrm{d} \bsx
= \prod_{\ell=1}^d \left( \int_0^1 \mathring{K}_{1}^{r_\ell}(x_\ell, y_\ell) \, \,\mathrm{d} x_\ell \right)\quad,\quad
\bsy = (y_1,...,y_d)\,.
$$
Clearly, writing $r:=r_\ell$ for brevity, we have
$$
\int_0^1 \mathring{K}_{1}^{r}(x, y) \, \,\mathrm{d} x = \int_0^1 K_{1}^{r}(x, y) -
\sum_{j=0}^{r-1} \sum_{k=0}^{r-1} \frac{G^{-1}_{j,k}}{(j+r)!(k+r)!}
x^{j+r} y^{k+r} \, \,\mathrm{d} x\,.
$$
A similar computation as above together with the identity
$$
\int_0^1 \frac{(y-t)^{r-1}_+}{(r-1)!}\frac{(1-t)^r}{r!}\,\mathrm{d} t = \frac{(-1)^r}{(2r)!}\sum\limits_{k=r}^{2r}
\binom{2r}{k}(-y)^k
$$
(see the computation after \eqref{f7_1}) leads to the following explicit formula
\begin{equation} \label{eqn_riesz_representer}
\int_0^1 \mathring{K}_{1}^{r}(x, y) \, \,\mathrm{d} x =\frac{(-1)^r}{(2r)!}\sum\limits_{k=r}^{2r} \binom{2r}{k}(-y)^k - \sum_{j=0}^{r-1} \sum_{k=0}^{r-1} \frac{G^{-1}_{j,k}}{(j+r+1)!(k+r)!}y^{k+r}\,.
\end{equation}
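For $r=1$ formula \eqref{eqn_riesz_representer} reduces to $y(1-y)/2$, which can be double-checked against a direct discretization of $\int_0^1(\min\{x,y\}-xy)\,dx$; the script below is a sanity check only.

```python
import numpy as np
from math import comb, factorial

def riesz_1d(r, Ginv, y):
    # right-hand side of (eqn_riesz_representer)
    s = (-1) ** r / factorial(2 * r) * sum(
        comb(2 * r, k) * (-y) ** k for k in range(r, 2 * r + 1))
    for j in range(r):
        for k in range(r):
            s -= Ginv[j][k] * y ** (k + r) / (factorial(j + r + 1) * factorial(k + r))
    return s

# midpoint grid for the direct integration of the r = 1 kernel min(x,y) - x*y
x = (np.arange(200_000) + 0.5) / 200_000
```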
\section{Worst-case error estimates with respect to $\mathring{H}^{\mathbf{r}}_{\text{mix}}$}\label{sec:TheorBounds}
In this section, we are interested in the behavior of the worst-case error
\begin{equation}\label{eq:error}
e(n,d,{\boldsymbol{r}}) \,:=\, \sup_{\|f\|_{\mr{H}^{{\boldsymbol{r}}}_{\text{mix}}} \leq 1}\, \abs{Q_n^d(f)-I_d(f)}
\end{equation}
of Frolov's cubature rule $Q_n^d$ with respect to the unit ball
of the norm $\|\cdot\|_{\mr{H}^{\boldsymbol{r}}_{\text{mix}}}$, see \eqref{f2}.
Recall that
\[
Q_n^d(f) \,=\, \frac{1}{n}\sum_{\boldsymbol{k}\in\Z^d}\, f(\bm{A}_n \boldsymbol{k}),
\]
where $\bm{A}_n=n^{-1/d}\bm{A}$ and $\bm{A}=\bigl(\det(\bm{V})\bigr)^{-1/d}\bm{V}$
with $\bm{V}$ from Theorem~\ref{construction}. Let further $\bm{B}_n = (\bm{A}_n)^{-\top}$\,.
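The node set $\{\bm{A}_n\boldsymbol{k}\}\cap[0,1]^d$ can be enumerated naively by searching the integer box that contains $\bm{A}_n^{-1}([0,1]^d)$. The sketch below uses the Vandermonde matrix of $P(x)=x^2-2$ as a toy generating matrix; this is our own illustrative choice, not one of the optimized polynomials, and the brute-force search is much slower than a dedicated enumeration algorithm.

```python
import numpy as np
from itertools import product

def frolov_nodes(V, n):
    # enumerate all lattice points A_n k, k in Z^d, lying in [0,1]^d
    d = V.shape[0]
    A = abs(np.linalg.det(V)) ** (-1.0 / d) * V   # |det A| = 1
    A_n = n ** (-1.0 / d) * A                     # |det A_n| = 1/n
    # the preimage of the (convex) cube is contained in the bounding box
    # of the preimages of its corners
    corners = np.array(list(product([0.0, 1.0], repeat=d)))
    pre = corners @ np.linalg.inv(A_n).T
    lo = np.floor(pre.min(axis=0)).astype(int)
    hi = np.ceil(pre.max(axis=0)).astype(int)
    nodes = np.array([A_n @ np.array(k, dtype=float)
                      for k in product(*(range(l, h + 1) for l, h in zip(lo, hi)))])
    inside = np.all((nodes >= 0.0) & (nodes <= 1.0), axis=1)
    return nodes[inside]

# toy generating matrix: Vandermonde of the roots of P(x) = x^2 - 2
V = np.array([[1.0, np.sqrt(2.0)], [1.0, -np.sqrt(2.0)]])
pts = frolov_nodes(V, 64)
# Q_n^d(f) is then sum(f(x) for x in pts) / n
```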
The main tool for analyzing \eqref{eq:error} is
Poisson's summation formula. Let $\varphi \in S(\R^d)$ be a multivariate
Schwartz function. By $\mathcal{F}\varphi$ we denote the Fourier transform
\begin{equation}\label{FT}
\mathcal{F}\varphi(\boldsymbol{\xi}) = \int_{\R^d} \varphi(\bsx)\exp(-2\pi i\bsx\cdot\boldsymbol{\xi})\,d\bsx\quad,\quad \boldsymbol{\xi}\in \R^d\,.
\end{equation}
Then it holds
$$
\sum\limits_{\boldsymbol{m} \in \Z^d} \varphi(x+\boldsymbol{m}) = \sum\limits_{\boldsymbol{k} \in \Z^d} \mathcal{F}\varphi(\boldsymbol{k})\exp(2\pi i\boldsymbol{k}\cdot \bsx)
$$
with absolute convergence on both sides. The following consequence is of particular importance. Let
$\bm{A}\in\R^{d\times d}$ be a regular matrix, i.e.,
$\det \bm{A} \neq 0$, and let $\bm{B}=\bm{A}^{-\top}$\,. Then
we have
\begin{equation}\label{poiss1}
\det \bm{A} \sum\limits_{\boldsymbol{m} \in \Z^d} \varphi(\bm{A}(\bsx+\boldsymbol{m})) = \sum\limits_{\boldsymbol{k} \in \Z^d} \mathcal{F}
\varphi(\bm{B}\boldsymbol{k})\exp(2\pi i\boldsymbol{k}\cdot \bsx)\,.
\end{equation}
Let us finally mention the following special case obtained by putting $\bsx = \mathbf{0}$:
\begin{equation}\label{poiss2}
\det \bm{A} \sum\limits_{\boldsymbol{m} \in \Z^d} \varphi(\bm{A}\boldsymbol{m}) =
\sum\limits_{\boldsymbol{k} \in \Z^d} \mathcal{F}\varphi(\bm{B}\boldsymbol{k})\,.
\end{equation}
A more general variant (with respect to the regularity of the participating functions) can be found in \cite[Thm.\ 3.1, Cor.\ 3.2]{UlUl16}.
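In dimension $d=1$ with $\bm{A}=(a)$ and the self-dual Gaussian $\varphi(x)=e^{-\pi x^2}$, \eqref{poiss2} can be verified directly; the truncated sums below are a quick sanity check of our own.

```python
import numpy as np

def lattice_side(a, terms=60):
    # det(A) * sum_m phi(A m) for phi(x) = exp(-pi x^2) and A = (a)
    m = np.arange(-terms, terms + 1)
    return a * np.exp(-np.pi * (a * m) ** 2).sum()

def dual_side(a, terms=60):
    # sum_k F(phi)(B k) with B = A^{-T} = (1/a); the Gaussian is self-dual
    k = np.arange(-terms, terms + 1)
    return np.exp(-np.pi * (k / a) ** 2).sum()
```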
In this section we show the by now well-known upper bounds
on the worst-case error of Frolov's cubature formula for the
Sobolev spaces $\mr{H}^{\mathbf{r}}_{\text{mix}}$.
We give relatively short proofs here with special emphasis on the
constants. In particular, we will see how the invariants of the used lattice
will affect the error estimates.
We will see that only two invariants will play a role in the upper bounds,
which we want to discuss shortly.
For this note that the lattices under consideration are generated by a
multiple of a
Vandermonde matrix $\bm{V}$, which is defined via a generating polynomial $P$
as in Theorem~\ref{construction}.
The first invariant is the determinant,
or in other words the discriminant of the generating polynomial
\[
D_P \,=\, \det(\bm{V}).
\]
For example, we know from Theorem~\ref{construction} that
$\mathrm{Nm}(\bm{V}^{-\top})
=1/D_P^{2}$.
The second invariant is
\begin{equation}\label{eq:b}
B_P \,:=\, \min_{\bm{U}} \|\bm{V}\bm{U}\|_\infty,
\end{equation}
where the minimum is over all $\bm{U}\in\mbox{SL}_d(\Z)$.
This constant is an upper bound for the diameter (in $\ell_\infty$) of
the ``smallest'' fundamental cell of the lattice.
To see this, note that every \emph{fundamental cell}, i.e.~a parallelepiped
with corners on the lattice with no lattice point in the interior,
is of the form $T([0,1]^d)$, where $T\in\R^{d\times d}$ is a generating matrix
for the lattice. Moreover, it is well-known that every generating matrix of the
lattice that is generated by $\bm{V}$ is of the form $\bm{V}\bm{U}$ for some unimodular,
integer-valued matrix $\bm{U}$. We will see that both $D_P$ and $B_P$ should be small to obtain a small upper
bound on the errors. This justifies the choice of the generating polynomials in the
previous section. Here is the main result of this section.
\begin{theorem}\label{thm:theor-error}
Let ${\boldsymbol{r}}=(r_1,\dots,r_d) \in \N^d$, let $r:=\min_j r_j$ and $\eta:=\#\{j\colon r_j=r\}$. Then we have for any $f\in\mr{H}^{\boldsymbol{r}}_{\mix}$
\begin{equation}\label{f13}
\hspace{-.5cm}
\begin{split}
&\abs{Q_n^d(f)-I_d(f)}\\
&\le\; C(d,\eta,{\boldsymbol{r}}) \cdot \max\left\{D_P, \frac{(2 B_P)^d}{n}\right\}^{1/2}
\left(\frac{D_P}{n}\right)^{r}\,\Bigl(2+\log\left(n/D_P\right)\Bigr)^{(\eta-1)/2}
\,\norm{f}_{\mr{H}^{{\boldsymbol{r}}}_{\text{mix}}},
\end{split}
\end{equation}
where
\begin{equation}\nonumber
\begin{split}
&C(d,\eta,{\boldsymbol{r}})\\
&:=\, 2^{d+1}\, \left(1-2^{-2(r'-r)}\right)^{-(d-\eta)/2}\,
\left(1-2^{(1-2r)}\right)^{-\eta/2}\left(\sum\limits_{e \subset [d]}
\prod\limits_{i\in e} \frac{1}{[(r_i-1)!]^2(2r_i-1)2r_i}\right)^{1/2}
\end{split}
\end{equation}
with $r':=\min_j\{r_j\colon r_j\neq r\}$.
\end{theorem}
Let us prove the following estimate first.
\begin{proposition} Let $\varphi \in C_0^{\infty}((0,1)^d)$. Then
\begin{equation}\label{f11}
\abs{Q_n^d(\varphi)-I_d(\varphi)} \leq
\sqrt{\frac{M(\bm{A}_n)}{\det \bm{B}_n}}\Big(\sum\limits_{\boldsymbol{k} \in \Z^d\setminus \{\mathbf{0}\}} |v_{{\boldsymbol{r}}}(
\bm{B}_n\boldsymbol{k})|^{-2}\Big)^{1/2} \|\varphi\|_{H^{{\boldsymbol{r}}}_\text{mix}}\,,
\end{equation}
where
\begin{equation}\label{f9}
M(\bm{A}_n) \,:=\, \min_{\bm{U}\in\mbox{SL}_d(\Z)}\#\Bigl\{\boldsymbol{m}\in\Z^d\colon \bm{UA}_n\bigl(\boldsymbol{m}+(0,1)^d\bigr)\cap[0,1]^d\neq\emptyset\Bigr\}\,,
\end{equation}
is the minimal number of fundamental cells of the integration lattice necessary
to cover the unit cube.
\end{proposition}
\begin{proof} The above special case of Poisson's summation formula \eqref{poiss2} gives
\begin{equation}\label{f7}
\begin{split}
&\abs{Q_n^d(\varphi)-I_d(\varphi)} = \Big|\sum\limits_{\boldsymbol{k}\in \Z^d\setminus \{\mathbf{0}\}}
\mathcal{F}\varphi(\bm{B}_n \boldsymbol{k})\Big|\\
&~~\leq \Big(\sum\limits_{\boldsymbol{k}\in \Z^d\setminus \{\mathbf{0}\}}v_{\mathbf{r}}(\bm{B}_n\boldsymbol{k})^{-2}\Big)^{1/2}
\Big(\sum\limits_{\boldsymbol{k} \in \Z^d} |v_{\mathbf{r}}(\bm{B}_n\boldsymbol{k})
\mathcal{F}\varphi(\bm{B}_n\boldsymbol{k})|^2\Big)^{1/2}\,.
\end{split}
\end{equation}
By the definition of $v_{{\boldsymbol{r}}}$ we may rewrite
$$
|v_{\mathbf{r}}(\bm{B}_n\boldsymbol{k})
\mathcal{F}\varphi(\bm{B}_n\boldsymbol{k})|^2= \sum\limits_{e \subset [d]}\Big|\mathcal{F}\Big[\Big(\prod\limits_{i\in e}
\frac{\partial^{r_i}}{\partial x_i^{r_i}}\Big)\varphi\Big](\bm{B}_n\boldsymbol{k})\Big|^2\,.
$$
Using this for the second factor in \eqref{f7} we find
\begin{equation}\nonumber
\begin{split}
&\sum\limits_{\boldsymbol{k} \in \Z^d} |v_{\mathbf{r}}(\bm{B}_n\boldsymbol{k})
\mathcal{F}\varphi(\bm{B}_n\boldsymbol{k})|^2\\
&~~=\sum\limits_{e \subset [d]}\int\limits_{[0,1]^d}
\Big|\sum\limits_{\boldsymbol{k} \in \Z^d}\mathcal{F}\Big[\Big(\prod\limits_{i\in e}
\frac{\partial^{r_i}}{\partial x_i^{r_i}}\Big)\varphi\Big](\bm{B}_n\boldsymbol{k})\exp(2\pi i\boldsymbol{k}\cdot \bsx)\Big|^2\,d\bsx\,.
\end{split}
\end{equation}
Now we apply Poisson's summation formula in the form \eqref{poiss1} to the integrand and find
\begin{equation}
\begin{split}
&\sum\limits_{\boldsymbol{k} \in \Z^d} |v_{\mathbf{r}}(\bm{B}_n\boldsymbol{k})
\mathcal{F}\varphi(\bm{B}_n\boldsymbol{k})|^2\\
&~~=(\det \bm{A}_n)^2\sum\limits_{e \subset [d]}\int\limits_{[0,1]^d}
\Big|\sum\limits_{\boldsymbol{m} \in \Z^d}\Big[\Big(\prod\limits_{i\in e}
\frac{\partial^{r_i}}{\partial x_i^{r_i}}\Big)\varphi\Big](\bm{A}_n(\bsx+\boldsymbol{m}))\Big|^2\,d\bsx\\
&~~\leq(\det \bm{A}_n)^2M(\bm{A}_n)\sum\limits_{e \subset [d]}\sum\limits_{\boldsymbol{m} \in \Z^d}
\int\limits_{[0,1]^d}\Big|\Big[\Big(\prod\limits_{i\in e}
\frac{\partial^{r_i}}{\partial x_i^{r_i}}\Big)\varphi\Big](\bm{A}_n(\bsx+\boldsymbol{m}))\Big|^2\,d\bsx\\
&~~=(\det \bm{A}_n)M(\bm{A}_n)\sum\limits_{e \subset [d]}
\int_{\R^d}\Big|\Big[\Big(\prod\limits_{i\in e}
\frac{\partial^{r_i}}{\partial x_i^{r_i}}\Big)\varphi\Big](\bsy)\Big|^2\,d\bsy\\
&= \frac{M(\bm{A}_n)}{\det \bm{B}_n}\|\varphi\|^2_{H^{{\boldsymbol{r}}}_{\text{mix}}}\,,
\end{split}
\end{equation}
where we used H\"older's inequality and the fact that $\varphi$ and all its partial derivatives have
compact support in $(0,1)^d$ together with \eqref{f9}.
\end{proof}
\begin{remark} Let us comment on the number $M(\bm{A}_n)$. Clearly, all the
fundamental cells are contained in
$[-L(n,P),1+L(n,P)]^d$ with $L(n,P):=(D_P n)^{-1/d} B_P$ and
$B_P$ from~\eqref{eq:b}. Here, we used that $\bm{A}_n=(D_P n)^{-1/d}\bm{V}$.
Therefore, $M(\bm{A}_n)$ is bounded
by the number of lattice points of $\bm{A}_n(\Z^d)$ in this set.
This number can be controlled by \eqref{f10} below, which will be also
of some importance later. For a proof see e.g.~\cite[Lem.\ 5]{MU17}.
In fact, for every axis-parallel box $\Omega\subset\R^d$ and every $\bm{T}\in\R^{d\times d}$
we have
\begin{equation}\label{f10}
\#\left(\bm{T}(\Z^d)\cap\Omega\right) \,\le\,
\frac{\mathrm{vol}_d(\Omega)}{\mathrm{Nm}(\bm{T})} +1.
\end{equation}
With all the definitions from above and $\mathrm{Nm}(\bm{V})=1$, we obtain that
\begin{equation}\label{eq:M}
M(\bm{A}_n) \,\le\, n\, D_P\,\left(1+\frac{2 B_P}{(D_P n)^{1/d}}\right)^d + 1
\,\le\, n\,2^d\, \max\left\{D_P,\, \frac{(2 B_P)^d}{n}\right\}.
\end{equation}
We see that the bound of the second factor of the above error bound depends
asymptotically only on $\sqrt{D_P}$ (and the norm of $f$).
However, for preasymptotic bounds also the term $B_P^{d/2}/\sqrt{n}$
plays an important role.
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:theor-error}]
It remains to estimate the middle factor in \eqref{f11}.
In fact, the statement \eqref{f13} then follows by a straightforward density argument recalling \eqref{f1}.
If ${\boldsymbol{r}}=(r,\dots,r)$ with $r\in\N$ is a constant smoothness vector, the following proof can be found
in several articles, see e.g.~\cite{tem93} or \cite[p.~580]{MU16}. Note that it also works for fractional
$r>1/2$, which is essentially shown in \cite{UlUl16}.
Although the optimal order of convergence is known also in the non-constant case,
we were not able to find a proof with explicit constants.
Therefore, we give it here.
We assume without restriction that $r_1=\dots= r_\eta<r_{\eta+1}\le\dots\le r_d$
for some $\eta\in\{1,\dots,d\}$.
First, for $\boldsymbol{m}=(m_1,\dots,m_d)\in\N_0^d$, we define the sets
\[
\rho(\boldsymbol{m}) \,:=\, \{\bsx\in\R^d\colon \lfloor2^{m_j-1}\rfloor\le |x_j|<2^{m_j}
\text{ for } j=1,\dots,d\}.
\]
Note that $\prod_{j=1}^d|x_j|<2^{\|\boldsymbol{m}\|_1}$ for all $\bsx\in\rho(\boldsymbol{m})$.
Since $B_n=n^{1/d}B=(D_P n)^{1/d}\, \bm{V}^{-\top}$ we have
\[
\mathrm{Nm}(B_n) \,=\, \inf_{\boldsymbol{k}\in\Z^d\setminus\{0\}}\prod_{j=1}^d \abs{(B_n \boldsymbol{k})_j}
\,=\, \frac{n}{D_P}.
\]
\]
This shows that $|(B_n(\Z^d)\setminus0)\cap\rho(\boldsymbol{m})|=0$ for all
$\boldsymbol{m}\in\N_0^d$ with $\|\boldsymbol{m}\|_1< R_n$, where
\[
R_n \,:=\, \left\lceil \log_2\bigl(n/D_P\bigr) \right\rceil.
\]
Moreover, for $B_n\boldsymbol{k}\in\rho(\boldsymbol{m})$, we have
\[
\nu_{*,{\boldsymbol{r}}}(B_n\boldsymbol{k}) \,\ge\, \nu_{{\boldsymbol{r}}}(B_n\boldsymbol{k})
\,\ge\, \prod_{j=1}^d\max\{1,2\pi\lfloor2^{m_j-1}\rfloor\}^{r_j}
\,\ge\, 2^{r_1 m_1+\dots+r_d m_d}.
\]
Since $\rho(\boldsymbol{m})$ is a union of $2^d$ axis-parallel boxes
each with volume less than $2^{\|\boldsymbol{m}\|_1}$, \eqref{f10} implies that
$\abs{B_n(\Z^d)\cap\rho(\boldsymbol{m})} \le 2^d(D_P 2^{\|\boldsymbol{m}\|_1}/n+1)
\le 2^{d+2} 2^{\|\boldsymbol{m}\|_1-R_n}$
if $\|\boldsymbol{m}\|_1\ge R_n$.
Additionally, note that
$\abs{\{\boldsymbol{m}\in\N_0^\eta\colon \|\boldsymbol{m}\|_1=\ell\}}=\binom{\eta-1+\ell}{\eta-1}$.
With $r:=r_1$ and $r':=r_{\eta+1}$, we obtain
\[\begin{split}
\sum_{\boldsymbol{k}\in\Z^d\setminus0} &|\nu_{*,{\boldsymbol{r}}}(B_n\boldsymbol{k})|^{-2}
\,\le\, \sum_{m: \|m\|_1\ge R_n}\,
\abs{B_n(\Z^d)\cap\rho(m)}\, 2^{-2r_1 m_1-\ldots-2r_d m_d}\\
\,&\le\, 2^{d+2}\, \sum_{m: \|m\|_1\ge R_n}\,
2^{\|m\|_1-R_n}\, 2^{-2r (m_1+\ldots+m_\eta)-2r'(m_{\eta+1}+\ldots+m_d)} \\
\,&=\, 2^{d+2}\, \sum_{\ell=R_n}^\infty\sum_{m: \|m\|_1=\ell}\,
2^{\|m\|_1-R_n}\, 2^{-2r (m_1+\ldots+m_\eta)-2r'(m_{\eta+1}+\ldots+m_d)} \\
\,&=\, 2^{d+2}\, \sum_{\ell=R_n}^\infty\, \sum_{\ell'=0}^\ell \;
\sum_{\substack{m_1,\dots,m_\eta:\\ \sum_{j=1}^\eta m_j=\ell-\ell'}} \;
\sum_{\substack{m_{\eta+1},\dots,m_d:\\ \sum_{j=\eta+1}^d m_j=\ell'}}
2^{\ell-R_n}\, 2^{-2r (\ell-\ell')-2r'\ell'} \\
\,&=\, 2^{d+2}\, \sum_{\ell=R_n}^\infty\, \sum_{\ell'=0}^\ell
\binom{\eta-1+\ell-\ell'}{\eta-1}\, \binom{d-\eta-1+\ell'}{d-\eta-1}\,
2^{\ell-R_n-2r \ell }\, 2^{-2(r'-r)\ell'} \\
\,&\le\, 2^{d+2}\, \sum_{\ell=R_n}^\infty\, \binom{\eta-1+\ell}{\eta-1}\,
2^{\ell-R_n-2r \ell }\; \sum_{\ell'=0}^\ell
\binom{d-\eta-1+\ell'}{d-\eta-1}\,
2^{-2(r'-r)\ell'}\,.
\end{split}\]
In the last estimate we used that $\binom{k+\ell}{k}\le\binom{k+\ell+1}{k}$ for
every $k,\ell\in\N$. To bound the two sums above we use the well-known binomial identity
\[
\sum_{\ell=0}^\infty \binom{D+\ell}{D}\, x^\ell \,=\, \frac{1}{(1-x)^{D+1}}
\qquad\text{for } x\in\C \text{ with } |x|<1,
\]
as well as the bound
\[
\binom{D+\ell+R}{D} \,\le\, \binom{D+\ell}{D}\,(1+R)^D
\]
for $D,\ell,R\in\N$.
We obtain for the second sum that
\[
\sum_{\ell'=0}^\ell \binom{d-\eta-1+\ell'}{d-\eta-1}\, 2^{-2(r'-r)\ell'}
\,\le\, \left(1-2^{-2(r'-r)}\right)^{-(d-\eta)}
\]
and for the first sum that
\[\begin{split}
\sum_{\ell=R_n}^\infty\, \binom{\eta-1+\ell}{\eta-1}\, 2^{\ell-R_n-2r \ell }
\,&=\, \sum_{\ell=0}^\infty\, \binom{\eta-1+\ell+R_n}{\eta-1}\, 2^{\ell-2r(\ell+R_n) } \\
\,&\le\, 2^{-2r R_n}\,(1+R_n)^{\eta-1}\,\sum_{\ell=0}^\infty\, \binom{\eta-1+\ell}{\eta-1}\, 2^{(1-2r)\ell} \\
\,&=\, 2^{-2r R_n}\,(1+R_n)^{\eta-1}\, \left(1-2^{(1-2r)}\right)^{-\eta}
\end{split}\]
for $r>1/2$.
If we use
$\log_2\bigl(n/D_P\bigr) \le R_n \le 1+\log_2\bigl(n/D_P\bigr)$
we finally obtain Theorem~\ref{thm:theor-error}.\end{proof}
\section{Numerical results: Exact worst-case errors in $\mathring{H}^{\mathbf{r}}_\mix$} \label{sec:numerics}
In Section \ref{sec:TheorBounds} it has been shown that the Frolov method achieves the optimal rate of convergence in Sobolev spaces with both uniform and
anisotropic mixed smoothness. However, as we have seen in Section \ref{sec:polynomials}, there are different ways to choose the polynomials,
which significantly influence the numerical performance. Therefore, even though the asymptotic
convergence rates of all (admissible) Frolov cubature rules have the optimal order
$\mathcal{O} (N^{-r} (\log N)^{(d-1)/2})$ for uniform smoothness $r$,
there might be huge constants involved. In order to investigate the influence of different Frolov polynomials on the preasymptotic
behavior of the integration error, we use a well-known technique for reproducing kernel Hilbert spaces
to compute the worst-case error explicitly. This supplements the theoretical bounds from Section \ref{sec:TheorBounds}.
Moreover, we compare the worst-case errors of Frolov cubature, the sparse grid method and quasi--Monte
Carlo methods in $\mathring{H}^r_\mix$.
\subsection{Exact worst-case errors via reproducing kernels}
The worst-case error of any linear cubature rule $Q_N(f) = \sum_{i=1}^N w_i f(\bsx_i)$ with prescribed
weights and nodes can be computed \emph{exactly} via the norm of the error functional $R_N(f) := I_d(f) - Q_N(f)$, cf.\ Eq.\ \eqref{eqn_functional_norm}. Applying $R_N$ to both components of the kernel $ \mathring{K}_{d}^{\boldsymbol{r}}(\bsx, \bsy)$, the well-known formula for the (absolute) worst-case error is obtained, i.e.
\begin{equation} \label{eqn_wce_formula}
\begin{aligned}
\sup_{\|f\|_{\mathring{H}^{\boldsymbol{r}}_\mix} \leq 1} |R_N(f)|^2 & = \int\displaylimits_{[0,1]^d} \int\displaylimits_{[0,1]^d} \mathring{K}_{d}^{\boldsymbol{r}}(\bsx, \bsy) \, \,\mathrm{d} \bsx \,\mathrm{d} \bsy - 2 \sum_{i=1}^N w_i \int\displaylimits_{[0,1]^d} \mathring{K}_{d}^{\boldsymbol{r}}(\bsx_i, \bsy) \, \,\mathrm{d} \bsy \\
& \quad \quad + \sum_{i=1}^N \sum_{j=1}^N w_i w_j \mathring{K}_{d}^{\boldsymbol{r}}(\bsx_i, \bsx_j) .
\end{aligned}
\end{equation}
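For smoothness $r=1$, where $\mathring{K}^1_1(x,y)=\min\{x,y\}-xy$ and the Riesz representer of $I_d$ factorizes into $y_\ell(1-y_\ell)/2$, formula \eqref{eqn_wce_formula} can be evaluated in a few lines. The sketch below is our own implementation for the uniform-smoothness case ${\boldsymbol{r}}=(1,\ldots,1)$ only.

```python
import numpy as np

def K1(x, y):
    # univariate kernel for smoothness r = 1: min(x, y) - x*y
    return np.minimum(x, y) - x * y

def wce_squared(nodes, weights):
    # exact squared worst-case error (eqn_wce_formula) for r = (1, ..., 1);
    # nodes has shape (N, d), weights has shape (N,)
    N, d = nodes.shape
    init = (1.0 / 12.0) ** d  # double integral of the kernel is (1/12)^d
    # Riesz representer of I_d: product of y_l (1 - y_l) / 2 over l
    rep = np.prod(nodes * (1.0 - nodes) / 2.0, axis=1)
    Kmat = np.ones((N, N))
    for l in range(d):
        Kmat *= K1(nodes[:, l][:, None], nodes[:, l][None, :])
    return init - 2.0 * weights @ rep + weights @ Kmat @ weights
```

For instance, the univariate midpoint rule with $N$ nodes has squared worst-case error $1/(12N^2)$ in this space.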
Often, \eqref{eqn_wce_formula} is normalized with respect to the norm of
$I_d$ in the dual space $(\mathring{H}^{\boldsymbol{r}}_\mix)^\star$, i.e.
\eqref{eqn_wce_formula} is divided by
$\|I_d\|_{(\mathring{H}^{\boldsymbol{r}}_\mix)^\star} = (\int_{[0,1]^d}\int_{[0,1]^d} \mathring{K}_{d}^{\boldsymbol{r}}(\bsx, \bsy) \, \,\mathrm{d} \bsx \,\mathrm{d} \bsy )^{1/2}$,
cf. \eqref{eqn_initial_error}.
The resulting quantity is called \emph{normalized worst-case error}.
In order to evaluate \eqref{eqn_wce_formula} for an arbitrary given cubature rule we use the closed-form
representation of the kernel $\mathring{K}^{{\boldsymbol{r}}}_{d}$ from Theorem \ref{repr_kernel} as well as the closed-form representation of the Riesz-representer \eqref{eqn_riesz_representer}.
Besides Frolov cubature rules, we will consider the sparse grid construction, which goes back to
Smolyak \cite{Smolyak:1963}, and also higher-order quasi--Monte Carlo integration \cite{DiPi10}.
Examples for the different point constructions are given in Figure \ref{fig_cubature}. Their properties will be discussed below.
The Frolov points are generated using our newly developed Algorithm \ref{QRassemble}. The resulting points obtained by the improved polynomial construction can also be downloaded from \url{http://wissrech.ins.uni-bonn.de/research/software/frolov}.
\begin{figure}[t]
\includegraphics[width=0.32\linewidth]{fig/FrolovPoints.pdf}
\includegraphics[width=0.32\linewidth]{fig/Order2Digitalnet.pdf}
\includegraphics[width=0.32\linewidth]{fig/sparsegrid.pdf}
\caption{A Frolov lattice (left), an order-$2$ digital net (middle) and a zero boundary sparse grid (right).} \label{fig_cubature}
\end{figure}
\subsection{Uniform mixed smoothness}
As a first step we compare worst-case errors for cubature formulas that
are known to work well in periodic Sobolev spaces,
of which $\mathring{H}^r_\mix$ is a subset. These are different Frolov
cubature rules that are based on different choices of the
generating polynomial. In the following, ``Classical Frolov'' will refer to the classical generating polynomial in \eqref{eqn_classical_polynomial}, while ``Improved Frolov'' will refer to the lattices that are generated by the improved polynomials from Section \ref{sec:polynomials}. Moreover, we
consider the sparse grid method that is based on the
trapezoidal rule, see Appendix \ref{sec_sparsegrid}. Due to the zero-boundary condition in $\mathring{H}^r$, all points with one component equal to zero are left out, cf. Figure \ref{fig_cubature}.\footnote{This is similar to the open trapezoidal rule which, however, uses different weights, cf. \cite{Gerstner.Griebel:1998}.}
It achieves a
convergence rate of
order $\mathcal{O}(N^{-r} (\log N)^{(d-1)(r+1/2)})$ in
$\mathring{H}^r_\mix$, which is best possible for a sparse grid method, cf. Theorem \ref{cor_sg} below.
As an example for a higher order quasi--Monte Carlo method we use a
digital net of order $2$ that is obtained by interlacing
the digits of a $(2d)$-dimensional Niederreiter-Xing net. The latter is
constructed using the implementation of Pirsic \cite{Pirsic} of
Xing-Niederreiter sequences \cite{NiederreiterXing} for rational places
in dimension $2d-1$. These are known to yield smaller $t$-values than
e.g. Sobol- or classical Niederreiter-sequences \cite{Dick2008572}.
Then, a $2d$-dimensional digital net is obtained by employing the
sequence-to-net propagation rule, cf. \cite{DiPi10,Niederreiter1996241} for
more details.
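The digit-interlacing step itself is easy to sketch. The following is a minimal illustration of the interlacing principle on integer bit representations (our own sketch, not the Niederreiter-Xing construction or Pirsic's implementation): a $2d$-dimensional base-$2$ point with $m$-bit coordinates is mapped to a $d$-dimensional point by alternating the bits of consecutive coordinate pairs.

```python
def interlace_pair(a, b, m):
    """Interleave the m most significant bits of a and b (integers in [0, 2^m))."""
    out = 0
    for k in range(m - 1, -1, -1):           # from the most significant bit down
        out = (out << 1) | ((a >> k) & 1)    # next bit from the first coordinate
        out = (out << 1) | ((b >> k) & 1)    # next bit from the second coordinate
    return out                                # integer in [0, 2^(2m))

def interlace_point(coords, m):
    """Map a 2d-dimensional point (m-bit integer coordinates) to a d-dimensional one."""
    assert len(coords) % 2 == 0
    return [interlace_pair(coords[2 * j], coords[2 * j + 1], m)
            for j in range(len(coords) // 2)]
```

Dividing each interlaced coordinate by $2^{2m}$ yields the order-$2$ net points in $[0,1)^d$.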
It is known that order-$2$ nets yield the optimal rate of convergence in periodic Sobolev spaces with bounded mixed derivatives of order $r < 2$, see \cite{Hinrichs.Markhasin.Oettershagen.ea:2016} and also \cite{GodaQMC}, since $\mathring{H}^r_\mix \subset H^r_\mix(\mathbb{T}^d)$.
Moreover, in the bivariate setting we also consider the Fibonacci
lattice, which is not only known to be an order-optimal cubature rule
for periodic Sobolev spaces with dominating mixed smoothness \cite{DTU16}, but also
represents the best possible point set for quasi--Monte Carlo
integration in this space, at least for small point numbers
\cite{Hinrichs.Oettershagen:2016}.
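For reference, the Fibonacci lattice is simple to generate. The following sketch (our own illustration of the standard construction, not the code used for the experiments) produces the $N = F_m$ points $x_i = \bigl(i/F_m,\ \{i\,F_{m-1}/F_m\}\bigr)$, $i = 0,\ldots,F_m-1$:

```python
def fibonacci(m):
    """Return F_m with the convention F_1 = F_2 = 1."""
    a, b = 1, 1
    for _ in range(m - 2):
        a, b = b, a + b
    return b

def fibonacci_lattice(m):
    """All F_m points of the bivariate Fibonacci lattice in [0,1)^2."""
    N, Nprev = fibonacci(m), fibonacci(m - 1)
    return [(i / N, (i * Nprev % N) / N) for i in range(N)]
```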
\begin{figure}
\centering
\includegraphics[width=0.45 \linewidth]{plots/r2_comp-figure0.pdf}\,
\includegraphics[width=0.45 \linewidth]{plots/r2_comp-figure1.pdf}
\caption{Worst-case errors for different cubature rules for uniform mixed smoothness $r=2$ in
dimension $d=2$ (left) and dimension $d=4$ (right).} \label{fig_d2_d4_wce}
\end{figure}
In the left-hand-side picture of Figure \ref{fig_d2_d4_wce}, the
worst-case errors for smoothness $r=2$ are computed in dimension $d=2$.
Clearly, the Frolov lattice based on the improved polynomial performs
best in $\mathring{H}^r_\mix$. The Fibonacci lattice is of similar
quality, while the classical Frolov lattice is slightly worse. The sparse
grid also achieves the optimal main rate of $N^{-r}$, but it is known
that the exponent of its logarithm is smoothness dependent. This is also
apparent in Figure \ref{fig_d2_d4_wce}, where the sparse grid has an
asymptotic behavior that is inferior to all the other considered
methods. On the right-hand-side of Figure \ref{fig_d2_d4_wce}, the
worst-case errors for smoothness $r=2$ are computed in dimension $d=4$.
Here, the Fibonacci lattice is not considered. However, for all the
other methods we note that the picture does not change much, compared to
the case $d=2$. As before, the improved Frolov method performs best and
the classical Frolov obtains the same optimal asymptotic convergence
rate but a substantially worse constant. This effect is now much
stronger than in the bivariate setting, i.e. the classical Frolov lattice has
a worst-case error that is about two orders of magnitude larger than that of
the improved Frolov lattice. Moreover, the order-$2$ digital net seems
to be competitive too, albeit with a substantially larger constant and
longer pre-asymptotic regime. Again, the worse logarithmic exponent of
the sparse grid method can be clearly observed.
In Figures \ref{fig_r1}, \ref{fig_r2} and \ref{fig_r3} the influence of
the dimensionality and the smoothness on the performance of the Frolov
cubature method is considered in more detail.
As an example for a cubature method with a less than optimal complexity, the sparse grid method is also included.
Especially the classical construction suffers from a strong growth of the constant as the
dimensionality increases. Also, the pre-asymptotic regime seems to last
longer. This effect cannot yet be thoroughly explained by the
existing theory. In dimension $d=7$, the classical Frolov construction
needs more than $10^6$ points to achieve the error level of the
zero-algorithm, i.e. a normalized worst-case error of $1$. Note at this
point that all given errors are normalized worst-case errors, which
can, for non-optimally weighted cubature rules, be substantially larger
than $1$.
It is apparent that the classical Frolov method is practically useless
in dimension $d \geq 5$, due to its unfavorable pre-asymptotic behavior.
Our new approach, however, shows a much better dependence on the
dimensionality and certainly allows the treatment of
moderate-dimensional integrals from Sobolev spaces with dominating mixed
smoothness of uniform type.
Moreover, we observe the universality of Frolov's method, i.e. without adaptation to the respective parameters
it achieves the best possible rate of convergence in every $\mathring{H}^r_\mix$,
$r \in \{1,2,3\}$.
\subsection{Anisotropic mixed smoothness}
\begin{figure}[t]
\centering
\includegraphics[width=0.41\linewidth]{plots/Frolov_aniso_tino-figure0.pdf} \, \includegraphics[width=0.56\linewidth]{plots/Frolov_aniso_tino-figure1.pdf}
\caption{Worst-case errors for anisotropic mixed smoothness in various dimensions. Left-hand side: $r_1=1$ and $r_2 = \cdots = r_d = 2$. Right-hand side: $r_1=2$ and $r_2 = \cdots = r_d = 3$.} \label{fig_aniso}
\end{figure}
It has been shown in Theorem \ref{thm:theor-error}
that in Sobolev spaces with dominating mixed smoothness of different orders in each direction, only the lowest smoothness and the associated dimension enter the error estimate. In order to make this phenomenon visible from a numerical perspective, we compute explicit worst-case errors in
\[
\mathring{H}^{\boldsymbol{r}}_\mix = \mathring{H}^{r_1} \otimes \cdots \otimes \mathring{H}^{r_d} ,
\]
where $r_1 = r$ and $r_2 = r_3 = \cdots = r_d = r+1$. Then, Theorem \ref{thm:theor-error} predicts that the worst-case error asymptotically behaves like in the univariate setting, i.e. decays at a rate of $\mathcal{O}(N^{-r})$. The question that is investigated in Figure \ref{fig_aniso} is how long it takes to overcome the preasymptotic regime until this favorable convergence rate becomes visible.
On the left-hand-side of Figure \ref{fig_aniso}, i.e. for $r=1$, already with fewer than $3000$ points the Frolov method follows the asymptotic regime of $N^{-1}$ in all the considered cases $d \in \{2,3,\ldots, 7\}$.
In contrast, on the right-hand-side of Figure \ref{fig_aniso}, i.e. for $r=2$, the dimension seems to have a much larger impact on the length of the sub-optimal preasymptotic regime. For example, in $d=7$ the $N^{-2}$-rate becomes visible only when the number of points $N$ is larger than $\approx 10^5$.
We remark that the sparse grid method is also able to deal with anisotropic mixed smoothness vectors
${\boldsymbol{r}} = (r_1,\ldots, r_d)$. Then, however, the construction needs to be adjusted to
the smoothness vector which has to be known in advance,
see \cite[pp. 32,36,72]{Tem86}, the recent survey \cite[Sect.\ 10.1]{DTU16} and the
references therein. The resulting sparse grid construction therefore is not a universal cubature
formula.\footnote{Note that it is also possible to construct dimension-adaptive sparse grids, which are able to detect the smoothness vector adaptively in the process of approximation, cf. \cite{Gerstner.Griebel:2003}.}
However, both plots in Figure \ref{fig_aniso} were computed with the exact same set of Frolov points, which automatically benefit from the anisotropic smoothness that is present in a given integration problem, i.e. in this case $r=1$ or $r=2$. Therefore, it is not necessary to estimate the smoothness of the integrand and tune the method appropriately.
\section*{Acknowledgement}
T.U. wishes to thank Winfried Bruns (Osnabrueck) for several fruitful discussions.
T.U. and J.O. gratefully acknowledge support by the German Research Foundation (DFG) Ul-403/2-1, GR-1144/21-1 and the Emmy-Noether programme, Ul-403/1-1.
\newpage
\section{Introduction}
Narrow membrane tubes are ubiquitous in eukaryotic cells and are essential for a number of cell functions including signalling and trafficking.
Examples of tubular structures are the axons and dendrites of nerve cells, and the tubular networks in the Golgi body and the endoplasmic reticulum. The curvature energy of a tubular protrusion is $E \propto \kappa (l/r)$, where $l$ is the length of the tube, $r$ its radius and $\kappa$ the membrane bending rigidity. As a result, the formation of narrow tubes (characterised by $l \gg r$) requires
extremely high energies and hence they are not expected to be stable unless stabilised by external forces \cite{julichertube}.
Such forces can be applied by laser tweezers \cite{lasertweezer}, suction pressure \cite{micropipette} or processive molecular motors \cite{motor}.
Forces internal to the cell or membrane vesicles, for example, growing bundles of actin \cite{actinmembranePlos,MesarecIglic2017} or microtubule (MT) filaments \cite{libchaberbuckledMT,Bausch} can also push out membrane protrusions which can grow into long tubes upon further polymerization
of the filaments.
Tubulation can also be promoted by spiral shaped protein filaments like dynamin
\cite{dynamin,hinshaw2000dynamin} and ESCRT-III \cite{ESCRT3}, or proteins with curved domains
like BAR \cite{peter2004bar}, ENTH \cite{ford2002curvature}, Exo70 \cite{zhao2013exo70}, etc., that wrap around the tube. Such induced
membrane curvatures are interpreted as a local spontaneous curvature \cite{kozlov}. Spontaneous membrane curvature can also naturally result from lipid heterogeneity
\cite{Dimova} or lipid tilt \cite{lipidtilt} in a phospholipid membrane.
\begin{figure}[h]
\centering
\includegraphics[width=3in]{Fig-one}
\caption {(color online) Tubulation of a vesicle, (a) to (d), due to FtsZ filaments (fluorescent regions), in real time (minutes).
(e) shows loss of spherical shape due to excessive tubulation. (f) and (g) show high-resolution images focused at the equatorial plane and the surface, respectively, of the tubulated vesicle. A nearly regular pattern of tubulation sites is visible. (h) shows reverse tubulation, i.e., convex bulges on the vesicle surface accompanied by membrane invaginations protruding inward. (i) shows two different ways of anchoring FtsZ onto the membrane, causing opposite membrane curvatures. (j) shows the parallel arrangement of FtsZ filaments along the tubular axis. Scale bars are $10\mu$m in the upper and middle rows, and $420$nm in the lower row. Adapted from Figs.~1,2,3 and 6 of Ref.~\cite{osawaEMBO}, with permission.}
\label{fig.expt}
\end{figure}
Here we address experiments \cite{osawaEMBO} where an array of membrane tubes emerge spontaneously due to surface active filamentous proteins FtsZ, attached to the outer surface of artificial membrane vesicles (liposomes). Interestingly, here the FtsZ filaments align with the tube axis, instead of wrapping around the tube. We attribute this to the
attractive interaction among parallel FtsZ filaments, in the presence of GTP, which gives rise
to FtsZ bundles \cite{bundle}. In our model, this bundling interaction will be the main driving force towards multiple tube formation.
Secondly, the regular arrangement of many tubes on the curved vesicle surface
is rather striking and has not been modelled before. As we will see later, the formation of such tubes cannot be reliably modelled by considering just a single axisymmetric tube, because collective effects turn out to be important. For example,
in the present case of a tubular array the boundary condition at the base of a single tube is not axisymmetric but instead has five- or six-fold discrete
rotational symmetry, depending on the number of neighbouring tubes surrounding it.
In this experiment Osawa et al.~\cite{osawaEMBO} attached membrane-targeted FtsZ filaments onto the outer surface of Giant Unilamellar Vesicles (GUVs) of diameter $\sim 10\mu$m and observed tubes of diameter $50-200$nm grow spontaneously.
The filaments caused either concave depressions or convex bulges on the membrane, as illustrated in Fig.~\ref{fig.expt}(i),
depending on whether the membrane anchors (FtsA) were attached to the C terminal or the N terminal of the filaments, respectively. Depending on the type of attachment, membrane tubes either grew outward (see time series in Fig.~\ref{fig.expt}(a-d)) or inward (see Fig.~\ref{fig.expt}h). While elongation of tubes was observed only when GTP was in abundance, tubes shrank when the medium was depleted of GTP, implying the essential role played by GTP. But unlike in active systems, here GTP hydrolysis neither generates any active forces
nor moves the FtsZ filaments. In fact, here the FtsZ filaments are anchored to the membrane surface and can at most diffuse slowly.
In order to understand the FtsZ-assisted membrane tubulation it is necessary to review the well-known
physical properties of FtsZ filaments. FtsZ filaments have an intrinsic curvature of order
($100$-$200$ nm)$^{-1}$ and play an important role during cytokinesis of rod-shaped bacteria
like \textit{E. coli} and \textit{B. subtilis} \cite{Sci17Huang,Sci17garner}.
In the presence of GTP, FtsZ filaments also condense into bundles via weak,
lateral, inter-filament attraction \cite{zringprl}. These bundles can also locally bend the membrane
and generate constriction forces (a few pN) \cite{osawaSci} on
relatively wide membrane tubes of diameter $1-2\mu$m.
Given these properties, it has been unclear how the narrow tubes observed here form.
In this article, treating the filaments as a nematic field and using a generalised
Canham-Helfrich \cite{canham70,helfrich73, SunilPRE,Sunilbiopj,SunilMacromolecule}
model for the lipid membrane, we suggest a mechanism for vesicle tubulation and predict the
pattern of arrangement of FtsZ filaments on the vesicle.
According to the Hairy-Ball theorem \cite{prost} filaments, approximated as nematics here,
cannot be arranged on a closed surface (tangentially) without
forming topological defects and the topological charge of these defects must add up to two.
Arrangement of these defects is also a theoretically well studied
problem \cite{prost,nelson,bowickPRL}.
Keber et al.~\cite{Bausch} have recently studied active
dynamics of such defects on spherical as well as deformed vesicles. However the influence of
the nematics (filaments) on the elastic membrane is also very interesting because
it leads to nontrivial deformation of the vesicle shape \cite{osawaEMBO,Bausch}, which is
the focus of this article. In fact, in the case of FtsZ the shape deformation of the vesicle
is accompanied by a large number of high-energy defects, which was not considered before.
Qualitatively, Fig.~\ref{fig.expt}b-d,f suggest that concave depressions and tubes go hand in hand
and we can guess that tubes may form at the junctions where such concave patches meet.
The question then arises: how will these concave patches arrange themselves on a spherical surface?
Fig.~\ref{fig.expt}g further shows a regular arrangement of bright patches, with coordination number
five or six.
Can we then approximate the patches as the pentagons and hexagons that cover the surface of
a soccer ball? The tubes would then emerge from the vertices. We will find out later that
a variant of this picture is correct.
\section{Model}
In our coarse-grained approach, we model the FtsZ coated membrane as a nematic field
adhering to a deformable fluid membrane surface. This model was developed and many of its
properties were studied by one of the authors \cite{SunilPRE,Sunilbiopj,SunilMacromolecule}.
We will later highlight the results that are new here. In this model the local orientation
of the nematic field is denoted by the unit vector $\hat n (\vec r)$ which lies in the local
tangent plane of the membrane and is free to rotate in this plane. Filament-membrane
interactions are modelled as anisotropic spontaneous curvatures of the membrane, in the
vicinity of the filament, while filament-filament interactions are modelled by the splay and
bend terms of the Frank's free energy for nematic liquid crystals. The total energy is
\begin{eqnarray}
E&=& \int dA \Big [\frac{\kappa}{2} (2H)^2 + \frac{\kappa_{\parallel}}{2}(H_{\parallel}-c_{\parallel})^2 +
\frac{\kappa_{\perp}}{2}(H_{\perp}-c_{\perp})^2 \Big ]\nonumber\\
&+& \int dA \Big [\frac{K_1}{2} (\tilde\nabla.\hat n)^2 + \frac{K_3}{2}(\tilde\nabla. \hat t)^2\Big ].
\label{eqn:totalE}
\end{eqnarray}
Here the first term is the Canham-Helfrich elastic energy for membranes~\cite{canham70,helfrich73} with bending rigidity $\kappa$ and membrane mean curvature $H=(c_1+c_2)/2$. Here, $c_1$ and $c_2$ are the local principal curvatures on the membrane surface along orthogonal tangent vectors $\hat t_1$ and $\hat t_2$. $\kappa_{\parallel}$ and $\kappa_{\perp}$ are the induced membrane bending rigidities and $c_{\parallel}$ and $c_{\perp}$ are the induced intrinsic curvatures along $\hat n$, the orientation of the filament in the local tangent plane, and $\hat t$, its perpendicular direction, respectively. Origin of a nonzero $c_{\perp}$, an induced curvature
perpendicular to the filament, is not obvious. In fact, it will turn out to be the driving force towards
filament bundling, which accompanies membrane tubulation. Tubulation can also occur due to
the $c_{\parallel}$ term alone; however, it does not promote the formation of long, straight filament bundles.
The filament orientations on the tubes are different in these two cases.
The membrane curvatures along $\hat n$ and $\hat t$ are given by $H_{\parallel}=c_1 \cos^2 \phi + c_2 \sin^2 \phi$ and $H_{\perp}=c_1 \sin^2 \phi + c_2 \cos^2 \phi$, where $\phi$ denotes the angle between the filament orientation $\hat n$ and the principal direction $\hat t_1$. $K_1$ and $K_3$ are the splay and bend elastic constants for the in-plane nematic interactions and $\tilde \nabla$ is the covariant derivative on the curved surface~\cite{Chaikin}.
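As a quick consistency check of these definitions (our own illustration, not part of the original model), note that the two directional curvatures always sum to twice the mean curvature, $H_{\parallel}+H_{\perp}=c_1+c_2=2H$, for every angle $\phi$:

```python
import math

# Directional curvatures seen by the filament (H_par, along n) and by its
# in-plane normal (H_perp, along t), for principal curvatures c1, c2 and
# angle phi between n and the principal direction t_1.
def directional_curvatures(c1, c2, phi):
    H_par = c1 * math.cos(phi)**2 + c2 * math.sin(phi)**2
    H_perp = c1 * math.sin(phi)**2 + c2 * math.cos(phi)**2
    return H_par, H_perp

# For any phi: H_par + H_perp = c1 + c2 = 2H.
```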
As in Ref~\cite{SunilPRE}, we use a discrete form of this energy functional to perform Monte-Carlo
simulations on a triangulated membrane,
to study the equilibrium shapes.
Each vertex $i$ hosts an orientation vector $\hat n_i$. In particular, we use the standard Lebwohl-Lasher model $E_{nn}=-\epsilon_{LL}\sum _{i>j}\left(\frac{3}{2} (\hat n_i\cdot\hat n_j)^2 - \frac{1}{2}\right)$ \cite{LL} to mimic the in-plane nematic interaction terms.
Here $\epsilon_{LL}$ is the strength of the nematic interaction, in the one-constant approximation ($K_1=K_3$), and the sum
$\sum _{i>j}$ runs over all nearest-neighbour pairs $(i,j)$ of vertices on the triangulated grid, promoting alignment among the
neighbouring orientation vectors.
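In a minimal sketch (our own illustration, with the nearest-neighbour pairs of the triangulated mesh supplied as an edge list), this discrete Lebwohl-Lasher energy reads:

```python
# E_nn = -eps_LL * sum over neighbour pairs of (3/2 (n_i . n_j)^2 - 1/2).
# Note the energy depends only on (n_i . n_j)^2, so it is invariant under
# n -> -n, as required for a nematic (headless) director field.
def lebwohl_lasher_energy(directors, edges, eps_LL):
    E = 0.0
    for i, j in edges:
        dot = sum(a * b for a, b in zip(directors[i], directors[j]))
        E += 1.5 * dot**2 - 0.5
    return -eps_LL * E
```

For a single edge with parallel unit directors the energy is $-\epsilon_{LL}$ (alignment is favoured), while perpendicular directors cost $+\epsilon_{LL}/2$.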
Models with anisotropic membrane curvatures have been developed and used by various authors
\cite{iglic2005,AnisotropicMemCurv2016simul,iglic2018}. But all of these efforts focussed on membrane
structures that are axisymmetric in nature and mainly sought to study the formation of single tubes.
Previous work by one of the authors \cite{SunilPRE,Sunilbiopj,SunilMacromolecule} focussed on effect of nonzero
$\kappa_{\parallel}$ and $c_{\parallel}$. This amounts to introducing one preferred length
scale, namely the radius of the tube. In contrast, the present work focusses on the effects of non-zero
$\kappa_{\perp}$ and $c_{\perp}$. In fact, using these parameters we account for a new physical effect, namely
bundling of filaments due to inter-filament attraction.
Nonzero values of $\kappa_{\perp}$ and $c_{\perp}$ induce parallel alignment of filaments, on
the outer surface of a membrane tube, all pointing along the tube axis ($\hat z$). The higher the
$c_{\perp}$, the smaller the tube radius; this is equivalent to filament bundling. In our simulation,
with fixed number of vertices, the density of vertices goes up relatively at
the high curvature regions of the triangulated surface. This makes the number density of
vertices relatively higher at the tube surfaces, leading to higher filament density
on the tubes, consistent with filament bundle picture. Since our model does not distinguish
between the outer and the inner surfaces of a tube, this effect can also account for MT
bundles which are responsible for pushing out membrane tubes from a vesicle.
Note that, the nematic interaction term in our model is also minimized when the filaments align
with the tube axis ($\hat z$). This is also true for the continuum Frank free energy, where both
the splay and bend terms are minimized. But the nematic interaction term does not control
the tube radius because it is not dependent on the curvature of the underlying membrane
as long as all the nematics on the tube are aligned along the common $\hat z$ axis of the straight tube.
We will later show that bundling-assisted membrane tubulation is possible even in the absence of intrinsic
filament curvature (i.e., $c_{\parallel}=0$). This will be relevant for MT filaments, which do not have
any intrinsic curvature. But in the case of FtsZ, since intrinsic curvature and bundling are both known to be
involved, we will use nonzero values for both $c_{\parallel}$ and $c_{\perp}$ for modeling FtsZ.
\section{Results}
Monte-Carlo simulations of our model (Eqn.~\eqref{eqn:totalE}) show that, for $c_{\parallel}<0, c_{\perp}>0$,
with $|c_{\parallel}|<< |c_{\perp}|$, regularly spaced narrow tubes emerge (see Fig.~\ref{fig.tubes}) from
an initially spherical vesicle. We checked that $c_{\perp}$ mainly controls the tube radii and
$c_{\parallel}$ controls the curvature of the valleys. However, the total number of tubes is a joint effect
of both $c_{\perp}$ and $c_{\parallel}$.
Reversing the signs of the intrinsic curvatures, i.e.,
$c_{\parallel}>0, c_{\perp}<0$, makes tubes grow into the vesicle (see Fig.~\ref{fig.tubes} a,d,e).
Since the FtsZ-coated membrane is expected to be stiffer than the bare bilayer membrane (for which $\kappa\sim 20k_BT$),
we set $\kappa_{\parallel}=35k_BT$ and $\kappa_{\perp}=25k_BT$.
We found that setting $\kappa=0$ or $\kappa=20k_BT$ only makes minor qualitative
differences in the tubular structures; tubes are thicker and slightly fewer in number with
$\kappa=20k_BT$. Furthermore, nonzero $\kappa$ offers an initial energy
barrier for tube nucleation making tubulation slow, which could be bypassed by
raising the temperature temporarily in our Monte-Carlo simulation.
\begin{figure}[h]
\centering
\includegraphics[width=3.5in]{Fig2b}
\caption{ (color online) Growth of tubes, outward (a,b,c) or inward (a,d,e), depends on the signs of
$c_{\parallel}$ and $c_{\perp}$. We use $c_{\parallel}=-0.05$ and $ c_{\perp}=1$ for
outward growing tubes, and reverse the signs of $c_{\parallel}$ and $c_{\perp}$
to get inward growing tubes. Other parameters are $\kappa_{\parallel}=35,\kappa_{\perp}=25$
and $\epsilon _{LL}=3$, in units of $k_BT$. Volume has been held fixed.
In (e) the vesicle is sectioned to make the inner tubes visible.
The results from the constant-area ensemble are nearly identical at the same parameter values.
}
\label{fig.tubes}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=3in]{4shapes}
\caption{ (color online) Shapes with fixed $c_{\perp}=0,\kappa_{\parallel}=25$ and $\epsilon_{LL}=5.0$,
with other parameters varying, a) $\kappa=0,\kappa_{\perp}=25,c_{\parallel}=1.0$ b) same
as (a) except $c_{\parallel}=-1.0$, c) $\kappa=10,\kappa_{\perp}=0,c_{\parallel}=1.0$ and
d) same as (c) except $c_{\parallel}=-1.0$. }
\label{fig.4shapes}
\end{figure}
In Fig.\ref{fig.4shapes} we show a few shapes, some of them quite unexpected,
obtained when the bundling effect is switched off, i.e., $\kappa_{\perp}=0$.
Yet, tubes or inverted tubes can form (see Fig.\ref{fig.4shapes}a and b, respectively)
provided $|c_{\parallel}|\gg 1/R$, the curvature of the original spherical vesicle.
However, Fig.\ref{fig.4shapes}c and d, are examples where tubes do not form due to
variation of other parameters.
The criteria for the emergence of a tube and the dependence of its radius ($r$) on various parameters
of the model can be understood from the stability analysis of a single tube given in
Ref.~\cite{Sunilbiopj}. However, the specific arrangement of many tubes on the vesicle surface
cannot be inferred analytically. Minimization of the free energy of a straight, uniform
cylindrical tube \cite{Sunilbiopj} yields the equilibrium tube radius
\begin{equation}
r^2=\frac {\frac{\kappa}{2}(\kappa_{\parallel}+\kappa_{\perp}) +
\kappa_{\parallel}\kappa_{\perp}} {\kappa_{\parallel}\kappa_{\perp}
(c_{\parallel}+c_{\perp})^2}
\end{equation}
and the inclination ($\phi$) of the nematic director on the surface of the tube \cite{Sunilbiopj},
\begin{equation}
\cos^2 \phi = \frac{(\kappa_{\parallel}c_{\parallel} - \kappa_{\perp} c_{\perp}) r +
\kappa_{\perp} } {\kappa_{\parallel} + \kappa_{\perp}}
\label{Eq.cosphi}
\end{equation}
Note that if either $\kappa_{\parallel}$ or $\kappa_{\perp}$ is
zero, while $\kappa\neq 0$, the radius becomes infinite, signalling that tubes will not
form, which is the case for Fig.\ref{fig.4shapes}c,d. On the other hand, if
$\kappa=0$ then the tube radius is $(c_{\parallel}+c_{\perp})^{-1}$. This is the case for
Fig.\ref{fig.4shapes}a,b, where only $c_{\parallel}$ is nonzero, and the same for
Fig.\ref{fig.tubes}, where $c_{\perp}$ dominates over $c_{\parallel}$.
In this respect we note that in Fig.~11 of Ref.~\cite{SunilPRE}, despite $\kappa_{\perp}=0$
and $\kappa,\kappa_{\parallel}\neq 0$, tubes still emerged. But these tubes were bent
and also had a nonuniform cross-section, which violates the assumptions of the stability analysis.
Furthermore, Eq.~\eqref{Eq.cosphi} suggests that for physically acceptable solutions the right
hand side (r.h.s.) must be between zero and one. Indeed $\phi=\pi/2$ for Fig.\ref{fig.tubes},
while $\phi=0$ for Fig.\ref{fig.4shapes}a, consistent with the longitudinal and azimuthal orientations
of the filaments on the tube, respectively. But when the r.h.s. is less than zero a boundary
minimum of the free energy occurs at $\phi=\pi/2$, and similarly, for the r.h.s. larger than
one, the free energy at $\phi=0$ is the physically acceptable minimum. Correspondingly,
the formula for the equilibrium $r$ also changes (see Ref.~\cite{Sunilbiopj}).
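These two formulas are easy to evaluate directly. The following sketch (our own illustration, not part of the original analysis) uses the parameter values of Fig.~\ref{fig.tubes} with $\kappa=0$, and reproduces the boundary-minimum case: the radius reduces to $(c_{\parallel}+c_{\perp})^{-1}$ and the r.h.s. of the inclination formula comes out negative, so $\phi=\pi/2$ applies and the filaments align with the tube axis.

```python
import math

def tube_radius(kappa, k_par, k_perp, c_par, c_perp):
    """Equilibrium radius of a straight, uniform cylindrical tube."""
    num = 0.5 * kappa * (k_par + k_perp) + k_par * k_perp
    den = k_par * k_perp * (c_par + c_perp)**2
    return math.sqrt(num / den)

def cos2_phi_rhs(kappa, k_par, k_perp, c_par, c_perp):
    """Right-hand side of the cos^2(phi) formula; physical only in [0, 1]."""
    r = tube_radius(kappa, k_par, k_perp, c_par, c_perp)
    return ((k_par * c_par - k_perp * c_perp) * r + k_perp) / (k_par + k_perp)

# Fig. 2 parameters (units of k_B T): kappa = 0, kappa_par = 35,
# kappa_perp = 25, c_par = -0.05, c_perp = 1.
# The radius is 1/0.95 ~ 1.05 and the r.h.s. is negative, so the
# boundary minimum phi = pi/2 is selected.
```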
Fig.\ref{fig.onlykper} shows that, even without any intrinsic curvature of the adhering filaments
(i.e., $c_{\parallel}=0$), tubes may still emerge due to the bundling interaction among filaments.
As we discuss later, this may be the driving force for tubulation of vesicles coated
with MTs, which, unlike FtsZ, do not have any intrinsic curvature.
\begin{figure}[h]
\centering
\includegraphics[width=2in]{onlykper}
\caption{ (color online) Tubulation even without intrinsic filament curvature, i.e., $c_{\parallel}=0$.
Other parameters are $\kappa=0,\kappa_{\parallel}=\kappa_{\perp}=25, c_{\perp}=1$ and $\epsilon_{LL}=5.0$.}
\label{fig.onlykper}
\end{figure}
In previous work on this model \cite{SunilPRE,Sunilbiopj,SunilMacromolecule} neither volume nor area of
the deformed vesicle was conserved during Monte-Carlo simulation. In this work we ran separate simulations
for fixed volume and fixed area ensembles. We noticed no significant change in the qualitative results between
these two ensembles, except that the simulation with constant area was relatively faster. Experimentally,
the total fluorescence of the membrane-bound FtsZ was found to increase during tubulation
\cite{osawaEMBO}. This could be due to the addition of filaments from the solution at a
higher density on the tubes, or due to the release of entropic membrane folds leading
to an increase in vesicle area, attracting more FtsZ from the solution.
In Osawa's experiments \cite{osawaEMBO} tubes grew indefinitely (see Fig~\ref{fig.expt}e)
as long as GTP was supplied and they shrank when GTP was depleted. In our simulation too
tube growth did not stop when the vesicle volume was kept fixed but the area was unconstrained.
GTP depletion is known to cause two things: a) it switches off the attraction between FtsZ filaments,
as a result of which they unbundle \cite{bundle}, and b) it raises the intrinsic curvature of the FtsZ
filaments \cite{osawaEMBO}. We implement GTP depletion by setting $\kappa_{\perp}$ to zero, which causes
the tubes to shrink. We also raise the value of $c_{\parallel}$ simultaneously, which marginally
increases the undulations on the nearly spherical vesicle.
It can be argued that in a Monte-Carlo simulation only the final equilibrium state has physical
relevance, while the intermediate states may not follow the actual system kinetics. Therefore
we considered ensembles where both the volume and the area of the vesicle were fixed.
This is a typical recipe for simulating, for example, red blood cell shapes \cite{wortis}.
We fixed the area to be about $10\%$ excess over that of the corresponding sphere at a given volume.
This recipe is physically meaningful for our FtsZ case also, because there is a limit to the
maximum amount of area that a vesicle can reserve in the form of membrane folds, beyond which
area-stretching elasticity becomes important. For this ensemble, our system took the same
pathway as before (when the area was not fixed), but the tubulation was not indefinite and the
system reached equilibrium, as in Fig~\ref{fig.expt}d,f
or g. This ensemble becomes particularly important for the MT induced tubulation of deflated
membrane vesicles in the experiments of Keber et al \cite{Bausch}, which we discuss now.
Keber et al.~\cite{Bausch} attempted to mimic the active cell cortex by assembling
arrays of microtubules on the inner surface of a spherical membrane vesicle, along with
a high concentration of kinesin motors. This rendered the MT layer active. MT filaments
showed incessant growth, shrinkage, bundling and sliding motion at the spherical surface.
Four topological nematic defects of charge +1/2 formed at the surface and showed interesting
spatio-temporal dynamics. Upon partial deflation these vesicles deformed into ellipsoidal
shape and four narrow tubes emerged (Fig.6c), with MT bundles inside. The tubes kept changing their positions.
Furthermore, for smaller vesicles only two tubes persisted (Fig.6d) along with shape changing dynamics.
A few groups \cite{onsphere,voigt}, including Keber et al.~\cite{Bausch}, modelled
the active MT-membrane system as nematics on a spherical surface and studied the
effective dynamics of the nematic defects. But so far no study has included vesicle
deformation due to the active MT-motor dynamics.
Although the dynamics of this active system is beyond the scope of our equilibrium
Monte-Carlo simulation, some aspects of this system can still be understood from
equilibrium physics of our model studied at physically relevant parameters. Towards this
we set $c_{\parallel}=0$ in our model, since MTs do not have an intrinsic curvature like FtsZ,
and retain nonzero values of $\kappa_{\parallel}, \kappa_{\perp}, c_{\perp}$ and $\epsilon_{LL}$.
It is well known that polymerization of actin and MT bundles, even in the absence of
activity, can give rise to tubular protrusions \cite{actinmembranePlos,libchaberbuckledMT}.
That implies that membrane tubulation can occur at equilibrium also; however, the dynamics
of the tube and the vesicle has a purely nonequilibrium origin (motor activity). The location of the
tubes in Keber et al.'s experiment \cite{Bausch} coincides with the locations of the +1/2
nematic defects, where the elastic strain in the nematic field is the highest. However, tubulation
also requires extra area. So when the vesicle is taut (i.e., the membrane tension is high), the
MT bundles cannot overcome the membrane tension to push out tubes; instead the bundles
buckle. This effect was seen in Ref\cite{libchaberbuckledMT}. For such an undeformed
spherical vesicle the equilibrium positions of four +1/2 defects are the vertices of a
symmetric tetrahedron, since +1/2 defects repel each other. Due to activity in
Ref\cite{Bausch}, such an equilibrium state turns out to be unstable and
the defects oscillate between a tetrahedral and a planar configuration \cite{Bausch}.
Although we cannot model this instability, our Monte-Carlo simulation at high bending
rigidity ($\kappa$) and at high temperature shows that the defect positions indeed
fluctuate between tetrahedral and planar configurations (see Fig.\ref{fig.tetra}), indicating
the closeness of these two states in the equilibrium energy landscape.
\begin{figure}[h]
\centering
\includegraphics[width=3in]{tetra}
\caption{ (color online) The simplest vesicle shape, at fixed volume, with only
nonzero isotropic bending modulus $\kappa=20$ and weak nematic interaction
$\epsilon _{LL}=1.5$. The four $s=+1/2$ defects (blue solid circles) fluctuate, at
finite temperature, between (a) tetrahedral and (b) planar arrangements. The lines
are the connectors between the centroid and the defects. The numerous small dots
(red) show the vertices of the triangulated mesh outlining the surface of the vesicle.
Here the nematic interaction was chosen weak because at stronger interaction the free
energy barrier between the (a) and (b) states would be higher and could not be overcome
by thermal fluctuations alone. In the active case \cite{Bausch}, active fluctuations
make these states unstable.}
\label{fig.tetra}
\end{figure}
Furthermore, when Keber et al.\ \cite{Bausch} deflate the vesicle by applying hypertonic stress,
it amounts to generating excess area (as the volume reduces) and as a result the vesicle deforms.
To mimic this system we allow about $10\%$ excess area for the vesicle as before,
but switch to relatively stronger nematic and bundling interactions: $\epsilon_{LL}=9$ and $c_{\perp}=1.2$.
In addition, we set $c_{\parallel}=0$ as MT filaments do not have intrinsic curvature.
When started from a sphere, the vesicle grows into an ellipsoid utilising the extra area.
The four +1/2 defects arrange themselves into two pairs, one pair each migrating approximately
to the opposite ends of the major axis. The strong nematic interaction ensures that more
defects are not nucleated, as defects possess high elastic energy. Subsequently, four
tubes emerge from the four defects (Fig.~\ref{fig.baush}a). However, as our system does not have any active
dynamics, the tubes do not change their positions.
Interestingly, at these same parameter values, when we reduce the volume of the vesicle
further to one third of its value (with a corresponding reduction in area), only two tubes emerged
(see Fig.~\ref{fig.baush}b). This occurred via merging of the defect pairs into one +1 defect at each
end of the major axis. The corresponding size difference in the experimental figures
is indicated in Fig.~\ref{fig.baush}c and d.
In our equilibrium model the filaments cannot slide/translate, unlike in the active case of
Keber et al.\ \cite{Bausch}; however, the defects can move due to rearrangement of the nematics.
As mentioned earlier, the areal density of nematics can be non-uniform in our model,
because although there is one filament per vertex, the areal density of vertices is
higher on the tubes than in the valleys. This could very well be the case in the
experiment. For example, a parallel arrangement of filaments on the tubes, along the
tube axis, will produce a denser filament coverage than that in the valleys. This is
consistent with the higher fluorescent intensity from the tubes reported in Osawa's
experiments \cite{osawaEMBO}.
\begin{figure}[h]
\centering
\includegraphics[width=3in]{bausch}
\caption{ (color online) Effect of excess area: at fixed volume, the area is fixed at $10\%$
excess over that of the sphere of equal volume.
(a) and (b) model the partially deflated vesicles (c and d) of Ref.~\cite{Bausch} (with permission), where
initially spherical vesicles deformed into ellipsoidal shapes due to the excess available area.
(d) had a smaller volume than (c), which resulted in half the number of tubes in (d). Model parameters
are the same for (a) and (b): $\kappa_0=0, \kappa_{\parallel}=25,\kappa_{\perp}=20,
c_{\parallel}=0, c_{\perp}=1.2, \epsilon _{LL}=9$, except that the volume of (b) is 1/3-rd of (a).
Despite the presence of the bundling interaction $c_{\perp}$, the strong nematic interaction allowed only a finite
number of defects (and hence tubes), as defects cost high elastic energy.}
\label{fig.baush}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=2.5in]{Fig3}
\caption{ (color online) Arrangement of nematic defects on a tubulating vesicle. The red circles
denote $s=-1/2$ defects, which encircle the aster-like $s=+1$ defects. The parameters
are the same as in Fig.~\ref{fig.tubes}. }
\label{fig.theorem}
\end{figure}
Nematic defects turn out to be hot spots of activity on the membrane. In our
simulation tubes grow from sites of nematic defects with a positive defect charge.
The charge $s$ of a nematic defect is defined as the total amount of rotation a nematic
director undergoes as a closed loop is traversed around a defect.
In Keber et al.\ \cite{Bausch}, defect sites on the ellipsoid-shaped vesicle constitute
weak spots that encourage the growth of MT bundles, which in turn induce tubulation.
Ladoux's group \cite{julia} has shown cell death and extrusion occurring predominantly
at the sites of $s=+1/2$ defects, in nearly 2D epithelial tissue.
It turns out that it is energetically favourable to co-localize nematic defects
and high membrane curvature. However, the in-plane arrangement of nematic defects on a
closed surface must obey
Poincar\'e's index theorem (popularly known as the hairy ball theorem), which states
that the total topological charge ($s$) of all defects on a sphere must add up to 2
(more generally, to $2(1-g)$ on a
surface of genus $g$) \cite{prost}. This constrains the number and location of the tubes on the
vesicle surface. What precedes a tube is a defect of charge $s=+1$ with an aster-like
arrangement of nematics.
The membrane deforms into a pointed structure (vertex) around the aster and produces
a tube. The total positive charge increases with the number of vertices, but
this is efficiently nullified by the local arrangement of $s=-1/2$ defects around each vertex.
Each vertex is surrounded by typically five or six, and seldom seven, other vertices.
Accordingly, the central vertex forms five, six or seven triangles with the surrounding vertices.
Each triangle forms a concave valley and hosts a $s=-1/2$ defect
(red circles in Fig.~\ref{fig.theorem}). But each $s=-1/2$ defect is shared by three $s=+1$ defects.
To compute total defect charge on the vesicle, we consider contributions from
polygons formed by the $-1/2$ defects (red circles in Fig.~\ref{fig.theorem}).
The net charge of a pentagon is $1 + 5\times(-1/2)\times (1/3) = 1/6$,
while that of a hexagon is $1 + 6\times(-1/2)\times (1/3) = 0$, and that of a heptagon is $1 + 7\times(-1/2)\times (1/3) = -1/6$.
One realisation of this pentagon-hexagon arrangement is the minimal soccer-ball
structure, which has 12 pentagons and 20 hexagons, equivalently thirty-two $+1$ and
sixty $-1/2$ defects, with the total charge adding up to 2. The simple picture that
emerges is that each polygon, made of $s= -1/2$ defects, hosts one $s=+1$ defect (tube)
at its centre. So in the framework of the soccer-ball structure the polygonal units are
formed by the $-1/2$ (valley) defects at the vertices. In our initial guess we
considered the hexagonal lattice dual to this, where tubes were at the vertices
of the polygons. However, smaller and larger vesicles have different numbers of
defects, for example, seventy-six $-1/2$ and forty $+1$ defects, still adding up to
2 (in fact Fig.~\ref{fig.theorem} has this structure). The regular arrangement of valleys
and tubes can now be matched with Figs.~\ref{fig.expt}f and ~\ref{fig.expt}g, respectively.
Note from Fig.~\ref{fig.theorem} that each tube ($+1$ defect) typically has five to six
nearest neighbours, while each valley ($-1/2$ defect) has three nearest neighbours.
Charge cancellation among nematic defects on deformed axisymmetric vesicles
has been discussed in Ref.~\cite{iglicSciRep2016}.
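The defect-charge bookkeeping above is simple enough to check mechanically. The following sketch uses exact rational arithmetic and the counts quoted in the text (the function name is ours, introduced only for illustration):

```python
from fractions import Fraction

def polygon_charge(n_sides):
    # one s = +1 defect at the centre; each of the n_sides s = -1/2
    # corner defects is shared by three polygons
    return 1 + n_sides * Fraction(-1, 2) * Fraction(1, 3)

assert polygon_charge(5) == Fraction(1, 6)    # pentagon
assert polygon_charge(6) == 0                 # hexagon
assert polygon_charge(7) == Fraction(-1, 6)   # heptagon

# minimal soccer-ball tiling: 12 pentagons + 20 hexagons
assert 12*polygon_charge(5) + 20*polygon_charge(6) == 2

# counting defects directly; the total must be 2(1-g) = 2 on a sphere (g = 0)
assert 32*1 + 60*Fraction(-1, 2) == 2    # thirty-two +1, sixty -1/2 defects
assert 40*1 + 76*Fraction(-1, 2) == 2    # forty +1, seventy-six -1/2 defects
```

Both defect populations therefore satisfy Poincar\'e's index theorem, as claimed.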
In summary, we showed how filament-bundle-induced tubulation can be modeled by
introducing an induced anisotropic membrane curvature perpendicular to the
filament's alignment.
We showed that, within our model, either of $c_{\parallel}$ and $c_{\perp}$ is
individually capable of causing tubulation. But the corresponding inclination
of the nematic field on the tube surface is different in the two cases.
Furthermore, when the nematic interaction is weak (in the case of FtsZ) the vesicle allows the
formation of many defects, and subsequently many tubes, where the bundling interaction
leads to the maximum energy gain. On the other hand, for MT with strong nematic interaction
only a minimal number of defects and tubes form. Although GTP/ATP hydrolysis is common
to both cases (FtsZ/MT), we emphasize that the FtsZ case is not active in the
conventional sense, because no physical filament movement is generated (in the
absence of motors), unlike the kinesin-driven MT case. Although our model
cannot address the active MT dynamics, our simulations indicate
that some of the features observed in the active-MT-induced vesicle shapes
may have an equilibrium origin. This includes the emergence of the ellipsoidal
shape upon reduction of the vesicle volume and the influence of the vesicle volume
in determining the number of tubes (four versus two).
The novel filament arrangement on the deformed vesicle surface is an interesting outcome
of our numerical investigation and is difficult to predict a priori. This link between filament
arrangement and vesicle shape may be useful for vesicle origami.
Gaurav Kumar would like to acknowledge financial support from CSIR (India).
\section{Introduction} \label{sec:
introduction}
The main purpose of this paper is to prove a Strong Unique Continuation Property at the Boundary (SUCPB) for the Kirchhoff-Love plate equation. In order to introduce the subject of the SUCPB, we give some basic, although coarse, notions.
Let $\mathcal{L}$ be an elliptic operator of order $2m$, $m\in \N$, and let $\Omega$ be an open domain in $\mathbb{R}^N$, $N\geq2$. We say that $\mathcal{L}$ enjoys a SUCPB with respect to the Dirichlet boundary conditions if the following property holds true:
\begin{equation}\label{formulaz-sucpb}
\begin{cases}
\mathcal{L}u=0, \mbox{ in } \Omega, \\
\frac{\partial^ju}{\partial n^j}=0, \mbox{ on } \Gamma, \quad\mbox{ for } j=0, 1, \ldots, m-1, \\
\int_{\Omega\cap B_r(P)}u^2=\mathcal{O}(r^k), \mbox{ as } r\rightarrow 0, \forall k\in \mathbb{N},
\end{cases}\Longrightarrow \quad u\equiv 0 \mbox{ in } \Omega,
\end{equation}
where $\Gamma$ is an open portion (in the induced topology) of $\partial\Omega$, $n$ is the outer unit normal, $P\in\Gamma$ and $B_r(P)$ is the ball of center $P$ and radius $r$.
Similarly, we say that $\mathcal{L}$ enjoys a SUCPB with respect to the set of normal boundary operators $\mathcal{B}_j$, $j\in J$, with $\mathcal{B}_j$ of order $j$, $J\subset \{0, 1, \ldots, 2m-1\}$, $\sharp J = m$, \cite{l:Fo}, if the analogue of \eqref{formulaz-sucpb} holds when the Dirichlet boundary conditions are replaced by
\begin{equation}
\label{BC-generale}
\mathcal{B}_ju=0, \quad \hbox{on } \Gamma, \quad \hbox{for } j\in J.
\end{equation}
The SUCPB has been studied for second order elliptic operators in the last two decades, both in the case of homogeneous Dirichlet, Neumann and Robin boundary conditions, \cite{l:AdEsKe}, \cite{l:AdEs}, \cite{l:ARRV}, \cite{l:ApEsWaZh}, \cite{l:BaGa}, \cite{l:BoWo}, \cite{l:KeWa}, \cite{l:KuNy}, \cite{l:Si}. Although the conjecture that the SUCPB holds true when $\partial\Omega$ is of Lipschitz class has not yet been proved, the SUCPB and the related quantitative estimates are today well understood for second-order elliptic equations.
Starting from the paper \cite{l:AlBeRoVe}, the SUCPB turned out to be a crucial property for proving optimal stability estimates for inverse elliptic boundary value problems with unknown boundaries. Mostly for this reason, the investigation of the SUCPB has been successfully extended to second order parabolic equations \cite{l:CaRoVe}, \cite{l:DcRoVe}, \cite{l:EsFe}, \cite{l:EsFeVe}, \cite{l:Ve1} and to the wave equation with time independent coefficients \cite{l:SiVe}, \cite{l:Ve2}. For completeness, we recall (coarsely) the formulation of inverse boundary value problems with unknown boundaries in the elliptic context.
Assume that $\Omega$ is a bounded domain, with connected boundary $\partial \Omega$ of $C^{1,\alpha}$ class, and that $\partial\Omega$ is the disjoint union of an accessible portion $\Gamma^{(a)}$ and an inaccessible portion $\Gamma^{(i)}$. Given a symmetric, elliptic, Lipschitz matrix-valued function $A$ and $\psi \not\equiv 0$ such that $$\psi(x)=0, \mbox{ on } \Gamma^{(i)},$$
let $u$ be the solution to
\begin{equation*}
\left\{\begin{array}{ll}
\mbox{div}\left(A\nabla u\right)=0, \quad \hbox{in } \Omega,\\
u=\psi, \quad \hbox{on } \partial\Omega.
\end{array}\right.
\end{equation*}
Assuming that one knows
\begin{equation*}
\label{flux}
A\nabla u\cdot\nu,\quad \mbox{on } \Sigma,
\end{equation*}
where $\Sigma$ is an open portion of $\Gamma^{(a)}$, the inverse problem under consideration consists in determining the unknown boundary $\Gamma^{(i)}$. The proof of the uniqueness of $\Gamma^{(i)}$ is quite simple and requires the weak unique continuation property of elliptic operators. On the contrary, the optimal continuous dependence of $\Gamma^{(i)}$ on the Cauchy data $u$, $A\nabla u\cdot\nu$ on $\Sigma$,
which is of logarithmic rate (see \cite{l:DcRo}), requires quantitative estimates of strong unique continuation at the interior and at the boundary, like the three spheres inequality, \cite{l:Ku}, \cite{l:La}, and the doubling inequality, \cite{l:AdEs}, \cite{l:GaLi}.
Inverse problems with unknown boundaries have been studied in linear elasticity theory for elliptic systems \cite{l:mr03}, \cite{l:mr04}, \cite{l:mr09}, and for fourth-order elliptic equations \cite{l:mrv07}, \cite{l:mrv09}, \cite{l:mrv13}. It is clear enough that the unavailability of the SUCPB precludes proving optimal stability estimates for these inverse problems with unknown boundaries.
In spite of the fact that strong unique continuation in the interior for fourth-order elliptic equations of the form
\begin{equation}\label{bilaplacian}
\Delta^2u+\sum_{|\alpha|\leq 3}c_{\alpha} D^{\alpha}u=0
\end{equation}
where $c_{\alpha}\in L^{\infty}(\Omega)$, is nowadays well understood, \cite{l:CoGr}, \cite{l:CoKo}, \cite{l:Ge}, \cite{l:LBo}, \cite{l:LiNaWa}, \cite{l:mrv07}, \cite{l:Sh}, to the authors' knowledge, the SUCPB for equations like \eqref{bilaplacian} has not yet been proved, even for Dirichlet boundary conditions. In this regard it is worthwhile to emphasize that serious difficulties occur in performing the Carleman method (the main method to prove the unique continuation property) for the bi-Laplace operator \textit{near the boundary}; we refer to \cite{l:LeRRob} for a thorough discussion and wide references on the topic.
In the present paper we begin to find results in this direction for the Kirchhoff-Love equation, which describes thin isotropic elastic plates:
\begin{equation}
\label{eq:equazione_piastra-int}
L(v) := {\rm div}\left ({\rm div} \left ( B(1-\nu)\nabla^2 v + B\nu \Delta v I_2 \right ) \right )=0, \qquad\hbox{in
} \Omega\subset\mathbb{R}^2,
\end{equation}
where $v$ represents the transversal displacement, $B$ is the \emph{bending stiffness} and $\nu$ the \emph{Poisson's coefficient} (see \eqref{eq:3.stiffness}--\eqref{eq:3.E_nu} for the precise definitions).
Assuming $B,\nu\in C^4(\overline{\Omega})$ and $\Gamma$ of $C^{6, \alpha}$ class, we prove our main results: a three spheres inequality at the boundary with optimal exponent (see Theorem \ref{theo:40.teo} for the precise statement) and, as a byproduct, the following SUCPB result (see Corollary \ref{cor:SUCP})
\begin{equation}\label{formulaz-sucpb-piastra}
\begin{cases}
Lv=0, \mbox{ in } \Omega, \\
v =\frac{\partial v}{\partial n}=0, \mbox{ on } \Gamma, \\
\int_{\Omega\cap B_r(P)}v^2=\mathcal{O}(r^k), \mbox{ as } r\rightarrow 0, \forall k\in \mathbb{N},
\end{cases}\Longrightarrow \quad v\equiv 0 \mbox{ in } \Omega.
\end{equation}
In our proof, firstly we flatten the boundary $\Gamma$ by introducing a suitable conformal mapping (see Proposition \ref{prop:conf_map}), then we combine a reflection argument (briefly illustrated below) and the Carleman estimate
\begin{equation}
\label{eq:24.4-intr}
\sum_{k=0}^3 \tau^{6-2k}\int\rho^{2k+\epsilon-2-2\tau}|D^kU|^2dxdy\leq C
\int\rho^{6-\epsilon-2\tau}(\Delta^2 U)^2dxdy,
\end{equation}
for every $\tau\geq \overline{\tau}$ and for every $U\in C^\infty_0(B_{\widetilde{R}_0}\setminus\{0\})$, where $0<\epsilon<1$ is fixed and $\rho(x,y)\sim \sqrt{x^2+y^2}$ as $(x,y)\rightarrow (0,0)$, see \cite[Theorem 6.8]{l:mrv07} and here Proposition \ref{prop:Carleman} for the precise statement.
To enter a little more into details, let us outline the main steps of our proof.
a) Since equation \eqref{eq:equazione_piastra-int} can be rewritten in the form
\begin{equation}
\label{eq:equazione_piastra_non_div-intr}
\Delta^2 v= -2\frac{\nabla B}{B}\cdot \nabla\Delta v + q_2(v) \qquad\hbox{in
} \Omega,
\end{equation}
where $q_2$ is a second order operator, the equation resulting after flattening $\Gamma$ by a conformal mapping preserves the same structure of \eqref{eq:equazione_piastra_non_div-intr} and, denoting by $u$ the solution in the new coordinates, we can write
\begin{equation}
\label{eq:15.1a-intro}
\begin{cases}
\Delta^2 u= a\cdot \nabla\Delta u + p_2(u), \qquad\hbox{in
} B_1^+, \\
u(x,0)=u_y(x,0) =0, \quad \forall x\in (-1,1)
\end{cases}
\end{equation}
where $p_2$ is a second order operator.
b) We use the following reflection of $u$, \cite{l:Fa}, \cite{l:Jo}, \cite{l:Sa},
\begin{equation*}
\overline{u}(x,y)=\left\{
\begin{array}{cc}
u(x,y), & \hbox{ in } B_1^+\\
w(x,y)=-[u(x,-y)+2yu_y(x,-y)+y^2\Delta u(x,-y)], & \hbox{ in } B_1^-
\end{array}
\right.
\end{equation*}
which has the advantage of ensuring that $\overline{u}\in H^4(B_1)$ if $u\in H^4(B_1^+)$ (see Proposition \ref{prop:16.1}), and then we apply the Carleman estimate \eqref{eq:24.4-intr} to $\xi \overline{u}$, where $\xi$ is a cut-off function. Nevertheless, a problem still remains, namely:
c) Derivatives of $u$ up to the sixth order occur in the terms on the right-hand side of the Carleman estimate involving negative values of $y$; hence such terms cannot be absorbed in a standard way by the left-hand side. In order to overcome this obstruction, we use the Hardy inequality, \cite{l:HLP34}, \cite{l:T67}, stated in Proposition \ref{prop:Hardy}.
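As an aside, the one-dimensional Hardy inequality $\int_0^\infty (f/x)^2\,dx \le 4\int_0^\infty (f')^2\,dx$ for $f(0)=0$ (we assume here the classical constant-$4$ form; Proposition \ref{prop:Hardy} states the version actually used) can be sanity-checked symbolically on a test function:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = x * sp.exp(-x)  # hypothetical test function with f(0) = 0

# Hardy: int_0^oo (f/x)^2 dx <= 4 * int_0^oo (f')^2 dx
lhs = sp.integrate((f / x)**2, (x, 0, sp.oo))
rhs = 4 * sp.integrate(sp.diff(f, x)**2, (x, 0, sp.oo))

assert lhs == sp.Rational(1, 2)   # int_0^oo e^{-2x} dx
assert rhs == 1                   # 4 * int_0^oo (1-x)^2 e^{-2x} dx
assert lhs <= rhs
```

The point of the inequality in step (c) is precisely to trade the singular weight $1/x$ for one derivative.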
The paper is organized as follows. In Section \ref{sec:
notation} we introduce some notation and definitions and state our main results, Theorem \ref{theo:40.teo} and Corollary \ref{cor:SUCP}. In Section \ref{sec:
flat_boundary} we state Proposition \ref{prop:conf_map}, which introduces the conformal map that
realizes a local flattening of the boundary while preserving the structure of the differential operator.
Section \ref{sec:
Preliminary} contains some auxiliary results which shall be used in the proof of the three spheres inequality in the case of flat boundaries, namely Propositions \ref{prop:16.1} and \ref{prop:19.2}, concerning the reflection w.r.t.\ flat boundaries and its properties, a Hardy inequality (Proposition \ref{prop:Hardy}), the Carleman estimate for the bi-Laplace operator (Proposition \ref{prop:Carleman}), and some interpolation estimates (Lemmas \ref{lem:Agmon} and \ref{lem:intermezzo}).
In Section \ref{sec:3sfere} we establish the three spheres inequality with optimal exponents for
the case of flat boundaries, Proposition \ref{theo:40.prop3}, and then we derive the proof of our main result, Theorem \ref{theo:40.teo}.
Finally, in the Appendix, we give the proof of Proposition \ref{prop:conf_map} and of
the interpolation estimates contained in Lemma \ref{lem:intermezzo}.
\section{Notation} \label{sec:
notation}
We shall generally denote points in $\R^2$ by $x=(x_1,x_2)$ or $y=(y_1,y_2)$, except for
Sections \ref{sec:
Preliminary} and \ref{sec:3sfere} where we rename $x,y$ the coordinates in $\R^2$.
In places we will use equivalently the symbols $D$ and $\nabla$ to denote the gradient of a function. Also we use the multi-index notation.
We shall denote by $B_r(P)$ the disc in $\R^2$ of radius $r$ and
center $P$, by $B_r$ the disk of radius $r$ and
center $O$, by $B_r^+$, $B_r^-$ the hemidiscs in $\R^2$ of radius $r$ and
center $O$ contained in the halfplanes $\R^2_+= \{x_2>0\}$, $\R^2_-= \{x_2<0\}$ respectively, and by $R_{a,b}$ the rectangle $(-a,a)\times(-b,b)$.
Given a matrix $A =(a_{ij})$, we shall denote by $|A|$ its Frobenius norm $|A|=\sqrt{\sum_{i,j}a_{ij}^2}$.
Along our proofs, we shall denote by $C$ a constant which may change {}from line to line.
\begin{definition}
\label{def:reg_bordo} (${C}^{k,\alpha}$ regularity)
Let $\Omega$ be a bounded domain in ${\R}^{2}$. Given $k,\alpha$,
with $k\in\N$, $0<\alpha\leq 1$, we say that a portion $S$ of
$\partial \Omega$ is of \textit{class ${C}^{k,\alpha}$ with
constants $r_{0}$, $M_{0}>0$}, if, for any $P \in S$, there
exists a rigid transformation of coordinates under which we have
$P=0$ and
\begin{equation*}
\Omega \cap R_{r_0,2M_0r_0}=\{x \in R_{r_0,2M_0r_0} \quad | \quad
x_{2}>g(x_1)
\},
\end{equation*}
where $g$ is a ${C}^{k,\alpha}$ function on
$[-r_0,r_0]$
satisfying
\begin{equation*}
g(0)=g'(0)=0,
\end{equation*}
\begin{equation*}
\|g\|_{{C}^{k,\alpha}([-r_0,r_0])} \leq M_0r_0,
\end{equation*}
where
\begin{equation*}
\|g\|_{{C}^{k,\alpha}([-r_0,r_0])} = \sum_{i=0}^k r_0^i\sup_{[-r_0,r_0]}|g^{(i)}|+r_0^{k+\alpha}|g|_{k,\alpha},
\end{equation*}
\begin{equation*}
|g|_{k,\alpha}= \sup_ {\overset{\scriptstyle t,s\in [-r_0,r_0]}{\scriptstyle
t\neq s}}\left\{\frac{|g^{(k)}(t) - g^{(k)}(s)|}{|t-s|^\alpha}\right\}.
\end{equation*}
\end{definition}
We shall consider an isotropic thin elastic plate $\Omega\times \left[-\frac{h}{2},\frac{h}{2}\right]$, having middle plane $\Omega$ and width $h$. Under the Kirchhoff-Love theory, the transversal displacement $v$ satisfies the following fourth-order partial differential equation
\begin{equation}
\label{eq:equazione_piastra}
L(v) := {\rm div}\left ({\rm div} \left ( B(1-\nu)\nabla^2 v + B\nu \Delta v I_2 \right ) \right )=0, \qquad\hbox{in
} \Omega.
\end{equation}
Here the \emph{bending stiffness} $B$ is given by
\begin{equation}
\label{eq:3.stiffness}
B(x)=\frac{h^3}{12}\left(\frac{E(x)}{1-\nu^2(x)}\right),
\end{equation}
and the \emph{Young's modulus} $E$ and the \emph{Poisson's coefficient} $\nu$ can be written in terms of the Lam\'{e} moduli as follows
\begin{equation}
\label{eq:3.E_nu}
E(x)=\frac{\mu(x)(2\mu(x)+3\lambda(x))}{\mu(x)+\lambda(x)},\qquad\nu(x)=\frac{\lambda(x)}{2(\mu(x)+\lambda(x))}.
\end{equation}
We shall make the following strong convexity assumptions on the Lam\'{e} moduli
\begin{equation}
\label{eq:3.Lame_convex}
\mu(x)\geq \alpha_0>0,\qquad 2\mu(x)+3\lambda(x)\geq\gamma_0>0, \qquad \hbox{ in } \Omega,
\end{equation}
where $\alpha_0$, $\gamma_0$ are positive constants.
It is easy to see that equation \eqref{eq:equazione_piastra} can be rewritten in the form
\begin{equation}
\label{eq:equazione_piastra_non_div}
\Delta^2 v= \widetilde{a}\cdot \nabla\Delta v + \widetilde{q}_2(v) \qquad\hbox{in
} \Omega,
\end{equation}
with
\begin{equation}
\label{eq:vettore_a_tilde}
\widetilde{a}=-2\frac{\nabla B}{B},
\end{equation}
\begin{equation}
\label{eq:q_2}
\widetilde{q}_2(v)=-\frac{1}{B}\left[\sum_{i,j=1}^2\partial^2_{ij}\left(B(1-\nu)\right)\partial^2_{ij} v+\Delta(B\nu)\,\Delta v\right].
\end{equation}
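The rewriting \eqref{eq:equazione_piastra_non_div} can be checked symbolically. The following sketch (sympy is assumed available) verifies, for generic smooth $B$, $\nu$, $v$, the identity $L(v)=B\Delta^2 v+2\nabla B\cdot\nabla\Delta v+\sum_{i,j}\partial^2_{ij}(B(1-\nu))\partial^2_{ij}v+\Delta(B\nu)\Delta v$, i.e.\ the expanded form of the lower-order terms collected in \eqref{eq:q_2}:

```python
import sympy as sp

x, y = sp.symbols('x y')
X = (x, y)
B = sp.Function('B')(x, y)     # bending stiffness
nu = sp.Function('nu')(x, y)   # Poisson's coefficient
v = sp.Function('v')(x, y)     # transversal displacement

def lap(f):
    return sum(sp.diff(f, s, 2) for s in X)

# L(v) = div div ( B(1-nu) Hess(v) + B nu (lap v) I_2 )
M = [[B*(1 - nu)*sp.diff(v, a, b) + (B*nu*lap(v) if a == b else 0)
      for b in X] for a in X]
Lv = sum(sp.diff(M[i][j], X[i], X[j]) for i in range(2) for j in range(2))

# Expanded form: B lap^2 v + 2 grad(B) . grad(lap v) + lower-order terms
rhs = (B*lap(lap(v))
       + 2*sum(sp.diff(B, s)*sp.diff(lap(v), s) for s in X)
       + sum(sp.diff(B*(1 - nu), a, b)*sp.diff(v, a, b) for a in X for b in X)
       + lap(B*nu)*lap(v))

assert sp.expand(Lv - rhs) == 0
```

Dividing the identity by $B$ gives \eqref{eq:equazione_piastra_non_div} with $\widetilde{a}$ as in \eqref{eq:vettore_a_tilde}.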
Let
\begin{equation}
\label{eq:Omega_r_0}
\Omega_{r_0} = \left\{ x\in R_{r_0,2M_0r_0}\ |\ x_2>g(x_1) \right\},
\end{equation}
\begin{equation}
\label{eq:Gamma_r_0}
\Gamma_{r_0} = \left\{(x_1,g(x_1))\ |\ x_1\in (-r_0,r_0)\right\},
\end{equation}
with
\begin{equation*}
g(0)=g'(0)=0,
\end{equation*}
\begin{equation}
\label{eq:regol_g}
\|g\|_{{C}^{6,\alpha}([-r_0,r_0])} \leq M_0r_0,
\end{equation}
for some $\alpha\in (0,1]$.
Let
$v\in H^2(\Omega_{r_0})$ satisfy
\begin{equation}
\label{eq:equat_u_tilde}
L(v)= 0, \quad \hbox{ in } \Omega_{r_0},
\end{equation}
\begin{equation}
\label{eq:Diric_u_tilde}
v = \frac{\partial v}{\partial n}= 0, \quad \hbox{ on } \Gamma_{r_0},
\end{equation}
where $L$ is given by \eqref{eq:equazione_piastra} and $n$ denotes the outer unit normal.
Let us assume that the Lam\'{e} moduli $\lambda,\mu$ satisfy the strong convexity condition \eqref{eq:3.Lame_convex} and the following regularity assumptions:
\begin{equation}
\label{eq:C4Lame}
\|\lambda\|_{C^4(\overline{\Omega}_{r_0})}, \|\mu\|_{C^4(\overline{\Omega}_{r_0})}\leq \Lambda_0.
\end{equation}
The regularity assumptions \eqref{eq:3.Lame_convex}, \eqref{eq:regol_g} and \eqref{eq:C4Lame} guarantee that $v\in H^6(\Omega_{r_0})$, see for instance \cite{l:a65}.
\begin{theo} [{\bf Optimal three spheres inequality at the boundary}]
\label{theo:40.teo}
Under the above hypotheses, there exist $c<1$ only depending on $M_0$ and $\alpha$, $C>1$ only depending on $\alpha_0$, $\gamma_0$, $\Lambda_0$, $M_0$, $\alpha$, such that, for every $r_1<r_2<c r_0<r_0$,
\begin{equation}
\label{eq:41.1}
\int_{B_{r_2}\cap \Omega_{r_0}}v^2\leq C\left(\frac{r_0}{r_2}\right)^C\left(\int_{B_{r_1}\cap \Omega_{r_0}}v^2\right)^\theta\left(\int_{B_{r_0}\cap \Omega_{r_0}}v^2\right)^{1-\theta},
\end{equation}
where
\begin{equation}
\label{eq:41.2}
\theta = \frac{\log\left(\frac{cr_0}{r_2}\right)}{\log\left(\frac{r_0}{r_1}\right)}.
\end{equation}
\end{theo}
\begin{cor} [{\bf Quantitative strong unique continuation at the boundary}]
\label{cor:SUCP}
Under the above hypotheses and assuming $\int_{B_{r_0 }\cap\Omega_{r_0}}v^2>0$,
\begin{equation}
\label{eq:suc1}
\int_{B_{r_1 }\cap\Omega_{r_0}}v^2 \geq \left(\frac{r_1}{r_0}\right)^{\frac{\log A}{\log \frac{r_2}{cr_0}}}
\int_{B_{r_0 }\cap\Omega_{r_0}}v^2,
\end{equation}
where
\begin{equation}
\label{eq:suc2}
A= \frac{1}{C}\left(\frac{r_2}{r_0}\right)^C\frac{\int_{B_{r_2 }\cap\Omega_{r_0}}v^2}{\int_{B_{r_0 }\cap\Omega_{r_0}}v^2}<1,
\end{equation}
$c<1$ and $C>1$ being the constants appearing in Theorem \ref{theo:40.teo}.
\end{cor}
\begin{proof}
Reassembling the terms in \eqref{eq:41.1}, it is straightforward to obtain \eqref{eq:suc1}-\eqref{eq:suc2}. The SUCPB follows immediately.
\end{proof}
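The rearrangement can be traced with hypothetical numbers: in the equality case of \eqref{eq:41.1} one has $A=\big(\int_{B_{r_1}\cap\Omega_{r_0}}v^2/\int_{B_{r_0}\cap\Omega_{r_0}}v^2\big)^{\theta}$, and \eqref{eq:suc1} holds with equality. A minimal numerical sketch (all values below are illustrative, not taken from the paper):

```python
import math

# illustrative values of the constants and radii
C, c = 2.0, 0.5
r0, r2, r1 = 1.0, 0.3, 0.05
theta = math.log(c*r0/r2) / math.log(r0/r1)   # exponent (41.2)

I0 = 1.0                     # int over B_{r0} cap Omega_{r0}
I1 = 1e-6                    # int over B_{r1} cap Omega_{r0}
# equality case of the three spheres inequality (41.1):
I2 = C * (r0/r2)**C * I1**theta * I0**(1 - theta)

A = (1.0/C) * (r2/r0)**C * I2 / I0            # definition (suc2)
assert A < 1

# (suc1) then holds with equality:
rhs = (r1/r0)**(math.log(A) / math.log(r2/(c*r0))) * I0
assert abs(I1 - rhs) / I1 < 1e-9
```

The general case follows since \eqref{eq:41.1} only improves the inequality.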
\section{Reduction to a flat boundary} \label{sec:
flat_boundary}
The following Proposition introduces a conformal map which flattens the boundary
$\Gamma_{r_0}$ and preserves the structure of equation \eqref{eq:equazione_piastra_non_div}.
\begin{prop} [{\bf Conformal mapping}]
\label{prop:conf_map}
Under the hypotheses of Theorem \ref{theo:40.teo}, there exists an injective, sense-preserving, differentiable map
\begin{equation*}
\Phi=(\varphi,\psi):[-1,1]\times[0,1]\rightarrow \overline{\Omega}_{r_0}
\end{equation*}
which is conformal, and it satisfies
\begin{equation}
\label{eq:9.assente}
\Phi((-1,1)\times(0,1))\supset B_{\frac{r_0}{K}}(0)\cap \Omega_{r_0},
\end{equation}
\begin{equation}
\label{eq:9.2b}
\Phi([-1,1]\times\{0\})= \left\{ (x_1,g(x_1))\ |\ x_1\in [-r_1,r_1]\right\},
\end{equation}
\begin{equation}
\label{eq:9.2a}
\Phi(0,0)= (0,0),
\end{equation}
\begin{equation}
\label{eq:gradPhi}
\frac{c_0r_0}{2C_0}\leq |D\Phi(y)|\leq \frac{r_0}{2}, \quad \forall y\in [-1,1]\times[0,1],
\end{equation}
\begin{equation}
\label{eq:gradPhiInv}
\frac{4}{r_0}\leq |D\Phi^{-1}(x)|\leq \frac{4C_0}{c_0r_0}, \quad\forall x\in \Phi([-1,1]\times[0,1]),
\end{equation}
\begin{equation}
\label{eq:stimaPhi}
|\Phi(y)|\leq \frac{r_0}{2}|y|, \quad \forall y\in [-1,1]\times[0,1],
\end{equation}
\begin{equation}
\label{eq:stimaPhiInv}
|\Phi^{-1}(x)| \leq
\frac{K}{r_0}|x|, \quad \forall x\in \Phi([-1,1]\times[0,1]),
\end{equation}
with
$K>8$, $0<c_0<C_0$ being constants only depending on $M_0$ and $\alpha$.
Letting
\begin{equation}
\label{eq:def_sol_composta}
u(y) = v(\Phi(y)), \quad y\in [-1,1]\times[0,1],
\end{equation}
then $u\in H^6((-1,1)\times(0,1))$ and it satisfies
\begin{equation}
\label{eq:equazione_sol_composta}
\Delta^2 u= a\cdot \nabla\Delta u + q_2(u), \qquad\hbox{in
} (-1,1)\times(0,1),
\end{equation}
\begin{equation}
\label{eq:Dirichlet_sol_composta}
u(y_1,0)= u_{y_2}(y_1,0) =0, \quad \forall y_1\in (-1,1),
\end{equation}
where
\begin{equation*}
a(y) = |\nabla \varphi(y)|^2\left([D\Phi(y)]^{-1}\widetilde{a}(\Phi(y))-2\nabla(|\nabla \varphi(y)|^{-2})\right),
\end{equation*}
$a\in C^3([-1,1]\times[0,1], \R^2)$, $q_2=\sum_{|\alpha|\leq 2}c_\alpha D^\alpha$ is a second order elliptic operator with coefficients $c_\alpha\in C^2([-1,1]\times[0,1])$,
satisfying
\begin{equation}
\label{eq:15.2}
\|a\|_{ C^3([-1,1]\times[0,1], \R^2)}\leq M_1,\quad \|c_\alpha\|_{ C^2([-1,1]\times[0,1])}\leq M_1,
\end{equation}
with $M_1>0$ only depending on $M_0, \alpha, \alpha_0, \gamma_0, \Lambda_0$.
\end{prop}
The explicit construction of the conformal map $\Phi$ and the proof of the above Proposition are postponed to the Appendix.
\section{Preliminary results} \label{sec:
Preliminary}
In this section, for simplicity of notation, we find it convenient to rename $x,y$ the coordinates in $\R^2$ instead of $y_1,y_2$.
Let $u\in H^6(B_1^+)$ be a solution to
\begin{equation}
\label{eq:15.1a}
\Delta^2 u= a\cdot \nabla\Delta u + q_2(u), \qquad\hbox{in
} B_1^+,
\end{equation}
\begin{equation}
\label{eq:15.1b}
u(x,0)=u_y(x,0) =0, \quad \forall x\in (-1,1),
\end{equation}
with $q_2=\sum_{|\alpha|\leq 2}c_\alpha D^\alpha$,
\begin{equation}
\label{eq:15.2_bis}
\|a\|_{ C^3(\overline{B}_1^+, \R^2)}\leq M_1,\quad \|c_\alpha\|_{ C^2(\overline{B}_1^+)}\leq M_1,
\end{equation}
for some positive constant $M_1$.
Let us define the following extension of $u$ to $B_1$ (see \cite{l:Jo})
\begin{equation}
\label{eq:16.1}
\overline{u}(x,y)=\left\{
\begin{array}{cc}
u(x,y), & \hbox{ in } B_1^+\\
w(x,y), & \hbox{ in } B_1^-
\end{array}
\right.
\end{equation}
where
\begin{equation}
\label{eq:16.2}
w(x,y)= -[u(x,-y)+2yu_y(x,-y)+y^2\Delta u(x,-y)].
\end{equation}
\begin{prop}
\label{prop:16.1}
Let
\begin{equation}
\label{eq:16.3}
F:=a\cdot \nabla\Delta u + q_2(u).
\end{equation}
Then $F\in H^2(B_1^+)$, $\overline{u}\in H^4(B_1)$,
\begin{equation}
\label{eq:16.4}
\Delta^2 \overline{u} = \overline{F},\quad \hbox{ in } B_1,
\end{equation}
where
\begin{equation}
\label{eq:16.5}
\overline{F}(x,y)=\left\{
\begin{array}{cc}
F(x,y), & \hbox{ in } B_1^+,\\
F_1(x,y), & \hbox{ in } B_1^-,
\end{array}
\right.
\end{equation}
and
\begin{equation}
\label{eq:16.6}
F_1(x,y)= -[5F(x,-y)-6yF_y(x,-y)+y^2\Delta F(x,-y)].
\end{equation}
\end{prop}
\begin{proof}
Throughout this proof, we understand $(x,y)\in B_1^-$. It is easy to verify that
\begin{equation}
\label{eq:17.1}
\Delta^2 w(x,y)= -[5F(x,-y)-6yF_y(x,-y)+y^2\Delta F(x,-y)]=F_1(x,y).
\end{equation}
Moreover, by \eqref{eq:15.1b} and \eqref{eq:16.2},
\begin{equation}
\label{eq:17.2}
w(x,0)= -u(x,0) =0, \quad \forall x\in (-1,1).
\end{equation}
By differentiating \eqref{eq:16.2} w.r.t. $y$, we have
\begin{equation}
\label{eq:17.3bis}
w_y(x,y)= -[u_y(x,-y)-2yu_{yy}(x,-y)+2y\Delta u(x,-y)-y^2(\Delta u_y)(x,-y)],
\end{equation}
so that, by \eqref{eq:15.1b},
\begin{equation}
\label{eq:17.3}
w_y(x,0)= -u_y(x,0) =0, \quad \forall x\in (-1,1).
\end{equation}
Moreover,
\begin{equation}
\label{eq:17.6}
\Delta w(x,y)= -[3 \Delta u(x,-y)-4u_{yy}(x,-y)-2y(\Delta u_y)(x,-y)+y^2(\Delta^2 u)(x,-y)],
\end{equation}
so that, recalling \eqref{eq:15.1b}, we have that, for every $x\in (-1,1)$,
\begin{multline}
\label{eq:17.4}
\Delta w(x,0)= -[3 \Delta u(x,0)-4u_{yy}(x,0)]= u_{yy}(x,0)
= \Delta u (x,0).
\end{multline}
By differentiating \eqref{eq:17.6} w.r.t. $y$, we have
\begin{multline}
\label{eq:18.1}
(\Delta w_y)(x,y)= -[-5 (\Delta u_y)(x,-y)+4u_{yyy}(x,-y)+\\
+
2y(\Delta u_{yy})(x,-y)
+2y(\Delta^2 u)(x,-y)-y^2(\Delta^2 u_y)(x,-y)],
\end{multline}
so that, taking into account \eqref{eq:15.1b}, it follows that, for every $x\in (-1,1)$,
\begin{multline}
\label{eq:17.5}
(\Delta w_y)(x,0)= -[-5 (\Delta u_y)(x,0)+4u_{yyy}(x,0)] =\\
=-[-5 u_{yxx}(x,0)
- u_{yyy}(x,0)] = u_{yyy}(x,0) = (\Delta u_y)(x,0).
\end{multline}
By \eqref{eq:17.2} and \eqref{eq:17.3}, we have that $\overline{u}\in H^2(B_1)$.
Let $\varphi\in C^\infty_0(B_1)$ be a test function. Then, integrating by parts and using \eqref{eq:17.1}, \eqref{eq:17.4}, \eqref{eq:17.5}, we have
\begin{multline}
\label{eq:18.2}
\int_{B_1}\Delta \overline{u} \Delta\varphi = \int_{B_1^+}\Delta u \Delta\varphi
+\int_{B_1^-}\Delta w \Delta\varphi=\\
=-\int_{-1 }^1 \Delta u(x,0)\varphi_y(x,0)+\int_{-1 }^1 (\Delta u_y)(x,0)\varphi(x,0)
+\int_{B_1^+}(\Delta^2 u) \varphi +\\
+\int_{-1 }^1 \Delta w(x,0)\varphi_y(x,0)-\int_{-1 }^1 (\Delta w_y)(x,0)\varphi(x,0)
+\int_{B_1^-}(\Delta^2 w) \varphi=\\
=\int_{B_1^+}F \varphi+\int_{B_1^-}F_1 \varphi
=\int_{B_1}\overline{F} \varphi.
\end{multline}
Therefore
\begin{equation*}
\int_{B_1}\Delta \overline{u} \Delta\varphi
=\int_{B_1}\overline{F} \varphi, \quad \forall \varphi \in C^\infty_0(B_1),
\end{equation*}
so that \eqref{eq:16.4} holds and, by interior regularity estimates, $\overline{u}\in H^4(B_1)$.
\end{proof}
{}From now on, we shall denote by $P_k$, for $k\in \N$, $0\leq k\leq 3$, any differential operator of the form
\begin{equation*}
\sum_{|\alpha|\leq k}c_\alpha(x)D^\alpha,
\end{equation*}
with $\|c_\alpha\|_{L^\infty}\leq cM_1$, where $c$ is an absolute constant.
\begin{prop}
\label{prop:19.2}
For every $(x,y)\in B_1^-$, we have
\begin{equation}
\label{eq:19.1}
F_1(x,y)= H(x,y)+(P_2(w))(x,y)+(P_3(u))(x,-y),
\end{equation}
where
\begin{multline}
\label{eq:19.2}
H(x,y)= 6\frac{a_1}{y}(w_{yx}(x,y)+u_{yx}(x,-y))+\\
+6\frac{a_2}{y}(-w_{yy}(x,y)+u_{yy}(x,-y))
-\frac{12a_2}{y}u_{xx}(x,-y),
\end{multline}
and where $a_1,a_2$ are the components of the vector $a$.
Moreover, for every $x\in (-1,1)$,
\begin{equation}
\label{eq:23.1}
w_{yx}(x,0)+u_{yx}(x,0)=0,
\end{equation}
\begin{equation}
\label{eq:23.2}
-w_{yy}(x,0)+u_{yy}(x,0)=0,
\end{equation}
\begin{equation}
\label{eq:23.3}
u_{xx}(x,0)=0.
\end{equation}
\end{prop}
\begin{proof}
As before, we understand $(x,y)\in B_1^-$.
Recalling \eqref{eq:16.2} and \eqref{eq:16.3}, it is easy to verify that
\begin{equation}
\label{eq:19.3}
F(x,-y)= (P_3(u))(x,-y),
\end{equation}
\begin{equation}
\label{eq:20.1}
-6yF_y(x,-y)= -6y(a\cdot \nabla \Delta u_y)(x,-y)+(P_3(u))(x,-y).
\end{equation}
Next, let us prove that
\begin{equation}
\label{eq:20.2}
y^2\Delta F(x,-y)= (P_2(w))(x,y)+(P_3(u))(x,-y).
\end{equation}
By denoting for simplicity $\partial_1 =\frac{\partial}{\partial x}$,
$\partial_2 =\frac{\partial}{\partial y}$, we have that
\begin{multline}
\label{eq:20.3}
y^2\Delta F(x,-y)= y^2(a_j\partial_j\Delta^2 u + 2\nabla a_j\cdot \nabla \partial_j\Delta u + \Delta a_j\partial_j\Delta u)(x,-y)+y^2(\Delta(q_2(u)))(x,-y)=\\
=y^2\left(a_j\partial_j(a\cdot \nabla \Delta u+q_2 (u))\right)(x,-y)+
2y^2(\nabla a_j\cdot\nabla\partial_j \Delta u)(x,-y)+\\
+y^2(\Delta(q_2(u)))(x,-y)+y^2(P_3(u))(x,-y)=\\
=y^2(a_j a\cdot \nabla \Delta \partial_j u)(x,-y)+\\
+2y^2(\nabla a_j\cdot\nabla\partial_j \Delta u)(x,-y)
+y^2(\Delta(q_2(u)))(x,-y)+y^2(P_3(u))(x,-y).
\end{multline}
By \eqref{eq:16.2}, we have
\begin{equation*}
y^2\Delta u(x,-y)=-w(x,y)-u(x,-y)-2yu_y(x,-y),
\end{equation*}
obtaining
\begin{multline}
\label{eq:21.1}
y^2(a_j a\cdot \nabla \partial_j\Delta u)(x,-y)=
(a_j a\cdot \nabla \partial_j(y^2\Delta u))(x,-y)+
(P_3(u))(x,-y)=\\
=(P_2(w))(x,y)+(P_3(u))(x,-y).
\end{multline}
Similarly, we can compute
\begin{equation}
\label{eq:21.2}
2y^2(\nabla a_j \cdot \nabla \partial_j\Delta u)(x,-y)=
(P_2(w))(x,y)+(P_3(u))(x,-y),
\end{equation}
\begin{equation}
\label{eq:21.3}
y^2(\Delta q_2(u))(x,-y)=
(P_2(w))(x,y)+(P_3(u))(x,-y).
\end{equation}
Therefore, \eqref{eq:20.2} follows {}from \eqref{eq:20.3}--\eqref{eq:21.3}.
{}From \eqref{eq:16.6}, \eqref{eq:19.3}--\eqref{eq:20.2}, we have
\begin{equation}
\label{eq:21.4}
F_1(x,y)=6y(a\cdot \nabla\Delta u_y)(x,-y)
+(P_2(w))(x,y)+(P_3(u))(x,-y).
\end{equation}
We have that
\begin{equation}
\label{eq:21.5}
6y(a\cdot \nabla\Delta u_y)(x,-y)=
6y(a_1\Delta u_{xy})(x,-y)+6y(a_2\Delta u_{yy})(x,-y).
\end{equation}
By \eqref{eq:16.2}, we have
\begin{equation}
\label{eq:22.1}
w_{yx}(x,y)=-u_{yx}(x,-y)+2yu_{yyx}(x,-y)-2y(\Delta u_{x})(x,-y)
+y^2(\Delta u_{yx})(x,-y),
\end{equation}
so that
\begin{equation}
\label{eq:22.2}
y(\Delta u_{yx})(x,-y)=\frac{1}{y}(w_{yx}(x,y)+u_{yx}(x,-y))+(P_3(u))(x,-y).
\end{equation}
Again by \eqref{eq:16.2}, we have
\begin{multline}
\label{eq:22.3}
w_{yy}(x,y)=\\
=3u_{yy}(x,-y)-2(\Delta u)(x,-y)-2y\left(u_{yyy}(x,-y)-2(\Delta u_y)(x,-y)\right)
-y^2(\Delta u_{yy})(x,-y)=\\
=u_{yy}(x,-y)-2u_{xx}(x,-y)-y^2(\Delta u_{yy})(x,-y)+y(P_3(u))(x,-y),
\end{multline}
so that
\begin{equation}
\label{eq:22.4}
y(\Delta u_{yy})(x,-y)=\frac{1}{y}(-w_{yy}(x,y)+u_{yy}(x,-y)-2u_{xx}(x,-y))+(P_3(u))(x,-y).
\end{equation}
Therefore \eqref{eq:19.1}--\eqref{eq:19.2} follow by \eqref{eq:21.4}, \eqref{eq:21.5}, \eqref{eq:22.2} and \eqref{eq:22.4}.
The identity \eqref{eq:23.1} is an immediate consequence of \eqref{eq:22.1} and \eqref{eq:15.1b}.
By \eqref{eq:15.1b}, we have \eqref{eq:23.3} and
by \eqref{eq:22.3} and \eqref{eq:23.3},
\begin{equation*}
-w_{yy}(x,0)+ u_{yy}(x,0) =2 u_{xx}(x,0) =0.
\end{equation*}
\end{proof}
For the proof of the three spheres inequality at the boundary we shall use the following Hardy inequality (\cite[\S 7.3, p. 175]{l:HLP34}); for a proof see also \cite{l:T67}.
\begin{prop} [{\bf Hardy's inequality}]
\label{prop:Hardy}
Let $f$ be an absolutely continuous function defined in $[0,+\infty)$, such that
$f(0)=0$. Then
\begin{equation}
\label{eq:24.1}
\int_0^{+\infty} \frac{f^2(t)}{t^2}dt\leq 4 \int_0^{+\infty} (f'(t))^2dt.
\end{equation}
\end{prop}
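In the sequel, we shall apply Hardy's inequality on the negative half-line: if $f$ is absolutely continuous in $(-\infty,0]$ and $f(0)=0$, then, by the change of variables $t=-y$ in \eqref{eq:24.1},
\begin{equation*}
\int_{-\infty}^{0} \frac{f^2(y)}{y^2}\,dy\leq 4 \int_{-\infty}^{0} (f'(y))^2\,dy.
\end{equation*}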
Another basic result we need to derive the three spheres inequality at the boundary is the following Carleman estimate, which was obtained in \cite[Theorem 6.8]{l:mrv07}.
\begin{prop} [{\bf Carleman estimate}]
\label{prop:Carleman}
Let $\epsilon\in(0,1)$. Let us define
\begin{equation}
\label{eq:24.2}
\rho(x,y) = \varphi\left(\sqrt{x^2+y^2}\right),
\end{equation}
where
\begin{equation}
\label{eq:24.3}
\varphi(s) = s\exp\left(-\int_0^s \frac{dt}{t^{1-\epsilon}(1+t^\epsilon)}\right).
\end{equation}
Then there exist $\overline{\tau}>1$, $C>1$, $\widetilde{R}_0\leq 1$, only depending on $\epsilon$, such that
\begin{equation}
\label{eq:24.4}
\sum_{k=0}^3 \tau^{6-2k}\int\rho^{2k+\epsilon-2-2\tau}|D^kU|^2dxdy\leq C
\int\rho^{6-\epsilon-2\tau}(\Delta^2 U)^2dxdy,
\end{equation}
for every $\tau\geq \overline{\tau}$ and for every $U\in C^\infty_0(B_{\widetilde{R}_0}\setminus\{0\})$.
\end{prop}
\begin{rem}
\label{rem:stima_rho}
Let us notice that, for $0\leq s\leq 1$,
\begin{equation*}
e^{-\frac{1}{\epsilon}}s\leq \varphi(s)\leq s,
\end{equation*}
\begin{equation}
\label{eq:stima_rho}
e^{-\frac{1}{\epsilon}}\sqrt{x^2+y^2}\leq \rho(x,y)\leq \sqrt{x^2+y^2}.
\end{equation}
\end{rem}
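Indeed, an elementary computation gives
\begin{equation*}
\int_0^s \frac{dt}{t^{1-\epsilon}(1+t^\epsilon)}=\frac{1}{\epsilon}\log(1+s^\epsilon),
\qquad \hbox{so that}\qquad
\varphi(s)=\frac{s}{(1+s^\epsilon)^{1/\epsilon}},
\end{equation*}
and the bounds above follow since $1\leq (1+s^\epsilon)^{1/\epsilon}\leq 2^{1/\epsilon}\leq e^{1/\epsilon}$ for $0\leq s\leq 1$.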
We shall need also the following interpolation estimates.
\begin{lem}
\label{lem:Agmon}
Let $0<\epsilon\leq 1$, $m\in \N$, $m\geq 2$, and $j\in\N$, $1\leq j\leq m-1$. There exists a constant
$C_{m,j}$, only depending on $m$ and $j$, such that for every $v\in H^m(B_r^+)$,
\begin{equation}
\label{eq:3a.2}
r^j\|D^jv\|_{L^2(B_r^+)}\leq C_{m,j}\left(\epsilon r^m\|D^mv\|_{L^2(B_r^+)}
+\epsilon^{-\frac{j}{m-j}}\|v\|_{L^2(B_r^+)}\right).
\end{equation}
\end{lem}
See for instance \cite[Theorem 3.3]{l:a65}.
\begin{lem}
\label{lem:intermezzo}
Let $u\in H^6(B_1^+)$ be a solution to \eqref{eq:15.1a}--\eqref{eq:15.1b}, with $a$ and $q_2$ satisfying
\eqref{eq:15.2_bis}. For every $r$, $0<r<1$, we have
\begin{equation}
\label{eq:12a.2}
\|D^hu\|_{L^2(B_{\frac{r}{2}}^+)}\leq \frac{C}{r^h}\|u\|_{L^2(B_r^+)}, \quad \forall
h=1, ..., 6,
\end{equation}
where $C$ is a constant only depending on $\alpha_0$, $\gamma_0$ and $\Lambda_0$.
\end{lem}
The proof of the above result is postponed to the Appendix.
\section{Three spheres inequality at the boundary and proof of the main theorem} \label{sec:3sfere}
\begin{theo} [{\bf Optimal three spheres inequality at the boundary - flat boundary case}]
\label{theo:40.prop3}
Let $u\in H^6(B_1^+)$ be a solution to \eqref{eq:15.1a}--\eqref{eq:15.1b}, with $a$ and $q_2$ satisfying
\eqref{eq:15.2_bis}. Then there exist $\gamma\in (0,1)$, only depending on $M_1$, and an absolute constant $C>0$ such that, for every $r<R<\frac{R_0}{2}<R_0<\gamma$,
\begin{equation}
\label{eq:40.1}
R^{2\epsilon}\int_{B_R^+}u^2\leq C(M_1^2+1)\left(\frac{R_0/2}{R}\right)^C\left(\int_{B_r^+}u^2\right)^{\widetilde{\theta}}\left(\int_{B_{R_0}^+}u^2\right)^{1-\widetilde{\theta}},
\end{equation}
where
\begin{equation}
\label{eq:39.1}
\widetilde{\theta} = \frac{\log\left(\frac{R_0/2}{R}\right)}{\log\left(\frac{R_0/2}{r/4}\right)}.
\end{equation}
\end{theo}
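Let us notice that, since $\frac{r}{4}<R<\frac{R_0}{2}$, we have
\begin{equation*}
0<\log\left(\frac{R_0/2}{R}\right)<\log\left(\frac{R_0/2}{r/4}\right),
\end{equation*}
so that the exponent $\widetilde{\theta}$ defined in \eqref{eq:39.1} satisfies $0<\widetilde{\theta}<1$.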
\begin{proof}
Let $\epsilon \in (0,1)$ be fixed, for instance $\epsilon=\frac{1}{2}$; it is however convenient to keep the parameter $\epsilon$ in the calculations. Throughout this proof, $C$ shall denote a positive constant which may change {}from line to line.
Let $R_0\in (0,\widetilde{R}_0)$ to be chosen later, where $\widetilde{R}_0$ has been introduced in Proposition \ref{prop:Carleman}, and let
\begin{equation}
\label{eq:25.1}
0<r<R<\frac{R_0}{2}.
\end{equation}
Let $\eta\in C^\infty_0((0,1))$ be such that
\begin{equation}
\label{eq:25.2}
0\leq \eta\leq 1,
\end{equation}
\begin{equation}
\label{eq:25.3}
\eta=0, \quad \hbox{ in }\left(0,\frac{r}{4}\right)\cup \left(\frac{2}{3}R_0,1\right),
\end{equation}
\begin{equation}
\label{eq:25.4}
\eta=1, \quad \hbox{ in }\left[\frac{r}{2}, \frac{R_0}{2}\right],
\end{equation}
\begin{equation}
\label{eq:25.6}
\left|\frac{d^k\eta}{dt^k}(t)\right|\leq C r^{-k}, \quad \hbox{ in }\left(\frac{r}{4}, \frac{r}{2}\right),\quad\hbox{ for } 0\leq k\leq 4,
\end{equation}
\begin{equation}
\label{eq:25.7}
\left|\frac{d^k\eta}{dt^k}(t)\right|\leq C R_0^{-k}, \quad \hbox{ in }\left(\frac{R_0}{2}, \frac{2}{3}R_0\right),\quad\hbox{ for } 0\leq k\leq 4.
\end{equation}
Let us define
\begin{equation}
\label{eq:25.5}
\xi(x,y)=\eta(\sqrt{x^2+y^2}).
\end{equation}
By a density argument, we may apply the Carleman estimate \eqref{eq:24.4} to $U=\xi \overline{u}$, where $\overline{u}$ has been defined in \eqref{eq:16.1}, obtaining
\begin{multline}
\label{eq:26.1}
\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0}^+}\rho^{2k+\epsilon-2-2\tau}|D^k(\xi u)|^2
+\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0}^-}\rho^{2k+\epsilon-2-2\tau}|D^k(\xi w)|^2\leq \\
\leq C
\int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}|\Delta^2(\xi u)|^2+
C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}|\Delta^2(\xi w)|^2,
\end{multline}
for $\tau\geq \overline{\tau}$, with $C$ an absolute constant.
By \eqref{eq:25.2}--\eqref{eq:25.5} we have
\begin{multline}
\label{eq:26.2}
|\Delta^2(\xi u)|\leq \xi|\Delta^2 u|+C\chi_{B_{r/2}^+\setminus B_{r/4}^+}
\sum_{k=0}^3 r^{k-4}|D^k u|+ C\chi_{B_{2R_0/3}^+\setminus B_{R_0/2}^+}\sum_{k=0}^3 R_0^{k-4}|D^k u|,
\end{multline}
\begin{multline}
\label{eq:26.3}
|\Delta^2(\xi w)|\leq \xi|\Delta^2 w|+C\chi_{B_{r/2}^-\setminus B_{r/4}^-}
\sum_{k=0}^3 r^{k-4}|D^k w|+ C\chi_{B_{2R_0/3}^-\setminus B_{R_0/2}^-}\sum_{k=0}^3 R_0^{k-4}|D^k w|.
\end{multline}
Let us set
\begin{multline}
\label{eq:27.1}
J_0 =\int_{B_{r/2}^+\setminus B_{r/4}^+}\rho^{6-\epsilon-2\tau}
\sum_{k=0}^3 (r^{k-4}|D^k u|)^2+
\int_{B_{r/2}^-\setminus B_{r/4}^-}\rho^{6-\epsilon-2\tau}
\sum_{k=0}^3 (r^{k-4}|D^k w|)^2,
\end{multline}
\begin{multline}
\label{eq:27.2}
J_1 =\int_{B_{2R_0/3}^+\setminus B_{R_0/2}^+}\rho^{6-\epsilon-2\tau}
\sum_{k=0}^3 (R_0^{k-4}|D^k u|)^2+
\int_{B_{2R_0/3}^-\setminus B_{R_0/2}^-}\rho^{6-\epsilon-2\tau}
\sum_{k=0}^3 (R_0^{k-4}|D^k w|)^2.
\end{multline}
By inserting \eqref{eq:26.2}, \eqref{eq:26.3} in \eqref{eq:26.1} we have
\begin{multline}
\label{eq:27.3}
\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0}^+}\rho^{2k+\epsilon-2-2\tau}|D^k(\xi u)|^2
+\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0}^-}\rho^{2k+\epsilon-2-2\tau}|D^k(\xi w)|^2\leq \\
\leq C
\int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}\xi^2|\Delta^2 u|^2+
C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|\Delta^2 w|^2+CJ_0+CJ_1,
\end{multline}
for $\tau\geq \overline{\tau}$, with $C$ an absolute constant.
By \eqref{eq:15.1a} and \eqref{eq:15.2_bis} we can estimate the first term in the right hand side of \eqref{eq:27.3} as follows
\begin{equation}
\label{eq:28.1}
\int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}\xi^2|\Delta^2 u|^2\leq
CM_1^2\int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}\xi^2\sum_{k=0}^3|D^k u|^2.
\end{equation}
By \eqref{eq:17.1}, \eqref{eq:19.1} and by making the change of variables
$(x,y)\rightarrow(x,-y)$ in the integrals involving the function $u(x,-y)$,
we can estimate the second term in the right hand side of \eqref{eq:27.3} as follows
\begin{multline}
\label{eq:28.2}
\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|\Delta^2 w|^2\leq
C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|H(x,y)|^2+\\
+CM_1^2\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2\sum_{k=0}^2|D^k w|^2+
CM_1^2\int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}\xi^2\sum_{k=0}^3|D^k u|^2.
\end{multline}
Now, let us split the integral in the right hand side of \eqref{eq:28.1}, and the second and third integrals in the right hand side of \eqref{eq:28.2}, over the domains of integration $B_{r/2}^\pm\setminus B_{r/4}^\pm$, $B_{R_0/2}^\pm\setminus B_{r/2}^\pm$, $B_{2R_0/3}^\pm\setminus B_{R_0/2}^\pm$, and then let us insert the estimates \eqref{eq:28.1}--\eqref{eq:28.2}, so rewritten, in \eqref{eq:27.3}, obtaining
\begin{multline}
\label{eq:28.4}
\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0}^+}\rho^{2k+\epsilon-2-2\tau}|D^k(\xi u)|^2
+\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0}^-}\rho^{2k+\epsilon-2-2\tau}|D^k(\xi w)|^2\leq \\
\leq
C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|H(x,y)|^2
+CM_1^2\int_{B_{R_0/2}^- \setminus B_{r/2}^-}\rho^{6-\epsilon-2\tau}\sum_{k=0}^2|D^k w|^2+\\+
CM_1^2\int_{B_{R_0/2}^+ \setminus B_{r/2}^+}\rho^{6-\epsilon-2\tau}\sum_{k=0}^3|D^k u|^2
+C(M_1^2+1)(J_0+J_1),
\end{multline}
for $\tau\geq \overline{\tau}$, with $C$ an absolute constant.
\end{proof}
Next, by estimating {}from below the integrals in the left hand side of this last inequality, reducing their domains of integration to $B_{R_0/2}^\pm\setminus B_{r/2}^\pm$, where $\xi=1$, we have
\begin{multline}
\label{eq:29.1}
\sum_{k=0}^3 \int_{B_{R_0/2}^+ \setminus B_{r/2}^+}\tau^{6-2k}
(1-CM_1^2\rho^{8-2\epsilon-2k})\rho^{2k+\epsilon-2-2\tau}|D^k u|^2+\\
+\int_{B_{R_0/2}^- \setminus B_{r/2}^-}\rho^{4+\epsilon-2\tau}|D^3 w|^2
+\sum_{k=0}^2\int_{B_{R_0/2}^- \setminus B_{r/2}^-}\tau^{6-2k}
(1-CM_1^2\rho^{8-2\epsilon-2k})\rho^{2k+\epsilon-2-2\tau}|D^k w|^2
\leq \\
\leq
C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|H(x,y)|^2
+C(M_1^2+1)(J_0+J_1),
\end{multline}
for $\tau\geq \overline{\tau}$, with $C$ an absolute constant.
Recalling \eqref{eq:stima_rho}, we have that, for $k=0,1,2,3$ and for
$R_0\leq R_1:=\min\{\widetilde{R}_0,2(2CM_1^2)^{-\frac{1}{2(1-\epsilon)}}\}$,
\begin{equation}
\label{eq:30.1}
1-CM_1^2\rho^{8-2\epsilon-2k}\geq \frac{1}{2}, \quad \hbox{ in }B_{R_0/2}^\pm,
\end{equation}
so that, inserting \eqref{eq:30.1} in \eqref{eq:29.1}, we have
\begin{multline}
\label{eq:30.3}
\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0/2}^+ \setminus B_{r/2}^+}
\rho^{2k+\epsilon-2-2\tau}|D^k u|^2
+\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0/2}^- \setminus B_{r/2}^-}
\rho^{2k+\epsilon-2-2\tau}|D^k w|^2
\leq \\
\leq
C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|H(x,y)|^2
+C(M_1^2+1)(J_0+J_1),
\end{multline}
for $\tau\geq \overline{\tau}$, with $C$ an absolute constant.
By \eqref{eq:19.2} and \eqref{eq:15.2_bis}, we have that
\begin{equation}
\label{eq:30.4}
\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|H(x,y)|^2\leq CM_1^2(I_1+I_2+I_3),
\end{equation}
with
\begin{equation}
\label{eq:31.0.1}
I_1=\int_{-R_0}^{R_0}\left(\int_{-\infty}^0\left|y^{-1}(w_{yy}(x,y)-
u_{yy}(x,-y))\rho^{\frac{6-\epsilon-2\tau}{2}}\xi\right|^2dy\right)dx,
\end{equation}
\begin{equation}
\label{eq:31.0.2}
I_2=\int_{-R_0}^{R_0}\left(\int_{-\infty}^0\left|y^{-1}(w_{yx}(x,y)+
u_{yx}(x,-y))\rho^{\frac{6-\epsilon-2\tau}{2}}\xi\right|^2dy\right)dx,
\end{equation}
\begin{equation}
\label{eq:31.0.4}
I_3=\int_{-R_0}^{R_0}\left(\int_{-\infty}^0\left|y^{-1}
u_{xx}(x,-y)\rho^{\frac{6-\epsilon-2\tau}{2}}\xi\right|^2dy\right)dx.
\end{equation}
Now, let us see that, for $j=1,2,3$,
\begin{multline}
\label{eq:31.1}
I_j\leq
C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|D^3 w|^2
+C\tau^2\int_{B_{R_0}^-}\rho^{4-\epsilon-2\tau}\xi^2|D^2 w|^2+\\
+C\int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}\xi^2|D^3 u|^2
+C\tau^2\int_{B_{R_0}^+}\rho^{4-\epsilon-2\tau}\xi^2|D^2 u|^2
+C(J_0+J_1),
\end{multline}
for $\tau\geq \overline{\tau}$, with $C$ an absolute constant.
Let us verify \eqref{eq:31.1} for $j=1$, the other cases following by using similar arguments.
By \eqref{eq:23.2}, we can apply Hardy's inequality \eqref{eq:24.1}, obtaining
\begin{multline}
\label{eq:32.2}
\int_{-\infty}^0\left|y^{-1}(w_{yy}(x,y)-
u_{yy}(x,-y))\rho^{\frac{6-\epsilon-2\tau}{2}}\xi\right|^2dy\leq\\
\leq 4\int_{-\infty}^0\left|\partial_y\left[(w_{yy}(x,y)-
u_{yy}(x,-y))\rho^{\frac{6-\epsilon-2\tau}{2}}\xi\right]\right|^2dy\leq\\
\leq 16 \int_{-\infty}^0\left(|w_{yyy}(x,y)|^2 +|u_{yyy}(x,-y)|^2\right)\rho^{6-\epsilon-2\tau}\xi^2dy+\\
+16 \int_{-\infty}^0\left(|w_{yy}(x,y)|^2 +|u_{yy}(x,-y)|^2\right)\left|\partial_y\left(\rho^{\frac{6-\epsilon-2\tau}{2}}\xi\right)\right|^2dy.
\end{multline}
Noticing that
\begin{equation}
\label{eq:32.1}
|\rho_y|=\left|\frac{y}{\sqrt{x^2+y^2}}\varphi'(\sqrt{x^2+y^2})\right|\leq 1,
\end{equation}
we can compute
\begin{multline}
\label{eq:32.3}
\left|\partial_y\left(\rho^{\frac{6-\epsilon-2\tau}{2}}(x,y)\xi(x,y)\right)\right|^2\leq
2|\xi_y|^2\rho^{6-\epsilon-2\tau}+2\left|\left(\frac{6-\epsilon-2\tau}{2}\right)\xi \rho_y\rho^{\frac{4-\epsilon-2\tau}{2}}\right|^2\leq\\
\leq 2\xi_y^2\rho^{6-\epsilon-2\tau}+2\tau^2\rho^{4-\epsilon-2\tau}\xi^2,
\end{multline}
for $\tau\geq \widetilde{\tau}:= \max\{\overline{\tau},3\}$.
By inserting \eqref{eq:32.3} in \eqref{eq:32.2}, by integrating over $(-R_0,R_0)$ and
by making the change of variables
$(x,y)\rightarrow(x,-y)$ in the integrals involving the function $u(x,-y)$, we derive
\begin{multline}
\label{eq:33.0}
I_1\leq C\int_{B_{R_0}^-}\xi^2\rho^{6-\epsilon-2\tau}|w_{yyy}|^2+
C\int_{B_{R_0}^+}\xi^2\rho^{6-\epsilon-2\tau}|u_{yyy}|^2+\\
+C\int_{B_{R_0}^-}\xi_y^2\rho^{6-\epsilon-2\tau}|w_{yy}|^2
+C\int_{B_{R_0}^+}\xi_y^2\rho^{6-\epsilon-2\tau}|u_{yy}|^2+\\
+C\tau^2\int_{B_{R_0}^-}\xi^2\rho^{4-\epsilon-2\tau}|w_{yy}|^2
+C\tau^2\int_{B_{R_0}^+}\xi^2\rho^{4-\epsilon-2\tau}|u_{yy}|^2.
\end{multline}
Recalling \eqref{eq:25.2}--\eqref{eq:25.5}, we find \eqref{eq:31.1} for $j=1$.
Next, by \eqref{eq:30.3}, \eqref{eq:30.4} and \eqref{eq:31.1}, we have
\begin{multline}
\label{eq:33.1}
\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0/2}^+ \setminus B_{r/2}^+}
\rho^{2k+\epsilon-2-2\tau}|D^k u|^2
+\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0/2}^- \setminus B_{r/2}^-}
\rho^{2k+\epsilon-2-2\tau}|D^k w|^2
\leq \\
\leq
CM_1^2\int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}\xi^2|D^3u|^2+
CM_1^2\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|D^3w|^2+\\
+CM_1^2\tau^2\int_{B_{R_0}^+}\rho^{4-\epsilon-2\tau}\xi^2|D^2u|^2
+CM_1^2\tau^2\int_{B_{R_0}^-}\rho^{4-\epsilon-2\tau}\xi^2|D^2w|^2
+C(M_1^2+1)(J_0+J_1),
\end{multline}
for $\tau\geq \widetilde{\tau}$, with $C$ an absolute constant.
Now, let us split the first four integrals in the right hand side of \eqref{eq:33.1} over the domains of integration $B_{r/2}^\pm\setminus B_{r/4}^\pm$, $B_{2R_0/3}^\pm\setminus B_{R_0/2}^\pm$ and $B_{R_0/2}^\pm\setminus B_{r/2}^\pm$, and let us move to the left hand side the integrals over $B_{R_0/2}^\pm\setminus B_{r/2}^\pm$. Recalling \eqref{eq:stima_rho}, we obtain
\begin{multline}
\label{eq:34.1}
\sum_{k=2}^3 \int_{B_{R_0/2}^+ \setminus B_{r/2}^+}
\tau^{6-2k}(1-CM_1^2\rho^{2-2\epsilon})\rho^{2k+\epsilon-2-2\tau}|D^k u|^2+\\
+\sum_{k=2}^3 \int_{B_{R_0/2}^- \setminus B_{r/2}^-}
\tau^{6-2k}(1-CM_1^2\rho^{2-2\epsilon})\rho^{2k+\epsilon-2-2\tau}|D^k w|^2+\\
+\sum_{k=0}^1 \tau^{6-2k}\int_{B_{R_0/2}^+ \setminus B_{r/2}^+}
\rho^{2k+\epsilon-2-2\tau}|D^k u|^2
+\sum_{k=0}^1 \tau^{6-2k}\int_{B_{R_0/2}^- \setminus B_{r/2}^-}
\rho^{2k+\epsilon-2-2\tau}|D^k w|^2
\leq \\
\leq
C(\tau^2M_1^2+1)(J_0+J_1),
\end{multline}
for $\tau\geq \widetilde{\tau}$, with $C$ an absolute constant.
Therefore, for $R_0\leq R_2=\min\{R_1,2(2CM_1^2)^{-\frac{1}{2(1-\epsilon)}}\}$,
it follows that
\begin{multline}
\label{eq:35.1}
\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0/2}^+ \setminus B_{r/2}^+}
\rho^{2k+\epsilon-2-2\tau}|D^k u|^2+
\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0/2}^- \setminus B_{r/2}^-}
\rho^{2k+\epsilon-2-2\tau}|D^k w|^2\leq\\
\leq
C(\tau^2M_1^2+1)(J_0+J_1),
\end{multline}
for $\tau\geq \widetilde{\tau}$, with $C$ an absolute constant.
Let us estimate $J_0$ and $J_1$. {}From \eqref{eq:27.1} and recalling \eqref{eq:stima_rho}, we have
\begin{multline}
\label{eq:36.1}
J_0\leq\left(\frac{r}{4}\right)^{6-\epsilon-2\tau}\left\{
\int_{B^+_{r/2}}\sum_{k=0}^3(r^{k-4}|D^k u|)^2+
\int_{B^-_{r/2}}\sum_{k=0}^3(r^{k-4}|D^k w|)^2
\right\}.
\end{multline}
By \eqref{eq:16.2}, we have that, for $(x,y)\in B^-_{r/2}$ and $k=0,1,2,3$,
\begin{equation}
\label{eq:36.1bis}
|D^k w|\leq C\sum_{h=k}^{2+k}r^{h-k}|(D^h u)(x,-y)|.
\end{equation}
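Indeed, by differentiating \eqref{eq:16.2}, each derivative $D^k w$, $0\leq k\leq 3$, is a linear combination, with coefficients bounded by an absolute constant, of terms of the form
\begin{equation*}
y^{h-k}(D^h u)(x,-y), \quad k\leq h\leq k+2,
\end{equation*}
and $|y|\leq \frac{r}{2}$ in $B^-_{r/2}$, whence \eqref{eq:36.1bis} follows.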
By \eqref{eq:36.1}--\eqref{eq:36.1bis},
by making the change of variables
$(x,y)\rightarrow(x,-y)$ in the integrals involving the function $u(x,-y)$ and by using Lemma \ref{lem:intermezzo}, we get
\begin{multline}
\label{eq:36.2}
J_0\leq C\left(\frac{r}{4}\right)^{6-\epsilon-2\tau}
\sum_{k=0}^5 r^{2k-8}\int_{B^+_{r/2}}|D^k u|^2
\leq C\left(\frac{r}{4}\right)^{-2-\epsilon-2\tau}\int_{B_r^+}|u|^2,
\end{multline}
where $C$ is a constant only depending on $\alpha_0$, $\gamma_0$ and $\Lambda_0$, by Lemma \ref{lem:intermezzo}. Analogously, we obtain
\begin{equation}
\label{eq:37.1}
J_1
\leq C\left(\frac{R_0}{2}\right)^{-2-\epsilon-2\tau}\int_{B_{R_0}^+}|u|^2.
\end{equation}
Let $R$ such that $r<R<\frac{R_0}{2}$. By \eqref{eq:35.1}, \eqref{eq:36.2}, \eqref{eq:37.1}, it follows that
\begin{multline}
\label{eq:37.1bis}
\tau^{6}R^{\epsilon-2-2\tau}\int_{B_{R}^+ \setminus B_{r/2}^+}
|u|^2
\leq\sum_{k=0}^3\tau^{6-2k}\int_{B_{R_0/2}^+ \setminus B_{r/2}^+}
\rho^{2k+\epsilon-2-2\tau}|D^ku|^2\leq\\
\leq C\tau^2 (M_1^2+1)\left[\left(\frac{r}{4}\right)^{-2-\epsilon-2\tau}\int_{B_r^+}|u|^2+
\left(\frac{R_0}{2}\right)^{-2-\epsilon-2\tau}\int_{B_{R_0}^+}|u|^2
\right],
\end{multline}
for $\tau\geq \widetilde{\tau}$, with $C$ an absolute constant.
Since $\tau>1$, we may rewrite the above inequality as follows
\begin{multline}
\label{eq:37.2}
R^{2\epsilon}\int_{B_{R}^+ \setminus B_{r/2}^+}
|u|^2\leq
C(M_1^2+1)\left[\left(\frac{r/4}{R}\right)^{-2-\epsilon-2\tau}\int_{B_r^+}|u|^2+
\left(\frac{R_0/2}{R}\right)^{-2-\epsilon-2\tau}\int_{B_{R_0}^+}|u|^2
\right],
\end{multline}
for $\tau\geq \widetilde{\tau}$, with $C$ an absolute constant.
By adding $R^{2\epsilon}\int_{B_{r/2}^+}|u|^2$ to both sides of \eqref{eq:37.2}, and setting, for $s>0$,
\begin{equation*}
\sigma_s=\int_{B_{s}^+}|u|^2,
\end{equation*}
we obtain
\begin{equation}
\label{eq:38.1}
R^{2\epsilon}\sigma_R\leq
C(M_1^2+1)
\left[\left(\frac{r/4}{R}\right)^{-2-\epsilon-2\tau}\sigma_r+
\left(\frac{R_0/2}{R}\right)^{-2-\epsilon-2\tau}\sigma_{R_0}
\right],
\end{equation}
for $\tau\geq \widetilde{\tau}$, with $C$ an absolute constant.
Let $\tau^*$ be such that
\begin{equation}
\label{eq:38.2}
\left(\frac{r/4}{R}\right)^{-2-\epsilon-2\tau^*}\sigma_r=
\left(\frac{R_0/2}{R}\right)^{-2-\epsilon-2\tau^*}\sigma_{R_0},
\end{equation}
that is
\begin{equation}
\label{eq:38.3}
2+\epsilon+2\tau^*=\frac{\log (\sigma_{R_0}/\sigma_r)}{\log \left(\frac{R_0/2}{r/4}\right)}.
\end{equation}
Let us distinguish two cases:
\begin{enumerate}[i)]
\item
$\tau^*\geq \widetilde{\tau}$,
\item
$\tau^*< \widetilde{\tau}$,
\end{enumerate}
and set
\begin{equation}
\label{eq:39.1bis}
\widetilde{\theta}=\frac{\log \left(\frac{R_0/2}{R}\right)}{\log \left(\frac{R_0/2}{r/4}\right)}.
\end{equation}
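Let us also notice that \eqref{eq:39.1bis} may be equivalently written as
\begin{equation*}
\frac{R_0/2}{R}=\left(\frac{R_0/2}{r/4}\right)^{\widetilde{\theta}},
\qquad
\frac{R}{r/4}=\left(\frac{R_0/2}{r/4}\right)^{1-\widetilde{\theta}},
\end{equation*}
identities which are used in case i).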
In case i), it is possible to choose $\tau = \tau^*$ in \eqref{eq:38.1}, obtaining,
by \eqref{eq:38.2}--\eqref{eq:39.1bis},
\begin{equation}
\label{eq:39.2}
R^{2\epsilon}\sigma_R\leq C(M_1^2+1)\sigma_r^{\widetilde{\theta}}\sigma_{R_0}^{1-\widetilde{\theta}}.
\end{equation}
In case ii), since $\tau^*< \widetilde{\tau}$, {}from \eqref{eq:38.3}, we have
\begin{equation*}
\frac{\log (\sigma_{R_0}/\sigma_r)}{\log \left(\frac{R_0/2}{r/4}\right)}<2+\epsilon+2\widetilde{\tau},
\end{equation*}
so that, multiplying both sides by $\log \left(\frac{R_0/2}{R}\right)$, it follows that
\begin{equation*}
\widetilde{\theta}\log\left(\frac{\sigma_{R_0}}{\sigma_r}\right)<(2+\epsilon+2\widetilde{\tau})\log\left(\frac{R_0/2}{R}\right),
\end{equation*}
and hence
\begin{equation}
\label{eq:39.3}
\sigma_{R_0}^{\widetilde{\theta}}\leq \left(\frac{R_0/2}{R}\right)^{2+\epsilon+2\widetilde{\tau}}\sigma_r^{\widetilde{\theta}}.
\end{equation}
Then it follows trivially that
\begin{equation}
\label{eq:39.4}
R^{2\epsilon}\sigma_R\leq R^{2\epsilon}\sigma_{R_0}\leq
R^{2\epsilon}\left(\frac{R_0/2}{R}\right)^{2+\epsilon+2\widetilde{\tau}}\sigma_r^{\widetilde{\theta}}\sigma_{R_0}^{1-\widetilde{\theta}}.
\end{equation}
Finally, by \eqref{eq:39.2} and \eqref{eq:39.4}, we obtain
\eqref{eq:40.1}.
\begin{proof}[Proof of Theorem \ref{theo:40.teo}]
Let $r_1<r_2<\frac{r_0R_0}{2K}<r_0$, where $R_0$ is chosen such that $R_0<\gamma<1$, $\gamma$ being the constant introduced in Theorem
\ref{theo:40.prop3}, and where $K>1$ is the constant introduced in Proposition
\ref{prop:conf_map}.
Let us define
\begin{equation*}
r=\frac{2r_1}{r_0}, \qquad R= \frac{Kr_2}{r_0}.
\end{equation*}
Recalling that $K>8$, it follows immediately that $r<R<\frac{R_0}{2}$.
Therefore, we can apply \eqref{eq:40.1} with $\epsilon=\frac{1}{2}$ to $u=v\circ\Phi$, obtaining
\begin{equation}
\label{eq:3sfere_u}
\int_{B_R^+}u^2\leq \frac{C}{R^C}\left(\int_{B_r^+}u^2\right)^{\widetilde{\theta}}\left(\int_{B_{R_0}^+}u^2\right)^{1-\widetilde{\theta}},
\end{equation}
with
\begin{equation*}
\widetilde{\theta} = \frac{\log\left(\frac{R_0r_0}{2Kr_2}\right)}{\log\left(\frac{R_0r_0}{r_1}\right)},
\end{equation*}
and $C>1$ only depending on $M_0$, $\alpha$, $\alpha_0$, $\gamma_0$ and $\Lambda_0$.
{}From \eqref{eq:gradPhiInv}, \eqref{eq:stimaPhi}, \eqref{eq:stimaPhiInv}
and noticing that
\begin{equation*}
\widetilde{\theta} \geq \theta:=\frac{\log\left(\frac{R_0r_0}{2Kr_2}\right)}{\log\left(\frac{r_0}{r_1}\right)},
\end{equation*}
we obtain \eqref{eq:41.1}--\eqref{eq:41.2}.
\end{proof}
\section{Appendix} \label{sec:Appendix}
\begin{proof}[Proof of Proposition \ref{prop:conf_map}]
Let us construct a suitable extension of $g$ to $[-2r_0,2r_0]$.
Let $P_6^\pm$ be the Taylor polynomials of $g$ of order $6$ centered at $\pm r_0$
\begin{equation*}
P_6^\pm(x_1)=\sum_{j=0}^6 \frac{g^{(j)}(\pm r_0)}{j!}(x_1-(\pm r_0))^j,
\end{equation*}
and let $\chi\in C^\infty_0(\R)$ be a function satisfying
\begin{equation*}
0\leq\chi\leq 1,
\end{equation*}
\begin{equation*}
\chi=1, \hbox{ for } |x_1|\leq r_0,
\end{equation*}
\begin{equation*}
\chi=0, \hbox{ for } \frac{3}{2}r_0\leq |x_1|\leq 2r_0,
\end{equation*}
\begin{equation*}
|\chi^{(j)}(x_1)|\leq \frac{C}{r_0^j}, \hbox{ for } r_0\leq |x_1|\leq \frac{3}{2}r_0, \forall j\in \N.
\end{equation*}
Let us define
\begin{equation*}
\widetilde{g}=\left\{
\begin{array}{cc}
g, & \hbox{ for } x_1\in [-r_0,r_0],\\
\chi P_6^+, & \hbox{ for } x_1\in [r_0, 2r_0],\\
\chi P_6^-, & \hbox{ for } x_1\in [-2r_0, -r_0].
\end{array}
\right.
\end{equation*}
It is a straightforward computation to verify that
\begin{equation}
\label{eq:3.2}
\widetilde g(x_1)=0, \hbox{ for } \frac{3}{2}r_0\leq |x_1|\leq 2r_0,
\end{equation}
\begin{equation}
\label{eq:3.2bis}
|\widetilde g(x_1)|\leq 2M_0r_0, \hbox{ for } |x_1|\leq 2r_0,
\end{equation}
so that the graph of $\widetilde g$ is contained in $R_{2r_0,2M_0r_0}$ and
\begin{equation}
\label{eq:3.3}
\|\widetilde g \|_{C^{6,\alpha}([-2r_0,2r_0])}\leq CM_0r_0,
\end{equation}
where $C$ is an absolute constant.
Let
\begin{equation}
\label{eq:Omega_r_0_tilde}
\widetilde{\Omega}_{r_0} = \left\{ x\in R_{2r_0,2M_0r_0}\ |\ x_2>\widetilde{g}(x_1)\right\},
\end{equation}
and let $k\in H^1(\widetilde{\Omega}_{r_0} )$ be the solution to
\begin{equation}
\label{eq:3.4}
\left\{ \begin{array}{ll}
\Delta k =0, &
\hbox{in } \widetilde{\Omega}_{r_0},\\
& \\
k_{x_1}(2r_0,x_2) =k_{x_1}(-2r_0,x_2)=0, & \hbox{for } 0\leq x_2\leq 2M_0r_0,\\
& \\
k(x_1,2M_0r_0) =1, & \hbox{for } -2r_0\leq x_1\leq 2r_0,\\
& \\
k(x_1,\widetilde{g}(x_1)) =0, &\hbox{for } -2r_0\leq x_1\leq 2r_0.\\
\end{array}\right.
\end{equation}
Let us notice that $k\in C^{6,\alpha}\left(\overline{\widetilde{\Omega}}_{r_0} \right)$.
Indeed, this regularity is standard away {}from neighborhoods of the four corner points $(\pm2r_0,0)$, $(\pm 2r_0, 2M_0r_0)$; by making an even reflection of $k$ w.r.t. the lines $x_1 = \pm 2r_0$ in a neighborhood in $\widetilde{\Omega}_{r_0}$ of each of these points, we can apply Schauder estimates and obtain the stated regularity there as well.
By the maximum principle, $\min_{\overline{\widetilde{\Omega}}_{r_0}}k =
\min_{\partial \widetilde{\Omega}_{r_0}}k$. In view of the boundary conditions, this minimum value cannot be achieved
in the closed segment $\{x_2=2M_0r_0, |x_1|\leq 2r_0\}$. It cannot be achieved in the
segments $\{\pm 2r_0\}\times (0,2M_0r_0)$, since the boundary conditions on these segments would contradict the Hopf lemma (see \cite{l:GT}). Therefore the minimum is attained on the boundary portion
$\{(x_1, \widetilde{g}(x_1)) \ | \ x_1\in [-2r_0,2r_0]\}$, so that
$\min_{\overline{\widetilde{\Omega}}_{r_0}}k = 0$.
Similarly, $\max_{\overline{\widetilde{\Omega}}_{r_0}}k = 1$ and, moreover, by the strong maximum and minimum principles,
$0<k(x_1,x_2)<1$, for every $(x_1,x_2)\in \widetilde{\Omega}_{r_0}$.
Denoting by $\mathcal R$ the reflection around the line $x_1=2r_0$, let
\begin{equation*}
\Omega^*_{r_0}=\widetilde{\Omega}_{r_0}\cup \mathcal R(\widetilde{\Omega}_{r_0})\cup(\{2r_0\}\times (0,2M_0r_0)),
\end{equation*}
and let $k^*$ be the extension of $k$ to $\overline{\Omega}^*_{r_0}$ obtained by making an even reflection of $k$ around the line $x_1=2r_0$.
Next, let us extend $k^*$ by periodicity w.r.t. the $x_1$ variable to the unbounded strip
\begin{equation*}
S_{r_0} = \cup_{l\in \Z} (\Omega^*_{r_0} + 8r_0le_1).
\end{equation*}
By Schauder estimates and by the periodicity of $k^*$, it follows that
\begin{equation}
\label{eq:5.1}
\|\nabla k^*\|_{L^\infty(S_{r_0})}\leq \frac{C_0}{r_0},
\end{equation}
with $C_0$ only depending on $M_0$ and $\alpha$.
Therefore there exists $\delta_0= \delta_0(M_0, \alpha)$, $0<\delta_0\leq \frac{1}{4}$, such that
\begin{equation}
\label{eq:5.2}
k^*(x_1,x_2)\geq \frac{1}{2} \quad \forall (x_1,x_2)\in \R\times[(1-\delta_0)2M_0r_0,2M_0r_0].
\end{equation}
Since $k^*>0$ in $S_{r_0}$, by applying the Harnack inequality and the Hopf lemma (see \cite{l:GT}), we have
\begin{equation*}
\frac{\partial k^*}{\partial x_2}\geq \frac{c_0}{r_0}, \quad \hbox{ on }
\partial S_{r_0},
\end{equation*}
with $c_0$ only depending on $M_0$ and $\alpha$.
Therefore, the function $k^*$ satisfies
\begin{equation*}
\left\{ \begin{array}{ll}
\Delta \left(\frac{\partial k^*}{\partial x_2}\right) =0, &
\hbox{in } S_{r_0},\\
& \\
\frac{\partial k^*}{\partial x_2}\geq \frac{c_0}{r_0}, & \hbox{on } \partial S_{r_0}.\\
\end{array}\right.
\end{equation*}
Moreover, $\frac{\partial k^*}{\partial x_2}$, being continuous and periodic w.r.t. the variable $x_1$, attains its minimum in $\overline{S}_{r_0}$. Since this minimum value cannot be attained in $S_{r_0}$, it follows that
\begin{equation}
\label{eq:6.1}
\frac{\partial k^*}{\partial x_2}\geq \frac{c_0}{r_0}, \quad \hbox{ in }
\overline{S}_{r_0}.
\end{equation}
Now, let $h$ be a harmonic conjugate of $-k$ in $\widetilde{\Omega}_{r_0}$, that is
\begin{equation}
\label{eq:6.2}
\left\{ \begin{array}{ll}
h_{x_1} = k_{x_2}, &\\
& \\
h_{x_2} = -k_{x_1}. &\\
\end{array}\right.
\end{equation}
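Such a function $h$ exists, and is determined up to an additive constant, since $\widetilde{\Omega}_{r_0}$ is simply connected and the compatibility condition for \eqref{eq:6.2} is satisfied, namely
\begin{equation*}
\partial_{x_2}(k_{x_2})-\partial_{x_1}(-k_{x_1})=\Delta k=0, \quad \hbox{in } \widetilde{\Omega}_{r_0}.
\end{equation*}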
The map $\Psi : = h+ik$ is a conformal map in $\widetilde{\Omega}_{r_0}$, with
\begin{equation}
\label{eq:DPsi}
D\Psi =\left( \begin{array}{ll}
k_{x_2} &-k_{x_1}\\
& \\
k_{x_1} &k_{x_2}\\
\end{array}\right),
\end{equation}
so that
$|D\Psi| = \sqrt 2|\nabla k|$ and, by \eqref{eq:5.1} and \eqref{eq:6.1},
\begin{equation}
\label{eq:6.3}
\sqrt 2\frac{c_0}{r_0}\leq |D\Psi|\leq \sqrt 2\frac{C_0}{r_0}, \quad \hbox{in } \widetilde{\Omega}_{r_0}.
\end{equation}
Let us analyze the behavior of $\Psi$ on the boundary of $\widetilde{\Omega}_{r_0}$
\begin{equation*}
\partial{\widetilde{\Omega}_{r_0}} = \sigma_1\cup \sigma_2\cup \sigma_3\cup \sigma_4,
\end{equation*}
where
\begin{equation*}
\sigma_1 = \{(x_1, \widetilde{g}(x_1))\ | \ x_1\in [-2r_0,2r_0]\},\qquad
\sigma_2 = \{(2r_0, x_2)\ | \ x_2\in [0,2M_0r_0]\},
\end{equation*}
\begin{equation*}
\sigma_3 = \{(x_1,2M_0r_0)\ | \ x_1\in [-2r_0,2r_0]\}, \qquad
\sigma_4 = \{(-2r_0, x_2)\ | \ x_2\in [0,2M_0r_0]\}.
\end{equation*}
On $\sigma_1$, we have
\begin{equation*}
\Psi(x_1, \widetilde{g}(x_1))= h(x_1, \widetilde{g}(x_1)) +i0,
\end{equation*}
\begin{equation*}
\frac{\partial}{\partial x_1}h(x_1, \widetilde{g}(x_1))=
h_{x_1}(x_1, \widetilde{g}(x_1))+ h_{x_2}(x_1, \widetilde{g}(x_1))\,\widetilde{g}'(x_1)
=-\sqrt{1+[\widetilde{g}'(x_1)]^2}\,(\nabla k\cdot n)>0,
\end{equation*}
where $n$ is the outer unit normal.
Therefore $\Psi$ is injective on $\sigma_1$ and $\Psi(\sigma_1)$ is an
interval $[a,b]$ contained in the line $\{y_2=0\}$, with
\begin{equation*}
a=h(-2r_0, 0), \quad b=h(2r_0, 0).
\end{equation*}
On $\sigma_2$, we have
\begin{equation*}
\Psi(2r_0, x_2)= h(2r_0, x_2)+ik(2r_0, x_2),
\end{equation*}
\begin{equation*}
h_{x_2}(2r_0, x_2)=-k_{x_1}(2r_0, x_2)=0,
\end{equation*}
and similarly on $\sigma_4$,
so that $h(-2r_0, x_2)\equiv a$ and $h(2r_0, x_2)\equiv b$ for $x_2\in[0,2M_0r_0]$ whereas, by \eqref{eq:6.1}, $k$ is increasing w.r.t. $x_2$. Therefore $\Psi$ is injective on $\sigma_2\cup \sigma_4$,
and maps $\sigma_2$ into the segment $\{b\}\times[0,1]$ and $\sigma_4$ into the segment $\{a\}\times[0,1]$.
On $\sigma_3$, we have
\begin{equation*}
\Psi(x_1, 2M_0r_0)= h(x_1, 2M_0r_0) +i1,
\end{equation*}
\begin{equation*}
h_{x_1}(x_1, 2M_0r_0) = k_{x_2}(x_1, 2M_0r_0)>0,
\end{equation*}
so that $h$ is increasing in $[-2r_0,2r_0]$, $\Psi$ is injective on $\sigma_3$ and $\Psi(\sigma_3)$ is the interval $[a,b]\times\{1\}$.
Therefore $\Psi$ maps in a bijective way the boundary of $\widetilde{\Omega}_{r_0}$ into the boundary of $[a,b]\times [0,1]$. Moreover, we have
\begin{equation}
\label{eq:b-a}
b-a= \int_{-2r_0}^{2r_0}h_{x_1}(x_1,2M_0r_0)dx_1 =
\int_{-2r_0}^{2r_0}k_{x_2}(x_1,2M_0r_0)dx_1.
\end{equation}
By \eqref{eq:5.1}, \eqref{eq:6.1} and \eqref{eq:b-a} the following estimate holds
\begin{equation}
\label{eq:b-a_bis}
4c_0\leq b-a\leq 4C_0.
\end{equation}
By \eqref{eq:6.3}, we can apply the global inversion theorem, ensuring that
\begin{equation*}
\Psi^{-1}: [a,b]\times [0,1]\rightarrow \overline{\widetilde{\Omega}}_{r_0}
\end{equation*}
is a conformal diffeomorphism. Moreover,
\begin{equation}
\label{eq:DPsi_inversa}
D(\Psi^{-1}) =\frac{1}{|\nabla k|^2}\left( \begin{array}{ll}
k_{x_2} &k_{x_1}\\
& \\
-k_{x_1} &k_{x_2}\\
\end{array}\right),
\end{equation}
\begin{equation}
\label{eq:8.1}
\frac{\sqrt 2}{C_0}r_0\leq |D\Psi^{-1}|= \frac{\sqrt 2}{|\nabla k|}\leq \frac{\sqrt 2}{c_0}r_0, \quad \hbox{in } [a,b]\times [0,1].
\end{equation}
Now, let us see that the set $\Psi(\Omega_{r_0})$ contains a closed rectangle having one base contained in the line $\{y_2=0\}$ and whose sides can be estimated in terms of $M_0$ and $\alpha$. To this aim we need to estimate the distance of $\Psi(0,0)=(\overline{\xi}_1,0)$ {}from the vertices $(a,0)$ and $(b,0)$ of the rectangle $[a,b]\times[0,1]$. Recalling that $\widetilde{g}\equiv 0$ for
$\frac{3}{2}r_0\leq |x_1|\leq 2r_0$, we have that $\sigma_1$ contains the segments
$\left[-2r_0,-\frac{3}{2}r_0\right]\times \{0\}$, $\left[\frac{3}{2}r_0,2r_0\right]\times \{0\}$, so that
\begin{equation}
\label{eq:segmentino}
h(2r_0,0)-h\left(\frac{3}{2}r_0,0\right)= \int_{\frac{3}{2}r_0}^{2r_0}h_{x_1}(x_1,0)dx_1 =
\int_{\frac{3}{2}r_0}^{2r_0}k_{x_2}(x_1,0)dx_1.
\end{equation}
By \eqref{eq:5.1}, \eqref{eq:6.1} and \eqref{eq:segmentino} we derive
\begin{equation}
\label{eq:segmentino_bis}
\frac{c_0}{2}\leq h(2r_0,0)-h\left(\frac{3}{2}r_0,0\right)\leq \frac{C_0}{2}.
\end{equation}
Similarly,
\begin{equation}
\label{eq:segmentino_ter}
\frac{c_0}{2}\leq h\left(-\frac{3}{2}r_0,0\right)-h(-2r_0,0)\leq \frac{C_0}{2}.
\end{equation}
Since $h$ is injective and maps $\sigma_1$ into $[a,b]\times\{0\}$, it follows that
\begin{equation*}
|\Psi(0,0)-(a,0)| = h(0,0)-h(-2r_0,0) \geq\frac{c_0}{2},
\end{equation*}
\begin{equation*}
|\Psi(0,0)-(b,0)| = h(2r_0,0) - h(0,0) \geq\frac{c_0}{2}.
\end{equation*}
Possibly replacing $c_0$ with $\min\{c_0,2\}$, we obtain that
$\overline{B}^+_{\frac{c_0}{2}}(\Psi(O))\subset [a,b]\times [0,1]$.
By \eqref{eq:8.1},
\begin{equation*}
|\Psi^{-1}(\xi)| = |\Psi^{-1}(\xi)-\Psi^{-1}(\Psi(O))|\leq\frac{\sqrt 2}{2}r_0<r_0, \qquad \forall \xi \in B^+_{\frac{c_0}{2}}(\Psi(O)),
\end{equation*}
so that $\Psi^{-1}\left(B^+_{\frac{c_0}{2}}(\Psi(O))\right)\subset \Omega_{r_0}$,
\begin{equation*}
\Psi(\Omega_{r_0})\supset B^+_{\frac{c_0}{2}}(\Psi(O))\supset R,
\end{equation*}
where $R$ is the rectangle
\begin{equation*}
R= \left(\overline{\xi}_1-\frac{c_0}{2\sqrt 2}, \overline{\xi}_1+\frac{c_0}{2\sqrt 2}\right)\times \left(0,\frac{c_0}{2\sqrt 2}\right).
\end{equation*}
Let us consider the homothety
\begin{equation*}
\Theta:[a,b]\times [0,1] \rightarrow\R^2,
\end{equation*}
\begin{equation*}
\Theta(\xi_1,\xi_2) = \frac{2\sqrt 2}{c_0}(\xi_1-\overline{\xi}_1,\xi_2),
\end{equation*}
which satisfies
\begin{equation*}
\Theta(\Psi(O)) = O, \qquad D\Theta = \frac{2\sqrt 2}{c_0} I_2,
\end{equation*}
\begin{equation*}
\Theta([a,b]\times [0,1]) =R^*, \qquad R^* =\left[\frac{2\sqrt 2}{c_0}(a-\overline{\xi}_1),
\frac{2\sqrt 2}{c_0}(b-\overline{\xi}_1)\right]\times
\left[0,
\frac{2\sqrt 2}{c_0}\right],
\end{equation*}
\begin{equation*}
\Theta(\overline{R}) = [-1,1]\times [0,1],
\end{equation*}
\begin{equation*}
D(\Theta\circ \Psi)(x) = \frac{2\sqrt 2}{c_0}D\Psi(x).
\end{equation*}
Its inverse
\begin{equation*}
\Theta^{-1}:R^*\rightarrow [a,b]\times [0,1],
\end{equation*}
\begin{equation*}
\Theta^{-1}(y_1,y_2) = \frac{c_0}{2\sqrt 2}(y_1+\overline{\xi}_1,y_2),
\end{equation*}
satisfies
\begin{equation*}
D\Theta^{-1}= \frac{c_0} {2\sqrt 2}I_2,
\end{equation*}
\begin{equation*}
D((\Theta\circ \Psi)^{-1})(y) = \frac{c_0}{2\sqrt 2}D\Psi^{-1}(\Theta^{-1}(y)).
\end{equation*}
Let us define
\begin{equation*}
\Phi =(\Theta\circ \Psi)^{-1}.
\end{equation*}
We have that $\Phi$ is a conformal diffeomorphism {}from $R^*$ into $\widetilde{\Omega}_{r_0}$ such that
\begin{equation*}
\Omega_{r_0}\supset \Psi^{-1}(R)=\Phi((-1,1)\times(0,1)),
\end{equation*}
\begin{equation}
\label{eq:gradPhibis}
\frac{c_0r_0}{2C_0}\leq |D\Phi(y)|\leq \frac{r_0}{2},
\end{equation}
\begin{equation}
\label{eq:gradPhiInvbis}
\frac{4}{r_0}\leq |D\Phi^{-1}(x)|\leq \frac{4C_0}{c_0r_0}.
\end{equation}
By \eqref{eq:gradPhibis}, we have that, for every $y\in [-1,1]\times [0,1]$,
\begin{equation}
\label{eq:stimaPhibis}
|\Phi(y)|= |\Phi(y)-\Phi(O)|\leq \frac{r_0}{2}|y|.
\end{equation}
Given any $x=(x_1,x_2)\in \overline{\Omega}_{r_0}$, let $x^* =(x_1,g(x_1))$.
We have
\begin{equation*}
|x-x^*| = |x_2 - g(x_1)| \leq|x_2|+ |g(x_1)-g(0)|\leq (M_0+1)|x|,
\end{equation*}
and, since the segment joining $x$ and $x^*$ is contained in $\overline{\Omega}_{r_0}$,
by \eqref{eq:gradPhiInvbis} we have
\begin{equation}
\label{eq:stimaPhiInv1}
|\Phi^{-1}(x)-\Phi^{-1}(x^*)|\leq \frac{4C_0}{c_0r_0}(M_0+1)|x|.
\end{equation}
Let us consider the arc $\tau(t)= \Phi^{-1}(t,g(t))$, for $t\in [0,x_1]$.
Again by \eqref{eq:gradPhiInvbis}, we have
\begin{multline}
\label{eq:stimaPhiInv2}
|\Phi^{-1}(x^*)| = |\Phi^{-1}(x^*)-\Phi^{-1}(O)| =|\tau(x_1) -\tau(0)| \leq\\
\leq \left|\int_0^{x_1}\tau'(t)dt \right|\leq \frac{4C_0}{c_0r_0}\sqrt{M_0^2+1}\ |x|.
\end{multline}
By \eqref{eq:stimaPhiInv1}, \eqref{eq:stimaPhiInv2}, we have
\begin{equation}
\label{eq:stimaPhiInvbis}
|\Phi^{-1}(x)| \leq
\frac{K}{r_0}|x|,
\end{equation}
with $K=\frac{4C_0}{c_0}(M_0+1+\sqrt{M_0^2+1})>8$.
{}From this last inequality, we have that
\begin{equation*}
\Phi^{-1}\left(\Omega_{r_0}\cap B_{\frac{r_0}{K}}\right)\subset B_1^+\subset (-1,1)\times(0,1), \qquad \Phi((-1,1)\times(0,1))\supset \Omega_{r_0}\cap B_{\frac{r_0}{K}}.
\end{equation*}
Let $\Phi = (\varphi, \psi)$.
We have that
\begin{equation}
\label{eq:DPhi}
D\Phi =\left( \begin{array}{ll}
\varphi_{y_1} &\varphi_{y_2}\\
& \\
-\varphi_{y_2} &\varphi_{y_1}\\
\end{array}\right),
\end{equation}
\begin{equation}
\label{eq:32.1bisluglio}
\det(D\Phi(y)) = |\nabla\varphi(y)|^2,
\end{equation}
\begin{equation}
\label{eq:DPhi_inversa}
(D\Phi)^{-1} =\frac{1}{|\nabla \varphi|^2}\left( \begin{array}{ll}
\varphi_{y_1} &-\varphi_{y_2}\\
& \\
\varphi_{y_2} &\varphi_{y_1}\\
\end{array}\right).
\end{equation}
Concerning the function $u(y) = v(\Phi(y))$, we can compute
\begin{equation}
\label{eq:32.3luglio}
(\nabla v) (\Phi(y)) = [(D\Phi(y))^{-1}]^T\nabla u(y),
\end{equation}
\begin{equation}
\label{eq:32.2luglio}
(\Delta v) (\Phi(y)) = \frac{1}{|\det(D\Phi(y))|}\divrg(A(y)\nabla u(y)),
\end{equation}
where
\begin{equation}
\label{eq:33.0luglio}
A(y) = |\det(D\Phi(y))|\, (D\Phi(y))^{-1} [(D\Phi(y))^{-1}]^T.
\end{equation}
By \eqref{eq:DPhi}--\eqref{eq:DPhi_inversa}, we obtain that
\begin{equation}
\label{eq:33.0bisluglio}
A(y) = I_2,
\end{equation}
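Indeed, this identity can be checked directly: by \eqref{eq:DPhi_inversa},
\begin{equation*}
(D\Phi(y))^{-1} [(D\Phi(y))^{-1}]^T =\frac{1}{|\nabla \varphi(y)|^4}\left( \begin{array}{ll}
\varphi_{y_1}^2+\varphi_{y_2}^2 &0\\
& \\
0 &\varphi_{y_2}^2+\varphi_{y_1}^2\\
\end{array}\right) = \frac{1}{|\nabla \varphi(y)|^2}\, I_2,
\end{equation*}
so that, by \eqref{eq:32.1bisluglio}, $A(y) = |\nabla \varphi(y)|^2\cdot\frac{1}{|\nabla \varphi(y)|^2}\, I_2 = I_2$.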
so that
\begin{equation}
\label{eq:33.0terluglio}
(\Delta v) (\Phi(y)) = \frac{1}{|\nabla \varphi(y)|^2}\Delta u(y),
\end{equation}
\begin{equation}
\label{eq:33.1luglio}
(\Delta^2 v) (\Phi(y)) = \frac{1}{|\nabla \varphi(y)|^2}\Delta \left(\frac{1}{|\nabla \varphi(y)|^2}\Delta u(y)\right).
\end{equation}
By using the above formulas, some computations allow us to derive \eqref{eq:equazione_sol_composta}--\eqref{eq:15.2} {}from \eqref{eq:equazione_piastra_non_div}.
Finally, the boundary conditions \eqref{eq:Dirichlet_sol_composta} follow {}from \eqref{eq:32.3luglio}, \eqref{eq:9.2b} and \eqref{eq:Diric_u_tilde}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:intermezzo}]
Here, we develop an argument which is contained in \cite[Chapter 9]{l:GT}.
By noticing that
$a\cdot\nabla\Delta u = \divrg(\Delta u a)-(\divrg a)\Delta u$,
we can rewrite \eqref{eq:41.1} in the form
\begin{equation*}
\sum_{|\alpha|,|\beta|\leq 2}D^\alpha(a_{\alpha\beta}D^\beta u)=0.
\end{equation*}
Let $\sigma\in\left[\frac{1}{2},1\right)$, $\sigma'=\frac{1+\sigma}{2}$ and let us notice that
\begin{equation}
\label{eq:3a.1}
\sigma'-\sigma = \frac{1-\sigma}{2}, \qquad 1-\sigma = 2(1-\sigma').
\end{equation}
Let $\xi\in C^\infty_0(\R^2)$ be such that
\begin{equation*}
0\leq\xi\leq 1,
\end{equation*}
\begin{equation*}
\xi=1, \hbox{ for } |x|\leq \sigma,
\end{equation*}
\begin{equation*}
\xi=0, \hbox{ for } |x|\geq \sigma',
\end{equation*}
\begin{equation*}
|D^k(\xi)|\leq \frac{C}{(\sigma'-\sigma)^k}, \hbox{ for } \sigma\leq \sigma', k=0,1,2.
\end{equation*}
By straightforward computations we have that
\begin{equation*}
\sum_{|\alpha|,|\beta|\leq 2}D^\alpha(a_{\alpha\beta}D^\beta (u\xi))=f,
\end{equation*}
with
\begin{equation*}
f=\sum_{|\alpha|,|\beta|\leq 2}\sum_ {\overset{\scriptstyle \delta_2\leq\alpha}{\scriptstyle
\delta_2\neq0}}{\alpha \choose \delta_2}
D^{\alpha-\delta_2}(a_{\alpha\beta}D^\beta u)D^{\delta_2}\xi+
\sum_{|\alpha|,|\beta|\leq 2}D^\alpha\left[a_{\alpha\beta}
\sum_ {\overset{\scriptstyle \delta_1\leq\beta}{\scriptstyle
\delta_1\neq0}}{\beta \choose \delta_1}
D^{\beta-\delta_1}u\,D^{\delta_1}\xi\right].
\end{equation*}
By standard regularity estimates (see for instance \cite[Theorem 9.8]{l:a65}),
\begin{equation}
\label{eq:8a.1}
\|u\xi\|_{H^{4+k}(B_1^+)}\leq C\left(\|u\xi\|_{L^{2}(B_1^+)}+
\|f\|_{H^{k}(B_1^+)}\right).
\end{equation}
On the other hand, it follows trivially that
\begin{equation}
\label{eq:8a.2}
\|f\|_{H^{k}(B_1^+)}\leq CM_1 \sum_{h=0}^{3+k}\frac{1}{(1-\sigma')^{4+k-h}}\|D^h u\|_{L^{2}(B_{\sigma'}^+)}.
\end{equation}
By inserting \eqref{eq:8a.2} in \eqref{eq:8a.1}, by multiplying both members by
$(1-\sigma')^{4+k}$ and by recalling \eqref{eq:3a.1}, we have
\begin{equation}
\label{eq:8a.3}
(1-\sigma)^{4+k}\|D^{4+k}u\|_{L^{2}(B_{\sigma}^+)}\leq C
\left(\|u\|_{L^{2}(B_1^+)}+\sum_{h=1}^{3+k}(1-\sigma')^h
\|D^{h}u\|_{L^{2}(B_{\sigma'}^+)}
\right).
\end{equation}
Setting
\begin{equation*}
\Phi_j=\sup_{\sigma\in\left[\frac{1}{2},1\right)}(1-\sigma)^j
\|D^{j}u\|_{L^{2}(B_{\sigma}^+)},
\end{equation*}
{}from \eqref{eq:8a.3} we obtain
\begin{equation}
\label{eq:9a.2}
\Phi_{4+k}\leq C\left(A_{2+k}+
\Phi_{3+k}\right),
\end{equation}
where
\begin{equation*}
A_{2+k}=\|u\|_{L^{2}(B_1^+)}+
\sum_{h=1}^{2+k}\Phi_h.
\end{equation*}
By the interpolation estimate \eqref{eq:3a.2} we have that, for every $\epsilon$, $0<\epsilon<1$ and for every $h\in \N$, $1\leq h\leq 3+k$,
\begin{equation}
\label{eq:9a.3}
\|D^{h}u\|_{L^{2}(B_{\sigma}^+)}\leq C\left(
\epsilon\|D^{4+k}u\|_{L^{2}(B_{\sigma}^+)}+
\epsilon^{-\frac{h}{4+k-h}}\|u\|_{L^{2}(B_{\sigma}^+)}\right).
\end{equation}
Let $\gamma>0$ and let $\sigma_\gamma\in \left[\frac{1}{2},1\right)$ such that
\begin{equation}
\label{eq:9a.4}
\Phi_{3+k}\leq(1-\sigma_\gamma)^{3+k}
\|D^{3+k}u\|_{L^{2}(B_{\sigma_\gamma}^+)}+\gamma.
\end{equation}
By applying \eqref{eq:9a.3} with $h=3+k$, $\epsilon=(1-\sigma_\gamma)\widetilde{\epsilon}$, $\sigma = \sigma_\gamma$, we have
\begin{equation*}
(1-\sigma_\gamma)^{3+k}\|D^{3+k}u\|_{L^{2}(B_{\sigma_\gamma}^+)}\leq
C\left(
\widetilde{\epsilon}(1-\sigma_\gamma)^{4+k}\|D^{4+k}u\|_{L^{2}(B_{\sigma_\gamma}^+)}+
\widetilde{\epsilon}^{-(3+k)}\|u\|_{L^{2}(B_{\sigma_\gamma}^+)}\right),
\end{equation*}
so that, by \eqref{eq:9a.4} and by the arbitrariness of $\gamma$, we have
\begin{equation*}
\Phi_{3+k}\leq C
\left(
\widetilde{\epsilon}\Phi_{4+k}+
\widetilde{\epsilon}^{-(3+k)}\|u\|_{L^{2}(B_{1}^+)}\right).
\end{equation*}
By inserting this last inequality in \eqref{eq:9a.2}, we get
\begin{equation*}
\Phi_{4+k}\leq C
\left(A_{2+k}+
\widetilde{\epsilon}^{-(3+k)}\|u\|_{L^{2}(B_{1}^+)}+
\widetilde{\epsilon}\Phi_{4+k}\right),
\end{equation*}
which gives, for $\widetilde{\epsilon} =\frac{1}{2C+1}$,
\begin{equation*}
\Phi_{4+k}\leq C
\left(\|u\|_{L^{2}(B_{1}^+)}+
\sum_{h=1}^{2+k}\Phi_{h}\right).
\end{equation*}
By proceeding similarly, we get
\begin{equation*}
\Phi_{4+k}\leq C
\|u\|_{L^{2}(B_{1}^+)},
\end{equation*}
so that
\begin{equation}
\label{eq:12a.1}
\|D^{4+k}u\|_{L^{2}(B_{\frac{1}{2}}^+)}
\leq2^{4+k}C\|u\|_{L^{2}(B_{1}^+)}, \qquad k=0,1,2.
\end{equation}
By applying \eqref{eq:9a.3} for a fixed $\epsilon$ and $\sigma=\frac{1}{2}$, we can estimate
the derivatives of order $h$, $1\leq h\leq 3$,
\begin{equation}
\label{eq:12a.1bis}
\|D^{h}u\|_{L^{2}(B_{\frac{1}{2}}^+)}
\leq C\left(\|D^{4+k}u\|_{L^{2}(B_{\frac{1}{2}}^+)}+
\|u\|_{L^{2}(B_{\frac{1}{2}}^+)}\right).
\end{equation}
By \eqref{eq:12a.1}, \eqref{eq:12a.1bis}, we have
\begin{equation*}
\|D^{h}u\|_{L^{2}(B_{\frac{1}{2}}^+)}
\leq C\|u\|_{L^{2}(B_{1}^+)}, \qquad \hbox{ for } h=1,\dots,6.
\end{equation*}
By employing a homothety, we obtain \eqref{eq:12a.2}.
\end{proof}
\medskip
\noindent
\emph{Acknowledgement:} The authors wish to thank Antonino Morassi for fruitful discussions on the subject of this work.
\bibliographystyle{plain}
\section{\label{sec:level1}INTRODUCTION}
Since the first observation of a superdeformed rotational band in $^{152}$Dy~\cite{TwinP1986_PRL57}, numerous superdeformed bands have been discovered in the ``traditional'' superdeformed regions of mass numbers $80, 130, 150$ and $190$. The latest superdeformed archipelago has been found in the light mass region around $A=40$. High spin states of the superdeformed rotational bands have been successfully populated in experiment for $^{36}$Ar~\cite{SvenssonC2000_PRL85,SvenssonC2001_PRC63}, $^{40}$Ar~\cite{IdeguchiE2010_PLB686}, $^{40}$Ca~\cite{IdeguchiE2001_PRL87} and $^{44}$Ti~\cite{OLearyC2000_PRC61}. Most interestingly, these nuclei are magic or near-magic systems whose ground states correspond to a spherical shape. This exotic shape coexistence phenomenon provides an ideal testing ground for theoretical models.
Many microscopic descriptions of these bands have been performed, such as the Cranked Nilsson-Strutinsky (CNS) approach~\cite{SvenssonC2000_PRL85}, the Shell Model (SM)~\cite{SvenssonC2000_PRL85,PovesA2004_NPA731,CaurierE2005_PRL95,CaurierE2007_PRC75}, the Cranked Relativistic Mean-Field (CRMF) model~\cite{IdeguchiE2001_PRL87}, Hartree-Fock BCS with the Skyrme interaction SLy6~\cite{BenderM2003_PRC68}, the Angular Momentum Projected Generator Coordinate Method (AMP-GCM) with the Gogny force D1S~\cite{Rodrguezguzmn2004_IJoMPE13}, the Projected Shell Model (PSM)~\cite{LongG2001_PRC63,YangY2015_eprint}, the Multidimensionally Constrained Relativistic Mean Field (MDC-RMF) model~\cite{LuB2014_PRC89}, Antisymmetrized Molecular Dynamics (AMD)~\cite{Kanada-EnyoY2005_PRC72,KimuraM2006_NPA767,TaniguchiY2007_PRC76,TaniguchiY2010_PRC82}, cluster models~\cite{SakudaT2004_NPA744_Ar36} and Cranked Hartree-Fock-Bogoliubov (CHFB) calculations. Each of these models can give a good description of certain aspects of these superdeformed nuclei under certain assumptions. Therefore, a comprehensive understanding of the superdeformed structure of these magic or near-magic nuclei needs complementary investigations with different models. Among these models, as stated in Refs.~\cite{CaurierE2005_PRL95,CaurierE2007_PRC75}, the interacting shell model, when affordable, is a prime choice. However, to carry out practical shell model calculations for $^{36}$Ar, the $1d_{5/2}$ orbital had to be excluded from the $sd$-$pf$ shell space~\cite{SvenssonC2000_PRL85}. Recently, shell model calculations were performed on $^{46}$Ti with a limited configuration space consisting of the $1d_{3/2}$ and $1f_{7/2}$ orbitals, but full $sd$-$pf$ calculations are still not possible~\cite{MedinaN2011_PRC84}. Therefore, since full $sd$-$pf$ shell model calculations remain difficult in the $A=40$ mass region, it is necessary to test an efficient shell model truncation scheme for well-deformed nuclei in this light mass region.
The cranked shell model has proved to be a powerful tool for studying nuclear collective rotation in most regions of the nuclear chart. However, up to now, no cranked shell model calculation has been performed on the SD bands in the $A=40$ region. For the first time, we perform cranked shell model calculations, with pairing treated by the particle-number conserving method (PNC-CSM), for the SD bands in such a light nuclear mass region. The PNC-CSM method was proposed to treat properly the pairing correlations and blocking effects. It has been applied successfully to describe the properties of normal deformed nuclei in the $A\sim170$ mass region~\cite{ZengJ1994_PRC50,WuC1991_PRC44,ZengJ2002_PRC65,LiuS2002_PRC66,LiuS2004_NPA735,ZengJ2001_PRC63}, superdeformed nuclei in the $A\sim150, 190$ mass regions~\cite{WuC1992_PRC45,LiuS2002_PRC66a,LiuS2004_NPA736,ZengJ1991_PRC44,HeX2005_NPA760}, high-$K$ isomeric states in the rare-earth and actinide mass regions~\cite{LiB2013_CPC37,ZhangZ2009_PRC80,ZhangZ2009_NPA816,FuX2013_PRC87} and, recently, the heaviest actinides and light superheavy nuclei around the $Z\sim100$ region~\cite{LiY2016_SCPMA59,ZhangZ2013_PRC87,HeX2009_NPA817}. In contrast to the Bardeen-Cooper-Schrieffer (BCS) or Hartree-Fock-Bogolyubov (HFB) approach, in the PNC method the Hamiltonian is diagonalized directly in a truncated Fock space~\cite{ZengJ1994_PRC50a,ZengJ1983_NPA405}. Therefore, particle number is conserved and Pauli blocking effects are taken into account exactly.
In the present work we focus on $^{36}$Ar and its heavier isotope $^{40}$Ar, for which superdeformed rotational bands have been established up to high spin. The present PNC-CSM calculations reproduce the experimentally extracted moments of inertia within an acceptable deviation. This indicates that the PNC-CSM method is an appropriate approach in the light mass region around $A=40$. The observed backbendings of the rotational bands can be understood in the PNC-CSM framework as the band crossing between the $[321]3/2$ and $[202]5/2$ configuration bands (for both neutrons and protons). Note that the Nilsson $[321]3/2$ and $[202]5/2$ levels stem from the spherical $1f_{7/2}$ and $1d_{5/2}$ orbitals, respectively. Therefore, the effect of the $1d_{5/2}$ orbital on these SD rotational bands is nontrivial.
\section{\label{sec:level2}THEORETICAL FRAMEWORK}
The cranked shell model Hamiltonian of an axially symmetric nucleus in the rotating frame reads,
\begin{equation}
H_{\text{CSM}}=\sum_{n}(h_{\text{Nil}}-\omega j_{x})_{n}+H_{\text{P}},
\end{equation}
where $h_{0}(\omega)=h_{\textrm{Nil}}-\omega j_{x}$ is the single-particle part with $h_{\textrm{Nil}}$ being the Nilsson Hamiltonian~\cite{NilssonS_DMFM29,NilssonS1969_NPA131} and $-\omega j_{x}$ being the Coriolis force with the cranking frequency $\omega$ about the $x$ axis.
The cranked Nilsson orbitals are obtained by diagonalizing the single-particle Hamiltonian $h_{0}(\omega)$. $H_{\text{P}}=H_{\text{P}}(0)+H_{\text{P}}(2)$ is the pairing interaction, including monopole and quadrupole pairing correlations. The corresponding effective pairing strengths $G_0$ and $G_2$ are connected with the dimension of the truncated Cranked Many-Particle Configuration (CMPC) space~\cite{WuC1989_PRC39} in which $H_{\text{CSM}}$ is diagonalized. In the following calculations, the CMPC space for $^{36,40}$Ar is constructed in the $N = 0\sim4$ major shells for both neutrons and protons. By taking the cranked many-particle configuration truncation (Fock space truncation), the dimensions of the CMPC space are about 500, and the corresponding effective monopole and quadrupole pairing strengths are $G_{0p}=G_{0n}=0.18$ MeV and $G_{2p}=G_{2n}=0.08$ MeV, respectively.
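The explicit forms of $H_{\text{P}}(0)$ and $H_{\text{P}}(2)$ are not written out here; in the standard PNC-CSM literature (see, e.g., Refs.~\cite{ZengJ1994_PRC50,WuC1989_PRC39}) they are usually taken as
\begin{eqnarray*}
H_{\text{P}}(0) & = & -G_{0}\sum_{\xi\eta}a_{\xi}^{\dagger}a_{\bar{\xi}}^{\dagger}a_{\bar{\eta}}a_{\eta}\ ,\\
H_{\text{P}}(2) & = & -G_{2}\sum_{\xi\eta}q_{2}(\xi)q_{2}(\eta)a_{\xi}^{\dagger}a_{\bar{\xi}}^{\dagger}a_{\bar{\eta}}a_{\eta}\ ,
\end{eqnarray*}
where $\bar{\xi}$ ($\bar{\eta}$) labels the time-reversed state of $\xi$ ($\eta$), and $q_{2}(\xi)=\sqrt{16\pi/5}\left\langle \xi\right|r^{2}Y_{20}\left|\xi \right\rangle$ is the diagonal element of the stretched quadrupole operator.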
The yrast and low-lying eigenstates are obtained as,
\begin{equation}
\left| \psi \right\rangle =\sum_{i}C_{i}\left| i\right\rangle \ ,
\label{eq:wf}
\end{equation}
where $\left\vert i\right\rangle $ is a cranked many-particle configuration and $C_{i}$ is the
corresponding probability amplitude.
The angular momentum alignment $\left\langle J_{x} \right\rangle$ of
the state $\left\vert \psi \right\rangle$ is,
\begin{eqnarray}
\left\langle \psi \right| J_{x}
\left| \psi \right\rangle
& = & \sum_{i}\left|C_{i}\right| ^{2}
\left\langle i\right| J_{x}\left| i\right\rangle
+ 2\sum_{i<j}C_{i}^{\ast }C_{j}
\left\langle i\right| J_{x}\left| j\right\rangle.
\label{eq:Jx1}
\end{eqnarray}
Since $J_{x}$ is a one-body operator, the matrix element $\left\langle i\right| J_{x}\left| j\right\rangle$ is nonzero only when $\left| i\right\rangle$ and $\left| j\right\rangle$ differ by one particle occupation; the differing orbitals are denoted by $\mu$ and $\nu$. Then $\left| i\right\rangle=(-)^{M_{i\mu}}\left| \mu\cdots\right\rangle$ and $\left| j\right\rangle=(-)^{M_{j\nu}}\left| \nu\cdots\right\rangle$, where the ellipsis stands for the same particle occupation, and $(-)^{M_{i\mu}}=\pm1$ and $(-)^{M_{j\nu}}=\pm1$ according to whether the permutation is even or odd. The angular momentum alignment can be expressed as the sum of the diagonal and the off-diagonal parts,
\begin{eqnarray}
\nonumber
\left\langle J_{x} \right\rangle
& = &\langle J_x(\mu)\rangle+\langle J_x(\mu\nu)\rangle\\
& =& \sum_{\mu} j_{x}(\mu)
+ \sum_{\mu<\nu} j_{x}(\mu\nu).
\label{eq:Jx2}
\end{eqnarray}
The kinematic moment of inertia is given by $J^{(1)}=\left\langle \psi \right\vert J_{x}\left\vert\psi \right\rangle /\omega$. For the details of the PNC-CSM method, see Refs.~\cite{ZengJ1994_PRC50,ZengJ1983_NPA405,ZengJ1994_PRC50a}.
\section{\label{sec:level3}CALCULATION AND DISCUSSION}
\begin{figure}[!t]
\centering
\includegraphics[width=4in]{fig1.png}
\caption{The Nilsson diagram for protons or neutrons around $N=Z=20$ as a function of the quadrupole deformation $\varepsilon_{2}$ ($\varepsilon_4=0$ and $\varepsilon_6=0$). The Nilsson parameters $(\kappa,\mu)$ are taken from Ref.~\cite{BengtssonT1985_NPA436}. The $\kappa_{2},\kappa_{3}$ are modified slightly to $\kappa_{2}=0.08,\kappa_{3}=0.12$.}
\label{Fig:fig1}
\end{figure}
The Nilsson parameters $(\kappa,\mu)$ are taken from Ref.~\cite{BengtssonT1985_NPA436}. Since $^{36}$Ar is an $N=Z$ symmetric nuclear system and the density of single-particle levels is low, the same set of $(\kappa,\mu)$ is used for neutrons and protons in the present calculations. The values of $\kappa_{2},\kappa_{3}$ are modified slightly to reproduce the correct single-particle level sequence. The corresponding Nilsson diagram for protons or neutrons is shown in Figure~\ref{Fig:fig1}. It shows that the deformed $N=Z=18$ and $20$ energy gaps appear in the well-deformed prolate region around $\varepsilon_{2}=0.4$ and $0.5$, respectively. On the oblate side, the $N=Z=20$ energy gap around $\varepsilon_{2}=-0.4$ is as large as the spherical shell gap at $N=Z=20$.
The quadrupole deformation $\beta_2=0.45$ was suggested by cranked Nilsson-Strutinsky calculations for the SD bands of $^{36}$Ar in the original experimental paper~\cite{SvenssonC2000_PRL85}. Later, a large low-spin quadrupole deformation $\beta_2=0.46\pm0.03$ was deduced from the $B(E2)$ value for the $4^+\rightarrow2^+$ SD transition in Ref.~\cite{SvenssonC2001_PRC63}. As for $^{40}$Ar, the observed superdeformed structure was calculated by the cranked Hartree-Fock-Bogoliubov method with the $P + QQ$ force~\cite{IdeguchiE2010_PLB686}. That calculation shows that $\beta_{2}=0.57$ at $I = 0\hbar$ and that the deformation gradually decreases to $0.45$ at $I = 12\hbar$. The triaxiality is found to be almost zero $(\gamma\approx 0^{\circ})$ throughout this angular momentum range. Calculations by the parity and angular momentum projection and the generator coordinate method suggest a quadrupole deformation $\beta_2=0.478$ and a triaxial deformation with $\gamma\approx10^{\circ}$~\cite{TaniguchiY2010_PRC82}. In the PNC-CSM framework, the nucleus is restricted to an axially symmetric shape over the whole spin range with fixed deformation parameters: $\varepsilon_2=0.48$ and $\varepsilon_2=0.5$ are adopted for $^{36}$Ar and $^{40}$Ar, respectively. Higher order axially symmetric deformations of $\varepsilon_4=0.06$ and $\varepsilon_6=-0.06$ are included to reproduce the backbending of the SD band in $^{36}$Ar.
\begin{figure}[!t]
\centering
\includegraphics[width=2.8in]{fig2.png}
\caption{(colour online) Cranked proton Nilsson levels near the Fermi surface of $^{36}$Ar with quadrupole deformation parameter $\varepsilon_2=0.48$. The signature $\alpha=+1/2$ $(\alpha=-1/2)$ levels are denoted by solid (dashed) lines. The positive (negative) parity levels are denoted by blue (red) lines. The cranked neutron Nilsson levels are the same.}
\label{Fig:fig2}
\end{figure}
With the above selected parameters, the cranked Nilsson levels with quadrupole deformation parameter $\varepsilon_2=0.48$ for $^{36}$Ar are shown in Figure~\ref{Fig:fig2}; they are the same for neutrons and protons. The proton/neutron Fermi surface of $^{36}$Ar lies between the $1f_{7/2}[321]3/2$ and $1d_{5/2}[202]5/2$ orbitals. Since these two Nilsson orbitals stay close to each other and cross around $\hbar\omega=1.5$ MeV $(\alpha=-1/2)$, a band crossing is likely to arise around $\hbar\omega=1.5$ MeV. In contrast, the neutron Fermi surface of $^{40}$Ar is lifted up to the deformed shell gap at $N=22$, where the band crossing occurs between the $1d_{3/2}[200]1/2$ and $1g_{9/2}[440]1/2$ orbitals around $\hbar\omega=1.5$ MeV. Since the $1g_{9/2}[440]1/2$ orbital is a high-$j$ low-$\Omega$ intruder orbital, characterized by its large contribution to the alignment and its large Coriolis response, a sharp backbending would arise around $\hbar\omega=1.5$ MeV. However, the experimentally observed SD band in $^{40}$Ar extends up to spin $I^{\pi}=12^{+}$, which is equivalent to $\hbar\omega=1.35$ MeV, so the predicted band crossing would occur beyond the experimentally observed frequency range; therefore, we will not discuss it further.
\begin{figure}[!t]
\centering
\includegraphics[width=4in]{fig3.png}
\caption{The comparison of the experimental kinematic moment of inertia $J^{(1)}$ of SD bands in $^{36}$Ar (a) and $^{40}$Ar (b) with the PNC-CSM calculations. Experimental data are denoted by solid squares and theoretical results are denoted by solid lines.}
\label{Fig:fig3}
\end{figure}
The comparison of the theoretical $J^{(1)}$ with the extracted experimental values for the SD bands in $^{36,40}$Ar is plotted in Figure~\ref{Fig:fig3}. The near-perfect rotational behavior of $^{36}$Ar was observed experimentally up to spin $I=10\hbar$ (around rotational frequency $\hbar\omega=1.5$ MeV), where a backbending arises~\cite{SvenssonC2000_PRL85}. The agreement of the backbending frequency around $\hbar\omega=1.5$ MeV is remarkably good. The calculated backbending of $J^{(1)}$ at $\hbar\omega>1.5$ MeV is less pronounced than in the experimental data. The cranked Nilsson-Strutinsky calculation shows that the system maintains an axially symmetric shape before the backbending, while it acquires a triaxial deformation after the backbending~\cite{SvenssonC2001_PRC63}. The cranked Skyrme-Hartree-Fock calculation reveals that the shape of the superdeformed $^{36}$Ar system becomes triaxial and evolves toward an oblate shape in the high spin limit. The PNC-CSM calculation is carried out with fixed axially symmetric deformations throughout the whole observed frequency range. Extending the present model to take the shape evolution into account may improve the quantitative agreement between the theoretical results and the experimental data.
\begin{figure}[!t]
\centering
\includegraphics[width=4in]{fig4.png}
\caption{ (colour online) Occupation probability $n_{\mu}$ of each orbit $\mu$ (including both $\alpha=\pm1/2$) near the Fermi surface for SD bands in $^{36}$Ar (left) and $^{40}$Ar (right). $n_{\mu}$ of the positive (negative) parity levels are denoted by black (red) lines. }
\label{Fig:fig4}
\end{figure}
Revealing the microscopic mechanism of the backbending of a rotational band is of great interest in theoretical studies, since it provides valuable information for a deeper understanding of the microscopic structure of the rotating nuclear system. Based on the analysis of a projected shell model calculation, the backbending of the SD band in $^{36}$Ar was explained as the result of the 0-, 2-, and 4-quasiparticle (qp) bands crossing each other at the same angular momentum $I^{\pi}=10^{+}$~\cite{LongG2001_PRC63}.
In the PNC-CSM calculations, the band crossing in $^{36}$Ar is clearly exhibited by the occupation probabilities $n_{\mu}$ of each cranked Nilsson orbital $\mu$ in Figures~\ref{Fig:fig4}(a) and~\ref{Fig:fig4}(c). We can see that the $n_{\mu}$ of neutrons and protons are the same. Before the backbending (at $\hbar\omega\le1.5$ MeV), the $1f_{7/2}[321]3/2$ orbital, just above the Fermi surface, is almost empty $(n_{\mu}\approx0)$, and the $1d_{5/2}[202]5/2$ orbital, just below the Fermi surface, is almost fully occupied $(n_{\mu}\approx2)$, whereas the occupations are exchanged after the backbending. Therefore, the backbending results from the simultaneous band crossing, for neutrons and protons, between the ground state (0-qp) band and the $1f_{7/2}[321]3/2$ (with signature $\alpha=\pm1/2$) configuration (4-qp) band. This is consistent with the conclusion of the projected shell model calculations. Furthermore, the PNC-CSM calculations show clearly why, in contrast to the common band crossing picture, the 2-qp configurations do not have a chance to play a major role in the structure of the SD yrast band in $^{36}$Ar: since $^{36}$Ar is an $N=Z$ symmetric nucleus, the neutron and proton signature pairs are excited from the $1d_{5/2}[202]5/2$ to the $1f_{7/2}[321]3/2$ configuration simultaneously, forming a 4-qp state immediately after the backbending.
\begin{figure}[!t]
\centering
\includegraphics[width=4in]{fig5.png}
\caption{ (colour online) The direct contributions to the angular momentum alignment $\langle J_x\rangle$ from the particle occupying the cranked orbit $\mu$ (denoted by solid lines) and the interference $\langle J_x(\mu\nu)\rangle$ between orbit $\mu$ and $\nu$ (denoted by the dashed lines) for the SD band of $^{36}$Ar (left) and $^{40}$Ar (right).}
\label{Fig:fig5}
\end{figure}
To quantify the effect, more detailed information on the angular momentum alignment $\langle J_x(\mu)\rangle$ from each single-particle orbital $\mu$ and the interference $\langle J_x(\mu\nu)\rangle$ between orbitals $\mu$ and $\nu$ is presented in Figure~\ref{Fig:fig5}. As shown in Figures~\ref{Fig:fig5}(a) and~\ref{Fig:fig5}(c), the sharp rise of $J^{(1)}$ in $^{36}$Ar mainly results from the sudden, simultaneous alignments of the neutron and proton $1d_{5/2}[202]5/2$ pairs and $1f_{7/2}[321]3/2$ pairs at $\hbar\omega=1.5$ MeV. Besides, the interference terms involving them are important too: while $\langle J_{x}([321]1/2\otimes[321]3/2)\rangle$ decreases suddenly at $\hbar\omega=1.5$ MeV, $\langle J_{x}([330]1/2\otimes[321]3/2)\rangle$ shows a sharp increase. The smooth ascent of $J^{(1)}$ at low frequency is attributed to the gradual alignments of the neutron and proton $1f_{7/2}[330]1/2$ pairs and $1d_{5/2}[220]1/2$ pairs.
The rotational behavior of the SD band in $^{40}$Ar is quite different. Only a slight upbending of $J^{(1)}$ appears at low spin (around $\hbar\omega=0.5$ MeV); afterwards, $J^{(1)}$ increases smoothly with rotational frequency. The PNC-CSM calculations reproduce the experimental trend very well, although they underestimate the data by about $1.2$ $\hbar^{2}$MeV$^{-1}$ throughout the whole observed frequency range. Neutron-proton ($n$-$p$) pairing, which is not included in the present PNC-CSM method, would be important in such a (near-)symmetric nuclear system and could be one reason for this systematic downward shift of the theoretical results. Further investigations of the effect of $n$-$p$ pairing within the PNC-CSM method in this mass region are therefore warranted.
Owing to the four additional neutrons in $^{40}$Ar, its rotational behavior differs considerably from that of $^{36}$Ar. From the occupation probabilities in figure~\ref{Fig:fig4}(b), we can see that the $1d_{3/2}[200]1/2$ and $1f_{7/2}[321]3/2$ orbitals are fully occupied $(n_{\mu}\approx2)$. Since the deformed $N=22$ shell gap is comparatively large, no neutron band crossing occurs within the experimentally observed frequency range. The proton occupation probabilities [see figure~\ref{Fig:fig4}(d)] are affected accordingly by the change of the mean field: the $1d_{5/2}[202]5/2$ orbital is more than half occupied $(n_{\mu}\approx1.25)$, while the $1f_{7/2}[321]3/2$ orbital is less than half occupied $(n_{\mu}\approx0.75)$. From figure~\ref{Fig:fig5}(b) and~\ref{Fig:fig5}(d), the slight upbending at low frequency is attributed to the alignments of the neutron $1f_{7/2}[321]3/2$ pairs and proton $1d_{5/2}[202]5/2$ pairs, together with the associated interference terms, the neutron $\langle J_{x}([312]5/2\otimes[321]3/2)\rangle$ and the proton $\langle J_{x}([330]1/2\otimes[321]3/2)\rangle$.
\section{\label{sec:level4}CONCLUSIONS}
For the first time, the cranked shell model with pairing correlations treated by the particle-number-conserving method has been used to describe the superdeformed rotational bands in the $A=40$ mass region. The calculations are carried out within the $N=0\sim4$ major shells, with axially symmetric deformation parameters $\varepsilon_{2,4,6}$ included and the pairing correlations treated properly. The experimental kinematic moments of inertia $J^{(1)}$ versus rotational frequency in $^{36}$Ar and $^{40}$Ar are reproduced well. This indicates that the PNC-CSM method is an efficient tool for describing the rotational properties of superdeformed nuclei around the $A=40$ mass region.
The microscopic mechanism of the variation of the superdeformed bands with frequency is made explicit in the PNC-CSM calculations. The backbending around $\hbar\omega=1.5$ MeV of the SD band in $^{36}$Ar is clearly revealed by analyzing the dominant components of the total wave function of the cranked shell model Hamiltonian. It is attributed to the simultaneous alignments of the neutron and proton $1d_{5/2}[202]5/2$ pairs and $1f_{7/2}[321]3/2$ pairs, caused by the band crossing between the $1d_{5/2}[202]5/2$ and $1f_{7/2}[321]3/2$ configuration states. As for $^{40}$Ar, the four additional neutrons raise the neutron Fermi surface to the $N=22$ deformed shell gap, so no band crossing occurs within the experimentally observed frequency range and the variation of $J^{(1)}$ with frequency is much gentler. The slight upbending at low frequency is mainly caused by the alignments of the neutron $1f_{7/2}[321]3/2$ pairs and proton $1d_{5/2}[202]5/2$ pairs. Moreover, the PNC-CSM results show that, besides the diagonal parts, the off-diagonal parts are very important. Not only the $1f_{7/2}$ orbital but also the $1d_{5/2}$ orbital plays an important role in the rotational behavior of the SD bands in $^{36,40}$Ar and cannot be neglected.
\section{\label{sec:level5}ACKNOWLEDGMENTS}
This work was supported by the National Natural Science Foundation of China under Grants No. 11775112 and No. 11275098, and the Priority Academic Program Development of Jiangsu Higher Education Institutions.
\bibliographystyle{unsrt}
\section{Introduction}
{In machine learning, Evolution Strategies (ES) are mainly used for direct policy search in reinforcement learning \cite{gomez2008accelerated,heidrich2009hoeffding,stulp2013robot,salimans2017evolution} and hyperparameter tuning in supervised learning, e.g., for Support Vector Machines \cite{glasmachers2008uncertainty,igel2011evolutionary} and Deep Neural Networks \cite{loshchilov2016cma}}.
Recently it has been shown~\cite{salimans2017evolution} that ES algorithms can be used for tasks which are dominated by deep reinforcement learning (RL) algorithms. Those tasks include learning a policy with discrete action set to control an agent's behavior in a wide variety of Atari environments, as well as learning a policy with continuous action space for agents operating in MuJoCo~\cite{todorov2012mujoco} environments. ES algorithms offer a set of attractive advantages when compared to deep RL algorithms:
\begin{itemize}
\item They are highly parallelizable, since the amount of information that has to be exchanged between workers does not depend on the network size.
\item Depending on the problem, they can offer better exploration, and as a result different training runs can converge to qualitatively different solutions.
\item They are not sensitive to the distribution of rewards and do not require careful tuning of discount factors while still facilitating long-term foresight more than traditional discount-based RL algorithms.
\item They can be used for the optimization of non-differentiable policy functions.
\end{itemize}
In this work, we go one step further than \citet{salimans2017evolution} and study the applicability of an even simpler ES algorithm to the task of learning a policy network for playing Atari games. \citet{salimans2017evolution} used a specialized ES algorithm that belongs to the class of Natural Evolution Strategies (NES)~\cite{wierstra2008natural}, which computes approximate gradients similar to the REINFORCE algorithm~\cite{williams1992simple}. Here, we demonstrate that very comparable results can already be achieved with a very basic canonical ES algorithm from the 1970s~\cite{rechenberg1973evolutionsstrategie,rudolph1997convergence}.
Our contributions in this work are as follows:
\begin{itemize}
\item We demonstrate that even a very basic Canonical ES algorithm is able to match (and sometimes surpass) the performance of the Natural Evolution Strategy used by \citet{salimans2017evolution} for playing Atari games.
\item We demonstrate that after 5 hours of training, Canonical ES is able to find novel solutions that exploit the game design and even find bugs in one game that allow them to achieve unprecedented high scores.
\item We experimentally study the performance characteristics of both ES algorithms, demonstrating that (1) individual runs have high variance in performance and that (2) longer runs (5h instead of 1h) lead to significant further performance improvements.
\item By demonstrating that Canonical ES is a competitive alternative to traditional RL algorithms and the specialized ES algorithms tested so far on the Atari domain we set a benchmark for future work on modern ES variants that are directly based on the canonical version.
\end{itemize}
\section{Background}
In this section, we discuss background on RL for playing Atari and on the previously-introduced NES method.
\subsection{Reinforcement Learning for Playing Atari}
In the Atari task of the OpenAI gym environment~\cite{brockman2016openai}, the agent needs to learn to maximize its cumulative reward solely by interacting with the environment (i.e., playing the game). Inputs to the agent include the raw pixels displayed on the Atari console as well as the reward signal; its actions correspond to joystick movements to execute.
Recent developments in the field of deep RL have made it possible to address challenging problems that require processing high-dimensional inputs, such as the raw images in this Atari domain, by means of deep convolutional neural networks. This approach was popularized by Google DeepMind's Nature paper on the deep Q network (DQN)~\cite{mnih2015human}, a Q-learning method that estimates the utility of an action in a given state by means of a deep neural network. Given this network for approximating the Q function, in any state $s$, DQN's policy then simply selects the action $a$ with the largest predicted Q value $Q(s,a)$.
While DQN requires this maximization over the action space, policy gradient algorithms directly parameterize a policy network that maps a state to a probability distribution over actions. Policy gradient algorithms, such as the Asynchronous Advantage Actor Critic (A3C)~\cite{mnih2016asynchronous}, directly optimize this policy network.
\vspace*{-0.5cm}
\paragraph{State representation.} In Atari games it is important for the state to include information from previous frames that will influence an agent's performance. In this work we use the following standard preprocessing pipeline~\cite{mnih2015human} with an implementation provided by OpenAI~\cite{baselines}. First, we apply preprocessing to the screen frames to reduce the dimensionality and remove artifacts related to the technical limitations of the Atari game console (flickering). Specifically, we apply a pixel-wise max operation on the current frame and the one preceding it. Next, we convert the output from this operation into a grayscale image, resize it and crop it to 84x84 pixels. At the end of this pipeline, we stack together the last 4 frames produced this way to construct an 84x84x4 state tensor. Also following common practice, to speed up policy evaluation (and thus reduce training time), instead of collecting every frame and making a decision at every step, we collect every 4th frame (3rd frame for SpaceInvaders to make the laser visible~\cite{mnih2013playing}) and apply the same action for all frames in between. Figure \ref{fig:Preprocessing} visualizes the full preprocessing pipeline. Figure \ref{fig:GameScreens} shows an example of the state representation for 5 different games.
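As a concrete illustration, the pipeline above can be sketched in a few lines of NumPy. This is a simplified stand-in rather than the OpenAI baselines implementation: the resize uses naive nearest-neighbor subsampling instead of proper interpolation, and the function names are our own.

```python
import numpy as np

def preprocess(frame, prev_frame):
    """One step of the (simplified) pipeline: pixel-wise max over two
    consecutive frames, grayscale conversion, naive resize to 84x84."""
    # Remove flickering: per-pixel max of current and preceding frame.
    merged = np.maximum(frame, prev_frame)            # (210, 160, 3)
    # Grayscale via standard luminance weights.
    gray = merged @ np.array([0.299, 0.587, 0.114])   # (210, 160)
    # Naive nearest-neighbor resize to 84x84 (the real pipeline uses
    # proper interpolation, e.g. cv2.resize).
    rows = np.linspace(0, gray.shape[0] - 1, 84).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, 84).astype(int)
    return gray[np.ix_(rows, cols)]                   # (84, 84)

def stack_state(last_four):
    """Stack the 4 most recent preprocessed frames into the
    84x84x4 state tensor fed to the policy network."""
    return np.stack(last_four, axis=-1)
```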
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{resources/Preprocessing.pdf}
\vspace*{-0.2cm}
\caption{\label{fig:Preprocessing}Preprocessing pipeline. Take every 4th frame, apply max operation to remove screen flickering, convert to grayscale, resize/crop, stack 4 last frames.}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.09\textwidth]{resources/Alien.png}
\includegraphics[width=0.09\textwidth]{resources/Enduro.png}
\includegraphics[width=0.09\textwidth]{resources/Pong.png}
\includegraphics[width=0.09\textwidth]{resources/Seaquest.png}
\includegraphics[width=0.09\textwidth]{resources/SpaceInvaders.png}
\vspace*{-0.2cm}
\caption{\label{fig:GameScreens}State representation (84x84x4 tensor) for 5 different games: Alien, Enduro, Pong, Seaquest and SpaceInvaders. Channels are shown on top of each other for better visualization.}
\end{figure}
\subsection{Natural Evolution for Playing Atari}
\begin{algorithm}[t]
\footnotesize
\caption{OpenAI ES\label{alg:OpenAI_ES}}
\KwIn{\\
$optimizer$ - Optimizer function \\
$\sigma$ - Mutation step-size \\
$\lambda$ - Population size \\
$\theta_{0}$ - Initial policy parameters \\
$F$ - Policy evaluation function \\
}
\For{t = 0, 1, ...}{
\For{i = 1, 2, ... $\frac{\lambda}{2}$}{
Sample noise vector: $\epsilon_{i} \sim \mathcal{N}(0, I)$ \\
Evaluate score in the game: $s^{+}_{i} \leftarrow F(\theta_{t} + \sigma*\epsilon_{i})$ \\
Evaluate score in the game: $s^{-}_{i} \leftarrow F(\theta_{t} - \sigma*\epsilon_{i})$ \\
}
Compute normalized ranks: $r = ranks(s), r_{i} \in [0, 1)$\\
Estimate gradient: $g \leftarrow \frac{1}{\sigma * \lambda} \sum_{i=1}^{\lambda} (r_{i}* \epsilon_{i})$ \\
Update policy network: $\theta_{t+1} \leftarrow \theta_{t} + optimizer(g)$ \\
}
\end{algorithm}
\citet{salimans2017evolution} recently demonstrated that an ES algorithm from the specialized class of Natural Evolution Strategies (NES; \citet{wierstra2008natural}) can be used to successfully train policy networks in a set of RL benchmark environments (Atari, MuJoCo) and compete with state-of-the-art RL algorithms. Algorithm \ref{alg:OpenAI_ES} describes their approach on a high level. In a nutshell, it evolves a distribution over policy networks over time by evaluating a population of $\lambda$ different networks in each iteration, starting from initial policy parameter vector $\theta_0$. At each iteration $t$, the algorithm evaluates the game scores $F(\cdot)$ of $\lambda$ different policy parameter vectors centered around $\theta_t$ (lines 3-5) to estimate a gradient signal, using mirrored sampling~\cite{brockhoff2010mirrored} to reduce the variance of this estimate. Since the $\lambda$ game evaluations are independent of each other, ES can make very efficient use of parallel compute resources. The resulting $\lambda$ game scores are then ranked (line 6), making the algorithm invariant to their scale; as noted by the authors, this approach (called fitness shaping~\cite{wierstra2014natural} but used in all ESs since the 1970s)
decreases the probability of falling into local optima early and lowers the influence of outliers. Based on these $\lambda$ ranks of local steps around $\theta_t$, the algorithm approximates a gradient $g$ (line 7) and uses this with a modern version of gradient descent (Adam~\cite{kingma2014adam} or SGD with momentum)
with weight decay to compute a robust parameter update step (line 8) in order to move $\theta_t$ towards the parameter vectors that achieved higher scores.
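To make the update concrete, the following NumPy sketch implements one iteration of the scheme above on a toy maximization problem. It is an illustrative approximation, not the authors' implementation: plain gradient ascent with a fixed learning rate stands in for Adam with weight decay, and scaling the ranks to $[-0.5, 0.5]$ is one common normalization, not necessarily the exact one used by OpenAI.

```python
import numpy as np

def centered_ranks(scores):
    # Fitness shaping: replace raw scores by their ranks,
    # rescaled to [-0.5, 0.5] (one common normalization).
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(len(scores))
    return ranks / (len(scores) - 1) - 0.5

def openai_es_step(theta, F, sigma=0.1, lam=100, lr=0.05, rng=None):
    """One iteration: mirrored sampling, rank-based gradient
    estimate, plain SGD update (the original uses Adam)."""
    if rng is None:
        rng = np.random.default_rng(0)
    eps = rng.standard_normal((lam // 2, theta.size))
    eps = np.concatenate([eps, -eps])                # mirrored sampling
    scores = np.array([F(theta + sigma * e) for e in eps])
    r = centered_ranks(scores)
    g = (r[:, None] * eps).sum(axis=0) / (sigma * lam)
    return theta + lr * g                            # gradient ascent step

# Toy maximization problem: F(theta) = -||theta||^2, optimum at 0.
F = lambda th: -np.sum(th ** 2)
theta = np.ones(5)
rng = np.random.default_rng(1)
for _ in range(200):
    theta = openai_es_step(theta, F, rng=rng)
```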
We note that the computation of the approximate gradient $g$ in line 7 follows the same approach as the well-known policy gradient algorithm REINFORCE. This can be shown as follows. Denoting the distribution from which we draw policy network parameters $\theta$ as $p_{\psi}$,
the gradient of the expected reward $F(\theta)$ with respect to $\psi$ is:
\begin{equation}
\nabla_{\psi} \mathbb{E}_{\theta \sim p_{\psi}}\{F(\theta)\} = \mathbb{E}_{\theta \sim p_{\psi}}\{F(\theta)\nabla_{\psi}\log{p_{\psi}(\theta)}\}.
\end{equation}
Because $p_{\psi}$ is chosen as an isotropic Gaussian distribution with mean $\theta_t$ and fixed standard deviation (mutation step-size) $\sigma$, the only parameter of $p_{\psi}$ is $\theta_t$ and we have:
\begin{equation}
\nabla_{\psi}\log{p_{\psi}(\theta)} = \nabla_{\theta_t}\log\left(\frac{1}{\sigma \sqrt{2\pi}}\, e^{-\left(\theta - \theta_t\right)^2 / (2\sigma^2)}\right) = \frac{\theta-\theta_t}{\sigma^{2}}
\end{equation}
\noindent{}and therefore the following identity holds:
\begin{eqnarray}
\nabla_{\psi} \mathbb{E}_{\theta \sim p_{\psi}}\{F(\theta)\}\!\!\!\!\!&=&\!\!\!\!\! \mathbb{E}_{\theta \sim p_{\psi}}\{F(\theta) * \frac{\theta-\theta_t}{\sigma^{2}}\}\\
&\approx&\!\!\!\!\! \frac{1}{\sigma*\lambda} \sum_{i=1}^{\lambda} F(\theta^{(i)}) * \frac{(\theta^{(i)}- \theta_t)}{\sigma},\label{eq:like_line7}
\end{eqnarray}\noindent{}where the last step is simply an approximation by $\lambda$ samples $\theta^{(i)} \sim p_{\psi}$. Equation \ref{eq:like_line7} is exactly as in line 7 of the algorithm except that the raw game scores $F(\theta^{(i)})$ are replaced with their ranks $r_i$ due to fitness shaping.
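The sample estimate in Equation \ref{eq:like_line7} can be checked numerically on a toy objective. For $F(\theta) = -\theta^2$ and $\theta \sim \mathcal{N}(\theta_t, \sigma^2)$, the smoothed objective is $\mathbb{E}[F(\theta)] = -(\theta_t^2 + \sigma^2)$, so its exact gradient with respect to $\theta_t$ is $-2\theta_t$; the minimal sketch below (using raw scores, i.e., without fitness shaping) recovers this value:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_t, sigma, lam = 1.0, 0.3, 200_000

# Toy objective; its Gaussian-smoothed expectation is
# E_{theta ~ N(theta_t, sigma^2)}[-theta^2] = -(theta_t^2 + sigma^2),
# so the exact gradient w.r.t. theta_t is -2 * theta_t = -2.0.
F = lambda th: -th ** 2

eps = rng.standard_normal(lam)
thetas = theta_t + sigma * eps
# Sample-based gradient estimate: mean of F(theta) * (theta - theta_t) / sigma^2.
grad_est = np.mean(F(thetas) * (thetas - theta_t) / sigma ** 2)
```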
\citet{salimans2017evolution} also made two further contributions to stabilize the training and improve performance.
Firstly, they introduced a novel parallelization technique (which uses a noise table to reduce communication cost in order to scale to a large number of $\lambda$ parallel workers) and
used virtual batch normalization~\cite{salimans2016improved} to make the network output more sensitive to the noise in the parameter space.
\section{Canonical ES}
While the specialized ES algorithm proposed by OpenAI is equivalent to a policy gradient algorithm, in this work we consider a very basic canonical ES algorithm that belongs to the prominent family of $(\mu,\lambda)$-ES optimization algorithms. Algorithm \ref{algo:Canonical_ES} illustrates this simple approach. Starting from a random parameter vector $\theta_{0}$, in each iteration $t$ we generate an offspring population of size $\lambda$. For each element of the population, we add scaled mutation noise $\sigma\epsilon_{i}$ with $\epsilon_{i} \sim \mathcal{N}(0, I)$ to the current parameter vector $\theta_{t}$ (line 3) and evaluate the game score of the resulting vector by one episode rollout (line 4). We then pick the top $\mu$ parameter vectors according to the collected scores and form a new parameter vector $\theta_{t+1}$ as their weighted mean (lines 5-6).
This algorithm is very basic in its setting: we do not use mirrored sampling, we do not decay the parameters, we do not use any advanced optimizer. The standard weights used to compute the weighted mean of the top $\mu$ solutions fulfill a similar function to the fitness shaping implemented in OpenAI ES.
The new elements introduced by \citet{salimans2017evolution} that we \emph{do} use are virtual batch normalization (which is a component of the game evaluations $F$ and not really of the ES algorithm itself) and the efficient parallelization of ES using a random noise table.
We initially implemented the Cumulative Step-size Adaptation (CSA) procedure~\cite{hansen1996adapting}, which is standard in canonical ES algorithms. However, due to the high time cost of game evaluations, our time-limited training only allows up to a few thousand update iterations. Since the dimensionality of the parameter vector is relatively large (1.7M), this results in only a negligible change of $\sigma$ during training. Effectively, our algorithm therefore used a fixed step-size, and we removed step-size adaptation from the description in Algorithm \ref{algo:Canonical_ES}, making it even somewhat simpler than a typical ES. We employed weighted recombination \cite{rudolph1997convergence} with weights $w$ as in CSA-ES.
\begin{algorithm}
\footnotesize
\caption{Canonical ES Algorithm\label{algo:Canonical_ES}}
\KwIn{\\
$\sigma$ - Mutation step-size \\
$\theta_{0}$ - Initial policy parameters \\
$F$ - Policy evaluation function \\
$\lambda$ - Offspring population size \\
$\mu$ - Parent population size \\
}
\Initialize{ \\
\vspace*{0.1cm}\hspace*{-1.5cm}$w_{i} = \frac{\log(\mu + 0.5) - \log(i)}{\sum_{j=1}^{\mu}\left(\log(\mu + 0.5) - \log(j)\right)}$ \\
}
\For{$t = 0, 1, ...$}{
\For{$i = 1 ... \lambda$}{
Sample noise: $\epsilon_{i} \sim \mathcal{N}(0, I)$ \\
Evaluate score in the game: $s_{i} \leftarrow F(\theta_{t} + \sigma*\epsilon_{i})$ \\
}
Sort $(\epsilon_1, \dots, \epsilon_\lambda)$ according to $s$ ($\epsilon_i$ with best $s_i$ first)\\
Update policy: $\theta_{t+1} \leftarrow \theta_{t} + \sigma*\sum_{j=1}^{\mu}w_{j} * \epsilon_{j}$\\
Optionally, update step size $\sigma$ (see text)
}
\end{algorithm}
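As a sanity check of the procedure, the following NumPy sketch runs Algorithm \ref{algo:Canonical_ES} (with fixed step-size, as in our experiments) on a toy quadratic maximization problem; the function name and the toy objective are ours and are not part of the experimental setup.

```python
import numpy as np

def canonical_es(F, theta0, sigma=0.1, lam=30, mu=10, iters=300, seed=0):
    """Minimal sketch of the canonical (mu, lambda)-ES with a fixed
    step-size sigma (no CSA) and log-rank recombination weights."""
    rng = np.random.default_rng(seed)
    # Recombination weights w_i from the initialization of Algorithm 2.
    i = np.arange(1, mu + 1)
    w = np.log(mu + 0.5) - np.log(i)
    w /= w.sum()
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        eps = rng.standard_normal((lam, theta.size))
        scores = np.array([F(theta + sigma * e) for e in eps])
        top = np.argsort(scores)[::-1][:mu]   # best-scoring offspring first
        theta = theta + sigma * (w[:, None] * eps[top]).sum(axis=0)
    return theta

# Toy maximization problem with optimum at the origin.
best = canonical_es(lambda th: -np.sum(th ** 2), np.ones(5))
```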
\section{Experiments}
In our experiments, we evaluate the performance of the Canonical ES on a subset of 8 Atari games available in OpenAI Gym~\cite{brockman2016openai}. We selected these games to represent different levels of difficulty, ranging from simple ones like Pong and Breakout to complex games like Qbert and Alien. We make our implementation of the Canonical ES algorithm available online at \url{https://github.com/PatrykChrabaszcz/Canonical_ES_Atari}.
We compare our results against those obtained with the ES algorithm proposed by OpenAI~\cite{salimans2017evolution}. Since no implementation of that algorithm is publicly available for Atari games, we re-implemented it with help from OpenAI\footnote{We thank Tim Salimans for his helpful email support.} and the results of our implementation (which we refer to as ``OpenAI ES (our)'') roughly match those reported by~\citet{salimans2017evolution} (see Table \ref{table:Scores}).
\vspace*{-0.2cm}
\paragraph{{Network Architecture.}}
We use the same network structure as the original DQN work~\cite{mnih2015human}, only changing the activation function from ReLU to ELU~\cite{clevert2015fast} and adding batch normalization layers~\cite{ioffe2015batch}. The network as presented in Figure \ref{fig:Architecture} has approximately 1.7M parameters. We initialize network weights using samples from a normal distribution $\mathcal{N}(\mu=0, \sigma=0.05)$.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{resources/Architecture.pdf}
\caption{\label{fig:Architecture}Neural network architecture. Numbers on top show number of parameters in each layer (kernel parameters and batch norm parameters). Each batch norm layer has a trainable shift parameter $\beta$; the last batch norm has an additional trainable scale parameter $\alpha$.}
\end{figure}
\vspace*{-0.5cm}
\paragraph{{Virtual Batch Normalization.}}Following \citet{salimans2017evolution}, we use virtual batch normalization~\cite{salimans2016improved}.
In order to collect the reference batch, at the beginning of the training we play the game using random actions. In each step, we save the corresponding state with the probability $p(save) = 1\%$ and stop when 128 samples have been collected.
\vspace*{-0.5cm}
\paragraph{{Training.}}For each game and each ES variant we tested, we performed 3 training runs, each on 400 CPUs with a time budget of 10 hours. Every worker (CPU) evaluates 2 offspring solutions, meaning that our setting is roughly the same as training for 5 hours with full parallelization (800 CPUs); therefore, we label this setting as ``5 hours''. In addition, we save the solution proposed after 2 hours of training (equivalent to 1 hour with full parallelization) or after 1 billion training frames (whichever comes first) to allow for a fair comparison with results reported by \citet{salimans2017evolution}; we label this setting as ``1 hour''. During training, one CPU is reserved to evaluate the performance of the solution proposed in the current iteration; hence, the offspring population size in our experiments is $\lambda=798$.
In each decision step, the agent passes its current environment state through the network and performs an action that corresponds to the output with the highest value.
We limit episodes to have a maximum length of 25k steps; we do not adjust this value during training.
An episode includes multiple lives, and we do not terminate an episode after the agent dies the first time in order to allow the learning of strategies that span across multiple lives. We start each episode with up to 30 initial random no-op actions.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{resources/Baseline_VS_OpenAI.pdf}
\vspace*{-0.7cm}
\caption{\label{fig:Scores} Final evaluation scores (mean across 30 evaluation rollouts with random initial no-ops) for Canonical ES ($\mu \in \{10, 20, 50, 100, 200, 400\}$) and for our implementation of OpenAI ES. For each setting we use population size $\lambda=798$ and report the performance from 3 separate training runs. We report evaluation scores after 1 hour and after 5 hours of training. The horizontal line indicates the results reported by \protect\citet{salimans2017evolution}.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{resources/time_plot.pdf}
\vspace*{-0.5cm}
\caption{\label{fig:TimePlot}{Training curves for Canonical ES ($\mu=50$) and OpenAI ES (our). At each iteration $t$ we evaluate the currently proposed solution $\theta_{t}$ twice using one CPU; because of this, the values reported in Table~\ref{table:Scores} (mean over 30 evaluation runs) might differ. For better readability, we filter out the noise in the plot. We observe that both algorithms often get stuck in local optima.}}
\end{figure}
\begin{table*}[t!]
\footnotesize
\centering
\begin{tabular}{l | c | l l | l l}
\hline
& OpenAI ES & OpenAI ES (our) & Canonical ES & OpenAI ES (our) & Canonical ES \\ [0.5ex]
& 1 hour & 1 hour & 1 hour & 5 hours & 5 hours \\ \hline
Alien & & $\bmg{3040\pm276.8}$ & $2679.3\pm1477.3$ & $4940\pm0$ & $\bmu{5878.7\pm1724.7}$ \\
Alien & $994$ & $\bmu{1733.7\pm493.2}$& $965.3\pm229.8$ & $3843.3\pm228.7$ & $\bmu{5331.3\pm990.1}$ \\
Alien & & $\bmu{1522.3\pm790.3}$& $885\pm469.1$ & $2253\pm769.4$ & $\bmu{4581.3\pm299.1}$ \\ \hline
BeamRider & & $\bmg{792.3\pm146.6}$ & $774.5\pm202.7$ & $\bmu{4617.1\pm1173.3}$ & $1591.3\pm575.5$ \\
BeamRider & $744$ & $708.3\pm194.7$ & $\bmg{746.9\pm197.8}$ & $\bmu{1305.9\pm450.4}$ & $965.3\pm441.4$ \\
BeamRider & & $690.7\pm87.7$ & $\bmu{719.6\pm197.4}$ & $\bm{714.3\pm189.9}$ & $703.5\pm159.8$ \\ \hline
Breakout & & $14.3\pm6.5$ & $\bmu{17.5\pm19.4}$ & $26.1\pm5.8$ & $\bmu{105.7\pm158}$ \\
Breakout & $9.5$ & $11.8\pm3.3$ & $\bmu{13\pm17.1}$ & $19.4\pm6.6$ & $\bmu{80\pm143.4}$ \\
Breakout & & $\bmu{11.4\pm3.6}$ & $10.7\pm15.1$ & $\bmu{14.2\pm2.7}$ & $12.7\pm17.7$ \\ \hline
Enduro & & $70.6\pm17.2$ & $\bmu{84.9\pm22.3}$ & $\bmu{115.4\pm16.6}$ & $86.6\pm19.1$ \\
Enduro & $95$ & $36.4\pm12.4$ & $\bmu{50.5\pm15.3}$ & $\bm{79.9\pm18}$ & $76.5\pm17.7$ \\
Enduro & & $\bmu{25.3\pm9.6}$ & $7.6\pm5.1$ & $58.2\pm10.5$ & $\bm{69.4\pm32.8}$ \\ \hline
Pong & & $\bmu{21.0\pm0.0}$& $12.2\pm16.6$ & $\bmg{21.0\pm0.0}$ & $\bmg{21.0\pm0.0}$ \\
Pong & $21$ & $\bmu{21.0\pm0.0}$ & $5.6\pm20.2$ & $\bmu{21\pm0}$ & $11.2\pm17.8$ \\
Pong & & $\bmu{21.0\pm0.0}$ & $0.3\pm20.7$ & $\bmu{21\pm0}$ & $-9.8\pm18.6$ \\ \hline
Qbert & & $\bmu{8275\pm0}$ & $8000\pm0$ & $12775\pm0$ & $\bmu{263242\pm433050}$ \\
Qbert & $147.5$ & $1400\pm0$ & $\bmu{6625\pm0}$ & $5075\pm0$ & $\bmu{16673.3\pm6.2}$ \\
Qbert & & $1250\pm0$ & $\bmu{5850\pm0}$ & $4300\pm0$ & $\bmg{5136.7\pm4093.9}$ \\ \hline
Seaquest & & $1006\pm20.1$ & $\bmu{1306.7\pm262.7}$ & $1424\pm26.5$ & $\bmu{2849.7\pm599.4}$ \\
Seaquest & $1390$ & $898\pm31.6$ & $\bmu{1188\pm24}$ & $1040\pm0$ & $\bmu{1202.7\pm27.2}$ \\
Seaquest & & $887.3\pm20.3$ & $\bmu{1170.7\pm23.5}$ & $\bmg{960\pm0}$ & $946.7\pm275.1$ \\ \hline
SpaceInvaders & & $\bmu{1191.3\pm84.6}$ & $896.7\pm123$ & $\bmg{2326.5\pm547.6}$ & $2186\pm1278.8$ \\
SpaceInvaders & 678.5 & $\bmu{983.7\pm158.5}$ & $721.5\pm115$ & $\bmg{1889.3\pm294.3}$ & $1685\pm648.6$ \\
SpaceInvaders & & $\bmu{845.3\pm69.7}$ & $571.3\pm98.8$ & $\bmu{1706.5\pm118.3}$ & $1648.3\pm294.5$ \\ \hline
\end{tabular}
\caption{Evaluation scores (mean over 30 evaluation runs with up to 30 initial no-ops) for different training times and algorithms. For each training time limit we compare results from OpenAI ES(our) and Canonical ES $\mu=50$. For each setting we performed 3 training runs, ordered the results for each game and compared them row by row, boldfacing the better score. Results for which the difference is significant across the 30 evaluation runs based on a Mann-Whitney U test ($p<0.05$) are marked in blue.}
\label{table:Scores}
\end{table*}
\vspace*{-0.5cm}
\paragraph{{Results.}}First, we studied the importance of the parent population size $\mu$. This hyperparameter is known to often be important for ES algorithms and we found this to also hold here. We measured performance for $\mu \in\{10, 20, 50, 100, 200, 400\}$ and observed different optimal values of $\mu$ for different games (Figure \ref{fig:Scores}). For the subsequent analyses we fixed $\mu=50$ for all games.
In Table~\ref{table:Scores}, for each game and for both Canonical ES and OpenAI ES, we report the results of our 3 training runs; for each of these runs, we evaluated the final policy found using 30 rollouts. We ordered the 3 runs according to their mean score and compared the results row-wise. We ran a Mann-Whitney U test~\cite{mann1947test} to check if the distributions of the 30 rewards significantly differed between the two ES variants. After 5 hours of training, Canonical ES performed significantly better than OpenAI ES in 9 out of 24 different runs ($p < 0.05$) and worse in 7 (with 8 ties); this shows that our simple Canonical ES is competitive with the specialized OpenAI ES algorithm on complex benchmark problems from the Atari domain.
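For reference, the U statistic underlying this comparison can be computed directly. The sketch below uses the large-sample normal approximation for the two-sided p-value and omits the tie correction, so it is an illustration rather than a replacement for a library routine such as \texttt{scipy.stats.mannwhitneyu}.

```python
import math
import numpy as np

def mann_whitney_u(x, y):
    """U statistic for two independent samples with a normal
    approximation for the two-sided p-value (no tie correction);
    a sketch of the test used to compare the 30 evaluation
    rewards of the two ES variants."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    # U1 counts the (x_i, y_j) pairs with x_i > y_j (+0.5 per tie).
    diff = x[:, None] - y[None, :]
    u1 = float((diff > 0).sum() + 0.5 * (diff == 0).sum())
    mean_u = n1 * n2 / 2.0
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u1 - mean_u) / sd_u
    # Two-sided p-value: 2 * (1 - Phi(|z|)) = 1 - erf(|z| / sqrt(2)).
    p = 1.0 - math.erf(abs(z) / math.sqrt(2.0))
    return u1, p
```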
Additionally, we tested whether the algorithms still made significant progress between 1 and 5 hours of training; indeed, the performance of our Canonical ES algorithm improved significantly in 16 of 24 cases, demonstrating that it often could make effective use of additional training time.
However, qualitatively, we observed that the performance of both algorithms tends to plateau in locally optimal solutions for extended periods of time (see Figure \ref{fig:TimePlot}) and that they often find solutions that are not robust to the noise in the domain; i.e., there is a high variance in the points scored with the same trained policy network across initial environment conditions.
\section{Qualitative analysis}\label{Qualitative analysis}
Visual inspection and comparison of the solutions found by reinforcement learning algorithms and those found by ES algorithms reveals some significant differences.
In this section we describe interesting agent behaviors on different games; a video with evaluation runs on all games is available on-line at \url{https://www.youtube.com/watch?v=0wDzPBiURSI}.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{resources/Seaquest_behaviour.png}
\vspace*{-0.2cm}
\caption{\label{fig:SeaquestBehaviour}{The agent learns to dive to the bottom of the sea and constantly shoot left and right, occasionally scoring points.}}
\end{figure}
First, we study two games in which most of the ES runs converged to similar sub-optimal solutions: Seaquest and Enduro. In Seaquest, the agent dives to the bottom of the sea and starts to shoot left and right, occasionally hitting an enemy and scoring points (Figure \ref{fig:SeaquestBehaviour}). However, it is not able to detect the lack of oxygen and quickly dies. In Enduro, the agent steers the car to keep it in the middle of the road and accelerates, but from time to time it bounces back after hitting a rival car in front of it. Both solutions are easy-to-reach local optima and do not require developing any complex behavior; since the corresponding scores are much higher than those of random policies, we believe that it is hard for ES to escape from these local attractor states in policy space.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{resources/Qbert_behaviour.png}
\vspace*{-0.2cm}
\caption{\label{fig:QbertBehaviourSuicide}{The agent (orange blob in the upper left part of the screen) learns to commit suicide to kill its enemy (purple spring) and collects enough points to get another life. The whole cycle is repeated over and over again.}}
\end{figure}
We next study the game Qbert, in which Canonical ES found two particularly interesting solutions. In the first case (\url{https://www.youtube.com/watch?v=-p7VhdTXA0k}), the agent gathers some points at the beginning of the game and then stops showing interest in completing the level. Instead, it starts to bait an enemy that follows it to kill itself. Specifically, the agent learns that it can jump off the platform when the enemy is right next to it, because the enemy will follow: although the agent loses a life, killing the enemy yields enough points to gain an extra life again (Figure \ref{fig:QbertBehaviourSuicide}). The agent repeats this cycle of suicide and killing the opponent over and over again.
In the second interesting solution (\url{https://www.youtube.com/watch?v=meE5aaRJ0Zs}), the agent discovers an in-game bug. First, it completes the first level and then starts to jump from platform to platform in what seems to be a random manner. For a reason unknown to us, the game does not advance to the second round but the platforms start to blink and the agent quickly gains a huge amount of points (close to 1 million for our episode time limit). Interestingly, the policy network is not always able to exploit this in-game bug and 22/30 of the evaluation runs (same network weights but different initial environment conditions) yield a low score.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{resources/Breakout_behaviour.png}
\vspace*{-0.2cm}
\caption{\label{fig:BreakoutBehaviour}{The agent learns to make a hole in the brick wall to collect many points with one ball bounce.}
}
\end{figure}
Breakout seems to be a challenging environment for ES algorithms. Canonical ES found reasonable solutions only for a few settings. The best solution shows a strategy that looks similar to the best strategies obtained by reinforcement learning algorithms, in which the agent creates a hole on one side of the board and shoots the ball through it to gain many points in a short amount of time (Figure \ref{fig:BreakoutBehaviour}). However, even this best solution found by ES is not stable: for different initial environment conditions, the agent with the same policy network quickly loses the game with only a few points.
In the games SpaceInvaders and Alien we also observed interesting strategies. We do not clip rewards during training, as is sometimes done for reinforcement learning algorithms. Because of that, the agent pays more attention to behaviors that result in a higher reward, sometimes even at the cost of the main game objective. In SpaceInvaders, we observe that in the best solution the agent hits the mother-ship that appears periodically at the top of the screen with 100\% accuracy. In Alien, the agent focuses on capturing an item that makes it invincible and then goes to the enemy spawn point to collect rewards for eliminating newly appearing enemies. However, the agent is not able to detect when the invincibility period ends.
\section{Recent related work}
Evolution strategies, such as the Covariance Matrix Adaptation Evolution Strategy (CMA-ES~\cite{hansen2003reducing}), are commonly used as a baseline approach in reinforcement learning tasks~\cite{heidrich2009hoeffding,stulp2013robot,duan2016benchmarking,li2017deep}. Here, we only discuss the most recent related works, which also followed up on the work by \citet{salimans2017evolution}; in particular, we discuss three related arXiv preprints that scientists at Uber released in the last two months about work concurrent to ours.
Similarly to our work, \citet{such2017deep} studied the performance of simpler algorithms than OpenAI's specialized ES variant. They show that genetic algorithms (another broad class of black-box optimization algorithms) can also reach results competitive to OpenAI's ES variant and other RL algorithms. Additionally, interestingly, the authors show that for some of the games even simple random search can outperform carefully designed RL and ES algorithms.
\citet{lehman2017more} argue that comparing ES to finite-difference-based approximation is too simplistic. The main difference comes from the fact that ES tries to optimize the performance of the distribution of solutions rather than a single solution, thus finding solutions that are more robust to the noise in the parameter space. The authors leave open the question whether this robustness in the parameter space also affects the robustness to the noise in the domain. In our experiments we observe that even for the best solutions on some of the games, the learned policy network is not robust against environment noise.
\citet{conti2017improving} try to address the problems of local minima (which we also observed in games like Seaquest and Enduro) by augmenting ES algorithms with a novelty search (NS) and quality diversity (QD). Their proposed algorithms add an additional criterion to the optimization procedure that encourages exploring qualitatively different solutions during training, thus reducing the risk of getting stuck in a local optimum. The authors also propose to manage a meta-population of diverse solutions allocating resources to train more promising ones. In our experiments we observe that training runs with the same hyperparameters and different initializations often converge to achieve very different scores; managing a meta-population could therefore be an easy way to improve the results and reduce the variance between the runs.
Overall, the success of \citet{conti2017improving} in improving performance with some of these newer methods in ES research strengthens our expectation that a wide range of improvements to the state of the art are possible by integrating the multitude of techniques developed in ES research over the last decades into our canonical ES.
\section{Conclusion}
The recent results provided by OpenAI~\cite{salimans2017evolution} suggest that natural evolution strategies represent a viable alternative to more common approaches used for deep reinforcement learning. In this work, we analyzed whether the results demonstrated in that work are due to the special type of evolution strategy used there. Our results suggest that even a very basic decades-old evolution strategy
provides comparable results; thus, more modern evolution strategies should be considered as a potentially competitive approach to modern deep reinforcement learning algorithms.
Since evolution strategies have different strengths and weaknesses than traditional deep reinforcement learning strategies, we also expect rich opportunities for combining the strengths of both.
\section*{Acknowledgments}
This work has partly been supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant no.\ 716721.
The authors acknowledge support by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant no.\ INST 39/963-1 FUGG.
\makeatletter
\renewcommand\section{\@startsection {section}{1}{\z@}
{-30pt \@plus -1ex \@minus -.2ex} {2.3ex \@plus.2ex}
{\normalfont\normalsize\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}
{-3.25ex\@plus -1ex \@minus -.2ex} {1.5ex \@plus .2ex}
{\normalfont\normalsize\bfseries}}
\renewcommand{\@seccntformat}[1]{\csname the#1\endcsname. }
\makeatother
\title{\bf{Skew cyclic codes over $F_{p}+uF_{p}+\dots +u^{k-1}F_{p}$}}
\author{ \bf Om Prakash and Habibul Islam \\\\
Department of Mathematics \\
Indian Institute of Technology Patna\\ Patna- 801 106, India \\
E-mail: om@iitp.ac.in and habibul.pma17@iitp.ac.in}
\begin{document}
\maketitle
\begin{abstract}
In this article, we study skew cyclic codes of length $n$ over $R_{k}=F_{p}+uF_{p}+\dots +u^{k-1}F_{p}$. We characterize the skew cyclic codes of length $n$ over $R_{k}$ as free left $R_{k}[x;\theta]$-submodules of $R_{k}[x;\theta]/\langle x^{n}-1\rangle$ and construct their generators and minimal generating sets. We also provide an algorithm to encode and decode these skew cyclic codes.
\end{abstract}
\noindent {\it Keywords} : Skew polynomial ring; Cyclic code; Skew cyclic code; Generating set; Syndrome decoding.
\noindent {\it 2010 MSC} : 94B15; 94B05; 94B60.
\section{Introduction}
Many new error-correcting codes have been obtained from cyclic codes due to advances in algebraic structures. Cyclic codes form a very useful class of linear codes and have been studied extensively in the theory of error-correcting codes over the last few decades. A cyclic code of length $n$ over a field $F$ is defined to be an ideal of the quotient ring $F[x]/\langle x^{n}-1\rangle$, and this ideal is principally generated by a factor of $x^{n}-1$. However, when the analogous concept of cyclic codes is considered over a finite ring, a cyclic code can, in general, no longer be identified with a principally generated ideal of a quotient ring. Hence, the task of finding generators of cyclic codes over a finite ring is more difficult. In 2007, Abualrub and Siap \cite{Abualrub07} studied cyclic codes over $\mathbb{Z}_{2}+u\mathbb{Z}_{2}$ and $\mathbb{Z}_{2}+u\mathbb{Z}_{2}+u^{2}\mathbb{Z}_{2}$ and obtained the generators of cyclic codes over these two rings explicitly. Later, in 2015, Singh and Kewat \cite{singh 15} generalized the approach of \cite{Abualrub07} to the rings $\mathbb{Z}_{p}[u]/\langle u^{k}\rangle$ and discussed the generators and minimal spanning sets of cyclic codes over them.\\
In 2007, Boucher et al. \cite{Db07} introduced skew polynomial rings into coding theory. They characterized skew cyclic codes of length $n$ as ideals of the quotient ring $F[x;\theta]/\langle x^{n}-1\rangle$, where $\theta$ is an automorphism of the field $F$. Later, Siap et al. \cite{siap11} considered skew cyclic codes of length $n$ as $F[x;\theta]$-submodules of $F[x;\theta]/\langle x^{n}-1\rangle$ and constructed generators of these skew cyclic codes as submodules. Recently, Dastbasteh et al. \cite{Dastbast17} studied skew cyclic codes over the ring $F_{p}+uF_{p}$ and obtained their generators.\\
Let $F_{p}$ be the Galois field of $p$ elements. For an odd prime $p$ and an integer $k\geq 1$, let $R_{k}=F_{p}+uF_{p}+\dots +u^{k-1}F_{p}$, where $u^{k}=0$. In this article, we study the skew cyclic codes of length $n$ over $R_{k}$ and construct their generators explicitly. The main aim of the study is to derive skew cyclic codes by using their generators and to show how to encode and decode these skew cyclic codes over $R_{k}$. Note that the ring $R_{k}$ is isomorphic to the quotient ring $F_{p}[u]/\langle u^{k}\rangle$, and $R_{k-i}$ is a subring of $R_{k}$ for any $k \geq i\geq 1$. Any element $w$ of the ring $R_{k}$ can be written as $w=a_{0}+ua_{1}+\dots +u^{k-1}a_{k-1}$, where $a_{i}\in F_{p}$. Let $\theta$ be an element of the automorphism group Aut$(R_{k})$ with $\theta(u)=a_{0}+ua_{1}+\dots +u^{k-1}a_{k-1}$. Since $\theta$ is an automorphism on $R_{k}$ and $u^{k}=0$, we have $\theta(u^{k})=0$ and hence $a_{0}=0$. In particular, we choose the automorphism $\theta$ with $\theta(1)=1$ and $\theta(u)=su$, for some non-zero $s$ in $F_{p}$. For this automorphism $\theta$, the set $R_{k}[x;\theta]=\big \{ a_{0}+a_{1}x+\dots +a_{n}x^{n}\mid a_{i}\in R_{k} \big \}$ forms a non-commutative ring under the usual addition of polynomials and the multiplication of polynomials subject to the rule $ax^{i} bx^{j}=a\theta^{i}(b)x^{i+j}$. This ring is known as a skew polynomial ring. Let the order of the automorphism $\theta$ be $m$, i.e., $\theta^{m}(a)=a$ for all $a\in R_{k}$. One can see that the center of $R_{k}[x;\theta]$ is $F_{p}[x^{m}]$, and hence $R_{k, n}=R_{k}[x;\theta]/\langle x^{n}-1\rangle$ is a ring when $m\mid n$; in this case, skew cyclic codes of length $n$ over $R_{k}$ are precisely the ideals of the quotient ring $R_{k, n}$. In general, however, $R_{k, n}$ is a left $R_{k}[x;\theta]$-module and skew cyclic codes of length $n$ over $R_{k}$ are left $R_{k}[x;\theta]$-submodules of $R_{k, n}$.
Since we are interested in skew cyclic codes of arbitrary length $n$ over $R_{k}$, we focus on the module structure of $R_{k, n}$ throughout this note.
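To make the multiplication rule $ax^{i}\,bx^{j}=a\theta^{i}(b)x^{i+j}$ concrete, the following small Python sketch (our own illustration, not part of the paper; the parameters $p=3$, $k=2$, $s=2$ are arbitrary choices) represents elements of $R_{k}$ as coefficient tuples $(a_{0},\dots,a_{k-1})$ and checks that $R_{k}[x;\theta]$ is indeed non-commutative when $s\neq 1$: already $x\cdot u=\theta(u)x=su\,x$ differs from $u\cdot x$.

```python
# Sketch of arithmetic in R_k[x; theta] with R_k = F_p[u]/(u^k), theta(u) = s*u.
# Elements of R_k are tuples (a_0, ..., a_{k-1}) meaning a_0 + a_1*u + ... ;
# skew polynomials are lists of such tuples (coefficient of x^i at index i).
p, k, s = 3, 2, 2                       # R_2 = F_3 + uF_3 with u^2 = 0, theta(u) = 2u

def r_add(a, b):
    return tuple((x + y) % p for x, y in zip(a, b))

def r_mul(a, b):
    # multiply in F_p[u]/(u^k): terms u^m with m >= k vanish
    c = [0] * k
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < k:
                c[i + j] = (c[i + j] + ai * bj) % p
    return tuple(c)

def theta(a, t=1):
    # theta^t applied to sum a_j u^j gives sum a_j s^(j*t) u^j
    return tuple((aj * pow(s, j * t, p)) % p for j, aj in enumerate(a))

def skew_mul(f, g):
    # the defining rule: (a x^i)(b x^j) = a * theta^i(b) * x^(i+j)
    h = [(0,) * k] * (len(f) + len(g) - 1)
    for i, ai in enumerate(f):
        for j, bj in enumerate(g):
            h[i + j] = r_add(h[i + j], r_mul(ai, theta(bj, i)))
    return h

ONE, U, ZERO = (1, 0), (0, 1), (0, 0)
x_poly = [ZERO, ONE]                    # the polynomial x
u_poly = [U]                            # the constant u

print(skew_mul(x_poly, u_poly))         # x*u = 2u*x  ->  [(0, 0), (0, 2)]
print(skew_mul(u_poly, x_poly))         # u*x         ->  [(0, 0), (0, 1)]
```

With $s=2$ in $F_{3}$ one gets $x\cdot u=2u\,x\neq u\,x$, matching the remark that $R_{k}[x;\theta]$ is non-commutative.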
\section{Definitions and Basic Results}
\df A linear code $C$ of length $n$ over $R_{k}$ is said to be a skew cyclic code if \\
\begin{enumerate}
\item $C$ is a submodule of $R^{n}_{k}$;
\item For any $c=(c_{0}, c_{1},\dots ,c_{n-1})\in C$, we have $\tau(c)=(\theta(c_{n-1}), \theta(c_{0}),\dots ,\theta(c_{n-2}))\in C$.
\end{enumerate}
Note that the above definition reduces to the definition of a cyclic code over $R_{k}$ when $\theta$ is the identity automorphism. For any codeword $c=(c_{0}, c_{1},\dots ,c_{n-1})\in R^{n}_{k}$, we can associate the polynomial $c(x)=c_{0}+c_{1}x+\dots +c_{n-1}x^{n-1}$ in $R_{k, n}=R_{k}[x;\theta]/\langle x^{n}-1 \rangle.$ With this identification, one can easily obtain the following result.\\
\theorem
A linear code $C$ of length $n$ over $R_{k}$ is a skew cyclic code if and only if the polynomial representation of $C$ is an $R_{k}[x;\theta]$-submodule of $R_{k, n}=R_{k}[x;\theta]/\langle x^{n}-1 \rangle.$
\theorem \cite{McDonald} \label{div}
Let $f(x), g(x)\in R_{k}[x;\theta]$, where the leading coefficient of $g(x)$ is a unit. Then there exist two unique polynomials $q(x), r(x)\in R_{k}[x;\theta]$ such that\\
\begin{align*}
f(x)=q(x)g(x)+r(x),
\end{align*}
where $r(x)=0$ or $deg(g(x))>deg(r(x))$. \\
This theorem is known as the right division algorithm. A similar result can be stated for left division.
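The right division just stated can be carried out computationally. The Python sketch below is our own illustration (the parameters $p=3$, $k=2$, $s=2$ and the test polynomials are arbitrary choices, not from the paper); inverses of leading coefficients in the finite ring $R_{k}$ are found by brute force, which suffices for small $p^{k}$.

```python
# Right division f = q*g + r in R_k[x; theta], assuming the leading
# coefficient of g is a unit (sketch of the right division algorithm).
from itertools import product

p, k, s = 3, 2, 2                        # R_2 = F_3 + uF_3, theta(u) = 2u
ZERO, ONE = (0,) * k, (1,) + (0,) * (k - 1)

def r_add(a, b): return tuple((x + y) % p for x, y in zip(a, b))
def r_sub(a, b): return tuple((x - y) % p for x, y in zip(a, b))

def r_mul(a, b):
    c = [0] * k
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < k:
                c[i + j] = (c[i + j] + ai * bj) % p
    return tuple(c)

def theta(a, t=1): return tuple((aj * pow(s, j * t, p)) % p for j, aj in enumerate(a))

def r_inv(a):
    # brute-force inverse in R_k (exists iff the constant term a_0 is non-zero)
    for w in product(range(p), repeat=k):
        if r_mul(a, w) == ONE:
            return w
    raise ValueError("not a unit")

def skew_mul(f, g):
    h = [ZERO] * (len(f) + len(g) - 1)
    for i, ai in enumerate(f):
        for j, bj in enumerate(g):
            h[i + j] = r_add(h[i + j], r_mul(ai, theta(bj, i)))
    return h

def trim(f):
    while f and f[-1] == ZERO:
        f = f[:-1]
    return f

def skew_add(f, g):
    n = max(len(f), len(g))
    f, g = f + [ZERO] * (n - len(f)), g + [ZERO] * (n - len(g))
    return trim([r_add(a, b) for a, b in zip(f, g)])

def skew_right_divmod(f, g):
    f, g = trim(list(f)), trim(list(g))
    q = [ZERO] * max(len(f) - len(g) + 1, 1)
    while len(f) >= len(g):
        d = len(f) - len(g)
        # leading term of (c x^d)*g is c*theta^d(lc(g))*x^(deg f); solve for c
        c = r_mul(f[-1], r_inv(theta(g[-1], d)))
        q[d] = r_add(q[d], c)
        f = trim([r_sub(a, b) for a, b in zip(f, skew_mul([ZERO] * d + [c], g))])
    return trim(q), f

f = [(0, 1), ZERO, ZERO, ONE]            # u + x^3
g = [(1, 1), ONE]                        # (1+u) + x
q, r = skew_right_divmod(f, g)
assert skew_add(skew_mul(q, g), r) == trim(f)   # f = q*g + r with deg r < deg g
```

Each loop step cancels the current leading term of $f$, so the degree strictly decreases and the procedure terminates, mirroring the uniqueness argument in the theorem.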
\proposition \label{pro 1} For any polynomial $f(x)\in R_{k}[x;\theta]$, there exist polynomials $f_{i}(x)\in R_{k-i}[x;\theta]$ such that $f(x)u^{i}=u^{i}f_{i}(x)$ for $1\leq i\leq k-1$.
\begin{proof}
Let $f(x)\in R_{k}[x;\theta]$. Then $f(x)=\sum (a_{0, i}+ua_{1,i}+\dots +u^{k-1}a_{k-1, i})x^{i} $, where $a_{j, i}\in F_{p}$ for $0\leq j\leq k-1$.\\\\
Now,
\begin{align} \label{equ 1}
\nonumber f(x)u&=\sum (a_{0, i}+ua_{1,i}+\dots +u^{k-1}a_{k-1, i})x^{i}u\\
\nonumber &=\sum (a_{0, i}\theta^{i}(u)+ua_{1,i}\theta^{i}(u)+\dots +u^{k-1}a_{k-1, i}\theta^{i}(u))x^{i}\\
\nonumber &=\sum (a_{0, i}s^{i}u+ua_{1,i}s^{i}u+\dots +u^{k-1}a_{k-1, i}s^{i}u)x^{i}\\
\nonumber &=\sum (a_{0, i}s^{i}u+ua_{1,i}s^{i}u+\dots +u^{k-2}a_{k-2, i}s^{i}u)x^{i}\\
&=uf_{1}(x)
\end{align}
where $f_{1}(x)=\sum (a_{0, i}s^{i}+ua_{1,i}s^{i}+\dots +u^{k-2}a_{k-2, i}s^{i})x^{i}\in R_{k-1}[x;\theta]$. Again, multiplying equation (\ref{equ 1}) by $u$ on the right, we get \\
\begin{align*}
f(x)u^{2}&=u\sum (a_{0, i}s^{2i}u+ua_{1,i}s^{2i}u+\dots +u^{k-2}a_{k-2, i}s^{2i}u)x^{i}\\
&=u^{2}\sum (a_{0, i}s^{2i}+ua_{1,i}s^{2i}+\dots +u^{k-3}a_{k-3, i}s^{2i})x^{i}\\
&=u^{2}f_{2}(x)
\end{align*}
where $f_{2}(x)=\sum (a_{0, i}s^{2i}+ua_{1,i}s^{2i}+\dots +u^{k-3}a_{k-3, i}s^{2i})x^{i}\in R_{k-2}[x;\theta]$. Continuing this process, we get $f(x)u^{3}=u^{3}f_{3}(x),\dots , f(x)u^{k-1}=u^{k-1}f_{k-1}(x)$, where each $f_{i}(x)\in R_{k-i}[x;\theta]$.
\end{proof}
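Proposition \ref{pro 1} can also be checked numerically. In the sketch below (our own illustration; $p=3$, $k=3$, $s=2$ and the polynomial $f$ are arbitrary choices), $f_{1}$ is obtained by scaling the coefficient of $x^{i}$ by $s^{i}$, exactly as in the proof (its $u^{k-1}$ part is irrelevant, being annihilated by the left factor $u$), and we verify $f(x)u=uf_{1}(x)$.

```python
# Numerical check of f(x)*u = u*f_1(x) in R_3[x; theta] (Proposition 1 sketch).
p, k, s = 3, 3, 2                        # R_3 = F_3 + uF_3 + u^2 F_3, theta(u) = 2u
ZERO, U = (0,) * k, (0, 1) + (0,) * (k - 2)

def r_add(a, b): return tuple((x + y) % p for x, y in zip(a, b))

def r_mul(a, b):
    c = [0] * k
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < k:
                c[i + j] = (c[i + j] + ai * bj) % p
    return tuple(c)

def theta(a, t=1): return tuple((aj * pow(s, j * t, p)) % p for j, aj in enumerate(a))

def skew_mul(f, g):
    h = [ZERO] * (len(f) + len(g) - 1)
    for i, ai in enumerate(f):
        for j, bj in enumerate(g):
            h[i + j] = r_add(h[i + j], r_mul(ai, theta(bj, i)))
    return h

f = [(1, 2, 1), (0, 1, 2), (2, 0, 1), (1, 1, 1)]      # an arbitrary f(x)

# the proof's f_1(x): scale the x^i coefficient by s^i
f1 = [tuple((pow(s, i, p) * a) % p for a in ci) for i, ci in enumerate(f)]

lhs, rhs = skew_mul(f, [U]), skew_mul([U], f1)
assert lhs == rhs                        # f(x)*u = u*f_1(x)
assert all(c[0] == 0 for c in lhs)       # every coefficient of f(x)*u lies in u*R_k
```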
\proposition \label{pro 2} The set of units of $R_{i}[x;\theta]$ is $U(R_{i} [x;\theta])=\big \{ a+uh_{1}(x)+u^{2}h_{2}(x)+\dots +u^{i-1}h_{i-1}(x) \mid a\in F^{*}_{p}$ and $h_{j}(x)\in F_{p}[x]$ for $1\leq j\leq i-1\big \}$.
\begin{proof}
We prove this result by induction on $i$. For $i=1$, $R_{1}=F_{p}$ and hence $U(F_{p}[x])=F^{*}_{p}$. For $i=2$, Lemma 13 of \cite{Dastbast17} gives the result. Assume that the result is true for $i=m>2$. In order to prove the result for $i=m+1$, let $a+uh_{1}(x)+\dots +u^{m}h_{m}(x) \in R_{m+1}[x;\theta]$, where $a\in F^{*}_{p}$ and $h_{i}(x)\in F_{p}[x]$. Then, by the induction hypothesis, $g(x)=a+uh_{1}(x)+\dots +u^{m-1}h_{m-1}(x)\in U(R_{m}[x;\theta])$.\\
Now,
\begin{align*}
(g+u^{m}h_{m})(g^{-1}-g^{-1}u^{m}h_{m}g^{-1})&=1-u^{m}h_{m}g^{-1}+u^{m}h_{m}g^{-1}-u^{m}h_{m}g^{-1}u^{m}h_{m}g^{-1}\\
&=1-u^{2m}h'_{m}h_{m}g^{-1} ~( ~by ~~Proposition ~~(\ref{pro 1}),~ h_{m}g^{-1}u^{m}=u^{m}h'_{m})\\
&=1 ~(~as~~ 2m\geq m+1, ~u^{2m}=0).
\end{align*}
Hence, $a+uh_{1}(x)+\dots +u^{m}h_{m}(x) \in U(R_{m+1}[x;\theta])$.\\
Conversely, let $f(x)\in R_{m+1}[x;\theta]$ be a unit. Then there exists a polynomial $g(x)$ in $R_{m+1}[x;\theta]$ such that $f(x)g(x)=g(x)f(x)=1$. Therefore, $f_{0}(x)g_{0}(x)=1$ where $f(x)=f_{0}(x)+uf_{1}(x)+\dots +u^{m}f_{m}(x)$ and $g(x)=g_{0}(x)+ug_{1}(x)+\dots +u^{m}g_{m}(x)$. This shows that $f_{0}(x)$ is a non-zero constant polynomial in $F_{p}[x]$. Hence, $f(x)=a+uf_{1}(x)+\dots +u^{m}f_{m}(x)$ where $a\in F^{*}_{p}.$ This completes the proof.
\end{proof}
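The sufficiency direction of Proposition \ref{pro 2} can be made constructive: writing $f=a(1+X)$ with $X=a^{-1}(f-a)$, all of whose coefficients lie in $uR_{k}$, the geometric series $\big(\sum_{t=0}^{k-1}(-X)^{t}\big)a^{-1}$ terminates (because $X^{k}=0$) and gives a two-sided inverse. The Python sketch below is our own illustration with arbitrary toy data ($p=3$, $k=3$, $s=2$, and $f=(2+u+u^{2})+2ux$), not the paper's construction.

```python
# Inverting a unit f = a + u*h_1(x) + u^2*h_2(x) in R_3[x; theta] via the
# terminating geometric series (sketch for Proposition 2, toy parameters).
p, k, s = 3, 3, 2
ZERO, ONE = (0,) * k, (1,) + (0,) * (k - 1)

def r_add(a, b): return tuple((x + y) % p for x, y in zip(a, b))

def r_mul(a, b):
    c = [0] * k
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < k:
                c[i + j] = (c[i + j] + ai * bj) % p
    return tuple(c)

def theta(a, t=1): return tuple((aj * pow(s, j * t, p)) % p for j, aj in enumerate(a))

def skew_mul(f, g):
    h = [ZERO] * (len(f) + len(g) - 1)
    for i, ai in enumerate(f):
        for j, bj in enumerate(g):
            h[i + j] = r_add(h[i + j], r_mul(ai, theta(bj, i)))
    return h

def trim(f):
    while f and f[-1] == ZERO:
        f = f[:-1]
    return f

def skew_add(f, g):
    n = max(len(f), len(g))
    f, g = f + [ZERO] * (n - len(f)), g + [ZERO] * (n - len(g))
    return [r_add(a, b) for a, b in zip(f, g)]

def skew_const(c, f):                    # multiply by a central constant c in F_p
    return [tuple((c * x) % p for x in co) for co in f]

a = 2                                    # the unit constant term, a in F_3*
f = [(2, 1, 1), (0, 2, 0)]               # f = (2 + u + u^2) + 2u*x
a_inv = pow(a, p - 2, p)

X = skew_const(a_inv, [(0,) + f[0][1:]] + f[1:])     # X = a^{-1}(f - a)
S, Xp = [ONE], [ONE]
for _ in range(1, k):                    # S = 1 + (-X) + (-X)^2 + ... ; X^k = 0
    Xp = trim(skew_mul(Xp, skew_const(p - 1, X))) or [ZERO]
    S = skew_add(S, Xp)
finv = skew_const(a_inv, S)

assert trim(skew_mul(f, finv)) == [ONE]  # right inverse
assert trim(skew_mul(finv, f)) == [ONE]  # left inverse
```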
\proposition \label{pro 3} For $k\geq 2$, the polynomial $x^{n}-1$ factors in $R_{k-1}[x;\theta]$ as $x^{n}-1=g_{1}(x)g_{2}(x)$ if and only if there exist polynomials $f_{1}(x), f_{2}(x)\in R_{k}[x;\theta]$ such that $x^{n}-1=f_{1}(x)f_{2}(x)$, where $f_{i}(x)=g_{i}(x)+u^{k-1}k_{i}(x)$ and $k_{i}(x)\in F_{p}[x]$.
\begin{proof}
Let $x^{n}-1=f_{1}(x)f_{2}(x)$ in $R_{k}[x;\theta]$ where $f_{i}(x)=g_{i}(x)+u^{k-1}k_{i}(x)$, $g_{i}(x)\in R_{k-1}[x;\theta]$.\\ Then
\begin{align*}
x^{n}-1&=(g_{1}(x)+u^{k-1}k_{1}(x))(g_{2}(x)+u^{k-1}k_{2}(x))\\
&=g_{1}(x)g_{2}(x)+g_{1}(x)u^{k-1}k_{2}(x)+u^{k-1}k_{1}(x)g_{2}(x)+u^{k-1}k_{1}(x)u^{k-1}k_{2}(x)\\
&=g_{1}(x)g_{2}(x)+u^{k-1}g'_{1}(x)k_{2}(x)+u^{k-1}k_{1}(x)g_{2}(x)+u^{2k-2}k'_{1}(x)k_{2}(x) \\
&\hspace{.5cm}~(~By ~Proposition~ (\ref{pro 1}))
\end{align*}
Therefore, in $R_{k-1}[x;\theta]$, we have $x^{n}-1=g_{1}(x)g_{2}(x)$.\\
Conversely, since $R_{k-1}[x;\theta]$ is a subring of $R_{k}[x;\theta]$, the factorization $x^{n}-1=g_{1}(x)g_{2}(x)$ in $R_{k-1}[x;\theta]$ can be regarded in $R_{k}[x;\theta]$ as well. In fact, in this case $f_{i}(x)=g_{i}(x)$ and $k_{i}(x)=0$.
\end{proof}
\cor \label{cor 1} The polynomial $x^{n}-1$ factors in $F_{p}[x]$ as $x^{n}-1=g_{1}(x)g_{2}(x)$ if and only if there exist polynomials $f_{1}(x), f_{2}(x)\in R_{k}[x;\theta]$ such that $x^{n}-1=f_{1}(x)f_{2}(x)$, where $f_{i}(x)=g_{i}(x)+ul_{1 i}(x)+u^{2}l_{2 i}(x)+\dots +u^{k-1}l_{k-1, i}(x)$ and $l_{j i}(x)\in F_{p}[x].$
\begin{proof}
The result follows by repeated application of Proposition (\ref{pro 3}).
\end{proof}
\section{Generators of skew cyclic codes over $R_{k}$}
In this section, we are interested in finding the generators of skew cyclic codes of arbitrary length $n$ over $R_{k}$. Note that if $f(x)\in R_{k}[x;\theta]$ and the leading coefficient $d$ of $f(x)$ is a unit, then $d^{-1}f(x)$ is a monic polynomial. But, by Proposition \ref{pro 2}, not every element of $R_{k}[x;\theta]$ is a unit. Therefore, a skew cyclic code $C$ over $R_{k}$ may or may not contain monic polynomials. By a simple application of Theorem \ref{div}, we can find the generators if $C$ contains a monic polynomial of minimal degree. However, the task is more difficult if $C$ does not contain any monic polynomial, or contains monic polynomials but none of minimal degree in $C$. Based on this distinction, we find the generators of skew cyclic codes for all possible cases in the next four theorems.
\theorem Let $f(x)$ be a non-monic polynomial of minimal degree in $C$. Then $f(x)=u^{i}a^{i}(x)$ where $a^{i}(x)\in R_{k-i}[x;\theta]$ for some positive integer $i$.
\begin{proof}
Let $f(x)$ be a non-monic polynomial of minimal degree in $C$, say $f(x)=a_{0}+a_{1}x+\dots +ua_{r}x^{r}$, where $a_{i}\in R_{k}$ and $a_{r}\in R_{k-1}$. Suppose $a_{r}=t_{1}+ut_{2}+\dots +u^{k-2}t_{k-1}$, where $t_{i}\in F_{p}$. Let $i$ be the least positive integer such that $t_{i}\neq 0$. Then $f(x)=a_{0}+a_{1}x+\dots +ua_{r}x^{r}$ with $a_{r}=u^{i-1}(t_{i}+ut_{i+1}+\dots +u^{k-i-1}t_{k-1})$. Therefore, $u^{k-1}f(x), u^{k-2}f(x), \dots, u^{k-i}f(x)$ all lie in $C$ and have degree less than $r$. Since $f(x)$ is a polynomial of minimal degree $r$ in $C$, we get $u^{k-1}f(x)=0, u^{k-2}f(x)=0, \dots ,u^{k-i}f(x)=0$. Hence, $f(x)=u^{i}a^{i}(x)$, where $a^{i}(x)\in R_{k-i}[x;\theta]$ has unit leading coefficient.
\end{proof}
\theorem \label{th 1} Let $C$ be a non-zero skew cyclic code of length $n$ over $R_{k}$. Suppose $C$ does not contain any monic polynomial and $a(x)=ua^{1}(x)$ is the polynomial of minimal degree in $C$. Then $C=\langle ua^{1}(x)\rangle$ with $x^{n}-1=b^{1}(x)a^{1}(x)$ in $R_{k-1}[x;\theta]$.
\begin{proof}
Let $c(x)\in C$ be a codeword. Then $c(x)$ is not a monic polynomial. Let $c(x)=c_{0}+c_{1}x+\dots +uc_{t}x^{t}$, where $c_{i}\in R_{k}$ and $c_{t}\in R_{k-1}$. Assume that $c_{t}=e_{1}+ue_{2}+\dots +u^{k-2}e_{k-1}\in R_{k-1}$. Let $j$ be the least positive integer such that $e_{j}\neq 0$. Then $c(x)=c_{0}+c_{1}x+\dots +u^{j}c_{t}x^{t}$, where $c_{i}\in R_{k}$ and $c_{t}\in R^{*}_{k-j}$. Let $a(x)=ua^{1}(x)$ be the polynomial of minimal degree $r$ in $C$ with $a^{1}(x)=a_{0}+a_{1}x+\dots +a_{r}x^{r}$, where $a_{i}\in R_{k-1}$ and $a_{r}\in R^{*}_{k-1}$. Write $c(x)=c_{1}(x)+c_{2}(x)$, where $c_{1}(x)$ contains all terms of degree up to $r-1$ and $c_{2}(x)$ contains all the remaining terms (of degree $r$ and above). If possible, let $c_{t-1}$ be a unit. Since $a_{r}\in R^{*}_{k-1}$, we have $\alpha=\theta(a_{r})\in R^{*}_{k-1}$. Also, by definition, $\theta^{i}(u)=s^{i}u$; let $d=s^{t-r}\in F^{*}_{p}$. Now,
\begin{align*}
d_{1}(x)&=u^{j-1}d^{-1}\alpha^{-1}x^{t-r}a(x)\\
&=u^{j-1}d^{-1}\alpha^{-1}\theta(ua_{0})x^{t-r}+u^{j-1}d^{-1}\alpha^{-1}\theta(ua_{1})x^{t-r+1}+\dots u^{j-1}d^{-1}\alpha^{-1}\theta(ua_{r-1})x^{t-1}\\
&\hspace{.5cm} +u^{j-1}\alpha^{-1}d^{-1}\theta(ua_{r})x^{t}\\
&=u^{j-1}d^{-1}\alpha^{-1}\theta(ua_{0})x^{t-r}+u^{j-1}d^{-1}\alpha^{-1}\theta(ua_{1})x^{t-r+1}+\dots u^{j-1}d^{-1}\alpha^{-1}\theta(ua_{r-1})x^{t-1}\\
&\hspace{.5cm} +u^{j}x^{t}\in C.
\end{align*}
Also,
\begin{align*}
d_{2}(x)&=(c_{t})^{-1}c(x)\\
&=(c_{t})^{-1}c_{0}+(c_{t})^{-1}c_{1}x+\dots +(c_{t})^{-1}c_{t-1}x^{t-1}+u^{j}x^{t}\in C.
\end{align*}
Then $d(x)=d_{2}(x)-d_{1}(x)\in C$ is a polynomial of degree $(t-1)$. Hence, by Proposition \ref{pro 2}, the leading coefficient $d_{t-1}$ of $d(x)$ is a unit. This contradicts the fact that $C$ does not contain any monic polynomial. Thus, $c_{t-1}$ cannot be a unit. By a similar argument, we can conclude that none of the coefficients of $c_{2}(x)$ is a unit. \\If possible, let the coefficient $c_{i}$ of the polynomial $c_{1}(x)$ be a unit for some integer $i$. Then $u^{k-1}c(x)=u^{k-1}c_{1}(x)\in C$, where $deg(u^{k-1}c_{1}(x))<r=$deg$(a(x)).$ This contradicts the fact that $a(x)$ is a polynomial of minimal degree in $C$. Consequently, $c(x)=uc^{1}(x)$, where $c^{1}(x)\in R_{k-1}[x;\theta]$.\\
Again, by the division algorithm, there exist two polynomials $b^{1}(x), r^{1}(x)$ in $R_{k-1}[x;\theta]$ such that\\
\begin{align*}
x^{n}-1=b^{1}(x)a^{1}(x)+r^{1}(x),
\end{align*}
where $deg(r^{1}(x))< deg(a^{1}(x))$ or $r^{1}(x)=0$. By Proposition \ref{pro 1}, we have \\
\begin{align*}
u(x^{n}-1)=b^{1}_{1}(x)ua^{1}(x)+ur^{1}(x).
\end{align*}
Thus, in $R_{k, n}=R_{k}[x;\theta]/\langle x^{n}-1\rangle$, we have $ur^{1}(x)=-b^{1}_{1}(x)ua^{1}(x)\in C$. As $deg(ur^{1}(x))=$deg$(r^{1}(x))<deg(a^{1}(x))=r$, we get $ur^{1}(x)=0$. Since $r^{1}(x)\in R_{k-1}[x;\theta]$, $r^{1}(x)=0$. Therefore, $x^{n}-1=b^{1}(x)a^{1}(x)$ in $R_{k-1}[x;\theta]$.
\end{proof}
\theorem \label{th 2} Let $C$ be a non-zero skew cyclic code of length $n$ over $R_{k}$ in which the polynomial $g(x)$ of minimal degree is monic. Then $C=\langle g(x)\rangle$ and $x^{n}-1=k(x)g(x)$ in $R_{k, n}$.
\begin{proof}
One can prove this by a simple application of Theorem \ref{div}.
\end{proof}
\theorem \label{th 3} Let $C$ be a non-zero skew cyclic code of length $n$ over $R_{k}$ containing at least one monic polynomial. Suppose $C$ does not contain any monic polynomial of minimal degree and $a(x)=ua^{1}(x)$ is the polynomial of minimal degree in $C$. Let $g(x)$ be the polynomial of minimal degree among the monic polynomials in $C$. Then $C=\langle g(x)+up_{1}(x)+\dots + u^{k-1}p_{k-1}(x), ua^{1}(x)\rangle$, where $x^{n}-1=k(x)g(x)$ in $R_{k}[x;\theta]$ and $x^{n}-1=b^{1}(x)a^{1}(x)$ in $R_{k-1}[x;\theta]$.
\begin{proof}
Let $f(x)$ be the polynomial of minimal degree among the monic polynomials in $C$, and let $a(x)=ua^{1}(x)$ be the polynomial of minimal degree in $C$, which is not monic. Let $c(x)$ be a codeword in $C$. By the division algorithm, there exist two polynomials $q_{1}(x), r_{1}(x)$ such that \\
\begin{align*}
c(x)=q_{1}(x)f(x)+r_{1}(x),
\end{align*}
where $deg(f(x))> deg(r_{1}(x))$ or $r_{1}(x)=0$. Therefore, $r_{1}(x)\in C$, and hence $r_{1}(x)$ is non-monic. By the proof of Theorem \ref{th 1}, we have $r_{1}(x)=ur_{2}(x)$. Again, by the division algorithm, we have\\
\begin{align*}
r_{2}(x)=q_{2}(x)a^{1}(x)+r_{3}(x),
\end{align*}
where $deg(a^{1}(x)) > deg(r_{3}(x))$ or $r_{3}(x)=0$. Therefore, $r_{1}(x)=ur_{2}(x)=q'_{2}(x)ua^{1}(x)+ur_{3}(x)$, which implies $ur_{3}(x)\in C$. Since $deg(a^{1}(x)) > deg(r_{3}(x)) = deg(ur_{3}(x))$, we get $ur_{3}(x)=0$. As $r_{3}(x)\in R_{k-1}[x;\theta]$, $r_{3}(x)=0$. Hence, \\
\begin{align*}
c(x)&=q_{1}(x)f(x)+r_{1}(x)\\
&=q_{1}(x)f(x)+ur_{2}(x)\\
&=q_{1}(x)f(x)+q'_{2}(x)ua^{1}(x)\in \langle f(x), ua^{1}(x) \rangle.
\end{align*}
Consequently, $C=\langle f(x), ua^{1}(x) \rangle.$
By following the proof of Theorem \ref{th 1}, we can conclude $x^{n}-1=b^{1}(x)a^{1}(x)$ in $R_{k-1}[x;\theta]$.\\
Further, let $f(x)=f_{1}(x)+uf_{2}(x)+\dots +u^{k-1}f_{k}(x)$, where $f_{i}(x)\in F_{p}[x]$. Then, by the division algorithm, we have\\
\begin{align*}
x^{n}-1=q_{4}(x)(f_{1}(x)+uf_{2}(x)+\dots +u^{k-1}f_{k}(x))+ r_{4}(x),
\end{align*}
where $r_{4}(x)=0$ or $deg(r_{4}(x))< deg(f_{1}(x))= deg(f(x))$. Therefore, $r_{4}(x)\in C$ and this shows that $r_{4}(x)$ is not a monic polynomial. Hence, from the proof of Theorem \ref{th 1}, we conclude that $r_{4}(x)=l(x)ur_{5}(x)=ul'(x)r_{5}(x)$. Thus,\\
\begin{align*}
x^{n}-1=q_{4}(x)(f_{1}(x)+uf_{2}(x)+\dots +u^{k-1}f_{k}(x))+ ul'(x)r_{5}(x).
\end{align*}
In $F_{p}[x]$, we have \\
\begin{align*}
x^{n}-1=q_{6}(x)f_{1}(x).
\end{align*}
Now, by Corollary \ref{cor 1}, there exists a polynomial $g(x)$ in $R_{k}[x;\theta]$ which is a right divisor of $x^{n}-1$ in $R_{k}[x;\theta]$ such that $g(x)=f_{1}(x)+ue_{1}(x)+\dots +u^{k-1}e_{k-1}(x)$ with $deg(g(x)) = deg(f_{1}(x))$. Therefore, \\
\begin{align*}
f(x)&=f_{1}(x)+uf_{2}(x)+\dots +u^{k-1}f_{k}(x)\\
&=g(x)+u(f_{2}(x)-e_{1}(x))+\dots +u^{k-1}(f_{k}(x)-e_{k-1}(x)).
\end{align*}
Consequently, $C=\langle f(x), ua^{1}(x)\rangle =\langle g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x), ua^{1}(x)\rangle$, where $x^{n}-1 = k(x)g(x)$ in $R_{k}[x;\theta]$ and $x^{n}-1=b^{1}(x)a^{1}(x)$ in $R_{k-1}[x;\theta]$.
\end{proof}
\proposition \label{pro 4} Let $C=\langle g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x), ua^{1}(x)\rangle$ be a non-zero skew cyclic code as given in Theorem \ref{th 3}. Then $a^{1}(x)\mid (g(x)+up_{1}(x)+\dots +u^{k-2}p_{k-2}(x))$ mod $u^{k-1}$ and $\frac{x^{n}-1}{g(x)}[up_{1}(x)+\dots +u^{k-1}p_{k-1}(x)]\in \langle ua^{1}(x)\rangle$.
\begin{proof}
Let $ut(x)\in C=\langle g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x), ua^{1}(x)\rangle$, where $t(x)\in R_{k-1}[x;\theta]$. By the division algorithm, we have\\
\begin{align*}
t(x)=q(x)a^{1}(x)+r(x),
\end{align*}
where $deg(r(x)) < deg(a^{1}(x))$ or $r(x)=0$. Then $ut(x)=q'(x)ua^{1}(x)+ur(x)$, and hence $ur(x)\in C$. This is a contradiction unless $ur(x)=0$, i.e., $r(x)=0$. Therefore, $ut(x)=q'(x)ua^{1}(x)\in \langle ua^{1}(x)\rangle$. As a result, for any $ut(x)\in C$, we get $t(x)=q(x)a^{1}(x)$ and $ut(x)\in \langle ua^{1}(x)\rangle$. Note that $u( g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x) )= u(g(x)+up_{1}(x)+\dots +u^{k-2}p_{k-2}(x))\in C$. Hence, from the above discussion, we conclude that $a^{1}(x)\mid (g(x)+up_{1}(x)+\dots +u^{k-2}p_{k-2}(x))$ mod $u^{k-1}$. Further,\\
\begin{align*}
&\frac{x^{n}-1}{g(x)}[g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x)]\\
&=u[(\frac{x^{n}-1}{g(x)})'p_{1}(x)+\dots +(\frac{x^{n}-1}{g(x)})'u^{k-2}p_{k-1}(x)]\in C.
\end{align*}
Thus, $\frac{x^{n}-1}{g(x)}[up_{1}(x)+\dots +u^{k-1}p_{k-1}(x)]\in \langle ua^{1}(x)\rangle$.
\end{proof}
\section{Minimal spanning set}
In this section, we discuss the minimal spanning sets of skew cyclic codes of length $n$ for the different cases given in Theorems \ref{th 1}, \ref{th 2} and \ref{th 3}. These minimal spanning sets help us find the generator matrices and cardinalities of skew cyclic codes over $R_{k}.$
\theorem \label{min th 1} Let $C=\langle ua^{1}(x)\rangle$ be a non-zero skew cyclic code of length $n$ over $R_{k}$, where $a(x)=ua^{1}(x)$ is the polynomial of minimal degree $r$ in $C$ and $x^{n}-1=b^{1}(x)a^{1}(x)$ in $R_{k-1}[x;\theta]$. Then \\
\begin{align*}
\Gamma= \big \{ ua^{1}(x), xua^{1}(x), \dots, x^{n-r-1}ua^{1}(x)\big \}
\end{align*}
forms a minimal generating set for the code $C$ and $\mid C\mid =(p^{k-1})^{n-r}.$
\begin{proof}
Let $C=\langle ua^{1}(x)\rangle$, where $x^{n}-1=b^{1}(x)a^{1}(x)$ in $R_{k-1}[x;\theta]$, and let $c(x)\in C$. Then $c(x)=j(x)ua^{1}(x)$. Write $j(x)=j_{1}(x)+u^{k-1}j_{2}(x)$, where $j_{1}(x)\in R_{k-1}[x;\theta]$. Then $c(x)=j(x)ua^{1}(x)=(j_{1}(x)+u^{k-1}j_{2}(x))ua^{1}(x)=uj_{3}(x)a^{1}(x)$. If $deg(j_{3}(x)) \leq (n-r-1)$, then $c(x)\in$ span$(\Gamma)$. Otherwise, by the division algorithm, we have\\
\begin{align*}
j_{3}(x)=q_{1}(x)\frac{x^{n}-1}{a^{1}(x)}+r_{1}(x),
\end{align*}
where $deg(r_{1}(x)) < deg\big(\frac{x^{n}-1}{a^{1}(x)}\big)=(n-r)$ or $r_{1}(x)=0$.\\
Therefore, \\
\begin{align*}
c(x)=uj_{3}(x)a^{1}(x)&=u(q_{1}(x)\frac{x^{n}-1}{a^{1}(x)}+r_{1}(x))a^{1}(x)\\
&=ur_{1}(x)a^{1}(x).
\end{align*}
Since $deg(r_{1}(x))\leq (n-r-1)$, we get $c(x)\in$ span$(\Gamma).$ Clearly, none of the elements of $\Gamma$ is a linear combination of the preceding elements. Therefore, $\Gamma$ is a minimal generating set for the skew cyclic code $C$. Since $r_{1}(x)\in R_{k-1}[x;\theta]$, $\mid C\mid = (p^{k-1})^{n-r}.$
\end{proof}
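A toy instance of this theorem (our own illustration, not from the paper): take $p=3$, $k=2$, $\theta(u)=2u$, $n=2$, and $a^{1}(x)=x+2$, so that $x^{2}-1=(x+1)(x+2)$ in $F_{3}[x]$ and $r=1$. Then $\Gamma=\{u(x+2)\}$, and we can check both the predicted cardinality $\mid C\mid=(p^{k-1})^{n-r}=3$ and closure under the skew cyclic shift $\tau$ from the definition.

```python
# Toy check of the minimal generating set theorem: C = <u(x+2)> in R_2[x; theta]
# with p = 3, k = 2, theta(u) = 2u, n = 2, r = 1.  Expect |C| = (p^(k-1))^(n-r) = 3.
p, k, s = 3, 2, 2
n, r = 2, 1

def theta(a):
    return tuple((aj * pow(s, j, p)) % p for j, aj in enumerate(a))

def scal(c, v):                          # F_p-scalar multiple of a codeword
    return tuple(tuple((c * x) % p for x in co) for co in v)

gen = ((0, 2), (0, 1))                   # u*a^1(x) = 2u + u*x as the vector (2u, u)
C = {scal(c, gen) for c in range(p)}     # all R_1 = F_3 multiples of the generator

assert len(C) == (p ** (k - 1)) ** (n - r)       # |C| = 3

def tau(cw):                             # skew cyclic shift from the definition
    return (theta(cw[-1]),) + tuple(theta(ci) for ci in cw[:-1])

assert all(tau(cw) in C for cw in C)     # C is closed under tau
```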
\theorem \label{min th 2} Let $C=\langle g(x)\rangle$ be a non-zero skew cyclic code of length $n$, where $g(x)$ is the monic polynomial of minimal degree $r$ in $C$ and $x^{n}-1=k(x)g(x)$ in $R_{k, n}$. Then\\
\begin{align*}
\Gamma=\{ g(x), xg(x), \dots, x^{n-r-1}g(x) \}
\end{align*}
forms a minimal generating set for the code $C$ and $\mid C\mid = (p^{k})^{n-r}$.
\begin{proof}
The proof is similar to that of Theorem \ref{min th 1}.
\end{proof}
\theorem \label{min th 3} Let $C=\langle g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x), ua^{1}(x)\rangle$ be a non-zero skew cyclic code of length $n$ over $R_{k}$ where $a(x)=ua^{1}(x)$ is the polynomial of minimal degree $t$ in $C$ which is not monic, $g(x)$ is the polynomial of minimal degree $r$ among all monic polynomials in $C$, $x^{n}-1=k(x)g(x)$ in $R_{k}[x;\theta]$ and $x^{n}-1=b^{1}(x)a^{1}(x)$ in $R_{k-1}[x;\theta]$. Then \\
\begin{align*}
\Gamma=\big \{ h(x), xh(x), \dots ,x^{n-r-1}h(x), ua^{1}(x), xua^{1}(x), \dots ,x^{r-t-1}ua^{1}(x)\big \}
\end{align*}
forms a minimal generating set for the code $C$ and $\mid C\mid =(p^{k})^{n-r}(p^{k-1})^{r-t}$, where $h(x)=g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x)$.
\begin{proof}
Let $c(x)\in C$. Then $c(x)=j_{1}(x)(g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x))+j_{2}(x)ua^{1}(x).$ If $deg(j_{1}(x))\leq (n-r-1)$, then $j_{1}(x)(g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x))\in$ span$(\Gamma)$. Otherwise, by the division algorithm, we have\\
\begin{align*}
j_{1}(x)=q_{1}(x)\frac{x^{n}-1}{g(x)}+r_{1}(x),
\end{align*}
where $r_{1}(x)=0$ or $deg(r_{1}(x))<deg(\frac{x^{n}-1}{g(x)})=n-r.$\\
Therefore,
\begin{align*}
c(x)&=j_{1}(x)(g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x))+j_{2}(x)ua^{1}(x)\\
&=(q_{1}(x)\frac{x^{n}-1}{g(x)}+r_{1}(x))(g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x))+j_{2}(x)ua^{1}(x)\\
&=u[q'_{1}(x)p_{1}(x)+j'_{2}(x)a^{1}(x)+uq_{2}(x)p_{2}(x)+\dots +u^{k-2}p_{k-1}(x)]+ \\
&\hspace{.5cm}r_{1}(x)(g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x)).
\end{align*}
Since $deg(r_{1}(x))\leq (n-r-1)$, $r_{1}(x)(g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x))\in$ span$(\Gamma)$. Now, we prove that $uk(x)\in$ span$(\Gamma)$ for every $uk(x)\in C$.\\
For this, let $uk(x)\in C$ with $deg(k(x))\geq deg(g(x))$. Then, by the division algorithm, we have\\
\begin{align*}
k(x)=q(x)(g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x))+r(x),
\end{align*}
where $r(x)=0$ or $deg(r(x))<$deg$(g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x))=$deg$(g(x))=r$. Hence,\\
\begin{align*}
uk(x)=q'(x)u(g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x))+ur(x).
\end{align*}
Now, $deg(q'(x))= deg(q(x))= deg(k(x))-r\leq n-r-1$. Therefore, $q'(x)u(g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x))\in $span$(\Gamma).$
Finally, we prove that $ur(x)\in $span$(\Gamma)$. Note that $deg(ur(x))< deg(g(x))$ and $deg(ur(x)) \geq deg(a^{1}(x))$. Also, from the proof of Proposition \ref{pro 4}, we know that $ur(x)\in \langle ua^{1}(x)\rangle$; therefore, $ur(x)=m(x)ua^{1}(x)=um'(x)a^{1}(x)$. Consequently, $ur(x)=l_{0}ua^{1}(x)+l_{1}xua^{1}(x)+\dots +l_{r-t-1}x^{r-t-1}ua^{1}(x)\in $span$(\Gamma)$. Since none of the elements of the set $\Gamma$ is a linear combination of the preceding elements, $\Gamma$ is a minimal generating set for the code $C$ and $\mid C\mid = (p^{k})^{n-r}(p^{k-1})^{r-t}$.
\end{proof}
\section{Encoding of the skew cyclic codes over $R_{k}$}
Now, we propose an encoding algorithm for skew cyclic codes of arbitrary length $n$ over $R_{k}$ as an application of Theorems \ref{min th 1}, \ref{min th 2} and \ref{min th 3}.
\theorem \label{enc th} Let $C$ be a skew cyclic code of length $n$ over $R_{k}$.\\ \\
\textbf{Case I:} If $C=\langle ua^{1}(x)\rangle$ where $a(x)=ua^{1}(x)$ is the polynomial of minimal degree $r$ in $C$ and $x^{n}-1=b^{1}(x)a^{1}(x)$ in $R_{k-1}[x;\theta]$, then any codeword $c(x)\in C$ can be encoded as\\
\begin{align*}
c(x)=[t_{0}(x)+ut_{1}(x)+\dots +u^{k-2}t_{k-2}(x)]ua^{1}(x),
\end{align*}
where $t_{0}(x)+ut_{1}(x)+\dots +u^{k-2}t_{k-2}(x)$ is a polynomial of degree $\leq (n-r-1)$ in $R_{k-1}[x;\theta].$\\ \\
\textbf{Case II:} If $C=\langle g(x)\rangle$ where $g(x)$ is the monic polynomial of minimal degree $r$ in $C$ and $x^{n}-1=k(x)g(x)$ in $R_{k, n}$, then any codeword $c(x)\in C$ can be encoded as\\
\begin{align*}
c(x)=[t_{0}(x)+ut_{1}(x)+\dots +u^{k-1}t_{k-1}(x)]g(x),
\end{align*}
where $t_{0}(x)+ut_{1}(x)+\dots +u^{k-1}t_{k-1}(x)$ is a polynomial of degree $\leq (n-r-1)$ in $R_{k}[x;\theta].$\\ \\
\textbf{Case III:} If $C=\langle g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x), ua^{1}(x) \rangle$ where $g(x)$ is the polynomial of minimal degree $r$ among all monic polynomials in $C$, $a(x)=ua^{1}(x)$ is the polynomial of minimal degree $\tau$ in $C$, $x^{n}-1=b^{1}(x)a^{1}(x)$ in $R_{k-1}[x;\theta]$, $x^{n}-1=k(x)g(x)$ in $R_{k}[x;\theta]$, $a^{1}(x)\mid (g(x)+up_{1}(x)+\dots +u^{k-2}p_{k-2}(x))$ mod $u^{k-1}$ and $\frac{x^{n}-1}{g(x)}[up_{1}(x)+\dots +u^{k-1}p_{k-1}(x)]\in \langle ua^{1}(x)\rangle$, then any codeword $c(x)\in C$ can be encoded as\\
\begin{align*}
c(x)&=[t_{0}(x)+ut_{1}(x)+\dots +u^{k-1}t_{k-1}(x)](g(x)+up_{1}(x)+\dots +u^{k-1}p_{k-1}(x))\\
&\hspace{.5cm}+[j_{0}(x)+uj_{1}(x)+\dots +u^{k-2}j_{k-2}(x)]ua^{1}(x),
\end{align*}
where $t_{0}(x)+ut_{1}(x)+\dots +u^{k-1}t_{k-1}(x)$ is a polynomial of degree $\leq (n-r-1)$ in $R_{k}[x;\theta]$ and $j_{0}(x)+uj_{1}(x)+\dots +u^{k-2}j_{k-2}(x)$ is a polynomial of degree $\leq (r-\tau-1)$ in $R_{k-1}[x;\theta].$\\
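The encodings in Cases I--III are products in the skew polynomial ring $R_{k}[x;\theta]$, where $x\,a = \theta(a)\,x$. A minimal Python sketch of this multiplication (illustrative only; the coefficient-list representation of $R_{k}$ and the helper names are our own, not part of the paper):

```python
# Skew polynomial multiplication in R_k[x; theta], R_k = F_p[u]/(u^k),
# theta(u) = -u.  An element of R_k is a list [a_0, ..., a_{k-1}] standing
# for a_0 + a_1 u + ... + a_{k-1} u^{k-1}; a polynomial is a list of such
# elements, with the coefficient of x^i at index i.
p, k = 3, 3  # R_3 = F_3 + u F_3 + u^2 F_3, as in the example below

def theta(a, times=1):
    # theta(u^m) = (-u)^m, so the u^m coefficient is scaled by (-1)^(m*times)
    return [(c * (-1) ** (m * times)) % p for m, c in enumerate(a)]

def skew_mul(f, g):
    # skew convolution: (a x^i)(b x^j) = a theta^i(b) x^(i+j)
    out = [[0] * k for _ in range(len(f) + len(g) - 1)]
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            tb = theta(b, i)
            for m in range(k):
                for nn in range(k - m):  # u^k = 0 truncates the product
                    out[i + j][m + nn] = (out[i + j][m + nn] + a[m] * tb[nn]) % p
    return out
```

For instance, $x\cdot u = \theta(u)x = -ux = 2ux$ over $F_{3}$, which the sketch reproduces.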
Note that if the skew cyclic code $C$ is of the form given in Theorem \ref{th 3}, we follow the encoding process of Case III of Theorem \ref{enc th} and use syndrome decoding for decoding. We verify the encoding and decoding algorithms for Case III of Theorem \ref{enc th} through an example.
\example Let $k=3$ and $p=3$, $R_{3} = F_{3}+uF_{3}+u^{2}F_{3}$, $R_{2} = F_{3}+uF_{3}$ and the automorphism $\theta(u) = -u$ on $R_{3}$. Take $n = 6, r = 4, \tau = 2$ and $C=\langle x^{4}+(1+u+u^{2})x^{2}-(u+u^{2})x+(1+u+u^{2}), u(x^{2}-x+1)\rangle$. Suppose the sender wishes to transmit two strings $I = (1+u+2u^{2}, 2u+u^{2})\in R^{2}_{3}$ and $J = (2+u, u)\in R^{2}_{2}$. Following Case III of Theorem \ref{enc th}, the sender encodes the two strings $I, J$ as\\
\begin{align*}
Encode(I, J) &= ((1+u+2u^{2})x+2u+u^{2})(x^{4}+(1+u+u^{2})x^{2}-(u+u^{2})x+\\
&\hspace{.5cm}(1+u+u^{2}))+u((2+u)x+u)(x^{2}-x+1)\\
& = (1+u+2u^{2})x^{5}+(2u+u^{2})x^{4}+(1+2u)x^{3}+ux^{2}+(1+2u)x\\
&\hspace{.5cm}+2u+u^{2}.
\end{align*}
Therefore, the sender sends the encoded string $(1+u+2u^{2}, 2u+u^{2}, 1+2u, u, 1+2u, 2u+u^{2})$ through an open channel. Due to channel noise, suppose the receiver receives the string $(1+u+2u^{2}, 2u+2u^{2}, 1+2u, u, 1+2u, 2u+u^{2})$ (the message with some errors). Note that the number of symbols in the input is $10$, whereas in the output it is $18$. The receiver follows the syndrome decoding algorithm to retrieve the actual string (message) sent by the sender, proceeding as follows.\\
\begin{align*}
t_{1}(x)+ut_{2}(x)+u^{2}t_{3}(x) &= (1+u+2u^{2})x^{5}+(2u+2u^{2})x^{4}+(1+2u)x^{3}+ux^{2}\\
&\hspace{.5cm}+(1+2u)x+2u+u^{2},
\end{align*}
which gives $t_{1}(x) = x^{5}+x^{3}+x$, $t_{2}(x)= x^{5}+2x^{4}+2x^{3}+x^{2}+2x+2$ and $t_{3}(x)= 2x^{5}+2x^{4}+1.$ Therefore, the syndromes are given by
\begin{align*}
e_{1}(x) &= t_{1}(x)(x^{2}-1)=0;\\
e_{2}(x) &= t_{2}(x)(x^{4}+x^{3}-x-1) = 0;\\
e_{3}(x) &= t_{3}(x)(x^{4}+x^{3}-x-1)\\
&= x+x^{2}-x^{4}-x^{5}.
\end{align*}
Moreover, \\
\begin{align*}
x^{4}(x^{4}+x^{3}-x-1) = x+x^{2}-x^{4}-x^{5}.
\end{align*}
Thus, $e_{3}(x)$ is the syndrome of $x^{4}$. Consequently, the receiver can detect and correct the error term $u^{2}x^{4}$. Applying the division algorithm, the receiver obtains the strings $I$ and $J$ as follows:\\
\begin{eqnarray}\label{equation}
&&(1+u+2u^{2})x^{5}+(2u+u^{2})x^{4}+(1+2u)x^{3}+ux^{2}+(1+2u)x+2u+u^{2}\nonumber\\
&&= ((1+u+2u^{2})x+2u+u^{2})(x^{4}+(1+u+u^{2})x^{2}-(u+u^{2})x+(1+u+u^{2}))\nonumber\\
&&\hspace{1.4cm}+u((2+u)x+u)(x^{2}-x+1).
\end{eqnarray}
Hence, the receiver can extract $I$ and $J$ from \eqref{equation} above.\\
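The $u^{2}$-level syndrome computation above can be checked with a small script (a plain-Python sketch of our own; since $\theta$ fixes $F_{3}$, the arithmetic at a fixed $u$-level is ordinary polynomial arithmetic modulo $x^{6}-1$):

```python
# Verify e_3(x) = t_3(x)(x^4 + x^3 - x - 1) mod (x^6 - 1) over F_3.
p, n = 3, 6

def mul_mod(f, g):
    # cyclic convolution: multiply in F_p[x] and reduce mod x^n - 1
    out = [0] * n
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[(i + j) % n] = (out[(i + j) % n] + a * b) % p
    return out

t3 = [1, 0, 0, 0, 2, 2]   # 2x^5 + 2x^4 + 1 (coefficient of x^i at index i)
h = [2, 2, 0, 1, 1, 0]    # x^4 + x^3 - x - 1, with -1 = 2 in F_3
e3 = mul_mod(t3, h)       # the syndrome of the received u^2-part
```

The same routine confirms that $e_{1}(x)=t_{1}(x)(x^{2}-1)$ vanishes and that $x^{4}$ has the same syndrome as $t_{3}(x)$, so the error position is identified.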
In the next example, we construct some skew cyclic codes over $R_{3}$ as proposed in Theorems \ref{th 1} and \ref{th 2}.\\
\example
Let $F_{5}$ be the Galois field of order $5$ and $R_{3}=F_{5}+uF_{5}+u^{2}F_{5}$. Consider the automorphism $\theta$ on $R_{3}$ given by $\theta(u)=-u$, i.e. $\theta(a+ub+u^{2}c)= a-ub+u^{2}c$ where $a, b, c\in F_{5}$.\\
Here, we exhibit some principally generated skew cyclic codes of length $4$. First, we discuss skew cyclic codes with non-monic generators.\\
We consider factorizations of $x^{4}-1$ of the form $x^{4}-1=(ax^{2}+b)(cx^{2}+d)$, where $a,b,c,d\in F_{5}+uF_{5}$. This gives the possible factorizations listed in Table 1.\\ \\
\begin{tabular}{ |p{5.50cm}|p{1.25cm}|p{2.50cm}|p{1.5cm}|p{1.5cm}| }
\hline
\multicolumn{5}{|c|}{\textbf{Table 1}: Principally generated skew cyclic codes of length $4$ over $R_{3}$} \\
\hline
$x^{4}-1= f_{1}(x)f_{2}(x)$ & No. of distinct factors &Codes generated by non-monic poly.& Rank(C)& Distance, d(C)\\
\hline
$[(1+uk)x^{2}+1+uk][(1-uk)x^{2}+4+uk]$ & 10 & $C_{1}=\langle uf_{1}\rangle, C_{2}=\langle uf_{2}\rangle$ & 2&2\\
\hline
$[(4-uk)x^{2}+1+uk][(4+uk)x^{2}+4+uk]$ & 10 & $C_{1}=\langle uf_{1}\rangle, C_{2}=\langle uf_{2}\rangle$ &2&2\\
\hline
$[(2+uk)x^{2}+2+uk][(3+uk)x^{2}+2-uk]$ &10 & $C_{1}=\langle uf_{1}\rangle, C_{2}=\langle uf_{2}\rangle$& 2&2\\
\hline
$[(2-uk)x^{2}+3+uk][(3-uk)x^{2}+3-uk]$ &10 & $C_{1}=\langle uf_{1}\rangle, C_{2}=\langle uf_{2}\rangle$& 2&2\\
\hline
\end{tabular}\\ \\
Note that in the first row of Table 1, we factorized $x^{4}-1$ as \\
\begin{align*}
x^{4}-1= [(1+uk)x^{2}+1+uk][(1-uk)x^{2}+4+uk],
\end{align*}
where $k = 0, 1, 2, 3, 4.$ By putting $k=0$, we get
\begin{align*}
x^{4}-1&=(x^{2}+4)(x^{2}+1)\\
&=(x^{2}-1)(x^{2}+1).
\end{align*}
Moreover, $(x^{2}+1)$ can be factorized as $(x^{2}+1)=(ax+b)(cx+d)$ in $(F_{5}+uF_{5})[x;\theta]$ as follows:
\begin{enumerate}
\item $(x^{2}+1)=[(1+ut)x+2+us][(1+ut)x+3+us];$
\item $(x^{2}+1)=[(4+ut)x+2+us][(4+ut)x+3+us];$
\item $(x^{2}+1)=[(2+ut)x+1-us][(3+u(5-t))x+1+us];$
\item $(x^{2}+1)=[(2+ut)x+4-us][(3+u(5-t))x+4+us];$
\end{enumerate}
where $t, s = 0, 1, 2, 3, 4.$ Note that there are $4\times 50=200$ distinct linear factors in $(F_{5}+uF_{5})[x;\theta]$. Therefore, there are $200$ skew cyclic codes of length $4$ over $R_{3}$, each of rank $3$. Similarly, we can factorize $x^{2}-1$ in $(F_{5}+uF_{5})[x;\theta]$ to get more skew cyclic codes generated by non-monic polynomials over $R_{3}$. \\
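The families above can be verified mechanically; a short Python sketch (our own helper code, with elements of $F_{5}+uF_{5}$ stored as pairs $[a_{0},a_{1}]$) checks family (1) for all $t,s$:

```python
# Verify (x^2+1) = [(1+ut)x + 2+us][(1+ut)x + 3+us] in (F_5+uF_5)[x; theta]
# for all t, s in F_5, where theta(u) = -u and u^2 = 0.
p, k = 5, 2

def theta(a, times=1):
    return [(c * (-1) ** (m * times)) % p for m, c in enumerate(a)]

def skew_mul(f, g):
    # (a x^i)(b x^j) = a theta^i(b) x^(i+j), truncating at u^k = 0
    out = [[0] * k for _ in range(len(f) + len(g) - 1)]
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            tb = theta(b, i)
            for m in range(k):
                for nn in range(k - m):
                    out[i + j][m + nn] = (out[i + j][m + nn] + a[m] * tb[nn]) % p
    return out

for t in range(p):
    for s in range(p):
        left = [[2, s], [1, t]]    # (1+ut)x + (2+us); x^i coefficient at index i
        right = [[3, s], [1, t]]   # (1+ut)x + (3+us)
        assert skew_mul(left, right) == [[1, 0], [0, 0], [1, 0]]  # = x^2 + 1
```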
Now, consider the skew cyclic codes over $R_{3}$ whose generators are monic polynomials in $C$ as given in Theorem \ref{th 2}.\\
Let $C = \langle f(x)\rangle = \langle (1+4u+u^{2})x^{2}+4+u+4u^{2}\rangle$ where $x^{n}-1=k(x)f(x)$ in $R_{3, n}$. Then the rank of the skew cyclic code $C$ is $2$, while the generator matrix $G$ and the parity check matrix $H$ are given by\\
\[
G=
\begin{bmatrix}
4+u+4u^{2} & 0 & 1+4u+u^{2} & 0 \\
0 & 4+u+4u^{2} & 0 & 1+4u+u^{2} \\
\end{bmatrix}
\]
and
\[
H=
\begin{bmatrix}
1+u & 0 & 1+u & 0 \\
0 & 1+u & 0 & 1+u \\
\end{bmatrix}.
\]
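One can check directly that every row of $G$ is orthogonal to every row of $H$ over $R_{3}=F_{5}[u]/(u^{3})$ (a plain-Python sketch of our own; the inner products are taken componentwise, and $\theta$ plays no role in this particular check):

```python
# Check G H^T = 0 over R_3 = F_5[u]/(u^3); elements are lists [a0, a1, a2].
p, k = 5, 3

def rmul(a, b):
    out = [0] * k
    for m in range(k):
        for nn in range(k - m):   # truncate at u^3 = 0
            out[m + nn] = (out[m + nn] + a[m] * b[nn]) % p
    return out

def radd(a, b):
    return [(x + y) % p for x, y in zip(a, b)]

zero = [0, 0, 0]
g1, g2 = [4, 1, 4], [1, 4, 1]     # 4+u+4u^2 and 1+4u+u^2
h1 = [1, 1, 0]                    # 1+u
G = [[g1, zero, g2, zero], [zero, g1, zero, g2]]
H = [[h1, zero, h1, zero], [zero, h1, zero, h1]]
for grow in G:
    for hrow in H:
        acc = zero
        for a, b in zip(grow, hrow):
            acc = radd(acc, rmul(a, b))
        assert acc == zero        # every inner product vanishes
```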
\section{Conclusion}
In this article, we study skew cyclic codes of arbitrary length $n$ over $R_{k}=F_{p}+uF_{p}+\dots +u^{k-1}F_{p}$ with $u^{k}=0$. The generators and minimal spanning sets of these codes are obtained. Further, we propose an algorithm to encode such skew cyclic codes.
\section*{Acknowledgement}
The authors are thankful to the University Grants Commission (UGC), Govt. of India for financial support under Ref. No. 20/12/2015(ii)EU-V dated 31/08/2016 and to the Indian Institute of Technology Patna for providing the research facilities.
\section*{INTRODUCTION}
\par Breath-held cine-MRI is the key component of a cardiac MRI exam, which offers valuable assessments about the structure and function of the heart. However, the acquisition of the data at high spatial and temporal resolution requires long breath-hold durations, which is often challenging for patients with chronic obstructive pulmonary disease (COPD) or obesity \cite{copd}. Pediatric patients, who are unable to follow complex breath-holding instructions, often have to be sedated for the scan. In addition, multiple breath-holds along with intermittent pauses for recovery also result in prolonged scan times, adversely impacting patient comfort and compliance. Several acquisition and reconstruction techniques have been introduced to improve cardiac cine MRI. Early work relied on the reduction of breath-hold durations in cine MRI by acquiring undersampled k-space measurements. The images were reconstructed by exploiting the structure of x-f space \cite{dime,paradise,blast}, the diversity of coil sensitivities \cite{sense}, and the sparsity of k-space \cite{sparse}. Real-time methods that rely on parallel MRI \cite{blast} were introduced for subjects who cannot hold their breath, but these often suffer from lower image quality. Low-rank based schemes that rely on k-space navigators need different subspace/rank models for cardiac and non-cardiac spatial regions \cite{christodoulou, brinegar}, requiring user intervention. Another strategy is to estimate the cardiac and respiratory phases, and explicitly bin the data to their respective phases, followed by recovery using compressed sensing \cite{grasp,xdgrasp}. These schemes rely on a series of complex steps, including bandpass filtering using prior information about the cardiac and respiratory rates, and peak identification to estimate the phases.
We had recently introduced the SToRM \cite{storm} framework, which enables ungated cardiac cine imaging in the free-breathing mode using radial acquisitions. The SToRM algorithm assumes that the images in the free-breathing dataset lie on a smooth and low-dimensional manifold, parameterized by a few variables (e.g. cardiac \& respiratory phases). The acquisition scheme relies on navigator radial spokes, which are used to compute the graph Laplacian matrix that captures the structure of the manifold. An off-diagonal entry of the Laplacian matrix is high if the corresponding pair of frames have similar cardiac and respiratory phases, even though they may be well-separated in time. This implicit soft-binning strategy offers the potential to simultaneously image cardiac and respiratory function, and eliminates the need for explicit binning of data as in \cite{grasp,xdgrasp}. Since the framework does not require the associated complex processing steps that assume the periodicity of the cardiac/respiratory motion, it is readily applicable to several dynamic applications, including speech imaging as shown in \cite{storm}, or cardiac applications involving arrhythmia. Conceptually similar manifold models have been proposed by other groups \cite{bhatia,usman}. Despite promising results, the above manifold models still have some deficiencies that restrict their clinical use. Specifically, the need to reconstruct and store the entire dataset (around 1000 frames) makes the algorithms memory demanding and computationally expensive, and restricts their eventual extension to 3-D applications. Another challenge that impairs the quality of the reconstruction is the sensitivity of the Laplacian estimation process to noise as well as subtle patient motion.
In this work, we propose a bandlimited SToRM (b-SToRM) framework to overcome both of these challenges, and determine its utility on cardiac MRI patients. We introduce a bandlimited model for the manifold shape to improve the estimation of the Laplacian from the navigators. Specifically, we model the manifold in high dimensional (equal to the number of pixels) space as the zero level-set of a band-limited potential function. We show that under the bandlimited assumption, exponential feature maps of each of the images can be annihilated by a finite impulse response filter whose support is the same as that of the Fourier co-efficients of the potential function. These annihilation relations translate to a low rank structure of the matrix of feature maps. We pose the recovery of the navigators from their noisy measurements as a nuclear norm minimization of the matrix of feature maps. We obtain a Laplacian matrix that is more robust to noise and subtle motion artifacts than the previous SToRM approach, as a by-product of the above optimization scheme.
In order to reduce the computational complexity and memory demand by an order of magnitude, we approximate the Laplacian matrix by a few of its eigen vectors. The eigen vectors of the Laplacian are termed Fourier exponentials on the manifold/graph \cite{ortegaGraph}. Instead of reconstructing the entire dataset, we propose to only recover the coefficients of the Laplacian basis functions. Since the proposed scheme improves the SToRM framework using bandlimited models, we refer to the new approach as the bandlimited SToRM (b-SToRM) framework.
We validate the b-SToRM framework on nine adult congenital heart disease patients with different imaging views, as an add-on to the routine contrast enhanced cardiac MRI study. We also study the impact of patient motion, a reduced number of navigators, and a reduced acquisition time on the algorithm. We show that the reconstructed images can be sorted into respiratory and cardiac phases using the eigen-vectors of the estimated Laplacian matrix, facilitating the easy visualization of the data.
This work has similarities to the kernel low-rank approach for MRI reconstruction in \cite{nakarmi}. The algorithm in \cite{nakarmi} requires the computation of the feature maps of the polynomial kernel and their pre-images. The explicit computation of the feature maps is infeasible for Gaussian kernels since the feature maps are infinite-dimensional. Our approach relies on the kernel trick \cite{scholkopf}, and is thus computationally feasible for Gaussian kernels. This approach is built upon our recent work on annihilation based image recovery \cite{gregtsp,gregpapers,ongie2017fast} and the work on polynomial kernels introduced in \cite{gregvariety}; we extend \cite{gregvariety} to Gaussian kernels in this paper.
\section*{THEORY}
\subsection*{Background on smooth manifold models for images}
A manifold is a topological space that locally resembles a Euclidean space. In particular, each $n$-dimensional point (where $n$ is the ambient dimension) on an $m$ dimensional manifold ($m<<n$) has a local neighbourhood, which has a continuous one-to-one mapping (homeomorphic) with a Euclidean space of dimension $m$. Many classes of natural images can be modelled as points sampled from a low-dimensional manifold, embedded in an ambient high dimensional space. The dimension of the ambient space is equal to the number of pixels in the images, while the dimension of the manifold depends on the degrees of freedom of the class of images. For example, a dataset of images of faces of the same person may be parameterized by pose and lighting. Similarly, each image in a real-time cardiac MRI acquisition can be parameterized by the cardiac and respiratory phases.
Manifold embedding schemes \cite{lle,lapEig} aim to find a non-linear mapping $f: \mathbb R^n \rightarrow \mathbb R^m$ from the points $\mathbf x_i \in \mathcal M \subset \mathbb R^n$, such that $f(\mathbf x_i) = \mathbf f_i$ preserves geodesic distances on the manifold; i.e., $\|\mathbf f_i-\mathbf f_j\|^2 \approx \|\mathbf x_i-\mathbf x_j\|_{\mathcal M}^2; \forall i,j$. As shown in \cite{lapEig}, in order to preserve local neighbourhoods of the manifold, we need a mapping with low average smoothness $\int_{\mathcal M} \|\nabla f\|^2$. Many algorithms, including the popular Laplacian eigen maps embedding algorithm \cite{lapEig} and the SToRM formulation \cite{storm}, operate on discrete samples from the manifold. In terms of $k$ points $\{\mathbf x_i\} \in \mathbb R^n$, $i=1,\ldots,k$ sampled from the manifold, the average smoothness of $f$ can be approximated as:
\begin{equation}
\label{unidimensional}
\int_{\mathcal M} \|\nabla f\|^2 \approx \frac{1}{2}\;\sum_{i,j=1}^k \mathbf W_{i,j}\; \|f(\mathbf x_i) -f(\mathbf x_j)\|^2 = {\rm trace}(\mathbf f\; \mathbf L\; \mathbf f^H)
\end{equation}
where the weight matrix $\mathbf W$ is specified by:
\begin{equation}
\label{Wcomp}
\mathbf W_{ij} = \begin{cases}
\mathbf e^{-\frac{d_{i,j}^{2}}{\sigma^{2}}}&, \text{if $\mathbf x_{i}$ and $\mathbf x_{j}$ are neighbours}.\\
0 &, \text{otherwise}.
\end{cases}
\end{equation}
Here, $d_{i,j}^2 = \|\mathbf x_i - \mathbf x_j\|^2$. The use of the exponential kernel assigns higher weights to local neighbours on the manifold. $\mathbf f$ is a matrix, whose columns correspond to $f(\mathbf x_i); i=1,..,k$ and $\mathbf L = \mathbf D - \mathbf W$ is the graph Laplacian, which approximates the Laplace Beltrami operator. $\mathbf D$ is a diagonal matrix with elements defined as $\mathbf D_{ii} = \sum_j \mathbf W_{ij}$. Two approaches are commonly used in the literature to determine whether $\mathbf x_i$ and $\mathbf x_j$ are neighbours:
\vspace{-1.5em}
\begin{enumerate}
\item Distance thresholding: $\mathbf x_i$ and $\mathbf x_j$ are neighbours if $d_{ij} < t$, where $t$ is a fixed threshold. This may result in disconnected graphs.
\item Number of neighbours: $\mathbf x_i$ has only a fixed number of neighbours, which are the points with lowest distance from it. This technique always leads to fully connected graphs but may be associated with false edges.
\end{enumerate}
The optimal embedding obtained by minimizing \eqref{unidimensional} is a matrix of the eigen-vectors with the $m$ lowest eigen-values in the generalized eigen-vector problem $\mathbf L \mathbf f = \lambda \mathbf D \mathbf f$.
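The construction in \eqref{unidimensional}--\eqref{Wcomp} can be sketched in a few lines (an illustrative NumPy sketch of our own, using the fixed-number-of-neighbours rule, option 2 above; not the paper's implementation):

```python
import numpy as np

def graph_laplacian(X, num_neighbors=3, sigma=1.0):
    # X: n x k array whose columns are the points x_i
    d2 = np.sum((X[:, :, None] - X[:, None, :]) ** 2, axis=0)  # ||x_i - x_j||^2
    W = np.exp(-d2 / sigma ** 2)                               # exponential weights
    # keep only the num_neighbors nearest neighbours of each point
    mask = np.zeros_like(W, dtype=bool)
    order = np.argsort(d2, axis=1)
    for i in range(W.shape[0]):
        mask[i, order[i, 1:num_neighbors + 1]] = True          # skip self (j = i)
    W = W * (mask | mask.T)                                    # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    return D - W                                               # L = D - W
```

By construction the rows of $\mathbf L$ sum to zero, and the matrix is symmetric after the neighbourhoods are symmetrized.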
\subsection*{SToRM framework \cite{storm}}
SToRM relies on a navigated radial acquisition scheme where the same navigator lines (2-4 radial spokes) are played at the beginning of every 10-12 spokes. The acquisition of the $i^{\rm th}$ frame can be represented as:
\begin{equation}
\label{measurements}
\underbrace{\left[\begin{array}{c}
\mathbf z_{i,j}\\
\mathbf y_{i,j}
\end{array}
\right] }_{\mathbf b_{i,j}} = \underbrace{\left[\begin{array}{c}
\boldsymbol\Phi\\
\mathbf B_{i}
\end{array}
\right] \mathbf F ~\mathbf C_j}_{\mathbf A_{ij}} ~\mathbf x_i + \boldsymbol\eta_{ij}
\end{equation}
Here, $\mathbf F$ is the 2-D Fourier transform matrix and $\mathbf C_j$ is a diagonal matrix corresponding to weighting by the $j^{\rm th}$ coil sensitivity map. $\boldsymbol \Phi$ is the sampling matrix corresponding to the navigators that is kept the same for all frames.
The weight matrix of the image manifold is estimated from the navigators $\mathbf z_{i,l}=\boldsymbol\Phi\mathbf F ~\mathbf C_l ~\mathbf x_i, ~l=1,..,{\rm N_{coils}}$ using \eqref{Wcomp}, where:
\begin{equation}
d_{ij}^2 = \sum_{l=1}^{\rm N_{coils}}\left\|\mathbf z_{il}-\mathbf z_{jl}\right\|^{2}
\end{equation}
This results in high weights between images with similar cardiac and respiratory phase, while the weights between images with different phases are small. The manifold Laplacian $\mathbf L$ is computed from the weight matrix $\mathbf W$.
The acquisition can be compactly rewritten as:
\begin{equation}
\label{acquisition}
\mathbf B = \mathcal A(\mathbf X) + \boldsymbol{\eta}
\end{equation}
Here, $\mathbf X = \left[\mathbf x_1,\ldots,\mathbf x_k\right]$ is the Casorati matrix obtained by stacking the vectorized images as columns, while $\mathcal A$ captures the measurement process described in \eqref{measurements}. SToRM reconstructs the images by solving the following problem:
\begin{equation}
\mathbf X^{*} =\arg \min_{\mathbf X} \|\mathcal A(\mathbf X)-\mathbf B\|^{2}_{F} + \lambda~ \mathrm{{\rm trace}}(\mathbf X {\mathbf L} \mathbf X^{H})
\label{l2problem}
\end{equation}
A key drawback of SToRM is the sensitivity of the Laplacian matrix to noise and artifacts in the acquisition process. While the exponential weight choice is popular, this approach is dependent on the specific way in which the neighbours are selected. Another challenge is the large memory demand and computational complexity associated with the recovery of the large dataset, often consisting of 1000 frames. This makes it difficult to extend SToRM to 3D+time applications.
The main focus of this paper is to introduce the b-SToRM framework, which minimizes the problems associated with SToRM \cite{storm}. We introduce a bandlimited manifold model for the systematic estimation of the Laplacian matrix from navigators. This reduces the sensitivity of the estimation process to noise and subtle patient motion. We also introduce an approximation of the Laplacian matrix using its eigen decomposition to drastically reduce memory demand and computational complexity. The estimated Laplacian matrix enables the visualization of the reconstruction results. An overview of the proposed scheme is given in Fig 1.
\subsection*{Bandlimited manifold shape model \& kernel low-rank relation}
We model the manifold $\mathcal M \subset \mathbb R^n$ as the zero-level set of a bandlimited function $\psi: \mathbb R^n \rightarrow \mathbb R$, represented using its Fourier series (see Fig 2) :
\begin{equation}
\label{implicit}
\psi(\mathbf x) = \sum_{\mathbf k \in \Lambda} \mathbf c_{\mathbf k} e^{j~2\pi \mathbf k^T \mathbf x}
\end{equation}
The manifold is specified by the set of points $\{\mathbf x \in \mathbb R^n|\psi(\mathbf x)=0\}$. Here, $\Lambda \subset \mathbb Z^n$ is a set of contiguous discrete locations that indicate the support of the Fourier series co-efficients of $\psi$. We assume that $\{\mathbf c_k\}$ is the smallest set of Fourier co-efficients that satisfies the above relation; we term it as the minimal filter. We refer to the above representation as a bandlimited manifold. All points $\mathbf x$ on the implicit surface \eqref{implicit} satisfy $\psi(\mathbf x)=0$, which implies that:
\begin{equation}
\label{nonlinearmapping}
\psi(\mathbf x) = \mathbf c^T \underbrace{ \begin{bmatrix} e^{j2\pi\mathbf k_1^T\mathbf x}\\ \vdots\\ e^{j2\pi\mathbf k_{|\Lambda|}^T\mathbf x}\end{bmatrix}}_{\phi_\Lambda (\mathbf x)} = 0
\end{equation}
The entries of $\phi_\Lambda(\mathbf x)$ are non-linear transformations of $\mathbf x$, similar to kernel approaches \cite{scholkopf}; we term $\phi_\Lambda(\mathbf x)$ as the non-linear feature map of $\mathbf x$ (see Fig 2). When there are multiple points $\mathbf x_1,.., \mathbf x_k$ sampled from the manifold, we have the following annihilation relation:
\begin{equation}
\label{matrixannihilation}
\mathbf c^T \underbrace{\begin{bmatrix}
\phi_{\Lambda}(\mathbf x_1),\ldots \phi_{\Lambda}(\mathbf x_k)
\end{bmatrix}
}_{\Phi_{\Lambda}(\mathbf X)} = 0
\end{equation}
Since $\mathbf c$ is the unique minimal filter of $\Phi_{\Lambda}(\mathbf X)$, rank$(\Phi_{\Lambda}(\mathbf X)) = |\Lambda|-1$. In practice, the exact bandwidth of $\Lambda$ is unknown. We choose a rectangular support $\Gamma \subset \mathbb Z^n$ such that $\Gamma \supseteq \Lambda$; the corresponding feature matrix is denoted by $\Phi_{\Gamma}(\mathbf X)$. $\mathbf c_1$ obtained by zero-padding the original coefficients $\mathbf c$ will satisfy $\mathbf c_1^T \Phi_{\Gamma}(\mathbf X) = 0$. $\mathbf c_2$ obtained by shifting $\mathbf c_1$ by an integer value will also satisfy $\mathbf c_2^T \Phi_{\Gamma}(\mathbf X) = 0$. We denote the number of valid shifts of $\mathbf c$ such that it is still support limited in $\Gamma$ by $|\Gamma:\Lambda|$ \cite{gregpapers}. Thus, we have:
\begin{equation}
{\rm rank}(\Phi_{\Gamma}(\mathbf X)) \leq |\Gamma|-|\Gamma:\Lambda|
\end{equation}
If the number of points $k$ is greater than the above rank, we obtain right null-space relations $\Phi_{\Gamma}(\mathbf X)~\mathbf v_i = \mathbf 0$, or equivalently, $\mathbf K^{\Gamma}\mathbf v_i = \mathbf 0$. The entries of the $k\times k$ Gram matrix $\mathbf K^{\Gamma} = \Phi_{\Gamma}(\mathbf X)^H\Phi_{\Gamma}(\mathbf X)$ are given by:
\begin{equation}
\mathbf K_{i,j}^{\Gamma} = \phi_{\Gamma}(\mathbf x_i)^H\phi_{\Gamma}(\mathbf x_j) =\underbrace{ \sum_{k\in \Gamma} e^{\left(j~2\pi\mathbf k^T \left(\mathbf x_j-\mathbf x_i\right)\right)}}_{\kappa_{\Gamma}(\mathbf x_j-\mathbf x_i)}
\end{equation}
where $\kappa_{\Gamma}(\mathbf r)$ is shift invariant. When $\Gamma$ is a centered cube in $\mathbb Z^n$, we have the Dirichlet kernel $\kappa_{\Gamma}(\mathbf r) = {\rm D}_{\Gamma}(\mathbf r)$. The above relations show that when the points live on the level-sets of a bandlimited function, their Dirichlet kernel matrix is low-rank. If we choose the weighted maps $\phi'_\Gamma = \mathbf M ~\phi_\Gamma$ (where $\mathbf M$ is a diagonal matrix with entries $e^{-\pi^2 \sigma^2 \|\mathbf k\|^2}$), then the kernel function approaches a Gaussian as $\Gamma \rightarrow \mathbb Z^n$. In this case, the matrix $\mathbf K_{\Gamma}$ is theoretically full rank. However, we observe that the Fourier series coefficients of a Gaussian function can be safely approximated to be zero outside $\|\mathbf k\|< 3/(\pi\sigma)$, which translates to $|\Lambda| \approx \left(\frac{6}{\pi\sigma}\right)^n$; i.e., the rank will be small for high values of $\sigma$.
The above results show that the rank of the feature map matrix $\Phi_\Gamma$ or the kernel matrix $\mathbf K_{\Gamma}$ can be used as a measure of the smoothness of the manifold. Specifically, if the rank is small, a low bandwidth implicit surface $\psi$ is sufficient to annihilate all the images in the dataset; this implies that the points lie on a smooth manifold, which is the zero level set of a bandlimited $\psi$.
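The dependence of the numerical rank on $\sigma$ can be verified on synthetic data (an illustrative NumPy sketch; the points and thresholds are our own choices, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, 100)
X = np.stack([np.cos(t), np.sin(t)])   # 100 points on a smooth 1-D manifold

def gaussian_kernel(X, sigma):
    # Gaussian kernel matrix K_ij = exp(-||x_i - x_j||^2 / sigma^2)
    d2 = np.sum((X[:, :, None] - X[:, None, :]) ** 2, axis=0)
    return np.exp(-d2 / sigma ** 2)

def numerical_rank(K, tol=1e-6):
    # number of singular values above a relative threshold
    s = np.linalg.svd(K, compute_uv=False)
    return int(np.sum(s > tol * s[0]))
```

For these points, `numerical_rank(gaussian_kernel(X, sigma))` drops as `sigma` grows, mirroring the estimate $|\Lambda| \approx (6/\pi\sigma)^{n}$.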
\subsection*{Laplacian estimation using feature-map rank minimization}
\label{lapest}
We use this low-rank prior to estimate the Laplacian matrix of the manifold. Since this approach is more systematic than the exponential weight based technique, it is expected to be more robust to noise and related artifacts. We recover the navigator signals $\mathbf R$ from their noisy measurements $\mathbf Z$ by solving the following optimization problem:
\begin{equation}
\label{kernel}
\mathbf R^* = \arg\min_{\mathbf R} \|\mathbf R - \mathbf Z\|_F^2 + \mu~
\left\| \Phi\left(\mathbf R\right)\right\|_*
\end{equation}
where $\|\cdot\|_*$ denotes the nuclear norm and $\Phi(\mathbf R)$ denotes a matrix whose columns are the non-linear maps of the columns of $\mathbf R$ (corresponding to different frames), similar to \eqref{nonlinearmapping}. Note that the above formulation simplifies to low-rank denoising similar to \cite{ktslr} when $\Phi = \mathcal I$, which is the identity map.
Inspired by similar methods in low-rank recovery \cite{irls}, we introduce an iterative reweighted least squares (IRLS) algorithm to solve the above minimization problem. The approach followed is similar to \cite{gregvariety}, where the case of polynomial kernels is discussed. The manifold Laplacian is obtained as a byproduct of this algorithm. This approach relies on approximating the nuclear norm penalty in \eqref{kernel} as:
\begin{equation}
\left\|\Phi(\mathbf R)\right\|_* = {\rm trace}\left[\left(\Phi(\mathbf R)^T \Phi(\mathbf R)\right)^{\frac{1}{2}}\right] = {\rm trace}\left[\mathcal K(\mathbf R)^{\frac{1}{2}}\right] \approx {\rm trace}\left[\mathcal K(\mathbf R)\mathbf P\right]
\end{equation}
where $\mathbf P = \left[\mathcal K(\mathbf R) + \gamma \mathbf I\right]^{-\frac{1}{2}}$ and $\mathcal K$ is the Gaussian kernel. The algorithm alternates between the following two steps:
\begin{equation}
\label{kernelApprox}
\mathbf R^{(n)}= \arg\min_{\mathbf R} \underbrace{ \|\mathbf R - \mathbf Z\|_F^2 + \mu~
{\rm trace}\left[\mathcal K(\mathbf R)\mathbf P^{(n)}\right]}_{\mathcal C(\mathbf R^{(n)})}
\end{equation}
\begin{equation}
\mathbf P^{(n)} = \left[\mathcal K\left(\mathbf R^{(n-1)}\right) + \gamma^{(n)} \mathbf I\right]^{-\frac{1}{2}}
\end{equation}
where $\gamma^{(n)} = \frac{\gamma^{(n-1)}}{\eta}$, and $\eta>1$ is a constant.
We use the kernel trick $\left\langle \Phi(\mathbf r_1),\Phi(\mathbf r_2)\right\rangle = \mathcal K\left(\mathbf r_1,\mathbf r_2\right)$ to solve \eqref{kernel} without explicitly evaluating the maps $\Phi(\mathbf R)$. Note that $\Phi(\mathbf R)$ may be considerably higher in dimension compared to the frames. A key benefit of this approach over the kernel low-rank approach in \cite{nakarmi} is that we do not require the computation of explicit feature maps and pre-images, and hence our scheme is applicable to shift invariant kernels that are widely used.
The estimation of $\mathbf R$ involves solving a non-linear system of equations. To reduce computational complexity, we linearize the gradient of the cost function in \eqref{kernelApprox} with respect to $\mathbf R$. The gradient of the objective function w.r.t $\mathbf R_i$ is given by $ \nabla_{\mathbf R_i}\mathcal C = 2(\mathbf R_i - \mathbf Z_i) + \mu \sum_j \nabla_{\mathbf R_i}[\mathcal K(\mathbf R)]_{ij}\mathbf P^{(n)}_{ij}$. Assuming a Gaussian kernel and linearizing the gradient with respect to $\mathbf R_i$, we obtain:
\begin{equation}
\begin{split}
\nabla_{\mathbf R_i} \mathcal C& \approx 2(\mathbf R_i - \mathbf Z_i) +2\mu \sum_j w_{ij}^{(n-1)}(\mathbf R_i- \mathbf R_j)
\end{split}
\end{equation}
where $w_{ij}^{(n-1)}$ is the $(i,j)^{th}$ entry of a matrix $\mathbf W^{(n-1)} = -\frac{1}{\sigma^2}\mathcal K(\mathbf R^{(n-1)}) \odot \mathbf P^{(n)}$. In matrix form, the gradient can be rewritten as $\nabla_{\mathbf R} \mathcal C = 2(\mathbf R - \mathbf Z) + 2\mu \mathbf R \mathbf L^{(n-1)}$,
where $\mathbf L^{(n)}$ is the Laplacian matrix computed from the weight matrix $\mathbf W^{(n)}$. This results in the following equivalent optimization problem for the estimation of $\mathbf R$ at the $n^{th}$ iteration, which can be solved analytically:
\begin{equation}
\label{Req}
\mathbf R^{(n)} = \arg\min_{\mathbf R} \left\|\mathbf R - \mathbf Z\right\|_F^2 + \mu~{\rm trace}\left(\mathbf R~ \mathbf L^{(n)} ~\mathbf R^{H} \right)
\end{equation}
Note that the above iterative algorithm is analogous to SToRM, where $\mathbf L^{(n)}$ is the Laplacian. We use $\mathbf L^{(n)}$ obtained from the above denoising problem to recover the image frames from their undersampled measurements.
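The alternation in \eqref{kernelApprox}--\eqref{Req} can be sketched as follows (a hypothetical NumPy implementation of the IRLS iterations; the parameter values are illustrative and are not those used in the experiments):

```python
import numpy as np

def estimate_laplacian(Z, mu=0.01, sigma=1.0, gamma0=1.0, eta=1.5, iters=10):
    # Z: n x k matrix of noisy navigator profiles (columns = frames)
    R, gamma, kf = Z.copy(), gamma0, Z.shape[1]
    for _ in range(iters):
        d2 = np.sum((R[:, :, None] - R[:, None, :]) ** 2, axis=0)
        K = np.exp(-d2 / sigma ** 2)                  # Gaussian kernel matrix
        w, V = np.linalg.eigh(K + gamma * np.eye(kf))
        P = (V / np.sqrt(w)) @ V.T                    # P = (K + gamma I)^(-1/2)
        W = -(K * P) / sigma ** 2                     # weights of the linearized gradient
        L = np.diag(W.sum(axis=1)) - W                # graph Laplacian
        # closed-form minimizer of ||R - Z||_F^2 + mu tr(R L R^H): R = Z (I + mu L)^(-1)
        R = np.linalg.solve(np.eye(kf) + mu * L, Z.T).T
        gamma /= eta                                  # shrink gamma, as in the text
    return R, L
```

The returned $\mathbf L$ is the Laplacian used in the subsequent image recovery step.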
\subsection*{Efficient signal recovery using bandlimited approximation of the Laplacian matrix}
\label{secondstep}
The SToRM implementation \eqref{l2problem} required the storage and processing of a large number of frames (around 400-1000), which is computationally expensive. We now propose an efficient algorithm based on the eigen decomposition of the Laplacian matrix to significantly reduce the computational complexity and memory demand.
Denoting the eigen decomposition of the symmetric Laplacian matrix as $\mathbf L = \mathbf V\Sigma \mathbf V^H$, we rewrite the SToRM cost function in \eqref{l2problem} as:
\begin{eqnarray}
\mathbf X^{*} &=& \arg \min_{\mathbf X} \|\mathcal A(\mathbf X)-\mathbf B\|^{2}_{F} + \lambda~ \mathrm{{\rm trace}}\left[\underbrace{(\mathbf X \mathbf V)}_{\mathbf U}~{\boldsymbol \Sigma} \underbrace{(\mathbf X \mathbf V)^{H}}_{\mathbf U^H}\right]\\
&=& \arg \min_{\mathbf X} \|\mathcal A(\mathbf X)-\mathbf B\|^{2}_{F} + \lambda~ \sum_{i=1}^k \sigma_i\left\|\underbrace{\mathbf X\,\mathbf v_i}_{\mathbf u_i}\right\|^2
\end{eqnarray}
Here, the columns of $\mathbf V$ form an orthonormal temporal basis set and $\mathbf u_i$ are the spatial coefficients. We observe that the minimum eigen value of the Laplacian matrix is zero, while the other eigen values often increase rapidly. Hence, the weighted norm in the penalty encourages signals $\mathbf X$ that are maximally concentrated along the eigen vectors $\mathbf v_i$ with small eigen values; these eigen vectors correspond to smooth signals on the manifold. Since the projections of the recovered signal onto the eigen vectors with larger eigen values are expected to be small, we pick the $r$ eigen vectors of $\mathbf L$ with the smallest eigen values to approximate the recovered matrix as:
\begin{equation}
\mathbf X = \mathbf U_{r} \mathbf V_{r}^H
\end{equation}
where $\mathbf U_r$ is a matrix of $r$ basis images (typically around $r \approx 30$) and $\mathbf V_r$ is a matrix of $r$ eigen vectors of $L$ with the smallest eigen values. Thus the optimization problem \eqref{l2problem} now reduces to:
\begin{equation}
\mathbf U_r^{*} =\arg \min_{\mathbf U_r} \|\mathcal A(\mathbf U_r\mathbf V_r^H)-\mathbf B\|^{2}_{F} + \lambda~ \sum_{i=1}^r \sigma_i \|\mathbf u_i\|^2
\label{l2synthesis}
\end{equation}
We observe that $r\approx 30$ is sufficient to recover the dynamic dataset with high accuracy. Typically, a dataset with less motion can be accurately represented with a lower value of $r$; however, we choose a fixed larger value ($r=30$), which works for the more challenging datasets as well. Note that in this case the reconstruction algorithm aims to recover only $r$ coefficient images from the measurement data; the optimization problem is therefore expected to be an order of magnitude less computationally intensive than \eqref{l2problem}, especially when the number of basis functions $r$ is low.
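As a numerical illustration of the factored recovery above, consider the simplified, fully sampled setting $\mathcal A = \mathbf I$ (a pure denoising problem). In that case the objective decouples per eigen vector and admits the closed-form solution $\mathbf u_i = (\mathbf B\mathbf v_i)/(1+\lambda\sigma_i)$. The sketch below is illustrative only, with hypothetical names; the actual reconstruction handles a general undersampling operator and requires an iterative solver.

```python
import numpy as np

def bandlimited_recovery(B, L, r=30, lam=0.1):
    """Sketch of the factored recovery X = U_r V_r^H for A = identity.

    B : (n_pixels, n_frames) Casorati matrix of noisy frames.
    L : (n_frames, n_frames) symmetric graph Laplacian.
    """
    sigma, V = np.linalg.eigh(L)          # eigen values in ascending order
    Vr, sr = V[:, :r], sigma[:r]          # r smallest eigen values / vectors
    # With A = I the objective decouples: u_i = (B v_i) / (1 + lam * sigma_i)
    U = (B @ Vr) / (1.0 + lam * sr)       # ridge shrinkage per basis function
    return U @ Vr.conj().T, U, Vr
```

With $\lambda=0$ and $r$ equal to the number of frames, the sketch reduces to an exact reconstruction, which makes it easy to sanity-check.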
\subsection*{Visualization of the reconstructed data using manifold embedding}
Laplacian eigen-maps rely on the eigen vectors of the Laplacian matrix to embed the manifold into a lower dimensional space. When the signal variation in the dataset is primarily due to cardiac and respiratory motion, the eigen vectors with the second and third lowest eigen values are often representative of the respiratory and cardiac phases, respectively. This information may be used to bin the recovered data into respiratory and cardiac phases for visualization as in Fig 7, even though we do not use explicit binning for image recovery. This post-processing step can be thought of as a manifold embedding scheme using an improved Laplacian eigen-maps algorithm \cite{lapEig}, where the main difference with \cite{lapEig} is the estimation of the Laplacian.
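This binning step can be sketched as follows. The names are illustrative, and we assume, as in the text, that the eigen vectors with the second and third lowest eigen values track respiratory and cardiac motion; this correspondence should be verified per dataset.

```python
import numpy as np

def bin_frames(L, n_resp=4, n_card=8):
    """Assign each frame a (respiratory, cardiac) bin index from the
    2nd and 3rd lowest eigen vectors of the graph Laplacian L."""
    sigma, V = np.linalg.eigh(L)          # ascending eigen values
    resp, card = V[:, 1], V[:, 2]         # assumed motion surrogates
    # Quantile-based edges give roughly equally populated bins
    resp_edges = np.quantile(resp, np.linspace(0, 1, n_resp + 1)[1:-1])
    card_edges = np.quantile(card, np.linspace(0, 1, n_card + 1)[1:-1])
    return np.digitize(resp, resp_edges), np.digitize(card, card_edges)
```

Frames sharing a (respiratory, cardiac) bin pair can then be viewed together as a gated series.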
\section*{METHODS}
Cardiac data was collected in the free-breathing mode from nine patients at the University of Iowa Hospitals and Clinics on a 1.5 T Siemens Aera scanner. The institutional review board at the local institution approved all the in-vivo acquisitions and written consent was obtained from all subjects. A FLASH sequence was used to acquire 10 radial lines per frame out of which 4 were uniform radial navigator lines and 6 were Golden angle lines. The sequence parameters were: TR/TE=4.3/1.92 ms, FOV=300mm, Base resolution=256, Bandwidth=574Hz/pix. 10000 spokes of k-space were collected in 43 s. Data corresponding to two views (two-chamber/short-axis and four-chamber) was collected for each patient, resulting in a total of 18 datasets. We used the proposed scheme to reconstruct these datasets. The parameters of the image reconstruction algorithm were manually optimized on one dataset, and kept fixed for the rest of the datasets.
We conduct a few experiments to study the performance of our method on different datasets and with different acquisition parameters. We study the impact of motion patterns on the reconstructions, using two of the most challenging datasets, with different breathing and cardiac patterns. We also study the effect of the number of navigator lines on the quality of the recovered images, using a dataset with a large amount of breathing motion. The main goal is to determine the minimum number of navigator lines per frame to acquire in future studies. For this purpose, we compared the reconstruction using 4 navigator lines to that using only 1 or 2 navigator lines. Two experiments were conducted, using 2 navigator lines per frame (corresponding to $0^\circ$ and $90^\circ$) and 1 navigator line per frame (corresponding to $0^\circ$), respectively, to estimate the weights. For the purpose of reconstruction, we used the full data (6 golden angle lines and 4 navigators). We also study the impact of the acquisition duration on image quality. For 2 datasets with different types of motion patterns, we compare the reconstruction using the entire data, 450 contiguous frames corresponding to 22 s, and 300 frames corresponding to 12 s.
We demonstrate that the recovered data can be automatically binned into respiratory and cardiac phases using two eigen-vectors of the estimated Laplacian matrix. Thanks to the accurate and robust estimation of the Laplacian matrix, these eigen-vectors accurately represent the respiratory and cardiac motion of the patient over the entire acquisition. Using this information, each image frame can be assigned a bin depending on its respiratory and cardiac phase. Images from each bin can be viewed to find representative members of a particular cardiac or respiratory phase. We also compare our free-breathing ungated reconstructions to images obtained from a clinical breath-held sequence on the same patients.
\section*{RESULTS}
The benefit of the low-rank Laplacian estimation scheme over the SToRM estimate, which is based on the exponential kernel \cite{lapEig}, is evident from Fig. 3. The navigator signals denoised using the proposed scheme show the preservation of the cardiac and respiratory motion, while reducing the impact of noise and subtle patient motion. The temporal basis functions (lowest eigen vectors of the Laplacian) when the Laplacian matrix is computed using (c) the proposed iterative scheme, (d) the exponential scheme, and (e) the exponential scheme where only two neighbours are retained per frame, also show the benefit of the proposed Laplacian estimation scheme. The new approach captures the cardiac and respiratory variations, while both other strategies are highly sensitive to abrupt patient motion. Since most of the information is captured by the lower eigen vectors, as seen from Fig 3 (c), a bandlimited approximation of the Laplacian matrix is sufficient to recover the dataset.
The datasets in Fig 4 have a high amount of respiratory and out-of-plane motion, compared to the other datasets that we have collected. The first dataset shows a normal cardiac rate (68 beats/min) accompanied by a very irregular breathing pattern, characterized by several large gasps of breath. We show a few reconstructed frames from different time points, at various states of motion. The reconstruction quality is better in the presence of less respiratory motion, since there are similar frames in the dataset; the manifold is well sampled in these neighbourhoods. By contrast, the images are seen to be more noisy in manifold regions that are not well sampled (red box). The second dataset shows a high cardiac rate (107 beats/min) accompanied by heavy regular breathing (42 breaths/min). We observe that the algorithm is able to reconstruct this case satisfactorily despite the rapid motion, since the manifold is well sampled.
We observe from Fig 5 that for both high and low motion regions, there is no degradation in image quality when the number of navigator lines is reduced from four to two. Using only one navigator spoke induces some error, especially for the frames highlighted in green, since they have more respiratory motion. This is expected, since the approach will only be sensitive to the motion in one direction and not to the direction orthogonal to it. As a result of this experiment, we plan to keep only two navigator lines per frame in the future, and consequently increase the number of golden angle lines to 8 (from 6 in the current acquisition). This should improve image quality by making the sampling patterns between frames more incoherent.
The effect of reducing the acquisition time is illustrated in Fig 6. The dataset at the top has more breathing motion than the bottom one. We observe that the bottom dataset is robust to a decrease in the number of frames; it can be reliably recovered even from 12 seconds of data. The top dataset is more sensitive to a reduction in scan time. The green line corresponds to the lowest position of the diaphragm, which occurs less frequently in the dataset. By contrast, the blue line corresponds to a more frequent frame. The frames around the green line, shown in the green box, are more noisy when the scan time is reduced to 12 seconds, compared to the reconstructions within the blue box. We observe negligible errors in both datasets when the acquisition time is reduced to 22 s, whereas relatively noisy reconstructions are seen in high motion frames when it is reduced to 12 s acquisition windows. The error images for Fig 5 and Fig 6 are on the same scale, to illustrate the relative effects of changing the number of navigators and the number of frames.
The results in Fig 7 show that the improved Laplacian eigen maps approach facilitates the easy visualization of the data. In general, we observe that the eigen-vectors of the Laplacian matrix with the second and third lowest eigen values correspond to respiratory and cardiac motion. It can be appreciated from Fig 3 that such a binning strategy is not possible when the exponential weights are used.
Fig 8 demonstrates the potential of our proposed scheme to replace clinical breath-held and gated techniques. There is some difference in the appearance of the breath-held and free-breathing reconstructions due to mismatch in slice position. Moreover, the breath-held acquisition was done using a TRUFI sequence, and thus shows higher contrast than the free-breathing data which was acquired using a FLASH sequence. In spite of these differences, we note that the images reconstructed using our proposed scheme are of clinically acceptable quality.
\section*{DISCUSSION}
We have introduced the b-SToRM framework for the recovery of free-breathing and ungated cardiac images in a short 2-D acquisition. We assume that the images are points on a smooth manifold, and we estimate the graph Laplacian from radial navigators. This framework relies on two key innovations over the SToRM algorithm: \textbf{(i)} A novel algorithm imposing a bandlimited manifold model is used to estimate the Laplacian matrix; the new estimate is considerably more robust to noise and subtle patient motion. \textbf{(ii)} A bandlimited approximation of the Laplacian reduces the computational complexity and memory demand of the algorithm by an order of magnitude. Due to its computational efficiency and lack of need for manual intervention, the b-SToRM framework may be a good candidate for clinical scans where patients (e.g. pediatric patients, patients with COPD) are unable to hold their breath for sufficiently long periods of time, or are unable to follow breath-holding instructions. We plan to extend the proposed scheme to include perfusion and parameter mapping, thus moving towards a single short free-breathing clinical cardiac MR scan for structural and functional imaging. We also plan to extend the method to 3D acquisitions.
While the framework has similarities to low-rank approaches \cite{ktslr,zhao_psf_2010,bcs} that rely on the factorization of the Casorati matrix, the key differences are the signal model and the approach by which the temporal basis functions are estimated. Conventional low-rank methods often require the binning of the k-space data into respiratory bins before reconstruction, using self-gating approaches \cite{grasp}. The main benefit of the proposed scheme is that it does not require any explicit binning, which often involves complex steps, including band-pass filtering and peak isolation. The computational complexity and memory demand of the algorithm are comparable to XD-GRASP and similar binning approaches, thanks to the bandlimited Laplacian approximation.
We demonstrate our algorithm on a number of datasets with different respiratory and cardiac patterns. In accordance with the results of our retrospective experiments on the impact of the number of navigator lines, we plan to collect data with only two navigator lines in the future. This would increase the incoherence of the undersampling patterns across frames, resulting in better quality reconstructions. Our experiments on reduced scan time show that we can obtain reliable reconstructions from datasets with high motion with around 22 s of data per slice, while the scan time can be pushed down to 12 s for datasets with less motion. We have not imposed any spatial regularization on the recovered images; with the addition of such priors, the acquisition time could perhaps be further reduced.
Our method produces a series of ungated images, enabling the user to visualize the real-time data with both respiratory and cardiac motion. This approach may be useful in studies on patients with pulmonary complications such as COPD. The data can also be automatically segmented into respiratory and cardiac phases post reconstruction for easy visualization of the data, using the eigen-vectors of the estimated Laplacian matrix.
Since the study was an add-on to the routine cardiac exam, there was no precise control over the specific time point of acquisition following contrast administration. This explains the differing contrast between the datasets.
\section*{CONCLUSION}
We proposed a novel bandlimited manifold regularization framework, termed b-SToRM, for free-breathing and ungated cardiac MR imaging. The validation of the scheme using cardiac datasets with differing amounts of cardiac and respiratory motion shows its ability to provide good image quality. It is also demonstrated that the resulting ungated images can be easily binned into respiratory and cardiac phases and viewed as a gated dataset. The success of the method on very challenging datasets with high cardiac rates and irregular breathing patterns suggests a useful clinical application of the method to patients who have difficulty in following traditional breath-holding instructions.
\newpage
\section*{Legends}
\textbf{Fig 1:} {\small Outline of the b-SToRM scheme. The free-breathing and ungated data is acquired using a navigated golden angle acquisition scheme. We estimate the Laplacian matrix from the navigator data using the kernel low-rank model. The entries of the Laplacian matrix specify the connectivity of the points on the manifold, with larger weights between similar frames in the dataset. The manifold is illustrated by the sphere, while the connectivity of the points is denoted by lines whose thickness is indicative of the proximity on the manifold. Note that frames that are close on the manifold may be well separated in time. The bandlimited manifold recovery scheme uses the Laplacian matrix to recover the images from the acquired k-space measurements. The Laplacian matrix also facilitates the easy visualization of the data.} \\
\textbf{Fig 2:} {\small Illustration of the annihilation condition: The data points $\mathbf x$ lie on the zero-level set of a band-limited function $\psi$. Thus, each point $\mathbf x$ satisfies the relation $\psi(\mathbf x) = 0$. The Fourier series coefficients $\mathbf c$ satisfy the annihilation relation $\mathbf c^T \phi(\mathbf x) = 0$, where $\phi(\mathbf x)$ is a non-linear feature mapping of $\mathbf x$.} \\
\textbf{Fig 3:} {\small Improved manifold Laplacian estimation using the iterative low-rank approach: We compare the proposed scheme against the old SToRM approach that relies on exponential kernels. The proposed scheme denoises the original navigator signals in (a) using the low-rank approach to obtain (b). It is seen that the denoising approach significantly reduces the noise, while retaining the cardiac and respiratory motion information. The eigen functions $\mathbf V$ with the smallest eigen values of the Laplacian estimated using the proposed scheme, exponential weights, and exponential weights with truncation (SToRM approach) are shown in (c), (d), and (e), respectively. It is observed that the proposed scheme provides more accurate estimates of cardiac and respiratory motion than the other schemes, facilitating the low-rank approximation. Both (d) and (e) are affected by subtle motion of the subject, while the proposed scheme is relatively unperturbed. The spatial coefficients $\mathbf U$ estimated for each case are also shown.}\\
\textbf{Fig 4:} {\small Sensitivity of the algorithm to high motion: We illustrate the proposed scheme on two datasets acquired from two patients with different types of motion. For both datasets, we show a temporal profile for the whole acquisition to give an idea of the amount of breathing and cardiac motion present. We also show a few frames from time points with varying respiratory phase. The dataset on the left has regions with abrupt breathing motion at a few time points. Since these image frames have few similar frames in the dataset (poorly sampled neighbourhood on the manifold), the algorithm results in slightly noisy reconstructions at the time points with high breathing motion (red box). The regions with low respiratory motion (blue and light blue boxes) are recovered well. The dataset on the right shows consistent, but low respiratory motion. By contrast, the heart rate in this patient was high. We observe that the proposed algorithm is able to produce good quality reconstructions in this case, since all neighbourhoods of the manifold are well sampled.} \\
\textbf{Fig 5:} {\small Effect of the number of navigator lines on the reconstruction quality. We perform an experiment to study the effect of computing the Laplacian matrix $\mathbf L$ from different numbers of navigator lines. For this purpose, we use one of the acquired datasets with 4 navigator lines per frame. We compute the ground-truth $\mathbf L$ matrix using all 4 navigators. Next, we also estimate the $\mathbf L$ matrix using 2 navigator lines (keeping only the $0^\circ$ and $90^\circ$ lines) and 1 navigator line (keeping only the $0^\circ$ line). We then reconstruct the full data using these three Laplacian matrices, as shown in the figure. We observe that two navigator lines are sufficient to compute the Laplacian matrix reliably. Using one navigator line induces some errors, especially in the frames highlighted in green, which are from a time point with higher respiratory motion. As a comparison, note that the error images are on the same scale as those for Fig 6.}\\
\textbf{Fig 6:} {\small Effect of the number of frames on the reconstruction quality. We perform an experiment to study the effect of reconstructing the data from a fraction of the time-frames acquired. The original acquisition was 45 seconds long, resulting in 1000 frames. We compare the reconstruction of the first 250 frames, using (1) all 1000 frames, (2) only 550 frames, i.e. 22 s of acquisition, and (3) only 350 frames, i.e. 12 s of acquisition. As can be seen from the temporal profiles, Dataset-1 has more respiratory motion than Dataset-2. Consequently, the performance degradation in Dataset-1 is more pronounced with a decrease in the number of frames. Moreover, the errors due to the decrease in the number of frames are mostly seen in frames with higher respiratory motion, as pointed out by the arrows. As a comparison, note that the error images are on the same scale as those for Fig 5.}\\
\textbf{Fig 7:} {\small Binning into cardiac and respiratory phases. We demonstrate that the reconstructed ungated image series can easily be converted to a gated series of images if desired. For this purpose, the $2^{nd}$ and $3^{rd}$ eigen-vectors of the estimated Laplacian matrix are used as an estimate of the respiratory and cardiac phases respectively. The images can then be separated into the desired number of cardiac and respiratory bins. Here, we demonstrate this on two datasets that have been separated into 8 cardiac and 4 respiratory phases. Representative images from these bins have been shown in the figure.}\\
\textbf{Fig 8:} {\small Comparison to the breath-held scheme. We demonstrate that our proposed free-breathing reconstruction technique produces images of similar quality to clinical breath-held scans, in the same acquisition time. Note that there are differences between the free-breathing and breath-held images due to variations in contrast between TRUFI and FLASH acquisitions, and also due to mismatch in slice position. However, the images we obtain are of clinically acceptable quality. Moreover, unlike the breath-held scheme, we reconstruct the whole image time series (as is evident from the temporal profile). This can provide richer information, such as the interplay of cardiac and respiratory motion.}\\
\newpage
\bibliographystyle{unsrtnat-mrm}
\section{Introduction}
In recent years, a number of experiments have dramatically increased the amount of data pertaining to the polarized thermal emission from Galactic dust in the submillimetre~\citep[e.g.][]{matthews-et-al-2009,ward-thompson-et-al-2009,dotson-et-al-2010,bierman-et-al-2011,vaillancourt-matthews-2012,hull-et-al-2014,koch-et-al-2014,fissel-et-al-2016}. Chief among these is {\textit{Planck}}\,\footnote{{\textit{Planck}} (\url{http://www.esa.int/Planck}) is a project of the European Space Agency (ESA) with instruments provided by two scientific consortia funded by ESA member states and led by Principal Investigators from France and Italy, telescope reflectors provided through a collaboration between ESA and a scientific consortium led and funded by Denmark, and additional contributions from NASA (USA).}, which provided the first full-sky map of this emission, leading to several breakthrough results. It was thus found that the polarization fraction $p$ in diffuse regions of the sky can reach values above 20\%~\citep{planck2014-XIX}, confirming results previously obtained over one fifth of the sky by the {\it Archeops} balloon-borne experiment~\citep{benoit-et-al-2004,ponthieu-et-al-2005}. Furthermore, the polarization fraction is anti-correlated with the local dispersion $\mathcal{S}$ of polarization angles $\psi$~\citep{planck2014-XIX,planck2014-XX}, and the decrease of the maximum observed $p$ with increasing gas column density $N_\mathrm{H}$ may be fully accounted for, at the scales probed by {\textit{Planck}} (5{\arcmin} at 353\,GHz), by the tangling of the magnetic field on the line of sight (LOS)~\citep{planck2014-XX}. Similar anti-correlations were found by the BLASTPol experiment~\citep{fissel-et-al-2016} at higher angular resolution (a few tens of arcseconds) towards a single Galactic molecular cloud (Vela C). 
At 10{\arcmin} scales, and over a larger sample of clouds, {\textit{Planck}} data showed that the relative orientation of the interstellar magnetic field $\boldsymbol{B}$ and filamentary structures of dust emission is consistent with simulated observations derived from numerical simulations of sub- or trans-Alfv\'enic MHD turbulence~\citep{planck2015-XXXV}, and starlight polarization data in extinction yield similar diagnostics~\citep{soler-et-al-2016}. In diffuse regions, the preferential alignment of filamentary structures with the magnetic field~\citep{planck2014-XXXII,planck2015-XXXVIII} is linked to the measured asymmetry between the power spectral amplitudes of the so-called E- and B-modes of polarized emission. Finally, measurements of the spatial power spectrum of polarized dust emission showed that it must be taken into account in order to obtain reliable estimates of the cosmological polarization signal~\citep{planck2014-XXX}.
With this wealth of data, we may be able to put constraints on models of magnetized turbulence in the interstellar medium (ISM), provided we can extract the relevant information from polarization maps. Of particular interest are the statistical properties of the Galactic magnetic field (GMF) $\boldsymbol{B}$. Let us write it as a sum $\boldsymbol{B}=\boldsymbol{B}_0+\boldsymbol{B}_t$ of a uniform, large-scale component $\boldsymbol{B}_0$, and a turbulent component $\boldsymbol{B}_t$ with a null spatial average, $\langle\boldsymbol{B}_t\rangle=\boldsymbol{0}$. The statistical properties in question are then essentially modelled by two quantities, i) the ratio of the turbulent component to the mean, $y_B=\sigma_B/B_0$, where $\sigma_B^2=\langle \boldsymbol{B}_t^2\rangle$ and $B_0=||\boldsymbol{B}_0||$, and ii) the spectral index $\beta_B$, which characterizes the distribution of power of $\boldsymbol{B}_t$ across spatial scales, through the relationship $P(k)\propto k^{-\beta_B}$, where $k$ is the wavenumber and $P(k)$ is the power spectrum\,\footnote{In all generality, several spectral indices may be defined, as one may consider the power spectrum of any one of the three cartesian components of $\boldsymbol{B}_t$, or of the modulus $|\boldsymbol{B}_t|$. Assuming that $\boldsymbol{B}_t$ is isotropic, which we will, all of these spectral indices are identical.}.
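As a minimal numerical illustration (not from the paper) of the first quantity, $y_B$ can be estimated directly from sampled components of a field by splitting it into its spatial mean and fluctuating parts:

```python
import numpy as np

def turbulence_ratio(Bx, By, Bz):
    """Estimate y_B = sigma_B / B_0 for a field B = B_0 + B_t,
    where B_0 is the spatial mean and sigma_B^2 = <|B_t|^2>."""
    B0 = np.array([Bx.mean(), By.mean(), Bz.mean()])
    sigma2 = ((Bx - B0[0])**2 + (By - B0[1])**2 + (Bz - B0[2])**2).mean()
    return np.sqrt(sigma2) / np.linalg.norm(B0)
```

By construction the fluctuating part has zero spatial average, matching the decomposition $\boldsymbol{B}=\boldsymbol{B}_0+\boldsymbol{B}_t$ with $\langle\boldsymbol{B}_t\rangle=\boldsymbol{0}$.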
As already mentioned, \cite{planck2015-XXXV} studied the relative orientation between the magnetic field, probed by polarized thermal dust emission, and filaments of matter in and around nearby molecular clouds. They found that this relative orientation changes, from mostly parallel to mostly perpendicular, as the total gas column density $N_\mathrm{H}$ increases, which is a trend observed in simulations of trans-Alfv\'enic or sub-Alfv\'enic MHD turbulence~\citep{soler-et-al-2013}. Using the Davis-Chandrasekhar-Fermi method~\citep{chandrasekhar-fermi-1953} improved by~\cite{falceta-goncalves-et-al-2008} and~\cite{hildebrand-et-al-2009}, and from their results, we can estimate the ratio $y_B$ to be in the range 0.3-0.7. \cite{planck2014-XXXII} studied that same relative orientation in the diffuse ISM at intermediate and high Galactic latitudes, and their estimate of $y_B$ is in the range 0.6-1.0, with a preferred value at 0.8. These estimates are confirmed in \cite{planck2016-XLIV}, which presents a fit of the distributions of polarization angles and fractions observed by {\textit{Planck}} towards the southern Galactic cap. They use a model of the GMF involving a uniform large-scale field $\boldsymbol{B}_0$ and a small number ($N_l\simeq 7$) of independent ``polarization layers'' on the line of sight, each of which accounts for a fraction $1/N_l$ of the total unpolarized emission. Within each layer, the turbulent component of the magnetic field $\boldsymbol{B}_t$, which is used to compute synthetic Stokes $Q$ and $U$ maps, is an independent realization of a Gaussian 2D random field with a prescribed spectral index $\beta_B$. Through these fits, they confirm the near equipartition of large-scale and turbulent components of $\boldsymbol{B}$, with $y_B\simeq 0.9$. They also provide a rough estimate of the magnetic field's spectral index $\beta_B$ in the range 2-3.
This work was complemented in~\cite{vansyngel-et-al-2017}, using the same framework, but including observational constraints on the power spectra of polarized thermal dust emission. These authors were able to constrain $\beta_B\simeq 2.5$, an exponent which is compatible with the rough estimate of \cite{planck2016-XLIV}, close to that measured for the total intensity of the dust emission. We note that their exploration of the parameter space does not allow for an estimation of the uncertainty on $\beta_B$.
In~\cite{planck2014-XXXII},~\cite{planck2016-XLIV} and~\cite{vansyngel-et-al-2017}, the description of structures, in both dust density and magnetic field, along the LOS is reduced to the bare minimum, while statistical properties in the plane of the sky (POS) are modelled through $y_B$ and $\beta_B$. Orthogonal approaches have also been pursued~\citep[e.g.][]{miville-deschenes-et-al-2008,odea-et-al-2012}, in which the turbulent component of the magnetic field is modelled along each LOS independently from the neighbouring ones, as a realization of a one-dimensional Gaussian random field with a power-law power spectrum. In this type of approach there is no correlation from pixel to pixel on the sky, and such studies seek to exploit the depolarization along the LOS, rather than spatial correlations in the POS, to constrain statistical properties of the interstellar magnetic field.
We seek to explore another avenue, taking into account statistical correlation properties of $\boldsymbol{B}$ in all three dimensions, as well as properties of the dust density field, building on methods developed in~\cite{planck2014-XX} to compare {\textit{Planck}} data with synthetic polarization maps. In that paper, the synthetic maps were computed from data cubes of dust density $n_\mathrm{d}$ and magnetic field $\boldsymbol{B}$ produced by numerical simulations of MHD turbulence. One could think of generalizing this approach, taking advantage of the ever-increasing set of such simulations~\citep[see, e.g.,][]{hennebelle-et-al-2008,hennebelle-2013,hennebelle-iffrig-2014,inutsuka-et-al-2015,seifried-walch-2015}. However, this would be impractical for two main reasons: i) these simulations often have a limited inertial range over which the power spectrum has a power-law behaviour, and ii) a systematic study exploring a wide range of physical parameters with sufficient sampling is not possible due to the computational cost of each simulation.
We therefore propose an alternative approach, which is to build simple, approximate, three-dimensional models for the dust density $n_\mathrm{d}$ and the magnetic field $\boldsymbol{B}$, allowing us to perfectly control the statistical properties of these 3D fields, and to fully explore the space of parameters characterizing them. With this approach, we are able to perform a statistically significant number of simulated polarization maps for each set of parameters. Actual observations may then be compared to these simulated maps, using least-square analysis methods, to extract best-fitting parameters, in particular the spectral index of the magnetic field, $\beta_B$, and the ratio of turbulent to regular field, $y_B$.
The paper is organized as follows: Sec.~\ref{sec:buildingmaps} presents the method used to build simulated thermal dust polarized emission maps using prescribed statistical properties for $n_\mathrm{d}$ and $\boldsymbol{B}$. Observables derived from these maps, serving as statistical diagnostics of the input parameters, are presented in Sec.~\ref{sec:observables}. In Sec.~\ref{sec:method}, we describe the analysis method devised to explore the space of input parameters for a given set of polarization maps. The validation of the method and its application to actual observations of polarized dust emission from the Polaris Flare molecular cloud observed by {\textit{Planck}} are given in Sec.~\ref{sec:results}. Finally, Sec.~\ref{sec:discussion} discusses our results and offers conclusions. Several appendices complement our work. Appendix~\ref{sec:appendix:nHproperties} presents further statistical properties of the model dust density fields. Appendix~\ref{sec:appendix:L2distance} details the likelihood used in the MCMC analysis. Finally, Appendix~\ref{sec:appendix:chi2} details the $\chi^2$ parameter used to estimate the goodness-of-fit.
\section{Building synthetic polarized emission maps}
\label{sec:buildingmaps}
In this section, we first describe the synthetic dust density and magnetic field cubes we use in our analysis, then explain how simulated polarized emission maps are built from these cubes.
\subsection{Fractional Brownian motions}
The basic ingredients to synthesize polarized thermal dust emission maps are three-dimensional cubes of dust density $n_\mathrm{d}$ and magnetic field $\boldsymbol{B}$, which we build using fractional Brownian motions (fBm)~\citep{falconer-1990}. An $N$-dimensional fBm $X$ is a random field defined on $\mathbb{R}^N$ such that $\langle\left[X\left(\boldsymbol{r}_2\right)-X\left(\boldsymbol{r}_1\right)\right]^2\rangle\propto{||}\boldsymbol{r}_2-\boldsymbol{r}_1{||}^{2H}$, for any pair of points $(\boldsymbol{r}_1,\boldsymbol{r}_2)$, where $H$ is called the Hurst exponent. These fBm fields are usually built in Fourier space\footnote{In all of this paper, for any field $F$ the notation $\widetilde{F}$ represents its Fourier transform.},
\begin{equation}
\label{eq:Xfbm}
\widetilde{X}(\boldsymbol{k})=A(\boldsymbol{k})\exp{\left[i\phi_X(\boldsymbol{k})\right]},
\end{equation}
by specifying amplitudes that scale as a power-law of the wavenumber $k=||\boldsymbol{k}||$,
$$
A(\boldsymbol{k})=A_0k^{-\beta_X/2},
$$
with $\beta_X=2H+N$ the spectral index, and phases drawn from a uniform random distribution in $[-\pi,\pi]$, subject to the constraint $\phi_X(-\boldsymbol{k})=-\phi_X(\boldsymbol{k})$ so that $X$ is real-valued. Their power spectra are therefore power laws,
$$
P_X(k)=\left<\left|\widetilde{X}(\boldsymbol{k})\right|^2\right>_{||\boldsymbol{k}||=k}\propto k^{-\beta_X},
$$
where the average is taken over the locus of constant wavenumber $k$ in Fourier space. Such fields have been used previously as toy models for the fractal structure of molecular clouds, in both density and velocity space~\citep{stutzki-et-al-1998,brunt-heyer-2002,miville-deschenes2003,correia-et-al-2016}.
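As an illustration, the construction above can be sketched numerically. The helper below (our own naming, not taken from any published code) synthesizes a zero-mean, unit-variance fBm-like field by applying the power-law amplitude $A_0k^{-\beta_X/2}$ to the Fourier transform of a white-noise cube; filtering real white noise is equivalent to drawing uniform random phases, with the Hermitian constraint $\phi_X(-\boldsymbol{k})=-\phi_X(\boldsymbol{k})$ built in automatically:

```python
import numpy as np

def fbm_field(shape, beta, seed=0):
    """Synthesize an N-D fBm-like field with power spectrum P(k) ~ k^-beta,
    by power-law filtering of a white-noise cube in Fourier space
    (illustrative sketch; function name is ours)."""
    rng = np.random.default_rng(seed)
    # Wavenumber magnitude on the FFT grid (in grid units)
    freqs = np.meshgrid(*[np.fft.fftfreq(n) * n for n in shape], indexing="ij")
    k = np.sqrt(sum(f**2 for f in freqs))
    k[tuple([0] * len(shape))] = 1.0       # avoid division by zero at k = 0
    amplitude = k ** (-beta / 2.0)
    # Filtering real white noise keeps the field real-valued, i.e., the
    # Hermitian phase constraint is satisfied by construction
    white = rng.standard_normal(shape)
    X = np.fft.ifftn(amplitude * np.fft.fftn(white)).real
    X -= X.mean()
    return X / X.std()

X = fbm_field((64, 64, 64), beta=2.6)
```

Azimuthally averaging $|\widetilde{X}(\boldsymbol{k})|^2$ over shells of constant $k$ then recovers a power spectrum close to $k^{-\beta_X}$.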
\subsection{Dust density}
\label{sec:modeldensity}
\begin{figure}[htbp]
\includegraphics[width=9cm]{{Density_map_0_120x120x120_b2.6_Y1.0-eps-converted-to}.pdf}
\caption{Total gas column density $N_\mathrm{H}$, derived from a synthetic density field $n_\mathrm{H}$ built by exponentiation of a fBm field with spectral index $\beta_X=2.6$ and size $120 \times 120 \times 120$ pixels. The volume density fluctuation level is $y_n=1$, and the column density fluctuation level is $y_{N_\mathrm{H}}\simeq0.25$.}
\label{fig:densitymaps}
\end{figure}
In our approach, the dust density $n_\mathrm{d}$ is taken to be proportional to the total gas density $n_\mathrm{H}$, so that the dust optical depth within each cell is also proportional to $n_\mathrm{H}$ (see the derivation of polarization maps in Sec.~\ref{sec:polarmaps}). We therefore model $n_\mathrm{H}$ from numerical realizations of three-dimensional fBm fields built in Fourier space. These have means defined by the value chosen for the null-wavevector amplitude $A(\boldsymbol{0})$, so if one wished to use such a synthetic random field $X$ directly as a model for the positive-valued $n_\mathrm{H}$, one would be required to choose $n_\mathrm{H}=X'=X-a$ with $a\leqslant\mathrm{min}(X)$ a constant. However, since the distributions of these fields in 3D are close to Gaussian, their ratio of standard deviation to mean is typically $\sigma_{X'}/\langle X'\rangle\lesssim 0.3$, which is much too small compared to observational values. For instance, the total gas column density fluctuation ratios $\sigma_{N_\mathrm{H}}/\left<N_\mathrm{H}\right>$ in the ten nearby molecular clouds selected for study in~\cite{planck2014-XX} are in the range 0.3--1, and one should keep in mind that these are only lower bounds for the fluctuation ratios in the three-dimensional density field $n_\mathrm{H}$.
We remedy this shortcoming by taking $X$ to represent the log-density, i.e., $n_\mathrm{H}$ is given by
\begin{equation}
\label{eq:expfbm}
n_\mathrm{H}=n_0\exp{\left(\frac{X}{X_r}\right)},
\end{equation}
where $X$ is a three-dimensional fBm field with spectral index $\beta_X$, and $X_r$ and $n_0$ are positive parameters. The $n_\mathrm{H}$ fields built in this fashion have simple and well-controlled statistical properties. First, their probability distribution functions (PDF) are log-normal, which allows us, through an adequate choice of $X_r$, to set the fluctuation level of the density field $y_n=\sigma_{n_\mathrm{H}}/\langle n_\mathrm{H}\rangle$ to any desired value. Second, their power spectra, as azimuthal averages in Fourier space, retain the power-law behaviour of the original fBm $X$,
$$
P_{n_\mathrm{H}}(k)=\left<\left|\widetilde{n_\mathrm{H}}(\boldsymbol{k})\right|^2\right>_{||\boldsymbol{k}||=k} \propto k^{-\beta_n},
$$
although the spectral indices $\beta_n$ may deviate significantly from $\beta_X$. An example of such a field is shown in Fig.~\ref{fig:densitymaps}, which represents the total gas column density $N_\mathrm{H}$ derived from a gas volume density $n_\mathrm{H}$ built as the exponential of a $120 \times 120 \times 120$ fractional Brownian motion with zero mean, unit variance, and spectral index $\beta_X=2.6$. The parameters of the exponentiation are $X_r=1.2$ and $n_0=20\,\mathrm{cm}^{-3}$, and the grid is chosen so that the extent of the cube is 30\,pc on each side, corresponding to a pixel size of 0.25\,pc. More details on the properties of these fields are given in Appendix~\ref{sec:appendix:nHproperties}.
The density fields built in this fashion are of course only a rough statistical approximation for actual interstellar density fields. For instance, they are unable to reproduce the filamentary structures observed in dust emission maps~\citep{andre-et-al-2010,miville-deschenes-et-al-2010}. These structures cannot be captured by one- and two-point statistics such as those used here, and require a description involving higher-order moments, or equivalently of the Fourier phases~\citep[see, e.g.,][]{levrier-et-al-2004,burkhart-lazarian-2016}.
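The exponentiation step of Eq.~\ref{eq:expfbm} can be sketched as follows. For a zero-mean, unit-variance Gaussian field $X$, the log-normal identity $\sigma_n/\langle n\rangle=\sqrt{\exp(1/X_r^2)-1}$ fixes $X_r$ for a target fluctuation level $y_n$ (function names are ours; in the usage example a white-noise cube stands in for the fBm, since only one-point statistics matter for this check):

```python
import numpy as np

def lognormal_density(X, n0, y_n):
    """Exponentiate a zero-mean, unit-variance Gaussian field X into a
    log-normal density n_H = n0 * exp(X / X_r), choosing X_r so that
    sigma_n / <n> equals the target fluctuation level y_n (sketch)."""
    # For X/X_r Gaussian with variance s^2 = 1/X_r^2, the log-normal field
    # has sigma/mean = sqrt(exp(s^2) - 1); invert this for X_r:
    X_r = 1.0 / np.sqrt(np.log(1.0 + y_n**2))
    return n0 * np.exp(X / X_r)

rng = np.random.default_rng(1)
X = rng.standard_normal((64, 64, 64))
X = (X - X.mean()) / X.std()
n_H = lognormal_density(X, n0=20.0, y_n=1.0)
ratio = n_H.std() / n_H.mean()   # close to the requested y_n = 1
```

The same recipe, applied to a true fBm cube, yields the fields used throughout this section.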
\subsection{Magnetic field}
\begin{figure}[h!]
\includegraphics[width=9cm]{{Magnetic_map_0_120x120x120_b5.0_Y1.0_c0.0_g0.0_BMap_Slices-eps-converted-to}.pdf}
\caption{Synthetic magnetic field $\boldsymbol{B}$ built using Eq.~\ref{eq:Btlambda}. The spectral index of the vector potential is $\beta_A=5$ and the size of the cubes is $120 \times 120 \times 120$ pixels, corresponding to 30\,pc on each side. Shown are 2D slices through the cubes of the components $B_{x}$ ({\it top}), $B_{y}$ ({\it middle}), and $B_{z}$ ({\it bottom}). The ratio of the fluctuations of the turbulent component $\boldsymbol{B}_t$ to the norm of the uniform component $\boldsymbol{B}_0$ is $y_B=1$ in this particular case, with angles $\chi_0=\gamma_0=0^\circ$. }
\label{fig:Bmaps}
\end{figure}
\begin{figure}[htbp]
\resizebox{\hsize}{!}{
\includegraphics[width=9cm]{{Magnetic_map_0_120x120x120_b5.0_Y0.1_c0.0_g60.0_BMap_PDFs-eps-converted-to}.pdf}
}
\caption{Distribution functions of the components $B_x$, $B_y$ and $B_z$ of a model magnetic field $\boldsymbol{B}=\boldsymbol{B}_0+\boldsymbol{B}_t$ built on a grid $120 \times 120 \times 120$ pixels using Eq.~\ref{eq:Btlambda} with $\beta_A=5$, and a mean, large-scale magnetic field $\boldsymbol{B}_0$ defined by the angles $\chi_0=0^\circ$ and $\gamma_0=60^\circ$, and a norm $B_0=50\,\mu{\mathrm{G}}$ such that the fluctuation level is $y_B=0.1$. The vertical lines represent the projected values of the large scale magnetic field $B_{0x}=B_0 \sin \chi_0 \cos\gamma_0$, $B_{0y}=-B_0 \cos \chi_0 \cos\gamma_0$ and $B_{0z} = B_0 \sin\gamma_0$. See figure 14 in~\cite{planck2014-XX} for the definition of angles.}
\label{fig:Bpdfs}
\end{figure}
\begin{figure}[htbp]
\resizebox{\hsize}{!}{
\includegraphics[width=9cm]{{PS_Magnetic_map_0_120x120x120_b5.0_Y1.0_c0.0_g0.0-eps-converted-to}.pdf}
}
\caption{Power spectra of the components $B_x$, $B_y$ and $B_z$ of a model magnetic field $\boldsymbol{B}=\boldsymbol{B}_0+\boldsymbol{B}_t$ built on a grid $120 \times 120 \times 120$ pixels using Eq.~\ref{eq:Btlambda} with $\beta_A=3$ (different shades of blue for the three components) and $\beta_A=5$ (different shades of red for the three components). The power spectra are normalized differently so as to allow comparison between them. The fitted power-laws shown as solid lines yield spectral indices $\beta_B=1$ and $\beta_B=3$, in agreement with Eq.~\ref{eq:PB}. These are the power spectra of the same particular realizations shown in Fig.~\ref{fig:Bmaps}.}
\label{fig:Bpowerspectra}
\end{figure}
To obtain a synthetic turbulent component of the magnetic field $\boldsymbol{B}_t$ with null divergence and controlled power spectrum, we start from a vector potential $\boldsymbol{A}$ built as a three-dimensional fractional Brownian motion. To be more precise, each Cartesian component $A_\lambda$ of $\boldsymbol{A}$ is a fBm field,
\begin{equation*}
\widetilde{A_\lambda}(\boldsymbol{k})=\mathcal{A}_0k^{-\beta_A/2}\exp\left[i\phi_{A_\lambda}(\boldsymbol{k})\right],
\end{equation*}
where the spectral index $\beta_A$ and the overall normalization parameter $\mathcal{A}_0$ are independent of the Cartesian component $\lambda=x,y,z$ considered. Using the definition of the magnetic field from the vector potential $B_{t,\lambda}=\epsilon_{\lambda\mu\nu}\partial_\mu A_\nu$, where $\epsilon_{\lambda\mu\nu}$ is the Levi-Civita tensor, and the derivation relation in Fourier space
$$
\widetilde{\partial_\lambda F}=ik_\lambda \widetilde{F},
$$
we obtain the components of $\boldsymbol{B}_t$ in Fourier space,
\begin{equation}
\label{eq:Btlambda}
\widetilde{B_{t,\lambda}}(\boldsymbol{k})=\mathcal{A}_0\epsilon_{\lambda\mu\nu}ik_\mu k^{-\beta_A/2}\exp\left[i\phi_{A_\nu}(\boldsymbol{k})\right].
\end{equation}
As it should, this expression corresponds to a divergence-free turbulent magnetic field,
$$
ik_\lambda\widetilde{B_{t,\lambda}}=0,
$$
with Einstein summation implied. Writing $k_\mu=kf_\mu$, with $\boldsymbol{f}=\left(\sin\vartheta\cos\varphi,\sin\vartheta\sin\varphi,\cos\vartheta\right)$ the unit wavevector, the power spectrum of each component of $\boldsymbol{B}_t$ is then
\begin{equation*}
P_{B_{t,\lambda}}(k)=\mathcal{A}_0^2k^{2-\beta_A}\left<\left|\epsilon_{\lambda\mu\nu}f_\mu\exp\left[i\phi_{A_\nu}(\boldsymbol{k})\right]\right|^2\right>_{||\boldsymbol{k}||=k}.
\end{equation*}
The last factor is essentially independent of the wavenumber $k$, so the spectral index of each component of $\boldsymbol{B}_t$ is $\beta_{B_t}=\beta_A-2$. After Fourier-transforming back to real space, $\boldsymbol{B}_t$ is shifted and scaled so that it has zero mean and a standard deviation $\sigma_B$ of $5\,\mu\mathrm{G}$, a value typical of the interstellar magnetic field~\citep[see, e.g.,][and references therein]{haverkorn-et-al-2008}.
The model magnetic field $\boldsymbol{B}$ is obtained by adding a uniform\footnote{Note that we do not consider an ordered random or striated random component of the field~\citep{jaffe-et-al-2010,jansson-farrar-2012}, which we justify by the smallness of the field-of-view considered.} vector field $\boldsymbol{B}_0$ to that turbulent magnetic field $\boldsymbol{B}_t$. The effect in Fourier space is limited to a modification for $\boldsymbol{k}=\boldsymbol{0}$ only, so the spectral index of each component $B_\lambda$ of the total magnetic field is the same as that of $B_{t,\lambda}$, i.e.,
\begin{equation}
\label{eq:PB}
P_{B_{\lambda}}(k)\propto k^{2-\beta_A}.
\end{equation}
Note that the resulting magnetic fields therefore only display anisotropy in the $\boldsymbol{k}=\boldsymbol{0}$ mode, and not at other scales. This is a limitation of our model, which is thus not fully consistent with observations of the magnetic field structure~\citep{planck2015-XXXV}, but it is sufficient for our purposes.
The uniform field $\boldsymbol{B}_0$ which is added to the turbulent field $\boldsymbol{B}_t$ is defined by its norm $B_0$ and a pair of angles, $\gamma_0$ and $\chi_0$, which are respectively the angle between the magnetic field and the POS, and the position angle of the projection of $\boldsymbol{B}_0$ in the POS, counted positively clockwise from the north-south direction~\citep[see figure 14 of][]{planck2014-XX}. The total magnetic field's direction in three-dimensional space is characterized by angles $\gamma$ and $\chi$ defined in the same way. The ratio of the turbulent to mean magnetic field strengths is then defined by
\begin{equation*}
y_B=\frac{\sigma_B}{B_0}=\frac{\sqrt{\left<\boldsymbol{B}_t^2\right>-\left<\boldsymbol{B}_t^{\phantom{2}}\right>^2}}{||\boldsymbol{B}_{0}||}=\frac{\sqrt{\left<\boldsymbol{B}_t^2\right>}}{||\boldsymbol{B}_{0}||}.
\end{equation*}
Fig.~\ref{fig:Bmaps} shows an example of a synthetic magnetic field $\boldsymbol{B}$ generated in this fashion, and defined on the same 120 $\times$ 120 $\times$ 120 pixels grid that was used for the gas density model described in Sec.~\ref{sec:modeldensity}. The parameters used for this specific realization are $\beta_A=5$, $y_B=1$, and $\chi_0=\gamma_0=0^\circ$. The PDFs of the components of such model magnetic fields are Gaussian, as shown in Fig.~\ref{fig:Bpdfs}, and their power spectra are power laws of the wavenumber, as shown in Fig.~\ref{fig:Bpowerspectra}, with a common spectral index $\beta_B$ that is related to the input $\beta_A$ by $\beta_B = \beta_A - 2$.
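A minimal numerical sketch of this construction (Eq.~\ref{eq:Btlambda}) builds the three fBm components of the vector potential in Fourier space and takes the curl there, which guarantees $\nabla\cdot\boldsymbol{B}_t=0$ to machine precision. The function below is our own illustration; the normalization $\mathcal{A}_0$ and the final shift/scale to $\sigma_B=5\,\mu\mathrm{G}$ are omitted:

```python
import numpy as np

def turbulent_b(shape, beta_A, seed=0):
    """Divergence-free turbulent field B_t = curl(A), each Cartesian
    component of the vector potential A being an fBm-like field of
    spectral index beta_A (illustrative sketch, our own naming)."""
    rng = np.random.default_rng(seed)
    axes = [np.fft.fftfreq(n) * n for n in shape]
    kx, ky, kz = np.meshgrid(*axes, indexing="ij")
    k = np.sqrt(kx**2 + ky**2 + kz**2)
    k[0, 0, 0] = 1.0                      # avoid division by zero at k = 0
    # Fourier transforms of the three components of A: power-law amplitudes
    # k^(-beta_A/2), random phases obtained by filtering white noise
    A = [k ** (-beta_A / 2.0) * np.fft.fftn(rng.standard_normal(shape))
         for _ in range(3)]
    # Curl in Fourier space: B~_x = i (k_y A~_z - k_z A~_y), and cyclic
    Bx = np.fft.ifftn(1j * (ky * A[2] - kz * A[1])).real
    By = np.fft.ifftn(1j * (kz * A[0] - kx * A[2])).real
    Bz = np.fft.ifftn(1j * (kx * A[1] - ky * A[0])).real
    return Bx, By, Bz

Bx, By, Bz = turbulent_b((32, 32, 32), beta_A=5.0)
```

The spectral divergence $ik_\lambda\widetilde{B_{t,\lambda}}$ of the result vanishes to rounding error, and each component has spectral index $\beta_A-2$, as in Eq.~\ref{eq:PB}.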
\subsection{Polarization maps}
\label{sec:polarmaps}
Once cubes of total gas density $n_\mathrm{H}$ and magnetic field $\boldsymbol{B}$ are available, maps of Stokes parameters $I$, $Q$, and $U$ at 353\,GHz (the frequency of the {\textit{Planck}} channel with the best signal-to-noise ratio in polarized thermal dust emission) are built by integrating along the line of sight through the simulation cubes, following the method in~\cite{planck2014-XX}:
\begin{eqnarray}
\label{eq:defI}
I_0&=&\int S_\nu\left[1-p_0\left(\cos^2\gamma-\frac{2}{3}\right)\right]\sigma_{353}\,n_\mathrm{H}\,\mathrm{d} z;\\
\label{eq:defQ}
Q_0&=&\int p_0\,S_\nu\cos\left(2\phi\right)\cos^2\gamma\,\sigma_{353}\,n_\mathrm{H}\,\mathrm{d} z;\\
\label{eq:defU}
U_0&=&\int p_0\,S_\nu\sin\left(2\phi\right)\cos^2\gamma\,\sigma_{353}\,n_\mathrm{H}\,\mathrm{d} z.
\end{eqnarray}
In these equations, we take the intrinsic polarization fraction parameter $p_0$ to be uniform, and the source function $S_\nu=B_\nu(T_\mathrm{d})$ to be that of a blackbody with an assumed uniform dust temperature $T_\mathrm{d}$. The dust opacity at this frequency, $\sigma_{353}$, is taken to vary with $N_\mathrm{H}$, following figure 20 from~\cite{planck2013-p06b} for $X_\mathrm{CO}=2\times10^{20}\,\mathrm{H_2\,cm^{-2}\,K^{-1}\,km^{-1}\,s}$, and propagating the associated errors. The order of magnitude of the dust opacity is $\sigma_{353}\approx 10^{-26}\,\mathrm{cm}^{2}$. The values of $N_\mathrm{H}$ considered in our study are typically at most a few $10^{21}\,\mathrm{cm}^{-2}$, so the optically thin limit applies in the integrals of Eqs.~\ref{eq:defI}-\ref{eq:defU}. The angle $\phi$ is the local polarization angle, which is related to the position angle\footnote{Not to be confused with the corresponding position angle $\chi_0$ of the uniform component of the magnetic field $\boldsymbol{B}_0$.} $\chi$ of the magnetic field's projection on the POS at each position on the LOS by a rotation of $90^\circ$~\citep[see definitions of angles in][]{planck2014-XX}.
The $n_\mathrm{H}$ and $\boldsymbol{B}$ cubes are built on a grid which is $132\times132$ pixels in the POS and $N_z$ pixels in the $z$ direction (that of the LOS). The cells have a physical size $\delta=0.24\,\mathrm{pc}$ in each direction, so the depth $d=N_z\,\delta$ of the cloud is a free parameter in our analysis, and the Stokes maps built from Eq.~\ref{eq:defI}-\ref{eq:defU} are $32\,\mathrm{pc}$ in both $x$ and $y$ directions.
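On a discrete grid, the LOS integrations of Eqs.~\ref{eq:defI}-\ref{eq:defU} reduce to weighted sums along the $z$ axis. The sketch below assumes uniform $S_\nu$, $p_0$, and $\sigma_{353}$ for simplicity (whereas our $\sigma_{353}$ varies with $N_\mathrm{H}$), and uses one consistent choice of angle convention for $\phi=\chi+90^\circ$ via doubled-angle identities; the exact signs depend on the adopted axis conventions:

```python
import numpy as np

def stokes_maps(n_H, B, S_nu=1.0, p0=0.2, sigma=1e-26, dz=1.0):
    """Integrate the three Stokes integrals along the z axis (the LOS).
    n_H: 3D density cube; B = (Bx, By, Bz): cubes of the field components;
    S_nu, p0 and sigma are taken uniform here for simplicity (sketch)."""
    Bx, By, Bz = B
    Bperp2 = Bx**2 + By**2           # squared POS component (assumed > 0)
    cos2gamma = Bperp2 / (Bperp2 + Bz**2)   # cos^2(gamma), gamma = LOS angle
    # phi = chi + 90 deg; doubled-angle identities, one convention choice
    cos2phi = -(Bx**2 - By**2) / Bperp2
    sin2phi = -2.0 * Bx * By / Bperp2
    w = S_nu * sigma * n_H * dz      # per-cell emission weight
    I = np.sum(w * (1.0 - p0 * (cos2gamma - 2.0 / 3.0)), axis=-1)
    Q = np.sum(w * p0 * cos2phi * cos2gamma, axis=-1)
    U = np.sum(w * p0 * sin2phi * cos2gamma, axis=-1)
    return I, Q, U
```

For a uniform field lying in the POS ($\gamma=0$) and uniform density, these expressions give a polarization fraction $\sqrt{Q^2+U^2}/I = p_0/(1-p_0/3)$, a convention-independent check of the integrals.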
\subsection{Noise and beam convolution}
In order to proceed with the analysis of observational data, these model Stokes maps cannot be used directly: noise and beam convolution must be properly taken into account. Anticipating the description of the \textit{Planck}\ data we shall use as an application of the method, the 353\,GHz noise covariance matrix maps are taken directly from the {\it \textit{Planck}\ Legacy Archive}\footnote{\tt http://pla.esac.esa.int/pla/} and are part of the 2015 public release of {\textit{Planck}} data~\citep{planck2014-a01},
\begin{equation}
\boldsymbol{\Sigma}=\left(\begin{array}{ccc}
\sigma_{II} & \sigma_{IQ} & \sigma_{IU} \\
\sigma_{QI} & \sigma_{QQ} & \sigma_{QU} \\
\sigma_{UI} & \sigma_{UQ} & \sigma_{UU} \\
\end{array}\right).
\end{equation}
Noise is added to the model Stokes maps pixel by pixel, as
\begin{equation}
\label{eq:addnoise}
I_n=I_0+n_I \qquad Q_n=Q_0+n_Q \qquad U_n=U_0+n_U,
\end{equation}
where $n_I$, $n_Q$, and $n_U$ are random values drawn from a three-dimensional Gaussian distribution with zero mean, characterized by the noise covariance matrix $\boldsymbol{\Sigma}$. To preserve the properties of {\textit{Planck}} noise, the random values are drawn directly using the {\tt Healpix}~\citep{gorski_et_al_2005} covariance matrix maps and added to the simulated maps after a gnomonic projection of the region under study, in our case the Polaris Flare molecular cloud.
The resulting Stokes $I_n$, $Q_n$, and $U_n$ maps are then placed at a distance\footnote{A more recent determination of the distance to the Polaris Flare places it at 350-400\,pc~\citep{schlafly2014}. For the demonstration of the method presented here, this is not a critical issue, as the power-law power spectra underlie self-similar behaviours, so that a change of the distance can be compensated by a change in pixel size.} $D=140\,\mathrm{pc}$, so that the angular size of each pixel is about $6\arcmin$, and then convolved by a circular 15{\arcmin} full-width at half maximum (FWHM) Gaussian beam $\mathcal{B}$. To avoid edge effects, only the central $120\times120$ pixels of the convolved maps $I_\mathrm{m}=\mathcal{B}\otimes I_n$, $Q_\mathrm{m}=\mathcal{B}\otimes Q_n$, and $U_\mathrm{m}=\mathcal{B}\otimes U_n$ are retained, corresponding to a field-of-view (FoV) of approximately 12$^\circ$. With this approach, we ensure that these model maps $(I_\mathrm{m},Q_\mathrm{m},U_\mathrm{m})$ are fit to be compared to actual {\textit{Planck}} data, which we discuss in Sec.~\ref{sec:Polaris}.
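The pixel-by-pixel noise addition of Eq.~\ref{eq:addnoise} amounts to drawing correlated Gaussian triplets $(n_I,n_Q,n_U)$ from the local $3\times3$ covariance matrix, e.g., via a Cholesky factorization. The sketch below uses hypothetical flat-sky array shapes; the actual pipeline draws the noise on the {\tt Healpix} sphere before projection:

```python
import numpy as np

def add_noise(I0, Q0, U0, Sigma, seed=0):
    """Add correlated Gaussian noise (n_I, n_Q, n_U) to Stokes maps, pixel
    by pixel.  Sigma is an (ny, nx, 3, 3) array of per-pixel noise
    covariance matrices (hypothetical shape for this sketch)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)                 # Sigma = L L^T per pixel
    g = rng.standard_normal(I0.shape + (3,))      # independent unit normals
    noise = np.einsum("...ij,...j->...i", L, g)   # correlated triplets
    return I0 + noise[..., 0], Q0 + noise[..., 1], U0 + noise[..., 2]

# Toy check with a uniform covariance (sigma_II = 4, sigma_IQ = 0.5, ...)
Sig = np.broadcast_to(
    np.array([[4.0, 0.5, 0.0], [0.5, 1.0, 0.0], [0.0, 0.0, 1.0]]),
    (64, 64, 3, 3)).copy()
I_n, Q_n, U_n = add_noise(np.zeros((64, 64)), np.zeros((64, 64)),
                          np.zeros((64, 64)), Sig)
```

The sample variances and cross-correlations of the added noise then reproduce the prescribed matrix elements to within sampling error.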
\begin{table*}[htb]
\caption[]{Parameter space explored in the grid of model polarization maps.}\label{tab:priors}
\centering
\begin{tabular}{cccc} \hline \hline \\ [-1ex]
Parameter & Prior\,\tablefootmark{a} & Definition \\ [1ex] \hline \\ [-1ex]
$\beta_B$ & $\left[1,4\right]$ & Spectral index of the 3D turbulent magnetic field \\
$\beta_n$ & $\left[1,5\right]$ & Spectral index of the 3D dust density field \\
$\log_{10}{y_n}$ & $\left[-1,1\right]$ & Log of the RMS-to-mean ratio of dust density\\
$\log_{10}{y_B^{\rm POS}}$ & $\left[-1,1\right]$ & Log of the ratio of the turbulent magnetic field RMS to the mean magnetic field in the POS\\
$\chi_0$ & $\left[-90^\circ,90^\circ\right]$ & Position angle of the mean magnetic field in the POS\\
$\log_{10}\left(d/1\,\mathrm{pc}\right) $ & $\left[-0.3,1.5\right]$\,\tablefootmark{b} & Depth of the simulated cube \\
$\log_{10}\left(\langle n_\mathrm{H}\rangle/1\,\mathrm{cm}^{-3}\right) $ & $\left[1,2.7\right]$\,\tablefootmark{c} & Mean dust density \\
$T_\mathrm{d} $ & $\left[5\,\mathrm{K},200\,\mathrm{K}\right]$ & Dust temperature \\
$p_0 $ & $\left[0.01,0.5\right]$ & Intrinsic polarisation fraction parameter \\
[1ex] \hline \\ [-1ex]
\end{tabular}
\tablefoot{\tablefoottext{a}{Priors are assumed to be flat in the given range for the parametrization given in this table, and zero outside, except $\chi_0$, for which a $180^\circ$ periodicity is applied when the Metropolis algorithm draws values outside the given range.}\tablefoottext{b}{Corresponding to a cube depth in the interval $\left[0.5\,\mathrm{pc},32.5\,\mathrm{pc}\right]$.}\tablefoottext{c}{Corresponding to a density in the interval $\left[10\,\mathrm{cm^{-3}},500\,\mathrm{cm^{-3}}\right]$.}}
\end{table*}
\section{Observables}
\label{sec:observables}
\begin{figure*}[htbp]
\centerline{
\includegraphics[width=0.43\textwidth,trim=0 0 0 20,clip=true]{{Stokes_map_132x132x40_b2.3_A4.6_Y0.6_YB1.0_c-50.0_g0.0_nH30.0_T18.0_a0.2_Beam15_Reso6_NoisePolaris_I-eps-converted-to}.pdf}
\includegraphics[width=0.49\textwidth,trim=-20 -10 0 0,clip=true]{{Stokes_map_132x132x40_b2.3_A4.6_Y0.6_YB1.0_c-50.0_g0.0_nH30.0_T18.0_a0.2_Beam15_Reso6_NoisePolaris_I_PS-eps-converted-to}.pdf}
}
\centerline{
\includegraphics[width=0.43\textwidth,trim=0 0 0 20,clip=true]{{Stokes_map_132x132x40_b2.3_A4.6_Y0.6_YB1.0_c-50.0_g0.0_nH30.0_T18.0_a0.2_Beam15_Reso6_NoisePolaris_Q-eps-converted-to}.pdf}
\includegraphics[width=0.49\textwidth,trim=-20 -10 0 0,clip=true]{{Stokes_map_132x132x40_b2.3_A4.6_Y0.6_YB1.0_c-50.0_g0.0_nH30.0_T18.0_a0.2_Beam15_Reso6_NoisePolaris_Q_PS-eps-converted-to}.pdf}
}
\centerline{
\includegraphics[width=0.43\textwidth,trim=0 0 0 20,clip=true]{{Stokes_map_132x132x40_b2.3_A4.6_Y0.6_YB1.0_c-50.0_g0.0_nH30.0_T18.0_a0.2_Beam15_Reso6_NoisePolaris_U-eps-converted-to}.pdf}
\includegraphics[width=0.49\textwidth,trim=-20 -10 0 0,clip=true]{{Stokes_map_132x132x40_b2.3_A4.6_Y0.6_YB1.0_c-50.0_g0.0_nH30.0_T18.0_a0.2_Beam15_Reso6_NoisePolaris_U_PS-eps-converted-to}.pdf}
}
\caption{{\it Left column:} Example maps (from top to bottom: total intensity $I_\mathrm{m}$ on a logarithmic scale, Stokes $Q_\mathrm{m}$, and Stokes $U_\mathrm{m}$) from simulation A (see Sec.~\ref{sec:results} and Table~\ref{tab:results_sim}). {\it Right column:} Corresponding power spectra. The gray points represent the two-dimensional power spectra, the black dots represent the azimuthal averages in Fourier space in a set of wavenumber bins, and the blue line is a power-law fit to the black points.}
\label{fig_maps_and_ps_simA}
\end{figure*}
\begin{figure*}[htbp]
\resizebox{\hsize}{!}{
\includegraphics{{triangle_plot_simA-eps-converted-to}.pdf}
}
\caption{Constraints (posterior probability contours and marginalized PDFs) on the statistical properties of the dust density and magnetic field for the simulation A maps. On the posterior probability contours, the filled dark and light blue regions respectively enclose 68.3\% and 95.4\% of the probability, the black stars indicate the averages over the two-dimensional posterior PDFs, and the red circles indicate the input values for the simulation. In the plots showing the marginalized posterior PDFs, the light blue regions enclose 68.3\% of the probability, the dashed blue lines indicate the averages over the posterior PDFs, and the solid red lines indicate the input values for the simulation. The upper right plot displays the correlation matrix between the fitted parameters.}
\label{fig_triangle_simA}
\end{figure*}
From the model maps above, we build an ensemble of derived maps, starting with the normalized Stokes maps
$$
i=\frac{I_\mathrm{m}}{\langle I_\mathrm{m}\rangle} \qquad q=\frac{Q_\mathrm{m}}{I_\mathrm{m}} \qquad u=\frac{U_\mathrm{m}}{I_\mathrm{m}},
$$
where $\langle I_\mathrm{m}\rangle$ is the spatial average of the model Stokes $I_\mathrm{m}$ map. We then define the polarization fraction. Since our models include noise, we should not use the ``na\"ive'' estimator~\citep{montier-et-al-2015a,montier-et-al-2015b}
\begin{equation*}
p=\frac{\sqrt{Q_\mathrm{m}^2+U_\mathrm{m}^2}}{I_\mathrm{m}},
\end{equation*}
but rather the modified asymptotic (MAS) estimator proposed by~\cite{plaszczynski_et_al_14}
\begin{equation}
p_\mathrm{MAS}=p-b^2\frac{\displaystyle 1-e^{-p^2/b^2}}{2p},
\end{equation}
where the noise bias parameter $b^2$ derives from the elements of the noise covariance matrix $\boldsymbol{\Sigma}$~\citep[see][]{montier-et-al-2015b}. Next, we define the polarization angle
$$
\psi=\frac{1}{2}\mathrm{atan}{\left(U_\mathrm{m},Q_\mathrm{m}\right)},
$$
where the two-argument $\mathrm{atan}$ function lifts the $\pi$-degeneracy of the usual $\mathrm{atan}$ function. Note that this expression means that the polarization angle is defined in the {\tt Healpix} convention.
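Both estimators are straightforward to implement. The sketch below takes the noise-bias parameter $b^2$ as a given input rather than deriving it from $\boldsymbol{\Sigma}$:

```python
import numpy as np

def p_mas(I, Q, U, b2):
    """Modified asymptotic (MAS) polarization-fraction estimator of
    Plaszczynski et al. (2014); b2 is the noise-bias parameter, here
    passed in directly instead of computed from the covariance matrix."""
    p = np.sqrt(Q**2 + U**2) / I
    return p - b2 * (1.0 - np.exp(-p**2 / b2)) / (2.0 * p)

def psi_angle(Q, U):
    """Polarization angle from the two-argument arctangent, which lifts
    the pi-degeneracy of the usual arctangent."""
    return 0.5 * np.arctan2(U, Q)
```

In the low-noise limit ($b^2\ll p^2$) the debiasing correction tends to $b^2/2p$ and $p_\mathrm{MAS}$ approaches the na\"ive estimator, as expected.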
We also build maps of the polarization angle dispersion function $\mathcal{S}$~\citep{planck2014-XIX,planck2014-XX,alina-et-al-2016}, which quantifies the local dispersion of polarization angles at a given lag $\delta$ and is defined by
$$
\mathcal{S}(\boldsymbol{r},\delta)=\sqrt{\frac{1}{\mathcal{N}}\sum_{i=1}^\mathcal{N}\left[\psi\left(\boldsymbol{r}+\boldsymbol{\delta}_i\right)-\psi\left(\boldsymbol{r}\right)\right]^2},
$$
where the sum is performed over the $\mathcal{N}$ pixels $\boldsymbol{r}+\boldsymbol{\delta}_i$ whose distance to the central pixel $\boldsymbol{r}$ lies between $\delta/2$ and $3\delta/2$. For the sake of consistency with the analysis performed on simulated polarization maps in~\cite{planck2014-XX}, we take $\delta=16\arcmin$.
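A simple pixel-grid implementation of $\mathcal{S}$ is sketched below; it wraps angle differences into $[-90^\circ,90^\circ]$ and uses periodic boundaries via array rolls for brevity, both simplifications of the actual analysis:

```python
import numpy as np

def dispersion_function(psi, lag):
    """Polarization-angle dispersion function S(r, delta): RMS of the
    wrapped angle differences over the annulus delta/2 < |d| <= 3*delta/2
    around each pixel; lag in pixels (simplified sketch with periodic
    boundaries)."""
    r = int(np.ceil(1.5 * lag))
    offsets = [(dy, dx)
               for dy in range(-r, r + 1) for dx in range(-r, r + 1)
               if lag / 2.0 < np.hypot(dx, dy) <= 1.5 * lag]
    S2 = np.zeros_like(psi)
    for dy, dx in offsets:
        shifted = np.roll(np.roll(psi, dy, axis=0), dx, axis=1)
        dpsi = shifted - psi
        # wrap the angle differences into [-pi/2, pi/2]
        dpsi = (dpsi + np.pi / 2) % np.pi - np.pi / 2
        S2 += dpsi**2
    return np.sqrt(S2 / len(offsets))
```

A uniform angle map yields $\mathcal{S}=0$ everywhere, while a purely random angle map yields values near the $\pi/\sqrt{12}$ saturation level.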
Finally we build the column density and optical depth $\tau_{353}$ maps from the dust density cube using
\begin{equation*}
N_\mathrm{H} = \int n_\mathrm{H}\,\mathrm{d} z \quad\text{ and }\quad \tau_{353} = \sigma_{353}\left(N_\mathrm{H}\right) \times N_\mathrm{H}
\end{equation*}
with the $\sigma_{353}\left(N_\mathrm{H}\right)$ conversion\footnote{The conversion factor is given for a map of $N_\mathrm{H}$ at a resolution of $30\arcmin$. Thus, before applying it pixel by pixel, we smooth the simulated $N_\mathrm{H}$ map to $30\arcmin$ resolution, apply the conversion, then resample the resulting $\tau_{353}$ map at the original resolution.} from \cite{planck2013-p06b}. A temperature map $T_{\rm obs}$ is also created using the anti-correlation with the column density $N_\mathrm{H}$ observed in the data. This ``dust temperature map'' does not pretend to model reality, but since $T_\mathrm{d}$ is one of the model parameters, the fitting algorithm requires a map whose mean value yields $T_\mathrm{d}$.
\section{Exploring the parameter space}
\label{sec:method}
The goal of this paper is to constrain the physical parameters of molecular clouds, in particular the spectral indices of the dust density and of the turbulent magnetic field, using {\textit{Planck}} polarization maps and a grid of model maps built as explained in the previous section.
\subsection{Parameter space}
The nine physical parameters that are explored in this paper using fBm simulations are summarized in Table~\ref{tab:priors}. They are sufficient to describe the one-point and two-point statistical properties of the dust density and magnetic field models. Note that unlike what was done in~\cite{planck2016-XLIV}, the field of view of the maps analysed in the following (approximately $12^\circ$) is too small to contain distinctive features that could be used to constrain the angle $\gamma_0$ that the mean magnetic field makes with the POS. In such small fields of view, there is a degeneracy between $y_B$ and $\gamma_0$ which cannot be lifted. Consequently, we chose not to fit for $\gamma_0$ and $y_B$ separately, but rather for the ratio of the turbulent magnetic field RMS to the mean magnetic field in the POS, i.e.,
$$
y_B^{\rm POS} = \frac{\sigma_B}{B_0^{\mathrm{POS}}}=\frac{\sqrt{\left<\boldsymbol{B}_t^2\right>}}{||\boldsymbol{B}_{0}||\cos\gamma_0} = \frac{y_B}{\cos\gamma_0}.
$$
This analysis was applied to the Polaris Flare (see Sec.~\ref{sec:Polaris}), so the priors are chosen to be flat over a reasonably large range covering the expected physical values of the molecular cloud under consideration; such bounds are nonetheless necessary for the analysis to converge. The cloud's average column density is of the order of $\langle N_\mathrm{H}\rangle\approx 10^{21}\,\mathrm{cm}^{-2}$~\citep{planck2014-XX}. This value was used to set the range for the prior on the depth $d$ of the cube, with the limits of this range chosen in such a way that the average gas density $\langle n_\mathrm{H}\rangle$ lies between 10 and $500\,\mathrm{cm}^{-3}$, a reasonable assumption for the Polaris Flare molecular cloud. This translates to a total cube depth $d$ between 0.5 and $32.5\,\mathrm{pc}$. The range used for the prior on $\beta_n$ is justified by a number of observational studies~\citep[see, e.g., the review by][]{hennebelle-falgarone-2012}, and that on $\beta_B$ is chosen based on the results from~\cite{vansyngel-et-al-2017}, but also on numerical studies of MHD turbulence~\citep[see, e.g.,][]{perez-et-al-2012,beresnyak-2014}. The fluctuation ratios $y_n$ and $y_B^{\rm{POS}}$ are explored on a logarithmic scale, as we are mainly interested in order-of-magnitude estimates for these parameters. Since the polarization maps are statistically identical when the angle $\chi_0$ of the POS projection of the mean magnetic field is shifted by $180^\circ$, the prior on this parameter is such that this periodicity is applied when the Metropolis algorithm (see Sec.~\ref{sec:MCMC}) draws values outside the given range. The priors chosen for $T_\mathrm{d}$ and $p_0$ are very broad and do not play a role in the fitting procedure.
\subsection{Comparing models with data}\label{sec:chi2}
To set constraints on the parameters listed in Table~\ref{tab:priors}, we build a likelihood function, which expresses the probability that a given set of synthetic polarization maps adequately reproduces actual observational data. From the model Stokes maps $I_\mathrm{m}$, $Q_\mathrm{m}$, and $U_\mathrm{m}$, we derive a set of observables that are used in the likelihood function. These observables are given in Table~\ref{tab:observables}. More precisely, we use i) the mean values of the optical depth $\tau_{353}$ and the dust temperature $T_\mathrm{obs}$, ii) the distribution functions (one-point statistics) of the $I_\mathrm{m}$, $Q_\mathrm{m}$, $U_\mathrm{m}$, $p_\mathrm{MAS}$, $\psi$, $\mathcal{S}$, and $\tau_{353}/\left\langle \tau_{353}\right\rangle$ maps, iii) the power spectra (two-point statistics) of the $I_\mathrm{m}$, $Q_\mathrm{m}$, and $U_\mathrm{m}$ maps, and iv) the pixel-by-pixel anti-correlation between $\mathcal{S}$ and $p_\mathrm{MAS}$ highlighted by~\cite{planck2014-XIX}. Indeed, we have found that the shape of this two-dimensional distribution function also depends on the model parameters. Many other observables were tested, but we have retained only those which bring constraints on the model parameters.
\begin{table}[htb]
\caption[]{Observables from polarization maps used to fit data.}
\label{tab:observables}
\begin{center}
\begin{tabular}{cc} \hline \hline \\ [-1ex]
Type & From \\ [1ex] \hline \\ [-1ex]
Mean values & $\tau_{353}$, $T_\mathrm{obs}$ \\
Distribution function & $I_\mathrm{m}$, $Q_\mathrm{m}$, $U_\mathrm{m}$, $p_\mathrm{MAS}$, $\psi$, $\mathcal{S}$, $\tau_{353}/\left\langle \tau_{353}\right\rangle$ \\
Power spectrum & $I_\mathrm{m}$, $Q_\mathrm{m}$, $U_\mathrm{m}$ \\
Correlation & $\left\lbrace\mathcal{S},p_\mathrm{MAS}\right\rbrace$\\ [1ex] \hline \\ [-1ex]
\end{tabular}
\end{center}
\end{table}
On the simulation side, $N_r=60$ model realizations per set of parameter values are generated with their observables, to be compared with data. The $N_r$ models differ by the random phases $\phi_X$ and $\phi_{A_\lambda}$ used to build the dust density and magnetic field cubes (see Eqs.~\ref{eq:Xfbm} and~\ref{eq:Btlambda}), and by the random realization of the noise applied to the model (Eq.~\ref{eq:addnoise}). We checked that 60 simulations represent a large enough sample to obtain robust averages and dispersions for the observables. The statistical properties of the observables derived from the observational polarization data are then compared with the observables from those 60 models, through the evaluation of a parameter $D^2$ which quantifies the distance between the data and one random realization of the model, averaged over the $N_r$ random realizations, with contributions associated with the various observables listed in Table~\ref{tab:observables}, i.e.,
\begin{equation}
\label{eq:chi2tot}
D^2 = \frac{1}{N_r}\sum_{i=1}^{N_{\rm r}} \left[ D^2_{ \mu} + \sum_{o}D^2_{{\rm DF}(o)} + \sum_{o} D^2_{P(o)} + D^2_{\mathcal{S}-p_\mathrm{MAS}}\right].
\end{equation}
This quantity is subtly different from the usual $\chi^2$ (see Appendices~\ref{sec:appendix:L2distance} and~\ref{sec:appendix:chi2}). The first term in Eq.~\ref{eq:chi2tot} covers the observables $\left\langle \tau_{353}\right\rangle$ and $\left\langle T_\mathrm{obs}\right\rangle$, and quantifies the difference between these values in the simulated maps and in the data. The second sum extends over the observable maps $o$ in the set $\left\{I_\mathrm{m},Q_\mathrm{m},U_\mathrm{m},p_\mathrm{MAS},\psi,\mathcal{S},\tau_{353}/\left\langle \tau_{353}\right\rangle\right\}$ and quantifies the difference between the distribution functions (DF) of these observables in synthetic maps and those of the same observables in the data. The third sum extends over the observable maps $o$ in the set $\left\{I_\mathrm{m},Q_\mathrm{m},U_\mathrm{m}\right\}$ and quantifies the difference between the power spectra of the simulated maps and those of the same maps in the data. Finally, the last term quantifies the discrepancy between the two-dimensional joint DFs of $\mathcal{S}$ and $p_\mathrm{MAS}$ in the data and in synthetic maps. We detail the computation of these various terms in Appendix~\ref{sec:appendix:L2distance}.
\subsection{MCMC chains}
\label{sec:MCMC}
Given the vast parameter space to explore, we built a Markov chain Monte Carlo (MCMC) method~\citep[see, e.g.][]{MCMC}, which has the advantage of preferentially sampling the regions of interest in this space. We used a simple Metropolis-Hastings algorithm to build five Markov chains which sample the posterior probability distribution of the parameters listed in Table~\ref{tab:priors}. The likelihood $\mathcal{L}$ of a set $s$ of parameters is evaluated using the $D^2$ criterion described in section~\ref{sec:chi2} and Appendix~\ref{sec:appendix:L2distance} as
\begin{equation}
\mathcal{L}(s) \propto e^{-D^2(s)/2}\,\pi(s)
\end{equation}
with $\pi(s)$ the prior associated with the parameters.
According to the Metropolis-Hastings algorithm, at each step $q$ of the chain, parameters are drawn from a multivariate probability distribution function whose covariance is set to allow for an efficient exploration of the parameter space, and whose mean is given by the parameter values $s_{q-1}$ at step $q-1$. If the likelihood for the new set of parameters, $s_q$, is larger than for the previous one, then the chain records the new set. Otherwise, the likelihood ratio $\mathcal{L}\left(s_q\right)/\mathcal{L}\left(s_{q-1}\right)< 1$ is compared to a number $\alpha$ drawn randomly from a uniform distribution over $[0,1]$. If the likelihood ratio is larger than $\alpha$, the $s_q$ set of parameters is kept; otherwise the chain duplicates the $s_{q-1}$ set, i.e., $s_q=s_{q-1}$. The posterior probability distribution function is then given by the occurrence frequency of the parameters along the chains, after removal of the initial ``burn-in'' phase.
The priors used for each parameter are detailed in Table~\ref{tab:priors}. For all parameters, flat priors are set covering a reasonable range of physical interest. If the Metropolis algorithm draws values outside these ranges we set $\pi(s)=0$, except for the position angle of the mean magnetic field, $\chi_0$, for which the $180^\circ$ periodicity is used to bring the angle back inside its definition range whenever it is drawn outside.
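The Metropolis-Hastings update described above, with flat priors and the $180^\circ$ wrapping of $\chi_0$, can be sketched as follows. The likelihood, step sizes, bounds and starting point below are toy assumptions for illustration, not the ones used in the actual fit (which involves nine parameters and the full $D^2$ evaluation).

```python
import numpy as np

rng = np.random.default_rng(1)

def wrap_chi0(chi0):
    """Map the mean-field position angle back into [-90, 90) degrees,
    using its 180-degree periodicity."""
    return (chi0 + 90.0) % 180.0 - 90.0

def log_prior(s, bounds):
    """Flat priors: log(pi) = 0 inside the bounds, -inf outside."""
    inside = all(lo <= s[k] <= hi for k, (lo, hi) in bounds.items())
    return 0.0 if inside else -np.inf

def metropolis_hastings(log_like, bounds, step, s0, n_steps=2000):
    """Minimal Metropolis-Hastings chain with independent Gaussian proposals."""
    chain = [dict(s0)]
    lp0 = log_like(s0) + log_prior(s0, bounds)
    for _ in range(n_steps):
        s = {k: v + rng.normal(0.0, step[k]) for k, v in chain[-1].items()}
        s["chi0"] = wrap_chi0(s["chi0"])          # periodic parameter
        lp = log_like(s) + log_prior(s, bounds)
        # accept if better, or with probability L(s_q)/L(s_{q-1}) otherwise
        if np.log(rng.uniform()) < lp - lp0:
            chain.append(s); lp0 = lp
        else:
            chain.append(dict(chain[-1]))         # duplicate the previous set
    return chain

# toy target: a Gaussian likelihood in beta_B only, flat in chi0
log_like = lambda s: -0.5 * (s["beta_B"] - 2.8) ** 2 / 0.04
bounds = {"beta_B": (0.0, 5.0), "chi0": (-90.0, 90.0)}
chain = metropolis_hastings(log_like, bounds, {"beta_B": 0.3, "chi0": 5.0},
                            {"beta_B": 2.0, "chi0": -69.0})
burn = chain[len(chain) // 3:]                    # drop the burn-in phase
print(np.mean([s["beta_B"] for s in burn]))       # should sit near 2.8
```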
The convergence of the Markov chains is tested using the Gelman-Rubin statistic $R$ \citep{gelman1992}, which is essentially the ratio of the variance of the chain means to the mean of the chain variances. We consider that the chains have converged when $R-1<0.03$ for the least-converged parameter. The convergence is also assessed by visually inspecting the evolution of $D^2$ and of the parameters along the chains.
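A minimal implementation of this convergence diagnostic, assuming the common Gelman-Rubin form (pooled posterior-variance estimate over within-chain variance), is sketched below; the mock chains stand in for the five chains of the fit.

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin statistic R for one parameter, given several chains of
    equal length (burn-in already removed). chains has shape (m, n)."""
    chains = np.asarray(chains, float)
    m, n = chains.shape
    means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()    # mean of within-chain variances
    B = n * means.var(ddof=1)                # n times variance of chain means
    var_post = (n - 1) / n * W + B / n       # pooled variance estimate
    return np.sqrt(var_post / W)

rng = np.random.default_rng(2)
chains = rng.normal(2.8, 0.2, size=(5, 4000))   # 5 well-mixed mock chains
print(gelman_rubin(chains))                      # close to 1 when converged
```

Chains that have not mixed (e.g., stuck around different means) give $R-1$ well above the 0.03 threshold.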
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.33\textwidth]{{Polaris_120_27_0_132x132_Beam15_Reso6_I-eps-converted-to}.pdf}
\includegraphics[width=0.33\textwidth]{{Polaris_120_27_0_132x132_Beam15_Reso6_Q-eps-converted-to}.pdf}
\includegraphics[width=0.33\textwidth]{{Polaris_120_27_0_132x132_Beam15_Reso6_U-eps-converted-to}.pdf}\\
\includegraphics[width=0.33\textwidth]{{Polaris_120_27_0_132x132_Beam15_Reso6_PoverI-eps-converted-to}.pdf}
\includegraphics[width=0.33\textwidth]{{Polaris_120_27_0_132x132_Beam15_Reso6_Psi-eps-converted-to}.pdf}
\includegraphics[width=0.33\textwidth]{{Polaris_120_27_0_132x132_Beam15_Reso6_DeltaPsi_R16-eps-converted-to}.pdf}
\caption{{\textit{Planck}} 353\,GHz maps of the Polaris Flare molecular cloud. The top row shows, from left to right, the total intensity $I_{353}$ on a logarithmic scale, the Stokes $Q_{353}$ map and the Stokes $U_{353}$ map, while the bottom row shows the polarization fraction $p_{\rm MAS}$, the polarization angle $\psi$, and the polarization angle dispersion function $\mathcal{S}$. The $\tau_{353}$ and $T_{\rm obs}$ maps have the same aspects as the $I_{353}$ map but with their own scales.}
\label{fig_polaris_map}
\end{figure*}
\begin{figure*}[htbp]
\resizebox{\hsize}{!}{
\includegraphics{{triangle_plot-eps-converted-to}.pdf}
}
\caption{Constraints (posterior probability contours and marginalized PDFs) on the statistical properties of the dust density and magnetic field for the {\textit{Planck}} maps of the Polaris Flare. On the posterior probability contours, the filled dark and light blue regions respectively enclose 68.3\% and 95.4\% of the probability, and the black stars indicate the averages over the two-dimensional posterior PDFs. In the plots showing the marginalized posterior PDFs, the light blue regions enclose 68.3\% of the probability, and the dashed blue lines indicate the averages over the posterior PDFs. The upper right plot displays the correlation matrix between the fitted parameters.}
\label{fig_triangle}
\end{figure*}
\begin{table*}
\caption{Best fit values from four fBm simulations using the observables from Table~\ref{tab:observables}. The column $\left<\chi^2_{\rm{best}}\right>$ shows the $\chi^2$ values for the best fit parameters averaged over 100 fits (see Appendix~\ref{sec:appendix:chi2}).}
\label{tab:results_sim}
\begin{center}
\resizebox{\textwidth}{!}{\begin{tabular}{ccccccccccc} \hline \hline \\ [-1ex]
Parameters & $\beta_B$ & $\beta_n$ & $\log_{10}y_n$ & $\log_{10} y_B^{\rm POS}$ & $\chi_0$ [$^\circ$] & $\log_{10}\left(\frac{d}{1\,\mathrm{pc}}\right)$ & $\log_{10}\left(\frac{\langle n_\mathrm{H}\rangle}{1\,\mathrm{cm}^{-3}}\right)$ & $T_\mathrm{d}$ [K] & $p_0$ & $\left<\chi^2_{\rm{best}}\right>$\\ [1ex] \hline \\ [-1ex]
\multicolumn{11}{c}{\bf Simulation A} \\ [1ex]
Input parameters & 2.6 & 2.08 & -0.10 & -0.22 & $-50$ & 1.00 & 1.48 & 18.0 & 0.12 \\ [1ex]
Best fit values & $2.8^{+0.2}_{-0.2}$ & $1.9^{+0.3}_{-0.2}$ & $0.03^{+0.15}_{-0.15}$ & $-0.26^{+0.05}_{-0.05}$ & $-50^{+2}_{-2}$ & $1.2^{+0.2}_{-0.2}$ & $1.3^{+0.1}_{-0.2}$ & $18.0^{+0.5}_{-0.5}$ & $0.11^{+0.02}_{-0.03}$ & 1.3 \\ [1ex] \hline \\ [-1ex]
\multicolumn{11}{c}{\bf Simulation B} \\ [1ex]
Input parameters & 2.6 & 2.09 & -0.10 & -0.22 & $-70$ & 0.70 & 2.00 & 20.0 & 0.15 \\ [1ex]
Best fit values & $2.7^{+0.1}_{-0.2}$ & $1.9^{+0.3}_{-0.2}$ & $-0.01^{+0.13}_{-0.20}$ & $-0.24^{+0.03}_{-0.04}$ & $-70^{+2}_{-2}$ & $0.8^{+0.2}_{-0.3}$ & $2.0^{+0.3}_{-0.2}$ & $20.0^{+0.5}_{-0.5}$ & $0.13^{+0.02}_{-0.02}$ & 1.7\\ [1ex] \hline \\ [-1ex]
\multicolumn{11}{c}{\bf Simulation C} \\ [1ex]
Input parameters & 3.0 & 2.8 & 0.0 & -0.10 & $-30$ & 1.18 & 1.30 & 22.0 & 0.2 \\ [1ex]
Best fit values & $2.8^{+0.1}_{-0.2}$ & $2.6^{+0.3}_{-0.2}$ & $0.00^{+0.10}_{-0.11}$ & $-0.08^{+0.03}_{-0.04}$ & $-30^{+3}_{-4}$ & $1.1^{+0.2}_{-0.2}$ & $1.4^{+0.2}_{-0.2}$ & $22.1^{+0.5}_{-0.5}$ & $0.21^{+0.03}_{-0.03}$ & 1.8\\ [1ex] \hline \\ [-1ex]
\multicolumn{11}{c}{\bf Simulation D} \\ [1ex]
Input parameters & 2.0 & 1.87 & -0.22 & -0.10 & $-10$ & 0.70 & 2.18 & 16.0 & 0.1 \\ [1ex]
Best fit values & $2.2^{+0.2}_{-0.2}$ & $1.7^{+0.3}_{-0.3}$& $-0.1^{+0.2}_{-0.2}$ & $-0.08^{+0.06}_{-0.05}$ & $-8^{+2}_{-2}$ & $1.0^{+0.3}_{-0.3}$ & $1.9^{+0.3}_{-0.3}$ & $16.0^{+0.5}_{-0.5}$ & $0.11^{+0.03}_{-0.03}$ & 0.7 \\ [1ex] \hline \\ [-1ex]
\end{tabular}}
\end{center}
\end{table*}
The obtained 9D posterior probability distribution is generally not a multivariate Gaussian distribution. To quote an estimate of the best fit value for any one of the nine parameters and the associated uncertainties, we first marginalize over the other eight parameters to obtain the one-dimensional posterior PDF for the remaining parameter. In the following, the quoted best fit value for a parameter is the mean over this posterior PDF (which is less sensitive to binning effects than the maximum likelihood). As the PDFs are usually not Gaussian, we quote asymmetric error bars following the minimum credible interval technique \citep[see, e.g.,][]{hamann2007}.
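Given samples of a marginalized posterior, the quoted value and asymmetric error bars can be obtained as sketched below: the mean of the samples, and the shortest interval containing 68.3\% of them (one common implementation of the minimum credible interval; the mock skewed posterior is purely illustrative).

```python
import numpy as np

def mean_and_min_credible_interval(samples, frac=0.683):
    """Posterior mean and the shortest (minimum-width) interval containing
    a fraction `frac` of the samples; asymmetric error bars follow."""
    x = np.sort(np.asarray(samples, float))
    k = int(np.ceil(frac * x.size))              # samples inside the interval
    widths = x[k - 1:] - x[:x.size - k + 1]      # width of each candidate window
    i = np.argmin(widths)                        # start of the shortest window
    mean = x.mean()
    lo, hi = x[i], x[i + k - 1]
    return mean, mean - lo, hi - mean            # value, -err, +err

rng = np.random.default_rng(3)
# mock skewed marginalized posterior, centered near 1.7 for illustration
samples = 1.1 + 0.3 * rng.standard_gamma(2.0, 100_000)
m, err_lo, err_hi = mean_and_min_credible_interval(samples)
print(f"{m:.1f} -{err_lo:.1f} +{err_hi:.1f}")
```

For a skewed PDF the two error bars differ, which is exactly why asymmetric uncertainties are quoted.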
\section{Results}
\label{sec:results}
\subsection{Validation of the method}
\label{sec:validation}
To validate the fitting method, we simulated four sets of model cubes and computed the corresponding $I_\mathrm{m}$, $Q_\mathrm{m}$, and $U_\mathrm{m}$ maps, including noise, with different values of the input parameters (these are labelled simulations A, B, C and D hereafter). The MCMC fitting procedure was run on these mock polarization data sets to check whether it was able to recover the statistical properties of the input dust density and magnetic field cubes through the selected observables. The results are presented in Table~\ref{tab:results_sim}. For the four sets of maps, the fitting method recovered the input values within the quoted uncertainties, after the convergence criteria for all the chains were reached\footnote{And after removing the burn-in phase, which is quite short in our case ($\lesssim 30\%$ of the chain lengths in general).}. This shows that this choice of observables is relevant to extract the input values from polarized thermal dust emission data within our model. To assess the goodness of fit of the model to the data, we use an {\it a posteriori} $\chi^2$ test, as explained in Appendix~\ref{sec:appendix:chi2}. In all four cases, we find that the match is very good, since $\left<\chi^2_{\rm{best}}\right>\approx 1$. For illustration, the posterior probability contours for simulation A are presented in Figure~\ref{fig_triangle_simA}. We note that the MCMC procedure reveals correlations between the model parameters, which is not unexpected, e.g., between $y_B^\mathrm{POS}$ and $p_0$, or between $y_n$ and $\langle n_\mathrm{H}\rangle$. These trends are best visualized with the correlation matrix, shown in the upper right corner of Fig.~\ref{fig_triangle_simA}.
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.33\textwidth]{{Stokes_map_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_nH40.0_T17.5_a0.12_Beam15_Reso6_NoisePolaris_I-eps-converted-to}.pdf}
\includegraphics[width=0.33\textwidth]{{Stokes_map_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_nH40.0_T17.5_a0.12_Beam15_Reso6_NoisePolaris_Q-eps-converted-to}.pdf}
\includegraphics[width=0.33\textwidth]{{Stokes_map_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_nH40.0_T17.5_a0.12_Beam15_Reso6_NoisePolaris_U-eps-converted-to}.pdf}\\
\includegraphics[width=0.33\textwidth]{{Stokes_map_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_nH40.0_T17.5_a0.12_Beam15_Reso6_NoisePolaris_PoverI-eps-converted-to}.pdf}
\includegraphics[width=0.33\textwidth]{{Stokes_map_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_nH40.0_T17.5_a0.12_Beam15_Reso6_NoisePolaris_Psi-eps-converted-to}.pdf}
\includegraphics[width=0.33\textwidth]{{Stokes_map_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_nH40.0_T17.5_a0.12_Beam15_Reso6_NoisePolaris_DeltaPsi_R16-eps-converted-to}.pdf}
\caption{Same as Fig.~\ref{fig_polaris_map} with the same color scales, but for model maps using the best fitting parameters to the Polaris Flare data.}
\label{fig_best_fit_map}
\end{figure*}
\begin{figure*}[htbp]
\centerline{
\includegraphics[width=0.4\textwidth]{{Stokes_Maps_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_HistI_PDF-eps-converted-to}.pdf}
\includegraphics[width=0.4\textwidth]{{Stokes_Maps_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_HistQ_PDF-eps-converted-to}.pdf}
}
\centerline{
\includegraphics[width=0.4\textwidth]{{Stokes_Maps_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_HistU_PDF-eps-converted-to}.pdf}
\includegraphics[width=0.4\textwidth]{{Stokes_Maps_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_HistPoverI_PDF-eps-converted-to}.pdf}
}
\centerline{
\includegraphics[width=0.4\textwidth]{{Stokes_Maps_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_HistPsi_PDF-eps-converted-to}.pdf}
\includegraphics[width=0.4\textwidth]{{Stokes_Maps_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_HistDeltaPsi_PDF-eps-converted-to}.pdf}
}
\centerline{
\includegraphics[width=0.4\textwidth]{{Stokes_Maps_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_HistTau353_PDF-eps-converted-to}.pdf}
}
\caption{Comparison of the DFs extracted from the {\textit{Planck}} Polaris Flare maps (black points) with the observables computed from simulations using the best fitting parameters (blue curves). The latter curves are averaged over 60 realizations, as described in section~\ref{sec:chi2}: the average is given by the central blue curve and the shaded bands give the $1\sigma$ and $2\sigma$ standard deviations in each bin.}
\label{fig_best_fit_pdf}
\end{figure*}
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.33\textwidth]{{Stokes_Maps_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_PSI-eps-converted-to}.pdf}
\includegraphics[width=0.33\textwidth]{{Stokes_Maps_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_PSQ-eps-converted-to}.pdf}
\includegraphics[width=0.33\textwidth]{{Stokes_Maps_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_PSU-eps-converted-to}.pdf} \\
\caption{Comparison of the $I_\mathrm{m}$, $Q_\mathrm{m}$ and $U_\mathrm{m}$ power spectra extracted from the {\textit{Planck}} Polaris Flare maps (gray points representing the two-dimensional power spectra, and black dots representing the azimuthal averages in Fourier space) with the observables computed from simulations using the best fitting parameters (blue curves). The latter curves are averaged over 60 realizations, as described in section~\ref{sec:chi2}: the average is given by the central blue curve and the narrow shaded bands give the $1\sigma$ and $2\sigma$ standard deviations in each bin.}
\label{fig_best_fit_ps}
\end{figure*}
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.33\textwidth]{{Stokes_Maps_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_correlation_data-eps-converted-to}.pdf}
\includegraphics[width=0.33\textwidth]{{Stokes_Maps_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_correlation_sim-eps-converted-to}.pdf}
\includegraphics[width=0.33\textwidth]{{Stokes_Maps_132x132x50_b2.3_A4.8_Y1.65_YB0.64_c-69.0_g0.0_correlation_residuals-eps-converted-to}.pdf}
\caption{Two-dimensional distribution function of $\mathcal{S}$ and polarization fraction $p_\mathrm{MAS}$ for the Polaris Flare maps (left), for the model maps using the best fitting parameters averaged over 60 realizations (middle), and residuals (right). The polarization angle dispersion function $\mathcal{S}$ is computed at a lag $\delta=16\arcmin$. The solid black line shows the mean $\mathcal{S}$ for each bin in $p_\mathrm{MAS}$ and the dashed black line is a linear fit of that curve, restricted to bins in $p_\mathrm{MAS}$ which contain at least 1\% of the total number of points (120$\times$120).}
\label{fig_best_fit_corr}
\end{figure*}
\subsection{Application to the Polaris Flare}
\label{sec:Polaris}
As an application of our method, we wish to constrain statistical properties of the turbulent magnetic field in the Polaris Flare, a diffuse, highly dynamical, non-star-forming molecular cloud. There are several reasons for choosing this particular field. First, it has been widely observed: the structures of matter were studied in dust thermal continuum emission by, e.g.,~\cite{miville-deschenes-et-al-2010}; the velocity field of the molecular gas was studied down to very small scales through CO rotational lines~\citep{falgarone-et-al-1998,hily-blant-et-al-2009}; and the magnetic field was probed by optical stellar polarization data in~\cite{panopoulou-et-al-2016}. Second, as this field does not show signs of star formation, the dynamics of the gas and dust are presumably dominated by magnetized turbulence processes, without contamination by feedback from young stellar objects (YSOs). It therefore seems an ideal test case for our method.
To this aim, we use the full-mission \textit{Planck}\ maps of Stokes parameters $(I_{353},Q_{353},U_{353})$ at 353\,GHz and, as already mentioned, the associated covariance matrices from the {\textit{Planck}} Legacy Archive. We also use the thermal dust model maps $\tau_{353}$ and $T_{\rm obs}$ from the 2013 public release~\citep{planck2013-p06b}. All maps are at a native $4\arcmin.8$ resolution in the {\tt Healpix} format with $N_\mathrm{side}=2048$, and the Polaris Flare maps are obtained by projecting these onto a Cartesian grid with 6$\arcmin$ pixels, centered on Galactic coordinates $(l,b)=(120^\circ,27^\circ)$, with a field of view $\Delta l=\Delta b=12^\circ$. The maps of $I_{353}$, $Q_{353}$, $U_{353}$, and $\tau_{353}$ are then smoothed using a circular Gaussian beam, to obtain maps at a 15$\arcmin$ FWHM resolution. The covariance matrix maps are computed at the same resolution, using a set of Monte-Carlo simulations of pure noise maps, drawn from the original full-resolution covariance maps and smoothed at 15\arcmin. The maps of $I_{353}$, $Q_{353}$, $U_{353}$, $p_\mathrm{MAS}$, $\psi$, and $\mathcal{S}$ obtained in this way are shown in Fig.~\ref{fig_polaris_map}.
Note that the features of simulated Stokes maps are not located in the same regions as in the Polaris Flare maps. As the noise covariance matrices are the same for all the simulated maps, this means that the signal-to-noise ratio per pixel in the model $I_{\rm m}$, $Q_{\rm m}$ and $U_{\rm m}$ maps is different for each set of parameters. However, the MCMC procedure is able to choose the parameter sets that give signal-to-noise ratios similar to those in the {\textit{Planck}} maps.
\begin{table*}
\caption{Best fit values for the {\textit{Planck}} Polaris Flare maps, using the observables from Table~\ref{tab:observables}. The column $\left<\chi^2_{\rm{best}}\right>$ shows the $\chi^2$ values for the best fit parameters averaged over 100 fits (see Appendix~\ref{sec:appendix:chi2}).}
\label{tab:results}
\begin{center}
\resizebox{\textwidth}{!}{\begin{tabular}{ccccccccccc} \hline \hline \\ [-1ex]
Parameters & $\beta_B$ & $\beta_n$ & $\log_{10}y_n$ & $\log_{10} y_B^{\rm POS}$ & $\chi_0$ [$^\circ$] & $\log_{10}\left(\frac{d}{1\,\mathrm{pc}}\right)$ & $\log_{10}\left(\frac{\langle n_\mathrm{H}\rangle}{1\,\mathrm{cm}^{-3}}\right)$ & $T_\mathrm{d}$ [K] & $p_0$ & $\left<\chi^2_{\rm{best}}\right>$ \\ [1ex] \hline \\ [-1ex]
Best fit values & $2.8^{+0.2}_{-0.2}$ & $1.7^{+0.4}_{-0.3}$ & $0.2^{+0.2}_{-0.2}$ & $-0.19^{+0.04}_{-0.04}$ & $-69^{+2}_{-3}$ & $1.1^{+0.3}_{-0.2}$ & $1.6^{+0.2}_{-0.3}$ & $17.5^{+0.5}_{-0.5}$ & $0.12^{+0.02}_{-0.02}$ & 2.9\\ [1ex] \hline \\ [-1ex]
\end{tabular}}
\end{center}
\end{table*}
The results of the analysis of the {\textit{Planck}} polarized thermal dust emission data towards the Polaris Flare are presented in Table~\ref{tab:results}, and the posterior probability distribution contours are shown in Figure~\ref{fig_triangle}. We find in particular that the spectral index of the turbulent component of the magnetic field is $\beta_B=2.8\pm 0.2$, and that the spectral index of the dust density field is around $\beta_n=1.7$, with a rather large uncertainty. The fluctuation ratio of the density field is about unity, $y_n\approx 1.6$, and the magnitude of the large scale magnetic field in the POS slightly dominates the RMS of the turbulent component, $y_B^{\rm POS}\approx 0.65$, with a position angle $\chi_0 \approx -69^\circ$. The constraint on the depth of the cloud seems to indicate that $d\approx 13\,\mathrm{pc}$, with $\langle n_\mathrm{H}\rangle\approx 40\,\mathrm{cm}^{-3}$. The temperature $T_\mathrm{d}$ is 17.5\,K, equal to the average of the $T_{\rm obs}$ {\textit{Planck}} map, and the polarization fraction is $p_0\approx 0.12$. The parameter set for simulation A was chosen a posteriori to give similar best fit parameters and to test our likelihood method in the conditions driven by the Polaris Flare data.
Using the best fitting parameters from Table~\ref{tab:results} we performed simulations to visually check the agreement between the model and {\textit{Planck}} data. Figure~\ref{fig_best_fit_map} shows the polarization maps from a simulation using these best fitting parameters. The overall similarity with the data maps from Figure~\ref{fig_polaris_map} is reasonably good, although spatially coherent structures appear in the data maps which cannot be reproduced by the model maps. The agreement between the best fitting simulation and the data is quantified through plots of the different observables that were used in the fitting procedure (Figs.~\ref{fig_best_fit_pdf}, \ref{fig_best_fit_ps} and \ref{fig_best_fit_corr}). The agreement is excellent for most observables, although substantial deviations are visible in the DFs of the intensity $I_{\rm m}$, of the normalized optical depth $\tau_{353}/\left\langle\tau_{353}\right\rangle$ and of the polarization angle $\psi$. These deviations are due to the simplifying assumptions of our fBm model. It may be that in the Polaris Flare the large scale magnetic field has two superimposed major components, with global orientations $\chi_0 \approx -70^\circ$ and $\chi_0 \approx 50^\circ$. Note that the DF in Fig.~\ref{fig_best_fit_pdf} is that of the $\psi$ angle, which differs from $\chi_0$ by 90$^\circ$. Also, the exponentiation procedure used to model the dust density field is a good but incomplete approximation of reality, and cannot fully reproduce the shapes of the $I_{\rm m}$ and $\tau_{353}/\left\langle\tau_{353}\right\rangle$ DFs simultaneously. These deviations impact the reduced best fit $\left<\chi^2_{\rm{best}}\right> \approx 2.9$, which is somewhat larger than for mock data ($\approx 1$), but still reasonably good.
Concerning the mean values used as observables, the Polaris Flare has a mean optical depth of $\left\langle\tau_{353}\right\rangle = \left(1.25 \pm 0.05 \right)\times 10^{-5}$ and a mean temperature of $\left\langle T_{\rm obs} \right\rangle = 17.5 \pm 0.4$\,K. Using the best fitting parameters from Table~\ref{tab:results} we obtain optical depth maps with an average of $\left\langle\tau_{353}\right\rangle = \left(1.82 \pm 0.05 \right)\times 10^{-5}$ over 60 realizations, which is in tension with the data value, as mentioned above for the $\tau_{353}/\left\langle\tau_{353}\right\rangle$ discrepancy. However, the best fitting temperature, $T_\mathrm{d} = 17.5\pm 0.5$\,K, is exactly the same as in the data, with a width reflecting the data uncertainties.
The observables we used to extract the statistical properties of the Polaris Flare field are by themselves unable to constrain the angle $\gamma_0$ of the large scale magnetic field with respect to the POS. However, the \cite{planck2016-XLIV} analysis was able to fit the $\chi_0$ and $\gamma_0$ angles in the southern Galactic cap and found an intrinsic polarization fraction $p_\mathrm{int}\approx 0.26$. If this latter value also holds in the Polaris Flare, then it is related to our fit through $p_0 \approx p_\mathrm{int} \left\langle\cos^2 \gamma \right\rangle \approx p_\mathrm{int} \cos^2 \gamma_0$. We can thus constrain the $\gamma_0$ angle to be around $45^\circ$.
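The arithmetic behind this last estimate can be checked directly from the relation quoted above, inverting $p_0 \approx p_\mathrm{int}\cos^2\gamma_0$ with the quoted values $p_0\approx 0.12$ and $p_\mathrm{int}\approx 0.26$:

```python
import numpy as np

p0, p_int = 0.12, 0.26   # best-fit p0 and the Planck XLIV intrinsic value
# invert p0 = p_int * cos^2(gamma0)
gamma0 = np.degrees(np.arccos(np.sqrt(p0 / p_int)))
print(gamma0)            # ~47 deg, close to the ~45 deg quoted in the text
```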
\section{Discussion and summary}
\label{sec:discussion}
We have presented an analysis framework for maps of polarized thermal dust emission in the diffuse ISM aimed at constraining the statistical properties of the dust density and magnetic field responsible for this emission. Our framework rests on a set of synthetic models for the dust density and magnetic field, for which we precisely control the one- and two-point statistics, and on a least-squares analysis in which the space of parameters is explored via a MCMC method. The application of the method to {\textit{Planck}} maps of the Polaris Flare molecular cloud leads to a spectral index of the turbulent component of the magnetic field $\beta_B=2.8\pm 0.2$, which is in very good agreement with the findings of~\cite{planck2016-XLIV} and~\cite{vansyngel-et-al-2017}, who used a very different approach over a much larger fraction of the sky. The dust density field exhibits a much flatter spectrum, $\beta_n=1.7$. This latter exponent is remarkably close to the Kolmogorov index for the velocity field in incompressible hydrodynamical turbulence, but this comparison should be taken with caution, as closer examination of the power spectrum of
the model density field shows a spectral break with an exponent closer to 2.2 at the largest scales ($k\lesssim 1\,\mathrm{pc}^{-1}$) while the smaller scales ($k\gtrsim 1\,\mathrm{pc}^{-1}$) have a 1.7 exponent\footnote{Incidentally, from the \textit{Planck}\ maps, we can measure the spectral index of the total intensity for the Polaris Flare to be $\beta_I=2.84\pm0.10$, in excellent agreement with the measurement by~\cite{stutzki-et-al-1998} on CO integrated emission at a similar angular resolution.}. What is clear is that the magnetic field power spectrum is much steeper, which underlines the role that the large scale magnetic field plays in the structure of polarized emission maps. We find that the fluctuation ratio of the dust density field and the ratio of turbulent-to-uniform magnetic field are both around unity. Finally, our analysis is able to give a constraint on the polarization fraction, $p_0 \approx 0.12$, and on the depth of the Polaris Flare molecular cloud, $d\approx 13\,\mathrm{pc}$, which is about half the transverse extent of the field-of-view, with $\langle n_\mathrm{H}\rangle \approx 40 \,\mathrm{cm}^{-3}$. The good visual agreement between the Polaris Flare maps and model maps for the best-fitting parameters (Figs.~\ref{fig_polaris_map} and~\ref{fig_best_fit_map}), and the excellent agreement between the two sets of maps for most of the observables used in the analysis (Figs.~\ref{fig_best_fit_pdf}, \ref{fig_best_fit_ps} and \ref{fig_best_fit_corr}), all lead us to conclude that our fBm-based model, although limited, provides a reasonable description of the magnetized, turbulent, diffuse ISM.
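The interplay between phase-randomized, power-law fields and the spectral indices quoted above can be illustrated with a small numerical experiment. The sketch below is a simplified two-dimensional analogue of the fBm construction (the paper uses 3D cubes, cf. Eq.~\ref{eq:Xfbm}): a field is built with a prescribed power-law spectrum $P(k)\propto k^{-\beta}$ and random phases, and $\beta$ is then recovered from a log-log fit of its power spectrum.

```python
import numpy as np

rng = np.random.default_rng(4)

def fbm_2d(n, beta):
    """2D fBm-like Gaussian field with power spectrum P(k) ~ k^-beta,
    built from a deterministic power-law amplitude and random phases."""
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.hypot(kx, ky)
    k[0, 0] = np.inf                         # no power in the mean mode
    amp = k ** (-beta / 2.0)
    phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
    field = np.fft.ifft2(amp * np.exp(1j * phase)).real
    return field / field.std()

def spectral_index(field):
    """Recover beta from a log-log fit of the 2D power spectrum,
    avoiding the largest and smallest scales."""
    n = field.shape[0]
    ps = np.abs(np.fft.fft2(field)) ** 2
    k = np.hypot(np.fft.fftfreq(n)[:, None], np.fft.fftfreq(n)[None, :])
    mask = (k > 2.0 / n) & (k < 0.25)
    slope, _ = np.polyfit(np.log(k[mask]), np.log(ps[mask]), 1)
    return -slope

f = fbm_2d(256, beta=2.8)
print(spectral_index(f))    # recovered index, close to the input 2.8
```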
In fact, it is quite remarkable to find such a good agreement with the data, considering the limitations of the model. First, it is statistically isotropic, and therefore cannot reproduce the interstellar filamentary structures observed at many scales and over a large range in column densities~\citep[see, e.g.][]{miville-deschenes-et-al-2010,arzoumanian-et-al-2011}. Second, our model dust density and magnetic fields are completely uncorrelated, which is clearly not realistic, as it was found that there is a preferential relative orientation between structures of matter and magnetic field, both in molecular clouds~\citep{planck2015-XXXV} and in the diffuse, high-latitude sky~\citep{planck2014-XXXII,planck2015-XXXVIII}. The change in relative orientation, from mostly parallel to mostly perpendicular, as the total gas column density $N_\mathrm{H}$ increases, is also not reproducible with our fully-synthetic models. Third, it is now commonly acknowledged that two-point statistics
such as power spectra are not sufficient to properly describe the structure of interstellar matter. Improving our synthetic models along these three directions will be the subject of future work.
For completeness, we have also looked into applying our MCMC approach based on fBm models to synthetic polarization maps built from a numerical simulation of MHD turbulence. We used simulation cubes from {\tt http://www.mhdturbulence.com}~\citep{cho-lazarian-2003,burkhart-et-al-2009,burkhart-et-al-2014}, basing our choice on the simulation parameters, which seemed roughly consistent with the parameters found for the Polaris Flare data. We built simulated Stokes $I$, $Q$, and $U$ maps using the same resolution and noise parameters, and launched the MCMC analysis on these simulated Stokes maps. It turns out that the Markov chains have a much harder time converging than when applying the method to the {\textit{Planck}} data. It is not yet completely clear why that is so, but we suspect that part of the reason may lie in the limited range of spatial scales over which the fields in the MHD simulation can be accurately described by scale-invariant processes. Indeed, while the fBm models exhibit power-law power spectra over the full range of accessible scales (basically one decade in our case), the MHD simulations are hampered by effects of numerical dissipation at small scales (possibly over nearly 10 pixels), and the properties at large scales depend on the forcing, which is user-defined. The data, on the other hand, exhibit a much larger ``inertial range''. In that respect, our fBm models, despite all their drawbacks, and despite the fact that they lack the physically realistic content of MHD simulations, provide a better framework for assessing the statistical properties of the {\textit{Planck}} data than current MHD simulations can. Of course, this conclusion is based on just one simulation, and it would certainly be worthwhile to apply the MCMC approach to assess various MHD simulations against the observational data, based on the same observables, but independently of the grid of fBm models.
This project, however, is clearly beyond the scope of this paper.
\begin{acknowledgements}
We gratefully acknowledge fruitful discussions with S. Plaszczynski and O. Perdereau.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}\label{intro}
The concept of entropy has been widely used in the physics
literature, but it has also been applied in information theory.
An important example of a recent development combining both fields
of research is given by quantum information theory \cite{NieChu10}.
In this field, quantum versions of information measures play a key
role. The quantum counterpart of the Shannon entropy is the von
Neumann entropy~\cite{vNeu27,Holik-Plastino-Saenz-VNF,Watanabe}.
Other measures have also been adapted to the quantum realm in
different
contexts~\cite{Ren61,Tsa88,CanRos02,HuYe06,Kan02,SR-Paper}. Entropic
measures are important in several fields of research. They find
applications in the study of:
\begin{itemize}
\item uncertainty measures (as is the case in the study of
uncertainty relations~\cite{Uff90,ZozBos13,ZozBos14}),
\item different formulations of the MaxEnt
principle~\cite{Jaynes-Book,HeinQL,QuantalEffectsandMaxEnt,HolikIJGMMP},
\item entanglement measuring and
detection~\cite{HorHor94,AbeRaj01:01,TsaLlo01,RosCas03,BenZyc06,Hua13,OurHam15},
\item measures of mutual
information~\cite{Yeu97,ZhaYeu98,Car13,GroWal13,Watanabe2},
\item the theory of quantum coding and quantum information
transmission~\cite{SR-Paper,AhlLob01,Watanabe}.
\end{itemize}
In the theory of classical information measures, Salicr\'u
$(h,\phi)$-entropies~\cite{SalMen93} are, to date, the most
general extension, containing the Shannon~\cite{Sha48},
R\'enyi~\cite{Ren61} and Tsallis~\cite{Tsa88} entropies as
particular cases.
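The classical $(h,\phi)$-entropies take the form $H_{(h,\phi)}(p)=h\left(\sum_i \phi(p_i)\right)$ for an entropic functional $h$ and a component function $\phi$. The following sketch (in Python, purely for illustration; the $(h,\phi)$ pairs below are the standard choices, assumed here rather than quoted from~\cite{SalMen93}) shows how the Shannon, R\'enyi and Tsallis entropies arise as particular cases.

```python
import numpy as np

def hphi_entropy(p, h, phi):
    """Salicru (h,phi)-entropy: H = h(sum_i phi(p_i))."""
    p = np.asarray(p, float)
    return h(np.sum(phi(p)))

p = np.array([0.5, 0.25, 0.25])   # a probability vector

# Shannon (nats): h(x) = x, phi(t) = -t log t
shannon = hphi_entropy(p, lambda x: x, lambda t: -t * np.log(t))

# Renyi of order alpha: h(x) = log(x) / (1 - alpha), phi(t) = t**alpha
alpha = 2.0
renyi = hphi_entropy(p, lambda x: np.log(x) / (1 - alpha),
                     lambda t: t ** alpha)

# Tsallis of order q: h(x) = (x - 1) / (1 - q), phi(t) = t**q
q = 2.0
tsallis = hphi_entropy(p, lambda x: (x - 1) / (1 - q), lambda t: t ** q)

print(shannon, renyi, tsallis)
```

For this distribution the three special cases evaluate to $\approx 1.040$, $\approx 0.981$ and $0.625$, respectively, each recovered from the same $(h,\phi)$ functional by a different choice of the pair.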
A finite dimensional quantum version of the $(h,\phi)$-entropies was
advanced and thoroughly studied in~\cite{QINP-paper}. A
generalization of the $(h,\phi)$-entropies to arbitrary finite
dimensional probabilistic models was introduced
in~\cite{Entropy-PaperGeneralized}
(see~\cite{Ohya-1989,Entropy-generalized-II,HeinQL} for
generalizations of more restricted families of entropic measures to
different frameworks). In this short paper we extend the previous
definitions of $(h,\phi)$-entropies so as to include infinite
dimensional models.
The paper is organized as follows. In
Section~\ref{ProbabilisticModels} we introduce preliminary notions
of generalized probabilistic models and decomposition theory.
In Section~\ref{s:Review} we discuss the classical formulation
of $(h,\phi)$-entropies and provide a definition of quantum
$(h,\phi)$-entropies that includes infinite dimensional
models. In Section~\ref{s:GeneralizedEntropies}, we define
$(h,\phi)$-entropies for probabilistic theories whose states form
compact convex sets. Section~\ref{s:Conclusions} is devoted to
some concluding remarks.
\section{Probabilistic models}\label{ProbabilisticModels}
The description of quantum mechanical systems makes use of a family
of probabilistic models that can be radically different from those
originating in classical statistical theories. It is easy to show
that both quantum and classical state spaces are convex
sets~\cite{Entropy-PaperGeneralized}. Indeed, this result is much
more general: in the approach to physical theories based on von
Neumann
algebras~\cite{HalvorsonARQFT,RedeiSummersQuantumProbability} the
sets of states are also convex. The canonical example of a von
Neumann algebra is given by the set $\mathcal{B}(\mathcal{H})$ of bounded operators
acting on a separable Hilbert space $\mathcal{H}$. Due to von Neumann's
double commutant
theorem~\cite{RedeiSummersQuantumProbability}\footnote{Given a
subset $M \subseteq
\mathcal{B}(\mathcal{H})$, the commutant of $M$ is defined as $M' = \{A \in \mathcal{B}(\mathcal{H}) \: | \: AB -
BA = 0 , \:\forall \, B \in M\}$.}, it is possible to define a von
Neumann algebra as a $\ast$-subalgebra\footnote{For bounded
operators the $\ast$
operation means just taking the adjoint of a given operator (i.e., $A^\ast :=
A^\dag$). Thus, the condition ``$\ast$-subalgebra'' reads ``is a subalgebra
that is closed under the adjoint operation''.} $\mathcal{W} \subseteq \mathcal{B}(\mathcal{H})$
satisfying $\mathcal{W}'' = \mathcal{W}$~\cite{Yngvason2005-TypeIIIFactors}. $\mathcal{B}(\mathcal{H})$
is not the only example of a von Neumann algebra. By
appealing to a dimension function, irreducible von Neumann
algebras can be classified in terms of factors of Type I, II and
III~\cite{HalvorsonARQFT}. Only Type I factors appear in standard
quantum mechanics: the set of matrices of a complex finite
dimensional Hilbert space and $\mathcal{B}(\mathcal{H})$ (in the infinite dimensional
case), are examples of Type I factors. But other factors may appear
in the study of models of quantum mechanics involving infinitely
many degrees of freedom (as is the case in quantum field
theory~\cite{HalvorsonARQFT,Yngvason2005-TypeIIIFactors} and quantum
statistical mechanics~\cite{Bratteli}). A commutative von
Neumann algebra can be used to describe the algebra of observables
of a classical probabilistic theory. States in general von Neumann
algebras are defined in the standard way: a state $\nu: \mathcal{W}
\longrightarrow \mathbb{C}$ is a continuous positive linear functional
such that $\nu(\mathbf{I})=1$, with $\mathbf{I}$ the identity
operator over $\mathcal{W}$. Positivity means that $\nu\left(A^\ast A\right)
\geq 0$ for all $A \in \mathcal{W}$.
All von Neumann algebras are particular examples of
C$^\ast$-algebras (see, e.g.,~\cite{Bratteli}). A
\emph{C$^\ast$-algebra} $\mathcal{M}$ is defined as a complex Banach algebra
endowed with an $\ast$ involution satisfying $(\alpha a + \beta
b)^\ast = \bar{\alpha} a^\ast + \bar{\beta} b^\ast$, $(ab)^\ast =
b^\ast a^\ast$ and $\|a a^\ast\| = \|a\| \|a^\ast\|$, for all $a,b
\in \mathcal{M}$ and $\alpha, \beta \in \mathbb{C}$. All C$^\ast$-algebras can
be represented as $\ast$-subalgebras of $\mathcal{B}(\mathcal{H})$, closed under the
norm operator topology. It is possible to show that, if the algebra
$\mathcal{M}$ is unital, then the set of states $\mathcal{C}(\mathcal{M})$ is convex and
compact (in the weak$^\ast$ topology \cite[Chap.~2]{Bratteli}).
Furthermore, due to the Krein-Milman theorem \cite[Chap.~1]{Phelps},
the state space of a unital C$^\ast$-algebra $\mathcal{M}$ is the
weak$^\ast$ closed convex hull of its extreme points $\mathcal{E}(\mathcal{C}(\mathcal{M}))$.
Thus, in the rest of this paper, we will assume that the state
spaces of the probabilistic models are compact convex subsets of a
locally compact topological vector space. Notice that this
assumption includes quantum theories (standard, statistical and
relativistic) and classical theories as well, as particular
cases. We denote by $\mathcal{C}$ the set of states of a given probabilistic
model. The physical interpretation of the convexity assumption is
that, given two states of the system, we should always be able to
form a convex combination of them, representing a statistical
mixture. Convex sets play a key role in the formal
structure of quantum
theory~\cite{Holik-Ciancaglini,Holik-Zuberman}. The approach to
quantum theories based on convex sets dates back (at least) to the
works of B. Mielnik~\cite{Mielnik} and G. Ludwig~\cite{Ludwig}.
Recently, the operational approach based on convex sets
has attracted much attention, related to the search for
operational and informational axioms characterizing quantum theory
(see for
example~\cite{Entropy-PaperGeneralized,Entropy-generalized-II} and
references therein).
The extreme points of the state space are termed \emph{pure} states,
while other states are known as \emph{mixed} ones. As is well
known, for the case of an arbitrary (compact) convex set of
states $\mathcal{C}$ in finite dimensions, each state $\nu\in\mathcal{C}$ can be
written as a convex combination of its extreme points. This is
indeed the case in finite dimensional quantum and classical
models~\cite{Entropy-PaperGeneralized}. In other words, for each
state $\nu$, there exists a finite collection of extreme states
$\{\nu_i\}^{n}_{1}$ such that $\nu$ can be written as
\begin{equation} \label{e:Decomposition}
\nu = \sum^{n}_{i=1} p_i \nu_i,
\end{equation}
\noindent where $p_{i}\geq 0$ and $\sum^{n}_{i=1}p_i=1$. The state
space of a (finite-dimensional) classical model will be a
$d$-dimensional simplex, which can be defined as the convex hull of
$d+1$ affinely independent points. In such a simplex, a point can be expressed as a unique
convex combination of its extreme points. It is remarkable that, for
Abelian C$^\ast$-algebras the state space is a simplex
(see~\cite[Vol.~1, Chap.~4]{Bratteli} and \cite[Chap.~10]{Phelps}
for more discussion on uniqueness of representing measures). Thus,
the decomposition in terms of extreme points will also be unique.
This characteristic feature of classical (commutative) theories no
longer holds in quantum models. Indeed, even in the case of standard
quantum mechanics of finite dimensional models, there are infinite
ways to express a mixed state as a convex combination of extreme
states.
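To make this non-uniqueness concrete, here is a small Python sketch (added for illustration; the helper names \texttt{outer} and \texttt{mix} are ours, not from the text) showing that the maximally mixed qubit state arises from two different convex mixtures of pure states:

```python
# Illustrative sketch (not from the paper): the maximally mixed qubit
# state I/2 admits many decompositions into pure states.

def outer(v):
    """Rank-one projector |v><v| for a 2-component vector v."""
    return [[v[i] * v[j].conjugate() for j in range(2)] for i in range(2)]

def mix(weights, states):
    """Convex combination sum_i p_i |v_i><v_i| as a 2x2 matrix."""
    rho = [[0j, 0j], [0j, 0j]]
    for p, v in zip(weights, states):
        proj = outer(v)
        for i in range(2):
            for j in range(2):
                rho[i][j] += p * proj[i][j]
    return rho

s = 2 ** -0.5
rho1 = mix([0.5, 0.5], [(1, 0), (0, 1)])    # equal mixture of |0>, |1>
rho2 = mix([0.5, 0.5], [(s, s), (s, -s)])   # equal mixture of |+>, |->
# Both decompositions yield the same density matrix I/2.
```

Both \texttt{rho1} and \texttt{rho2} equal $\tfrac{1}{2}\mathbf{I}$, even though the underlying mixtures of pure states are different.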
In a more general theory described by a compact convex set $\mathcal{C}$,
the decomposition of a given state in terms of the set $\mathcal{E}(\mathcal{C})$ of
extreme points of $\mathcal{C}$ is more involved (see~\cite{Bratteli,Alfsen}
for details). Given $\omega\in\mathcal{C}$ the goal is to build a
decomposition of the form
\begin{equation}
\omega(a) = \int d\mu(\omega')\omega'(a)
\end{equation}
\noindent where $\mu$ is a measure over $\mathcal{C}$ supported by the
extremal points of $\mathcal{C}$ and $\omega$ is considered as a
functional. This theory is related to the theory of barycentric
decompositions in compact convex sets: given a normalized Radon
measure in $\mathcal{C}$, its associated barycenter $b(\mu)$ will be given
by
\begin{equation}
b(\mu)=\int d\mu(\omega)\omega
\end{equation}
Given a C$^\ast$-algebra $\mathcal{M}$ and a (weak$^\ast$) compact convex
subset $\mathcal{S} \subseteq \mathcal{C}(\mathcal{M})$, it turns out that for every state
$\omega \in \mathcal{S}$, there exists a maximal\footnote{An order ``$\leq$''
is introduced for the measures in $M_{+}(\mathcal{C})$ as follows:
$\mu\leq\nu$ if and only if $\mu(f)\leq\nu(f)$ for all real
continuous convex functions $f$. A measure $\mu$ is said to be
\emph{maximal} with respect to ``$\leq$'' if, for all $\nu$ satisfying
$\nu\geq\mu$, we have $\nu=\mu$ \cite[Vol.~1, Chap.~4]{Bratteli}.}
measure $\mu$, \textit{pseudosupported}\footnote{Given a compact
convex set $\mathcal{C}$, a measure $\mu$ is pseudosupported by the set of
its extreme points $\mathcal{E}(\mathcal{C})$, if for each Baire set $B\subseteq \mathcal{C}$
satisfying $B\cap\mathcal{E}(\mathcal{C})=\emptyset$, we have $\mu(B)=0$ \cite[Vol.~1,
Chap.~4]{Bratteli}.
} in $\mathcal{E}(\mathcal{C}(\mathcal{M}))$~\cite{Ohya-1989,Bratteli}, such that
\begin{equation}
\omega=\int d\mu(\omega')\omega'
\end{equation}
The above result is much more general: it is valid for arbitrary
compact convex subsets of locally convex spaces (cf.
\cite[Chap.~4]{Phelps} and \cite[Vol.~1, Chap.~4]{Bratteli}). As
usual in noncommutative models, the above decomposition is not
unique. For a given state $\omega$, we denote by $M_\omega(\mathcal{C})$ the
set of all such measures.
\section{$(h,\phi)$-entropies}\label{s:Review}
In this section we discuss entropic measures in the context of
standard quantum mechanics (i.e., we restrict our study to the case
of Type I factors) and return to the general setting in
Section~\ref{s:GeneralizedEntropies}. The $(h,\phi)$-entropies were
introduced by Salicr\'u \textit{et al.}~\cite{SalMen93} as follows:
\begin{definition} \label{def:Salicru} Let us consider
an $N$-dimensional probability vector \ $p=[p_1 \: \cdots \:
p_{N}] \in [0,1]^N$ with $\sum_{i=1}^{N} p_i = 1$. The so-called
$(h,\phi)$-entropies are defined as
\begin{equation} \label{eq:SalicruEnt}
H_{(h,\phi)}(p) = h\left( \sum_{i=1}^{N} \phi(p_i) \right),
\end{equation}
where the \textit{entropic functionals} $h: \mathbb{R} \mapsto \mathbb{R}$
and $\phi: [0,1] \mapsto \mathbb{R}$ are such that either: (i) $h$ is
increasing and $\phi$ is concave, or (ii) $h$ is decreasing and
$\phi$ is convex. In both cases, we restrict $\phi$ to be
strictly concave/convex and $h$ to be strictly monotone, together
with $\phi(0) = 0$ and $h(\phi(1)) = 0$.
\end{definition}
\noindent The family of $(h,\phi)$-entropies~\eqref{eq:SalicruEnt}
includes, as particular cases, the Shannon~\cite{Sha48},
R\'enyi~\cite{Ren61},
Havrda--Charv\'at--Tsallis~\cite{HavCha67,Dar70,Tsa88},
unified~\cite{Rat91} and Kaniadakis~\cite{Kan02} entropies. In
the infinite dimensional context, the above definition
extends naturally, with the sum taken over $i \in \mathbb{N}$,
provided the sum converges (otherwise, by convention, the entropy is
set to $+\infty$).
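As a sanity check of Definition~\ref{def:Salicru}, the following Python sketch (our illustration; the entropic functionals are the standard textbook choices) instantiates the Shannon, R\'enyi and Tsallis entropies as $(h,\phi)$-pairs:

```python
import math

def salicru_entropy(p, h, phi):
    """H_{(h,phi)}(p) = h(sum_i phi(p_i)) for a probability vector p."""
    assert abs(sum(p) - 1.0) < 1e-12
    return h(sum(phi(pi) for pi in p))

# Shannon: h(x) = x, phi(t) = -t ln t (with phi(0) = 0)
def shannon(p):
    return salicru_entropy(p, h=lambda x: x,
                           phi=lambda t: -t * math.log(t) if t > 0 else 0.0)

# Renyi of order q > 1: h(x) = ln(x)/(1-q) (decreasing), phi(t) = t^q (convex)
def renyi(p, q):
    return salicru_entropy(p, h=lambda x: math.log(x) / (1 - q),
                           phi=lambda t: t ** q)

# Tsallis of index q > 1: h(x) = (x - 1)/(1 - q), phi(t) = t^q
def tsallis(p, q):
    return salicru_entropy(p, h=lambda x: (x - 1) / (1 - q),
                           phi=lambda t: t ** q)

uniform = [0.25] * 4
print(shannon(uniform))       # ln 4
print(renyi(uniform, 2.0))    # also ln 4 for the uniform distribution
print(tsallis(uniform, 2.0))  # 1 - 1/4 = 0.75
```

Note that for $q>1$ the R\'enyi and Tsallis pairs fall under case (ii) of the definition: $\phi(t)=t^q$ is strictly convex and $h$ is strictly decreasing.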
In reference~\cite{QINP-paper}, a quantum mechanical version of
the $(h,\phi)$-entropies was introduced and some of their general
properties were discussed. But this definition was restricted to
finite dimensional quantum models. In what follows we advance a
definition for the infinite dimensional case. We first need to
introduce the following concepts. Let us denote the set of
Hilbert--Schmidt operators acting on $\mathcal{H}$ by $\mathcal{B}_{HS}:= \{T \in
\mathcal{B}(\mathcal{H}): \operatorname{Tr}\left( T^\dag T \right) < \infty \}$ \cite{Holik-Zuberman}. As
is well known, the set $\mathcal{B}_{HS}$ endowed with the inner product
$\braket{T_1,T_2} = \operatorname{Tr}\left( T_2^\dag T_1 \right)$ is a Hilbert
space. For $T \in \mathcal{B}(\mathcal{H})$ the absolute value of $T$ is defined by
$|T|= \left( T^\dag T \right)^{\frac12}$. The subspace formed by the
trace class operators is defined as $\mathcal{B}_1(\mathcal{H}) = \left\{ T \in\mathcal{B}(\mathcal{H}):
|T|^{\frac12} \in \mathcal{B}_{HS}\right\}$. \emph{Quantum states} can be
defined as positive trace class operators of trace one (also called
\emph{density operators}). Now we can define the quantum
$(h,\phi)$-entropies.
\begin{definition} \label{def:QuantumSalicru} Let us
consider a quantum system described by a density operator $\rho$
(i.e., a positive trace class operator of trace one) acting on a
Hilbert space $\mathcal{H}$. The quantum $(h,\phi)$-entropies are defined as
\begin{equation} \label{eq:QuantumSalicru}
\mathbf{H}_{(h,\phi)}(\rho) = h\left( \operatorname{Tr} \phi(\rho) \right) ,
\end{equation}
where the \textit{entropic functionals} \ $h: \mathbb{R} \mapsto \mathbb{R}$
\ and \ $\phi:[0,1] \mapsto \mathbb{R}$ \ are such that either: \ (i)
$h$ is strictly increasing and $\phi$ is strictly concave, or (ii)
$h$ is strictly decreasing and $\phi$ is strictly convex. We impose
$\phi(0)=0$ and $h(\phi(1))=0$ and we take the convention
$\mathbf{H}_{(h,\phi)}(\rho)=+\infty$ whenever $\sum_{i\in\mathbb{N}}\phi(p_i)$ is
not convergent. Here $\{ p_i \}_{i\in\mathbb{N}}$ is the sequence of
eigenvalues of the spectral decomposition of $\rho$, sorted in
decreasing order and counted with their respective multiplicities.
\end{definition}
\noindent The last convention of this definition is justified as
follows. Every positive trace class operator $\rho$ of trace one
admits a spectral decomposition of the form $\sum_{i \in
\mathbb{N}} p_i \mathbf{P}_i$, where $\{ \mathbf{P}_i\}_{i\in\mathbb{N}}$
is a family of projection operators\footnote{Notice that the
spectral decomposition can be easily rewritten in terms of rank
one projections as $\rho = \sum_{i \in \mathbb{N}}s_i
\ketbra{\phi_i}{\phi_i}$, with $\sum_{i \in \mathbb{N}} s_i = 1$, $s_i
\geq 0$ and $s_i \geq s_{i+1}$. This is known as the
\emph{Schatten decomposition} of $\rho$ (see,
e.g.,~\cite{Ohya-1989}).}. Thus, $\phi(\rho)
=\sum_{i\in\mathbb{N}}\phi(p_i)\mathbf{P}_i$. But then, $\mathbf{H}_{(h,\phi)}(\rho)$
exists only if $h\left( \operatorname{Tr}(\phi(\rho)) \right) = h\left(\sum_{i
\in \mathbb{N}}\phi(p_i) \right)<\infty$.
Regarding convergence in Definition~\ref{def:QuantumSalicru}, the
following remarks are in order. Notice that $h\left( \sum_{i \in
\mathbb{N}}\phi(p_i) \right)$ will be a convergent quantity, for all
$h$ and $\phi$, whenever the rank of $\rho$ is
finite. In principle, even if states of infinite rank are
allowed, one may try to determine, given a particular choice of $h$
and $\phi$, the set of states $\rho$ for which the sum converges.
Notice also that for important families of entropic functionals,
$h\left( \sum_{i} \phi(p_i)\right)$ will be convergent for all
states. As an example, consider the case of the R\'{e}nyi
entropies with entropic index greater than one. A detailed study of
the convergence properties of the infinite dimensional
$(h,\phi)$-entropies will be carried out elsewhere.
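For instance, for a state of infinite rank with geometric eigenvalue list $p_i=(1-r)r^i$, the partial sums for the von Neumann choice $h(x)=x$, $\phi(t)=-t\ln t$ converge to a closed-form value. The following Python sketch (illustrative only, not part of the original analysis) checks this numerically:

```python
import math

def h_phi_partial(p_seq, h, phi, N):
    """Partial evaluation h(sum_{i<N} phi(p_i)) of an (h, phi)-entropy."""
    return h(sum(phi(p) for p in p_seq[:N]))

# A state of infinite rank: geometric eigenvalue list p_i = (1 - r) r^i
r = 0.5
N = 200
p = [(1 - r) * r ** i for i in range(N)]

phi_vn = lambda t: -t * math.log(t) if t > 0 else 0.0  # von Neumann choice
S_partial = h_phi_partial(p, lambda x: x, phi_vn, N)

# Closed form for the Shannon entropy of the geometric distribution:
# S = -ln(1 - r) - (r / (1 - r)) ln r
S_exact = -math.log(1 - r) - (r / (1 - r)) * math.log(r)
print(S_partial, S_exact)  # the partial sums converge to the exact value
```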
For finite dimensional $\mathcal{H}$, Definition~\ref{def:QuantumSalicru}
reduces to the one introduced in~\cite{QINP-paper}. Furthermore, the
quantum $(h,\phi)$-entropy of a density operator $\rho$ equals the
classical entropy of the sequence $p=\{ p_i \}_{i \in \mathbb{N}}$
formed by its eigenvalues: $\mathbf{H}_{(h,\phi)}(\rho)=H_{(h,\phi)}(p)$.
An important notion for the rest of this work is that of
\emph{majorization}. Given two sequences $p=\{p_{i}\}_{i\in\mathbb{N}}$
and $q=\{q_{i}\}_{i\in\mathbb{N}}$ of positive real numbers sorted in
decreasing order, we say that $q$ is majorized by $p$ (and we denote
it by $q\preceq p$), if and only if,
$\sum_{i=1}^{k}q_{i}\leq\sum_{i=1}^{k}p_{i}$ for all $k\in\mathbb{N}$ and
$\sum_{i\in\mathbb{N}}q_{i}=\sum_{i\in\mathbb{N}}p_{i}$.
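A direct translation of this definition into code may help fix ideas; the helper name \texttt{majorizes} below is our own illustrative choice:

```python
def majorizes(p, q, tol=1e-12):
    """Return True iff q is majorized by p (q <= p in the majorization
    order): the partial sums of p, both sequences sorted decreasingly,
    dominate those of q, and the totals agree."""
    p, q = sorted(p, reverse=True), sorted(q, reverse=True)
    if abs(sum(p) - sum(q)) > tol:
        return False
    sp = sq = 0.0
    for pi, qi in zip(p, q):
        sp += pi
        sq += qi
        if sq > sp + tol:
            return False
    return True

print(majorizes([0.7, 0.2, 0.1], [0.5, 0.3, 0.2]))   # True
print(majorizes([1/3, 1/3, 1/3], [0.5, 0.3, 0.2]))   # False: the uniform
# distribution is majorized by every distribution, never the converse
```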
In what follows we make use of the integral form of Jensen's
inequality (cf.~\cite{Niculescu}). Let $\mu$ be the Lebesgue
measure, $f:[a,b]\longrightarrow\mathbb{R}$ be a Lebesgue-integrable
function and $\phi$ a convex function. Then, for this case, Jensen's
inequality reads
\begin{equation}
\phi\left (\frac{1}{b-a}\int_{a}^{b} f(x)dx\right )\leq
\frac{1}{b-a}\int_{a}^{b} \phi(f(x))dx
\end{equation}
\noindent Assume that, for two sequences $p$ and $q$, we have that
$p\succeq q$. Thus, $q=Qp$ with $Q_{ij}=|U_{ij}|^{2}$ for some
unitary operator $U$ (see \cite{Li-Busch}). Due to the fact that
$\sum_{j\in\mathbb{N}}Q_{ij}=1$, for each $i\in\mathbb{N}$, we can decompose
the unit interval as
\begin{equation}
[0,1]=\bigcup_{k=0}^{\infty}\left[\sum_{j=1}^{k}Q_{ij},\sum_{j=1}^{k+1}Q_{ij}\right]
\end{equation}
\noindent (where we adopt the convention $\sum_{j=1}^{0}Q_{ij}=0$).
Put in words: we write the unit interval as an infinite union of
segments whose lengths are given by the sequence
$\{Q_{ij}\}_{j\in\mathbb{N}}$. Define a step function
$f:[0,1]\longrightarrow\mathbb{R}$ as $f(x)=p_{k+1}$ when
$x\in\left[\sum_{j=1}^{k}Q_{ij},\sum_{j=1}^{k+1}Q_{ij}\right)$ and
$f(1)=0$. By construction, we have that $\int_{0}^{1}f(x)dx=\sum_{k
\in \mathbb{N}} Q_{ik} p_k=q_{i}$ and $\int_{0}^{1}\phi(f(x))dx=\sum_{k
\in \mathbb{N}} Q_{ik}\phi(p_k)$. Thus, applying Jensen's inequality, we
obtain
\begin{equation}\label{eq:BistochasticConvex}
\phi(q_i)\leq\sum_{k \in \mathbb{N}} Q_{ik}\phi(p_k)
\end{equation}
\noindent Summing over $i\in\mathbb{N}$, we have
\begin{equation}
\sum_{i\in\mathbb{N}}\phi(q_i)\leq\sum_{i\in\mathbb{N}}\sum_{k \in
\mathbb{N}}Q_{ik}\phi(p_k)=\sum_{k \in \mathbb{N}}\phi(p_k)
\end{equation}
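This chain of inequalities can be checked numerically. The sketch below (an illustration, taking a $2\times 2$ rotation as the unitary $U$) builds $Q_{ij}=|U_{ij}|^{2}$, computes $q=Qp$, and verifies $\sum_{i}\phi(q_i)\leq\sum_{k}\phi(p_k)$ for the convex choice $\phi(t)=t^2$:

```python
import math

# A 2x2 rotation as the unitary U; Q_ij = |U_ij|^2 is doubly stochastic.
theta = 0.3
U = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
Q = [[U[i][j] ** 2 for j in range(2)] for i in range(2)]

p = [0.8, 0.2]
q = [sum(Q[i][k] * p[k] for k in range(2)) for i in range(2)]  # q = Qp

phi = lambda t: t * t  # a strictly convex phi
lhs = sum(phi(qi) for qi in q)
rhs = sum(phi(pk) for pk in p)
print(lhs, rhs)  # lhs <= rhs, as the argument above predicts
```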
Let us now invoke Theorem~4.1 of
Ref.~\cite{Arveson-Kadison}, that we reproduce here for
the sake of completeness. Let $\mathcal{A}$ be the maximal
Abelian von Neumann algebra generated by the orthogonal set
of rank-one projection operators $\{
\ketbra{e_k}{e_k} \}_{k \in \mathbb{N}}$ and define the
conditional expectation map $E: \mathcal{B}(\mathcal{H}) \longrightarrow
\mathcal{A}$
\begin{equation}
E(A) = \sum_{k \in \mathbb{N}}\braket{ e_k | A |e_k}
\ketbra{e_k}{e_k}
\end{equation}
Given $\{ \lambda_n \}_{n \in \mathbb{N}}$, a decreasing
sequence in $\ell^1$ with non-negative terms, let
$\mathcal{O}_\lambda$ be the set of trace class operators
possessing $\lambda$ as eigenvalue list.
Then, Theorem 4.1 of~\cite{Arveson-Kadison} asserts
that $E(\mathcal{O}_\lambda)$ consists of all positive
trace-class operators $B \in \mathcal{A}$ whose eigenvalue list
$\{ p_n \}_{n \in \mathbb{N}}$ (arranged in decreasing order)
is majorized by $\lambda$. Then, it follows that, if
the density operator $\rho$ has an eigenvalue list
$\lambda = \{ \lambda_n \}_{n \in \mathbb{N}}$, the list
formed by $p = \{ \braket{e_n|\rho|e_n} \}_{n \in
\mathbb{N}}$ (sorted in decreasing order) is majorized by $\{
\lambda_n \}_{n \in \mathbb{N}}$. Thus, as we have seen
above, for a convex function $\phi$ we have
$\sum_{n\in\mathbb{N}}\phi(\braket{e_n|\rho|e_n})\leq\sum_{n\in\mathbb{N}}\phi(\lambda_{n})$.
Remembering our convention for convex functions in
Definition \ref{def:QuantumSalicru}, we have
\begin{equation}
h\left(\sum_{n\in\mathbb{N}}\phi(\lambda_{n})\right)\leq
h\left(\sum_{n\in\mathbb{N}}\phi(\braket{e_n|\rho|e_n})\right).
\end{equation}
\noindent In other words, we obtain
\begin{equation} \label{eq:staticalmixtureC}
\mathbf{H}_{(h,\phi)}(\rho) \leq H_{(h,\phi)}(p).
\end{equation}
\noindent A similar conclusion holds for the case of $h$ strictly
increasing and $\phi$ strictly concave. It is interesting to compare
inequality~\eqref{eq:staticalmixtureC} with Proposition~5
of~\cite{QINP-paper}.
Due to the fact that the trace is invariant under arbitrary
isometries (i.e., transformations implemented by operators
satisfying $U^\dag U=\mathbf{I}$\footnote{Notice that all unitary
operators are isometries.}), it is easy to check that:
\begin{proposition} \label{prop:unitary} The quantum $(h,\phi)$-entropies are
invariant under any isometric transformation $\rho \to U \rho
U^\dag$ where $U$ is an isometric operator:
\begin{equation} \label{eq:unitary}
\mathbf{H}_{(h,\phi)}(U \rho U^\dag) = \mathbf{H}_{(h,\phi)}(\rho).
\end{equation}
\end{proposition}
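Proposition~\ref{prop:unitary} is easy to verify numerically in the unitary case; the sketch below (illustrative, restricted to a real $2\times 2$ rotation) compares the von Neumann entropy of $\rho=\operatorname{diag}(0.7,0.3)$ before and after conjugation by the rotation:

```python
import math

def eigs_sym2(a, b, d):
    """Eigenvalues of the real symmetric matrix [[a, b], [b, d]]."""
    m, r = (a + d) / 2.0, math.sqrt(((a - d) / 2.0) ** 2 + b * b)
    return [m + r, m - r]

def vn_entropy(eigs):
    """von Neumann entropy from an eigenvalue list."""
    return sum(-p * math.log(p) for p in eigs if p > 0)

c, s = math.cos(0.7), math.sin(0.7)
lam1, lam2 = 0.7, 0.3                 # spectrum of rho = diag(0.7, 0.3)
# Entries of U rho U^T for the rotation U = [[c, -s], [s, c]]:
a = lam1 * c * c + lam2 * s * s
d = lam1 * s * s + lam2 * c * c
b = (lam1 - lam2) * c * s
print(vn_entropy([lam1, lam2]))        # entropy of rho
print(vn_entropy(eigs_sym2(a, b, d)))  # same value after rotation
```

The conjugated matrix has the same spectrum, so any $(h,\phi)$-entropy, which depends on the state only through its eigenvalue list, is unchanged.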
Several properties of the family of entropies from
Definition~\ref{def:QuantumSalicru} were studied for the
case of finite dimensional Hilbert spaces
in~\cite{QINP-paper}. The study of the properties in the
infinite dimensional case is left for future work.
\section{Entropies in generalized probabilistic models} \label{s:GeneralizedEntropies}
In this section, we introduce an extension of the
definition of classical and quantum Salicr\'u entropies
to a more general family of probabilistic theories.
\subsection{$(h,\phi)$-entropies in C$^\ast$-algebras}\label{s:Cstar}
Let us first define our extension to C$^\ast$-algebra models. We follow a
strategy that is analogous to that of~\cite{Ohya-1989}.
\begin{definition}
Given a C$^\ast$-algebra $\mathcal{M}$, for every
$\omega\in\mathcal{C}(\mathcal{M})$, let:
\begin{eqnarray}
&D_\omega(\mathcal{C}(\mathcal{M})) := \Big\{\mu \in M_\omega(\mathcal{C}(\mathcal{M})) \quad \Big| &\\[2mm] \nonumber
&\exists \, \{ \mu_k \}_{k \in \mathbb{N}} \subset \mathbb{R}^+ \:
\mbox{ and} \quad \{ \phi_k \}_{k \in \mathbb{N}} \subset \mathcal{E}(\mathcal{C}(\mathcal{M})) \: \mbox{
s.t.} \: \sum_{k \in \mathbb{N}} \mu_k = 1 \: \mbox{ and}
\quad \mu = \sum_{k \in \mathbb{N}} \mu_k \delta(\phi_k)
\Big\}&
\end{eqnarray}
where $\delta(\phi)$ is the Dirac measure centered at point $\phi$. Now, for
$\mu \in D_\omega$ let
\begin{equation}
H(\mu) = h\left( \sum_{k \in \mathbb{N}} \phi(\mu_k) \right)
\end{equation}
when the above sum converges and $H(\mu)=+\infty$ otherwise. Then,
by imposing on the functions $h$ and $\phi$ the same
conditions as in Definition~\ref{def:QuantumSalicru}, we define
\begin{equation}
\mathbf{H}_{(h,\phi)}(\omega) = \left\{
\begin{array}{l}
\inf \left\{H(\mu) \: | \: \mu \in D_\omega \right\} \\[2mm]
+\infty \quad \mbox{if} \quad D_\omega = \emptyset\\[2mm]
+\infty \quad \mbox{if} \quad \forall \, \mu \in D_\omega, \:
H(\mu)=+\infty
\end{array}\right.
\end{equation}
\end{definition}
It is important to remark that all models of standard quantum
mechanics are Type I factors (Type I$_n$ for finite dimensional
models and Type I$_{\infty}$ for infinite dimensional ones), which
are C$^\ast$-algebras. When restricted to Type I factors, the above
definition collapses into the one of standard quantum
mechanics. Indeed, if $\mathcal{M}$ is a Type I factor,
by Gleason's theorem~\cite{Gleason}, every state $\omega$ can be
described by a positive trace class operator $\rho_\omega$ of
trace one. For finite dimensional models, we reobtain the quantum
$(h,\phi)$-entropies of~\cite{QINP-paper}. For infinite dimensional
Hilbert spaces, as is well known, the von Neumann entropy has the
same minimization property (see for example~\cite{Ohya-1989}).
Furthermore, Abelian C$^\ast$-algebras are in correspondence with
classical statistical models. Thus, our definition also contains
an important class of classical models as particular cases.
\subsection{More general models}
We now discuss how to define the $(h,\phi)$-entropies
in an arbitrary compact convex set $\mathcal{C}$, understood as
the state-space of a generalized probabilistic model. We
will combine the approach presented
in~\cite{Entropy-PaperGeneralized} with the strategy used
in Section~\ref{s:Cstar}. Given a probabilistic model
described by a compact convex set $\mathcal{C}$, let $\omega \in
\mathcal{C}$ be a state. Denote by $M_1(\mathcal{C})$ the set of
normalized Radon measures on $\mathcal{C}$~\cite{Bratteli,Alfsen}. If
$\omega$ is the barycenter of a
measure $\mu$, we denote this by $\omega = b(\mu) = \int
d\mu(\omega') \omega'$. Define
\begin{equation}
M_\omega(\mathcal{C}) = \left\{ \mu \: | \: \mu \in M_1(\mathcal{C}) \quad \mbox{and} \quad \omega
= b(\mu) \right\}
\end{equation}
Now, in analogy to the procedure of
Section~\ref{s:Cstar}, we build the set $D_\omega(\mathcal{C})$,
and proceed in a similar way as before.
\begin{definition}
Given a statistical theory whose state space is
represented by a compact convex set $\mathcal{C}$, define
\begin{eqnarray}
&D_\omega(\mathcal{C}) := \Big\{\mu \in M_\omega(\mathcal{C}) \quad \Big| &\\[2mm] \nonumber
&\exists \, \{ \mu_k \}_{k \in \mathbb{N}} \subset \mathbb{R}^+ \:
\mbox{and} \quad \{ \phi_k \}_{k \in \mathbb{N}} \subset \mathcal{E}(\mathcal{C}) \: \mbox{ s.t.} \:
\sum_{k \in \mathbb{N}} \mu_k = 1 \: \mbox{ and} \quad \mu =
\sum_{k \in \mathbb{N}} \mu_k \delta(\phi_k) \Big\}&
\end{eqnarray}
\noindent For $\mu \in D_\omega$, define
\begin{equation}
H(\mu) = h\left( \sum_{k \in \mathbb{N}} \phi(\mu_k) \right)
\end{equation}
\noindent when the above sum converges and $H(\mu) =+\infty$
otherwise. Then, by imposing the same conditions on the functions
$h$ and $\phi$ as in Definition~\ref{def:QuantumSalicru}, we define
\begin{equation}
\mathbf{H}_{(h,\phi)}(\omega) = \left\{
\begin{array}{l}
\inf\left\{H(\mu) \: | \: \mu \in D_\omega \right\}\\[2mm]
+\infty \quad \mbox{if} \quad D_\omega = \emptyset\\[2mm]
+\infty \quad \mbox{if} \quad \forall \, \mu \in D_\omega, \:
H(\mu) = +\infty
\end{array}\right.
\end{equation}
\end{definition}
\noindent In this way, we obtain a formal expression
for the $(h,\phi)$-entropies in generalized probabilistic
models.
It is important to notice that, given a state $\omega \in
\mathcal{C}$, the set $D_\omega(\mathcal{C})$ can be used to define a
notion of majorization in generalized probabilistic
models in a similar way as
in~\cite{Entropy-PaperGeneralized}.
\begin{definition}
Suppose that there exists a discrete measure $\tilde{\mu}$ such
that, for all $\mu \in D_\omega(\mathcal{C})$, if we put $\{ \tilde{\mu}_i
\}_{i \in \mathbb{N}}$ and $\{ \mu_i \}_{i \in \mathbb{N}}$ in decreasing
order, we have that $\sum_{i=1}^k \mu_i \leq \sum_{i =1}^k
\tilde{\mu}_i$ for all $k$ and $\sum_{i\in\mathbb{N}}\mu_i
=\sum_{i\in\mathbb{N}}\tilde{\mu}_i$. Then, by construction, we have that
$\{ \tilde{\mu}_i \}_{i \in \mathbb{N}}$ majorizes $\{ \mu_i \}_{i \in
\mathbb{N}}$ (and we write $\{ \mu_i \}_{i \in \mathbb{N}} \preceq\{
\tilde{\mu}_i \}_{i \in \mathbb{N}}$) for all $\mu \in D_\omega(\mathcal{C})$. In
that case, we say that $\tilde{\mu}$ is the majorant of
$D_\omega(\mathcal{C})$, and we call $\{ \tilde{\mu}_i \}_{i \in \mathbb{N}}$
the spectrum of $\omega$. Thus, if $\tilde{\mu}$ is the spectrum of
$\omega$ and $\tilde{\nu}$ is the spectrum of $\sigma$, and we have
$\{\tilde{\nu}_i \}_{i \in \mathbb{N}} \preceq \{\tilde{\mu}_i \}_{i
\in \mathbb{N}}$, we then say that $\sigma\preceq \omega$ (i.e.,
\textit{$\omega$ majorizes $\sigma$}).
\end{definition}
\section{Final comments}\label{s:Conclusions}
In this short paper, we have advanced a definition of
the $(h,\phi)$-entropies for general probabilistic
theories, extending previous definitions by including
(possibly) infinite dimensional models. These examples
include those of unital C$^\ast$-algebras and more
general compact convex sets. Associated with the above
definitions, a natural definition of majorization for
generalized probabilistic models arises (generalizing the
definitions presented
in~\cite{Entropy-PaperGeneralized}). A thorough study of
the properties of these entropic measures is left for
future work.
\section*{Acknowledgements}
MP, FH, PWL, GMB and GB acknowledge CONICET, UNLP and UBA
(Argentina), and MP and PWL also acknowledge SECyT-UNC (Argentina)
for financial support. SZ is grateful to the University of
Grenoble-Alpes and CNRS (France). MP acknowledges an AUIP grant and
warm hospitality at Universidad de Granada (Spain).
\section{Introduction}
In the following definitions we will suppose that $X$ is a
subspace of the Cantor set $\mathbf{C}$.
A function $f: X\mapsto Y$ is called {\it piecewise open} if $X$
admits a countable, closed and disjoint cover $\mathcal{V}$, such
that for each $V\in \mathcal{V}$ the restriction $f|V$ is open.
Recall that a subset $E$ of a metric space $X$ is {\it
resolvable} \cite{kr} if for each nonempty closed subset
$F$ of $X$ we have $cl_{X}(F\cap E)\cap cl_{X}(F\setminus E)\neq F$.
If $E\subset X$ is resolvable, then $E$ is a $\Delta^0_2$-set in $X$;
the converse holds if the space $X$ is Polish.
Recall that a subset of $X$ is an $LC_n$-set if it can be written as a
union of $n$ locally closed in $X$ sets (a set is locally closed
if it is the intersection of an open set and a closed set). Every
$LC_n$ (constructible) set is resolvable.
A mapping $f$ is open if it maps open sets onto open ones. More
generally, for $n\in \omega$, a mapping $f$ is said to be {\it
open-resolvable} (open-$LC_n$) if $f$ maps open sets onto
resolvable ($LC_n$) ones.
A piecewise open function $f:X\mapsto Y$ is called {\it
scatteredly open} if, in addition, the cover $\mathcal{V}$ is
scattered, that is: for every nonempty subfamily
$\mathcal{T}\subset \mathcal{V}$ there is a clopen set $G\subset
X$ such that $\mathcal{T}_G=\{T\in \mathcal{T}: T\subset G\}$ is a
singleton and $T\cap G=\emptyset$ for every $T\in
\mathcal{T}\setminus \mathcal{T}_G$.
\section{Main result}
A.V. Ostrovsky proved the following interesting results:
\begin{theorem}\label{th1}(Theorem 1 in \cite{Os1}) Let $X$ be a subspace of
the Cantor set $\mathbf{C}$, and $f: X \mapsto Y$ a continuous
bijection. If the image under $f$ of every open set in $X$ is
resolvable in $Y$, then $f$ is scatteredly open and, hence, $f$
is a scattered homeomorphism.
\end{theorem}
\begin{theorem}\label{th2}(Proposition 3.2 in \cite{Os3}) Every
continuous open-$LC_1$ function $X\mapsto Y$ onto a metrizable
crowded space $Y$ is open.
\end{theorem}
\medskip
In (\cite{Os2}, Problem 2) A.V. Ostrovsky posed the following
\medskip
{\bf Problem.} Is every open-$LC_n$ function between Polish spaces
piecewise open for $n=2,3,\dots$?
\medskip
We prove that
$\bullet$ the {\it one-to-one} condition on the mapping $f$ in
Theorem \ref{th1} is necessary;
$\bullet$ Ostrovsky's Problem has a negative solution for
$n=2$ (and hence for every $n>1$).
\medskip
{\bf Example.} Let $\mathbf{C}$ be the Cantor set such that
$\mathbf{C}\subset [0,1]$. As usual, we start by deleting the
open middle third $(\frac{1}{3}, \frac{2}{3})$ from the interval
$[0,1]$, leaving two segments: $P_1=C_0\cup
C_2=[0,\frac{1}{3}]\cup [\frac{2}{3},1]$. Next, the open middle
third of each of these remaining segments is deleted, leaving four
segments: $P_2=C_{00}\cup C_{02}\cup C_{20}\cup
C_{22}=[0,\frac{1}{9}]\cup
[\frac{2}{9},\frac{1}{3}]\cup[\frac{2}{3},\frac{7}{9}]\cup
[\frac{8}{9},1]$. This process is continued ad infinitum, where
the $n$th set is $P_n=\frac{P_{n-1}}{3}\cup (\frac{2}{3}+
\frac{P_{n-1}}{3})$ for $n\geq 1$, and $P_0=[0,1]$.
The Cantor ternary set contains all points in the interval $[0,1]$
that are not deleted at any step in this infinite process:
$\mathbf{C}:=\bigcap\limits_{n=1}^{\infty} P_n$.
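The recursion $P_n=\frac{P_{n-1}}{3}\cup (\frac{2}{3}+\frac{P_{n-1}}{3})$ is straightforward to implement; the following Python sketch (added for illustration, with exact rational arithmetic) reproduces the segments of $P_2$ listed above:

```python
from fractions import Fraction

def next_level(segments):
    """One step of the construction: P_n = P_{n-1}/3 U (2/3 + P_{n-1}/3)."""
    out = []
    for (a, b) in segments:
        out.append((a / 3, b / 3))
        out.append((Fraction(2, 3) + a / 3, Fraction(2, 3) + b / 3))
    return out

P = [(Fraction(0), Fraction(1))]   # P_0 = [0, 1]
for _ in range(2):
    P = next_level(P)

# P_2 = [0,1/9] U [2/9,1/3] U [2/3,7/9] U [8/9,1]
print(sorted(P))
```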
Let us fix a countable dense set
$\{(a_n, b_n) : n\in \omega \}$ in $\mathbf{C}\times \mathbf{C}$
such that $a_n\neq a_m$ and $b_n\neq b_m$ for $n\neq m$. For each
$n$ pick a sequence $a_{n,i}\to a_n$ such that $a_{n,i}\neq a_m$,
$a_{n,i}\neq a_{m,j}$ for $(n,i)\neq (m,j)$, and $|a_{n,i}-a_n|<
\frac{1}{n}$.
Consider the standard clopen base $\mathcal{B}:=\{\mathbf{C}\cap
C_{s_1,...,s_k}: s_i\in \{0,2\}, i\in\overline{1,k}$,
$k\in\omega\}$ in $\mathbf{C}$, and we enumerate
$\mathcal{B}=\{B_n : n\in \omega\}$ such that $b_n\in B_n$ for
every $n\in \omega$.
Let $X=(\mathbf{C}\times \mathbf{C})\setminus \bigcup\limits_{n,i}
\{a_{n,i}\}\times B_n$. Note that $X$ is a $G_\delta$-set in
$\mathbf{C}\times \mathbf{C}$. It follows that $X$ is
\v{C}ech-complete and, moreover, a Polish space.
Let $\pi | X: X \mapsto \mathbf{C}$ be the restriction to $X$ of
the projection $\pi : \mathbf{C}\times \mathbf{C} \mapsto
\mathbf{C}$ onto the first coordinate. Note that
$\pi(X)=\mathbf{C}$ because $\operatorname{diam} \mathbf{C}> \operatorname{diam} B_n$ for any
$n\in \omega$.
Suppose $X=\bigcup\limits_{n\in \omega} X_n$ is a countable union
of closed subsets $X_n$. By the Baire Category Theorem, there
is $X_{m}$ such that $V=\operatorname{Int} X_{m}\neq \emptyset$.
Since the set $\{(a_n, b_n) : n\in \omega \}$ is dense in $X$,
there are $n'\in \omega$ and $W\in \mathcal{B}$ such that a point
$(a_{n'},b_{n'})\in ((W\times B_{n'})\cap X)\subset V$. Since the
set $\{(a_n, b_n) : n\in \omega \}$ is dense in $(W\times
B_{n'})\cap X$, choose $n''\in \omega$ such that $n''>n'$, $2\operatorname{diam}
B_{n''}<\operatorname{diam} B_{n'}$ and $(a_{n''},b_{n''})\in ((W\times
B_{n''})\cap X)\subset (W\times B_{n'})\cap X$. Then $\pi | X_{m}
: X_{m} \mapsto \pi(X_{m})$ is not open at $(a_{n''}, b_{n''})$
because $\pi((W\times B_{n''})\cap X)$ does not contain
$\{a_{n'',i}: i\in \omega\}$ and, hence, is not an open set in
$\pi(X_{m})$. Therefore $\pi | X$ is not piecewise open and,
hence, not scatteredly open.
Let $U\subset \mathbf{C}\times \mathbf{C}$ be open. We have to
check that $\pi(U\cap X)\in \Delta^0_2$.
Construct for every point $(a,b)\in U\cap X$ sets $W(a)$ and
$B(b)$ such that
$\bullet$ $a\in W(a)\in \mathcal{B}$, $b\in B(b)\in \mathcal{B}$
and $(W(a)\times B(b))\cap X\subset U$.
$\bullet$ if $a\neq a_m$ for any $m\in \omega$, then
$\pi((W(a)\times B(b))\bigcap X)=W(a)$.
$\bullet$ if $a=a_m$ for some $m\in \omega$, then $\pi((W(a)\times
B(b))\bigcap X)=W(a)\setminus \{ a_{m,i_j}: j\in \omega \}$ for
some subsequence $\{ a_{m,i_j}: j\in \omega \}\subseteq \{
a_{m,i}: i\in \omega \}$.
Case 1. Suppose $a\neq a_m$ for every $m\in \omega$. One can
choose $W$ and $B(b)=B_{n'}\in \mathcal{B}$ such that $a\in W$, $b\in
B(b)$, $(W\times B(b))\cap X\subset U$ and $B(b)\setminus B_n\neq
\emptyset$ for all $n>n'$. Since $a\neq a_m$ for every $m\in
\omega$, there exists $W(a)\in \mathcal{B}$ such that $a\in
W(a)\subset W$ and $W(a)\cap \{a_i\cup \{a_{i,j}: j\in \omega\} :
i\in \overline{1,n'}\}=\emptyset$. Then $\pi((W(a)\times
B(b))\bigcap X)=W(a)$.
Case 2. Suppose $a=a_m$ for some $m\in \omega$. Analogously to Case
1, we can choose $B(b)\in \mathcal{B}$ such that $B(b)\setminus
B_n\neq \emptyset$ for all $n>n'>m$, and $W(a)\in \mathcal{B}$
such that $W(a)\cap \{a_i\cup \{a_{i,j}: j\in \omega\} :
i\in \overline{1,n'},\, i\neq m\}=\emptyset$.
Then $W(a)\setminus \pi((W(a)\times B(b))\bigcap X)\subset \{
a_{m,i}: i\in \omega \}$, hence $\pi((W(a)\times B(b))\bigcap
X)=W(a)\setminus \{ a_{m,i_j}: j\in \omega \}=W_a\cup \{a_m\}$
where $W_a=W(a)\setminus (\{a_m\}\cup \{ a_{m,i_j}: j\in \omega
\})$ is an open set in $\mathbf{C}$.
Thus $\pi(U\cap X)=\bigcup\limits_{(a,b)\in U\cap X}
\pi((W(a)\times B(b))\bigcap X)=(\bigcup\limits_{(a,b)\in U\cap X,\, a\neq a_m} W(a))\cup
(\bigcup\limits_{(a,b)\in U\cap X,\, a=a_m} (W_a \cup \{a_m\}))$.
By definition of the clopen base $\mathcal{B}$,
$\pi(U\cap X)=S\cup D$, where $S=(\bigcup\limits_{(a,b)\in U\cap X,\,
a\neq a_m} W(a))\cup (\bigcup\limits_{(a,b)\in U\cap X,\, a=a_m}
W_a)$ is an open set in $\mathbf{C}$ and $D=\{a_{m_k}: k\in \omega
\}$ is discrete in itself, with $S\bigcap D=\emptyset$.
Indeed, by Case 2, for every $a_{m_k}\in D$ there is
$W(a_{m_k})\in \mathcal{B}$ such that $a_{m_k}\in W(a_{m_k})$ and
$W(a_{m_k})\bigcap \{a_{m_i} : i\in \omega, i\neq k \}=\emptyset$.
It follows that $D$ is discrete in itself and, hence,
$\pi(U\cap X)$ is $\Delta^0_2$. Since $\pi(X)=\mathbf{C}$ is
Polish, the mapping $\pi|X$ is continuous open-resolvable.
Note that $\pi(U\cap X)=S\cup ((\bigcup\limits_{(a,b)\in U\cap X}
W(a))\bigcap \overline{D})$. It follows that $\pi(U\cap X)$ is an
$LC_2$-set and, hence, $\pi|X$ is open-$LC_2$.
\smallskip
\bibliographystyle{model1a-num-names}
\section{Introduction and Related Works}
The recent interest in quantum technologies has brought forward a vision of a \emph{quantum internet} \cite{elkouss2017} that could implement a collection of known protocols for enhanced security or communication complexity (see a recent review in \cite{BC2016}). On the other hand, the rapid development of quantum hardware has increased the computational capacity of quantum servers that could be linked in such a communicating network. This has raised the importance of privacy-preserving functionalities, such as the research developed around quantum computing on encrypted data (see a recent review in \cite{fitzsimons2017}).
However, there are some challenges in adopting the above vision widely: a reliable long-distance quantum communication network connecting all the interested parties might be very costly. Moreover, some of the most promising quantum computation devices (e.g. superconducting devices such as those developed by IBM and Google) do not yet offer the possibility of a ``networked'' architecture, i.e. they cannot receive and send quantum states.
For this reason, there has been extensive research focusing on the practicality aspect of quantum delegated computation protocols (and related functionalities). One direction is to reduce the required communications by exploiting classical fully-homomorphic-encryption schemes \cite{broadbent2015quantum,dulek2016quantum,alagic2017quantum}, or by defining their direct quantum analogues \cite{liang2015quantum,ouyang2015quantum,tan2016quantum,lai2017statistically}. Different encodings, on the client side, could also reduce the communication \cite{mantri2013optimal,GMMR2013}. However, in all these approaches the client still requires some quantum capabilities. While no-go results indicate restrictions on which of the above properties are jointly achievable for classical clients \cite{armknecht2014general,yu2014limitations,ACGK2017,newman2017limitations}, completing this picture remains an open problem. Another direction is to consider fully-classical client protocols, compatible with the no-go results, that can therefore achieve more restricted levels of security. The first such procedure achieving statistical security (but not for universal computations) was proposed in \cite{mantri2017flow}. Focusing on post-quantum computational security a universal blind delegated protocol was proposed in \cite{mahadev2017} and a verifiable one in \cite{mahadev2018}.
Our own independent work presented here, which is also based on post-quantum computational security, appeared (as the preprint \cite{CCKW18}) between the two works mentioned above, and takes a different approach, one more natural to measurement-based quantum computing protocols. The approach we take is modular. We replace the need for a (particular) quantum communication channel with a computationally (but post-quantum) secure generation of secret and random qubits. This can be used by classical clients to achieve blind quantum computing and a number of other applications.
\subsection{Our Contributions}
\begin{enumerate}
\item We define a classical client/quantum server delegated ideal functionality of a pseudo-secret random qubit generator (PSRQG), in \autoref{Sec:Ideal}. PSRQG can replace the need for a quantum channel between parties in certain quantum communication protocols, with the trade-off that the protocols become computationally secure (against \emph{quantum} adversaries).
\item We give a basic protocol (QFactory) that achieves this functionality, given a trapdoor one-way function that is quantum-safe, two-regular and collision resistant, in \autoref{Sec:Protocol}, and prove its correctness.
\item We prove the security of QFactory against a Quantum-Honest-But-Curious server, and against any malicious third party, by proving that the classical description of the generated qubits is a hard-core function (following a reduction similar to that of the Goldreich-Levin Theorem) in \autoref{Sec:Privacy}.
\item While our previous results do not depend on the specific function used, the existence of such specific functions (with all desired properties) makes the PSRQG a practical primitive that can be employed as described in this paper. In \autoref{Sec:Functions}, we first give methods for obtaining two-regular trapdoor one-way functions with extra properties (collision resistant or second preimage resistant) assuming the existence of simpler trapdoor one-way functions (permutation trapdoor or homomorphic trapdoor functions).
We use reductions to prove that the resulting functions maintain all the required properties. Furthermore, in \autoref{Subsec:actual_trapdoor} we give an explicit family of functions that respects all the required properties, based on the security of the Learning-With-Errors problem, as well as a possible instantiation of the parameters. This function is also quantum-safe and thus directly applicable to our setting. Note that other functions may also be used, such as the one in \cite{mah2018} or functions based on the Niederreiter cryptosystem and the construction in \cite{freeman2010}.
\end{enumerate}
\subsection{Applications}\label{Sec:applications}
The PSRQG functionality, viewed as a resource, has a wide range of applications. Here we give a general overview of the applications, while for details on \emph{how} to use the \emph{exact output} of the PSRQG obtained in this paper in specific protocols we refer the reader to \autoref{app:applications}. PSRQG enables fully-classical parties to participate in many quantum protocols using only public classical channels and a single (potentially malicious) quantum server.
\textbf{The first type of applications} concerns a large class of delegated quantum computation protocols, including blind quantum computation and \emph{verifiable} blind quantum computation. These protocols are of great importance, enabling information-theoretically secure (and verifiable) access to a quantum cloud. However, the requirement for quantum communication limits their domain of applicability. This limitation is removed by replacing the off-line preparation stage with our QFactory protocol. Concretely, we can use QFactory to implement the blind quantum computation protocol of \cite{bfk}, as well as the \emph{verifiable} blind quantum computation protocols (e.g. those in \cite{fk,Broadbent2015,FKD2017}), in order to achieve classical-client secure and verifiable access to a quantum cloud.
In all these cases, the cost of using PSRQG is that the security becomes post-quantum computational (from information-theoretic). However, the possibility of information-theoretically secure classical client blind quantum computation seems highly unlikely due to strong complexity-theoretic arguments given in \cite{ACGK2017} and therefore this is the best we could hope for.
\textbf{The second type of applications} involves the family of protocols for which their quantum communication consists of random single qubits as the ones provided by QFactory, such as: quantum-key-distribution \cite{BB84},
quantum money \cite{BOV2018},
quantum coin-flipping \cite{PCDK2011},
quantum signatures \cite{WDKA2015},
two-party quantum computation \cite{KW2017,KMW2017},
multiparty quantum computation \cite{KP16},
etc.
Finally, we note that in order to use PSRQG as a subroutine in a larger protocol, we need to address the issue of composition and formulate the functionality in the universal composability framework \cite{unruh2010uni}. This could be done as in \cite{DK2016} (where \emph{quantum} communication was required, using a quantum version of SRQG), but the full details are outside of the scope of this paper.
\subsection{Overview of the Protocol and Proof}\label{Sec:Overview}
The general idea is that a classical client gives instructions to a quantum server to perform certain actions (a quantum computation). Those actions lead to the server having as output a single qubit, randomly chosen from within a set of possible states of the form $\ket{0}+e^{ir\pi/4}\ket{1}$, where $r\in\{0,\cdots,7\}$. The randomness of the output qubit is due to the (fundamental) randomness of the quantum measurements that are part of the instructions that the client gives. Moreover, the server cannot guess the value of $r$ any better than if he had just received that state directly from the client (up to negligible probability). This is possible because the instructed quantum computation is generically a computation that is hard (i) to simulate classically and (ii) to reproduce quantumly, because it is unlikely (exponentially in the number of measurements) that by running the same instructions the server obtains the exact same measurement outcomes twice. On the other hand, we wish the client to \emph{know} the classical description and thus the value of $r$. To achieve this task, the instructions/quantum computation the client uses are based on a family of trapdoor one-way functions with certain extra properties\footnote{The functions should also be two-regular (each image has exactly two preimages), quantum-safe (secure against quantum attackers) and collision resistant (hard to find two inputs with the same image).}. Such functions are hard to invert (e.g. for the server) unless someone (the client in our case) has some extra ``trapdoor'' information $t_k$. This extra information makes the quantum computation easy to classically reproduce for the client, who can recover the value of $r$, while it remains hard to classically reproduce for the server.
Sending random qubits of the above type is exactly what is required from the client in most of the protocols and applications given earlier, while with simple modifications our protocol could achieve other similar sets of states.
Our QFactory protocol can heuristically be described in the next steps:
\noindent \textbf{Preparation.} The client randomly selects a function $f_k$ from a family of trapdoor one-way, quantum-safe, two-regular and collision-resistant functions. The choice of $f_k$ is public (known to the server), but the trapdoor information $t_k$ needed to invert the function is known only to the client.
\noindent \textbf{Stage 1: Preimages Superposition.} The client instructs the server
(i) to apply Hadamard(s) on the control register, (ii) to apply $U_{f_k}$ on the target register i.e. to obtain $\sum_x \ket{x}\otimes\ket{f_k(x)}$ and (iii) to measure the target register in the computational basis, in order to obtain a value $y$. This collapses his state to the state $(\ket{x}+\ket{x'})\otimes\ket{y}$, where $x,x'$ are the unique two preimages of $y$.
\emph{Remarks.} First we note that each image $y$ appears with the same probability (therefore, obtaining the same $y$ twice happens with negligible probability). We now consider the first register $\ket{x}+\ket{x'}=\ket{x_1\cdots x_n}+\ket{x'_1\cdots x'_n}$, where the subscripts denote the different bits of the corresponding preimages $x$ and $x'$. We rewrite this as:
\begin{align*}
\big(\otimes_{i\in \bar{G}}\ket{x_{i}}\big)\otimes\Big(\prod_{j\in G}X^{x_{j}}\Big)\big(\ket{0\cdots 0}_{G}+\ket{1\cdots 1}_{G}\big)
\end{align*}
where $\bar{G}$ is the set of bit positions where $x,x'$ are
identical and $G$ is the set of bit positions where the preimages
differ, while we have suitably changed the order of writing the
qubits. It is now evident that the state at the end of Stage 1 is a tensor
product of isolated $\ket{0}$ and $\ket{1}$ states, and a Greenberger-Horne-Zeilinger (GHZ)
state with random $X$'s applied. The crucial observation is that the connectivity (which qubit belongs to the GHZ state and which does not) depends on the XOR of the two preimages, $x\oplus x'$, and is computationally infeasible to determine, with non-negligible advantage, without the trapdoor information $t_k$.
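This bookkeeping can be sketched classically. The toy function below is a made-up 2-regular function (not one-way, not trapdoor), so the sketch illustrates only the structure of Stage 1, not its security: $G$ and $\bar{G}$ are simply the support and co-support of $x\oplus x'$.

```python
import random

random.seed(0)
n = 4
f = lambda z: z % (2 ** (n - 1))   # toy 2-regular function (NOT one-way or trapdoor)

# Stage 1: measuring the target register yields some image y ...
y = f(random.randrange(2 ** n))
# ... collapsing the control register onto (|x> + |x'>)/sqrt(2),
# where x, x' are the unique two preimages of y
x, xp = [z for z in range(2 ** n) if f(z) == y]

bits = lambda z: [(z >> i) & 1 for i in reversed(range(n))]   # MSB first
diff = [a ^ b for a, b in zip(bits(x), bits(xp))]             # x XOR x'
G = [i for i in range(n) if diff[i] == 1]      # qubit positions forming the GHZ part
G_bar = [i for i in range(n) if diff[i] == 0]  # qubit positions fixed to |x_i>
```

Only the qubits in $G$ are entangled; without the trapdoor, recovering $G$ from $y$ would require finding both preimages.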
\noindent \textbf{Stage 2: Squeezing.} The client instructs the server to measure each qubit $i$ (except the output) in a random basis $\{\ket{0}\pm e^{i\alpha_i\pi/4}\ket{1}\}$ and return back the measurement outcome $b_i$. The output qubit is of the form $\ket{+_\theta}=\ket{0}+e^{i\theta}\ket{1}$, where (see \cite{CCKW18}):
\begin{equation}
\label{eq:theta0} \theta=\frac{\pi}{4}(-1)^{x_n}\sum_{i=1}^{n-1}(x_i-x_i')(4b_i+\alpha_i)\bmod 8
\end{equation}
Intuitively, measuring qubits that are not connected has no effect on the output, while measuring qubits within the GHZ part rotates the phase of the output qubit (by an angle $((-1)^{x_i}\alpha_i+4b_i)\pi/4$).
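The client-side computation of $\theta$ in Eq. (\ref{eq:theta0}) is purely classical once both preimages are known. A minimal sketch (the function and variable names are ours, not from the protocol specification):

```python
from math import pi

def output_angle(x, xp, b, alpha):
    """theta = (pi/4) * (-1)^{x_n} * sum_{i=1}^{n-1} (x_i - x'_i)(4 b_i + alpha_i) mod 8.

    x, xp : the two n-bit preimages, as lists of 0/1
    b     : measurement outcomes b_1, ..., b_{n-1} (0/1)
    alpha : measurement angles as integers in 0..7 (multiples of pi/4)
    """
    n = len(x)
    s = sum((x[i] - xp[i]) * (4 * b[i] + alpha[i]) for i in range(n - 1))
    r = ((-1) ** x[n - 1] * s) % 8          # r in {0, ..., 7}
    return r * pi / 4
```

Note that the sum only receives contributions from positions where the preimages differ, matching the intuition that only the GHZ part of the state affects the output phase.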
\noindent \textbf{Security.} The protocol is secure if we can prove that the server (or any other third party) cannot guess (obtain noticeable advantage in guessing) the classical description of the state, i.e. the value of $\theta$. We consider a quantum-honest-but-curious server (see the formal definition below), which means that he essentially follows the protocol, and the security reduces to proving that the server cannot use his classical information to obtain \emph{any} advantage in guessing the classical description of the (honest) quantum output.
The server does not know the two preimages $x,x'$ and needs to guess $\theta$ from the value of the image $y$. A similar (simpler) result that we use is the Goldreich-Levin theorem \cite{GoldreichLevin}, which (informally) states that the inner product of the preimage of a one-way function with a random vector, taken modulo 2, is indistinguishable from a random bit. Our case is similar, since Eq. (\ref{eq:theta0}) has the form of an inner product of the XOR of the two preimages with a random vector, taken modulo 8. We prove that if a computationally bounded server could obtain a non-trivial advantage in guessing $\theta$, then he could also break the ``second preimage resistance'' property which we required of our function $f_k$.
\noindent \textbf{The function.} Our protocol relies on using functions that have a number of properties (one-way, trapdoor, two-regular, collision resistant (see \autoref{rmk:collision_resistance}), quantum-safe). \emph{Any} function satisfying those conditions is suitable for our protocol. While at first thought some of these appear hard to satisfy jointly (e.g. two-regularity and collision resistance), we give two constructions that achieve those properties from simpler functions: one from an injective, homomorphic trapdoor one-way function and one from a bijective trapdoor one-way function. Both constructions define a new function whose domain is extended by one bit, and the value of that bit ``decides'' whether one uses the initial basic function or not.
We then use a (slight) modification of the first construction and the trapdoor one-way function based on Learning-with-Errors of \cite{MP2012} with a suitable choice of parameters, and obtain a function that has all the desired properties. In a nutshell, the idea is to use the construction of \cite{MP2012} to create an injective function $g(s,e)$ that is hard to invert without the secret trapdoor, and then to sample from a Gaussian distribution a small error term $e_0 \in \mathbb{Z}_q^m$ as well as a (uniformly) random $s_0 \in \mathbb{Z}_q^n$. According to \cite{MP2012}, it is infeasible to efficiently recover $s_0$ and $e_0$ from $b_0 := g(s_0,e_0)$. Then, to create the function $f(s,e,c)$, we define $f(s,e,0) = g(s,e)$ and $f(s,e,1) = g(s,e) + b_0$, and we require $e$ to have infinity norm smaller than a parameter $\mu$. Because the function is ``nearly homomorphic'', we have $f(s,e,1) = f(s+s_0, e+e_0, 0)$, so this function intuitively has two preimages. However, $e+e_0$ may not be small enough to stay in the input domain, so some $y$ may have only one preimage. What we show is that if we sample $e_0$ ``small enough'' (at least as small as $O(\mu/m)$), then the probability of having two preimages is at least constant. Moreover, we prove that this modification does not break the security of $g$, and leads to a function $f$ that is both one-way and collision resistant under the \textsc{LWE}{} assumption, which reduces to $\textsc{SIVP}_\gamma$, with $\gamma = \poly[n]$.
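The two-regularity mechanism can be illustrated with a toy, insecure instance. All parameter values below are made up and far too small for any security; the sketch only demonstrates the ``nearly homomorphic'' collision, not the hardness claims:

```python
import numpy as np

rng = np.random.default_rng(0)
q, n, m = 97, 4, 8                       # toy parameters, no security intended

A = rng.integers(0, q, size=(m, n))      # public matrix defining g
def g(s, e):                             # g(s, e) = A s + e  mod q
    return (A @ s + e) % q

s0 = rng.integers(0, q, size=n)          # hidden shift
e0 = rng.integers(-1, 2, size=m)         # small error term
b0 = g(s0, e0)                           # published value b_0 = g(s_0, e_0)

def f(s, e, c):                          # domain extended by one bit c
    return (g(s, e) + c * b0) % q

# the "nearly homomorphic" collision: f(s, e, 1) = f(s + s_0, e + e_0, 0)
s = rng.integers(0, q, size=n)
e = rng.integers(-1, 2, size=m)
assert np.array_equal(f(s, e, 1), f((s + s0) % q, e + e0, 0))
```

In this toy version the collision holds exactly because $g$ is linear; in the real construction the constraint $\lVert e\rVert_\infty<\mu$ on the input domain is what can eliminate the second preimage, as discussed above.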
\section{Preliminaries}\label{Sec:Prelim}
\subsection{Classical Definitions}
We are considering protocols secure against quantum
adversaries, so we assume that all the properties of our functions hold for a general Quantum Polynomial Time (QPT) adversary, rather than the usual Probabilistic Polynomial Time (PPT) one. We will denote by $D$ the domain of the functions, while $D(n)$ is the subset of strings of length $n$.
\begin{definition}[Quantum-Safe (informal)]
A protocol/function is quantum-safe (also known as post-quantum secure), if all its properties remain valid when the adversaries are QPT (instead of PPT).
\end{definition}
\noindent The following definitions are stated for PPT adversaries; however, in this paper we will generally use quantum-safe versions of these definitions, and thus security is guaranteed against QPT adversaries.
\begin{definition}[One-way]\label{def:one_way_function}
A family of functions $\{f_k : D \rightarrow R\}_{k \in K}$ is \textbf{one-way} if:
\begin{itemize}
\item There exists a PPT algorithm that can compute $f_k(x)$ for any function index $k$ output by the PPT parameter-generation algorithm \text{Gen} and any input $x \in D$;
\item Any PPT algorithm $\mathcal{A}$ can invert $f_k$ with at most negligible probability over the choice of $k$: \\
$ \underset{\substack{k \leftarrow Gen(1^n) \\ x \leftarrow D \\ rc \leftarrow \{0, 1\}^{*}}}{\Pr} [f_k(\mathcal{A}(k, f_k(x))) = f_k(x)] \leq \negl$\\
where $rc$ represents the randomness used by $\mathcal{A}$
\end{itemize}
\end{definition}
\begin{definition}[Second preimage resistant]\label{def:second_preimage_resistant}
A family of functions $\{f_k : D \rightarrow R\}_{k \in K}$ is \textbf{second preimage resistant} if:
\begin{itemize}
\item There exists a PPT algorithm that can compute $f_k(x)$ for any function index $k$ output by the PPT parameter-generation algorithm \text{Gen} and any input $x \in D$;
\item Any PPT algorithm $\mathcal{A}$, given an input $x$, can find a different input $x'$ such that $f_k(x) = f_k(x')$ with at most negligible probability over the choice of $k$: \\
$ \underset{\substack{k \leftarrow Gen(1^n) \\ x \leftarrow D \\ rc \leftarrow \{0, 1\}^{*}}}{\Pr}
[\mathcal{A}(k, x) = x' \text{ such that } x \neq x' \text{ and }
f_k(x) = f_k(x')] \leq \negl$\\
where $rc$ is the randomness of $\mathcal{A}$;
\end{itemize}
\end{definition}
\begin{definition}[Collision resistant]\label{def:collision_resistant}
A family of functions $\{f_k : D \rightarrow R\}_{k \in K}$ is \textbf{collision resistant} if:
\begin{itemize}
\item There exists a PPT algorithm that can compute $f_k(x)$ for any function index $k$ output by the PPT parameter-generation algorithm \text{Gen} and any input $x \in D$;
\item Any PPT algorithm $\mathcal{A}$ can find two inputs $x \neq x'$ such that $f_k(x) = f_k(x')$ with at most negligible probability over the choice of $k$: \\
$\underset{\substack{k \leftarrow Gen(1^n) \\ rc \leftarrow \{0, 1\}^{*}}}{\Pr}
[\mathcal{A}(k) = (x,x') \text{ such that } x \neq x' \text{ and } f_k(x) =
f_k(x')] \leq \negl$\\
where $rc$ is the randomness of $\mathcal{A}$ ($rc$ will be omitted from now on).
\end{itemize}
\end{definition}
\begin{theorem}\cite{Lindell}\label{thm:resistant}
Any function that is \textit{collision resistant} is also \textit{second preimage resistant}.
\end{theorem}
\begin{definition}[k-regular]\label{def:k_regular}
A deterministic function $f \colon D \rightarrow R$ is \textbf{k-regular} if $ \, \, \forall y \in \operatorname{Im}(f)$, we have ${|f^{-1}(y)| = k}$.
\end{definition}
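As a made-up toy example, $f(x)=x \bmod 4$ on the domain $\{0,\dots,7\}$ is 2-regular, which is easy to check by counting preimages:

```python
from collections import Counter

D = range(8)
f = lambda x: x % 4                   # toy function: every image has two preimages
counts = Counter(f(x) for x in D)     # image -> number of preimages
assert set(counts.values()) == {2}    # f is 2-regular on D
```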
\begin{definition}[Trapdoor Function]\label{def:trapdoor_function}
A family of functions $\{f_k : D \rightarrow R \}$
is a \textbf{trapdoor function} if:
\begin{itemize}
\item There exists a PPT algorithm {\tt Gen} which on input $1^n$ outputs $(k, t_k)$, where $k$ represents the index of the function;
\item $\{f_k : D \rightarrow R\}_{k \in K}$ is a family of one-way functions;
\item There exists a PPT algorithm {\tt Inv}, which on input $t_k$ (called the trapdoor information) output by {\tt Gen}($1^n$) and $y = f_k(x)$ can invert $y$ (by returning all preimages of $y$\footnote{While in the standard definition of trapdoor functions it suffices for the inversion algorithm {\tt Inv} to return one of the preimages of any output of the function, in our case we require a two-regular trapdoor function where the inversion procedure returns both preimages for any function output.})
with non-negligible probability over the choice of $(k, t_k)$ and uniform choice of $x$.
\end{itemize}
\end{definition}
\begin{definition}[Hard-core Predicate]\label{def:hardcore_predicate}
A function $hc \colon D \rightarrow \{0, 1\}$ is a \textbf{hard-core predicate} for a function $f$ if:
\begin{itemize}
\item There exists a QPT algorithm that for any input $x$ can compute $hc(x)$;
\item Any PPT algorithm $\mathcal{A}$, when given $f(x)$, can compute $hc(x)$ with probability at most negligibly better than $1/2$: \\
$ \underset{\substack{x \leftarrow D(n) \\ rc \leftarrow \{0, 1\}^{*}}}{\Pr} [\mathcal{A}(f(x), 1^n) = hc(x)] \leq \frac{1}{2} + \negl$, where $rc$ represents the randomness used by $\mathcal{A}$;
\end{itemize}
\end{definition}
\begin{definition}[Hard-core Function]\label{def:hardcore_function}
A function $h : D \rightarrow E$ is a \textbf{hard-core function} for a function $f$ if:
\begin{itemize}
\item There exists a QPT algorithm that can compute $h(x)$ for any input $x$
\item Any PPT algorithm $\mathcal{A}$, when given $f(x)$, can distinguish between $h(x)$ and a uniformly distributed element of $E$ with at most negligible advantage: \\
\[ \big|
\underset{
\substack{x \leftarrow D(n)}
}
{\Pr} [\mathcal{A}(f(x), h(x)) = 1]
-
\underset{
\substack{x \leftarrow D(n) \\
r \leftarrow E(|h(x)|)}
}
{\Pr} [\mathcal{A}(f(x), r) = 1]\big| \leq \negl\]
\end{itemize}
\end{definition}
The intuition behind this definition is that as far as a QPT adversary is concerned, the hard-core function appears indistinguishable from a randomly chosen element of the same length.
\begin{theorem}[Goldreich-Levin \cite{GLth}]\label{thm:GL}
From any one-way function $f \colon D \rightarrow R$, we can construct another one-way
function $g \colon D \times D \rightarrow R \times D $ and a hard-core predicate for $g$. If $f$ is a one-way
function, then:
\begin{itemize}
\item $g(x, r) = (f(x), r)$ is a one-way function, where $|x|=|r|$.
\item $hc(x, r) = \langle x, r\rangle \bmod 2$ is a hard-core predicate
for $g$.
\end{itemize}
\end{theorem}
Informally, the Goldreich-Levin theorem states that when $f$ is a
one-way function, then $f(x)$ hides the XOR of a random subset of the
bits of $x$ from any PPT adversary\footnote{The Goldreich-Levin proof uses a reduction from breaking the hard-core predicate $hc(x, r)$ to breaking the one-wayness of $f$. In this paper the functions we consider are one-way against quantum adversaries, and using the same reduction we conclude that $hc(x, r)$ is a hard-core predicate against QPT adversaries.}.
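The predicate itself is trivial to compute given $x$; the content of the theorem is that it is hard to predict from $g(x,r)=(f(x),r)$ alone. A sketch of the predicate (the hardness, of course, cannot be demonstrated by code):

```python
def hc(x, r):
    """Goldreich-Levin hard-core predicate: inner product <x, r> mod 2,
    i.e. the XOR of the bits of x selected by r."""
    assert len(x) == len(r)
    return sum(xi & ri for xi, ri in zip(x, r)) % 2

assert hc([1, 0, 1, 1], [1, 1, 0, 1]) == 0   # 1 + 0 + 0 + 1 = 2 = 0 mod 2
assert hc([1, 0, 1, 1], [1, 0, 0, 0]) == 1   # selects only the first bit
```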
\begin{theorem}[Vazirani-Vazirani XOR-Condition Theorem \cite{Vazirani}]\label{thm:VV}
A function $h$ is a hard-core function for $f$ if and only if the
XOR of any non-empty subset of $h$'s bits is a hard-core predicate for $f$.
\end{theorem}
\noindent The Learning with Errors problem (\textsc{LWE}{}) can be described in the following way:
\begin{definition}[\textsc{LWE}{} problem (informal)]\label{def:lwe}
Given $s$, an $n$ dimensional vector with elements in $\mathbb{Z}_q$, the task is to distinguish between a set of polynomially many noisy random linear combinations of the elements of $s$ and a set of polynomially many random numbers from $\mathbb{Z}_q$.
\end{definition}
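A toy generation of LWE samples may make the definition concrete. The parameters below are made up and insecure; the distinguishing task itself is what is conjectured to be hard:

```python
import numpy as np

rng = np.random.default_rng(1)
q, n, m = 113, 6, 12                  # toy parameters, no security intended
s = rng.integers(0, q, size=n)        # the secret n-dimensional vector

A = rng.integers(0, q, size=(m, n))   # m random coefficient vectors
e = rng.integers(-2, 3, size=m)       # small noise terms
b = (A @ s + e) % q                   # noisy random linear combinations of s

u = rng.integers(0, q, size=m)        # uniform samples; the task is to
                                      # distinguish (A, b) from (A, u)
```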
Regev \cite{Regev} and Peikert \cite{Peikert} have given quantum and classical reductions from the average case of \textsc{LWE}{} to problems such as approximating the length of the shortest vector or the shortest independent vectors problem in the worst case, problems which are conjectured to be hard even for quantum computers.
\begin{theorem}[Reduction for \textsc{LWE}{}, from {{\cite[Theorem 1.1]{Regev}}}]
Let $n$, $q$ be integers and $\alpha \in (0,1)$ be such that $\alpha q > 2\sqrt{n}$. If there exists an efficient algorithm that solves $\textsc{LWE}{}_{q, \bar{\Psi}_\alpha}$, then there exists an efficient quantum algorithm that approximates the decision version of the shortest vector problem \textsc{GapSVP}{} and the shortest independent vectors problem \textsc{SIVP}{} to within $\tilde{O}(n/\alpha)$ in the worst case.
\end{theorem}
\subsection{Quantum definitions}
We assume basic familiarity with quantum computing notions. For any function $f : A \rightarrow B$ that can be described by a polynomially-sized classical circuit, we define the controlled unitary $U_f$ as acting in the following way:
\EQ{U_f\ket{x}\ket{y} = \ket{x}\ket{y \oplus f(x)} \, \, \, \forall x \in A \, \, \, \forall y \in B,}
where we name the first register $\ket{x}$ control and the second register $\ket{y}$ target.
Given the classical description of this function $f$, we can always define a QPT algorithm that efficiently implements $U_f$.
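On the computational basis, $U_f$ acts as a permutation. A toy construction of the full matrix (our own helper, exponential in the qubit count, for illustration only — an actual implementation would compile $f$'s circuit into gates):

```python
import numpy as np

def U_f_matrix(f, n_in, n_out):
    """Build U_f with U_f |x>|y> = |x>|y XOR f(x)>, for f mapping
    n_in-bit integers to n_out-bit integers (basis index: x bits then y bits)."""
    dim = 2 ** (n_in + n_out)
    U = np.zeros((dim, dim))
    for x in range(2 ** n_in):
        for y in range(2 ** n_out):
            U[(x << n_out) | (y ^ f(x)), (x << n_out) | y] = 1.0
    return U

U = U_f_matrix(lambda x: x & 1, 2, 1)   # toy f: lowest input bit
assert np.allclose(U @ U.T, np.eye(8))  # a permutation matrix, hence unitary
```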
The protocol we want to implement (achieving PSRQG) can be viewed as a special case of a two-party quantum computation protocol, where one side (the client) has only classical information and thus the communication consists of classical messages. Furthermore, the client is honest, so we only need to prove security (and construct simulators) against an adversarial server. Finally, the ideal protocol (giving the same output but mediated by a trusted party; see the definition below) that the real protocol implements needs to be by itself a PSRQG, i.e. obtaining the legitimate outputs should not leak any extra information (see Sections \ref{Sec:Ideal} and \ref{Sec:Privacy}).
In this paper, unless stated otherwise, we use the convention that all quantum operators considered are described by polynomially-sized quantum circuits.
We follow the notations and conventions of \cite{DNS10}. We have two parties $A,B$ with registers $\mathcal{A},\mathcal{B}$ and an extra register $\mathcal{R}$ with $\dim \mathcal{R}=(\dim\mathcal{A}+\dim\mathcal{B})$. The input state is denoted $\rho_{in}\in D(\mathcal{A}\otimes\mathcal{B}\otimes\mathcal{R})$, where $D(\mathcal{A})$ is the set of all possible quantum states in register $\mathcal{A}$. We also denote with $L(\mathcal{A})$ the set of linear mappings from $\mathcal{A}$ to itself.
The ideal output\footnote{In the case of a unitary protocol $U$; this generalises to arbitrary quantum operations.} is given by $\rho_{out}=(U\otimes\mathbb{I}_{\mathcal{R}})\cdot \rho_{in}$, where for simplicity we write $U\cdot \rho$ instead of $U\rho U^\dagger$. For two states $\rho_0,\rho_1$ we denote the trace norm distance $\Delta(\rho_0,\rho_1):=\frac12 \lVert \rho_0-\rho_1\rVert_1$. If $\Delta(\rho_0,\rho_1)\leq\epsilon$, then any process applied on $\rho_0$ behaves as on $\rho_1$ except with probability at most $\epsilon$.
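Numerically, the trace norm distance can be computed from the singular values of $\rho_0-\rho_1$; a small sketch (our own helper names):

```python
import numpy as np

def trace_distance(rho0, rho1):
    """Delta(rho0, rho1) = (1/2) * ||rho0 - rho1||_1,
    computed as half the sum of singular values of the difference."""
    return 0.5 * np.linalg.svd(rho0 - rho1, compute_uv=False).sum()

ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])
# orthogonal pure states are perfectly distinguishable: distance 1
assert abs(trace_distance(ket0 @ ket0.T, ket1 @ ket1.T) - 1.0) < 1e-12
# a state is at distance 0 from itself
assert trace_distance(ket0 @ ket0.T, ket0 @ ket0.T) == 0.0
```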
\begin{definition}[taken from \cite{DNS10}] An $n$-step two-party strategy
is denoted $\Pi^O=(A,B,O,n)$:
\begin{enumerate}
\item input spaces $\mathcal{A}_0,\mathcal{B}_0$ and memory spaces $\mathcal{A}_1,\cdots,\mathcal{A}_n$ and $\mathcal{B}_1,\cdots,\mathcal{B}_n$
\item $n$-tuple of quantum operations $(L_1^A,\cdots,L_n^A)$ and $(L_1^B,\cdots,L_n^B)$ such that $L_i^A: L(\mathcal{A}_{i-1})\rightarrow L(\mathcal{A}_i)$ and similarly for $L_i^B$.
\item $n$-tuple of global operations $(\mathcal{O}_1,\cdots,\mathcal{O}_n)$ for that step, $\mathcal{O}_i:L(\mathcal{A}_i\otimes\mathcal{B}_i)\rightarrow L(\mathcal{A}_i\otimes\mathcal{B}_i)$
\end{enumerate}
\end{definition}
The global operations (in our case) are communications that transfer some (classical) register from one party to another. The quantum state at each step of the protocol is given by:
\EQ{\rho_1(\rho_{in})&:=&(\mathcal{O}_1\otimes\mathbb{I})(L_1^A\otimes L_1^B\otimes\mathbb{I})(\rho_{in})\nonumber\\
\rho_{i+1}(\rho_{in})&:=&(\mathcal{O}_{i+1}\otimes\mathbb{I})(L_{i+1}^A\otimes L_{i+1}^B\otimes\mathbb{I})(\rho_i(\rho_{in}))
}
\begin{definition}[Ideal Protocol]\label{def:ideal}
Given a real protocol, we call the corresponding \emph{``ideal protocol''} a protocol that has the same input/output distributions as an honest run of the real protocol, but in which all intermediate steps are completed by a trusted third party.
\end{definition}
The security definitions are based on the corresponding ideal protocol of secure two-party quantum computation
(S2PQC), which takes a joint input $\rho_{in}\in \mathcal{A}_0\otimes\mathcal{B}_0$, obtains the state $U\cdot\rho_{in}$ and returns to each party their corresponding quantum registers. A protocol $\Pi^O_U$ implements the protocol securely if no possible adversary, at any step of the protocol, can
distinguish with non-negligible probability whether they interact with the real protocol or with a simulator (which has access to the ideal protocol). When a party is malicious we add the notation ``$\sim$'', e.g. $\tilde A$.
\begin{definition}[Simulator]
$\mathcal{S}(\tilde{A})=\langle (\mathcal{S}_1,\cdots,\mathcal{S}_n),q \rangle$ is a simulator for adversary $\tilde{A}$ in $\Pi^O_U$ if it consists of:
\begin{enumerate}
\item operations where $\mathcal{S}_i:L(\mathcal{A}_0)\rightarrow L(\tilde{\mathcal{A}_i})$ are described by polynomially-sized quantum circuits,
\item sequence of bits $q\in\{0,1\}^n$ determining if the simulator calls the ideal functionality at step $i$ ($q_i=1$ calls the ideal functionality).
\end{enumerate}
\end{definition}
Given input $\rho_{in}$ the simulated view for step $i$ is defined as:
\EQ{\label{eq:simulator_defn}\nu_i(\tilde A,\rho_{in}):=\Tr_{\mathcal{B}_0}\left((\mathcal{S}_i\otimes\mathbb{I})(U^{q_i}\otimes\mathbb{I})\cdot \rho_{in}\right)
}
\begin{definition}[Privacy with respect to the Ideal Protocol] \label{def:private}We say that the protocol is $\delta$-private (with respect to an ideal protocol) if for all adversaries and for all steps $i$:
\EQ{\label{eq:private_real_simul}
\Delta(\nu_i(\tilde{A},\rho_{in}),\Tr_{\mathcal{B}_i}(\tilde{\rho}_i(\tilde{A},\rho_{in})))\leq\delta
}
where $\tilde{\rho}_i(\tilde{A},\rho_{in})$ is the state of the real protocol with corrupted party $\tilde{A}$, at step $i$.
\end{definition}
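The trace distance $\Delta(\rho,\sigma)=\frac12\|\rho-\sigma\|_1$ used throughout these definitions can be computed numerically; a minimal sketch (ours, not part of the protocol):

```python
import numpy as np

def trace_distance(rho, sigma):
    """Trace distance D(rho, sigma) = (1/2) * ||rho - sigma||_1,
    computed from the singular values of the (Hermitian) difference."""
    diff = rho - sigma
    return 0.5 * np.sum(np.linalg.svd(diff, compute_uv=False))

# Example: the pure qubit states |0> and |+> have trace distance 1/sqrt(2).
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(ket0, ket0.conj())
rhop = np.outer(ketp, ketp.conj())
print(round(trace_distance(rho0, rhop), 6))  # -> 0.707107
```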
Honest-but-curious (HBC) adversaries follow the protocol honestly, keep records of all communication,
and attempt to learn from those records more than they should. Since quantum states cannot be copied, an adversary that could be considered the quantum analogue, called the specious adversary, was defined in \cite{DNS10}.
\begin{definition}[Specious]\label{def:specious}
An adversary $\tilde{A}$ is $\epsilon$-specious if there exists a sequence of operations $(\mathcal{T}_1,\cdots,\mathcal{T}_n)$, where $\mathcal{T}_i: L(\tilde{\mathcal{A}}_i)\rightarrow L(\mathcal{A}_i)$ can be described by polynomially-sized quantum circuits, such that for all $i$:
\EQ{\label{eq:specious}
\Delta\left((\mathcal{T}_i\otimes\mathbb{I})(\tilde\rho_i(\tilde A,\rho_{in})),\rho_i(\rho_{in})\right)\leq\epsilon
}
\end{definition}
In our protocol, where communications are classical, it is sensible to define a weaker version of the adversary:
\begin{definition}[Quantum-Honest-But-Curious (QHBC)]\label{def:QHBC}
An adversary $\tilde{A}$ is QHBC if it is $0$-specious.
\end{definition}
\section{Ideal Functionality}\label{Sec:Ideal}
In many distributed protocols the required communication consists of sending a sequence of single qubits prepared in random states
that are unknown to the receiver (and to any other third party). What we want to achieve is a way to remotely generate single
qubits that are random and (appear to be) unknown to all parties but the ``client'' that gives the instructions.
In this work, for clarity and with the applications we wish to implement in mind, we will focus on a particular choice for the set $R$ of possible states, containing eight different single-qubit states (see below). One could easily modify our work to restrict to a smaller set (e.g. the four BB84 states \cite{BB84}, which would actually simplify our proofs) or to a larger one.
\begin{definition}
Let $\ket{+_\theta}=1/\sqrt{2}\left(\ket{0}+e^{i\theta}\ket{1}\right)$. We define the set of states
\EQ{\label{eq:output_states}
R:= \{ \ket{+_\theta} \} \ \textrm{ where } \ \theta\in\{ 0,\pi/4,\pi/2,\cdots,7\pi/4\}
}
\end{definition}
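For concreteness, the eight states of $R$ can be generated and checked numerically (a small sketch, ours):

```python
import numpy as np

def plus_theta(theta):
    """|+_theta> = (|0> + e^{i theta}|1>)/sqrt(2)."""
    return np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)

# The eight states of R: theta = r*pi/4 for r in {0,...,7}.
R = [plus_theta(r * np.pi / 4) for r in range(8)]

# All states are normalized and lie on the equator of the Bloch sphere;
# two states |+_a>, |+_b> have overlap |<+_a|+_b>| = |cos((a-b)/2)|.
for psi in R:
    assert abs(np.vdot(psi, psi) - 1) < 1e-12
print(round(abs(np.vdot(R[0], R[4])), 12))  # |<+_0|+_pi>| -> 0.0 (orthogonal)
```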
By including magic states ($\ket{+_{\pi/4}}$), this set of states can be viewed as a ``universal'' resource, as applying Clifford operations on those states is sufficient for universal quantum computation. Furthermore, it is sufficient to implement both Blind Quantum Computation (e.g. \cite{bfk}) and Verifiable Blind Quantum Computation (e.g. \cite{FKD2017}).
\begin{algorithm}[H]
\caption{Ideal Functionality: Secret Random Qubit Generator (SRQG) -- $\mathcal{F}(M,p)$}
\label{ideal:q_factory}
\textbf{Public Information:} A distribution on pairs of lists $M$, intuitively containing the values of the classical variables used by the client and by the server\\
\textbf{Trusted Party:}\\
-- With some probability $p$ returns to both parties $\mathsf{abort}$, otherwise:\\
-- Samples $(m_C, m_S)\leftarrow M$\\
-- Samples $r\leftarrow\{0,1\}^3$\\
-- Prepares a qubit in state $\ket{+_{(r\pi/4)}}$\\
\textbf{Outputs:}\\
-- Either returns $\mathsf{abort}$ to both client and server\\
-- Or returns $(m_C, r)$ to the client, and $(m_S, \ket{+_{(r\pi/4)}})$ to the server
\end{algorithm}
\noindent\textbf{Remarks}: (i) The outcome of this functionality is the client ``sending'' the qubit $\ket{+_\theta}$ (whose description he knows) to the server, thus simulating a quantum channel. (ii) We note that there is an abort possibility and some auxiliary classical message $m$, both included to make the functionality general enough to allow for our construction. Furthermore, the classical description of the qubit $r$ and the classical message $m$ are totally uncorrelated (as $r$ is chosen randomly for each $m$). (iii) While the server \emph{can} learn something about the classical description (e.g. by measuring the qubit), this information is limited and is exactly the information that he could obtain if the client had prepared and sent a random qubit. Therefore, privacy is defined with respect to this ideal setting.
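The trusted party's sampling can be sketched as follows (a minimal sketch, ours; the qubit is represented only by its classical description $r$, and `sample_M` is a hypothetical sampler for the distribution $M$):

```python
import random

def srqg_trusted_party(sample_M, p):
    """Minimal sketch (ours) of the SRQG trusted party F(M, p).
    sample_M() returns a pair (m_C, m_S); p is the abort probability.
    The prepared qubit |+_{r*pi/4}> is represented here only by r."""
    if random.random() < p:
        return "abort", "abort"
    m_C, m_S = sample_M()
    r = random.randrange(8)          # three random bits, theta = r*pi/4
    # The client learns r; the server receives the qubit but not r itself.
    return (m_C, r), (m_S, "qubit |+_{r*pi/4}>")
```

Note that the server's output deliberately omits $r$: he only holds the physical qubit, from which (by remark (iii) above) he can extract only limited information.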
We are interested only in the honest-but-curious setting for now. The idea is that we will allow the adversary to have access to the classical registers/variables of the server (we will call this information a ``\emph{view}''), as well as the classical variables produced by the ideal functionality (uncorrelated with the quantum output, so secure by definition). The goal of the adversary will be to distinguish whether he is interacting with a view of the ideal functionality or a view of the real protocol. More formally, we will denote by $\mathcal{P}_S$ the view of server $S$ in protocol $\mathcal{P}$, which is the list of the contents of the variables/classical registers assigned by the server $S$ in the protocol $\mathcal{P}$. Similarly, $\mathcal{F}_S$ will be the view of the server $S$ in the ideal functionality $\mathcal{F}$, equal to the value of $m_S$ in a run of the ideal functionality.
\begin{definition}[Pseudo-Secret Random Qubit Generator (PSRQG)]\label{def:c_q_factory}
We say that a protocol $\mathcal{P}(1^n)$ is an
$\varepsilon(n)$-Pseudo-Secret Random Qubit Generator ($\varepsilon(n)$-PSRQG) if there exists an SRQG $\mathcal{F}(M,p)$ such that for all
Quantum Polynomial Time (QPT) adversaries/distinguishers
$\mathcal{A}$:
\EQ{\label{eq:c_q_factory}\left|\Pr[\mathcal{A}(\mathcal{P}_S(1^n))=1]-\Pr[\mathcal{A}(\mathcal{F}_S(M,p))=1]\right|\leq \varepsilon(n)}
If $\varepsilon(n)$ is a negligible function, we will simply write
PSRQG (omitting $\varepsilon(n)$).
\end{definition}
To achieve the PSRQG functionality we define an ideal protocol, called the Ideal QFactory\footnote{We call this a ``qubit factory'', since we use this protocol to produce strings of qubits.}, mediated
by a trusted third party, that (under certain assumptions) achieves this functionality.
This ideal protocol can be realised by a concrete protocol without any trusted parties (see later), and certain choices in the definition of the Ideal QFactory (e.g. the function required) are made with this in mind.
\begin{algorithm}[H]
\caption{Ideal QFactory Protocol}
\label{ideal:c_q_factory}
\textbf{Public Information:} A security parameter $n\in \mathbb{N}^*$, a family $\{f_k \colon D \rightarrow R\}_{k \in K}$ of trapdoor one-way functions that are quantum-safe, two-regular and collision resistant (or satisfy the weaker second preimage resistance property, see \autoref{rmk:collision_resistance}), and a family of functions $\{g_k \colon D \times D \times E \rightarrow \{0,1\}^3\}_{k \in K}$\\
\textbf{Trusted Party:}\\
-- Runs the algorithm $\mathsf{Gen}(1^n)=(k,t_k)$ of the trapdoor function\\
-- Samples randomly $x \leftarrow D,\beta\leftarrow E$\\
-- Using $t_k$, computes the unique other preimage $x' \neq x$ such that $f_k(x)=f_k(x')=y$\\
-- If the last bits of $x$ and $x'$ are the same, returns $\mathsf{abort}$; otherwise:\\
-- Computes $\tilde{B} :=g_k(x,x',\beta)$. Setting $\theta := \tilde{B} \times \pi/4$, prepares a qubit in the state $\ket{+_\theta}$\\
\textbf{Outputs:}\\
-- Either returns $\mathsf{abort}$ to both parties\\
-- Or returns $(k,y,\beta, \ket{+_\theta})$ to server $S$ and $(t_k,y,\beta,\theta)$ to client $C$. Note that returning $\theta$ is optional, as the client could recompute it from $t_k$.
\end{algorithm}
\begin{remark}\label{rmk:collision_resistance}
It appears that the second preimage resistance property is enough to prove the security of our scheme in the honest-but-curious setting. However, as soon as the server can be malicious, the collision resistance property becomes very important, since otherwise the server might forge valid states known to him, which would break the security.
\end{remark}
We will denote by $M_{QF}$ the distribution obtained by sampling as above the index $k$ and trapdoor $t_k$ according to $\mathsf{Gen}(1^n)$, the $y$ uniformly in the elements of $R$ having two preimages, and the $\beta$ uniformly in $E$, and then outputting $((t_k,y,\beta),(k,y,\beta))$.
\begin{lemma}\label{lemma:hardcore}
Ideal QFactory \autoref{ideal:c_q_factory} is a PSRQG protocol as described in \autoref{def:c_q_factory} (with $M$ having the distribution $M_{QF}$) if the function
$g_k(x,x',\beta)$ (restricted on $x,x'$ such that $f_k(x)=f_k(x')$) is a hard-core function for $f_k$.
\end{lemma}
\begin{proof}
We can see that \autoref{ideal:c_q_factory} is identical to \autoref{ideal:q_factory} with $M=M_{QF}$ (since the Client, having $t_k$, can determine whether it aborts or not), apart from the fact that in \autoref{ideal:c_q_factory} the state received by the server is $\ket{+_\theta}$, while in \autoref{ideal:q_factory} it is $\ket{+_{r\pi/4}}$.
Now we use the fact that $g_k$ is a hard-core function. By Definition \ref{def:hardcore_function}, for a QPT adversary that has access to $m=(k,y=f_k(x),\beta)$, the value of the hard-core function $g_k(x,x',\beta)=4\theta/\pi$, where $x,x'$ are the unique preimages of $y$, is indistinguishable (up to negligible probability) from a random value $r$. It follows that such an adversary cannot distinguish (except with negligible probability) whether he received the state $\ket{+_\theta}$ as in \autoref{ideal:c_q_factory} or the state $\ket{+_{r\pi/4}}$ as in the ideal functionality of \autoref{ideal:q_factory}, and therefore Eq. (\ref{eq:c_q_factory}) is satisfied.
\end{proof}
It is not sufficient to prove that, given the image $y=f_k(x)$, it is hard to obtain the exact value of the function $g$ (we will omit the $k$ if it is clear from the context); we want the stronger requirement that, given $y$,
a QPT adversary obtains no advantage in distinguishing the value of $g$ (the classical description of the state) from a totally random value $r$. Intuitively, what \autoref{ideal:c_q_factory} achieves is that it produces (truly) random qubits in states that are pseudo-secret, i.e. their classical description is computationally unknown to anyone that does not have access to the trapdoor $t_k$ (i.e. to the server).
\section{The Real Protocol}\label{Sec:Protocol}
We assume the existence\footnote{See \autoref{Sec:Functions} for our function. With that choice, we are guaranteed that the last bits of the two preimages are always different, and thus there is no need for an abort. We keep the protocol general so that different functions can be used.}
of a family
$\{f_k \colon \{0,1\}^n \rightarrow \{0,1\}^m\}_{k \in K}$ of trapdoor
one-way functions that are two-regular and collision resistant (or the weaker second preimage resistance property, see \autoref{rmk:collision_resistance}) even against a quantum adversary. For
any $y$, we will denote by $x(y)$ and $x'(y)$ the two unique distinct
preimages of $y$ under $f_k$ (when $y$ is clear from the context, we may omit it).
Note that because of the two-regularity property $m \geq n-1$. We use subscripts to denote the different bits of the strings.
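To illustrate two-regularity only (this is \emph{not} the construction of \autoref{Sec:Functions}, and it is completely insecure as a one-way function), one can consider a GF(2)-linear map given by a rank-$(n-1)$ matrix: its kernel is spanned by a single nonzero vector $s$, so every image has exactly two preimages differing by $s$.

```python
import itertools
import numpy as np

# Toy (completely insecure!) illustration of a two-regular function
# f: {0,1}^n -> {0,1}^{n-1}: a GF(2)-linear map with a rank-(n-1) matrix.
# Its kernel is spanned by a single nonzero vector s, so every image y
# has exactly two preimages, x and x XOR s. This illustrates only the
# *structure*; the real construction must also be one-way with trapdoor.
n = 4
A = np.array([[1, 0, 0, 1],
              [0, 1, 0, 1],
              [0, 0, 1, 1]])   # rank 3 over GF(2); kernel spanned by s = 1111

def f(x):
    return tuple(A.dot(x) % 2)

preimages = {}
for x in itertools.product([0, 1], repeat=n):
    preimages.setdefault(f(np.array(x)), []).append(x)

# Two-regularity: every image has exactly 2 preimages, differing by s.
assert all(len(v) == 2 for v in preimages.values())
x, xp = next(iter(preimages.values()))
print(tuple(a ^ b for a, b in zip(x, xp)))  # -> (1, 1, 1, 1), i.e. s
```

With this choice $s=(1,1,1,1)$, the two preimages always differ in their last bit, mirroring the footnote's remark that for a suitable function the abort never triggers.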
\begin{algorithm}[H]
\caption{Real QFactory Protocol}\label{protocol:concrete_c_q_factory}
\textbf{Requirements:} \\
Public: A family $\mathcal{F}=\{f_k : \{0,1\}^n \rightarrow \{0,1\}^m \}$ of trapdoor one-way functions that are quantum-safe, two-regular and collision resistant (or second preimage resistant, see \autoref{rmk:collision_resistance})\\
\textbf{Input:}\\
-- Client: uniformly samples a list of random three-bit strings $\alpha=(\alpha_1,\cdots,\alpha_{n-1})$ where $\alpha_i\leftarrow\{0,1\}^3$, and runs the algorithm $(k,t_k) \leftarrow \text{Gen}_{\mathcal{F}}(1^n)$. The $\alpha$ and $k$ are public inputs (known to both parties), while $t_k$ is the ``private'' input of the Client.\\
\textbf{Stage 1: Preimages superposition} \\
-- Client: instructs the Server to prepare one register in the state $\otimes^n H\ket{0}$ and a second register initialised to $\ket{0}^{\otimes m}$\\
-- Client: sends $k$ to the Server, who applies $U_{f_k}$ using the first register as control and the second as target\\
-- Server: measures the second register in the computational basis, obtains the outcome $y$ and returns it to the Client. Here, an honest Server would hold the state ${(\ket{x}+\ket{x'}) \otimes \ket{y}}$ with $f_k(x)=f_k(x')=y$ and $y\in \Im f_k$.
\\
\textbf{Stage 2: Squeezing}\\
-- Client: instructs the Server to measure all the qubits (except the last one) of the first register in the $\left\{\ket{0}\pm e^{i\alpha_i\pi/4}\ket{1}\right\}$ basis. The Server obtains the outcomes $b=(b_1,\cdots,b_{n-1})$ and returns them to the Client
\\
-- Client: using the trapdoor $t_k$ computes $x,x'$. Then checks whether the $n$th bits of $x$ and $x'$ (the preimages of the $y$ received in Stage 1) are the same or
different. If they are the same, returns $\mathsf{abort}$; otherwise, obtains the classical
description of the Server's state.\\
\textbf{Output:} If the protocol is run honestly, when there is no
abort, the state that Server has is $\ket{+_\theta}$, where the Client (only)
knows the classical description (see \autoref{Thm:correctness}):
\EQ{\label{eq:theta} \theta=\frac{\pi}{4}(-1)^{x_n}\sum_{i=1}^{n-1}(x_i-x_i')(4b_i+\alpha_i)\bmod 8}
\end{algorithm}
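For concreteness, the Client-side computation of $\theta$ in Eq. (\ref{eq:theta}) can be sketched as follows (our own illustration; the input values are hypothetical, not taken from a real run):

```python
def theta_from_preimages(x, xp, b, alpha):
    """Client-side computation of the output angle of Eq. (theta):
    theta = (pi/4) * (-1)^{x_n} * sum_{i<n} (x_i - x'_i)(4 b_i + alpha_i) mod 8.
    x, xp: the two n-bit preimages (as lists of bits);
    b:     the n-1 measurement outcomes;
    alpha: the n-1 three-bit values, as integers in {0,...,7}."""
    n = len(x)
    s = sum((x[i] - xp[i]) * (4 * b[i] + alpha[i]) for i in range(n - 1))
    b_tilde = ((-1) ** x[n - 1] * s) % 8   # three-bit description of theta
    return b_tilde                          # theta = b_tilde * pi/4

# Hypothetical run: only positions where x_i != x'_i contribute.
x  = [1, 0, 1, 1]
xp = [0, 0, 1, 0]        # differs at positions 1 and 4 (the output qubit)
b  = [1, 0, 1]
alpha = [3, 5, 6]
print(theta_from_preimages(x, xp, b, alpha))  # -> 1, i.e. theta = pi/4
```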
\noindent\textbf{Remarks:} The first thing to note is that the server
should not only be unable to guess $\theta$ from his classical
communications, but he should also be unable to distinguish it from a
random string with non-negligible advantage. We will prove
this later, but for now it is enough to point out that $\theta$
depends on the preimages $x$ and $x'$ of $y$ (which the Client can
obtain using $t_k$).
The second thing to note is that previously, in \autoref{ideal:c_q_factory} and in \autoref{lemma:hardcore}, we used the variable $\beta$. In our
case, $\beta$ corresponds to both the $\alpha_i$'s and $b$. While our
expression resembles the inner product in the Goldreich-Levin (GL)
theorem, it differs in a number of places, and our proof (that $\theta$
is a hard-core function), while it builds on the proof of the GL theorem, is
considerably more complicated. Details can be found in the security
proof, but here we simply mention the differences: (i) our case involves three bits rather than a single predicate, and the different bits, viewed separately, may not be independent; (ii) we have a term $(x-x')$ rather than a single preimage, so rather than the one-way property of the function we will need its second preimage resistance; and (iii) for the same reason, if we view our function as an inner product, it can take both negative and positive values (since $(x-x')$ could be negative).
A third thing to note is that we have singled out the last qubit of the first register as the qubit that will be the output qubit. One could have a more general protocol where the output qubit is chosen randomly, or, for example, among the qubits that are known to have different bit values between $x$ and $x'$, but this would not
improve our analysis, so we keep it like this for simplicity. Moreover,
while the ``inner product'' normally involves the full string $x$ that
one tries to invert, in our case it does not include one of the bits
(the last) of the string we wish to invert. It is important to note
that this does not change anything in our proofs, since if one can
invert all of the string apart from one bit with inverse-polynomial
probability of success, then trivially one can invert the full string
with inverse-polynomial probability (by randomly guessing the
remaining bit or by trying out both of its values). Therefore all
the proofs by contradiction remain valid and, in the remainder, for
notational simplicity, we will take the inner products to involve all
$n$ bits.
\subsection{Correctness and intuition}
\begin{theorem}\label{Thm:correctness}
If both the Client and the Server follow \autoref{protocol:concrete_c_q_factory}, the protocol aborts when the two preimages $x,x'\in f_k^{-1}(y)$ agree on their last bit ($x_n=x_n'$), while otherwise the Server ends up with the output (single) qubit in the state $\ket{+_\theta}$, where $\theta$ is given by Eq. (\ref{eq:theta}).
\end{theorem}
\begin{proof}
In the first stage, before the first measurement but after the application of $U_{f_k}$, the state is $\sum_x\ket{x}\otimes\ket{f_k(x)}$. The
measurement collapses the first register to the equal
superposition of the two unique preimages of the measured $y=f_k(x)=f_k(x')$,
in other words to the state
$\left(\ket{x}+\ket{x'}\right)\otimes\ket{y}$. It is not possible, even for a malicious adversary (not considered here), to force the output of the measurement to be a given $y$ (see \cite{postbqp} for the relation of PostBQP to BQP).
This completes the first stage of the protocol. Before proceeding with the proof of correctness we make three observations.
By the second preimage resistance property of the trapdoor function, learning $x$ does not suffice to learn $x'$ except with negligible probability, and intuitively, by the stronger collision resistance property, even a malicious server cannot forge a state $\ket{x}+\ket{x'}$ (with $f(x)=f(x')$) fully known to him.
Then, we examine what happens if the last bits of $x$ and $x'$ are the same, and see why the protocol aborts in this case. Here, in the first register, the last qubit is in product form with the remaining state, and therefore any further measurements in Stage 2
do not affect it, leaving it in the state $\ket{x_{n}}$. Because of this, the output state is not of the form of Eq. (\ref{eq:theta}), and including such states in the set of possible outputs would considerably change our analysis.
Finally, we should note that the resulting state is essentially a Greenberger-Horne-Zeilinger (GHZ)
state \cite{GHZ1989}: let $G$ be the set of bit positions where $x$ and $x'$ differ (which
includes position $n$, the output qubit), while $\bar{G}$ is the set where they are identical. The state is then (where we no longer keep the qubits in order, but group them depending on whether they belong to $G$ or
$\bar{G}$):
\EQ{
\big(\otimes_{i\in \bar{G}}\ket{x_{i}}\big)\otimes\big(\otimes_{j\in G}\ket{x_{j}}+\otimes_{j\in G}\ket{x_{j}\oplus 1}\big)
}
This can be rewritten as (up to trivial re-normalization):
\EQ{
\big(\otimes_{i\in \bar{G}}\ket{x_{i}}\big)\otimes\Big(\prod_{j\in G}X^{x_{j}}\Big)\big(\ket{0\cdots 0}_{G}+\ket{1\cdots 1}_{G}\big)
}
It is now evident that the state at the end of Stage 1 is a tensor
product of isolated $\ket{0}$ and $\ket{1}$ states, and a GHZ state
with random $X$'s applied. Figure~\ref{fig:drawing_ghz} gives an
illustration of this state\footnote{GHZ states, when viewed as graph states, correspond to star graphs, as can also be seen here.}, before and
after Stage 2.
\begin{figure}
\centering
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width=\linewidth]{drawing-left_fig.pdf}
\caption{End of Stage 1: The yellow qubits are in a big \mbox{GHZ-like} state. The server does not know which qubits are in the GHZ state, and which qubits are not (in red).}
\label{fig:drawing_ghz_left}
\end{subfigure}\hspace*{0.05\textwidth}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width=\linewidth]{drawing_right_fig_eyes.pdf}
\caption{End of Stage 2: Measuring any qubit in the GHZ state will rotate the last (output) qubit depending on the angle (and result) of the measurement.}
\label{fig:drawing_ghz_right}
\end{subfigure}
\caption{A simplified representation of the protocol. The red and yellow ellipses represent the qubits, the inner circle contains the bits of $x$ and the outer circle contains the bits of $x'$. The central qubit is the last one, which is not measured and which will be the output qubit.}
\label{fig:drawing_ghz}
\end{figure}
The important thing to note is that the set $G$, which determines which qubits are in the GHZ state and which are not, is \emph{not} known to the server (apart from the fact that the position of the output qubit belongs to $G$, since otherwise the protocol aborts). Moreover, this set denotes the positions where $x$ and $x'$ differ, which is given by the XOR of the two preimages $x\oplus x':=(x_1\oplus x_1',\cdots,x_n\oplus x_n')$. Because of the
second preimage resistance of the function, the server should not be able to invert and obtain $x\oplus x'$, except with negligible probability (without access to the trapdoor $t_k$). This in itself does not guarantee that the Server cannot learn \emph{any} information about the XOR of the preimages, but we will see that the actual form of the state is such that being able to obtain information would allow inverting the full XOR and thus breaking the second preimage resistance.
Now let us continue towards Stage 2. Measuring a qubit (other than the
last one) in $\bar{G}$ has no effect on the last qubit (since it is
disentangled). When the qubit index is in $G$, then measuring it at angle
$\alpha_i\pi/4$ gives a phase to the output qubit of the form
$(-(-1)^{x_{i}}\alpha_i+4b_i)\pi/4$ as one can easily
check\footnote{The $(-1)^{x_i}$ term arises because of the commutation
  of $X_i^{x_{i}}$ with the measurement angle, and the final
  $X_n^{x_n}$ gate gives an overall $(-1)^{x_n}$ to the angle of
  deviation.}.
Therefore, adding all the phases leads to the output state being:
\EQ{
\ket{+_\theta}; \ \theta=\frac\pi4(-1)^{x_{n}}\left(\sum_{i\in G\setminus\{n\}}\big(-\alpha_i(-1)^{x_{i}}+4b_i\big)\right)\bmod 8
}
Because $\theta$ is defined modulo $2\pi$ and $-4 = 4 \bmod 8$, we can express the output angle in a more
symmetrical way:
\EQ{
\theta=\frac\pi4(-1)^{x_{n}}\left(\sum_{i=1}^{n-1}(x_{i}-x'_{i})\big(4b_i+\alpha_i\big)\right) \bmod 8
}
Note that because the angles are defined modulo $2\pi$, one can
represent this angle as a 3-bit string $\tilde{B}$ (interpretable as
an integer) such that $\theta := \tilde{B} \times \frac{\pi}{4}$, and
eventually remove the $(-1)^{x_{n}}$ if needed by choosing a suitable convention in defining $x$ and
$x'$.
\end{proof}
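The correctness statement can also be checked numerically. The following sketch (ours, with hypothetical preimages) simulates Stage 2 on the state $(\ket{x}+\ket{x'})/\sqrt{2}$ and verifies that the remaining qubit equals $\ket{+_\theta}$, with $\theta$ from Eq. (\ref{eq:theta}), up to global phase:

```python
import numpy as np

rng = np.random.default_rng(7)

def measure_first_qubit(psi, ket):
    """Project the first qubit of psi onto |ket><ket|, remove it, renormalize."""
    t = psi.reshape(2, -1)
    out = ket.conj() @ t
    return out / np.linalg.norm(out)

n = 4
x  = np.array([1, 0, 1, 1])
xp = np.array([0, 0, 1, 0])          # last bits differ, so no abort
alpha = rng.integers(0, 8, size=n - 1)

# Stage 1 (honest server): the post-measurement state (|x> + |x'>)/sqrt(2)
psi = np.zeros(2 ** n, dtype=complex)
psi[int("".join(map(str, x)), 2)] = 1
psi[int("".join(map(str, xp)), 2)] = 1
psi /= np.linalg.norm(psi)

# Stage 2: measure qubits 1..n-1 in the {|0> +/- e^{i alpha_i pi/4}|1>} basis
b = []
for i in range(n - 1):
    b_i = int(rng.integers(0, 2))    # here both outcomes have probability 1/2
    ket = np.array([1, (-1) ** b_i * np.exp(1j * alpha[i] * np.pi / 4)]) / np.sqrt(2)
    psi = measure_first_qubit(psi, ket)
    b.append(b_i)

# Compare the remaining qubit with |+_theta>, theta from Eq. (eq:theta)
B = ((-1) ** x[-1] * sum((x[i] - xp[i]) * (4 * b[i] + alpha[i])
                         for i in range(n - 1))) % 8
target = np.array([1, np.exp(1j * B * np.pi / 4)]) / np.sqrt(2)
print(round(abs(np.vdot(target, psi)), 6))  # -> 1.0 (equal up to global phase)
```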
A final remark is that in an honest run of this protocol, the measurement outcomes $b_i$ and $y$ are uniformly distributed over $\{0,1\}$ and $\Im(f_k)$ respectively. This justifies why, in the honest-but-curious model, we can view the protocol as sampling
the different $\alpha,y,b$'s at random.
\section{Privacy against QHBC adversaries}\label{Sec:Privacy}
Here we will prove the security of \autoref{protocol:concrete_c_q_factory} against QHBC adversaries (\autoref{def:QHBC}). It can easily be generalised to specious adversaries (\autoref{def:specious}). Before proceeding further, it is worth stressing that this security level has three-fold importance. Firstly, the QHBC model covers any application of PSRQG within a protocol where the adversaries are third parties that have access to the classical communication and nothing else. In this case, we can safely assume that the quantum part of the protocol is followed honestly, and we only need to prove that the third parties learn nothing about the classical description of the state from the public classical communication.
The second case of interest is the scenario where the ``server'' does not intend to sabotage or corrupt the computation but may be interested in learning extra information for free. In such a case, the protocol should be followed honestly, since any non-reversible deviation other than copying classical information could corrupt the computation. Finally, the QHBC case, as in the classical setting, is a first step towards proving full security against malicious adversaries, as we will discuss in \autoref{Sec:Malicious}.
\begin{theorem}\label{Thm:privacy}
\autoref{protocol:concrete_c_q_factory} realises a PSRQG Ideal Protocol (as in \autoref{def:c_q_factory}) that is private with respect to this ideal protocol (as in \autoref{def:private}) against a QHBC server $\mathcal{A}$ (\autoref{def:QHBC}).
\end{theorem}
Before proving the privacy with respect to the ideal functionality (see below for the construction of the simulators), the first step is to show that the corresponding ideal protocol (\autoref{def:ideal}) is a PSRQG. By \autoref{lemma:hardcore} this reduces to proving that the classical description is a hard-core function with respect to $f_k$.
\begin{theorem}\label{Thm:hardcore}
The function $\theta$ given by
\EQ{\theta=\frac{\pi}{4}\left(\sum_{i=1}^{n-1}(x_i-x_i')(4b_i+\alpha_i)\right) \bmod 8}
as defined in \autoref{protocol:concrete_c_q_factory} (the $(-1)^{x_n}$ factor has been removed by a suitable choice of convention for $x$ and $x'$, see the end of the proof of \autoref{Thm:correctness}), is a hard-core function with respect to $f_k$.\\
NB: here collision resistance is not needed and is replaced by the weaker second preimage resistance property.
\end{theorem}
\begin{proof}[Sketch Proof of Theorem \ref{Thm:hardcore}]
In \autoref{protocol:concrete_c_q_factory}, the adversary (Server) can only use the classical information that he possesses ($k,y,\alpha,b$) in order to try to guess, with some probability, the value of $\theta$ in the case that there is no abort. Since the adversary follows the honest protocol, the choices of $y,b$ are truly random (and not determined by the adversary, as they could be in the malicious case).
\noindent\textbf{Outline of sketch proof:} We first express the classical description of the state as expressions for each of the corresponding three bits. The aim is to prove that it is impossible to distinguish the sequence of these three bits from three random bits with non-negligible probability. To show this we follow five steps. In \textbf{Step 1} we express each of the bits as a sum
modulo two of an inner product (of the form appearing in the GL theorem) and some other terms. In \textbf{Step 2} we show that guessing the XOR of the two preimages breaks the second preimage resistance of the function and is thus impossible. We then assume that the adversary can achieve some inverse-polynomial advantage in guessing certain predicates, and in the remaining steps we show that in that case he can obtain a polynomial-time inversion algorithm for the one-way function $f_k$, thus reaching a contradiction. In \textbf{Step 3} we use the Vazirani-Vazirani theorem (\autoref{thm:VV}) to reduce the proof that $\widetilde{B}$ is a hard-core function to proofs for a number of single hard-core bits (predicates). In \textbf{Step 4} we use a lemma that allows us to fix all but one variable in each expression, at an extra cost
that is only an inverse-polynomial factor in the probability, so that an inverse-polynomial advantage survives the fixing. Finally, in \textbf{Step 5}, we reduce all the predicates to the form of a known hard-core predicate XORed with a function that involves variables not included in that predicate. Using the previous step, this reduces to guessing the XOR of a hard-core predicate with a constant, which is bounded by the probability of guessing the (known to be hard-core) predicate itself.
Here we give the sketch described above, while the full proof can be found in \autoref{app:hardcore}. Let us start by defining
\EQ{\label{eq:proof1}\widetilde{B}=\widetilde{B}_1\widetilde{B}_2\widetilde{B}_3=\left(\sum_{i=1}^{n-1}(x_i-x_i')(4b_i+\alpha_i)\right)\bmod 8}
where the $\widetilde{B}_i$ are single bits. Moreover, we treat $x,x'$ as vectors in $\{0,1\}^n$; we define $\alpha^{(j)}=(\alpha_1^{(j)},\cdots,\alpha_{n-1}^{(j)})$ as the vector collecting the $j$th bit ($j\in\{1,2,3\}$) of each of the three-bit strings $\alpha_i$, and we define $\tilde{x}:=x\oplus x'$. We define $z\in\{-1,0,1\}^n$ as the vector of element-wise differences of the bits of $x$ and $x'$, i.e. $z_i=x_i-x_i'$. Finally, as in the GL theorem, we use the notation $\langle a,b\rangle=\sum_{i=1}^{n-1} a_ib_i$ for the inner product.
We will prove that any QPT adversary $\mathcal{A}$ having all the classical information that the Server has $(y,\alpha,b)$ can guess $\widetilde{B}$ with probability at most negligibly better than random:
\EQ{\underset{\substack{ x \leftarrow \{0, 1\}^n \\ \alpha \leftarrow \{0, 1\}^{3n} \\ b \leftarrow \{0,1\}^n }} {\Pr} [\mathcal{A}(f(x), {\alpha}^{(1)}, {\alpha}^{(2)}, {\alpha}^{(3)}, b) = \widetilde{B_1}\widetilde{B_2}\widetilde{B_3}] \leq \frac18+\negl
}
where for simplicity we write $f$ instead of $f_k$. This means that the adversary $\mathcal{A}$ cannot distinguish $\widetilde{B}$
from a random three-bit string with non-negligible probability, and thus
\autoref{protocol:concrete_c_q_factory} is a PSRQG as given in
\autoref{def:c_q_factory}.
\noindent\textbf{Step 1:} We decompose Eq. (\ref{eq:proof1}) into
three separate bits, using the variables $\tilde{x},z$ defined above.
\EQ{\label{eq:proof2}
\widetilde{B}_3&=&\langle\tilde{x},\alpha^{(3)}\rangle\bmod 2 \nonumber\\
\widetilde{B}_2&=& \langle\tilde{x},\alpha^{(2)}\rangle\bmod 2\oplus h_2(z,\alpha^{(3)})\nonumber\\
\widetilde{B}_1&=&\langle\tilde{x},\alpha^{(1)}\rangle\bmod 2\oplus h_1(z,\alpha^{(3)},\alpha^{(2)},b)
}
where the derivation and exact expressions for the functions $h_1,h_2$
are given in \autoref{app:hardcore}. We notice from
Eq. (\ref{eq:proof2}) that each bit includes a term of the form
$\langle \tilde{x},\alpha^{(i)}\rangle\bmod 2$ which on its own is a
hard-core predicate following the GL theorem.
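The first line of Eq. (\ref{eq:proof2}) can also be checked directly: reducing Eq. (\ref{eq:proof1}) modulo $2$ kills the (even) $4b_i$ terms, and $(x_i-x_i')\bmod 2 = \tilde{x}_i$. A numerical sanity check (ours):

```python
import random

# Numerical check (ours) of the least-significant bit of Eq. (proof2):
# B~ = sum_i (x_i - x'_i)(4 b_i + alpha_i) mod 8, and its last bit B~_3
# should equal <x~, alpha^(3)> mod 2, where x~ = x XOR x' and alpha^(3)
# holds the least-significant bit of each alpha_i.
random.seed(0)
n = 6
for _ in range(1000):
    x  = [random.randint(0, 1) for _ in range(n)]
    xp = [random.randint(0, 1) for _ in range(n)]
    b  = [random.randint(0, 1) for _ in range(n - 1)]
    alpha = [random.randrange(8) for _ in range(n - 1)]
    B = sum((x[i] - xp[i]) * (4 * b[i] + alpha[i]) for i in range(n - 1)) % 8
    lhs = sum((x[i] ^ xp[i]) * (alpha[i] % 2) for i in range(n - 1)) % 2
    assert B % 2 == lhs
print("B~_3 = <x~, alpha^(3)> mod 2 verified on 1000 random instances")
```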
\noindent\textbf{Step 2:} By the second preimage resistance we have:
\EQ{\label{eq:proof3}& &\underset{x \leftarrow \{0,1\}^n} {\Pr} [\mathcal{A}(1^n, x) = x' \text{ such that } f(x) = f(x') \text{ and }x\neq x'] \leq \negl \Rightarrow \nonumber\\
& &\underset{x \leftarrow \{0,1\}^n} {\Pr} [\mathcal{A}(1^n, x) = x' \oplus x=\tilde{x}]\leq \negl
}
For each bit $j\in\{1,2,3\}$ separately, we assume that the adversary can guess the corresponding bit with probability $\frac12+\varepsilon_j(n)$. Then, similarly to the GL theorem, we prove that if this $\varepsilon_j(n)$ is inverse polynomial, this leads to a contradiction with Eq. (\ref{eq:proof3}), since one can obtain an inversion algorithm for the one-way function $f$ with inverse-polynomial success probability.
\noindent\textbf{Step 3:} While each bit includes a term that on its own would make it a hard-core predicate (as stated in Step 1),
XORing the overall bit with other bits could destroy this property. To proceed with the proof that $\widetilde{B}$ is a hard-core function, we use the Vazirani-Vazirani theorem, which states that it suffices to show that the individual bits, as well as all XOR combinations of individual bits, are hard-core predicates. In this way one evades the need to show explicitly that the guesses for different bits are not correlated. To proceed with the proof, we use a trick that ``disentangles'' the different variables.
\noindent\textbf{Step 4:} We would like to be able to fix one variable and vary only the remaining ones, while at the same time maintaining some bound on the guessing probability.
The advantage $\varepsilon_j(n)$ that we assume the adversary has for guessing one bit (or an XOR of bits) is calculated ``on average'' over all the random choices of $(\tilde{x},\alpha^{(i)},b)$. Using \autoref{lemma:proof1} we can fix all but one variable, one by one (applying the lemma iteratively, see \autoref{app:hardcore}). With suitable choices, the cardinality of the set of values satisfying all these conditions is $O(2^n\varepsilon_j(n))$ for each iteration. Unless $\varepsilon_j(n)$ is negligible, this size is an inverse-polynomial fraction of all
values, which suffices to reach the contradiction. The actual inversion probability that we obtain is simply the product of the extra cost of fixing the variables with the standard GL inversion probability; this extra cost is exactly the ratio of the cardinality of the $\text{Good}$ sets (defined below) to the set of all values, which is $O(\varepsilon_{v_i}(n))$.
\begin{lemma}\label{lemma:proof1}
If $\underset{(v_1,\cdots,v_k)| v_j\leftarrow\{0,1\}^n \forall j}{\Pr}[\text{Guessing}]\geq p+\varepsilon(n)$, then for any variable $v_i$ there exists a set $\text{Good}_{v_i}\subseteq\{0,1\}^n$ of size at least $\frac{\varepsilon(n)}{2}2^n$ such that for all $v_i\in \text{Good}_{v_i}$
\[
\underset{(v_1,\cdots,\cancel{v_i},\cdots,v_k)| v_j\in\{0,1\}^n}{\Pr}[\text{Guessing}]\geq p+\frac{\varepsilon(n)}2
\]
where the probability is taken over all variables except $v_i$.
\end{lemma}
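The lemma follows from a standard averaging (Markov-type) argument; a sketch, writing $P(v_i)$ for the guessing probability conditioned on a fixed value of $v_i$:
\[
p+\varepsilon(n)\;\leq\;\underset{v_i\leftarrow\{0,1\}^n}{\mathbb{E}}\left[P(v_i)\right]\;\leq\;\Pr[v_i\in\text{Good}_{v_i}]\cdot 1+\Pr[v_i\notin\text{Good}_{v_i}]\cdot\Big(p+\frac{\varepsilon(n)}{2}\Big)
\]
If $|\text{Good}_{v_i}|$ were smaller than $\frac{\varepsilon(n)}{2}2^n$, the right-hand side would be strictly below $\frac{\varepsilon(n)}{2}+p+\frac{\varepsilon(n)}{2}=p+\varepsilon(n)$, a contradiction.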
\noindent\textbf{Step 5:} If the expression we wish to guess involves
XOR of terms that depend on different variables, then by using Step 4
we can fix the variables of all but one term. Then we note that trying
to guess a bit (that depends on some variable and has expectation value close to $1/2$) is at least as hard as
trying to guess the XOR of that bit with a constant. For example, if
the bit we want to guess is
$\langle \widetilde{x}, r_1\rangle \bmod 2 \oplus h(z, r_2, r_3)$ and
we have a bound on the guessing probability where only $r_1$ is
varied, then we have: \footnote{Here and in the full proof, when we compare winning probabilities for QPT adversaries, it is understood that we take the adversary that maximises these probabilities.}
\EQ{&\underset{\substack{r_1 \leftarrow \{0, 1\}^{n}}} {\Pr} [\mathcal{A}(f(x), r_1, r_2, r_3) = \langle \widetilde{x}, r_1\rangle \bmod 2 \oplus h(z, r_2, r_3)]\leq \nonumber\\ & \underset{\substack{r_1 \leftarrow \{0, 1\}^{n}}} {\Pr} [\mathcal{A}(f(x), r_1, r_2, r_3) = \langle \widetilde{x}, r_1\rangle \bmod 2]\nonumber
}
We note that all bits of $\widetilde{B}$ and their XOR's can be
brought into this form. Then, using this, we can prove security, as
the r.h.s. is exactly in the form where the GL theorem provides an
inversion algorithm for the one-way function $f$. For details, see \autoref{app:hardcore}.
\end{proof}
We now return and prove \autoref{Thm:privacy}.
\begin{proof}[Proof of \autoref{Thm:privacy}]
In the proof we consider QHBC adversaries; since we follow closely the treatment of the more general specious adversaries, the proof generalises easily. The proof has two steps. In the \textbf{first step}, using the operators $\mathcal{T}_i$ (as per the definition of specious) and the existence of a certain fixed state (see below), the simulator can reproduce the real view of the Server, provided it can reproduce the honest state $\rho_i(\rho_{in})$ of the corresponding part of the protocol. The \textbf{second step} is to notice that, apart from the last step of the protocol (the decision to abort or not), the (only) secret input $t_k$ of the Client plays no role, and thus the simulator can reproduce the view of the Server without calling the ideal functionality. Finally, the simulator of the last step of the protocol calls the ideal functionality (and thus $q_i=1$ in Eq. (\ref{eq:simulator_defn})) and receives the decision to abort (without access to the secret $t_k$).
\noindent \textbf{Step 1:} We use the no-extra information lemma from \cite{KW2017}:
\begin{lemma}[No-extra Information (from \cite{KW2017})]\label{rushing1}
Let $\Pi_U=(A,B,n)$ be a correct protocol for two party evaluation of $U$. Let $\tilde A$ be any $\epsilon$-specious
adversary. Then there exists an isometry $T_i:\tilde A_i\rightarrow A_i\otimes \hat{A}_i$ and a (fixed) mixed state $\hat{\rho}_i\in D(\hat{A}_i)$ such that for all joint input states $\rho_{in}$,
\EQ{\label{eq:rushing}
\Delta\left((T_i\otimes\mathbb{I})(\tilde\rho_i(\tilde A,\rho_{in})),\hat{\rho_i}\otimes\rho_i(\rho_{in})\right)\leq 12\sqrt{2\epsilon}
}
where $\rho_i(\rho_{in})$ is the state in the honest run and $\tilde\rho_i(\tilde A,\rho_{in})$ is the real state (with the specious adversary $\tilde A$).
\end{lemma}
By setting $\epsilon=0$ (as QHBC) and using the inverse of the isometry $T_i$, we have\footnote{We denote the two parties $C$ for Client and $S$ for Server and their corresponding spaces (instead of the generic $A,B$ used in the definitions).}
\EQ{
\tilde\rho_i(\tilde S,\rho_{in})=(T_i^{-1}\otimes\mathbb{I})(\hat{\rho}_i\otimes\rho_i(\rho_{in}))
}
and the operation $S_i$ of the simulator for \emph{any} step consists of generating $\rho_i(\rho_{in})$ (see the next part of the proof), tensoring it with the fixed state $\hat{\rho}_i$ and applying the inverse of the isometry $T_i$. This recovers exactly the real state $\tilde\rho_i(\tilde S,\rho_{in})$, and thus tracing out the system of the Client to obtain the simulated view $\nu_i(\tilde S,\rho_{in})$ gives a view that is $(\delta=0)$-private with respect to the ideal protocol (see Eq. (\ref{eq:private_real_simul})).
\noindent \textbf{Step 2:} We give below the honest states at the two steps of the protocol before the Server (classically) communicates with the Client, noting that a simulator (with no access to the private information $t_k$) could interact with the Server (instead of the Client) just following the normal steps of the protocol, using the public inputs ($k,\alpha$).
\begin{itemize}
\item State after the Server measures the second register:
\EQ{
\left(\ket{k,t_k,\alpha}\right)_C\otimes\left(\ket{k,\alpha}\otimes (\ket{x}+\ket{x'})\otimes\ket{y}\right)_S
}
\item State after the Server measures the first registers in $\alpha$ angles:
\EQ{
\left(\ket{k,t_k,\alpha,y}\right)_C\otimes\left(\ket{k,\alpha,y}\otimes \ket{b}\otimes\ket{Output}\right)_S
}
where $\ket{Output}=\ket{+_\theta}$ if there is no abort, while $\ket{Output}=\ket{x_n}$ otherwise.
\end{itemize}
The final state is
\EQ{\rho_f(\rho_{in})&=&
\left(\ket{k,t_k,\alpha,b}\right)_C\otimes\left(\ket{k,\alpha,y,b}\otimes\ket{+_\theta}\otimes\ket{\mathsf{no- abort}}\right)_S \ \textrm{if no abort}\nonumber\\
&=& \left(\ket{k,t_k,\alpha,b}\right)_C\otimes\left(\ket{k,\alpha,y,b}\otimes\ket{x_n}\otimes \ket{\mathsf{abort}}\right)_S \ \textrm{if abort}
}
To obtain the corresponding view, the Simulator calls the ideal functionality, but only uses the $\mathsf{abort}/\mathsf{no-abort}$ decision, and otherwise acts as in previous steps: Generates the state $\hat{\rho}_f$ (from the no-extra information lemma), obtains the final state $\rho_f(\rho_{in})$ by running the actual protocol until the previous step and adding the extra register $\ket{\mathsf{abort}}/\ket{\mathsf{no-abort}}$, and then applies the inverse of the isometry $T_f$ and traces out the Client's registers. Note that, as given in the definitions, all operators used correspond to polynomially-sized quantum circuits and therefore the Simulator is also QPT.
\end{proof}
Before moving to the constructions of trapdoor functions with the required properties and discussing the malicious case, we need to make an important observation. The ideal Protocol~\ref{ideal:q_factory}, apart from the classical information $(k,y,\alpha,b)$, returns the state $\ket{+_\theta}$ to the Server. The security of our real \autoref{protocol:concrete_c_q_factory} that we proved is with respect to the ideal protocol (i.e. no information beyond that of the ideal protocol is obtained). However, having access to a single copy of the state $\ket{+_\theta}$ can (and does) give some non-negligible information on the classical description of that specific $\theta$. For example, by making a measurement one can rule out one of the eight states with certainty. Naively, this would appear to contradict the properties of the function we have (where we prove that one can have only negligible advantage in guessing $\theta$). It is, however, no different from the SRQG functionality, where the Server can obtain some information on $r_m$. The resolution of this apparent contradiction is that the proof of the hard-core property of $\theta$ with respect to the function relies on repeating the same guessing algorithm keeping the same $x$ (or $y$) while varying the $\alpha$'s. However, to obtain any information from the (output) qubit, one needs to measure and therefore disturb it. When repeating the experiment, the probability of obtaining the same $y$ a second time (and thus having prepared the same $\theta$) is negligible for any QPT adversary (who can repeat only a polynomial number of times). Therefore, this one-shot extra information on $\theta$ cannot be distinguished from one-shot information on a truly random $r_m$.
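The single-copy leakage mentioned above can be made concrete with a small numerical sketch (illustration only; the eight candidate angles $\theta=k\pi/4$ are as in the protocol, the measurement angle $\phi$ is an arbitrary choice): measuring $\ket{+_\theta}$ in the $\{\ket{+_\phi},\ket{-_\phi}\}$ basis yields outcome $\ket{-_\phi}$ with probability $\sin^2((\theta-\phi)/2)$, which vanishes exactly when $\theta=\phi$, so observing that outcome rules out one candidate with certainty.

```python
# Probability of the |-_phi> outcome when measuring |+_theta> = (|0> + e^{i theta}|1>)/sqrt(2)
# for each of the eight candidate angles theta = k*pi/4 (toy single-qubit computation).
import cmath, math

def plus(theta):
    return [1 / math.sqrt(2), cmath.exp(1j * theta) / math.sqrt(2)]

def minus(theta):
    return [1 / math.sqrt(2), -cmath.exp(1j * theta) / math.sqrt(2)]

def prob(state, outcome):
    # Born rule: |<outcome|state>|^2
    amp = sum(o.conjugate() * s for o, s in zip(outcome, state))
    return abs(amp) ** 2

phi = 0.0
probs = [prob(plus(k * math.pi / 4), minus(phi)) for k in range(8)]
print([round(p, 3) for p in probs])  # -> [0.0, 0.146, 0.5, 0.854, 1.0, 0.854, 0.5, 0.146]
```

The $k=0$ entry is exactly zero, so the outcome $\ket{-_\phi}$ excludes $\theta=\phi$ with certainty, while the other seven candidates remain possible.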
\section{Function Constructions}\label{Sec:Functions}
For our \autoref{protocol:concrete_c_q_factory} we need a trapdoor one-way function that is also quantum-safe, two-regular and second preimage resistant (or has the stronger collision-resistance property). These properties may appear too strong to achieve; however, we give here methods to construct functions that achieve them, starting from trapdoor one-way functions with fewer (more realistic) conditions, and we give one specific example that achieves \emph{all} the desired properties. In particular we give:
\begin{itemize}
\item A general construction given either (i) an injective,
homomorphic (with respect to any operation\footnote{In particular, the homomorphic property is only required to hold for a single application of the operation.}) trapdoor one-way function or (ii) a bijective trapdoor one-way function, to obtain a two-regular, second preimage resistant\footnote{In (i) we prove the stronger collision-resistant property.},
trapdoor one-way function. In both cases the quantum-safe property is maintained (if the initial function has this property, so does the constructed function).
\item (Taken from \cite{MP2012}) A method to realise injective quantum-safe trapdoor functions, derived from the \textsc{LWE}{} problem, that have a certain homomorphic property.
\item A way to use the first construction with the trapdoor function from \cite{MP2012}, which requires a number of modifications, including a relaxation of the notion of two-regularity. The resulting function satisfies all the desired properties, provided there exists a choice of parameters satisfying multiple constraints.
\item A specific choice of these parameters, satisfying all constraints, that leads to a concrete function with all the desired properties.
\end{itemize}
\subsection{Obtaining two-regular, collision resistant/second preimage resistant, trapdoor one-way functions}\label{sec:general_two_regular}
Here we give two constructions. The first uses as starting point an injective, homomorphic trapdoor function while the second a bijective trapdoor function. While we give both constructions, we focus on the first construction since (i) we can prove the stronger collision-resistance property and (ii)
(to our knowledge) there is no known bijective trapdoor function that is believed to be quantum-safe.
\begin{theorem}\label{Thm:injective_to_desired}
If $\mathcal{G}$ is a family of injective, homomorphic, trapdoor one-way functions, then there exists a family $\mathcal{F}$ of two-regular, collision resistant, trapdoor one-way functions. Moreover the family $\mathcal{F}$ is quantum-safe if and only if the family $\mathcal{G}$ is quantum-safe.
\end{theorem}
From now on, we consider that any function $g_k \in \mathcal{G}$ has domain $D$ and range $R$, and let $\square$ be the closed operation on $D$ and $\star$ be the closed operation on $R$ such that $g_k$ is a morphism between $D$ and $R$ with respect to these two operations:
$$ g_k(a) \star g_k(b) = g_k(a \, \square \, b) \, \, \, \, \forall a,b \in D $$
We also denote by $\triangle$ the operation on $D$ inverse to $\square$, i.e. $ a \, \square \, b^{-1} = a \, \triangle \, b \, \, \, \, \forall a,b \in D$, and by $0$ the identity element for $\square$. \\
Then, the family $\mathcal{F}$ is described by the following PPT algorithms: \\
\procedure [linenumbering]{{\tt FromInj.Gen}$_{\mathcal{F}}(\secparam) $}{
(k, t_k) \sample Gen_{\mathcal{G}}(\secparam) \, \, \,\, \, \, \pccomment{ $k$ is an index of a function from $\mathcal{G}$ and $t_k$ is its associated trapdoor} \\
x_0 \sample D \setminus \{0\} \, \, \, \pccomment{$x_0 \neq 0$ to ensure that the 2 preimages mapped to the same output are distinct} \\
k' := (k, g_k(x_0)) \, \, \, \pccomment{the description of the new function} \\
t_k' := (t_k, x_0) \, \, \, \pccomment{ the trapdoor associated with the function $f_{k'}$} \\
\pcreturn k', t_k'
}\\
The Evaluation procedure receives as input an index $k'$ of a function from $\mathcal{F}$ and an element $\bar{x}$ from the function's domain ($\bar{x} \in D \times \{0, 1\}$): \\
\procedure {{\tt FromInj.Eval}$_{\mathcal{F}}(k', \bar{x})$}{
\pcreturn f_{k'}(\bar{x})
}\\
where every function from $\mathcal{F}$ is defined as:
\begin{equation}
\boxed{
\begin{aligned}[b]
& f_{k'} : D \times \{0, 1\} \rightarrow R \nonumber \\
& f_{k'}(x,c) =
\begin{cases}
g_k(x) \text{, } &\quad\text{if } c = 0\\
g_k(x) \star g_k(x_0) = g_k(x \, \square \, x_0) \footnotemark\ \text{, } &\quad\text{if } c = 1\\
\end{cases} \nonumber
\end{aligned}
}
\end{equation}
\footnotetext{The last equality follows since each function $g_k$ from $\mathcal{G}$ is homomorphic}
\procedure [linenumbering]{{\tt FromInj.Inv}$_{\mathcal{F}}(k', y, t_k')$} {
\pccomment{y is an element from the image of $f_{k'}$, $k' = (k, g_k(x_0)), \, \, t_k' = (t_k, x_0)$} \\
x_1 := Inv_{\mathcal{G}}(k, y, t_{k}) \\
x_2 := x_1 \triangle x_0 \\
\pcreturn (x_1, 0) \, \, \, and \, \, \, (x_2, 1) \, \, \, \pccomment{the unique 2 preimages corresponding to } \\
\tab \tab \tab \tab \pccomment{an element from the image of $f_{k'}$}
}
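The construction above can be illustrated with a toy instantiation (illustration only): the base function $g_k(x)=a\,x \bmod P$ is injective and homomorphic, with $\square$ and $\star$ both being addition modulo $P$, but it is of course \emph{not} one-way, so this sketch only demonstrates the two-regular structure and the trapdoor inversion, not security. All names and the modulus are hypothetical choices.

```python
# Toy FromInj instantiation: g_k(x) = a*x mod P (injective, homomorphic, NOT one-way).
import random

P = 101                                  # small prime: toy domain D = range R = Z_P

def g(a, x):
    return (a * x) % P                   # g(x) + g(y) = g(x + y)  (mod P)

def from_inj_gen():
    a = random.randrange(1, P)           # index k of g_k (invertible mod P)
    x0 = random.randrange(1, P)          # x0 != 0, so the two preimages differ
    k_prime = (a, g(a, x0))              # public description: (k, g_k(x0))
    t_kp = (a, x0)                       # trapdoor
    return k_prime, t_kp

def f(k_prime, x, c):                    # f_{k'} : D x {0,1} -> R
    a, g_x0 = k_prime
    return g(a, x) if c == 0 else (g(a, x) + g_x0) % P

def from_inj_inv(t_kp, y):
    a, x0 = t_kp
    x1 = (y * pow(a, -1, P)) % P         # g_k^{-1}(y) via the trapdoor
    x2 = (x1 - x0) % P                   # x1 "triangle" x0: the second preimage
    return (x1, 0), (x2, 1)

k_prime, t_kp = from_inj_gen()
y = f(k_prime, random.randrange(P), 0)
(x1, b1), (x2, b2) = from_inj_inv(t_kp, y)
print(f(k_prime, x1, b1) == y and f(k_prime, x2, b2) == y and x1 != x2)  # -> True
```

Note how either preimage reveals $g_k^{-1}(y)$ once $x_0$ is known, which is exactly the structure exploited in the one-wayness and collision-resistance reductions.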
\begin{proof}
To prove \autoref{Thm:injective_to_desired} we give below five lemmata showing that the family $\mathcal{F}$ of functions defined above satisfies the following properties: (i) two-regular, (ii) trapdoor, (iii) one-way, (iv) collision-resistant and (v) quantum-safe if $\mathcal{G}$ is quantum-safe.
\end{proof}
\begin{lemma}[two-regular]\label{lemma:two-regular}
If $\mathcal{G}$ is a family of injective, homomorphic functions, then $\mathcal{F}$ is a family of two-regular functions.
\end{lemma}
\begin{proof}
For every $y \in \Im f_{k'} \subseteq R$, where $k'=(k,g_k(x_0))$:
\begin{enumerate}
\item Since $\Im f_{k'}= \Im g_k$ and $g_k$ is injective, there exists a unique $x:=g^{-1}_k(y)$ such that $f_{k'}(x,0)=g_k(x)=y$.
\item Let $x'$ be such that $f_{k'}(x',1)=y$. By definition $f_{k'}(x',1)=g_k(x' \, \square \, x_0) = y$; since $g_k$ is injective and $g_k(x) = y$ by assumption, the unique such element is $x'= x \, \triangle \, x_0$.
\end{enumerate}
Therefore, we conclude that:
\EQ{\label{eq:preimages}
\forall \ y \ \in \Im f_{k'}: \ f_{k'}^{-1}(y):=\{(g_k^{-1}(y),0),(g_k^{-1}(y) \, \triangle \, x_0,1)\}
}
\end{proof}
\begin{lemma}[trapdoor]
If $\mathcal{G}$ is a family of injective, homomorphic, trapdoor functions, then $\mathcal{F}$ is a family of trapdoor functions.
\end{lemma}
\begin{proof}
Let $y \in \Im f_{k'} \subseteq R$. We construct the following inversion algorithm:\\
\procedure [linenumbering]{$Inv_{\mathcal{F}}(k', y, t_k')$} {
\pccomment{ $t_k' = (t_k, x_0)$, $k' = (k, g_k(x_0))$}\\
x := Inv_{\mathcal{G}}(k, y, t_k)\\
\pcreturn (x,0) \text{ and } (x \, \triangle \, x_0, 1)
}
\end{proof}
\begin{lemma}[one-way]\label{lemma:onewayFromInj}
If $\mathcal{G}$ is a family of injective, homomorphic, one-way functions, then $\mathcal{F}$ is a family of one-way functions.
\end{lemma}
\begin{proof}
We prove it by contradiction. Assume a QPT adversary $\mathcal{A}$ can invert any function in $\mathcal{F}$ with non-negligible probability $P$ (i.e. given $y\in \Im f_{k'}$ it returns a correct preimage of the form $(x',b)$ with probability $P$). We then construct a QPT adversary $\mathcal{A'}$ that inverts a function in $\mathcal{G}$ with the same non-negligible probability $P$, reaching a contradiction since $\mathcal{G}$ is one-way by assumption.
From Eq. (\ref{eq:preimages}) of \autoref{lemma:two-regular} we know the two preimages of $y$ are: (i) $(g_k^{-1}(y),0)$ and (ii) $(g^{-1}_k(y) \triangle x_0,1)$. Information on $g^{-1}_k(y)$ is obtained in both cases, i.e. obtaining either of the two preimages suffices to recover $g_k^{-1}(y)$ when $x_0$ is known. We now construct an adversary $\mathcal{A'}$ that, for any function $g_k : D \rightarrow R$, inverts any output $y=g_k(x)$ with the same probability $P$ with which $\mathcal{A}$ succeeds.\\
\procedure [linenumbering] {$\mathcal{A}'(k, y)$} {
x_0 \sample D \setminus \{0\} \pccomment{$\mathcal{A'}$ knows $x_0$, but is not given to $\mathcal{A}$}\\
k' := (k, g_k(x_0)) \\
(x', b) \gets \mathcal{A}(k', y) \\
\pcif (b == 0) \wedge (g_k(x') = y) \pcthen\\
\t \pccomment{equivalent to $\mathcal{A}$ succeeded in returning the first preimage} \\
\t \pcreturn x' \\
\pcelseif (b == 1) \wedge (g_k(x' \, \square \, x_0) = y) \pcthen\\
\t \pccomment{$\mathcal{A}$ succeeded in returning the second preimage} \\
\t \pcreturn x' \, \square \, x_0\pccomment{$\mathcal{A'}$ uses $x_0$ known from step 1} \\
\pcelse \pccomment{$\mathcal{A}$ failed in giving any of the preimages (happens with probability $1-P$)}\\
\t \pcreturn 0
}\\
\end{proof}
\begin{lemma}[collision-resistance]
If $\mathcal{G}$ is a family of injective, homomorphic, one-way functions, then $\mathcal{F}$ is a family of collision resistant functions.
\end{lemma}
\begin{proof}
Assume there exists a QPT adversary $\mathcal{A}$ that, given $k'=(k,g_k(x_0))$, can find a collision $(y, (x_1,b_1),(x_2,b_2))$ with $f_{k'}(x_1,b_1)=f_{k'}(x_2,b_2)=y$ with non-negligible probability $P$. From Eq. (\ref{eq:preimages}) we know that the two preimages are of the form $(x,0),(x \, \triangle \, x_0,1)$ where $g_k(x)=y$. It follows that when $\mathcal{A}$ is successful, it can recover $x_0$ by comparing the first arguments of the two preimages.
We now construct a QPT adversary $\mathcal{A'}$ that inverts the function $g_k$ with the same probability $P$, reaching a contradiction:\\
\procedure [linenumbering]{$\mathcal{A'}(k, g_k(x))$} {
k' := (k, g_k(x)) \\
(y, (x_1, b_1), (x_2, b_2)) \gets \mathcal{A}(k') \pccomment{where $y \in \Im f_{k'}$ and $(x_1, b_1) \neq (x_2, b_2)$} \\
\pcif f_{k'}(x_1,b_1)=f_{k'}(x_2,b_2)=y\\
\pcreturn x := x_1 \, \triangle \, x_2 \pccomment{the two preimages differ by $x_0 = x$} \\
\pcelse \pccomment{$\mathcal{A}$ failed to find collision of $f_{k'}$; happens with probability $(1-P)$}\\
\pcreturn 0
}
\end{proof}
\begin{lemma}[quantum-safe]
If $\mathcal{G}$ is a family of quantum-safe trapdoor functions, with properties as above, then $\mathcal{F}$ is also a family of quantum-safe trapdoor functions.
\end{lemma}
\begin{proof}
The properties that need to be quantum-safe are one-wayness and collision resistance. Both properties of $\mathcal{F}$ were proved above by reduction to the hardness (one-wayness) of $\mathcal{G}$, and the reductions are efficient (quantum) algorithms. Therefore, if $\mathcal{G}$ is quantum-safe, its one-wayness holds against QPT adversaries, and thus both properties of $\mathcal{F}$ are also quantum-safe.
\end{proof}
\begin{theorem}\label{Thm:bijective_to_desired}
If $\mathcal{G}$ is a family of bijective, trapdoor one-way functions, then there exists a family $\mathcal{F}$ of two-regular, second preimage resistant, trapdoor one-way functions. Moreover, the family $\mathcal{F}$ is quantum-safe if and only if the family $\mathcal{G}$ is quantum-safe.
\end{theorem}
The family $\mathcal{F}$ is described by the following PPT algorithms, where each function $g_k \in \mathcal{G}$ has domain $D$ and range $R$:\\
\procedure [linenumbering] {{\tt FromBij.Gen}$_{\mathcal{F}}(\secparam) $} {
(k_1, t_{k_1}) \sample Gen_{\mathcal{G}}(\secparam) \\
(k_2, t_{k_2}) \sample Gen_{\mathcal{G}}(\secparam) \\
k' := (k_1, k_2) \\
t_k' := (t_{k_1}, t_{k_2}) \\
\pcreturn k', t_k'
}\\ \\
\procedure {{\tt FromBij.Eval}$_{\mathcal{F}}(k', \bar{x})$}{
\pcreturn f_{k'}(\bar{x})
}\\ \\
where every function from $\mathcal{F}$ is defined as:
\begin{equation}
\boxed{
\begin{aligned}[b]
& f_{k'} : D \times \{0, 1\} \rightarrow R \nonumber \\
& f_{k'}(x,c) =
\begin{cases}
g_{k_1}(x) \text{, } &\quad\text{if } c = 0\\
g_{k_2}(x) \text{, } &\quad\text{if } c = 1\\
\end{cases} \nonumber
\end{aligned}
}
\end{equation}
\procedure [linenumbering]{{\tt FromBij.Inv}$_{\mathcal{F}}(k', y, t_k')$} {
\pccomment{y is an element from the image of $f_{k'}$, $k' = (k_1, k_2), \, \, t_k' = (t_{k_1}, t_{k_2})$} \\
x_1 := Inv_{\mathcal{G}}(k_1, y, t_{k_1}) \\
x_2 := Inv_{\mathcal{G}}(k_2, y, t_{k_2}) \\
\pcreturn (x_1, 0) \, \, \, and \, \, \, (x_2, 1) \, \, \, \pccomment{the unique 2 preimages corresponding to } \\
\tab \tab \tab \tab \pccomment{an element from the image of $f_{k'}$}
}
The proof of \autoref{Thm:bijective_to_desired}, using the family of functions defined above, follows the same steps as that of \autoref{Thm:injective_to_desired} and is given in \autoref{app:bijective}.
\subsection{Injective, homomorphic quantum-safe trapdoor one-way function from \textsc{LWE}{} (taken from \cite{MP2012})}
We outline the Micciancio and Peikert \cite{MP2012} construction of injective trapdoor one-way functions, naturally derived from the Learning With Errors problem. At the end we comment on the homomorphic property of the function, since this is crucial in order to use this function as the basis to obtain our desired two-regular, collision resistant trapdoor one-way functions.
The algorithm below generates the index of an injective function and its corresponding trapdoor. The matrix $G$ used in this procedure is a fixed matrix (whose exact form can be seen in \cite{MP2012}) for which the function from the family $\mathcal{G}$ with index $G$ can be efficiently inverted.
\procedure [linenumbering]{{\tt LWE.Gen}$_{\mathcal{G}}(1^n) $}{
A' \sample \mathbb{Z}_q^{n \times \bar{m}} \\
\pccomment{$\mathcal{D}$ denotes the element-wise Gaussian distribution with mean 0}\\
\pccomment{and standard deviation $\alpha q$ on matrices of size $\bar{m} \times kn$}\\
R \sample \mathcal{D}^{\bar{m} \times kn}_{\alpha q} \, \, \, \pccomment{trapdoor information} \\
A := (A', G - A'R) \, \, \, \pccomment{concatenation of matrices A' and G - A'R, representing the index of the function} \\
\pcreturn A, R
} \\
The actual description of the injective trapdoor function is given in the Evaluation algorithm below, where each function from $\mathcal{G}$ is defined on: $ g_K : \mathbb{Z}_q^n \times L^m \rightarrow \mathbb{Z}_q^m$, and $L$ is the domain of the errors in the \textsc{LWE}{} problem (the set of integers bounded in absolute value by $\mu$):
\procedure [linenumbering]{{\tt LWE.Eval}$_{\mathcal{G}}(K, (s, e)) $}{
y := g_K(s, e) = s^tK + e^t \\
\pcreturn y
}\\
The inversion algorithm returns the unique preimage $(s, e)$ corresponding to $b^t \in \Im (g_K)$. The algorithm uses as a subroutine the efficient algorithm $Inv_G$ for inverting the function $g_G$, with $G$ the fixed matrix mentioned before.
\procedure {{\tt LWE.Inv}$_{\mathcal{G}}(K, t_K, b^t) $}{
\pcln {b'}^t := b^t \begin{bmatrix} R \tabularnewline I \end{bmatrix} \\
\pcln (s', e') := Inv_G(b') \\
\pcln s := s' \\
\pcln e := b - K^ts \\
\pcln \pcreturn s, e
}\\
We now examine whether the functions $g_K$ are homomorphic with respect to suitable operations.
Given $a = (s_1, e_1) \in \mathbb{Z}_q^n \times L^m $ and $b = (s_2, e_2) \in \mathbb{Z}_q^n \times L^m$, the operation $\square$ on the domain is defined as:
$$ (s_1, e_1) \, \square \, (s_2, e_2) = (s_1 + s_2 \bmod q, e_1 + e_2)$$
Given $y_1 = g_K(a) \in \mathbb{Z}_q^m$ and $y_2 = g_K(b) \in \mathbb{Z}_q^m$, the operation $\star$ on the range is defined as:
$$ y_1 \, \star \, y_2 = y_1 + y_2 \bmod q $$
Then, we can easily verify that:
\EQ{& g_K(s_1, e_1) + g_K(s_2, e_2) \bmod q = {s_1}^tK + {e_1}^t + {s_2}^tK + {e_2}^t \bmod q = \nonumber\\
& (s_1 + s_2 \bmod q)^tK + (e_1 + e_2)^t = g_K((s_1 + s_2) \bmod q, e_1 + e_2)\nonumber}
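This identity can be checked numerically (a sketch with toy dimensions and a plain random matrix $K$; the trapdoor structure of \cite{MP2012} is not needed for the identity itself):

```python
# Check g_K(s1,e1) + g_K(s2,e2) = g_K(s1+s2 mod q, e1+e2)  (mod q), toy sizes.
import random

n, m, q, mu = 4, 8, 97, 3                        # toy LWE dimensions; |e_j| <= mu

K = [[random.randrange(q) for _ in range(m)] for _ in range(n)]

def g(s, e):
    # y_j = sum_i s_i K_ij + e_j  (mod q)
    return [(sum(s[i] * K[i][j] for i in range(n)) + e[j]) % q for j in range(m)]

s1 = [random.randrange(q) for _ in range(n)]
s2 = [random.randrange(q) for _ in range(n)]
e1 = [random.randrange(-mu, mu + 1) for _ in range(m)]
e2 = [random.randrange(-mu, mu + 1) for _ in range(m)]

lhs = [(u + v) % q for u, v in zip(g(s1, e1), g(s2, e2))]
e_sum = [u + v for u, v in zip(e1, e2)]
rhs = g([(u + v) % q for u, v in zip(s1, s2)], e_sum)

print(lhs == rhs)                                # -> True: identity over Z_q always holds
print(all(abs(x) <= mu for x in e_sum))          # may be False: e1+e2 can leave L^m
```

The identity always holds over $\mathbb{Z}_q$; the second check, however, may fail, anticipating the caveat discussed next.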
However, the sum of two error terms, each bounded by $\mu$, may not be bounded by $\mu$. This means that the function is not (strictly) homomorphic; what we can conclude is that, as long as the vector $e_1 + e_2$ lies inside the domain of $g_K$, the function behaves homomorphically. To address this issue, we define a weaker notion of 2-regularity and a (slight) modification of the {\tt FromInj} construction, to obtain a desired function starting from the trapdoor function of \cite{MP2012}.
\subsection{A suitable $\delta$-2 regular trapdoor function}\label{Subsec:actual_trapdoor}
Using the homomorphic injective trapdoor function of Micciancio and Peikert \cite{MP2012} and the construction defined in the proof of \autoref{Thm:injective_to_desired}, we derive a family $\mathcal{F}$ of collision-resistant trapdoor one-way functions, but with a weaker notion of 2-regularity, called $\delta$-2 regularity:
\begin{definition}[$\delta$-2 regular]
A family of functions $(f_i)_{i \leftarrow Gen_{\mathcal{F}}}$ is said to be \mbox{$\delta$-2 regular}, with $\delta \in [0,1]$ if:
\[\Pr_{i \leftarrow Gen_{\mathcal{F}}, y \in \Im (f_i)} [~ |f^{-1}_i(y)| = 2 ~] \geq \delta \]
\end{definition}
Given this definition, we note that in \autoref{protocol:concrete_c_q_factory} we need to modify the abort case to include the possibility that the image $y$ obtained from the measurement does not have two preimages (which happens with probability at most $(1-\delta)$).
\begin{theorem}[Existence of a $\delta$-2 regular trapdoor function family]
There exists a family of functions that are $\delta$-2 regular (with $\delta$ at least as big as a fixed constant), trapdoor, one-way, collision resistant and quantum-safe, assuming that there is no quantum algorithm that can efficiently solve $\textsc{SIVP}{}_\gamma$ for $\gamma = \poly[n]$.
\end{theorem}
\begin{proof}
To prove this theorem, we define a function similar to the one in the {\tt FromInj} construction, where the starting point is the function defined in \cite{MP2012}. Crucial for the security is a choice of parameters that satisfy a number of conditions given by \autoref{thm:req_param} and proven in \autoref{app:implementation}. The proof is then completed by providing a choice of parameters given in \autoref{thm:exist_param} that satisfies all conditions as it is shown in \autoref{app:implementation2}.
\end{proof}
\begin{definition}\label{def:REG2_fct}
For a given set of parameters $\mathcal{P}$ chosen as in \autoref{thm:req_param}, we define the following functions, which are similar to the construction {\tt FromInj}, except that the key generation requires an error sampled from a smaller set:\\
\begin{minipage}[t]{.45\textwidth}
\procedure [linenumbering]{{\tt REG2.Gen}$_\mathcal{P}(1^n) $}{
A, R \gets {\tt LWE.Gen}_{\mathcal{G}}(1^n)\\
s_0 \gets \mathbb{Z}_q^{n,1} \\
e_0 \gets \mathcal{D}^{m,1}_{\alpha' q}\\
b_0 := {\tt LWE.Eval}(A, (s_0, e_0))\\
k := (A,b_0)\\
t_k := (R,(s_0,e_0))\\
\pcreturn (k, t_k)
}
\end{minipage}
\begin{minipage}[t]{.45\textwidth}
\procedure [linenumbering]{{\tt REG2.Eval}$_\mathcal{P}((A,b_0), (s,e,c)) $}{
\pccomment{$s$ is a random element in $\mathbb{Z}_q^{n,1}$, $c \in \{0, 1\}$}\\
\pccomment{$e$ is sampled uniformly and such that}\\
\pccomment{\t each component is smaller than $\mu$}\\
\pcreturn {\tt LWE.Eval}(A, (s, e)) + c b_0
} \\
\procedure [linenumbering]{{\tt REG2.Inv}$_\mathcal{P}((A, R,(s_0,e_0)), b) $}{
(s_1,e_1) := {\tt LWE.Inv}(R, b)\\
\pcif ||e_1 - e_0||_\infty > \mu \pcthen \ \pcreturn \bot \\
\pcreturn ((s_1,e_1,0), (s_1-s_0,e_1-e_0,1))
} \\
\end{minipage}
\end{definition}
Note that the pairs $(s,e)$ and $(s_0,e_0)$ correspond to $x$ and $x_0$ of the {\tt FromInj} construction of \autoref{sec:general_two_regular}. The idea behind this construction is that the noise of the trapdoor is sampled from a set which is small compared to the noise of the input. That way, when the trapdoor noise is added to an input, the total noise is, with good probability, still small enough to stay in the set of admissible input noise, mimicking the homomorphic property needed in \autoref{Thm:injective_to_desired}. Note that the parameters need to be carefully chosen, and a trade-off between probability of success and security exists.
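The effect of the relative noise sizes on $\delta$ can be sketched with a quick Monte-Carlo estimate (illustration only: the Gaussian trapdoor noise is replaced by a bounded uniform one, and the dimensions are toy values): an image has a second preimage only when the shifted error still has all entries bounded by $\mu$.

```python
# Monte-Carlo sketch of the two-preimage probability delta:
# the shifted error e + e0 must keep every entry bounded by mu.
import random

m, mu = 64, 1000
mu_small = mu // m                  # trapdoor noise ~m times smaller (mu' = O(mu/m))

def has_second_preimage():
    e = [random.randrange(-mu, mu + 1) for _ in range(m)]               # input error
    e0 = [random.randrange(-mu_small, mu_small + 1) for _ in range(m)]  # trapdoor error
    return all(abs(u + v) <= mu for u, v in zip(e, e0))

trials = 2000
delta_hat = sum(has_second_preimage() for _ in range(trials)) / trials
print(delta_hat)                    # a constant bounded away from 0 for these sizes
```

Shrinking $\mu'$ pushes $\delta$ towards 1 but tightens the constraints elsewhere, which is the trade-off between probability of success and security mentioned above.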
\begin{lemma}[Requirements on the parameters]\label{thm:req_param}
For all $n,q, \mu \in \mathbb{Z}, \mu' \in \mathbb{R}$, let us define:
\begin{itemize}
\item $k := \ceil{\log(q)}$
\item $\bar{m} = 2n$
\item $\omega = nk$
\item $m := \bar{m} + \omega = 2n + nk$
\item $\alpha' = \frac{\mu'}{\sqrt{m}q} $
\item $\alpha = m \alpha' $
\item $C$ the constant in Lemma 2.9 of \cite{MP2012} which is around $\frac{1}{\sqrt{2\pi}}$
\item $B = 2$ if $q$ is a power of 2, and $B = \sqrt{5}$ otherwise.
\end{itemize}
Now, if for all security parameters $n$ (dimension of the lattice), there exist $q$ (the modulus of \textsc{LWE}{}) and $\mu$ (the maximum amplitude of the components of the errors) such that:
\begin{enumerate}
\item $m$ is such that $n = o(m)$ (required for the injectivity of the function (see e.g. \cite{Lecture13}))
\item $0 < \alpha < 1$
\item $\mu' = O(\mu / m)$ (required to have non-negligible probability of having two preimages)
\item $\alpha' q \geq 2 \sqrt{n}$ (required for the \textsc{LWE}{} to \textsc{SIVP}{} reduction)
\item $\frac{n}{\alpha'}$ is \poly[n] (representing, up to a constant factor, the approximation factor $\gamma$ in the $\textsc{SIVP}{}_\gamma$ problem)
\item \[\sqrt{m} \mu < \underbrace{\frac{q}{2 B \sqrt{\left(C \cdot (\alpha \cdot q) \cdot (\sqrt{2n} + \sqrt{kn} + \sqrt{n})\right)^2+1}}}_{r_{max}} - \mu'\sqrt{m} \]
(required for the correctness of the inversion algorithm - $r_{max}$ represents the maximum length of an error vector that one can correct using the \cite{MP2012} function\footnote{We chose to use the computational definition of \cite{MP2012}, but this theorem can be easily extended to other definitions of this same paper, or even to other constructions of trapdoor short bases.}, and the last term is needed in the proof of collision resistance to ensure injectivity even when we add the secret trapdoor noise, as illustrated in \autoref{fig:picture_balls_squares})
\end{enumerate}
then the family of functions of \autoref{def:REG2_fct} is $\delta$-2 regular (with $\delta$ at least as big as a fixed constant), trapdoor, one-way and collision resistant (all these properties holding even against a quantum attacker), assuming that there is no quantum algorithm that can efficiently solve $\textsc{SIVP}{}_\gamma$ for $\gamma = \poly[n]$.
\end{lemma}
\begin{proof}
The proof follows by showing that the function with these constraints on the parameters is (i) $\delta$-2 regular, (ii) collision resistant, (iii) one-way and (iv) trapdoor. In \autoref{app:implementation} we give and prove one lemma for each of those properties. For an intuition of the choice of parameters see also \autoref{fig:picture_balls_squares}.
\end{proof}
\begin{figure}[h]
\begin{minipage}[c]{0.5\textwidth}
\includegraphics[width=\textwidth]{picture_balls_squares.pdf}
\end{minipage}\hfill
\begin{minipage}[c]{0.4\textwidth}
\caption{The red circle represents the domain of the error term of the trapdoor information, which is sampled from a Gaussian distribution. The orange square is an approximation of this domain; its side must be much smaller (by a factor of at least $m$, the dimension of the error) than the side of the blue square, from which the actual error terms are sampled. The green circle represents the domain on which the trapdoor function is known to be invertible. The dashed part is needed to ensure that if there is a collision $(x_1, x_2)$, then $x_1 = x_2 \pm x_0$. \label{fig:picture_balls_squares}}
\end{minipage}
\end{figure}
\newpage%
\subsection{Parameter Choices}
\begin{lemma}[Existence of parameters]\label{thm:exist_param}
The following set of parameters fulfills \autoref{thm:req_param}.
\begin{align*}
n &= \lambda\\
k &= 5\ceil{\log(n)} + 21\\
q &= 2^k\\
\bar{m} &= 2n\\
\omega &= nk\\
m &= \bar{m} + \omega\\
\mu &= \ceil{2mn \sqrt{2+k}}\\
\mu' &= \mu/m\\
B &= 2\\
\end{align*}
and $\alpha, \alpha', C$ are defined like in \autoref{thm:req_param}.
\end{lemma}
The proof is given in \autoref{app:implementation2}. As a final remark, we stress that other choices of the parameters are possible (considering the trade-off between security and probability of success) and we have not attempted to find an optimal set.
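As a quick numerical sanity check, the inequality of condition 6 in \autoref{thm:req_param} can be evaluated directly for the parameter choices above (a sketch; only constraint 6 is checked, with $C \approx 1/\sqrt{2\pi}$ and $B=2$ since $q$ is a power of 2):

```python
# Evaluate constraint 6 of the parameter lemma for concrete security parameters n.
import math

def params(n):
    k = 5 * math.ceil(math.log2(n)) + 21
    q = 2 ** k
    m = 2 * n + n * k                      # m_bar + omega
    mu = math.ceil(2 * m * n * math.sqrt(2 + k))
    mu_p = mu / m
    alpha_p = mu_p / (math.sqrt(m) * q)
    alpha = m * alpha_p
    return k, q, m, mu, mu_p, alpha

def constraint6_holds(n):
    k, q, m, mu, mu_p, alpha = params(n)
    B, C = 2, 1 / math.sqrt(2 * math.pi)
    r_max = q / (2 * B * math.sqrt(
        (C * alpha * q * (math.sqrt(2 * n) + math.sqrt(k * n) + math.sqrt(n))) ** 2 + 1))
    return math.sqrt(m) * mu < r_max - mu_p * math.sqrt(m)

print(all(constraint6_holds(n) for n in (8, 32, 128, 512)))  # -> True
```

The inequality holds with a comfortable margin at all the tested sizes, consistent with the claim that the parameter set is far from optimised.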
\section{Discussion}\label{Sec:Malicious}
In this work we deal with Quantum-Honest-But-Curious adversaries. Naturally, the final aim should be to provide security against a (fully) malicious adversary/Server. There are two (linked) issues to consider when dealing with malicious adversaries. The \textbf{first issue} is whether the Server (by deviating arbitrarily) can obtain extra information about the secret classical description (of the state supposedly prepared). The \textbf{second issue} is whether the actual state at the end of the protocol is (essentially) the one that the Client believes, i.e. whether the functionality provides verification. We make remarks separately on these issues, and then conclude with an approach that could lead to a solution of both.
\noindent\textbf{Issue 1 (privacy):} The most naive way for the Server to deviate in order to obtain information is to return $y,b$ other than those obtained from an honest run of the protocol. Since $y,b$ \emph{determine} (along with other parameters) the value of the secret $\theta$, a deviation there could break the security. For example, instead of the (truly) random $y$ obtained in an honest run, the Server can choose a $y$ for which he has information on the preimages under the given $k$, or can choose $b$ adaptively depending on the values of $\alpha$.
However, the function $f_k$ is \emph{collision}-resistant, which means that even if the adversary chooses $y$ himself, he cannot find a $y$ for which he knows both preimages, except with negligible probability. Moreover, if the Server chooses $y$, the protocol was not followed and thus the final output state is no longer related to the value $\theta$ as expected. We conjecture that the hard-core function proof (\autoref{Thm:hardcore}) remains valid in that case with minor modifications. The more significant difficulty, however, comes from ``mixed'' strategies, in which the adversary partly follows the protocol (so that the output qubit is correlated with the classical secret description) and partly deviates. In those cases it is hard to quantify what information the Server has, and whether it is strictly less than that in an ideal protocol (where the state $\ket{+_\theta}$ gives some legitimate information).
\noindent\textbf{Issue 2 (verification):} The first thing to note is that the adversary has the output state in his lab, and therefore (trivially) he can always apply a final-step deviation corrupting the legitimate output. Thus when we speak of verification, we mean a correct state up to a (reversible) deviation on the Server's side (as the operations $\mathcal{T}_i$ in the definition of specious adversaries). The second thing to stress is that \autoref{protocol:concrete_c_q_factory} cannot be verifiable against a malicious Server unless some extra mechanism is added. By deviating from the instructions, the Server can corrupt the output in a way that depends on the secret classical description $\theta$, without actually learning any information about it. In particular, by measuring all qubits of the first register at angle $3\alpha$, he can generate the state $\ket{+_{3\theta}}$ as output. This deviation does not help the Server learn \emph{any} information about $\theta$ (the protocol remains ``private''), but it affects the output state in a ``non-reversible'' way and thus compromises verifiability.
\noindent \textbf{A way forward:} The ultimate goal would be to extend QFactory to a quantum universally composable protocol \cite{unruh2010uni}, in order to be able to compose it with any other protocol, or at least to prove security against a malicious adversary. In classical protocols (and recently in quantum ones too \cite{KMW2017}), the way to boost security from honest-but-curious to malicious is to introduce a ``compiler'' (e.g. using the construction in \cite{GMW87} or a cut-and-choose technique) that essentially enforces honest-but-curious behaviour on malicious adversaries (or aborts). In our case, the protocol is simple enough, having single qubits as outputs. One method could be to prepare a large string of qubits, have the Client choose a random subset of those, and instruct the Server to measure them. By observing the correct statistics on the ``test'' qubits, one can infer the correct preparation. This is closely related to parameter estimation in QKD, and to self-testing \cite{MYS2012}. The exact details are involved, as the analogous cases of compilers, parameter estimation and self-testing suggest, and will be explored in a future publication.
\section*{Acknowledgements}
A.C. and P.W. are very grateful to Thomas Zacharias, Aggelos Kiayias and especially Yiannis Tselekounis for many useful discussions about the security proofs. L.C. also thanks Atul Mantri and Dominique Unruh for useful preliminary conversations, with a special mention to C\'{e}line Chevalier whose discussions were a precious source of inspiration.
A.C. would also like to show his appreciation to his grandmother, Petra Ilie for all the support and help she has given to him his entire life.
The work was supported by EPSRC grants EP/N003829/1 and EP/M013243/1.
\newpage%
\subsection{Terminology}
To understand deep learning, it is helpful to first understand the related concepts of artificial intelligence and machine learning.
Artificial intelligence is a set of computer algorithms that are able to perform complicated tasks or tasks that require intelligence when conducted by humans.
Machine learning is a subset of artificial intelligence algorithms which, to perform these complicated tasks, are able to learn from provided data and do not require pre-defined rules of reasoning.
The field of machine learning is very diverse and has already had notable applications in medical imaging~\cite{Erickson2017}.
Deep learning is a sub-discipline of machine learning that relies on networks of simple interconnected units.
In deep learning models, these units are connected to form multiple layers that are capable of generating increasingly high level representations of the provided input (e.g., images).
Below, in order to explain the architecture of deep learning models, we introduce the artificial neural network in general and one specific type: the convolutional neural network.
Then, we detail the "learning" process of these networks, which is the process of incorporating the patterns extracted from data into deep neural networks.
\subsection{Artificial Neural Networks}
Artificial neural networks (ANNs) are machine learning models whose basic concepts date back as far as the 1940s; they saw significant development in the 1970s and 1980s and notable popularity in the 1990s and 2000s, followed by a period of being overshadowed by other machine learning algorithms.
ANNs consist of a multitude of interconnected processing units, called neurons, usually organized in layers.
A traditional ANN typically used in the practice of machine learning contains 2 to 3 layers of neurons.
Each neuron performs a very simple operation.
While many neuron models have been proposed, a typical neuron simply multiplies each input by a certain weight, adds up the products over all the inputs, and applies a simple nondecreasing function at the end.
Even though each neuron performs a very rudimentary calculation, the interconnected nature of the network allows for the performance of very sophisticated calculations and implementation of very complicated functions.
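The computation of a single such neuron can be written in a few lines; the sketch below uses the logistic sigmoid as the nondecreasing function (this particular choice, and the function name, are ours).

```python
import math

def neuron(inputs, weights, bias=0.0):
    """A single artificial neuron: multiply each input by its weight,
    sum the products, then apply a simple nondecreasing function
    (here the logistic sigmoid)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```

With all weights zero the weighted sum is 0, so the sigmoid returns 0.5.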
\subsection{Convolutional Neural Networks}
Deep neural networks are a special type of ANN.
The most common type of deep neural network is the deep convolutional neural network (CNN).
A deep convolutional neural network, while inheriting the properties of a generic ANN, also has its own specific features.
First, it is deep.
A typical number of layers is 10\nobreakdash-30 but in extreme cases it could exceed 1\,000.
Second, the neurons are connected such that multiple neurons share weights.
This effectively allows the network to perform convolutions (or template matching) of the input image with the filters (defined by the weights) within the CNN.
Another special feature of CNNs is that between some layers they perform pooling, which makes the network invariant to small shifts of the input images.
Finally, CNNs typically use a different activation function of the neurons as compared to traditional ANNs.
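The two operations specific to CNNs described above, convolution (template matching with a filter) and pooling, can be sketched directly. The NumPy version below is deliberately simplified: it ignores padding, strides, and multiple channels.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution (template matching): slide the filter
    over the image and record the dot product at each position."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keep the largest response in each
    size-by-size block, giving invariance to small shifts."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))
```

For example, the kernel $[-1, 1]$ responds only at vertical edges, and pooling then keeps that response even if the edge shifts by a pixel.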
\begin{figure}[!ht]
\centering
\includegraphics[width=0.7\linewidth]{images/CNN.png}
\caption{A diagram illustrating a typical architecture of a convolutional neural network.}
\label{fig:cnn}
\end{figure}
Figure~\ref{fig:cnn} shows an example of a small architecture for a typical CNN.
One can see that the first layers are the convolutional ones which serve the role of generating useful features for classification.
Those layers can be thought of as implementing image filters, ranging from simple filters that match edges to those that eventually match much more complicated shapes such as eyes, or tumors.
Further from the network input are the so-called fully connected layers (similar to traditional ANNs), which utilize the features extracted by the convolutional layers to generate a decision (e.g., assign a label).
A variety of deep learning architectures have been proposed, often driven by characteristics of the task at hand (e.g., fully convolutional neural networks for image segmentation).
Some of these are described in more detail in the section of this paper that reviews the current state of the art.
\subsection{The learning process in convolutional neural networks}
Above, we described general characteristics of traditional neural networks and deep learning’s flagship: the convolutional neural network.
Next, we will explore how to make those networks perform useful tasks.
This is accomplished in the process referred to as learning or training.
The learning process of a convolutional neural network simply consists of changing the weights of the individual neurons in response to the provided data.
In the most popular type of learning process, called supervised learning, a training example contains an object of interest (e.g., an image of a tumor) and a label (e.g., the tumor’s pathology: benign or malignant).
In our example, the image is presented to the network’s input, and the calculation is carried out within the network to produce a prediction based on the current weights of the network.
Then, the network’s prediction is compared to the actual label of the object and an error is calculated.
This error is then propagated through the network to change the values of the network's weights such that the next time the network analyzes this example, the error decreases.
In practice, the adaptation of the weights is performed after a group of examples (a batch) is presented to the network.
This process is called error backpropagation or stochastic gradient descent.
Various modifications of stochastic gradient descent algorithm have been developed~\cite{ruder2016overview}.
In principle, this iterative process consists of calculations of error between the output of the model and the desired output and adjusting the weights in the direction where the error decreases.
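This iterative loop can be sketched for the simplest possible model, a single weight $w$ with prediction $\hat{y} = wx$ and squared-error loss; the function name and learning rate below are illustrative.

```python
def sgd_train(data, lr=0.1, epochs=100, w=0.0):
    """Minimal stochastic gradient descent: for each example, compare
    the prediction to the label and move the weight in the direction
    that decreases the error."""
    for _ in range(epochs):
        for x, y in data:          # in practice, examples come in batches
            error = w * x - y      # prediction minus label
            w -= lr * error * x    # gradient step on 0.5 * error**2
    return w
```

On data generated by $y = 2x$, the weight converges to 2.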
\begin{figure}[!ht]
\centering
\includegraphics[width=0.7\linewidth]{images/training.png}
\caption{An illustration of different ways of training in deep neural networks.}
\label{fig:training}
\end{figure}
The most straightforward way of training is to start with a random set of weights and train using available data specific to the problem being solved (training from scratch).
However, given the large number of parameters (weights) in a network, often above 10 million, and a limited amount of training data (common in medical imaging), a network may overfit to the available data, resulting in poor performance on test data.
Two training methods have been developed to address this issue: transfer learning~\cite{yosinski2014transferable} and off-the-shelf features (a.k.a. deep features)~\cite{sharif2014cnn}.
A diagram comparing training from scratch with transfer learning and off-the-shelf deep features is shown in Figure~\ref{fig:training}.
In the transfer learning approach, the network is first trained using a different dataset, for example the ImageNet collection.
Then, the network is "fine-tuned" through additional training with data specific to the problem to be addressed.
The idea behind this approach is that solving different visual tasks shares a certain level of processing such as recognition of edges or simple shapes.
This approach has been shown successful in, for example, prediction of survival time from brain MRI in patients with glioblastoma tumor~\cite{ahmed2017fine} or in skin lesion classification~\cite{Esteva}.
Another approach that addresses the issue of limited training data is the deep "off-the-shelf" features approach which uses convolutional neural networks which have been trained on a different dataset to extract features from the images.
This is done by extracting outputs of layers prior to the network's final layer.
Those layers typically have hundreds or thousands of outputs.
Then, these outputs are used as inputs to "traditional" classifiers such as linear discriminant analysis, support vector machines, or decision trees.
This is similar to transfer learning (and is sometimes considered a part of transfer learning) with the difference being that the last layers of a CNN are replaced by a traditional classifier and the early layers are not additionally trained.
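The off-the-shelf pipeline can be sketched schematically. In the toy version below, a fixed (untrained) random projection with a ReLU stands in for the frozen early layers of a pre-trained CNN, and an ordinary least-squares linear model stands in for the traditional classifier; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen early layers of a pre-trained CNN: a fixed
# projection followed by a ReLU. In practice these outputs would come
# from a layer just before the final layer of a network pre-trained
# on a large dataset such as ImageNet.
W_frozen = rng.standard_normal((256, 64))

def deep_features(x):
    # Outputs of the (frozen) hidden layer, used as classifier inputs.
    return np.maximum(x @ W_frozen, 0.0)

# "Traditional" classifier on top of the frozen features: a simple
# least-squares linear model stands in for LDA / SVM / decision trees.
X = rng.standard_normal((100, 256))          # toy stand-in for images
y = (X[:, 0] > 0).astype(float)              # toy binary labels
F = deep_features(X)                         # feature extraction
w, *_ = np.linalg.lstsq(F, y, rcond=None)    # fit the classifier
predictions = (F @ w > 0.5).astype(float)
```

Only the classifier weights `w` are fitted; `W_frozen` is never updated, which is precisely what distinguishes this approach from fine-tuning.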
\subsection{Deep learning vs "traditional" machine learning}
Increasingly often we hear a distinction between deep learning and "traditional" machine learning (see Figure~\ref{fig:comparison}).
The difference is very important, particularly in the context of medical imaging.
In traditional machine learning, the typical first step is feature extraction.
This means that to classify an object, one must decide which characteristics of an object will be important and implement algorithms that are able to capture these characteristics.
A number of sophisticated algorithms in the field of computer vision have been proposed for this purpose, extracting a variety of size, shape, texture, and other features.
This process is to a large extent arbitrary since the machine learning researcher or practitioner often must guess which features will be of use for a particular task and runs the risk of including useless and redundant features and, more importantly, not including truly useful features.
In deep learning, the process of feature extraction and decision making are merged and trainable, and therefore no choices need to be made regarding which features should be extracted; this is decided by the network in the training process.
However, the cost of allowing the neural network to select its own features is a requirement for much larger training data sets.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.66\linewidth]{images/comparison.png}
\caption{An illustration of difference between “traditional” machine learning and deep learning.}
\label{fig:comparison}
\end{figure}
\end{document}
\section{Introduction}
\label{sec:introduction}
\subfile{introduction}
\section{The practice of radiology}
\label{sec:practice}
\subfile{practice}
\section{An introduction to deep learning}
\label{sec:deeplearning}
\subfile{deeplearning}
\section{Deep learning in radiology: state of the art}
\label{sec:sota}
\subfile{sota}
\section{Future of deep learning in radiology}
\label{sec:future}
\subfile{future}
\section{Conclusion}
\label{sec:Conclusion}
\subfile{conclusion}
\vspace{2em}
\noindent
\textbf{Acknowledgments:} The authors would like to acknowledge funding from the National Institutes of Biomedical Imaging and Bioengineering grant 5 R01 EB021360.
The authors would like to thank Gemini Janas for reviewing and editing this article.
\subsection{Disease Detection}
One of the most challenging tasks in the interpretation of imaging is the rapid differentiation of abnormalities from normal background anatomy.
For example, in the interpretation of mammography, each radiograph contains thousands of individual focal densities, regional densities, and geometric points and lines that must be interpreted to detect a small number of suspicious or abnormal findings.
In most cases, the entire mammogram should be interpreted as normal or negative, adding further complexity to the interpretive task.
In order to be useful, a computer algorithm does not have to detect all objects of interest (e.g., abnormalities) or be perfectly specific (i.e., not mark any normal locations).
For example, in screening mammography, approximately 80\% of screening mammograms should be read as negative according to the ACR BI-RADS guideline, and of the 20\% of examinations that trigger additional evaluation, many will ultimately be categorized as negative or benign~\cite{Ghate2005}.
An algorithm that could successfully categorize even half of screening mammograms as definitely negative would dramatically reduce the effort required to interpret a large batch of examinations.
\subsection{Disease Diagnosis and Management}
Once an abnormality has been detected, the often-complex task of determining a diagnosis and the disease management implications is undertaken.
For focal masses generically, a large number of features must be integrated in order to decide how to appropriately manage the finding.
These features can include size, location, attenuation or signal intensity, borders, heterogeneity, change over time, and others.
In some cases, simple criteria have been established and validated for the management of focal findings.
For example, most focal lesions in the kidney can be characterized as either simple or minimally complex cysts, which are almost uniformly benign.
On the other hand, most lesions in the kidney that are solid are considered to have high malignant potential.
Finally, a minority of focal kidney lesions is considered indeterminate and can be managed accordingly.
While for some types of abnormalities making the diagnostic and disease management decision follows straightforward guidelines, for other types of abnormalities, management algorithms are much more complex.
In the BI-RADS guideline for assessing focal lesions in the breast, a mass is categorized according to its shape (oval, round, or irregular), margin (circumscribed, obscured, microlobulated, indistinct, or spiculated), and its density (higher, equal to, or lower density than the glandular tissue, or fat-containing)~\cite{Ghate2005}.
Based on the constellation of features, the radiologist must then decide whether a mass is likely benign or requires follow-up or biopsy.
In the LI-RADS criteria for assessing focal liver lesions in patients at risk for developing hepatocellular carcinoma, five major features, and up to 21 ancillary features, are assessed to risk-stratify lesions and determine their management~\cite{Elsayes2017}.
Deep learning algorithms have the potential to assess a large number of features, even those previously not considered by radiologists, and arrive at a repeatable conclusion in a fraction of the time required for a human interpreter.
Perhaps most promisingly, these algorithms could be used to categorize large amounts of existing imaging data and correlate features with downstream health outcomes, a process that is currently extremely laborious and time-consuming when human interpretation is required.
\subsection{Workflow}
While detection, diagnosis, and characterization of disease receive the primary attention among algorithm developers, another important area where artificial intelligence could contribute is in facilitating the workflow of the radiologists while interpreting images.
With the widespread conversion from printed films to centralized Picture Archiving and Communication Systems (PACS), as well as the availability of multi-planar, multi-contrast, and multi-phase imaging, radiologists have seen exponential growth in the size and complexity of image data to be analyzed.
Additionally, interpretations must often be rendered in the context of a multitude of prior examinations.
The simple task of finding and presenting these data is complex, and artificial intelligence systems may be well-suited for this role.
An example of a highly complex workflow is that for many cancer patients.
Such patients are not uncommonly afflicted with more than one primary tumor, metastatic disease to numerous sites, and may have undergone a variety of biopsies, locoregional therapies, and systemic therapies with varying results.
In the simplest scenario, interpretation of a follow-up imaging examination requires colocalization of all relevant sites of disease between the current and prior examinations.
Measurements of size are performed, and in some cases functional features, such as tumor perfusion or diffusion restriction, are assessed either subjectively or objectively.
Most radiology practices utilize imaging equipment of different types, generations, and often different vendors, thus simply identifying the appropriate image sets in prior examinations can be very challenging.
After the appropriate images have been identified, the radiologist must colocalize disease sites and attempt to obtain precise repeated measurements in order to ensure that the values obtained from the current and prior examinations can be compared.
Each of the above tasks is time-consuming and does not necessarily require the full skill of a radiologist.
However, standard PACS systems are not able to reliably present all of the above data for a variety of reasons, including the variability in labeling the types and components of imaging examinations, the variability in patient positioning and anatomy between examinations, the variability in modalities used to image the same portion of the anatomy, as well as other factors.
In principle, an artificial intelligence algorithm could assess a patient’s prior imaging, bring forward examinations that include the relevant body part(s), detect the image modality and contrast type, and determine the location of the area of interest within the relevant anatomy to reduce the radiologist’s effort in performing these relatively mundane tasks.
\subsection{Image interpretation tasks that radiologists do not perform but deep learning could}
In addition to performing tasks that are a part of the current radiological practice, computer algorithms could perform medical image interpretation tasks that radiologists do not perform on a regular basis.
The research toward this goal has been underway for some time, mostly using traditional machine learning and image processing algorithms.
One example is radiogenomics~\cite{mazurowski2015radiogenomics}, which aims to find relationships between imaging features of tumors and their genomic characteristics.
Examples can be found in breast cancer~\cite{Mazurowski2014}, glioblastoma~\cite{Gutman2013}, low grade glioma~\cite{mazurowski2017radiogenomics}, and kidney cancer~\cite{Karlo2014}.
Radiogenomics is not a part of the typical clinical practice of a radiologist.
Another example is prediction of outcomes of cancer patients with applications in glioblastoma~\cite{Gutman2013, Mazurowski2013}, lower grade glioma,~\cite{mazurowski2017radiogenomics}, and breast cancer~\cite{Mazurowski2015Recurrence}.
While imaging features have a potential to be informative of patient outcomes, very few are currently used to guide oncological treatment.
Deep learning could facilitate the process of incorporating more of the information available from imaging into the oncology practice.
\end{document}
\subsection{Classification}
In a classification task, an object is assigned to one of the predefined classes.
A number of different classification tasks can be found in the domain of radiology such as: classification of an image or an examination to determine the presence or absence of an abnormality; classification of abnormalities as benign or malignant; classification of cancerous lesions according to their histopathological and genomic features; prognostication; and classification for the purpose of organizing radiological data.
Deep learning is becoming the methodology of choice for classifying radiological data.
The majority of the available deep learning classifiers use convolutional neural networks with a varying number of convolutional layers followed by fully connected layers.
The availability of radiological data is limited as compared to the natural image datasets which drove the development of deep learning techniques in the last 5 years.
Therefore, many applications of deep learning in medical image classification have resorted to techniques meant to alleviate this issue: off-the-shelf features and transfer learning~\cite{tajbakhsh2016convolutional} discussed in the previous section of this article.
Off-the-shelf features have performed well in a variety of domains~\cite{sharif2014cnn}, and this technique has been successfully applied to medical imaging~\cite{Antropova2017, Paul2016}.
In~\cite{Antropova2017}, the authors combined the deep off-the shelf features extracted from a pre-trained VGG19 network with hand-crafted features for determining malignancy of breast lesions in mammography, ultrasound, and MRI.
In~\cite{Paul2016}, long-term and short term survival was predicted for patients with lung carcinoma.
The transfer learning strategy, which involves fine tuning of a network pre-trained on a different dataset, has been applied to a variety of tasks such as classification of prostate MR images to distinguish patients with prostate cancer from patients with benign prostate conditions~\cite{Wang2017a}, identification of CT images with pulmonary tuberculosis~\cite{Lakhani2017}, and classification of radiographs to identify hip osteoarthritis~\cite{Xue2017}.
Most of the studies which apply the transfer learning strategy replace and retrain the deepest layer of a network, whereas shallow layers are fixed after the initial training.
A variation of the transfer learning strategy combines fine-tuning and deep features approach.
It fine-tunes a pre-trained network on a new dataset to obtain more task-specific deep feature representations.
An example of this is the study~\cite{Chi2017}, which performed ultrasound imaging-based thyroid nodule classification using features extracted from a fine-tuned pre-trained GoogLeNet.
An ensemble of fine-tuned CNN classifiers was shown to predict radiological image modality in the study~\cite{Kumar2017}.
A comparison of approaches using deep features and transfer learning with fine tuning was shown in the study~\cite{Zhu2017a} identifying radiogenomic relationships in breast cancer MR images and in~\cite{Zhu2017} for predicting the upstaging of ductal carcinoma in situ to invasive breast cancer from breast cancer MR images.
In both of these problems, deep features performed better than transfer learning with the fine tuning approach.
However, both of these studies faced the issue of a small size of the training set.
When sufficient data are available, an entire deep neural network can be trained from a random initialization (training from scratch).
The size of the network to be trained depends on task and dataset characteristics.
However, the commonly used architecture in medical imaging is based on AlexNet~\cite{krizhevsky2012imagenet} and VGG~\cite{simonyan2014very} with modifications that have fewer layers and weights.
Examples of training from scratch can also be found in various studies such as: assessing for the presence of Alzheimer’s disease based on brain MRI using deep learning~\cite{li2017deep, Suk2017}, glioma grading in MRI~\cite{Khawaldeh}, and disease staging and prognosis in chest CT of smokers~\cite{Gonzalez2017}.
Recent advances in the design of CNN architectures have made networks easier to train and more efficient.
They have more layers and perform better while having fewer trainable parameters~\cite{canziani2016analysis} which reduces the likelihood of overtraining.
The most notable examples include Residual Networks (ResNets)~\cite{he2016deep} and the Inception architecture~\cite{szegedy2017inception, szegedy2016rethinking}.
A shift to these more powerful networks has also taken place in applications of deep learning to radiology both for transfer learning and training from scratch.
Three different ResNets were used to predict methylation of the O6-methylguanine methyltransferase gene status from pre-surgical brain tumor MRI~\cite{Korfiatis2017}.
In~\cite{Kim2017}, the InceptionV3 network was fine-tuned and served as a feature extractor instead of previously used GoogLeNet.
In another work using chest X-ray images~\cite{rajpurkar2017chexnet}, the authors fine-tuned a DenseNet with 121 layers for the classification of miscellaneous pathologies, achieving radiologist-level classification performance for identifying pneumonia.
In another approach, auto-encoder (AE)~\cite{hinton2006reducing} or stacked auto-encoder (SAE)~\cite{bengio2007greedy, poultney2007efficient} networks have been trained from scratch, layer by layer, in an unsupervised way.
A stacked denoising auto-encoder with backpropagation was used in~\cite{Ortiz2017} to determine the presence of Alzheimer’s disease.
AEs and SAEs can also be used to extract feature representations (similarly to the deep features approach) from hidden layers for further classification.
Such feature representation has been used in the classification of lung nodules into benign and malignant classes in CT~\cite{Kumar2015}, and in the identification of multiple sclerosis lesions in MRI~\cite{Yoo2018}.
Apart from the classification of radiological images, analysis of radiological text reports plays a significant role~\cite{Wang2012}.
The most prominent approach in this type of classification is deep learning-based natural language processing (NLP)~\cite{Kim2014}, which is based on the seminal work for obtaining vector representation of phrases using an unsupervised neural model~\cite{Mikolov2013}.
An example of application of this architecture can be found in~\cite{Chen2017a} where the authors classified CT radiology reports as representing presence or absence of pulmonary embolism (PE), as well as type (chronic or acute) and location (central or subsegmental) of PE when present.
They showed an improvement as compared to a non-deep learning algorithm.
The same architecture was used in~\cite{Shin2017} for classifying head CT reports of ICU patients with altered mental status as having different degrees of severity according to each of five criteria: severity of study, acute intracranial bleed, acute mass effect, acute stroke, acute hydrocephalus.
A third application of the same architecture can be found in~\cite{Karimi2017}, where radiology reports were classified according to the International Classification of Diseases, 9th revision (ICD-9), using a publicly available dataset.
\subsection{Segmentation}
In an image segmentation task, an image is divided into different regions in order to separate distinct parts or objects.
In radiology, the common applications are segmentation of organs, substructures, or lesions, often as a preprocessing step for feature extraction and classification~\cite{li2017deep, akkus2017predicting}.
Below, we discuss different types of deep learning approaches used in segmentation tasks in a variety of radiological images.
The most straightforward and still widely used method for image segmentation is classification of individual pixels based on small image patches (both 2-dimensional and 3-dimensional) extracted around the classified pixel.
This approach has found usage in different types of segmentation tasks in MRI, for example brain tumor segmentation in~\cite{Havaei2015, hussain2017brain, milletari2017hough}, white matter segmentation in multiple sclerosis patients~\cite{valverde2017improving}, segmentation of 25 different structures in brain~\cite{wachinger2017deepnat}, and for rectal cancer segmentation in pelvis MRI~\cite{trebeschi2017deep}.
It allows for using the same network architectures and solutions that are known to work well for classification, however, there are some shortcomings of this method.
The primary issue is that it is computationally inefficient, since it processes overlapping parts of images multiple times.
Another drawback is that each pixel is segmented based on a limited-size context window and ignores the wider context.
In some cases, a piece of global information, e.g. pixel location or relative position to other image parts, may be needed to correctly assign its label.
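The patch-based approach described above can be sketched as follows. This is a minimal illustration, not any specific published pipeline: `classify_patch` stands in for a trained per-patch CNN, and the patch size and zero padding are assumptions for the sketch.

```python
def extract_patch(image, row, col, size):
    """Extract a size x size patch centered on (row, col), zero-padded at borders."""
    half = size // 2
    patch = []
    for r in range(row - half, row + half + 1):
        patch_row = []
        for c in range(col - half, col + half + 1):
            if 0 <= r < len(image) and 0 <= c < len(image[0]):
                patch_row.append(image[r][c])
            else:
                patch_row.append(0)  # zero padding outside the image
        patch.append(patch_row)
    return patch

def segment(image, classify_patch, patch_size=3):
    """Label every pixel independently from its local context window."""
    return [[classify_patch(extract_patch(image, r, c, patch_size))
             for c in range(len(image[0]))]
            for r in range(len(image))]
```

Note how neighbouring pixels re-process largely overlapping patches, which is exactly the computational inefficiency discussed above, and how each label is decided from a fixed-size window with no global context.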
One approach that addresses the shortcomings of the pixel-based segmentation is a fully convolutional neural network (fCNN)~\cite{long2015fully}.
Networks of this type process the entire image (or large portions of it) at the same time and output a 2-dimensional map of labels (i.e., a segmentation map) instead of a label for a single pixel.
Example architectures that were successfully used in both natural images and radiology applications are encoder-decoder architectures such as U-Net~\cite{christ2017automatic, ronneberger2015u, salehi2017real} or Fully Convolutional DenseNet~\cite{jegou2017one, chenmri, li2017h}.
Various adjustments to these types of architectures have been developed that mainly focus on connections between the encoder and decoder parts of the networks, called skip connections. Applications of fCNNs in radiology include prostate gland segmentation in MRI~\cite{clark2017fully}, segmentation of multiple sclerosis lesions and gliomas in MRI~\cite{mckinley2016nabla}, and ultrasound-based nerve segmentation~\cite{zhang2017image}.
Moreover, loss functions have been explored that account for class imbalance (differences in the number of examples in each class), which is typical in medical datasets, e.g. weighted cross entropy was used in~\cite{mehta2017m} for brain structure segmentation in MRI or Dice coefficient-based loss for brain tumor segmentation in MRI~\cite{sudre2017generalised}.
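The two imbalance-aware losses mentioned above can be sketched on flat probability lists. This is a simplified illustration of the general idea, not the exact formulations used in the cited studies; the smoothing constant and the scalar `pos_weight` are assumptions of the sketch.

```python
import math

def soft_dice_loss(pred, target, eps=1e-6):
    """1 - Dice coefficient between predicted probabilities and a binary mask."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (total + eps)

def weighted_cross_entropy(pred, target, pos_weight):
    """Binary cross entropy with the rare (positive) class up-weighted."""
    loss = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, 1e-7), 1 - 1e-7)  # clamp for numerical stability
        loss += -(pos_weight * t * math.log(p) + (1 - t) * math.log(1 - p))
    return loss / len(pred)
```

Because the Dice term is a ratio of overlap to total mask size, it is insensitive to the large number of background pixels, which is why such losses help with the class imbalance typical of medical datasets.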
In order to segment 3-dimensional data, it is common to process the data as 2-dimensional slices and then combine the 2-dimensional segmentation maps into a 3-dimensional map, since 3D fCNNs have significantly more trainable parameters and as a result require significantly larger amounts of data.
Nevertheless, these obstacles can be overcome, and there are successful applications of 3D fCNNs in radiology, e.g. V-Net for prostate segmentation from MRI~\cite{milletari2016v} and 3D U-Net~\cite{cciccek20163d} for segmentation of the proximal femur in MRI~\cite{deniz2017segmentation} and tumor segmentation in multimodal brain MRI~\cite{shenmultimodal}.
Finally, a deep learning approach that has found some application in medical imaging segmentation is recurrent neural networks (RNNs).
In~\cite{yang2017fine}, the authors used a Boundary Completion RNN for prostate segmentation in ultrasound images.
Another notable application is in~\cite{poudel2016recurrent}, where the authors applied a recurrent fully convolutional neural network for left-ventricle segmentation in multi-slice cardiac MRI to leverage inter-slice spatial dependencies.
Similarly, \cite{cai2017improving} used Long Short-Term Memory (LSTM)~\cite{hochreiter1997long} type of RNN trained end-to-end together with fCNN to take advantage of 3D contextual information for pancreas segmentation in CT and MR images.
In addition, they proposed a novel loss function that directly optimizes a widely used segmentation metric, the Jaccard Index~\cite{jaccard1912distribution}.
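For binary masks, the Jaccard index used as an optimization target above is simply intersection over union; a minimal sketch (the empty-mask convention is an assumption):

```python
def jaccard_index(mask_a, mask_b):
    """Intersection over union of two binary masks given as flat 0/1 lists."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    union = sum(a | b for a, b in zip(mask_a, mask_b))
    return inter / union if union else 1.0  # two empty masks agree perfectly
```

The Jaccard index J and Dice coefficient D are monotonically related via $J = D / (2 - D)$, so optimizing one tends to improve the other.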
\subsection{Detection}
Detection is a task of localizing and pointing out (e.g., using a rectangular box) an object in an image.
In radiology, detection is often an important step in the diagnostic process which identifies an abnormality (such as a mass or a nodule), an organ, an anatomical structure, or a region of interest for further classification or segmentation~\cite{al2010improved, oliver2010review, rey2002automatic}.
Here, we discuss the common architectures used for various detection tasks in radiology along with example specific applications.
The most common approach to detection for 2-dimensional data is a 2-phase process that requires training of 2 models.
The first phase identifies all suspicious regions that may contain the object of interest.
The requirement for this phase is high sensitivity~\cite{roth2014new} and therefore it usually produces many false positives.
A typical deep learning approach for this phase is a regression network for bounding box coordinates based on architectures used for classification~\cite{erhan2014scalable, szegedy2014scalable}.
The second phase is simply classification of sub-images extracted in the previous step.
In some applications, only one of the two steps uses deep learning.
The classification step, when utilizing deep learning, is usually performed using transfer learning.
The models are often pre-trained using natural images, for example for thoraco-abdominal lymph node detection in~\cite{shin2016deep} and pulmonary embolism detection in CT pulmonary angiogram images~\cite{tajbakhsh2016convolutional}.
In other applications, the models are pre-trained using other medical imaging dataset to detect masses in digital breast tomosynthesis images~\cite{samala2016mass}.
The same network architectures can be used for the second phase as in a regular classification task (e.g., VGG~\cite{simonyan2014very}, GoogLeNet~\cite{szegedy2015going}, Inception~\cite{szegedy2017inception}, ResNet~\cite{he2016deep}) depending on the needs of a particular application.
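The two-phase process described above can be summarized in a short sketch. The functions `propose_regions` and `classify_crop` are hypothetical stand-ins for the two separately trained models, and the thresholding and score-sorting are assumptions of the sketch rather than details from any cited study.

```python
def detect(image, propose_regions, classify_crop, threshold=0.5):
    """Two-phase detection: high-sensitivity proposals, then classification.

    propose_regions(image) -> list of (x, y, w, h) candidate boxes (phase 1),
    classify_crop(image, box) -> probability that the box contains the object
    (phase 2).  Both are stand-ins for trained models.
    """
    detections = []
    for box in propose_regions(image):      # phase 1: many false positives
        score = classify_crop(image, box)   # phase 2: prune the candidates
        if score >= threshold:
            detections.append((box, score))
    detections.sort(key=lambda d: d[1], reverse=True)
    return detections
```

The end-to-end architectures discussed next (e.g., Faster R-CNN) fold both phases into a single trained model sharing one feature map.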
While in the 2-phase detection process the models are trained separately for each phase, in the end-to-end approach one model encompassing both phases is trained.
An end-to-end architecture that has proved to be successful in object detection in natural images, and was recently applied to medical imaging, is the Faster Region-based Convolutional Neural Network (R-CNN)~\cite{Ren2015}.
It uses a CNN to obtain a feature map which is shared between region proposal network that outputs bounding box candidates, and a classification network which predicts the category of each candidate.
It was recently applied for intervertebral disc detection in X-ray images~\cite{sa2017intervertebral} and detection of colitis on CT images~\cite{liu2017detection}.
A domain specific modification that uses additional preprocessing before the region proposal step was used by~\cite{ben2017domain} for detection of architectural distortions in mammograms.
Another approach to detection is a single-phase detector that eliminates the first phase of region proposals.
Examples of popular methods that were first developed for detection in natural images and rely on this approach are You Only Look Once (YOLO)~\cite{redmon2016you}, Single Shot MultiBox Detector (SSD)~\cite{liu2016ssd} and RetinaNet~\cite{lin2017focal}.
In the context of radiology, a YOLO-based network called BC-DROID has been developed by~\cite{platania2017automated} for region of interest detection in breast mammograms.
SSD has been employed, for example in~\cite{cao2017breast}, for breast tumor detection in ultrasound images, outperforming other evaluated deep learning methods that were available at the time.
The authors of~\cite{li2017detection} applied the same network for detection of pulmonary lung nodules in CT images.
In the examples above, 2-dimensional data was used.
For 3-dimensional imaging volumes, common in medical imaging, results obtained from 2-dimensional processing are combined to produce the ultimate 3-dimensional bounding box.
As an example, in \cite{de20162d} the authors performed detection of 3D anatomy in chest CT images by processing data slice by slice in one direction.
Combining output from different planes was performed in several studies.
Most of them~\cite{prasoon2013deep, roth2016deep, roth2016improving} used orthogonal planes of MRI and CT images, performing detection in each direction separately.
The results can then be combined in different ways, e.g. by an algorithm based on output probabilities~\cite{de20162d} or using another machine learning method like random forest~\cite{li2017detection}.
An alternative method for 3D detection has been proposed for automatic detection of lymph nodes using CT images by concatenating coronal, sagittal and axial views as a single 3-channel image in~\cite{roth2014new}.
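One simple instance of combining per-plane outputs is averaging the per-voxel probabilities from the three orthogonal views; this sketch assumes a flat voxel grid and a fixed decision threshold, and is only one of the combination rules used in the studies above.

```python
def fuse_plane_probabilities(axial, coronal, sagittal, threshold=0.5):
    """Average per-voxel detection probabilities from three orthogonal planes.

    Each argument is a flat list of probabilities over the same voxel grid;
    a voxel is marked positive when the mean probability exceeds the threshold.
    """
    fused = []
    for pa, pc, ps in zip(axial, coronal, sagittal):
        mean = (pa + pc + ps) / 3.0
        fused.append(1 if mean > threshold else 0)
    return fused
```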
\subsection{Other Tasks in Radiology}
While the majority of the applications of deep learning in radiology have been in classification, segmentation, and detection, other medical imaging-related problems have found some solutions in deep learning.
Due to the variety of these problems, there is no unifying methodological framework for their solutions.
Therefore, we organize the examples below according to the problem that they attempt to address.
\paragraph{Image registration:}
In this task two or more images (often 3D volumes), typically of different types (e.g., T1-weighted and T2-weighted MRIs) must be spatially aligned such that the same location in each image represents the same physical location in the depicted organ.
Several approaches can be taken to address the problem.
In one approach, it is necessary to calculate similarity measures between image patches taken from the images of interest to register them.
The authors of~\cite{Simonovsky2016} used deep learning to learn a similarity measure from T1-T2 MRI image pairs of adult brain and tested it to register T1-T2 MRI interpatient images of the neonatal brain.
This similarity measure performed better than the standard measure, called mutual information, which is widely used in registration~\cite{Maes1997}.
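The standard mutual-information measure referred to above can be estimated from a joint intensity histogram of the two (already aligned, discretized) images. The sketch below is a minimal histogram-based estimator, assuming intensities have been binned into a small number of levels.

```python
import math
from collections import Counter

def mutual_information(image_a, image_b):
    """Mutual information between two aligned images given as flat lists of bins."""
    n = len(image_a)
    joint = Counter(zip(image_a, image_b))  # joint intensity histogram
    pa = Counter(image_a)                   # marginal histograms
    pb = Counter(image_b)
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        # p_ab / (p_a * p_b) rewritten with raw counts: count * n / (pa * pb)
        mi += p_ab * math.log(p_ab * n * n / (pa[a] * pb[b]))
    return mi
```

In registration, this value is maximized over candidate spatial transforms: the measure peaks when the intensity distributions of the two images are most statistically dependent, i.e., when the images are aligned.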
In another deep learning-based approach to image registration, the deformation parameters between image pairs are directly learned using misaligned image pairs.
In~\cite{Liao2017}, the authors trained a CNN-based model to learn the sequence of movements that resulted in the misalignment of the image pairs of CT and cone-beam CT examinations of the abdominal spine and heart.
In another study~\cite{Sokooti2017}, chest CT follow-up examinations were registered by training a CNN to predict three-dimensional displacement vector fields between the fixed and moving image pairs.
A CNN-based network was trained to correct respiratory motion in 3D abdominal MR images by predicting spatial transforms~\cite{Lv2017}.
All of these techniques are supervised regression techniques as they were trained using ground truth deformation information.
In another approach~\cite{DeVos2017}, which was unsupervised, a CNN was trained end-to-end to generate a spatial transformation which minimized dissimilarity between misaligned image pairs.
\paragraph{Image generation:}
Acquisition parameters of a radiological image strongly affect the visual quality and detail of the images obtained using the same modality.
First, we discuss the applications that synthesize images generated using different acquisition parameters within the same modality.
In~\cite{Bahrami2016}, 7T-like images were generated from 3T MR images by training a CNN with patches centered around voxels in the 3T MR images.
Undersampled (in k-space) cardiac MRIs were reconstructed using a deep cascade of CNNs in~\cite{Schlemper2017}.
A real-time method to reconstruct compressed sensed MRI using GAN was proposed by~\cite{Yang2017}.
In another approach~\cite{Chartsias2017} in order to synthesize brain MRI images based on other MRI sequences in the same patient, convolutional encoders were built to generate a latent representation of images.
Then, based on that representation a sequence of interest was generated.
Reconstruction of ``normal-dose'' CT images from low-dose CT images (which are degraded in comparison to normal-dose images) has been performed using patch-by-patch mapping of low-dose images to high-dose images with a shallow CNN~\cite{Chen2017}.
In contrast, a deep CNN has been trained with low-dose abdominal CT images for reconstruction of normal-dose CT~\cite{Kang2017}.
In another study, CT images were reconstructed from a lower number of views using a U-Net inspired architecture~\cite{Jin2017}.
Deep learning has also been applied to synthesizing images of different modalities.
For example, CT images have been generated using MRIs by adopting an FCN to learn an end-to-end non-linear mapping between pelvic CTs and MRIs~\cite{Nie2016}.
Synthetic CT images of brain were generated from one T1-weighted MRI sequence in~\cite{Han2017}.
In another application to aid a classification framework for Alzheimer’s disease diagnosis with missing PET scans, PET patterns were predicted from MRI using CNN~\cite{Li2014}.
\paragraph{Image enhancement:}
Image enhancement aims to improve different characteristics of the image such as resolution, signal-to-noise-ratio, and necessary anatomical structures (by suppressing unnecessary information) through various approaches such as super-resolution and denoising.
Super-resolution of images is important specifically in cardiac and lung imaging.
Three-dimensional near-isotropic cardiac and lung images often require scan times longer than the time the subject can hold his or her breath.
Thus, multiple 2D slices are acquired instead and the super-resolution methodology is applied to improve the resolution of the images.
An example of using deep learning in super-resolution in cardiac MRI can be found in~\cite{Oktay2016}, where the authors developed different models for single image super-resolution and for generating high resolution three-dimensional image volumes from two-dimensional image stacks.
In another study using CT, a single image super-resolution approach based on CNN was applied in a publicly available chest CT image dataset to generate high-resolution CT images, which are preferred for interstitial lung disease detection~\cite{Umehara2017}.
In this study, upscaled bicubic-interpolated images were first passed through one convolutional layer to generate low-resolution features.
Then, a non-linear transformation of those features was mapped to generate high resolution image features for the reconstruction.
An example of an application of deep learning in denoising can be found in~\cite{Benou2017} where the authors performed denoising of DCE-MRI images of a brain (for stroke and brain tumors) by training an ensemble of deep auto-encoders using synthesized data.
Removal of Rician noise in MR images using a deep convolutional neural network aided with residual learning was performed in~\cite{Jiang2017}.
In an attempt to enhance the visual details of lung structure in chest radiographs, the effect of bone structures (ribs and clavicles) were suppressed.
Bone structure has been estimated by conditional random field based fusion of the outputs of a cascaded architecture of CNNs at multiple scales~\cite{Yang2017a}.
Metal artifacts (caused by prosthetics, dental procedures etc.) have also been suppressed by using a trained CNN model to generate metal-free images using CT~\cite{Zhang2017}.
\paragraph{Content-based image retrieval:}
In the most typical version of this task, the algorithm, given a query image, finds the most similar images in a given database.
To accomplish this task, in~\cite{Qayyum2017}, a deep CNN was first trained to distinguish between different organs.
Then, features from the three fully connected layers in the network were extracted for the images in the set from which the images were retrieved (evaluation dataset).
The same features were then extracted from the query image and compared with those of the evaluation dataset to retrieve the image.
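The retrieval step described above amounts to nearest-neighbor search in the extracted feature space. The sketch below uses cosine similarity as the comparison measure; the measure and the `(name, feature vector)` database layout are assumptions for illustration, not details taken from the cited study.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def retrieve(query_features, database, top_k=3):
    """Rank database entries (name, feature_vector) by similarity to the query."""
    ranked = sorted(database,
                    key=lambda item: cosine_similarity(query_features, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]
```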
In another study, a method was developed to retrieve, arrange, and learn the relationships between lesions in CT images~\cite{Yan2017}.
\paragraph{Objective image quality assessment:}
Objective quality assessment measures of medical images aim to classify an image to be of satisfactory or unsatisfactory quality for the subsequent tasks.
Objective quality measures of medical images are important to improve diagnosis and aid in better treatment~\cite{Chow2016}.
Image quality of fetal ultrasound was predicted using CNN in a recent study~\cite{Wu2017}.
Another study~\cite{Abdi2017} attempted to reduce the data acquisition variability in echocardiograms using a CNN trained on the quality scores assigned by an expert radiologist.
In~\cite{Esses2017}, T2-weighted liver MR images were classified as being of diagnostic or non-diagnostic quality using a simple CNN architecture.
\end{document}
\section{Formalization}
\input{formalization}
\section{Small step semantics}
\input{smallstep}
\subsection{Transactions}
Formally, a transaction is a tuple $(\textit{nonce}, \textit{gasPrice}, \allowbreak \textit{gaslimit}, \allowbreak \textit{to}, \allowbreak \textit{value}, \allowbreak \textsf{sender}, \allowbreak \textit{input}, \textit{type})$
where
\begin{itemize}
\item
$\textit{nonce}$ is a number counting the number of transactions issued by the sender
\item
$\textit{gasPrice}$ is the amount of \textit{wei} to pay for one unit of gas when executing this transaction
\item
$\textit{gaslimit}$ is the maximum amount of gas to be spent on the execution of the transaction
\item
$\textit{to}$ is the recipient of the transaction
\item
$\textit{value}$ is the amount of {$\textit{wei}$} transferred by the transaction
\item
$\textsf{sender}$ is the sender of the transaction
\item
$\textit{input}$ is the input given to the transaction. This might either be the arguments given to a contract in case of a call transaction or the byte code that initializes the newly created contract in the case of a create transaction
\item
$\textit{type}$ is the type of the transaction which is either a call or a create transaction.
\end{itemize}
A transaction not only causes byte code to be executed, but additionally includes some initialization and finalization steps that together with the effects of the code execution determines the effects of the transaction on the global state of the system.
\subsection{Small step semantics}
We define a relation of the form $\sstep{\transenv}{\callstack}{\callstack'}$ where $\transenv$ is the transaction environment and $\callstack$ and $\callstack'$ denote call stacks of the execution.
The transaction environment contains the parameters of the transaction that can be accessed by the code during execution, but cannot be altered. In particular, $\transenv$ contains the following information:
\begin{itemize}
\item the address $\originator$ of the account that made the transaction
\item the gas price $\textit{gasPrice}$ for the transaction that specifies the amount of \textit{wei} to pay for a unit of gas
\item the block header $H$ of the block the transaction is executed in
\end{itemize}
A block header is of the form $(\textit{parent},$ $\textit{beneficiary},$ $\textit{difficulty},$ $\textit{number},$ $\textit{gaslimit},$ $\textit{timestamp})$.
Here, $\textit{parent}$ identifies the header of the block's parent block, $\textit{beneficiary}$ is the address of the beneficiary of the transaction, $\textit{difficulty}$ is a measure of the difficulty of solving the proof-of-work puzzle required to mine the block, $\textit{number}$ is the number of ancestor blocks, $\textit{gaslimit}$ is the maximum amount of gas that might be consumed when executing the block's transactions, and $\textit{timestamp}$ is the Unix time stamp at the block's inception.
\section{Content from previous definitions}
As our definitions are only concerned with parts of the traces produced by the execution, we assume projection functions that filter only specific actions of a trace:
\begin{definition}[Projection on execution traces]
Let $f \in \textit{Act} \to \mathbb{B}$ be a filtering function. Then the projection on traces is recursively defined as follows
\begin{align*}
\project{\pi}{f} =
\begin{cases}
\epsilon & \pi = \epsilon \\
\cons{a}{(\project{\pi'}{f})} & \pi = \cons{a}{\pi'} \land f(a) = 1 \\
\project{\pi'}{f} & \pi = \cons{a}{\pi'} \land f(a) = 0
\end{cases}
\end{align*}
\end{definition}
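Unfolding this definition on a small trace illustrates how the projection works; the actions $a_1, a_2, a_3$ and the filter values below are assumed purely for illustration:

```latex
% Let \pi = \cons{a_1}{\cons{a_2}{\cons{a_3}{\epsilon}}} with
% f(a_1) = 1, f(a_2) = 0 and f(a_3) = 1.
\begin{align*}
\project{\pi}{f}
&= \cons{a_1}{(\project{\cons{a_2}{\cons{a_3}{\epsilon}}}{f})}
  && f(a_1) = 1 \\
&= \cons{a_1}{(\project{\cons{a_3}{\epsilon}}{f})}
  && f(a_2) = 0 \\
&= \cons{a_1}{\cons{a_3}{(\project{\epsilon}{f})}}
  && f(a_3) = 1 \\
&= \cons{a_1}{\cons{a_3}{\epsilon}}
\end{align*}
```

So the projection keeps exactly the actions selected by $f$, preserving their order.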
In the following, let $\filtercallscreates{c}$ denote the function filtering all call and create actions of contract $c$:
\begin{align*}
\filtercallscreates{c}(a) =
\begin{cases}
1 & a = \textsf{CALL}_{c}(g, \textit{to}, \textit{va}, \textit{io}, \textit{is}, \textit{oo}, \textit{os}) \\
& ~\lor~ a = \textsf{CREATE}_{c}(\textit{va}, \textit{io}, \textit{is}) \\
& ~\lor~ a = \textsf{CALLCODE}_{c}(g, \textit{to}, \textit{va}, \textit{io}, \textit{is}, \textit{oo}, \textit{os}) \\
& ~\lor~ a = \textsf{DELEGATECALL}_{c}(g, \textit{to}, \textit{io}, \textit{is}, \textit{oo}, \textit{os}) \\
& \text{for some } g, \textit{to}, \textit{va}, \textit{io}, \textit{is}, \textit{oo}, \textit{os} \in \mathbb{B}^{256} \\
0 & \text{otherwise}
\end{cases}
\end{align*}
We define a function for updating the components of the global state and also lift it to an update function on execution states.
\begin{definition}[Strong updates on global state]
Let the function $\strongpartialupdategstate{\cdot}{\cdot} \in \Sigma \allowbreak \times (\mathcal{A} \to (\mathbb{N} \to \mathbb{N}) \times (\mathbb{N} \to \mathbb{N}) \times ((\mathbb{B}^{256} \to \mathbb{B}^{256}) \to (\mathbb{B}^{256} \to \mathbb{B}^{256}) ) \allowbreak \times (\arrayof{\mathbb{B}}\to \arrayof{\mathbb{B}})) \to \Sigma$
be defined as follows:
\begin{align*}
\strongpartialupdategstate{\sigma}{u} =
\lambda a.
(\textit{u}_{\textit{n}}(n), \textit{u}_{\textit{b}}(b), \textit{u}_{\textit{s}}(\textit{s}), \textit{u}_{\textit{c}}(\textit{c}) )
\end{align*}
given
$u(a) = (\textit{u}_{\textit{n}}, \textit{u}_{\textit{b}}, \textit{u}_{\textit{s}}, \textit{u}_{\textit{c}})$
and $\sigma(a) = \accountstate{n}{b}{s}{c}$
\end{definition}
We say that a strong update does not affect a component of the global state (for a certain address) if the update $u$ maps (this address) for this component to the identity function.
Using this, we can easily express the partial update of contract codes in the global state in the execution states.
\begin{definition}[Code updates in execution states]
Let the function $\updatecodeexstate{\cdot}{\cdot} \in \mathcal{S} \times (\mathcal{A} \pto \arrayof{\mathbb{B}}) \to \mathcal{S}$ be defined as follows:
\begin{align*}
\updatecodeexstate{s}{f} =
\begin{cases}
\regstate{\mu}{\iota}{\strongpartialupdategstate{\sigma}{u}} & s = \regstate{\mu}{\iota}{\sigma} \\
s & \text{otherwise}
\end{cases}
\end{align*}
with
\begin{align*}
u = \lambda a.
\begin{cases}
(\idfun{\mathbb{N}}, \idfun{\mathbb{N}}, \idfun{\mathbb{B}^{256} \to \mathbb{B}^{256}}, \lambda b. f(a)) & a \in \domain{f} \\
(\idfun{\mathbb{N}}, \idfun{\mathbb{N}}, \idfun{\mathbb{B}^{256} \to \mathbb{B}^{256}}, \idfun{\arrayof{\mathbb{B}}}) & \text{otherwise}
\end{cases}
\end{align*}
and $\idfun{\mathbb{N}}$, $\idfun{(\mathbb{B}^{256} \to \mathbb{B}^{256})}$ and $\idfun{\arrayof{\mathbb{B}}}$ the identity functions on natural numbers, the storage type and byte arrays, respectively.
\end{definition}
Note that for defining call robustness, we require that the effect of the contract execution on the balances of the global state does not depend on the code of untrusted accounts. This definition captures that the attacker should not be able to influence the (overall) money flows of a considered contract (as was done in the DAO hack). Still, this does not consider whether the money flows were performed by the attacker directly or by invoking a third contract (and consequently spending the attacker's or the third contract's money), or whether they were performed using the original contract.
For this reason, we want to present an alternative definition for call robustness that captures this other facet of reentrancy, namely that the actions of the considered contract are influenced by untrusted code rather than its overall effect on the block chain.
This definition is incomparable to the previous one, as different sequences of actions do not necessarily result in different effects on the balances.
For expressing this property, we introduce the notion of substacks and to this end we first define concatenation ($++$) for plain (annotated) call stacks using a recursive definition:
\begin{definition}[Concatenation of plain call stacks]
\begin{align*}
\concatstack{\epsilon}{S_{\textit{plain}}} &= S_{\textit{plain}} \\
\concatstack{\cons{s}{S_{\textit{plain}}}}{S_{\textit{plain}}'} &= \cons{s}{(\concatstack{S_{\textit{plain}}}{S_{\textit{plain}}'})}
\end{align*}
\end{definition}
Using concatenation, we define the substack relation:
\begin{definition}[Substack]
The plain call stack $S_{\textit{plain}}$ is a (strict) substack of $\callstack$ (written $S_{\textit{plain}} \subset \callstack$ or $S_{\textit{plain}} \subseteq \callstack$ respectively) if there exists an execution state $s$ and a plain call stack $S_{\textit{plain}}'$ such that $\callstack = \cons{s}{(\concatstack{S_{\textit{plain}}'}{S_{\textit{plain}}})}$.
\end{definition}
The definition carries easily over to annotated call stacks.
The notion of substacks helps to argue about nested calls. Given call stacks $\callstack, \callstack' \in \mathbb{S}$ such that $\ssteps{\transenv}{\callstack}{\callstack'}$ and $\callstack \subset \callstack'$, one knows that the top execution state of $\callstack$ was in the mode of calling and $\callstack'$ is a configuration within this call (before returning), as otherwise the two call stacks could not agree on their suffix. (Note that execution states are unique within an execution due to the strictly monotonically decreasing gas values.)
For completeness, we also define the independence of the locally loaded untrusted code and of return values of untrusted contract's execution:
\begin{definition}[Weak update on global state]
We define the function $\weakupdategstate{\cdot}{\cdot} \in \Sigma \times\Sigma \to \Sigma$ that performs a weak update on global state as follows:
\begin{align*}
\weakupdategstate{\sigma}{\sigma'} =
\lambda a.
\begin{cases}
\sigma'(a) & \sigma(a) = \bot \\
\sigma (a) & \text{otherwise}
\end{cases}
\end{align*}
\end{definition}
Intuitively, when a contract $c$ should not depend on the effect of another contract, after each call to this contract the state needs to be considered to change in an arbitrary way, and the contract should be considered to return an arbitrary value. Consequently, only those contracts that produce the same traces after the call, independently of possible changes to the global state and independently of the return value that was given back, can be considered independent. We, however, assume that a contract cannot arbitrarily alter the global state: codes of existing contracts can never be changed by other contracts, and in addition we assume that nonce and storage of $c$ are not touched, as this would require re-entering the contract.
\begin{definition}[Independence of external contract effects]
A contract $c \in \mathcal{C}$ is independent of the return values of a set of untrusted contract addresses $\mathcal{A}_C$ if for all valid initial configurations $(\transenv_{\textit{init}}, \annotate{s_{\textit{init}}}{x}) \in \mathcal{T}_{\textit{env}} \times \mathcal{S}_{\textit{annotated}}$ and all states $s, s'' \in \mathcal{S}$, all $c' \in \mathcal{A}_C$ and all annotated call stacks $\callstack \in \callstacks_n$ such that $\ssteps{\transenv_{\textit{init}}}{\cons{\annotate{s_{\textit{init}}}{x}}{\epsilon}}{\cons{\annotate{s}{c}}{\callstack}} \rightarrow \cons{\annotate{s''}{c'}}{\cons{\annotate{s}{c}}{\callstack}}$ it should hold that
for all
$s', t' \in \text{Fin}_\text{pot}$
\begin{align*}
\sstepstrace{\transenv_{\textit{init}}}{\cons{\annotate{s'}{c'}}{\cons{\annotate{s}{c}}{\callstack}}}{\cons{\annotate{s''}{c}}{\callstack}}{\pi}
\land \finalstate{s''} \\
\land \sstepstrace{\transenv_{\textit{init}}}{\cons{\annotate{t'}{c'}}{\cons{\annotate{s}{c}}{\callstack}}}{\cons{\annotate{t''}{c}}{\callstack}}{\pi'}
\land \finalstate{t''} \\
\implies \project{\pi}{\filtercallscreates{c}} = \project{\pi'}{\filtercallscreates{c}}
\end{align*}
\end{definition}
where the set of potential final states $\text{Fin}_\text{pot}$ is defined as follows:
\begin{align*}
&\text{Fin}_\text{pot} := \\
\{ & \textit{EXC}, \haltstate{\weakupdategstate{\strongpartialupdategstate{\textsf{gstate}(s)}{u_s}}{u_w}}{g}{d} \\
~&|~ g \in \mathbb{N} \land d \in \arrayof{\mathbb{B}} \land u_w \in \Sigma \\
&\land u_s \in \addresses \to (\NN \to \NN) \times (\NN \to \NN) \times ((\BB^{256} \to \BB^{256}) \to (\BB^{256} \to \BB^{256})) \times (\arrayof{\mathbb{B}} \to \arrayof{\mathbb{B}}) \\
&\land \exists u_\textit{bal} \in \mathbb{N} \to \mathbb{N}.\, u_s(c) = (\idfun{\mathbb{N}}, u_\textit{bal}, \idfun{\mathbb{B}^{256} \to \mathbb{B}^{256}}, \idfun{\arrayof{\mathbb{B}}}) \\
&\land \forall a \in \mathcal{A} / \{c\}.\, \exists u_n \in \mathbb{N} \to \mathbb{N}, u_b \in \mathbb{N} \to \mathbb{N}, u_\textit{stor} \in (\mathbb{B}^{256} \to \mathbb{B}^{256}) \to (\mathbb{B}^{256} \to \mathbb{B}^{256}). \\
& \qquad \, u_s(a) = (u_n, u_b, u_{\textit{stor}}, \idfun{\arrayof{\mathbb{B}}})\}
\end{align*}
Finally, we need to capture the case that the contract might directly access untrusted code and depend on this value. To this end we introduce the notion of small steps under local update. Intuitively, this allows for accessing different codes for addresses in the global state while running a contract than those used when calling.
\begin{definition}[Small steps under local update]
The small step relation under local (code) update $f \in \mathcal{A} \pto \arrayof{\mathbb{B}}$ is recursively defined by the following rules:
\begin{mathpar}
\infer
{ }
{\sstepslocalupdate{\transenv}{\cons{s}{\callstack}}{\cons{s}{\callstack}}{f}}
\infer
{\sstep{\transenv}{\cons{\updatecodeexstate{s}{f}}{\callstack}}{\cons{\updatecodeexstate{s'}{f}}{\callstack}} \\
\sstepslocalupdate{\transenv}{\cons{s'}{\callstack}}{\cons{s''}{\callstack}}{f} }
{\sstepslocalupdate{\transenv}{\cons{s}{\callstack}}{\cons{s''}{\callstack}}{f}}
\infer
{\sstep{\transenv}{\cons{s}{\callstack}}{\cons{s'}{\cons{s}{\callstack}}} \\
\ssteps{\transenv}{\cons{s'}{\cons{s}{\callstack}}}{\cons{s''}{\cons{s}{\callstack}}} \\
\finalstate{s''} \\
\sstep{\transenv}{\cons{s''}{\cons{s}{\callstack}}}{\cons{s'''}{\callstack}} \\
f' = f \cup \{(a, \textit{code}) ~|~ (\textsf{gstate}(s))(a) = \bot \land
\exists n, b, \textit{stor}.\, (\textsf{gstate}(s'''))(a) = (n, b, \textit{stor}, \textit{code}) \} \\
\sstepslocalupdate{\transenv}{\cons{s'''}{\callstack}}{\cons{s''''}{\callstack}}{f'}
}
{\sstepslocalupdate{\transenv}{\cons{s}{\callstack}}{\cons{s''''}{\callstack}}{f}}
\end{mathpar}
\end{definition}
This definition can also be easily extended to the (annotated) traces semantics.
\begin{definition}[Local independence of untrusted code]
A contract $c \in \mathcal{C}$ is locally independent of a set of untrusted contract addresses $\mathcal{A}_C$ if for all valid initial configurations $(\transenv_{\textit{init}}, \annotate{s_{\textit{init}}}{x}) \in \mathcal{T}_{\textit{env}} \times \mathcal{S}_{\textit{annotated}}$ and all states $s \in \mathcal{S}$ and all annotated call stacks $\callstack \in \callstacks_n$ such that $\ssteps{\transenv_{\textit{init}}}{\cons{\annotate{s_{\textit{init}}}{x}}{\epsilon}}{\cons{\annotate{s}{c}}{\callstack}}$, for all execution states $s', s'' \in \mathcal{S}$, for all $f \in \mathcal{A} \pto \arrayof{\mathbb{B}}$ with $\domain{f} = \mathcal{A}_C$
\begin{align*}
\sstepstrace{\transenv_{\textit{init}}}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s'}{c}}{\callstack}}{\pi}
~\land~ \finalstate{s'} \\
~\land~ \sstepslocalupdatetrace{\transenv_{\textit{init}}}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s''}{c}}{\callstack}}{f}{\pi'}
~\land~ \finalstate{s''} \\
\implies \project{\pi}{\filtercallscreates{c}} = \project{\pi'}{\filtercallscreates{c}}
\end{align*}
\end{definition}
In addition, we define balance equality on final execution states:
\begin{definition}[Balance equality on final execution states]
Two final execution states $s, s' \in \mathcal{S}$ are equal with respect to the balances ($s \approx_{\textit{balances}} s'$) if for some global states $\sigma, \sigma' \in \Sigma$ and some gas values $\textit{gas}, \textit{gas}' \in \mathbb{N}$ and some output data $d, d' \in \arrayof{\mathbb{B}}$ one of the following holds:
\begin{enumerate}
\item $s = s' = \textit{EXC}$
\item
$s = \haltstate{\sigma}{\textit{gas}}{d}
\land s' = \haltstate{\sigma'}{\textit{gas}'}{d'}$ \\
$\land \forall a \in \mathcal{A}. \, \textsf{b} \,(\sigma \, (a)) = \textsf{b} \, (\sigma' \, (a))$
\end{enumerate}
\end{definition}
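The balance-equality relation above has a simple operational reading. The following Python sketch (the dictionary encoding of states and accounts is our own, not part of the formalization) checks that two final states are either both exceptional, or both halted with global states that agree on the balance of every account:

```python
# Toy model of s ~balances s': both states are EXC, or both halted with
# global states whose balance components coincide on all accounts.

EXC = "EXC"  # exceptional halting state

def balance_equal(s1, s2):
    if s1 == EXC and s2 == EXC:
        return True
    if s1 == EXC or s2 == EXC:
        return False
    sigma1, sigma2 = s1["sigma"], s2["sigma"]  # addr -> account record
    addrs = set(sigma1) | set(sigma2)
    # compare only the balance field b(sigma(a)); gas and output may differ
    return all(sigma1.get(a, {}).get("balance") ==
               sigma2.get(a, {}).get("balance") for a in addrs)
```

Note that nonces, storage, gas values and output data are deliberately ignored, mirroring the definition.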
We define independence of (parts of) the transaction environment as this can be influenced by miners.
Intuitively, the effects a contract execution has on the flow of money should not be affected by (previously specified) components of the transaction environment.
We assume $\mathcal{C}_{\transenv}$ to be the set of the accessor functions of the transaction environment.
We define the equality up to a component for transaction environments:
\begin{definition}[Equality up to components]
Two transaction environments $\transenv$, $\transenv'$ are equal up to component $c_{\transenv} \in \mathcal{C}_{\transenv}$ (written $\transenv \equalupto{c_{\transenv}} \transenv'$) if
\begin{align*}
\forall c_{\transenv}' \in \mathcal{C}_{\transenv} \setminus \{ c_{\transenv} \}. \; c_{\transenv}'(\transenv) = c_{\transenv}'(\transenv')
\end{align*}
\end{definition}
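The definition of equality up to a component translates directly into code. In the following sketch the accessor set $\mathcal{C}_{\transenv}$ is modeled as a dictionary of functions; the three accessor names are illustrative assumptions:

```python
# Accessors of the transaction environment, modeled as named functions.
ACCESSORS = {
    "origin":   lambda env: env["origin"],
    "gasprice": lambda env: env["gasprice"],
    "header":   lambda env: env["header"],
}

def equal_upto(env1, env2, excluded):
    # env1 and env2 are equal up to `excluded` iff all other accessors agree
    return all(acc(env1) == acc(env2)
               for name, acc in ACCESSORS.items() if name != excluded)
```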
\begin{definition}
A contract $c$ is independent of a subset $I \subseteq \mathcal{C}_{\transenv}$ of components of the transaction environment if for all $c_{\transenv} \in I$, for all valid initial configurations $(\transenv_{\textit{init}}, \annotate{s_{\textit{init}}}{x}) \in \mathcal{T}_{\textit{env}} \times \mathcal{S}_{\textit{annotated}}$ and all states $s \in \mathcal{S}$ and all annotated call stacks $\callstack \in \callstacks_n$ such that $\ssteps{\transenv_{\textit{init}}}{\cons{\annotate{s_{\textit{init}}}{x}}{\epsilon}}{\cons{\annotate{s}{c}}{\callstack}}$ for all $\transenv \in \mathcal{T}_{\textit{env}}$ the following condition holds:
\begin{align*}
& c_{\transenv}(\transenv_{\textit{init}}) \neq c_{\transenv}(\transenv)
\land \transenv_{\textit{init}} \equalupto{c_{\transenv}} \transenv \\
&\land \ssteps{\transenv_{\textit{init}}}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s'}{c}}{\callstack}}
~\land~ \ssteps{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s''}{c}}{\callstack}} \\
& \land \finalstate{s'}
\land \finalstate{s''}
\implies s' \approx_{\textit{balances}} s''
\end{align*}
\end{definition}
\subsection{Call restriction}
Even though the Solidity syntax conveys the impression of being able to communicate with well-defined contract instances, this is not necessarily the case. If a contract shall interact with an already existing contract on the blockchain, the developer needs to specify the address of this contract or the contract address needs to be derived dynamically from the context. Both these approaches are error-prone and may lead to money transfers or to code executions that were not intended by the user.
Consider the following example:
\begin{lstlisting}
contract FriendlyMoney {
address friend = 0xBa8AA02Fec8d3D440B3A1B60edDAAD80521581c9;
function sendMoneyToMyFriend(uint amount){
DonateMeMoney(friend).donate(amount);
}
}
\end{lstlisting}
Specifying that a contract is an instance of a certain class (as done by \lstinline|DonateMeMoney(friend)|) does not imply any run-time checks, but only facilitates the calling of its functions. If no contract of the specified form can be found at the specified address, the fallback function of the account at this address will be executed. Consequently, giving a wrong \lstinline|friend| address might not only result in sending money to the wrong account, but also in executing untrusted code.
In order to prevent such undesired effects, a user should be able to specify the set of contract addresses it expects to perform calls to. We introduce a property which we call \emph{call restriction} that ensures that only a predefined set of desired contracts is entered during execution.
\begin{definition}[Call restriction]
A contract $c \in \mathcal{C}$ restricts calls to a set of addresses $\mathcal{A}_{C} \subseteq \mathcal{A}$
if for all reachable configurations $(\transenv, \cons{\annotate{s}{c}}{\callstack})$ it holds that
\begin{align*}
\ssteps{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s'}{c'}}{\concatstack{\callstack'}{\cons{\annotate{s}{c}}{\callstack}}}}
\implies \getcontractaddress{c'} \in \mathcal{A}_{C}
\end{align*}
\end{definition}
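To illustrate the intuition behind call restriction, the following Python sketch checks that every contract entered directly or transitively from $c$ lies in the whitelist $\mathcal{A}_C$; the trace-of-call-events representation is our own simplification of the small-step semantics:

```python
def restricts_calls(trace, c, whitelist):
    """trace: list of (caller, callee) call events in execution order.
    Returns True iff every contract entered directly or indirectly
    from c belongs to the whitelist A_C."""
    reached = {c}          # contracts whose execution originates from c
    for caller, callee in trace:
        if caller in reached:
            reached.add(callee)
            if callee not in whitelist:
                return False
    return True
```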
A more liberal version of this property might allow contracts outside the call set to be called, but only if they do not perform any modifications on the global state (that is, if they have no side effects).
A potential use case of this definition might be a contract only calling library functions for computations.
\begin{definition}[Relaxed call restriction]
A contract $c \in \mathcal{C}$ restricts calls to a set of addresses $\mathcal{A}_{C} \subseteq \mathcal{A}$ and only performs side-effectless calls otherwise
if for all reachable configurations $(\transenv, \cons{\annotate{s}{c}}{\callstack})$ it holds for all $c' \not \in \mathcal{A}_{C}$ that
\begin{align*}
\ssteps{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s'}{c'}}{\concatstack{\callstack'}{\cons{\annotate{s}{c}}{\callstack}}}} \rightarrow^* \cons{\annotate{s''}{c'}}{\concatstack{\callstack'}{\cons{\annotate{s}{c}}{\callstack}}}
~\land~ \finalstate{s''} \\
\implies s'' = \textit{EXC} ~\lor~ s'' = \haltstatefull{\sigma}{g}{d}{\eta} ~\land~ \forall a \in \mathcal{A}_{C} . \, (\textsf{gstate}(s'))(a) = \sigma(a)
\end{align*}
\end{definition}
\subsubsection{Securify}
Recently, the online tool Securify \cite{securify} has been presented at the third Ethereum Foundation Developers conference (November 1--4, 2017). Securify supports automated checks for transaction order dependency, reentrancy, some insecure coding patterns, and some more specific properties, such as checking whether a contract locks Ether and a version of miner dependency that checks whether influenceable data is used as input to the SHA3 function. The authors of Securify claim that the tool provides ``guarantees, to avoid reporting vulnerable contracts are safe''.
In the case of Securify, no work on the underlying theory has been published so far, so we could only evaluate the tool itself to check the soundness claim. We came to the conclusion that either the underlying abstraction cannot be sound, or the security patterns used for characterizing good behavior are not sufficient.
For example, they report the example from Figure~\ref{exc_fn} not to contain mishandled exceptions.
For transaction order dependency, they seem to follow a more advanced approach than Oyente, but the following example is still wrongly classified as independent of the transaction ordering:
\lstinputlisting{tod_fn_securify.sol}
This contract clearly is transaction order dependent, as only the first transaction calling the \lstinline|payoutMoney| function receives the payout that is sent in the initialization code of the \lstinline|Dummy| contract created in the function.
\section{Introduction}
\input{intro}
\section{Background on Ethereum}
\label{sec:ethereum}
\subsubsection{Ethereum}
\input{ethereum}
\subsubsection{EVM Bytecode}
\input{bytecode}
\subsubsection{Solidity}
\input{solidity}
\section{Small-Step Semantics}
\label{sec:semantics}
\input{small_step}
\section{Security Definitions}
\label{sec:definitions}
\input{security_properties}
\subsection{Discussion}
\label{sec:limitations}
\input{otherapproaches}
\section{Conclusions}
\label{sec:conclusion}
We presented the first complete small-step semantics of EVM bytecode and formalized a large fragment thereof in the F* proof assistant, successfully validating it against the official Ethereum test suite. We further defined for the first time a number of salient security properties for smart contracts, relying on a combination of hyper- and safety properties. Our framework is available to the academic community in order to facilitate future research on rigorous security analysis of smart contracts.
In particular, this work opens up a number of interesting research directions. First, it would be interesting to formalize in F* the semantics of Solidity code and a compiler from Solidity into EVM, formally proving its soundness against our semantics. This would allow us to provide software developers with a tool to verify the security of their code, from which they could obtain bytecode that is secure by construction. Second, we intend to design an efficient static analysis technique for EVM bytecode and to formally prove its soundness against our semantics.
\paragraph{Acknowledgments.} This work has been partially supported by the European Research Council (ERC) under the European Union's Horizon 2020 research (grant agreement No 771527-BROWSEC).
\section{Formalization}
\input{formalization}
\section{Small step semantics}
\input{smallstep}
\subsection{Notations}
\label{subsec:formalization_notations}
In the following, we will use $\mathbb{B}$ to denote the set $\{0,1\}$ of bits and accordingly $\mathbb{B}^{x}$ for sets of bitstrings of size $x$. We further let $\integer{x}$ denote the set of non-negative integers representable by $x$ bits and allow for implicit conversion between those two representations (assuming bitstrings to represent a big-endian encoding of natural numbers). In addition, we will use the notation $\arrayof{X}$ (resp. $\stackof{X}$) for arrays (resp. lists) of elements from the set $X$. We use standard notations for operations on arrays and lists. In particular we write $\arraypos{a}{\textit{pos}}$ to access position $\textit{pos} \in [0, \size{a} - 1]$ of array $a \in \arrayof{X}$ and $\arrayinterval{a}{\textit{down}}{\textit{up}}$ to access the subarray of size $\textit{up} - \textit{down}$ from position $\textit{down} \in [0, \size{a} - 1]$ (inclusive) to position $\textit{up} \in [0, \size{a}]$ (exclusive). In case that $\textit{down} > \textit{up}$ this operation results in the empty array $\epsilon$.
In addition, we write $\concat{a_1}{a_2}$ for the concatenation of two arrays $a_1, a_2 \in \arrayof{X}$.
In the following formalization, we will make use of bytearrays $b \in \arrayof{\BB^8}$. To this end, we will assume functions $\bitstringtobytearray{(\cdot)} \in \mathbb{B}^x \to \arrayof{\BB^8}$ and $\bytearraytobitstring{(\cdot)} \in \arrayof{\BB^8} \to \mathbb{B}^x$ to chunk bitstrings whose size is divisible by $8$ into bytearrays and vice versa.
To denote the zero byte, we write $0^8$ and accordingly, for an array of zero bytes of size $n$, we write $0^{8\cdot n}$.
For lists, we denote the empty list by $\epsilon$ and write $\cons{x}{\textit{xs}}$ for placing element $x \in X$ on top of list $\textit{xs} \in \stackof{X}$.
In addition, we write $\concatstack{\textit{xs}}{\textit{ys}}$ for concatenating lists $\textit{xs}, \textit{ys} \in \stackof{X}$.
We let $\mathcal{A}$ denote the set of $160$-bit addresses ($\mathbb{B}^{160}$).
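The array notations above have the following executable reading (a small Python sketch under our own encoding of arrays as lists and bitstrings as character strings):

```python
def subarray(a, down, up):
    # a[down, up]: subarray of size up - down, down inclusive, up exclusive;
    # results in the empty array epsilon when down > up
    if down > up:
        return []
    return a[down:up]

def to_bytes(bits):
    # chunk a bitstring whose length is divisible by 8 into a byte array
    assert len(bits) % 8 == 0
    return [bits[i:i + 8] for i in range(0, len(bits), 8)]
```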
\subsection{Configurations}
The global state of the system is defined by the accounts that are existing and their current state, including their balances and their codes. Formally, the global state is a (partial) mapping from account addresses to accounts:
\[\sigma \in \Sigma = \mathcal{A} \to \optionof{(\integer{256}\times \integer{256} \times (\mathbb{B}^{256} \to \mathbb{B}^{256}) \times \arrayof{\mathbb{B}^8})}\]
An account \account{\textit{nonce}}{\textit{balance}}{\textit{stor}}{\textit{code}} is described by the account's balance $\textit{balance} \in \integer{256}$, the state of its persistent storage $\textit{stor} \in \mathbb{B}^{256} \to \mathbb{B}^{256}$, its nonce $\textit{nonce} \in \integer{256}$ and the account's code $\textit{code} \in \arrayof{\BB^8}$.
A configuration $\exconf{\callstack}{\eta}$ of the execution consists of the call stack $\callstack$ of execution states and the transaction effects $\eta$.
The call stack $\callstack$ keeps track of the calls made during execution. To this end it consists of execution states of one of the following forms:
\begin{itemize}
\item $\textit{EXC}$ denotes an exceptional halting state and can only occur as top element. It expresses that the execution of the current call ended with an exception.
\item $\haltstatefull{\sigma}{\textit{gas}}{d}{\eta}$ denotes regular halting and can only occur as top element. It expresses that the execution of the current call halted in global state $\sigma \in \Sigma$ with transaction effects $\eta \in N$, an amount $\textit{gas} \in \integer{256}$ of remaining gas and return data $d \in \arrayof{\BB^8}$.
\item $\regstatefull{\mu}{\iota}{\sigma}{\eta}$ denotes a regular execution state and represents the state of the execution of the current call. A regular execution state includes the local state of the stack machine $\mu \in M$, the execution environment $\iota \in I$ that contains the parameters given to the call, the current global state $\sigma \in \Sigma$ and the transaction effects $\eta \in N$.
\end{itemize}
The reason to make the global state part of the call stack is that it does not change linearly during the execution. In the case of an exception, all effects of the call's execution on the global state are reverted and the execution continues in the global state of the caller.
The same holds for the transaction effects.
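The reverting behavior explained above can be made concrete with a toy model (our own encoding, not the actual small-step rules): each frame carries its own copy of the global state, the callee mutates only its copy, and the caller adopts it only on regular halting:

```python
def execute_call(caller_frame, callee_code):
    # each call-stack frame carries its own snapshot of the global state
    callee_sigma = dict(caller_frame["sigma"])
    try:
        callee_code(callee_sigma)             # mutates the callee's copy only
        caller_frame["sigma"] = callee_sigma  # commit on regular halting
        return "HALT"
    except Exception:
        # exception: all effects discarded, caller's state untouched
        return "EXC"
```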
Formally, we give the syntax of call stacks as follows:
\begin{align*}
\mathbb{S} := \{& \cons{\textit{EXC}}{S_{\textit{plain}}}, ~\cons{\haltstatefull{\sigma}{\textit{gas}}{d}{\eta}}{S_{\textit{plain}}}, ~S_{\textit{plain}} \\ ~|~
& \sigma \in \Sigma, ~\textit{gas} \in \mathbb{N}, ~d \in \arrayof{\BB^8},~ \eta \in N ~,S_{\textit{plain}} \in \stackof{M \times I \times \Sigma \times N} \}
\end{align*}
In Figure~\ref{fig:grammar'} we give a full grammar for call stacks:
\begin{figure*}[h]
\begin{mathpar}
\begin{array}{rlclll}
\text{Call stacks} & \mathbb{S} & \ni & \callstack & := & \cons{\textit{EXC}}{S_{\textit{plain}}} ~|~ \cons{\haltstatefull{\sigma}{g}{d}{\eta}}{S_{\textit{plain}}} ~|~ S_{\textit{plain}} \\
\text{Plain call stacks} &
\mathbb{S}_{\textit{plain}} & \ni & S_{\textit{plain}} & := & \cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{S_{\textit{plain}}} \\
\text{Machine states} & M & \ni & \mu & := & \smstate{\textit{gas}}{\textit{pc}}{m}{i}{s} \\
\text{Execution environments} & I & \ni & \iota & := & \sexenv{\textit{actor}}{\textit{input}}{\textit{sender}}{\textit{value}}{\textit{code}}\\
\text{Global states} & \Sigma & \ni &\sigma & & \\
\text{Account states} & \mathbb{A} & \ni & \textit{acc} & := & \accountstate{n}{b}{\textit{stor}}{\textit{code}} ~|~ \bot \\
\text{Transaction effects} & N & \ni &\eta & := & (b, L, \textit{S}_\dagger) \\
\text{Transaction environments} & \mathcal{T}_{\textit{env}} & \ni & \transenv & := & (o, \textit{gasPrice}, H) \\
\\
\text{Notations:}
&\multicolumn{5}{c}{
d \in \arrayof{\BB^8}, \quad g \in \integer{256}, \quad \eta \in N, \quad
o \in \mathcal{A}, \quad \textit{gasPrice} \in \integer{256}, \quad H \in \mathcal{H}} \\
&\multicolumn{5}{c}{
\textit{gas} \in \integer{256}, \quad \textit{pc} \in \integer{256}, \quad m \in \mathbb{B}^{256} \to \mathbb{B}^8, \quad i \in \integer{256}, \quad s \in \stackof{\mathbb{B}^{256}}} \\
&\multicolumn{5}{c}{
\textit{actor} \in \mathcal{A} \quad \textit{input} \in \arrayof{\BB^8} \quad \textit{sender} \in \mathcal{A} \quad \textit{value} \in \integer{256} \quad \textit{code} \in \arrayof{\BB^8}} \\
&\multicolumn{5}{c}{
b \in \integer{256} \quad L \in \sequenceof{\textit{Ev}_{\textit{log}}} \quad \textit{S}_\dagger \subseteq \mathcal{A} \quad
\Sigma = \mathcal{A} \to \mathbb{A}
}
\end{array}
\end{mathpar}
\caption{Grammar for call stacks and transaction environments}
\label{fig:grammar'}
\end{figure*}
\subsubsection{Regular execution states}
In the following we give a detailed description of the components of regular execution states.
\paragraph{Local machine state}
The local machine state $\mu \in M = \integer{256} \times \integer{256} \times (\mathbb{B}^{256} \to \BB^8) \times \integer{256} \times \stackof{\mathbb{B}^{256}}$ represents the state of the underlying stack machine used for execution and consists of the following components:
\begin{itemize}
\item $\textit{gas} \in \integer{256}$ is the current amount of gas still available for execution;
\item $\textit{pc}\in \integer{256}$ is the current program counter;
\item $\textit{m} \in \BB^{256} \to \BB^8$ is a mapping from 256-bit words to bytes that represents the local memory;
\item $\textit{i} \in \integer{256}$ is the current number of active words in memory;
\item $\textit{s} \in \stackof{\mathbb{B}^{256}}$ is the local 256-bit word stack of the stack machine.
\end{itemize}
The execution of each internal transaction starts in a fresh machine state, with an empty stack, memory initialized to all zeros, and program counter and active words in memory set to zero. Only the gas is instantiated with the gas value available for the execution.
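A fresh machine state as just described can be sketched as follows (the class and field names are our choice; memory is modeled as a zero-defaulting map):

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class MachineState:
    gas: int                       # only component instantiated at call start
    pc: int = 0                    # program counter starts at zero
    memory: dict = field(default_factory=lambda: defaultdict(int))
    active_words: int = 0          # number of active words in memory
    stack: list = field(default_factory=list)  # empty word stack

def fresh_machine_state(gas):
    return MachineState(gas=gas)
```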
\paragraph{Execution environment}
The execution environment $\iota$ of an internal transaction specifies the static parameters of the transaction. It is a tuple of the form $\sexenv{\textit{actor}}{\textit{input}}{\textit{sender}}{\textit{value}}{\textit{code}} \in I = \mathcal{A} \times \arrayof{\BB^8} \times \mathcal{A} \times \integer{256} \times \arrayof{\BB^8}$ with the following components:
\begin{itemize}
\item $\textit{actor} \in \mathcal{A}$ is the address of the account currently executing;
\item $\textit{input} \in \arrayof{\BB^8}$ is the data given as an input to the internal transaction;
\item $\textit{sender} \in \mathcal{A}$ is the address of the account that initiated the internal transaction;
\item $\textit{value} \in \integer{256}$ is the value transferred by the internal transaction;
\item $\textit{code} \in \arrayof{\BB^8}$ is the code currently executed.
\end{itemize}
This information is determined at the beginning of an internal transaction execution and it can be accessed, but not altered during the execution.
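Since the execution environment can be accessed but not altered during execution, it is naturally modeled as an immutable record; a frozen dataclass (our own modeling, not part of the formalization) enforces exactly this:

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class ExecutionEnv:
    actor: str    # address of the account currently executing
    input: bytes  # input data of the internal transaction
    sender: str   # address that initiated the internal transaction
    value: int    # value transferred by the internal transaction
    code: bytes   # code currently executed
```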
\paragraph{Transaction effects}
The transaction effects $\eta \in N= \integer{256} \times \stackof{\textit{Ev}_{\textit{log}}} \times \setof{\mathcal{A}}$ collect information on changes that will be applied to the global state after the transaction's execution. They do not affect the code execution itself.
In particular, the transaction effects contain the following components:
\begin{itemize}
\item $\textit{bal}_r \in \integer{256}$ is the refund balance that is increased by storage operations and will finally be paid to the transaction's beneficiary;
\item $L \in \stackof{\textit{Ev}_{\textit{log}}}$ is the sequence of log events performed during execution. A log event is a tuple of the address of the currently executing account, a tuple with zero to four components specified when executing a logging instruction and finally a fraction of the local memory. Consequently, $\textit{Ev}_{\textit{log}} = \mathcal{A} \times (\{() \} \cup \mathbb{B}^{256} \cup (\mathbb{B}^{256})^2 \cup (\mathbb{B}^{256})^3 \cup (\mathbb{B}^{256})^4) \times \arrayof{\BB^8}$.
\item
$\textit{S}_\dagger \subseteq \mathcal{A}$ is the suicide set that keeps track of the contracts that destroyed themselves (using the $\textsf{SELFDESTRUCT}$ command) during the execution (of the external transaction). These contracts are recorded in $\textit{S}_\dagger$ and only removed from the global state after the end of the execution.
\end{itemize}
\subsection{Transaction environment}
The transaction environment represents the static information of the block that the transaction is executed in and the immutable parameters given to the transaction, such as the gas price or the gas limit.
More specifically, the transaction environment $\transenv \in \mathcal{T}_{\textit{env}} = \mathcal{A} \times \integer{256} \times \mathcal{H}$ is a tuple of the form $(o, \textit{gasPrice}, H)$ with the following components:
\begin{itemize}
\item
$o \in \mathcal{A}$ is the address of the account that made the transaction
\item $\textit{gasPrice} \in \integer{256}$ denotes the amount of wei that needs to be paid for a unit of gas in this transaction
\item $H \in \mathcal{H} = \integer{256} \times \mathcal{A} \times \integer{256} \times \integer{256} \times \integer{256} \times \integer{256}$ is the header of the block that the transaction is part of. A block header is of the form $(\textit{parent},$ $\textit{beneficiary},$ $\textit{difficulty},$ $\textit{number},$ $\textit{gaslimit},$ $\textit{timestamp})$.
Here, $\textit{parent} \in \integer{256}$ identifies the header of the block's parent block, $\textit{beneficiary} \in \mathcal{A}$ is the address of the beneficiary of the transaction, $\textit{difficulty} \in \integer{256}$ is a measure of the difficulty of solving the proof-of-work puzzle required to mine the block, $\textit{number} \in \integer{256}$ is the number of ancestor blocks, $\textit{gaslimit} \in \integer{256}$ is the maximum amount of gas that might be consumed when executing the block's transactions and $\textit{timestamp} \in \integer{256}$ is the Unix timestamp at the block's inception.
Note that this is a simplified version of the block header described in the yellow paper~\cite{yellowpaper} that only contains those components needed for transaction execution.
\end{itemize}
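The transaction environment and the simplified block header can be written down as plain tuples; the following named-tuple sketch mirrors the components listed above (the encoding is ours):

```python
from collections import namedtuple

# simplified block header (parent, beneficiary, difficulty, number,
# gaslimit, timestamp), as in the text
BlockHeader = namedtuple("BlockHeader",
    ["parent", "beneficiary", "difficulty", "number", "gaslimit", "timestamp"])

# transaction environment (o, gasPrice, H)
TransactionEnv = namedtuple("TransactionEnv", ["origin", "gasprice", "header"])
```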
\subsection{Semantics}
\newcommand\newsubcap[1]{\phantomcaption%
\caption*{\thefigure.\thesubfigure: #1}}
As previously discussed, we are not aware of any prior formal security definitions of smart contracts. Nevertheless, we compared our definitions with the verification conditions used in Oyente \cite{oyente}. Our investigation shows that the verification conditions adopted in this tool are neither sound nor complete.
For detecting mishandled exceptions, it is checked whether each $\textsf{CALL}$ instruction in the contract code is directly followed by the $\textsf{ISZERO}$ instruction that checks whether the top element of the stack is zero. Although stated in the paper, Oyente does not implement this check, so we had to inspect the bytecodes manually to determine the outcomes of the syntactic check.
As shown in Figure~\ref{exc_fn}, a check for the call returning zero does not necessarily imply proper exception handling and therefore atomicity of the contract.
This excerpt of a simple banking contract, which keeps track of the users' balances and allows users to withdraw their balances using the function \lstinline|withdraw|, checks for the success of the performed call but still does not react accordingly: it only makes sure that the number of successes is updated consistently, but does not update the user's balance record according to the call outcome.
On the other hand, not performing the desired check does not imply the absence of atomicity as illustrated in Figure~\ref{exc_fp}.
Writing the outcome to some variable before checking it satisfies the negative pattern, yet correct exception handling is still performed.
For detecting timestamp dependency, Oyente checks whether the contract has a symbolic execution path with the timestamp (represented as its own symbolic variable) being included in one of its constraints.
This definition, however, does not capture the case shown in Figure~\ref{time_fn}.
\begin{figure}[t]
\begin{subfigure}[b]{0.55\textwidth}
\lstinputlisting{exc_fn.sol}
\newsubcap{Exception handling: False negative}
\label{exc_fn}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\lstinputlisting{exc_fp.sol}
\newsubcap{Exception handling: False positive}
\label{exc_fp}
\end{subfigure}
\bigskip
\begin{subfigure}[b]{.5\textwidth}
\centering
\lstinputlisting{time_fn.sol}
\newsubcap{Timestamp dependency: False negative}
\label{time_fn}
\end{subfigure}
\begin{subfigure}[b]{.5\textwidth}
\centering
\lstinputlisting{time_fp.sol}
\newsubcap{Timestamp dependency: False positive}
\label{time_fp}
\end{subfigure}
\bigskip
\begin{subfigure}[b]{0.6 \textwidth}
\lstinputlisting{reentrant_fn.sol}
\newsubcap{Reentrancy: False negative}
\label{reentrant_fn}
\end{subfigure}
\begin{subfigure}[b]{0.4 \textwidth}
\lstinputlisting{reentrant_fp.sol}
\newsubcap{Reentrancy: False positive}
\label{reentrant_fp}
\end{subfigure}
\end{figure}
This contract is clearly timestamp dependent, as whether or not the function \lstinline|pay| pays out money to the sender depends on the timestamp set when creating the contract. A malicious miner could consequently manipulate the block timestamp of a transaction that creates such a contract so that money is paid out, and subsequently query the contract to drain it. This is, however, not captured by the characterization of the property in Oyente, as it only considers the local execution paths of the contract.
On the other hand, using the block timestamp in path constraints does not imply a dependency, as can easily be seen from the example in Figure~\ref{time_fp}.
For the transaction order dependency and the reentrancy property, we were unfortunately not able to reconcile the property characterization provided in the paper with the implementation of Oyente.
For checking reentrancy, according to the paper it should be checked whether the constraints on the path leading to a $\textsf{CALL}$ instruction can still be satisfied after performing the updates along the path (e.g., changing the storage). If so, the contract is flagged as reentrant. According to our understanding, this approach should not flag contracts that correctly guard their calls as reentrant. Still, the version of Oyente provided with the paper tags the contract in Figure~\ref{reentrant_fp} as reentrant.
There exists an updated version of Oyente~\cite{newo} that precisely tags this contract as not reentrant, but we could not find any concrete information on the criteria used for checking this property. Still, we found that the underlying characterization cannot be sufficient for detecting reentrancy, as the contract in Figure~\ref{reentrant_fn} is classified as not exhibiting a reentrancy vulnerability even though it should be, since the \lstinline|send| command also executes the recipient's fallback function (although with limited gas).
The example is taken from the Solidity documentation \cite{solidity}, where it is listed as a negative example.
For transaction order dependency, Oyente should check whether execution traces exhibiting different Ether flows exist. However, it turned out that not even a simple example of a transaction-order-dependent contract is detected by any of the versions of Oyente.
\subsection{Absence of reentrancy}
A contract $c$ is re-entrant if, when called in an arbitrary way, it can call itself again (directly or indirectly) before its original execution has terminated.
We formally define the absence of re-entrancy as single-entrancy.
\begin{definition}[Single-entrancy]
\label{single-entrancy}
A contract $c$ is single-entrant if
for all execution states $s$, $s'$, all call stacks $\callstack$, $\callstack'$ such that $\cons{\annotate{s}{c}}{\callstack}$ is {reasonable}, for all transaction environments $\transenv$ and all $c' \in \mathbb{B}^{160} \cup \{ \bot \}$ the following condition holds:
\begin{align*}
\ssteps{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s'}{c'}}{\callstack'}}
\land \cons{\annotate{s}{c}}{\callstack} \subset \callstack' \implies c \neq c'
\end{align*}
\end{definition}
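On a concrete annotated call stack, a violation of single-entrancy simply means that the contract annotation $c$ occurs in more than one frame at once; a minimal check over our own list representation of annotated stacks reads:

```python
def violates_single_entrancy(annotated_stack, c):
    # annotated_stack: contract annotations of the call stack, top first;
    # c re-enters iff it appears in more than one frame at the same time
    return annotated_stack.count(c) > 1
```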
\subsection{Correct exception handling}
Correct exception handling can be seen as a purely syntactic property on the contract's bytecode: after every call instruction, an $\textsf{ISZERO}$ instruction is expected that checks for the success of the call. This approach is also taken by \cite{luu2016making} and a corresponding check is even incorporated in the Solidity online compiler. However, it should be mentioned that this criterion is neither sufficient nor necessary to ensure correct exception handling.
Guaranteeing correct exception handling by a semantic condition is not easily possible without a precise specification of the developer's intentions.
\subsection{Calling known addresses}
To prevent calls to the unknown, it is sufficient if the user specifies the set of addresses it expects the contract under analysis to call.
We define a corresponding property that ensures that the direct and indirect calls of a contract are restricted to addresses specified in the set.
The corresponding property can be seen as a white-listing of trustworthy contracts.
Note that this property implies single-entrancy in the case that the contract itself is not included in the set of specified addresses. A dual property that black-lists untrusted contracts could be considered a generalization of the single-entrancy property discussed in Definition~\ref{single-entrancy}.
\begin{definition}[Call restriction]
A contract $c$ restricts calls to a set $\mathcal{A}_{C}$ of addresses if for all execution states $s$, $s'$, all call stacks $\callstack$, $\callstack'$ such that $\cons{\annotate{s}{c}}{\callstack}$ is {reasonable}, for all transaction environments $\transenv$ for all $c' \in \mathbb{B}^{160} \cup \{ \bot \}$ the following condition holds:
\begin{align*}
\ssteps{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s'}{c'}}{\callstack'}} \land \callstack \subset \callstack' \implies \getcontractaddress{c'} \in \mathcal{A}_{C}
\end{align*}
\end{definition}
\subsection{Absence of gasless send}
To ensure that a contract never performs a gasless send, it is sufficient to check whether the contract ever calls another contract such that no local gas is available in the initial state of the callee.
\begin{definition}[Fuelled calls]
A contract $c$ performs only fuelled calls if for all execution states $s$, $s'$, all call stacks $\callstack$, $\callstack'$ such that $\cons{\annotate{s}{c}}{\callstack}$ is {reasonable}, for all transaction environments $\transenv$ and for all contracts $c'$, the following condition holds:
\begin{align*}
\ssteps{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\callstack'} \land
\sstep{\transenv}{\callstack'}{\cons{\annotate{s'}{c'}}{\callstack'}} \land \callstack \subset \callstack' \\
\implies \access{\access{s'}{\mu}}{\textit{gas}} > 0
\end{align*}
\end{definition}
\subsection{Compliance to call stack limit}
In order to rule out that a contract can be called in a way that the call stack limit will be exceeded, we define a corresponding property for call stacks.
\begin{definition}[Compliance to the call stack limit]
A contract $c$ is compliant to the call stack limit if for all execution states $s$, all call stacks $\callstack$, $\callstack'$ such that $\cons{\annotate{s}{c}}{\callstack}$ is {reasonable} and for all transaction environments $\transenv$, the following condition holds:
\begin{align*}
\ssteps{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{\textit{EXC}}{\bot}}{\callstack'}} \land \callstack \subset \cons{\annotate{\textit{EXC}}{\bot}}{\callstack'} \\
\implies \size{\callstack'} < 1024
\end{align*}
\end{definition}
This definition expresses the desired property as it makes use of the only way for the small-step execution to exceed the call stack limit: this case occurs when pushing the next execution state onto the call stack in the semantics of the call rule would exceed the limit, and therefore an exception state is pushed instead.
In this single case the call stack reaches the size of $1024$ for one state and thereby indicates this particular situation.
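The single way of exceeding the limit can be mimicked as follows (a toy model of the call rule; the exact threshold handling is our reading of the definition above):

```python
def push_call(stack, new_state):
    # a regular frame is pushed only while fewer than 1023 frames exist;
    # otherwise an exception state is pushed instead, which is the only
    # point where the stack reaches size 1024
    if len(stack) >= 1023:
        return ["EXC"] + stack
    return [new_state] + stack
```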
Note that this property does not take into consideration that the root of bugs related to exceeding the call stack limit is a wrong exception handling rather than the fact that the contract can be called in a way that makes this exception appear.
It might be more useful to formulate a property (similar to the ones in the next section) that forbids Ether thefts due to a dependence of the contract on the call stack size.
\subsection{Independence of unknown state}
As motivated by the attack shown in Section~\ref{statedependency}, the developer might want to make sure that the effects of a contract execution are unaffected by certain components of the state that are not under the control of the user or that can be influenced by malicious miners.
In particular, it is desirable that the outcome of the contract execution is independent of (parts of) the transaction environment, e.g., the block timestamp, which can be chosen by the miner.
In order to avoid transaction order dependency, it is sufficient to require the outcome of a contract execution not to depend on the global state it is executed in.
First we define independence of (parts of) the transaction environment.
Intuitively, the effects a contract execution has on the flow of money should not be affected by (previously specified) components of the transaction environment.
We assume $\mathcal{C}_{\transenv}$ to be the set of the accessor functions of the transaction environment.
We define the equality up to a component for transaction environments:
\begin{definition}[Equality up to components]
Two transaction environments $\transenv$, $\transenv'$ are equal up to component $c_{\transenv} \in \mathcal{C}_{\transenv}$ (written $\transenv \equalupto{c_{\transenv}} \transenv'$) if
\begin{align*}
\forall c_{\transenv}' \in \mathcal{C}_{\transenv} \setminus \{ c_{\transenv} \}. \; c_{\transenv}'(\transenv) = c_{\transenv}'(\transenv')
\end{align*}
\end{definition}
Additionally, we define equality on the global state for final execution states.
\begin{definition}
Two final execution states $s$ and $s'$ are equal with respect to the global state ($s \equalon{\sigma} s'$) if for some global states $\sigma$, $\sigma'$, some gas values $\textit{gas}$, $\textit{gas}'$ and some output data $\textsf{d}$, $\textsf{d}'$ the following holds:
\begin{align*}
&s = \haltstate{\sigma}{\textit{gas}}{\textsf{d}}
\land s' = \haltstate{\sigma'}{\textit{gas}'}{\textsf{d}'} \\
&\land \forall a \in \mathbb{B}^{160}. \, \sigma \, (a) = \sigma' \, (a)
\end{align*}
\end{definition}
Intuitively, a contract is independent of parts of the transaction environment if two executions of the contract that are invoked in the same way, under transaction environments differing only in the specified components, halt in the same global state.
This ensures in particular that the (specified) components do not influence the amount of Ether transferred between accounts during the contract execution.
At this point one could also opt for different or stronger indistinguishability properties rather than equality on the final global state. Alternative properties could, e.g., require the control flow not to be distinguishable or (as done by \cite{luu2016making}) the Ether flow (as opposed to the overall change in Ether) to be the same. However, as transaction executions interface only via the global state, requiring that the overall effects on the global state cannot be distinguished seems to be a reasonable indistinguishability property.
\begin{definition}[Independence of the transaction environment]
A contract $c$ is independent of a subset $I \subseteq \mathcal{C}_{\transenv}$ of components of the transaction environment if for all $c_{\transenv} \in I$, for all execution states $s$, $s', s''$, all call stacks $\callstack$ such that $\cons{\annotate{s}{c}}{\callstack}$ is {reasonable}, for all transaction environments $\transenv$, $\transenv'$ the following condition holds:
\begin{align*}
& c_{\transenv}(\transenv) \neq c_{\transenv}(\transenv')
\land \transenv \equalupto{c_{\transenv}} \transenv' \\
&\land \ssteps{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s'}{c}}{\callstack}}
\land \ssteps{\transenv'}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s''}{c}}{\callstack}} \\
& \land \finalstate{s'}
\land \finalstate{s''}
\implies s' \equalon{\sigma} s''
\end{align*}
\end{definition}
Next, we define independence of the global state.
Intuitively, we want to express that the global state the contract is called in does not affect the outcome of the contract execution.
\begin{definition}[Independence of global state]
A contract $c$ is independent of the global state if for all transaction environments $\transenv$, for all execution states $s$, $s'$, $s''$, all call stacks $\callstack$, $\callstack'$ and all global states $\sigma$, $\sigma'$ such that $\cons{\annotate{s}{c}}{\callstack}$ is {reasonable} for $\sigma$ and $\cons{\annotate{s}{c}}{\callstack'}$ is {reasonable} for $\sigma'$, the following condition holds:
\begin{align*}
& \sigma \neq \sigma' \land \callstack \equalupto{\sigma} \callstack' \\
&\land \ssteps{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s'}{c}}{\callstack}}
\land \ssteps{\transenv}{\cons{\annotate{s}{c}}{\callstack'}}{\cons{\annotate{s''}{c}}{\callstack'}} \\
& \land \finalstate{s'}
\land \finalstate{s''}
\implies s' \equalon{\sigma} s''
\end{align*}
\end{definition}
In order to define equality of call stacks up to the global state ($\equalupto{\sigma}$), we first define equality of execution states up to the global state.
\begin{definition}[Equality up to global state for execution states]
Two execution states $s$, $s'$ are equal up to global state ($s \equalupto{\sigma} s'$) if either
\begin{itemize}
\item $s = \textit{EXC} = s'$
\item for some global states $\sigma$, $\sigma'$, some gas value $\textit{gas}$ and some output data $\textsf{d}$ it holds that $s = \haltstate{\sigma}{\textit{gas}}{\textsf{d}}$ and $s' = \haltstate{\sigma'}{\textit{gas}}{\textsf{d}}$
\item for some global states $\sigma$, $\sigma'$, some machine state $\mu$ and some execution environment $\iota$ it holds that $s = \regstate{\mu}{\iota}{\sigma}$ and $s' = \regstate{\mu}{\iota}{\sigma'}$
\end{itemize}
\end{definition}
Using this definition, we can extend equality up to global state to (annotated) call stacks.
\begin{definition}[Equality up to global state for call stacks]
Two annotated call stacks $\callstack$, $\callstackb$ are equal up to global state ($\callstack \equalupto{\sigma} \callstackb$) if either
\begin{itemize}
\item $\callstack = \callstackb = \epsilon$
\item $\callstack = \cons{\annotate{s}{c}}{\callstack'}$ and $\callstackb = \cons{\annotate{s'}{c}}{\callstackb'}$ and $s \equalupto{\sigma} s'$ and $\callstack' \equalupto{\sigma} \callstackb'$.
\end{itemize}
\end{definition}
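The two relations can be sketched together, assuming a hypothetical encoding of execution states as tagged tuples (exception, halting, or regular states) and of annotated call stacks as lists of state--contract pairs with the top of the stack first:

```python
# Hypothetical encodings: ("EXC",) is the exception state,
# ("HALT", sigma, gas, data) a halting state, ("REG", mu, iota, sigma)
# a regular state; a call stack is a list of (state, contract) pairs.
def state_eq_upto_sigma(s, t):
    if s == ("EXC",) and t == ("EXC",):
        return True
    if s[0] == t[0] == "HALT":
        return s[2:] == t[2:]                 # same gas and output data
    if s[0] == t[0] == "REG":
        return s[1] == t[1] and s[2] == t[2]  # same mu and iota
    return False

def stack_eq_upto_sigma(S, T):
    return len(S) == len(T) and all(
        c1 == c2 and state_eq_upto_sigma(s, t)
        for (s, c1), (t, c2) in zip(S, T))

halt_a = ("HALT", {"acc": 1}, 42, b"out")
halt_b = ("HALT", {"acc": 2}, 42, b"out")     # differs only in sigma
```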
The requirement that the call stacks at the point of execution are equal up to the global states (which might be influenced by the initial global states of the transaction) ensures that until this point the execution was not affected by the differing global states. This captures our intuition, as we do not want two executions to be considered dependent merely because they are (due to a different global state) invoked in different ways.
Note that in contrast to the previously discussed properties, the independence properties presented in this subsection are not simple reachability properties, but hyperproperties, as they compare two different executions.
\subsection{Simplifying security properties}
In order to model the security properties introduced above in the setting of the abstract semantics, we provide some simple relaxations to the properties.
Instead of arguing about contracts being called from {reasonable} call stacks, we can argue about contracts being executed on an arbitrary call stack. The only influence the call stack potentially has on the execution is that its size might cause an error in the top-level execution due to exceeding the call stack limit.
Formally, we capture this property in the following lemma:
\begin{lemma}[Call stack indifference up to size]
Let $s$ be an execution state, $c$ a contract, $\transenv$ a transaction environment and let $\callstack$, $\callstack'$ and $\callstackb$ be call stacks such that $\callstack \subset \callstack'$ and $\size{\callstack} = \size{\callstackb}$. Then the following property holds:
\begin{align*}
\ssteps{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\callstack'}
\Leftrightarrow \ssteps{\transenv}{\cons{\annotate{s}{c}}{\callstackb}}{\concatstack{(\diffstack{\callstack'}{\callstack})}{\callstackb}}
\end{align*}
\label{call stack indifference}
\end{lemma}
where the difference on call stacks (denoted by $/$) is defined as follows:
\begin{definition}[Difference on call stacks]
\begin{align*}
\diffstack{\callstack}{\callstackb} :=
\begin{cases}
\callstack' & \concatstack{\callstack'}{\callstackb} = \callstack \\
\epsilon & \callstackb \not \subset \callstack
\end{cases}
\end{align*}
\end{definition}
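A minimal sketch of this stack difference, with call stacks encoded as Python lists whose head is the top of the stack (an encoding of our choosing, not part of the formalism):

```python
# Call stacks as Python lists, head = top of the stack.
def diff_stack(big, small):
    """Return D with D + small == big, or [] if small is not a suffix of big."""
    n = len(small)
    if n <= len(big) and big[len(big) - n:] == small:
        return big[:len(big) - n]
    return []

# Lemma (sketch): an execution from stack K reaching K' corresponds, on any
# stack Kb of the same size as K, to reaching diff_stack(K', K) + Kb.
K, Kb = ["s0"], ["t0"]
K_prime = ["s2", "s1", "s0"]
```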
This lemma allows us to consider the contract execution in isolation from the concrete call stack and therefore independently of the execution history. When treating the call stack size explicitly (e.g., as part of the execution environment), it is possible to consider the execution on the empty call stack (but for arbitrary values of the call stack size).
In addition, we can easily adopt a local over-approximation of the execution states. Formally, the security properties refer to {reasonable} call stacks and therefore implicitly impose the condition that the call stack, and hence the execution states, must be the outcome of some valid transaction execution.
As these restrictions permit (almost) every execution state anyway, it is reasonable to adopt a simplification here.
To this end, we extend the definition of {reasonable} to execution states.
\begin{definition}[Reasonability of execution states]
An annotated execution state $\annotate{s}{c}$ is {reasonable} if there is some annotated call stack $\callstack$ such that $\cons{\annotate{s}{c}}{\callstack}$ is {reasonable}.
\end{definition}
Additionally, we formulate a relaxation on reasonable execution states that we call well-formation.
\begin{definition}[Well-formation of annotated execution states]
An annotated execution state $\annotate{s}{c}$ is well-formed if one of the following holds:
\begin{itemize}
\item $s = \textit{EXC}$
\item $s = \haltstate{\sigma}{\textit{gas}}{\textsf{d}}$ and $\access{\sigma \, (\getcontractaddress{c})}{\textsf{code}} = \getcontractcode{c}$
\item $s= \regstate{\mu}{\iota}{\sigma}$, $\access{\sigma \, (\getcontractaddress{c})}{\textsf{code}} = \getcontractcode{c}$ and \\ $\access{\iota}{\textsf{code}} = \getcontractcode{c}$
\end{itemize}
\end{definition}
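Well-formation is a purely syntactic check and can be sketched as follows, assuming a hypothetical encoding of contracts as records with an address and a code field, and of execution states as tagged tuples:

```python
# Hypothetical encodings: a contract is a dict with "address" and "code";
# execution states are tagged tuples as before.
def well_formed(s, c):
    if s == ("EXC",):
        return True
    if s[0] == "HALT":                        # ("HALT", sigma, gas, data)
        sigma = s[1]
        return sigma[c["address"]]["code"] == c["code"]
    if s[0] == "REG":                         # ("REG", mu, iota, sigma)
        mu, iota, sigma = s[1:]
        return (sigma[c["address"]]["code"] == c["code"]
                and iota["code"] == c["code"])
    return False

c = {"address": "0xbb", "code": "CODE"}
sigma = {"0xbb": {"code": "CODE"}}
```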
\begin{lemma}[Well-formation of reasonable execution states]
If an annotated execution state $\annotate{s}{c}$ is {reasonable}, then it is also well-formed.
\label{well-formation of reasonable execution states}
\end{lemma}
In total, Lemma \ref{call stack indifference} and Lemma \ref{well-formation of reasonable execution states} allow us to reason about arbitrary well-formed execution states executing the code of the contract under analysis on the empty call stack (given an explicit modelling of the call stack size), instead of arguing about all possible ways in which the contract could have been called.
If some state is reachable when calling contract $c$ from a reasonable call stack, then this state is also reachable from some well-formed execution state executed on an arbitrary call stack of the correct size.
Consequently, when checking for non-reachability (as will be done later in the static analysis), it is sufficient to check for this property in the relaxed setting.
\subsection{Call Integrity}
\myparagraph{Dependency on Attacker Code}
One of the most famous bugs in Ethereum's history is the so-called DAO bug, which led to a loss of 60 million dollars in June 2016 \cite{thedao}. In the literature, this bug is classified as a reentrancy bug \cite{survey,oyente}, as the affected contract was drained of money by repeatedly reentering it and performing transactions to the attacker on behalf of the contract. More generally, the problem with this contract was that malicious code was able to affect the outgoing money flows of the contract.
The cause of such bugs mostly lies in the developer's misunderstanding of the semantics of Solidity's call primitives.
In general, calling a contract can invoke two kinds of actions: transferring Ether to the contract's account, or executing (parts of) a contract's code.
In particular, the \lstinline|call| construct invokes the called contract's fallback function when no particular function of the contract is specified (Section~\ref{solidity}).
Consequently, the developer may expect an atomic value transfer where potentially another contract's code is executed. To illustrate how this kind of bug can be exploited, we consider the following contracts:
\begin{minipage}[t]{0.5 \textwidth}
\lstinputlisting{bob.sol}
\end{minipage}
\begin{minipage}[t]{0.5 \textwidth}
\lstinputlisting{mallory.sol}
\end{minipage}
The function \lstinline|ping| of contract \lstinline|Bob| sends an amount of $2$ {\textit{wei}} to the address specified in the argument. However, this should only be possible once, which is intended to be ensured by the \lstinline|sent| variable that is set after the successful money transfer.
Instead, it turns out that invoking the \lstinline|call.value| function on a contract's address invokes the contract's fallback function as well.
Given a second contract \lstinline|Mallory|, it is possible to transfer more money than the intended $2$ {\textit{wei}} to the account of \lstinline|Mallory|.
By invoking \lstinline|Bob|'s function \lstinline|ping| with the address of \lstinline|Mallory|'s account, $2$ {\textit{wei}} are transferred to \lstinline|Mallory|'s account and additionally the fallback function of \lstinline|Mallory| is invoked.
As the fallback function again calls the \lstinline|ping| function with \lstinline|Mallory|'s address, another $2$ {\textit{wei}} are transferred before the variable \lstinline|sent| of contract \lstinline|Bob| is set.
This looping goes on until all gas of the initial call is consumed or the call stack limit is reached. In this case, only the last transfer of {\textit{wei}} is reverted and the effects of all former calls stay in place. Consequently, the intended restriction on contract \lstinline|Bob|'s \lstinline|ping| function (namely to transfer $2$ {\textit{wei}} only once) is circumvented.
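The looping described above can be replayed in a toy model (plain Python objects, no gas accounting; the recursion depth bound stands in for gas or call stack exhaustion) to make the money flow explicit:

```python
# Toy replay of the Bob/Mallory loop (no gas accounting; the depth bound
# stands in for running out of gas or hitting the call stack limit).
class Bob:
    def __init__(self):
        self.balance = 10
        self.sent = False
    def ping(self, callee):
        if not self.sent:
            self.balance -= 2          # the call.value(2)() transfer ...
            callee.fallback(self)      # ... hands control to the callee
            self.sent = True           # guard is set only afterwards

class Mallory:
    def __init__(self):
        self.balance = 0
        self.depth = 0
    def fallback(self, bob):
        self.balance += 2
        self.depth += 1
        if self.depth < 3:             # loop until "resources" run out
            bob.ping(self)

bob, mallory = Bob(), Mallory()
bob.ping(mallory)                      # mallory ends up with 6 wei, not 2
```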
\myparagraph{Call Integrity}
In order to protect from this class of bugs, it is crucial to secure the code against being reentered before regaining control over the control flow. From a security perspective, the fundamental problem is that the contract behaviour depends on untrusted code, even though this was not intended by the developer. We capture this intuition through a hyperproperty, which we name \emph{call integrity}. The idea is that no matter how the attacker can schedule $c$ (call stacks $\callstack$ and $\callstack'$ in the definition), the calls of $c$ (traces $\pi$, $\pi'$) cannot be controlled by the attacker, even if $c$ hands over the control to the attacker.
\begin{definition}[Call Integrity]
A contract $c \in \mathcal{C}$ satisfies call integrity for a set of addresses $\mathcal{A}_C \subseteq \mathcal{A}$ if for all reachable configurations $(\transenv,\cons{\annotate{s}{c}}{\callstack}),(\transenv,\cons{\annotate{s'}{c}}{\callstack'})$ with $s, s'$ differing only in the code with address in $\mathcal{A}_C$, it holds that for all $t,t'$
\begin{align*}
\sstepstrace{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{t}{c}}{\callstack}}{\pi} ~\land~ \finalstate{\annotate{t}{c}}
~\land ~
\sstepstrace{\transenv}{\cons{\annotate{s'}{c}}{\callstack'}}{\cons{\annotate{t'}{c}}{\callstack'}}{\pi'} ~\land~ \finalstate{\annotate{t'}{c}} \\
\implies \project{\pi}{\filtercallscreates{c}} = \project{\pi'}{\filtercallscreates{c}}
\end{align*}
\end{definition}
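The conclusion of the definition compares the projections of two traces onto the calls of $c$. This projection-and-compare step can be sketched as follows, with traces hypothetically encoded as lists of event tuples whose second component is the acting contract:

```python
# Traces as lists of event tuples; an event like ("CALL", caller, callee,
# value) belongs to the projection iff its caller is the contract c.
def calls_of(trace, c):
    return [e for e in trace if e[0] in ("CALL", "CREATE") and e[1] == c]

trace_a = [("CALL", "bob", "mallory", 2), ("SSTORE", "bob"),
           ("CALL", "eve", "x", 1)]
trace_b = [("CALL", "bob", "mallory", 2), ("CALL", "eve", "y", 9)]
```

Call integrity for contract \lstinline|bob| then amounts to requiring that the two projected call lists coincide for every pair of runs.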
\subsection{Proof Technique for Call Integrity}
We now establish a proof technique for call integrity, based on local properties that are arguably easier to verify and that we show to imply call integrity.
As a first observation, we identify the different ways in which external contracts can influence the execution of a smart contract $c$ and introduce corresponding security properties:
\begin{description}
\item[Code Dependency] The contract $c$ might access (information on) the untrusted contract's code via the $\textsf{EXTCODECOPY}$ or $\textsf{EXTCODESIZE}$ instructions and make its behaviour depend on those values;
\item[Effect Dependency] The contract $c$ might call the untrusted contract and might depend on its execution effects and return value;
\item[Re-entrancy] The contract $c$ might call the untrusted contract, with the latter influencing the behaviour of the former by performing changes to the global state itself or ``on behalf'' of $c$ by reentering it and thereby potentially decreasing the balance of $c$.
\end{description}
The first two of these properties can be seen as value dependencies and therefore can be formalized as hyperproperties. The first property says that the calls performed by a contract should not be affected by the effects on the execution state produced by adversarial contracts. Technically, we consider a contract $c$ calling an adversarial contract $c'$ (captured as $\sstep{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s''}{c'}}{\cons{\annotate{s}{c}}{\callstack}}}$ in the premise), which we let terminate in two arbitrary states $s',t'$: we require that $c$'s continuation code performs the same calls in both states.
\begin{definition}[$\mathcal{A}_C$-effect Independence]
A contract $c \in \mathcal{C}$ is $\mathcal{A}_C$-effect independent for a set of addresses $\mathcal{A}_C \subseteq \mathcal{A}$ if for all reachable configurations $(\transenv, \cons{\annotate{s}{c}}{\callstack})$ such that $\sstep{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s''}{c'}}{\cons{\annotate{s}{c}}{\callstack}}}$ for some $s''$ and $\getcontractaddress{c'} \in \mathcal{A}_C$, it holds that for all final states $s', t'$ whose global state might differ in all components but the code from the global state of $s$,
\begin{align*}
\sstepstrace{\transenv_{\textit{init}}}{\cons{\annotate{s'}{c'}}{\cons{\annotate{s}{c}}{\callstack}}}{\cons{\annotate{s''}{c}}{\callstack}}{\pi}
~\land~ \finalstate{s''} \\
~\land~ \sstepstrace{\transenv_{\textit{init}}}{\cons{\annotate{t'}{c'}}{\cons{\annotate{s}{c}}{\callstack}}}{\cons{\annotate{t''}{c}}{\callstack}}{\pi'}
\land \finalstate{t''} \\
\implies \project{\pi}{\filtercallscreates{c}} = \project{\pi'}{\filtercallscreates{c}}
\end{align*}
\end{definition}
The second property says that the calls of a contract should not be affected by the code read from the blockchain (e.g., the code does not branch on code read from the blockchain).
To this end, we introduce the notation $\sstepslocalupdatetrace{\transenv}{\cons{s}{\callstack}}{\cons{s'}{\callstack}}{f}{\pi}$ to denote that the local small-step execution of state $s$ on stack $\callstack$ under $\transenv$ results after several steps in state $s'$, producing trace $\pi$, where in the local execution steps of $\textsf{EXTCODECOPY}$ and $\textsf{EXTCODESIZE}$ (the operations used to access the code on the global state) the returned code is determined by the partial function $f \in \mathcal{A} \pto \arrayof{\mathbb{B}}$ as opposed to the global state. In other words, we consider in the premise a contract $c$ reading two different codes from the blockchain and terminating in both runs (captured as $\sstepslocalupdatetrace{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s'}{c}}{\callstack}}{f}{\pi}$ and $\sstepslocalupdatetrace{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s''}{c}}{\callstack}}{f'}{\pi'}$), and we require that $c$ performs the same calls in both runs.
\begin{definition}[$\mathcal{A}_C$-code Independence]
A contract $c \in \mathcal{C}$ is $\mathcal{A}_C$-code independent for a set of addresses $\mathcal{A}_C \subseteq \mathcal{A}$
if for all reachable configurations $(\transenv, \cons{\annotate{s}{c}}{\callstack})$
it holds for all local code updates $f, f' \in \mathcal{A} \pto \arrayof{\mathbb{B}}$ on $\mathcal{A}_C$ that
\begin{align*}
\sstepslocalupdatetrace{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s'}{c}}{\callstack}}{f}{\pi}
~\land~ \finalstate{s'}
~\land~ \sstepslocalupdatetrace{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s''}{c}}{\callstack}}{f'}{\pi'}
~\land~ \finalstate{s''} \\
\implies \project{\pi}{\filtercallscreates{c}} = \project{\pi'}{\filtercallscreates{c}}
\end{align*}
\end{definition}
Both these independence properties can be overapproximated by static analysis techniques based on program dependence graphs~\cite{Hammer:2009:FCO}, as done by Joana to verify non-interference in Java~\cite{joana14it}. The idea is to traverse the dependence graph in order to detect dependencies between the sensitive sources, in our case the data controlled by the adversary and returned to the contract, and the observable sinks, in our case the local contract calls.
The last property constitutes a safety property. Specifically, single-entrancy states that when reentering the contract $c$, no further call can be performed before returning (i.e., after reentrancy, which we capture in the call stack as two distinct states with the same running contract $c$, the call stack cannot further increase).
\begin{definition}[Single-entrancy]
A contract $c \in \mathcal{C}$ is single-entrant if for all reachable configurations $(\transenv,\cons{\annotate{s}{c}}{\callstack})$, it holds for all $s'$, $\callstack'$ that
\begin{align*}
\ssteps{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\concatstack{\cons{\annotate{s'}{c}}{\callstack'}}{\cons{\annotate{s}{c}}{\callstack}}}
\\
\implies
\neg \exists s'' \in \mathcal{S}, c' \in \contracts_\bot . \,
\ssteps{\transenv}{\concatstack{\cons{\annotate{s'}{c}}{\callstack'}}{\cons{\annotate{s}{c}}{\callstack}}}{\cons{\annotate{s''}{c'}}{\concatstack{\cons{\annotate{s'}{c}}{\callstack'}}{\cons{\annotate{s}{c}}{\callstack}}}}
\end{align*}
\end{definition}
This safety property can be easily overapproximated by syntactic conditions, as for instance done in the Oyente analyzer~\cite{oyente}.
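For instance, one crude syntactic pattern in this spirit (our own simplification, inspired by but much simpler than Oyente's actual analysis) flags straight-line opcode sequences in which a storage write -- such as the guard variable \lstinline|sent| of contract \lstinline|Bob| -- happens only after an external call:

```python
# Crude syntactic pattern check (our simplification, not Oyente's
# algorithm): flag opcode sequences where a storage write can follow
# an external call, as in the vulnerable Bob contract.
def guard_written_after_call(opcodes):
    seen_call = False
    for op in opcodes:
        if op in ("CALL", "CALLCODE", "DELEGATECALL"):
            seen_call = True
        elif op == "SSTORE" and seen_call:
            return True
    return False

safe_order = ["SSTORE", "PUSH1", "CALL", "RETURN"]    # guard set first
bob_like = ["PUSH1", "CALL", "SSTORE", "RETURN"]      # guard set after call
```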
Finally, the next theorem proves the soundness of our proof technique, i.e., the two independence properties and the single-entrancy property together entail call integrity.
\begin{theorem}
Let $c \in \mathcal{C}$ be a contract and $\mathcal{A}_C \subseteq \mathcal{A}$ be a set of untrusted addresses.
If $c$ is $\mathcal{A}_C$-code independent, $c$ is $\mathcal{A}_C$-effect independent, and $c$ is single-entrant, then $c$ provides call integrity for $\mathcal{A}_C$.
\end{theorem}
\noindent
\textit{Proof Sketch.}
Let $(\transenv,\cons{\annotate{s}{c}}{\callstack}),(\transenv,\cons{\annotate{s'}{c}}{\callstack'})$ be reachable configurations such that $s,s'$ differ only in the code with address in $\mathcal{A}_C$.
We now compare the two small-step runs of those configurations.
Due to $\mathcal{A}_C$-code independence, the execution until the first call to an address $a \in \mathcal{A}_C$ produces the same partial trace in both runs. Indeed, we can express the runs under different address mappings through the code update from the $\mathcal{A}_C$-code independence property, as long as no call to one of the updated addresses is performed. When a first call to $a \in \mathcal{A}_C$ is performed, we know due to single-entrancy that this call cannot produce any partial execution trace in either of the runs, as this would imply that contract $c$ is reentered and a call out of the contract is performed.
Due to $\mathcal{A}_C$-code independence and $\mathcal{A}_C$-effect independence, the traces after returning must coincide until the next call to an address in $\mathcal{A}_C$. This argument can be applied iteratively until reaching the final state of the execution of $c$.
\subsection{Atomicity}
\subsubsection{Exception Handling}
As discussed in Section~\ref{solidity}, the way exceptions are propagated varies with the way contracts are called. In particular, in the case of \lstinline|call| and \lstinline|send|, exceptions are not propagated, but a manual check for the successful completion of the called function's execution is required. This behavior reflects the way exceptions are reported during bytecode execution: instead of propagating up through the call stack, the callee reports the exception to the caller by writing zero to the stack. In the context of Ethereum, the issue of exception handling is particularly delicate, as due to the gas restriction it might always happen that a call fails simply because it ran out of gas. Intuitively, a user would expect a contract not to depend on the concrete gas value given to it, with the exception that a contract might always fail completely (and consequently not perform any changes on the global state). Such a behavior would prevent contracts from entering an inconsistent state like the one exhibited by the following excerpt of a simple banking contract:
\lstinputlisting{exc.sol}
The contract keeps a record of the user balances and provides a function that allows a user to withdraw their own balance -- which results in an update of the record. A developer might not expect the \lstinline|send| to fail, but as it is represented on the bytecode level by a $\textsf{CALL}$ instruction, in addition to the Ether transfer code might be executed that runs out of gas. As a consequence, the contract would end up in a state where the money was not transferred (as all effects of the call are reverted in case of an exception), but the internal balance record of the contract was still updated, so that the money cannot be withdrawn by the owner anymore.
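The inconsistent state can be replayed in a toy model (plain Python, illustrative only): the internal record is zeroed even though the unchecked transfer fails:

```python
# Toy replay (plain Python, illustrative): the record update survives
# even though the unchecked transfer fails.
class SimpleBank:
    def __init__(self):
        self.balances = {"alice": 5}
    def withdraw(self, who, send):
        amount = self.balances[who]
        self.balances[who] = 0
        send(who, amount)              # result of `send` is never checked

def failing_send(who, amount):
    return False                       # e.g. the callee ran out of gas

bank = SimpleBank()
bank.withdraw("alice", failing_send)   # record zeroed, no Ether moved
```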
Inspired by such situations where an inconsistent state is entered by a contract due to mishandled gas exceptions, we introduce the notion of \emph{atomicity} of a contract. Intuitively, atomicity requires that the effects of the execution on the global state do not depend on the amount of gas available -- except when an exception is triggered, in which case the overall execution should have no effect at all. The last condition is captured by requiring that the final global state is the same as the initial one for at least one of the two executions (intuitively, the one causing the exception).
\begin{definition}
A contract $c \in \mathcal{C}$ satisfies atomicity
if for all reachable configurations $(\transenv, \callstack')$ such that
$\sstep{\transenv}{\callstack'}{\cons{\annotate{s}{c}}{\callstack}}$, it holds for all gas values $g , g' \in \integer{256}$ that
\begin{align*}
\ssteps{\transenv}{\cons{\update{\annotate{s}{c}}{\access{\mu}{\textsf{gas}}}{g}}{\callstack}}{\cons{\annotate{s'}{c}}{\callstack}}
~\land~ \finalstate{s'} \\
~\land~ \ssteps{\transenv}{\cons{\update{\annotate{s}{c}}{\access{\mu}{\textsf{gas}}}{g'}}{\callstack}}{\cons{\annotate{s''}{c}}{\callstack}}
~\land~ \finalstate{s''} \\
\implies \access{s'}{\sigma} = \access{s''}{\sigma} \lor \access{s}{\sigma} = \access{s'}{\sigma} \lor \access{s}{\sigma} = \access{s''}{\sigma}
\end{align*}
\end{definition}
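The condition in the definition can be sketched as a check over two runs of the same code with different gas limits (global states modelled as plain dictionaries; the run functions below are hypothetical stand-ins for contract executions):

```python
# Global states as plain dicts; run(gas) stands in for executing the
# contract with that gas value and returning the final global state.
def atomic(initial_sigma, run, gas_a=21_000, gas_b=1_000_000):
    s1, s2 = run(gas_a), run(gas_b)
    return s1 == s2 or s1 == initial_sigma or s2 == initial_sigma

def all_or_nothing(gas):               # reverts completely on low gas
    return {"bal": 0} if gas < 100_000 else {"bal": 42}

def leaky(gas):                        # leaves a partial update behind
    return {"bal": 7} if gas < 100_000 else {"bal": 42}
```

Starting from the global state \lstinline|{"bal": 0}|, the first run function satisfies the atomicity condition while the second violates it.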
\subsection{Independence of Miner controlled Parameters}
\label{statedependency}
Another particularity of the distributed blockchain environment is that users performing transactions cannot make assumptions on large parts of the context in which their transaction will be executed.
Part of this is due to the asynchronous nature of the system: it can always happen that another transaction altering the context is performed first. Actually, the situation is even more delicate, as transactions are not processed in a first-come-first-served manner; instead, miners have a big influence on the execution context of transactions. They can decide upon the order of the transactions in a block (and also sneak in their own transactions first) and in addition they can even control some parameters, such as the block timestamp, within a certain range.
Consequently, contracts whose (outgoing) money flows depend either on miner controlled block information or on state information (as the state of their storage or their balance) that might be changed by other transactions are prone to manipulations by miners. A typical example adduced in the literature is the use of block timestamps as source of randomness \cite{survey,oyente}. In a classical lottery implementation that randomly pays out to one of the participants and uses the block timestamp as source of randomness, a malicious miner can easily influence the result in his favor by selecting a beneficial timestamp.
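The lottery manipulation can be sketched in a few lines (illustrative only; the timestamp range and the modulo scheme are our own invention):

```python
# Winner derived from the block timestamp: a miner can search the range
# of timestamps it may legally pick for one that favours itself.
def winner(participants, timestamp):
    return participants[timestamp % len(participants)]

players = ["alice", "bob", "miner"]
base = 1_700_000_000
chosen = next(t for t in range(base, base + 900)
              if winner(players, t) == "miner")
```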
We capture the absence of the miner's influence by two definitions: one saying that the outgoing Ether flows of a contract should not be influenced by components of the transaction environment that can be set by miners (within a certain range), and the other saying that the Ether flows should not depend on those parts of the contract state that might have been influenced by previously executed transactions. The first definition rules out what is often described in the literature as timestamp dependency \cite{survey,oyente}.
First, we define \emph{independence of} (parts of) \emph{the transaction environment}.
To this end, we assume $\mathcal{C}_{\transenv}$ to be the set of components of the transaction environment and write $\transenv \equalupto{c_{\transenv}} \transenv'$ to denote that the transaction environments $\transenv, \transenv'$ are equal up to component $c_{\transenv}$.
\begin{definition}[Independence of the Transaction Environment]
A contract $c \in \mathcal{C}$ is independent of a subset $I \subseteq \mathcal{C}_{\transenv}$ of components of the transaction environment if for all $c_{\transenv} \in I$ and all reachable configurations $(\transenv, \cons{\annotate{s}{c}}{\callstack})$ it holds for all $\transenv'$ that
\begin{align*}
& c_{\transenv}(\transenv) \neq c_{\transenv}(\transenv')
\land \transenv \equalupto{c_{\transenv}} \transenv' \\
&\land \sstepstrace{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s'}{c}}{\callstack}}{\pi}
~\land~\finalstate{s'}
~\land~ \sstepstrace{\transenv'}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{s''}{c}}{\callstack}}{\pi'} ~\land~ \finalstate{s''} \\
&\implies \project{\pi}{\filtercallscreates{c}} = \project{\pi'}{\filtercallscreates{c}}
\end{align*}
\end{definition}
Next, we define the notion of \emph{independence of the account state}. Formally, we capture this property by requiring that the outgoing Ether flows of the contract under consideration should not be affected by those parameters of the contract that might have been changed by previous executions, namely the balance, the account's nonce, and the account's persistent storage.
\begin{definition}[Independence of Mutable Account State]
A contract $c \in \mathcal{C}$ is independent of the account state if for all reachable configurations $(\transenv,\cons{\annotate{s}{c}}{\callstack}),(\transenv,\cons{\annotate{s'}{c}}{\callstack'})$ with $s, s'$ differing only in the nonce, balance and storage for $\getcontractaddress{c}$, it holds that for all $t, t'$
\begin{align*}
\sstepstrace{\transenv}{\cons{\annotate{s}{c}}{\callstack}}{\cons{\annotate{t}{c}}{\callstack}}{\pi} ~\land~ \finalstate{\annotate{t}{c}}
~\land ~
\sstepstrace{\transenv}{\cons{\annotate{s'}{c}}{\callstack'}}{\cons{\annotate{t'}{c}}{\callstack'}}{\pi'} ~\land~ \finalstate{\annotate{t'}{c}} \\
\implies \project{\pi}{\filtercallscreates{c}} = \project{\pi'}{\filtercallscreates{c}}
\end{align*}
\end{definition}
As for the other independence properties, both of these properties can be statically verified using program dependence graphs.
\subsection{Classification of Bugs}
The previously presented security definitions are motivated by the bugs that were observed in real Ethereum smart contracts and studied in~\cite{oyente} and~\cite{survey}.
Table~\ref{tab:properties} gives an overview of the bugs from the literature that are ruled out by our security properties.
\begin{table}[b]
\centering
\caption{Bugs from~\cite{oyente} and~\cite{survey} ruled out by the security properties}
\label{tab:properties}
\begin{tabular}{cp{5.75cm}}
\multicolumn{1}{p{1.4cm}}{\parbox{4cm}{\centering\textbf{Security Property}}} & \multicolumn{1}{c}{\textbf{Bug}} \\[7pt]
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{\parbox{4cm}{\centering Call Integrity}}}
& Reentrancy~\cite{survey,oyente} \\
& Call to the Unknown~\cite{survey} \\
\midrule
\multicolumn{1}{c}{\multirow{1}{*}{\parbox{4cm}{\centering Atomicity}}}
& Mishandled Exceptions~\cite{survey,oyente}\\
\midrule
\multicolumn{1}{c}{\multirow{2}{*}{\parbox{4cm}{\centering Independence of Mutable Account State}}}
& Transaction Order Dependency~\cite{oyente} \\ & Unpredictable State~\cite{survey} \\
\midrule
\multicolumn{1}{c}{\multirow{3}{*}{\parbox{4cm}{\centering Independence of Transaction Environment}}}
& Timestamp Dependency~\cite{oyente} \\
& Time Constraints~\cite{survey}\\
& Generating Randomness~\cite{survey} \\
\bottomrule
\end{tabular}
\end{table}
Our security properties do not cover all bugs described by Atzei et al.~\cite{survey}, as some of the bugs do not constitute violations of general security properties, i.e., properties that are not specific to the particular contract implementation. There are two classes of bugs that we do not consider:
The first class deals with the occurrence of unexpected exceptions (such as the Gasless Send and the Call Stack Limit bugs) and the second class encompasses bugs caused by the Solidity semantics deviating from the programmer's intuition (such as the Keeping Secrets, Type Cast and Exception Disorders bugs).
The first class of bugs encompasses runtime exceptions that are hard to predict for the developer and that are consequently not handled correctly. Of course, it would be possible to formalize the absence of those particular kinds of exceptions as simple reachability properties using the small-step semantics. Still, such properties would not give any insight about the security of a contract: the fact that a particular exception occurs can be unproblematic in the case that proper exception handling is in place. In general, the notion of a correct exception handling highly depends on the specific contract's intended behavior. For the special case of out-of-gas exceptions, we could introduce the notion of atomicity in order to capture a generic goal of proper exception handling. But such a notion is not necessarily sufficient for characterizing reasonable ways of dealing with other kinds of runtime exceptions.
The second class of bugs are introduced on the Solidity level and are similarly hard to account for by using generic security properties. Even though these bugs might all originate from similar idiosyncrasies of the Solidity semantics, the impact of the bugs on the contract's semantics might deviate a lot. This might result in violations of the security properties discussed before, but also in violating the contract's functional correctness. Consequently, catching those bugs might require the introduction of contract-specific correctness properties.
Finally, Atzei et al.~\cite{survey} discuss the Ether Lost in Transfer bug. This bug is introduced by sending Ether to addresses that do not belong to any contract or user, so-called orphan addresses. We could easily formalize a reachability property stating that no valid contract execution should ever send Ether to such an address. We omit such a definition here as it is quite straightforward and at the same time it is not a property that directly affects the security of an individual contract: sending Ether to such an orphan address might have negative impacts on the overall system as money is effectively lost. For the specific contract sending this money, this bug can be seen as a corner case of sending Ether to an unintended address, which rather constitutes a correctness violation.
\subsection{Preliminaries}
In the following, we will use $\mathbb{B}$ to denote the set $\{0,1\}$ of bits and accordingly $\mathbb{B}^{x}$ for sets of bitstrings of size $x$. We further let $\integer{x}$ denote the set of non-negative integers representable by $x$ bits and allow for implicit conversion between those two representations. In addition, we will use the notation $\arrayof{X}$ (resp. $\stackof{X}$) for arrays (resp. lists) of elements from the set $X$. We use standard notations for operations on arrays and lists.
\subsection{Global state}
As mentioned before, the global state is a (partial) mapping from account addresses (that are bitstrings of size 160) to accounts. In the case that an account does not exist, we assume it to map to $\bot$.
Accounts, irrespective of their type, are tuples of the form $\accountstate{n}{b}{\textit{stor}}{\textit{code}}$, with $n \in \integer{256}$ being the account's nonce that is incremented with every transaction and every account creation initiated by the account, $b \in \integer{256}$ being the account's balance in $\textit{wei}$, $\textit{stor} \in \BB^{256} \to \BB^{256}$ being the account's persistent storage, represented as a mapping from 256-bit words to 256-bit words, and finally $\textit{code} \in \arrayof{\BB^8}$ being the account's code, an array of bytes.
In contrast to contract accounts, external accounts have the empty bytearray as code. As only the execution of code in the context of the account can access and modify the account's storage, the fact that formally external accounts have persistent storage does not have any effect.
In the following, we will denote the set of addresses with $\mathcal{A}$ and the set of global states with $\Sigma$ and we will assume that $\sigma \in \Sigma$.
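To make the account and global-state structure concrete, the following is a minimal Python sketch (not part of the formalization; all names such as \lstinline|Account| and \lstinline|get_account| are our own choices). A missing account plays the role of $\bot$ and is modelled as \lstinline|None|, and external accounts are exactly those with empty code.

```python
# Illustrative sketch: the global state as a partial mapping from addresses to
# account states; None models the undefined account (the paper's bottom).
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Account:
    nonce: int = 0                                          # n
    balance: int = 0                                        # b, in wei
    storage: Dict[int, int] = field(default_factory=dict)   # stor: word -> word
    code: bytes = b""                                       # empty for external accounts

GlobalState = Dict[int, Account]                            # partial map: address -> account

def get_account(sigma: GlobalState, addr: int) -> Optional[Account]:
    """Look up an account; None models the undefined account."""
    return sigma.get(addr)

def is_external(acc: Account) -> bool:
    """External accounts are exactly those with the empty bytearray as code."""
    return acc.code == b""
```

Using a sparse dictionary for storage reflects that unset storage cells default to zero.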
\subsection{Small-Step Relation}
In order to define the small-step semantics, we give a small-step relation $\sstep{\transenv}{\exconf{\callstack}{\eta}}{\exconf{\callstack'}{\eta'}}$ that specifies how a call stack $\callstack \in \mathbb{S}$ representing the state of the execution evolves within one step under the transaction environment $\transenv \in \mathcal{T}_{\textit{env}}$.
In Figure~\ref{fig:grammar} we give a full grammar for call stacks and transaction environments:
\begin{figure*}[h]
\begin{mathpar}
\begin{array}{rlclll}
\text{Call stacks} & \mathbb{S} & \ni & \callstack & := & \cons{\textit{EXC}}{S_{\textit{plain}}} ~|~ \cons{\haltstatefull{\sigma}{g}{d}{\eta}}{S_{\textit{plain}}} ~|~ S_{\textit{plain}} \\
\text{Plain call stacks} &
\mathbb{S}_{\textit{plain}} & \ni & S_{\textit{plain}} & := & \cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{S_{\textit{plain}}} ~|~ \epsilon \\
\text{Machine states} & M & \ni & \mu & := & \smstate{\textit{gas}}{\textit{pc}}{m}{i}{s} \\
\text{Execution environments} & I & \ni & \iota & := & \sexenv{\textit{actor}}{\textit{input}}{\textit{sender}}{\textit{value}}{\textit{code}}\\
\text{Global states} & \Sigma & \ni &\sigma & & \\
\text{Account states} & \mathbb{A} & \ni & \textit{acc} & := & \accountstate{n}{b}{\textit{stor}}{\textit{code}} ~|~ \bot \\
\text{Transaction effects} & N & \ni &\eta & := & (b, L, \textit{S}_\dagger) \\
\text{Transaction environments} & \mathcal{T}_{\textit{env}} & \ni & \transenv & := & (o, \textit{price}, H) \\
\\
\text{Notations:}
&\multicolumn{5}{c}{
d \in \arrayof{\BB^8}, \quad g \in \integer{256}, \quad \eta \in N, \quad
o \in \mathcal{A}, \quad \textit{price} \in \integer{256}, \quad H \in \mathcal{H}} \\
&\multicolumn{5}{c}{
\textit{gas} \in \integer{256}, \quad \textit{pc} \in \integer{256}, \quad m \in \mathbb{B}^{256} \to \mathbb{B}^8, \quad i \in \integer{256}, \quad s \in \stackof{\mathbb{B}^{256}}} \\
&\multicolumn{5}{c}{
\textit{actor} \in \mathcal{A}, \quad \textit{input} \in \arrayof{\BB^8}, \quad \textit{sender} \in \mathcal{A}, \quad \textit{value} \in \integer{256}, \quad \textit{code} \in \arrayof{\BB^8}} \\
&\multicolumn{5}{c}{
b \in \integer{256} \quad L \in \sequenceof{\textit{Ev}_{\textit{log}}} \quad \textit{S}_\dagger \subseteq \mathcal{A} \quad
\Sigma = \mathcal{A} \to \mathbb{A}
}
\end{array}
\end{mathpar}
\caption{Grammar for call stacks and transaction environments}
\label{fig:grammar}
\end{figure*}
\subsubsection{Transaction Environments}
The transaction environment represents the static information of the block that the transaction is executed in and the immutable parameters given to the transaction, such as the gas price or the gas limit.
More specifically, the transaction environment $\transenv \in \mathcal{T}_{\textit{env}} = \mathcal{A} \times \integer{256} \times \mathcal{H}$ is a tuple of the form $(o, \textit{gasPrice}, H)$ with $o \in \mathcal{A}$ being the address of the account that made the transaction, $\textit{gasPrice} \in \integer{256}$ denoting the amount of wei that needs to be paid per unit of gas in this transaction and $H \in \mathcal{H}$ being the header of the block that the transaction is part of. We do not specify the format of block headers here, but just assume a set $\mathcal{H}$ of block headers.
\subsubsection{Call Stacks}
A call stack $\callstack$ is a stack of execution states which represents the state of the execution within one internal transaction. We give a formal definition of the set of possible callstacks $\mathbb{S}$ as follows:
\begin{align*}
\mathbb{S} := \{& \cons{\textit{EXC}}{S_{\textit{plain}}}, ~\cons{\haltstatefull{\sigma}{\textit{gas}}{d}{\eta}}{S_{\textit{plain}}}, ~S_{\textit{plain}} \\ ~|~
& \sigma \in \Sigma, ~\textit{gas} \in \integer{256}, ~d \in \arrayof{\BB^8}, ~\eta \in N, ~S_{\textit{plain}} \in \stackof{M \times I \times \Sigma \times N} \}
\end{align*}
Syntactically, a call stack is a stack of regular execution states of the form $\regstatefull{\mu}{\iota}{\sigma}{\eta}$ that can optionally be topped with a halting state $\haltstatefull{\sigma}{\textit{gas}}{d}{\eta}$ or an exception state $\textit{EXC}$. We summarize these three types of states as execution states $\mathcal{S}$.
Semantically, halting states indicate regular halting of an internal transaction, exception states indicate exceptional halting, and regular execution states describe the state of internal transactions in progress.
Halting and exception states can only occur as top elements of the call stack as they represent terminated internal transactions. Exception states of the form $\textit{EXC}$ do not carry any information as in the case of an exception all effects of the terminated internal transaction are reverted and the caller state therefore stays unaffected, except for the gas.
Halting states instead are of the form $\haltstatefull{\sigma}{\textit{gas}}{d}{\eta}$ specifying the global state $\sigma$ the execution halted in, the gas $\textit{gas} \in \integer{256}$ remaining from the execution, the return data $d \in \arrayof{\BB^8}$ and the additional transaction effects $\eta \in N$ of the internal transaction. The additional transaction effects carry information that is accumulated during execution, but does not influence the small-step execution itself. Formally, the additional transaction effects are a triple of the form $(b, L, \textit{S}_\dagger) \in N = \integer{256} \times \sequenceof{\textit{Ev}_{\textit{log}}} \times \setof{\mathcal{A}}$ with $b \in \integer{256}$ being the refund balance that is increased by account storage operations and will finally be paid to the transaction's beneficiary, $L \in \sequenceof{\textit{Ev}_{\textit{log}}}$ being the sequence of log events that the bytecode execution invoked during execution and $\textit{S}_\dagger \subseteq \mathcal{A}$ being the so-called suicide set -- the set of addresses of accounts that executed the $\textsf{SELFDESTRUCT}$ command and therefore registered their account for deletion.
The information held by the halting state is carried over to the calling state.
The state of a non-terminated internal transaction is described by a regular execution state of the form $\regstatefull{\mu}{\iota}{\sigma}{\eta}$. The state is determined by the current global state $\sigma$ of the system as well as the execution environment $\iota \in I$ that specifies the parameters of the current transaction (including inputs and the code to be executed), the local state $\mu \in M$ of the stack machine, and the transaction effects $\eta \in N$ collected during execution so far.
\subsubsection{Execution Environment}
The execution environment $\iota$ of an internal transaction specifies the static parameters of the transaction. It is a tuple of the form $\sexenv{\textit{actor}}{\textit{input}}{\textit{sender}}{\textit{value}}{\textit{code}} \in I = \mathcal{A} \times \arrayof{\BB^8} \times \mathcal{A} \times \integer{256} \times \arrayof{\BB^8}$ with the following components:
\begin{itemize}
\item $\textit{actor} \in \mathcal{A}$ is the address of the account currently executing;
\item $\textit{input} \in \arrayof{\BB^8}$ is the data given as an input to the internal transaction;
\item $\textit{sender} \in \mathcal{A}$ is the address of the account that initiated the internal transaction;
\item $\textit{value} \in \integer{256}$ is the value transferred by the internal transaction;
\item $\textit{code} \in \arrayof{\BB^8}$ is the code currently executed.
\end{itemize}
This information is determined at the beginning of an internal transaction execution and it can be accessed, but not altered during the execution.
\subsubsection{Machine State}
The local machine state $\mu$ represents the state of the underlying state machine used for execution and is a tuple of the form $\smstate{\textit{gas}}{\textit{pc}}{\textit{m}}{\textit{i}}{\textit{s}}$ where
\begin{itemize}
\item $\textit{gas} \in \integer{256}$ is the current amount of gas still available for execution;
\item $\textit{pc}\in \integer{256}$ is the current program counter;
\item $\textit{m} \in \BB^{256} \to \BB^8$ is a mapping from 256-bit words to bytes that represents the local memory;
\item $\textit{i} \in \integer{256}$ is the current number of active words in memory;
\item $\textit{s} \in \stackof{\mathbb{B}^{256}}$ is the local 256-bit word stack of the stack machine.
\end{itemize}
The execution of each internal transaction starts in a fresh machine state, with an empty stack, memory initialized to all zeros, and program counter and active words in memory set to zero. Only the gas is instantiated with the gas value available for the execution.
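The fresh machine state described above can be sketched as follows; this is an illustrative Python rendering of the tuple $(\textit{gas}, \textit{pc}, m, i, s)$, with field names of our own choosing, not the paper's formalization.

```python
# Sketch of the machine state tuple (gas, pc, m, i, s) and the fresh state each
# internal transaction starts in; only the gas budget is a parameter.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MachineState:
    gas: int                                                # remaining gas
    pc: int = 0                                             # program counter
    memory: Dict[int, int] = field(default_factory=dict)    # m: address -> byte, 0 by default
    active_words: int = 0                                   # i
    stack: List[int] = field(default_factory=list)          # s, 256-bit words

def fresh_machine_state(gas: int) -> MachineState:
    """Empty stack, zeroed memory, pc and active words at zero; only gas is set."""
    return MachineState(gas=gas)
```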
\subsection{Small-Step Rules}
In the following, we will present a selection of interesting small-step rules in order to illustrate the most important features of the semantics.
For demonstrating the overall design of the semantics, we start with the example of the arithmetic instruction $\textsf{ADD}$ performing addition of two values on the machine stack. Note that as the word size of the stack machine is $256$ bits, all arithmetic operations are performed modulo $2^{256}$.
\begin{mathpar}
\small\infer{
\arraypos{\access{\iota}{\textit{code}}}{\access{\mu}{\textsf{pc}}}= \textsf{ADD} \\
\access{\mu}{\textsf{s}} = \cons{a}{\cons{b}{s}} \\
\access{\mu}{\textsf{gas}} \geq 3 \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{(a+b)}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{3}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\infer{
\arraypos{\access{\iota}{\textit{code}}}{\access{\mu}{\textsf{pc}}}= \textsf{ADD} \\
(\size{\access{\mu}{\textsf{s}}} < 2 \lor \access{\mu}{\textsf{gas}} < 3)}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
We use a dot notation in order to access the components of the different state parameters. We name the components by the variable names introduced for them in the last section, written in sans-serif style.
In addition, we use the usual notation for updating components: $\update{\textit{t}}{\textsf{c}}{v}$ denotes that the component $\textsf{c}$ of tuple $\textit{t}$ is updated with value $v$. For expressing incremental updates in a simpler way, we additionally use the notation $\inc{t}{\textsf{c}}{v}$ to denote that the (numerical) component $\textsf{c}$ of $t$ is incremented by $v$ and similarly $\dec{t}{\textsf{c}}{v}$ for decrementing it by $v$.
The execution of the arithmetic instruction $\textsf{ADD}$ only performs local changes in the machine state affecting the local stack, the program counter, and the gas budget. For deciding upon the correct instruction to execute, the currently executed code (that is part of the execution environment) is accessed at the position of the current program counter. The cost of an $\textsf{ADD}$ instruction is a constant three units of gas that get subtracted from the gas budget in the machine state.
Like every other instruction, $\textsf{ADD}$ can fail due to lacking gas or due to underflows on the machine stack. In this case, the exception state is entered and the execution of the current internal transaction is terminated.
For better readability, we use here the slightly sloppy $\lor$ notation for combining the two error cases in one inference rule.
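The two $\textsf{ADD}$ rules can be captured in a few lines of Python; this sketch (names and tuple encoding are our own) returns either the updated local components or an exception marker, mirroring the successful rule and the combined error rule.

```python
# Sketch of the two ADD rules: a successful step modifies only the local
# machine state; a stack underflow or insufficient gas yields an exception.
WORD = 2 ** 256  # word size of the stack machine

def step_add(gas: int, pc: int, stack: list):
    """Return ('ok', gas', pc', stack') or ('exc',), mirroring the two rules."""
    if len(stack) < 2 or gas < 3:            # underflow or out of gas
        return ("exc",)
    a, b, *rest = stack
    # addition is performed modulo 2^256; ADD costs a constant 3 units of gas
    return ("ok", gas - 3, pc + 1, [(a + b) % WORD] + rest)
```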
A more interesting example of a semantic rule is the one of the $\textsf{CALL}$ instruction that initiates an internal call transaction. When calling, several corner cases need to be treated, which results in several inference rules. Here, we only present one rule for illustrating the main functionality. More precisely, we present the case in which the account that should be called exists, the call stack limit of $1024$ is not reached yet, and the account initiating the transaction has a sufficiently large balance for sending the specified amount of wei to the called account.
\begin{mathpar}\small
\infer{
\arraypos{\access{\iota}{\textit{code}}}{\access{\mu}{\textsf{pc}}}=\textsf{CALL} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}}} \\
\sigma(\textit{to}) \neq \bot\\
\size{\callstack} + 1 < 1024 \\
\access{\getaccount{\sigma}{\access{\iota}{\textsf{actor}}}}{\textsf{b}} \geq \textit{va} \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{\textit{va}}{1}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{\textit{va}}{1} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\access{\mu}{\textsf{gas}} \geq c \\
\sigma' = \updategstate{\updategstate{\sigma}{\textit{to}}{\inc{\getaccount{\sigma}{\textit{to}}}{\textsf{b}}{\textit{va}}}}{\access{\iota}{\textsf{actor}}}{\dec{\getaccount{\sigma}{\access{\iota}{\textsf{actor}}}}{\textsf{b}}{\textit{va}}} \\
d =\getinterval{\access{\mu}{\textsf{m}}}{\textit{io}}{\textit{io} + \textit{is} -1} \\
\mu' =\smstate{c_{\textit{call}}}{0}{\lam{x}{0}}{0}{\epsilon} \\
\iota' =\update{\update{\update{\update{\update{\iota}{\textsf{sender}}{\access{\iota}{\textsf{actor}}}}{\textsf{actor}}{\textit{to}}}{\textsf{value}}{\textit{va}}}{\textsf{input}}{d}}{\textsf{code}}{\access{\getaccount{\sigma}{\textit{to}}}{\textsf{code}}}\\
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota'}{\sigma'}{\eta}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}}
\end{mathpar}
For performing a call, the parameters to this call need to be specified on the machine stack. These are the amount of gas $g$ that should be given as budget to the call, the recipient $\textit{to}$ of the call and the amount $\textit{va}$ of wei to be transferred with the call. In addition, the caller needs to specify the input data that should be given to the transaction and the place in memory where the return data of the call should be written after successful execution. To this end, the remaining arguments specify the offset and size of the memory fragment that input data should be read from (determined by $\textit{io}$ and $\textit{is}$) and return data should be written to (determined by $\textit{oo}$ and $\textit{os}$).
Calculating the cost in terms of gas for the execution is quite complicated in the case of $\textsf{CALL}$ as it is influenced by several factors including the arguments given to the call and the current machine state. First of all, the gas that should be given to the call (here denoted by $c_\textit{call}$) needs to be determined. This value is not necessarily equal to the value $g$ specified on the stack, but also depends on the value $\textit{va}$ transferred by the call and the currently available gas.
In addition, as the memory needs to be accessed for reading the input value and writing the return value, the number of active words in memory might be increased. This effect is captured by the memory extension function $M$. As accessing additional words in memory costs gas, this cost needs to be taken into account in the overall cost. The costs resulting from an increase in the number of active words are calculated by the function $C_\textit{mem}$.
Finally, there is also a base cost charged for the call that depends on the value $\textit{va}$.
As the cost also depends on the specific case for calling that is considered, the cost calculation functions receive a flag (here $1$) as an argument. These technical details are spelled out in
\iffull
Appendix~\ref{sec:arules}.
\else
the full version~\cite{fullversion}.
\fi
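The memory accounting used in the rule can be sketched as follows. The formulas mirror the Ethereum yellow paper's definitions of the memory extension function $M$ and the cost function $C_\textit{mem}$ ($C_\textit{mem}(a) = G_\textit{memory} \cdot a + \lfloor a^2/512 \rfloor$ with $G_\textit{memory} = 3$); the Python names are our own and this is only an illustration, not the paper's appendix definitions.

```python
# Sketch of the auxiliary memory functions following the Ethereum yellow paper.
G_MEMORY = 3  # gas per active word of memory

def mem_extend(i: int, offset: int, size: int) -> int:
    """M(i, o, s): accesses of size zero never extend the active-word count."""
    if size == 0:
        return i
    return max(i, -(-(offset + size) // 32))  # ceiling division by the 32-byte word

def c_mem(a: int) -> int:
    """C_mem(a) = G_memory * a + floor(a^2 / 512)."""
    return G_MEMORY * a + a * a // 512

def mem_cost(i: int, aw: int) -> int:
    """Gas charged for growing the number of active words from i to aw."""
    return c_mem(aw) - c_mem(i)
```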
The call itself then has several effects:
First, it transfers the specified amount $\textit{va}$ of wei from the executing account ($\textit{actor}$ in the execution environment) to the recipient ($\textit{to}$). To this end, the global state is updated. Here we use a special notation for the functional update on the global state, using $\langle \rangle$ instead of $[]$.
Second, for initializing the execution of the initiated internal transaction, a new regular execution state is placed on top of the execution stack. The internal transaction starts in a fresh machine state at program counter zero: the initial memory is initialized to all zeros, consequently the number of active words in memory is zero as well, and the initial stack is empty. The gas budget given to the internal transaction is the value $c_\textit{call}$ calculated before. The execution environment of the new call records the call parameters: the sender, which is the currently executing account $\textit{actor}$, the new active account, which is the called account $\textit{to}$, as well as the value $\textit{va}$ sent and the input data given to the call. To this end, the input data is extracted from the memory using the offset $\textit{io}$ and the size $\textit{is}$. We use an interval notation here to denote that a part of the memory is extracted. Finally, the code in the execution environment of the new internal transaction is the code of the called account.
Note that the execution state of the caller stays completely unaffected at this stage of the execution. This is a conscious design decision in order to simplify the expression of security properties and to make the semantics more suitable to abstractions.
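The effects of the presented $\textsf{CALL}$ case can be sketched as follows. This is a much-simplified Python rendering: the dictionary-based account representation, the precomputed gas budget \lstinline|c_call|, and the elided cost calculation are our own simplifications, not the paper's rule.

```python
# Simplified sketch of the presented CALL rule: the value transfer updates the
# global state functionally, and a fresh execution state for the callee is
# created while the caller's state stays untouched. Gas accounting is elided.
def step_call(sigma: dict, actor: int, to: int, va: int, input_data: bytes,
              c_call: int, stack_size: int):
    """Return ('ok', callee_frame, sigma') or ('exc',)."""
    if to not in sigma or stack_size + 1 >= 1024 or sigma[actor]["balance"] < va:
        return ("exc",)
    sigma2 = {a: dict(acc) for a, acc in sigma.items()}   # functional update of sigma
    sigma2[actor]["balance"] -= va                        # charge the caller ...
    sigma2[to]["balance"] += va                           # ... and credit the recipient
    callee_frame = {                                      # fresh machine state + environment
        "gas": c_call, "pc": 0, "memory": {}, "active_words": 0, "stack": [],
        "sender": actor, "actor": to, "value": va,
        "input": input_data, "code": sigma2[to]["code"],
    }
    return ("ok", callee_frame, sigma2)
```

The copied global state illustrates the design decision stressed above: the caller's view of $\sigma$ remains unchanged until the callee returns.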
Besides $\textsf{CALL}$, there are two further instructions for initiating internal call transactions that implement slight variations of the simple $\textsf{CALL}$ instruction. These variations are called $\textsf{CALLCODE}$ and $\textsf{DELEGATECALL}$, and both allow for executing another account's code in the context of the caller. The difference is that in the case of $\textsf{CALLCODE}$ a new internal transaction is started and the currently executed account is registered as the sender of this transaction, while in the case of $\textsf{DELEGATECALL}$ an existing call is really forwarded in the sense that the sender and the value of the initiating transaction are propagated to the new internal transaction.
Analogously to the instructions for initiating internal call transactions, there is also one instruction $\textsf{CREATE}$ that allows for the creation of a new account. The semantics of this instruction is similar to the one of $\textsf{CALL}$, with the exception that a fresh account is created, which gets the specified transferred value, and that the input provided to this internal transaction, which is again specified in the local memory, is interpreted as the initialization code to be executed in order to produce the newly created account's code as output.
In contrast to the call transaction, a create transaction does not await a return value, but only an indication of success or failure.
For discussing how to return from an internal transaction, we show the rule for returning from a successful internal call transaction.
\begin{mathpar}\small
\infer{
\arraypos{\access{\iota}{\textit{code}}}{\access{\mu}{\textsf{pc}}}=\textsf{CALL} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}}} \\
\textit{flag} = \cond{\sigma(\textit{to})= \bot}{0}{1} \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{\textit{va}}{\textit{flag}}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{\textit{va}}{\textit{flag}} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\mu' =\update{\inc{\inc{\update{\update{\mu}{\textsf{i}}{\textit{aw}}}{\textsf{s}}{\cons{1}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{\textit{gas} - c}}{\textsf{m}}{\updateinterval{\access{\mu}{\textsf{m}}}{\textit{oo}}{\textit{oo} + \textit{os} -1}{d}}
}
{\sstep{\transenv}{\cons{\haltstatefull{\sigma'}{\textit{gas}}{d}{\eta'}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}{\cons{\regstatefull{\mu'}{\iota}{\sigma'}{\eta'}}{\callstack}}}
\end{mathpar}
Leaving the caller state unchanged at the point of calling has the negative side effect that the cost calculation needs to be redone at this point in order to determine the new gas value of the caller state.
But besides this, the rule is straightforward: the program counter is incremented as usual and the number of active words in memory is adjusted as memory accesses for reading the input and return data have been made. The gas is decreased, meaning that the overall amount of gas $c$ allocated for the execution is subtracted. However, as this cost already includes the gas budget given to the internal transaction, the gas $\textit{gas}$ that is left after the execution is refunded again. In addition, the return data $d$ is written to the local memory of the caller at the place specified by $\textit{oo}$ and $\textit{os}$. Finally, the value one is written to the caller's stack in order to indicate the success of the internal call transaction.
As the execution was successful, as indicated by the halting state, the global state and the transaction effects of the callee are adopted by the caller.
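The reconstruction of the caller's machine state just described can be sketched as follows; the dictionary representation and the already-recomputed values \lstinline|aw| and \lstinline|c| are our own simplifications of the rule above.

```python
# Sketch of returning from a successful call: write the return data to memory
# at [oo, oo+os), push 1 for success, advance pc, charge the allocated cost c
# and refund the callee's leftover gas.
def return_from_call(caller_mu: dict, oo: int, os: int, aw: int, c: int,
                     gas_left: int, ret_data: bytes):
    mu = dict(caller_mu)
    mu["memory"] = dict(mu["memory"])
    for k in range(min(os, len(ret_data))):       # copy return data into memory
        mu["memory"][oo + k] = ret_data[k]
    mu["active_words"] = aw                       # memory was accessed for input/output
    mu["stack"] = [1] + mu["stack"]               # 1 signals success to the caller
    mu["pc"] += 1
    mu["gas"] = mu["gas"] - c + gas_left          # charge c, refund leftover gas
    return mu
```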
EVM bytecode offers several instructions for explicitly halting (internal) transaction execution. Besides the standard instructions $\textsf{STOP}$ and $\textsf{RETURN}$, there is the $\textsf{SELFDESTRUCT}$ instruction that is very particular to the blockchain setting.
The $\textsf{STOP}$ instruction causes regular halting of the internal transaction without returning data to the caller. In contrast, the $\textsf{RETURN}$ instruction allows one to specify the memory fragment containing the return data that will be handed to the caller.
Finally, the $\textsf{SELFDESTRUCT}$ instruction halts the execution and lists the currently executing account for later deletion. More precisely, this means that this account will be deleted when finalizing the external transaction, but its behavior during the ongoing small-step execution is not affected. Additionally, the whole balance of the deleted account is transferred to some beneficiary specified on the machine stack.
We show the small-step rule depicting the main functionality of $\textsf{SELFDESTRUCT}$. As for $\textsf{CALL}$, capturing the whole functionality of $\textsf{SELFDESTRUCT}$ would require considering several corner cases. Here we consider the case where the beneficiary exists, the stack does not underflow, and the available amount of gas is sufficient.
\begin{mathpar} \small
\infer{
\curropcode{\mu}{\iota} = \textsf{SELFDESTRUCT} \\
\access{\mu}{\textsf{s}} = \cons{a_\textit{ben}}{s} \\
a = a_\textit{ben} \mod 2^{160} \\
\sigma(a) \neq \bot \\
\access{\mu}{\textsf{gas}} \geq 5000 \\
g = \access{\mu}{\textsf{gas}} - 5000 \\
\sigma' = \updategstate{\updategstate{\sigma}{\access{\iota}{\textsf{actor}}}{\update{\sigma(\access{\iota}{\textsf{actor}})}{\textsf{b}}{0}}}{a}{\inc{\sigma(a)}{\textsf{b}}{\access{\access{\sigma}{(\access{\iota}{\textsf{actor}})}}{\textsf{b}}}} \\
r = \cond{(\access{\iota}{\textsf{actor}} \in \access{\eta}{\textit{S}_\dagger})}{0}{24000} \\
\eta' = \inc{\update{\eta}{\textsf{S}_{\dagger}}{\access{\eta}{\textsf{S}_{\dagger}} \cup \{ \access{\iota}{\textsf{actor}} \} }}{\textsf{b}}{r}
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\haltstatefull{\sigma'}{g}{\epsilon}{\eta'}}{\callstack}}}
\end{mathpar}
The $\textsf{SELFDESTRUCT}$ command takes one argument $a_{\textit{ben}}$ from the stack specifying the address of the beneficiary that should get the balance of the account that is destructed. If all preconditions are satisfied, the balance of the executing account ($\access{\iota}{\textsf{actor}}$) is transferred to the beneficiary address and the current internal transaction execution enters a halting state. Additionally, the transaction effects are extended by adding $\access{\iota}{\textsf{actor}}$ to the suicide set and by possibly increasing the refund balance. The refund balance is only increased if $\access{\iota}{\textsf{actor}}$ is not already scheduled for deletion. The halting state captures the global state $\sigma'$ after the money transfer, the remaining gas $g$ after executing the $\textsf{SELFDESTRUCT}$, and the updated transaction effects $\eta'$. As no return data is handed to the caller, the empty bytearray $\epsilon$ is specified as return data in the halting state.
Note that $\textsf{SELFDESTRUCT}$ deletes the currently executing account $\access{\iota}{\textsf{actor}}$, which is not necessarily the same account as the one owning the code $\access{\iota}{\textsf{code}}$. This might be due to a previous execution of $\textsf{DELEGATECALL}$ or $\textsf{CALLCODE}$.
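The presented $\textsf{SELFDESTRUCT}$ case can be sketched in Python as follows; the flat dictionary representation of accounts and transaction effects is our own simplification, not the paper's rule.

```python
# Sketch of the presented SELFDESTRUCT case: the executing account's balance
# moves to the beneficiary, the account joins the suicide set, and the 24000
# gas refund is granted only on first registration for deletion.
def step_selfdestruct(sigma: dict, actor: int, a_ben: int, gas: int,
                      refund: int, suicide_set: set):
    a = a_ben % 2 ** 160                          # beneficiary address is 160 bits
    if a not in sigma or gas < 5000:
        return ("exc",)
    sigma2 = {addr: dict(acc) for addr, acc in sigma.items()}
    sigma2[a]["balance"] += sigma2[actor]["balance"]
    sigma2[actor]["balance"] = 0
    r = 0 if actor in suicide_set else 24000      # refund only if not yet scheduled
    return ("halt", sigma2, gas - 5000, refund + r, suicide_set | {actor})
```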
\subsection{Transaction Execution}
The outcome of an external transaction execution does not only consist of the result of the EVM bytecode execution.
Before executing the bytecode, the transaction environment and the execution environment are determined from the transaction information and the block header.
In the following we assume $\mathcal{T}$ to denote the set of transactions.
An (external) transaction $T \in \mathcal{T}$, similar to the internal transactions, specifies a gas limit, a recipient and a value to be transferred. In addition, it also contains the originator and the gas price that will be recorded in the transaction environment. Finally, it specifies an input to the transaction and the transaction type, which can either be a call or a create transaction. The transaction type determines whether the input will be interpreted as input data to a call transaction or as initialization code for a create transaction.
In addition to the environment initialization, some initial changes on the global state and validity checks are performed. For the sake of presentation, we assume in the following a function $\inittrans{\cdot}{\cdot}{\cdot} \in \mathcal{T} \times \mathcal{H} \times \Sigma \to (\mathcal{T}_{\textit{env}} \times \mathcal{S}) \cup \{ \bot \}$ performing the initialization phase and returning a transaction environment and initial execution state in the case of a valid transaction and $\bot$ otherwise.
Similarly, we assume a function $\finalizetrans{\cdot}{\cdot}{\cdot}{\cdot} \in \mathcal{S} \times N \times \mathcal{T} \to \Sigma$ that, given the final global state of the execution, the accumulated transaction effects and the transaction, computes the final effects on the global state. These include, for example, the deletion of the contracts in the suicide set and the payout to the beneficiary of the transaction.
Formally we can define the execution of a transaction $T \in \mathcal{T}$ in a block with header $H \in \mathcal{H}$ as follows:
\begin{mathpar}\small
\infer
{(\transenv, s) = \inittrans{T}{H}{\sigma} \\
\ssteps{\transenv}{\exconf{\cons{s}{\epsilon}}{\epsilon_\transeffects}}{\exconf{\cons{s'}{\epsilon}}{\eta'}} \\
\finalstate{s'} \\
\sigma' = \finalizetrans{s'}{\eta'}{T}{}}
{\transstep{T}{H}{\sigma}{\sigma'}}
\end{mathpar}
where $\rightarrow^*$ denotes the reflexive and transitive closure of the small-step relation and the predicate $\finalstate{\cdot}$ characterizes a state that cannot be further reduced using the small-step relation.
\subsection{Formalization in F*}
We provide a formalization of a large fragment of our small-step semantics in the proof assistant F* \cite{fstar}. At the time of writing, we are formalizing the remaining part, which only consists of straightforward local operations, such as bitwise operators and opcodes to write code to (resp. read code from) the memory. F* is an ML-dialect that is optimized for program verification and allows for performing manual proofs as well as automated proofs leveraging the power of SMT solvers.
Our formalization strictly follows the small-step semantics as presented in this paper. The core functionality is implemented by the function \lstinline|step| that describes how an execution stack evolves within one execution step. To this end it has two possible outcomes: either it performs an execution step and returns the new callstack, or -- in the case that a final configuration is reached (a stack containing only one element that is either a halting or an exception state) -- it reports the final state. In order to provide a total function for the step relation, we needed to introduce a third execution outcome that signals that a problem occurred due to an inconsistent state. When running the semantics from a valid initial configuration, however, this result should never be produced.
For running the semantics, the function \lstinline|execution| is defined that subsequently performs execution steps using \lstinline|step| until reaching the final state and reports it.
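To illustrate the structure of this driver, the following sketch (in Python rather than F*, with a hypothetical state representation and a placeholder transition) mirrors the three possible outcomes of \lstinline|step| and the loop performed by \lstinline|execution|:

```python
# Illustrative sketch only: the real formalization is in F*. States are
# modelled here as dicts with a hypothetical "kind" field.
from typing import List, Tuple, Union

State = dict  # placeholder for halting/exception/regular execution states

def is_final(callstack: List[State]) -> bool:
    # A configuration is final if the stack holds exactly one element
    # that is either a halting or an exception state.
    return len(callstack) == 1 and callstack[0].get("kind") in ("halt", "exc")

def step(callstack: List[State]) -> Tuple[str, Union[List[State], State, None]]:
    # Three outcomes: a final state, an inconsistency, or a new callstack.
    if is_final(callstack):
        return ("final", callstack[0])
    if not callstack:
        return ("inconsistent", None)
    # ... one small-step transition would be computed here ...
    top = dict(callstack[0]); top["kind"] = "halt"  # placeholder transition
    return ("ok", [top] + callstack[1:])

def execution(callstack: List[State]) -> State:
    # Repeatedly apply `step` until a final state is reported.
    while True:
        tag, result = step(callstack)
        if tag == "final":
            return result
        if tag == "inconsistent":
            raise RuntimeError("inconsistent state")
        callstack = result
```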
The current implementation encompasses approximately a thousand lines of code. Since F* code can be compiled into OCaml, we validate our semantics against the official EVM test suite \cite{evmtests}. Our semantics passes 304 out of 624 tests, failing only those tests that involve the missing functionality.
We make the formalization in F* publicly available~\cite{fullversion} in order to facilitate the design of static analysis techniques for EVM bytecode as well as their soundness proofs.
\subsection{Comparison with the Semantics by Luu et al.~\cite{oyente}}
The small-step semantics defined by Luu et al.~\cite{oyente} encompasses only a variation of a subset of EVM bytecode instructions (called EtherLite) and assumes a heavily simplified execution configuration. The instructions covered span simple stack operations for pushing and popping values, conditional branches, binary operations, instructions for accessing and altering local memory and account storage, as well as the ones for calling, returning and destructing the account. Essential instructions such as $\textsf{CREATE}$ and those for accessing the transaction and block information are omitted. The authors represent a configuration as a tuple of a call stack of activation records and the global state. An activation record contains the code to be executed, the program counter, the local memory and the machine stack. The global state is modelled as a mapping from addresses to accounts, with the latter consisting of code, balance and persistent storage.
The overall abstraction contains a conceptual flaw: since the global state is not included in the activation records of the call stack, the model cannot express that, in the case of an exception in the execution of the callee, the global state is rolled back to the caller's state at the point of calling.
In addition, the model cannot easily be extended with further instructions -- such as additional call instructions or instructions accessing the environment -- without major changes to the abstraction, as much information, e.g., that captured by the transaction and execution environments of our small-step semantics, is missing.
\label{sec:comparison-oyente}
\subsection{Notations}
In order to present the small-step rules in a concise fashion we introduce some notations for accessing and updating state.
As the global state is a mapping from addresses to accounts, an account's state can be accessed by applying its address to the global state. For updating the global state we introduce a simplifying notation:
\begin{align*}
\updategstate{\sigma}{\textit{addr}}{s} & := \lam{a}{\cond{a = \textit{addr}}{s}{\getaccount{\sigma}{a}}}
\end{align*}
For accessing memory fragments we use the following notation:
\begin{align*}
\getinterval{\textsf{m}}{o}{s} & := [\textsf{m}(o),\textsf{m}(o + 1), \dots, \textsf{m}(o+s -1)]
\end{align*}
Correspondingly, we define updates for memory fragments. Let $o, s \in \integer{256}$ and $v \in \arrayof{\BB^8}$:
\begin{align*}
\updateinterval{\textsf{m}}{o}{s}{v} & := \lam{x}{\cond{(x \geq o \land x < o + \mini{s}{\size{v}})}{\arraypos{v}{x- o}}{\textsf{m}(x)}}
\end{align*}
Similarly to accessing arrays, we write $\extract{v}{\textit{down}}{\textit{up}}$ to extract the bitvector's bits from position $\textit{down}$ until position $\textit{up}$ (where we require $\textit{down} \leq \textit{up}$). Additionally, we assume a concatenation function for bitvectors and write $\concat{b_1}{b_2}$ for concatenating bit vectors $b_1$ and $b_2$.
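The memory-access notations above can be sketched as follows (a minimal Python illustration, treating memory as a total function from offsets to bytes as in the definitions; function names are our own):

```python
# m[o, s] reads s bytes starting at offset o; the interval update writes
# v into positions [o, o + min(s, |v|)) and leaves the rest of m unchanged.

def get_interval(m, o, s):
    return [m(o + i) for i in range(s)]

def update_interval(m, o, s, v):
    def m2(x):
        if o <= x < o + min(s, len(v)):
            return v[x - o]
        return m(x)
    return m2
```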
Most of the state components used in the formalization of the EVM execution configurations consist of tuples. For sake of better readability, instead of accessing tuple components using projection, we name the components according to the variable names we used in the description in Section~\ref{sec:formalization} and use a dot notation for accessing them. To differentiate component names from variable names, we typeset components in sans serifs font.
For example, given $\mu \in M$, we write $\access{\mu}{\textsf{gas}}$ to access the first component of the tuple $\mu$.
Similarly, we use a simple update notation for components. E.g., instead of writing $\textit{let}~ \mu = (\textit{gas}, \textit{pc}, m, \actwv, s) ~\textit{in}~ (\textit{gas}, \textit{pc} + 1, m, \actwv, s)$, we write $\update{\mu}{\textsf{pc}}{\access{\mu}{\textsf{pc}} + 1}$. For incrementing or decrementing numerical values we use the usual shortcuts $+=$ and $-=$ and would, for example, write the update shown before as $\inc{\mu}{\textsf{pc}}{1}$.
As mentioned in Section~\ref{subsec:formalization_notations}, we use the notions $\mathbb{B}^{x}$ and $\integer{x}$ interchangeably, as we usually interpret bitvectors as unsigned integers. As some operations are however performed on the signed interpretation of the machine words, we assume functions $\signed{(\cdot)}: \mathbb{B}^{x} \to \sinteger{x}$ and $\signed{(\cdot)}: \integer{x} \to \sinteger{x}$ that output the signed interpretation of a bitvector or an unsigned integer, respectively. Note that $\sinteger{x}$ denotes the set of signed integers representable with $x$ bits. Accordingly, we assume functions $\unsigned{(\cdot)}: \sinteger{x} \to \mathbb{B}^{x}$ and $\unsigned{(\cdot)}: \sinteger{x} \to \integer{x}$ for converting signed integers back to their unsigned interpretation.
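The conversions between the signed and unsigned interpretations are the standard two's-complement maps; a minimal Python sketch (names are our own):

```python
# Two's-complement conversions for an x-bit machine word.

def to_signed(v: int, x: int = 256) -> int:
    # unsigned value in [0, 2^x) -> signed value in [-2^(x-1), 2^(x-1))
    return v - (1 << x) if v >= (1 << (x - 1)) else v

def to_unsigned(v: int, x: int = 256) -> int:
    # signed value -> its representation in [0, 2^x)
    return v % (1 << x)
```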
\subsection{Auxiliary definitions}
\paragraph{Accessing bytecode}
For extracting the command that is currently executed, the instruction at position $\access{\mu}{\textsf{pc}}$ of the code $\textsf{code}$ provided in the execution environment needs to be accessed. For the sake of presentation, we define a function for this purpose:
\begin{definition}[Currently executed command]
The currently executed command in the machine state $\mu$ and execution environment $\iota$ is denoted by $\curropcode{\mu}{\iota}$ and
defined as follows:
\begin{align*}
\curropcode{\mu}{\iota} := \begin{cases}
\arraypos{\access{\iota}{\textsf{code}}}{\access{\mu}{\textsf{pc}}} & \access{\mu}{\textsf{pc}} < \size{\access{\iota}{\textsf{code}}} \\
\textsf{STOP} & \text{otherwise}
\end{cases}
\end{align*}
\end{definition}
All EVM instructions have in common that running out of gas as well as overflows and underflows of the local machine stack cause an exception.
We define a function $\simvalid{\cdot}{\cdot}{\cdot}: \integer{256} \times \integer{256} \times \mathbb{N} \to \mathbb{B}$ that, given the available gas, the instruction cost and the new stack size, determines whether the execution may proceed. We do not check for stack underflows here, as this is realized by pattern matching in the individual small-step rules.
\begin{align*}
\simvalid{g}{c}{s} :=
\begin{cases}
1 & g \geq c \land s < 1024 \\
0 & \textit{otherwise}
\end{cases}
\end{align*}
We also write $\simvalid{g}{c}{s}$ for $\simvalid{g}{c}{s} = 1$ and $\neg \simvalid{g}{c}{s}$ for $\simvalid{g}{c}{s} = 0$.
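As a sketch, the validity check corresponds to the following Python predicate (the stack bound of 1024 is taken from the definition above):

```python
STACK_LIMIT = 1024  # bound on the machine stack size used in the rules

def valid(gas: int, cost: int, new_stack_size: int) -> bool:
    # Execution may proceed only if enough gas is available and the
    # resulting stack stays within the stack limit.
    return gas >= cost and new_stack_size < STACK_LIMIT
```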
In EVM bytecode, potential jump destinations are explicitly marked by the distinct $\textsf{JUMPDEST}$ instruction. Jumps to other destinations cause an exception. To simplify this check, we define the set of valid jump destinations as follows:
\begin{definition}[Valid jump destinations \cite{yellowpaper}]
$\funD{\cdot}: \arrayof{\BB^8} \to \setof{\mathbb{N}}$ determines the set of valid jump destinations of the code $\textit{code} \in \arrayof{\BB^8}$ that is being run. It is defined as the set of positions in the code occupied by a $\textsf{JUMPDEST}$ instruction. Formally, $\funD{c} = \funDhelp{c}{0}$, where:
\begin{align*}
\funDhelp{\cdot}{\cdot}: \arrayof{\BB^8} \times \mathbb{N} \to \setof{\mathbb{N}} \\
\funDhelp{c}{i} := \begin{cases}
\emptyset & i \geq \size{c} \\
\{i\} \cup \funDhelp{c}{\funN{i}{c[i]}} & \arraypos{c}{i} = \textsf{JUMPDEST} \\
\funDhelp{c}{\funN{i}{\arraypos{c}{i}}} & \text{otherwise}
\end{cases}
\end{align*}
where $\funN{\cdot}{\cdot}: \mathbb{N} \times \BB^8 \to \mathbb{N}$ is the next valid instruction position in the code, skipping the data of a $\PUSH{n}$ instruction, if any:
\begin{align*}
\funN{i}{\omega} :=
\begin{cases}
i + n + 1 & \omega = \PUSH{n}\\
i + 1 & \text{otherwise}
\end{cases}
\end{align*}
\end{definition}
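The computation of $\funD{\cdot}$ can be sketched iteratively in Python, assuming the standard EVM opcode encoding ($\textsf{JUMPDEST}$ is \texttt{0x5b}, $\PUSH{n}$ is \texttt{0x60}$+ n - 1$):

```python
# Positions holding a JUMPDEST opcode, skipping the immediate data
# bytes of PUSHn instructions (which must not count as destinations).
JUMPDEST = 0x5B

def valid_jumpdests(code: bytes) -> set:
    dests, i = set(), 0
    while i < len(code):
        op = code[i]
        if op == JUMPDEST:
            dests.add(i)
        if 0x60 <= op <= 0x7F:        # PUSHn carries n immediate bytes
            i += (op - 0x60 + 1) + 1
        else:
            i += 1
    return dests
```

Note how the byte \texttt{0x5b} inside the data of a $\PUSH{1}$ is not a valid destination.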
\paragraph{Memory Consumption}
The execution tracks the number of active words in memory and charges fees for the memory used.
The active words in memory are those words that have been accessed either
for reading or for writing. If a command increases the number of active
words, it has to pay according to the number of words that became active.
To model the increasing number of active words in memory we define a memory expansion function as done in \cite{yellowpaper} that determines the number of active words in memory given the number of active memory words so far as well as the offset and the size of the memory fraction accessed.
\begin{align*}
\memext{i}{o}{s} :=
\begin{cases}
i & \text{if $s = 0$} \\
\maxi{i}{\left \lceil \frac{(o +s)}{32} \right \rceil} & \text{otherwise}
\end{cases}
\end{align*}
According to the number of additional words in memory that are used by the execution of an instruction, additional execution costs are charged.
For describing the costs that occur due to memory consumption, we use a function $\costmem{\cdot}{\cdot}: \mathbb{N} \times \mathbb{N} \to \mathbb{Z}$ that, given the number of active words in memory before and after the command execution, outputs the corresponding costs.
\begin{align*}
\costmem{\textit{aw}}{\textit{aw}'} & := 3 \cdot (\textit{aw}' - \textit{aw}) + \left \lfloor \frac{\textit{aw}'^2}{512} \right \rfloor - \left \lfloor \frac{\textit{aw}^2}{512} \right \rfloor
\end{align*}
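The two definitions above translate directly into Python (a sketch with our own function names):

```python
import math

def mem_extend(i: int, o: int, s: int) -> int:
    # Number of active memory words after accessing s bytes at offset o;
    # accessing zero bytes never expands the memory.
    return i if s == 0 else max(i, math.ceil((o + s) / 32))

def c_mem(aw: int, aw2: int) -> int:
    # Fee for raising the active word count from aw to aw2:
    # a linear part plus a quadratic part.
    return 3 * (aw2 - aw) + aw2**2 // 512 - aw**2 // 512
```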
\paragraph{Creating new account addresses}
We define a function $\getfreshaddress{\cdot}{\cdot}: \mathcal{A} \times \mathbb{N} \to \mathcal{A}$ that given an address and a nonce provides a fresh address.
\begin{align*}
\getfreshaddress{a}{n} = \keccak{\rlpencode{(a, n-1)}}[96, 255]
\end{align*}
where $\rlpencode{\cdot}$ is the RLP encoding function. The RLP encoding is a canonical way of transforming structures such as tuples into a sequence of bytes. We will not comment on this in detail, but refer the reader to the Ethereum yellow paper~\cite{yellowpaper}.
Note that the $\getfreshaddress{\cdot}{\cdot}$ function is assumed to be collision resistant.
\subsection{Small-step rules}
\paragraph{Binary stack operations}
We start by giving the rules for arithmetic operations. As all of these instructions alter only the local stack and gas, and differ only in the operation performed and the (constant) amount of gas charged, we assume a set $\textit{Inst}_{\textit{bin}}$ of binary operations and functions $\binopcost{\cdot}: \textit{Inst}_{\textit{bin}} \to \integer{256}$ and $\binopfun{\cdot}: \textit{Inst}_{\textit{bin}} \to (\mathbb{B}^{256} \times \mathbb{B}^{256} \to \mathbb{B}^{256})$ that map the binary operations to their costs and functionality.
For all binary operations $\textit{i}_\textit{bin} \in \textit{Inst}_{\textit{bin}}$, we create rules of the following form
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textit{i}_\textit{bin} \\
\simvalid{\access{\mu}{\textsf{gas}}}{\binopcost{\textit{i}_\textit{bin}}}{\size{s} +1} \\
\access{\mu}{\textsf{s}} = \cons{a}{\cons{b}{s}} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{(\binopfun{\textit{i}_\textit{bin}})\, (a, b)}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{\binopcost{\textit{i}_\textit{bin}}}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textit{i}_\textit{bin} \\
(\neg \simvalid{\access{\mu}{\textsf{gas}}}{\binopcost{\textit{i}_\textit{bin}}}{\size{s} +1}
\lor \size{\access{\mu}{\textsf{s}}} < 2)}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
We define
\begin{align*}
\textit{Inst}_{\textit{bin}} := \{ \textsf{ADD}, \textsf{SUB}, \textsf{LT}, \textsf{GT}, \textsf{EQ}, \textsf{AND}, \textsf{OR}, \textsf{XOR}, \textsf{SLT}, \textsf{SGT}, \textsf{MUL}, \textsf{DIV}, \textsf{SDIV}, \\
\textsf{MOD}, \textsf{SMOD}, \textsf{SIGNEXTEND}, \textsf{BYTE} \}
\end{align*}
and
\begin{align*}
\binopcost{\textit{i}_\textit{bin}} =
\begin{cases}
3 & \textit{i}_\textit{bin} \in \{\textsf{ADD}, \textsf{SUB}, \textsf{LT}, \textsf{GT}, \textsf{SLT}, \textsf{SGT}, \textsf{EQ}, \textsf{AND}, \textsf{OR}, \textsf{XOR}, \textsf{BYTE} \} \\
5 &\textit{i}_\textit{bin} \in \{\textsf{MUL}, \textsf{DIV}, \textsf{SDIV}, \textsf{MOD}, \textsf{SMOD}, \textsf{SIGNEXTEND} \} \\
\end{cases}
\end{align*}
and
\begin{align*}
\binopfun{\textit{i}_\textit{bin}} =
\begin{cases}
\lambda (a,b).\, a + b \mod 2^{256} & \textit{i}_\textit{bin} = \textsf{ADD} \\
\lambda (a,b).\,a - b \mod 2^{256} & \textit{i}_\textit{bin} = \textsf{SUB} \\
\lambda (a,b).\, \cond{a < b}{1}{0} & \textit{i}_\textit{bin} = \textsf{LT} \\
\lambda (a,b). \, \cond{a > b}{1}{0} & \textit{i}_\textit{bin} = \textsf{GT} \\
\lambda (a,b). \, \cond{\signed{a} < \signed{b}}{1}{0} & \textit{i}_\textit{bin} = \textsf{SLT} \\
\lambda (a,b). \, \cond{\signed{a} > \signed{b}}{1}{0} & \textit{i}_\textit{bin} = \textsf{SGT} \\
\lambda (a,b). \, \cond{a = b}{1}{0} & \textit{i}_\textit{bin} = \textsf{EQ} \\
\lambda (a,b).\, a \bitand b & \textit{i}_\textit{bin} = \textsf{AND}\\
\lambda (a,b).\, a \| b & \textit{i}_\textit{bin} = \textsf{OR} \\
\lambda (a,b).\, a \oplus b & \textit{i}_\textit{bin} = \textsf{XOR} \\
\lambda (a,b).\, a \cdot b \mod 2^{256} & \textit{i}_\textit{bin} = \textsf{MUL} \\
\lambda (a,b).\, \cond{(b = 0)}{0}{\lfloor a \div b \rfloor} & \textit{i}_\textit{bin} = \textsf{DIV}\\
\lambda (a,b).\, \cond{(b=0)}{0}{a \mod b} & \textit{i}_\textit{bin} = \textsf{MOD}\\
\lambda (a,b).\, (b=0)?~0~:~(a= 2^{255}\land \signed{b} = -1)?~2^{255}~: \\
~\textit{let}~x = \signed{a} \div \signed{b} ~\textit{in}~ \unsigned{(\signof{x} \cdot \lfloor | x| \rfloor)} & \textit{i}_\textit{bin} = \textsf{SDIV}\\
\lambda (a,b).\, \cond{(b=0)}{0}{\unsigned{(\signof{\signed{a}} \cdot (|\signed{a}| \mod |\signed{b}|))}} &\textit{i}_\textit{bin} = \textsf{SMOD}\\
\lambda (o, b). \, \cond{(o \geq 32)}{0}{\concat{\extract{b}{8 \cdot o}{8 \cdot o + 7}}{0^{248}}} & \textit{i}_\textit{bin} = \textsf{BYTE} \\
\lambda (a, b). \, \textit{let}~x = 256-8(a+1) ~\textit{in} \\
~\textit{let}~ s = \arraypos{b}{x} ~\textit{in}~ \concat{s^x}{\extract{b}{x}{255}} & \textit{i}_\textit{bin} = \textsf{SIGNEXTEND}
\end{cases}
\end{align*}
where $\signof{\cdot}: \sinteger{x} \to \{-1, 1\}$ is defined as
\begin{align*}
\signof{x} =
\begin{cases}
1 & x \geq 0 \\
-1 & \text{otherwise}
\end{cases}
\end{align*}
and $\bitand$, $\|$ and $\oplus$ are bitwise and, or and xor, respectively.
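The signed operations deserve a closer look; the following Python sketch spells out the semantics of $\textsf{DIV}$, $\textsf{SDIV}$ and $\textsf{SMOD}$ as defined above (including the overflow case of $\textsf{SDIV}$; helper names are our own):

```python
W = 1 << 256  # word modulus 2^256

def sgn(v):      return 1 if v >= 0 else -1
def signed(v):   return v - W if v >= W // 2 else v
def unsigned(v): return v % W

def evm_div(a, b):
    # DIV: unsigned division, returning 0 on division by zero
    return 0 if b == 0 else a // b

def evm_sdiv(a, b):
    # SDIV: signed division truncating toward zero;
    # -2^255 / -1 overflows and yields 2^255 (i.e. -2^255 again)
    if b == 0:
        return 0
    sa, sb = signed(a), signed(b)
    if sa == -(W // 2) and sb == -1:
        return W // 2
    return unsigned(sgn(sa) * sgn(sb) * (abs(sa) // abs(sb)))

def evm_smod(a, b):
    # SMOD: the sign of the result follows the dividend
    if b == 0:
        return 0
    sa, sb = signed(a), signed(b)
    return unsigned(sgn(sa) * (abs(sa) % abs(sb)))
```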
Exceptions to this uniform treatment of binary operations are exponentiation, as this instruction has non-constant gas costs, and the computation of the Keccak-256 hash.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{EXP} \\
\simvalid{\access{\mu}{\textsf{gas}}}{\textit{c}}{\size{s} +1} \\
\access{\mu}{\textsf{s}} = \cons{a}{\cons{b}{s}} \\
\textit{c} = \cond{(b = 0)}{10}{10 + 10* (1 + \left \lfloor \log_{256}{b} \right \rfloor)} \\
x = (a^b) \mod 2^{256} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{x}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{\textit{c}}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{EXP} \\
\textit{c} = \cond{(b = 0)}{10}{10 + 10* (1 + \left \lfloor \log_{256}{b} \right \rfloor)} \\
\access{\mu}{\textsf{s}} = \cons{a}{\cons{b}{s}} \\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{\textit{c}}{\size{s} +1}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
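The gas cost of $\textsf{EXP}$ can be computed without floating-point logarithms, since $1 + \lfloor \log_{256} b \rfloor$ is exactly the byte length of the exponent $b$ (a sketch in Python):

```python
def exp_cost(b: int) -> int:
    # Base fee of 10 plus 10 per byte of the exponent, as in the rules
    # above; a zero exponent costs only the base fee.
    if b == 0:
        return 10
    byte_len = (b.bit_length() + 7) // 8   # equals 1 + floor(log_256 b)
    return 10 + 10 * byte_len
```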
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{SHA3} \\
\simvalid{\access{\mu}{\textsf{gas}}}{\textit{c}}{\size{s} +1} \\
\access{\mu}{\textsf{s}} = \cons{\textit{pos}}{\cons{\textit{size}}{s}} \\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{\textit{pos}}{\textit{size}}\\
\textit{c} = \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} + 30 + 6 \cdot \left \lceil \frac{\textit{size}}{32} \right \rceil\\
v = \arraypos{\access{\mu}{\textsf{m}}}{\textit{pos}, \textit{pos} + \textit{size} -1}\\
h = \keccak{v}\\
\mu'= \update{\dec{\inc{\update{\mu}{\textsf{s}}{\cons{h}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{\textit{c}}}{\textsf{i}}{\textit{aw}}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
where $\keccak{x}$ is the Keccak-256 hash of $x$.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{SHA3} \\
\access{\mu}{\textsf{s}} = \cons{\textit{pos}}{\cons{\textit{size}}{s}} \\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{\textit{pos}}{\textit{size}}\\
\textit{c} = \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} + 30 + 6 \cdot \left \lceil \frac{\textit{size}}{32} \right \rceil\\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{\textit{c}}{\size{s} +1}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
(\curropcode{\mu}{\iota}= \textsf{EXP} \lor \curropcode{\mu}{\iota}= \textsf{SHA3}) \\
\size{\access{\mu}{\textsf{s}}} < 2}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\paragraph{Unary stack operations}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{ISZERO} \\
\simvalid{\access{\mu}{\textsf{gas}}}{3}{\size{s} +1} \\
\access{\mu}{\textsf{s}} = \cons{a}{s} \\
x = \cond{(a=0)}{1}{0} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{x}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{3}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{NOT} \\
\simvalid{\access{\mu}{\textsf{gas}}}{3}{\size{s} +1} \\
\access{\mu}{\textsf{s}} = \cons{a}{s} \\
x = \neg a \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{x}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{3}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
where $\neg$ is bitwise negation.
\begin{mathpar}
\infer{
(\curropcode{\mu}{\iota}= \textsf{ISZERO} \lor \curropcode{\mu}{\iota}= \textsf{NOT}) \\
(\neg \simvalid{\access{\mu}{\textsf{gas}}}{3}{\size{s} +1}
\lor \size{\access{\mu}{\textsf{s}}} < 1)}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\paragraph{Ternary stack operations}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{ADDMOD} \\
\simvalid{\access{\mu}{\textsf{gas}}}{8}{\size{s} +1} \\
\access{\mu}{\textsf{s}} = \cons{a}{\cons{b}{\cons{c}{s}}} \\
x = \cond{(c=0)}{0}{(a + b) \mod c} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{x}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{8}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{MULMOD} \\
\simvalid{\access{\mu}{\textsf{gas}}}{8}{\size{s} +1} \\
\access{\mu}{\textsf{s}} = \cons{a}{\cons{b}{\cons{c}{s}}} \\
x = \cond{(c=0)}{0}{(a \cdot b) \mod c} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{x}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{8}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
(\curropcode{\mu}{\iota}= \textsf{ADDMOD} \lor \curropcode{\mu}{\iota}= \textsf{MULMOD}) \\
(\neg \simvalid{\access{\mu}{\textsf{gas}}}{8}{\size{\access{\mu}{\textsf{s}}} -2}
\lor \size{\access{\mu}{\textsf{s}}} < 3)}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
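Note that $\textsf{ADDMOD}$ and $\textsf{MULMOD}$ apply the modulus to the full-width intermediate result, i.e., without first reducing the sum or product modulo $2^{256}$; a sketch in Python:

```python
# Modular arithmetic on the unreduced intermediate result,
# returning 0 when the modulus is 0.

def addmod(a: int, b: int, c: int) -> int:
    return 0 if c == 0 else (a + b) % c

def mulmod(a: int, b: int, c: int) -> int:
    return 0 if c == 0 else (a * b) % c
```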
\paragraph{Accessing the execution environment}
There are some simple access operations for accessing parts of the execution environment such as the addresses of the executing account and the caller, the value given to the internal transaction and the sizes of the executed code and the data given as input to the call.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{ADDRESS} \\
\simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{\access{\iota}{\textsf{actor}}}{\access{\mu}{\textsf{s}}}}}{\textsf{pc}}{1}}{\textsf{gas}}{2}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{CALLER} \\
\simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{\access{\iota}{\textsf{sender}}}{\access{\mu}{\textsf{s}}}}}{\textsf{pc}}{1}}{\textsf{gas}}{2}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{CALLVALUE}\\
\simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{\access{\iota}{\textsf{value}}}{\access{\mu}{\textsf{s}}}}}{\textsf{pc}}{1}}{\textsf{gas}}{2}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{CODESIZE}\\
\simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{\size{\access{\iota}{\textsf{code}}}}{\access{\mu}{\textsf{s}}}}}{\textsf{pc}}{1}}{\textsf{gas}}{2}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{CALLDATASIZE}\\
\simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{\size{\access{\iota}{\textsf{input}}}}{\access{\mu}{\textsf{s}}}}}{\textsf{pc}}{1}}{\textsf{gas}}{2}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
(\curropcode{\mu}{\iota}= \textsf{ADDRESS} \lor \curropcode{\mu}{\iota}= \textsf{CALLER} \lor \curropcode{\mu}{\iota}= \textsf{CALLVALUE} \\
\lor \curropcode{\mu}{\iota}= \textsf{CODESIZE} \lor \curropcode{\mu}{\iota}= \textsf{CALLDATASIZE})\\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
Accessing the code and the input data in the execution environment is more involved.
The $\textsf{CALLDATALOAD}$ instruction writes a word (256 bits) of the data given as input to the current call, starting at the offset specified on the stack, to the stack.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{CALLDATALOAD}\\
\access{\mu}{\textsf{s}} = \cons{a}{s}\\
\simvalid{\access{\mu}{\textsf{gas}}}{3}{\size{\access{\mu}{\textsf{s}}}} \\
k = \cond{(\size{\access{\iota}{\textsf{input}}} - a < 0)}{0} {\mini{\size{\access{\iota}{\textsf{input}}} - a}{32}}\\
v' = \arraypos{\access{\iota}{\textsf{input}}}{a, a + k - 1}\\
v = \concat{v'}{0^{256 - k\cdot 8}} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{v}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{3}
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{CALLDATALOAD} \\
(\neg \simvalid{\access{\mu}{\textsf{gas}}}{3}{\size{\access{\mu}{\textsf{s}}}} \lor \size{\access{\mu}{\textsf{s}}} < 1)}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
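The zero-padding performed by $\textsf{CALLDATALOAD}$ when the read range exceeds the available call data can be sketched as follows (Python, our own function name):

```python
# Read 32 bytes of call data starting at offset a; the part beyond the
# end of the data is filled with zero bytes, so the result always has
# length 32.

def calldataload(data: bytes, a: int) -> bytes:
    k = 0 if len(data) - a < 0 else min(len(data) - a, 32)
    return data[a:a + k] + b"\x00" * (32 - k)
```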
The $\textsf{CALLDATACOPY}$ instruction copies the data that was given as input to the current call to the memory.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{CALLDATACOPY}\\
\access{\mu}{\textsf{s}} = \cons{\textit{pos}_{\textsf{m}}}{\cons{\textit{pos}_\textsf{d}}{\cons{\textit{size}}{s}}}\\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{\textit{pos}_\textsf{m}}{\textit{size}} \\
\textit{c} = \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}}+3 + 3 \cdot \left \lceil \frac{\textit{size}}{32} \right \rceil\\
\simvalid{\access{\mu}{\textsf{gas}}}{\textit{c}}{\size{\access{\mu}{\textsf{s}}}} \\
k = \cond{(\size{\access{\iota}{\textsf{input}}} - \textit{pos}_\textsf{d} < 0)}{0}{\mini{\size{\access{\iota}{\textsf{input}}}- \textit{pos}_\textsf{d}}{\textit{size}}}\\
d' = \arraypos{\access{\iota}{\textsf{input}}}{\textit{pos}_\textsf{d}, \textit{pos}_\textsf{d} + k - 1}\\
d = \concat{d'}{0^{8 \cdot(\textit{size} - k)}}\\
\mu'= \update{\update{\dec{\inc{\update{\mu}{\textsf{s}}{s}}{\textsf{pc}}{1}}{\textsf{gas}}{\textit{c}}}{\textsf{m}}{\update{\textsf{m}}{[\textit{pos}_\textsf{m}, \textit{pos}_\textsf{m} + \textit{size} - 1]}{d}}}{\textsf{i}}{\textit{aw}}
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
The $\textsf{CODECOPY}$ instruction copies the code that is currently executed to the memory.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{CODECOPY} \\
\access{\mu}{\textsf{s}} = \cons{\textit{pos}_{\textsf{m}}}{\cons{\textit{pos}_\textit{code}}{\cons{\textit{size}}{s}}}\\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{\textit{pos}_\textsf{m}}{\textit{size}} \\
\textit{c} = \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}}+3 + 3 \cdot \left \lceil \frac{\textit{size}}{32} \right \rceil\\
\simvalid{\access{\mu}{\textsf{gas}}}{\textit{c}}{\size{\access{\mu}{\textsf{s}}}} \\
k = \cond{(\size{\access{\iota}{\textsf{code}}} - \textit{pos}_\textit{code} < 0)}{0}{\mini{\size{\access{\iota}{\textsf{code}}}- \textit{pos}_\textit{code}}{\textit{size}}}\\
d' = \arraypos{\access{\iota}{\textsf{code}}}{\textit{pos}_\textit{code}, \textit{pos}_\textit{code} + k - 1}\\
d = \concat{d'}{\textsf{STOP}^{\textit{size} - k}}\\
\mu'= \update{\update{\dec{\inc{\update{\mu}{\textsf{s}}{s}}{\textsf{pc}}{1}}{\textsf{gas}}{\textit{c}}}{\textsf{m}}{\update{\textsf{m}}{[\textit{pos}_\textsf{m}, \textit{pos}_\textsf{m} + \textit{size} - 1]}{d}}}{\textsf{i}}{\textit{aw}}
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
(\curropcode{\mu}{\iota}= \textsf{CODECOPY} \lor \curropcode{\mu}{\iota} = \textsf{CALLDATACOPY})\\
\size{\access{\mu}{\textsf{s}}} < 3}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
(\curropcode{\mu}{\iota}= \textsf{CODECOPY} \lor \curropcode{\mu}{\iota}= \textsf{CALLDATACOPY})\\
\access{\mu}{\textsf{s}} = \cons{\textit{pos}_{\textsf{m}}}{\cons{\textit{pos}_\textit{code}}{\cons{\textit{size}}{s}}}\\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{\textit{pos}_\textsf{m}}{\textit{size}} \\
\textit{c} = \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}}+3 + 3 \cdot \left \lceil \frac{\textit{size}}{32} \right \rceil\\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{\textit{c}}{\size{\access{\mu}{\textsf{s}}}}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
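Both copy instructions share the same copy-with-padding scheme: $\textit{size}$ bytes are read from the source at the given offset, and the part beyond the end of the source is padded, with zero bytes for call data and with $\textsf{STOP}$ bytes for code (which coincide, since $\textsf{STOP}$ is encoded as \texttt{0x00}). A sketch in Python:

```python
# Shared copy semantics of CALLDATACOPY and CODECOPY: bytes beyond the
# end of `src` are replaced by the padding byte.

def copy_with_padding(src: bytes, off: int, size: int, pad: int = 0x00) -> bytes:
    k = 0 if len(src) - off < 0 else min(len(src) - off, size)
    return src[off:off + k] + bytes([pad]) * (size - k)
```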
\paragraph{Accessing the transaction environment}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{ORIGIN} \\
\simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{\access{\transenv}{\originator}}{\access{\mu}{\textsf{s}}}}}{\textsf{pc}}{1}}{\textsf{gas}}{2}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{GASPRICE} \\
\simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{\access{\transenv}{\textsf{price}}}{\access{\mu}{\textsf{s}}}}}{\textsf{pc}}{1}}{\textsf{gas}}{2}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
(\curropcode{\mu}{\iota}= \textsf{ORIGIN} \lor \curropcode{\mu}{\iota}= \textsf{GASPRICE}) \\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
The $\textsf{BLOCKHASH}$ command writes the hash of one of the $256$ most recently completed blocks (whose number is specified on the stack) to the stack:
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{BLOCKHASH} \\
\simvalid{\access{\mu}{\textsf{gas}}}{20}{\size{\access{\mu}{\textsf{s}}}} \\
\access{\mu}{\textsf{s}} = \cons{n}{s} \\
h = \funP{\access{\iota}{\textsf{parent}}}{n}{0} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{h}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{20}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{BLOCKHASH} \\
(\neg \simvalid{\access{\mu}{\textsf{gas}}}{20}{\size{\access{\mu}{\textsf{s}}}}
\lor \size{\access{\mu}{\textsf{s}}} < 1)}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
where the function $\funP{h}{n}{a}$ tries to access the block with number $n$ by traversing the blockchain starting from block $h$ until either the counter $a$ reaches the limit of $256$ or the genesis block is reached.
\begin{align*}
\funP{h}{n}{a} :=
\begin{cases}
0 & n > \access{h}{\textsf{number}} \lor a = 256 \lor h = 0 \\
h & n = \access{h}{\textsf{number}} \\
\funP{\access{h}{\textsf{parent}}}{n}{a + 1} & \text{otherwise}
\end{cases}
\end{align*}
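For illustration, the recursion defining $P$ can be mirrored directly in executable form. The \texttt{Block} record and its field names are hypothetical and chosen only for this sketch; \texttt{None} plays the role of the genesis parent $0$:

```python
# Sketch of the block-lookup function P(h, n, a): walk the chain from
# block h towards the genesis block for at most 256 steps, looking for
# the block with number n; return 0 (as in the rule) if it is out of range.
class Block:
    def __init__(self, number, parent):
        self.number = number
        self.parent = parent  # None models the genesis parent "0"

def P(h, n, a=0):
    if h is None or a == 256 or n > h.number:
        return 0
    if n == h.number:
        return h
    return P(h.parent, n, a + 1)

# Build a tiny chain with blocks 0 .. 300 and query it.
head = None
for i in range(301):
    head = Block(i, head)

assert P(head, 300) is head        # the head itself
assert P(head, 301) == 0           # a future block: unreachable
assert P(head, 10) == 0            # older than 256 blocks: out of range
assert P(head, 50).number == 50    # within the last 256 blocks
```

Note that blocks older than $256$ positions are cut off by the counter $a$, not by an explicit age check.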
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{COINBASE} \\
\simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{\access{(\access{\transenv}{H})}{\textsf{beneficiary}}}{\access{\mu}{\textsf{s}}}}}{\textsf{pc}}{1}}{\textsf{gas}}{2}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{TIMESTAMP}\\
\simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{\access{(\access{\transenv}{H})}{\textsf{timestamp}}}{\access{\mu}{\textsf{s}}}}}{\textsf{pc}}{1}}{\textsf{gas}}{2}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{NUMBER}\\
\simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{\access{(\access{\transenv}{H})}{\textsf{number}}}{\access{\mu}{\textsf{s}}}}}{\textsf{pc}}{1}}{\textsf{gas}}{2}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{DIFFICULTY}\\
\simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{\access{(\access{\transenv}{H})}{\difficultyc}}{\access{\mu}{\textsf{s}}}}}{\textsf{pc}}{1}}{\textsf{gas}}{2}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{GASLIMIT}\\
\simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{\access{(\access{\transenv}{H})}{\textsf{gaslimit}}}{\access{\mu}{\textsf{s}}}}}{\textsf{pc}}{1}}{\textsf{gas}}{2}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
(\curropcode{\mu}{\iota}= \textsf{COINBASE} \lor \curropcode{\mu}{\iota}= \textsf{TIMESTAMP} \lor \curropcode{\mu}{\iota}= \textsf{NUMBER} \\
\lor \curropcode{\mu}{\iota}= \textsf{DIFFICULTY} \lor \curropcode{\mu}{\iota}= \textsf{GASLIMIT})\\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\paragraph{Accessing the global state}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{BALANCE}\\
\access{\mu}{\textsf{s}} = \cons{a}{s} \\
\simvalid{\access{\mu}{\textsf{gas}}}{400}{\size{s} +1} \\
b = \cond{(\sigma(a \mod 2^{160}) = \accountstate{\textit{nonce}}{\textit{balance}}{\textit{stor}}{\textit{code}})}{\textit{balance}}{0}\\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{b}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{400}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{BALANCE} \\
(\neg \simvalid{\access{\mu}{\textsf{gas}}}{400}{\size{\access{\mu}{\textsf{s}}}}
\lor \size{\access{\mu}{\textsf{s}}} < 1)}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{EXTCODESIZE}\\
\access{\mu}{\textsf{s}} = \cons{a}{s} \\
\simvalid{\access{\mu}{\textsf{gas}}}{700}{\size{s} +1} \\
\textit{size} = \size{\access{\sigma(a \mod 2^{160})}{\textsf{code}}} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{\textit{size}}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{700}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{EXTCODESIZE} \\
(\neg \simvalid{\access{\mu}{\textsf{gas}}}{700}{\size{\access{\mu}{\textsf{s}}}}
\lor \size{\access{\mu}{\textsf{s}}} < 1)}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{EXTCODECOPY} \\
\access{\mu}{\textsf{s}} = \cons{a}{\cons{\textit{pos}_{\textsf{m}}}{\cons{\textit{pos}_\textit{code}}{\cons{\textit{size}}{s}}}}\\
\textit{code} = \access{\sigma(a \mod 2^{160})}{\textsf{code}} \\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{\textit{pos}_\textsf{m}}{\textit{size}} \\
\textit{c} = \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}}+700 + 3 \cdot \left \lceil \frac{\textit{size}}{32} \right \rceil\\
\simvalid{\access{\mu}{\textsf{gas}}}{\textit{c}}{\size{\access{\mu}{\textsf{s}}}} \\
k = \cond{(\size{\textit{code}} - \textit{pos}_\textit{code} < 0)}{0}{\mini{\size{\textit{code}}- \textit{pos}_\textit{code}}{\textit{size}}}\\
d' = \arraypos{\textit{code}}{\textit{pos}_\textit{code}, \textit{pos}_\textit{code} + k - 1}\\
d = \concat{d'}{\textsf{STOP}^{\textit{size} - k}}\\
\mu'= \update{\update{\dec{\inc{\update{\mu}{\textsf{s}}{s}}{\textsf{pc}}{1}}{\textsf{gas}}{\textit{c}}}{\textsf{m}}{\update{\textsf{m}}{[\textit{pos}_\textsf{m}, \textit{pos}_\textsf{m} + \textit{size} - 1]}{d}}}{\textsf{i}}{\textit{aw}}
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{EXTCODECOPY} \\
\size{\access{\mu}{\textsf{s}}} < 4}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{EXTCODECOPY} \\
\access{\mu}{\textsf{s}} = \cons{a}{\cons{\textit{pos}_{\textsf{m}}}{\cons{\textit{pos}_\textit{code}}{\cons{\textit{size}}{s}}}}\\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{\textit{pos}_\textsf{m}}{\textit{size}} \\
\textit{c} = \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}}+ 700 + 3 \cdot \left \lceil \frac{\textit{size}}{32} \right \rceil\\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{\textit{c}}{\size{\access{\mu}{\textsf{s}}}}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
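The copy-with-padding behaviour of the $\textsf{EXTCODECOPY}$ rule (reading $k$ in-range bytes of the code and filling the remaining $\textit{size} - k$ bytes with the $\textsf{STOP}$ opcode) can be made concrete as follows. The function name is chosen for this sketch only, and we assume the byte encoding of $\textsf{STOP}$ is \texttt{0x00}:

```python
STOP = 0x00  # assumed byte encoding of the STOP opcode

def copy_code(code, pos_code, size):
    """Read `size` bytes of `code` starting at `pos_code`; out-of-range
    positions are padded with STOP, mirroring k, d' and d in the rule."""
    k = 0 if len(code) - pos_code < 0 else min(len(code) - pos_code, size)
    return code[pos_code:pos_code + k] + bytes([STOP]) * (size - k)

code = bytes([0x60, 0x01, 0x60, 0x02])            # some example bytecode
assert copy_code(code, 0, 4) == code               # fully in range
assert copy_code(code, 2, 4) == bytes([0x60, 0x02, 0x00, 0x00])
assert copy_code(code, 10, 3) == bytes([0x00, 0x00, 0x00])  # fully padded
```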
\paragraph{Stack operations}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{POP} \\
\simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{s}} \\
\access{\mu}{\textsf{s}} = \cons{a}{s} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{s}}{\textsf{pc}}{1}}{\textsf{gas}}{2}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{POP} \\
(\neg \simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}}} \lor \size{\access{\mu}{\textsf{s}}} < 1)}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
There are $32$ instructions for pushing values to the stack. We summarize the behavior of all these instructions with the following rules by parameterising the instruction with the number of subsequent bytes that are pushed to the stack.
The $\PUSH{n}$ command (with $n \in [1, 32]$) pushes the bytes at the next $n$ program counter positions to the stack.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \PUSH{n} \\
k = \mini{\size{\access{\iota}{\textit{code}}}}{\access{\mu}{\textsf{pc}} + n} \\
\simvalid{\access{\mu}{\textsf{gas}}}{3}{\size{\access{\mu}{\textsf{s}}} + 1} \\
d = \arraypos{\access{\iota}{\textit{code}}}{\access{\mu}{\textsf{pc}} + 1, k} \\
d' = \concat{d}{0^{8 \cdot (32 - (k - \access{\mu}{\textsf{pc}} ))}} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{d'}{\access{\mu}{\textsf{s}}}}}{\textsf{pc}}{(n+1)}}{\textsf{gas}}{3}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \PUSH{n} \\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{3}{\size{\access{\mu}{\textsf{s}}} + 1}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
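The byte-reading behaviour of $\PUSH{n}$ can be sketched as follows; the function name and the $0$-indexed byte-array representation of the code are choices of this illustration only. As in the rule, missing bytes past the end of the code are treated as zero, and the result is padded to a full $32$-byte word:

```python
def push(code, pc, n):
    """Mirror of the PUSHn rule: read the n bytes after position pc
    (missing bytes are treated as zero), pad the result to 32 bytes
    as in d' = d . 0^(8*(32-(k-pc))), and advance pc by n+1."""
    k = min(len(code), pc + n)
    d = code[pc + 1:k + 1]           # bytes at positions pc+1 .. k
    d_padded = d + bytes(32 - (k - pc))
    return d_padded, pc + n + 1      # pushed word and new program counter

code = bytes([0x60, 0xAA])           # PUSH1 0xAA
word, new_pc = push(code, 0, 1)
assert word[0] == 0xAA and len(word) == 32
assert new_pc == 2
```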
The $\DUP{n}$ instructions (with $n \in [1, 16]$) duplicate the $n$th stack element:
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \DUP{n} \\
\simvalid{\access{\mu}{\textsf{gas}}}{3}{\size{\access{\mu}{\textsf{s}}} +1} \\
\access{\mu}{\textsf{s}} = \concatstack{s_1}{(\cons{x_n}{s_2})}\\
\size{s_1} = n-1 \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{x_n}{\access{\mu}{\textsf{s}}}}}{\textsf{pc}}{1}}{\textsf{gas}}{3}
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \DUP{n} \\
( \neg \simvalid{\access{\mu}{\textsf{gas}}}{3}{\size{\access{\mu}{\textsf{s}}} +1}
\lor \size{\access{\mu}{\textsf{s}}} < n)}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
The $\SWAP{n}$ instructions (with $n \in [1, 16]$) swap the first and the $n$th stack element:
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \SWAP{n} \\
\simvalid{\access{\mu}{\textsf{gas}}}{3}{\size{\access{\mu}{\textsf{s}}}} \\
\access{\mu}{\textsf{s}} = \cons{y}{(\concatstack{s_1}{(\cons{x_n}{s_2})})} \\
\size{s_1} = n-1 \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{x_n}{(\concatstack{s_1}{(\cons{y}{s_2})})}}}{\textsf{pc}}{1}}{\textsf{gas}}{3}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \SWAP{n} \\
( \neg \simvalid{\access{\mu}{\textsf{gas}}}{3}{\size{\access{\mu}{\textsf{s}}}}
\lor \size{\access{\mu}{\textsf{s}}} < n + 1)}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
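The list decompositions used in the $\DUP{n}$ and $\SWAP{n}$ rules correspond to the following operations on a stack represented as a list with the topmost element first (the function names are chosen for this sketch only):

```python
def dup(stack, n):
    """DUPn: push a copy of the n-th stack element (1-indexed)."""
    return [stack[n - 1]] + stack

def swap(stack, n):
    """SWAPn: exchange the top element with the (n+1)-th element,
    i.e. x_n in the decomposition y :: s1 :: x_n :: s2 with |s1| = n-1."""
    s = list(stack)
    s[0], s[n] = s[n], s[0]
    return s

st = [10, 20, 30, 40]
assert dup(st, 3) == [30, 10, 20, 30, 40]
assert swap(st, 2) == [30, 20, 10, 40]
```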
\paragraph{Jumps}
The $\textsf{JUMP}$ command updates the program counter to $i$ (specified in the stack) if $i$ is a valid jump destination.
\begin{mathpar}
\infer
{\curropcode{\mu}{\iota} = \textsf{JUMP} \\
\simvalid{\access{\mu}{\textsf{gas}}}{8}{\size{s}} \\
\access{\mu}{\textsf{s}} = \cons{i}{s} \\
i \in \funD{\access{\iota}{\textit{code}}} \\
\mu' = \dec{\update{\update{\mu}{\textsf{s}}{s}}{\textsf{pc}}{i}}{\textsf{gas}}{8}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer
{\curropcode{\mu}{\iota} = \textsf{JUMP} \\
\access{\mu}{\textsf{s}} = \cons{i}{s} \\
(i \not \in \funD{\access{\iota}{\textit{code}}} \lor
\neg \simvalid{\access{\mu}{\textsf{gas}}}{8}{\size{s}})}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{JUMP} \\
\size{\access{\mu}{\textsf{s}}} < 1}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
The conditional jump command $\textsf{JUMPI}$ jumps to position $i$ only if the condition value $b$ (taken from the stack) is non-zero; otherwise execution continues at the next instruction.
\begin{mathpar}
\infer
{\curropcode{\mu}{\iota} = \textsf{JUMPI} \\
\simvalid{\access{\mu}{\textsf{gas}}}{10}{\size{s}} \\
\access{\mu}{\textsf{s}} = \cons{i}{\cons{b}{s}} \\
i \in \funD{\access{\iota}{\textit{code}}} \\
j = \cond{(b=0)}{\access{\mu}{\textsf{pc}}+1}{i} \\
\mu' = \dec{\update{\update{\mu}{\textsf{s}}{s}}{\textsf{pc}}{j}}{\textsf{gas}}{10}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer
{\curropcode{\mu}{\iota} = \textsf{JUMPI} \\
\access{\mu}{\textsf{s}} = \cons{i}{\cons{b}{s}} \\
(i \not \in \funD{\access{\iota}{\textit{code}}} \lor
\neg \simvalid{\access{\mu}{\textsf{gas}}}{10}{\size{s}})}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{JUMPI} \\
\size{\access{\mu}{\textsf{s}}} < 2}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
The $\textsf{JUMPDEST}$ command marks a valid jump destination. It does not trigger any computation, so its only effects are incrementing the program counter and charging the fee for the command execution.
\begin{mathpar}
\infer
{\curropcode{\mu}{\iota} = \textsf{JUMPDEST} \\
\simvalid{\access{\mu}{\textsf{gas}}}{1}{\size{\access{\mu}{\textsf{s}}}} \\
\mu' = \dec{\inc{\mu}{\textsf{pc}}{1}}{\textsf{gas}}{1}
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{JUMPDEST} \\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{1}{\size{\access{\mu}{\textsf{s}}}}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\paragraph{Local memory operations}
The $\textsf{MLOAD}$ command reads a word from the local memory at the address $a$ specified on the stack and pushes it to the stack. Note that this may increase the number of active words in memory and therefore cause additional cost.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{MLOAD} \\
c = \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} + 3 \\
\simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s} + 1} \\
\access{\mu}{\textsf{s}} = \cons{a}{s} \\
v = \access{\mu}{\textsf{m}}[a, a +31]\\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{a}{32} \\
\mu'= \dec{\inc{\update{\update{\mu}{\textsf{i}}{\textit{aw}}}{\textsf{s}}{\cons{v}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{c}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{MLOAD} \\
\size{\access{\mu}{\textsf{s}}} < 1}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{MLOAD} \\
\access{\mu}{\textsf{s}} = \cons{a}{s} \\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{a}{32} \\
c = \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} + 3 \\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s} + 1}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
The $\textsf{MSTORE}$ command writes a value $b$ (taken from the stack) to address $a$ in the local memory.
\textit{Notice that we slightly abuse the update notation here to update whole intervals of the local memory.}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{MSTORE} \\
c = \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} + 3 \quad
\access{\mu}{\textsf{s}} = \cons{a}{\cons{b}{s}} \\
\simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s}} \\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{a}{32} \\
\mu'= \dec{\inc{\update{\update{\update{\mu}{\textsf{m}}{\update{\access{\mu}{\textsf{m}}}{[a, a+31]}{\bitstringtobytearray{b}}}}{\textsf{i}}{\textit{aw}}}{\textsf{s}}{s}}{\textsf{pc}}{1}}{\textsf{gas}}{c}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{MSTORE} \\
\access{\mu}{\textsf{s}} = \cons{a}{\cons{b}{s}} \\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{a}{32} \\
c = \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} + 3 \\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s}}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{MSTORE8} \\
c = \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} + 3 \quad
\access{\mu}{\textsf{s}} = \cons{a}{\cons{b}{s}} \\
\simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s}} \\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{a}{1} \\
\mu'= \dec{\inc{\update{\update{\update{\mu}{\textsf{m}}{\update{\access{\mu}{\textsf{m}}}{a}{b \mod 256}}}{\textsf{i}}{\textit{aw}}}{\textsf{s}}{s}}{\textsf{pc}}{1}}{\textsf{gas}}{c}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{MSTORE8} \\
\access{\mu}{\textsf{s}} = \cons{a}{\cons{b}{s}} \\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{a}{1} \\
c = \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} + 3 \\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s}}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
(\curropcode{\mu}{\iota}= \textsf{MSTORE}
\lor \curropcode{\mu}{\iota}= \textsf{MSTORE8})\\
\size{\access{\mu}{\textsf{s}}} < 2}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\paragraph{Persistent storage operations}
The $\textsf{SLOAD}$ command reads the executing account's persistent storage at position $a$.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{SLOAD} \\
\simvalid{\access{\mu}{\textsf{gas}}}{200}{\size{s} + 1} \\
\access{\mu}{\textsf{s}} = \cons{a}{s} \\
\mu' = \dec{\inc{\update{\mu}{\textsf{s}}{\cons{(\access{\sigma(\access{\iota}{\textit{addr}})}{\textsf{stor}})(a)}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{200}
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{SLOAD} \\
(\neg \simvalid{\access{\mu}{\textsf{gas}}}{200}{\size{s} + 1} \lor \size{\access{\mu}{\textsf{s}}} < 1)}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
The $\textsf{SSTORE}$ command stores the value $b$ in the executing account's persistent storage at position $a$.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{SSTORE} \\
c = \cond{(b \neq 0 \land (\access{\sigma(\access{\iota}{\textit{addr}})}{\textsf{stor}})(a) = 0)}{20000}{5000} \\
\simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s}} \\
\access{\mu}{\textsf{s}} = \cons{a}{\cons{b}{s}} \\
\mu' = \dec{\inc{\update{\mu}{\textsf{s}}{s}}{\textsf{pc}}{1}}{\textsf{gas}}{c} \\
\sigma' = \updategstate{\sigma}{\access{\iota}{\textit{addr}}}{\update{\sigma(\access{\iota}{\textit{addr}})}{\textsf{stor}}{\update{\access{\sigma(\access{\iota}{\textit{addr}})}{\textsf{stor}}}{a}{b}}} \\
r = \cond{(b = 0 \land (\access{\sigma(\access{\iota}{\textit{addr}})}{\textsf{stor}})(a) \neq 0)}{15000}{0} \\
\eta' = \inc{\eta}{\textsf{balance}}{r}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma'}{\eta'}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{SSTORE} \\
\access{\mu}{\textsf{s}} = \cons{a}{\cons{b}{s}} \\
c = \cond{(b \neq 0 \land (\access{\sigma(\access{\iota}{\textit{addr}})}{\textsf{stor}})(a) = 0)}{20000}{5000} \\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s}}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{SSTORE} \\
\size{\access{\mu}{\textsf{s}}} < 2}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
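The case analysis for the $\textsf{SSTORE}$ cost $c$ and refund $r$ can be made explicit; this sketch only mirrors the two conditionals of the rule, with hypothetical names for the old storage value and the value being written:

```python
def sstore_cost_and_refund(old, new):
    """Cost charged and refund granted by the SSTORE rule, depending on
    the old value in storage at position a and the new value b."""
    cost = 20000 if new != 0 and old == 0 else 5000     # fresh write vs. update
    refund = 15000 if new == 0 and old != 0 else 0      # clearing a slot
    return cost, refund

assert sstore_cost_and_refund(0, 1) == (20000, 0)    # writing to an empty slot
assert sstore_cost_and_refund(1, 2) == (5000, 0)     # overwriting a value
assert sstore_cost_and_refund(1, 0) == (5000, 15000) # clearing grants a refund
assert sstore_cost_and_refund(0, 0) == (5000, 0)
```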
\paragraph{Accessing the machine state}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{PC} \\
\simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{\access{\mu}{\textsf{pc}}}{\access{\mu}{\textsf{s}}}}}{\textsf{pc}}{1}}{\textsf{gas}}{2}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{MSIZE} \\
\simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{32 \cdot \access{\mu}{\textsf{i}}}{\access{\mu}{\textsf{s}}}}}{\textsf{pc}}{1}}{\textsf{gas}}{2}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{GAS} \\
\simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1} \\
\mu'= \dec{\inc{\update{\mu}{\textsf{s}}{\cons{\access{\mu}{\textsf{gas}}}{\access{\mu}{\textsf{s}}}}}{\textsf{pc}}{1}}{\textsf{gas}}{2}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
(\curropcode{\mu}{\iota}= \textsf{PC} \lor \curropcode{\mu}{\iota}= \textsf{MSIZE} \lor \curropcode{\mu}{\iota}= \textsf{GAS}) \\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{2}{\size{\access{\mu}{\textsf{s}}} +1}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\paragraph{Logging instructions}
The logging operation appends a new log entry to the log series. The log series keeps track of archived and indexable `checkpoints' in the execution of Ethereum bytecode. Its purpose is to allow external observers to track the program execution.
A log entry consists of the address of the currently executing account, up to four `topics' (specified on the stack) and a fraction of the memory.
There are four logging instructions, but as before we describe their effects using common rules, parameterising the instruction by the number $n$ of topics read from the stack.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \LOG{n} \\
\access{\mu}{\textsf{s}} = \cons{\textit{pos}_\textsf{m}}{\cons{\textit{size}}{(\concatstack{s_1}{s_2})}}\\
\size{s_1} = n \\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{\textit{pos}_\textsf{m}}{\textit{size}} \\
\textit{c} = \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} + 375 + 8 \cdot \textit{size} + n \cdot 375 \\
\simvalid{\access{\mu}{\textsf{gas}}}{\textit{c}}{\size{\access{\mu}{\textsf{s}}}} \\
\mu'= \update{\dec{\inc{\update{\mu}{\textsf{s}}{s}}{\textsf{pc}}{1}}{\textsf{gas}}{\textit{c}}}{\textsf{i}}{\textit{aw}} \\
d= \access{\mu}{\textsf{m}}[\textit{pos}_\textsf{m}, \textit{pos}_\textsf{m} + \textit{size} -1] \\
\eta' = \update{\eta}{\textsf{L}}{\concatstack{\access{\eta}{\textsf{L}}}{[(\access{\iota}{\textsf{actor}}, s_1, d)]}}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta'}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \LOG{n} \\
\access{\mu}{\textsf{s}} = \cons{\textit{pos}_\textsf{m}}{\cons{\textit{size}}{(\concatstack{s_1}{s_2})}}\\
\size{s_1} = n \\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{\textit{pos}_\textsf{m}}{\textit{size}} \\
\textit{c} = \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} + 375 + 8 \cdot \textit{size} + n \cdot 375 \\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{\textit{c}}{\size{\access{\mu}{\textsf{s}}}} }
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \LOG{n} \\
\size{\access{\mu}{\textsf{s}}} < n + 2 }
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\paragraph{Halting instructions}
The execution of a $\textsf{RETURN}$ command requires reading data from the local memory, so the cost for memory consumption is charged. Additionally, the read data is recorded in the halting state in order to potentially propagate it to the caller.
\begin{mathpar}
\infer
{\curropcode{\mu}{\iota}= \textsf{RETURN} \\
\access{\mu}{\textsf{s}} = \cons{\textit{io}}{\cons{\textit{is}}{s}} \\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}} \\
\textit{c} = \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} \\
\simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s}} \\
d = \access{\mu}{\textsf{m}}[\textit{io}, \textit{io}+ \textit{is} - 1] \\
g = \access{\mu}{\textsf{gas}} - \textit{c}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\haltstatefull{\sigma}{g}{d}{\eta}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer
{\curropcode{\mu}{\iota}= \textsf{RETURN} \\
\access{\mu}{\textsf{s}} = \cons{\textit{io}}{\cons{\textit{is}}{s}} \\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}} \\
\textit{c} = \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} \\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s}}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{RETURN} \\
\size{\access{\mu}{\textsf{s}}} < 2 }
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
The execution of a $\textsf{STOP}$ command halts execution without propagating any data to the caller.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} = \textsf{STOP} \\
g = \access{\mu}{\textsf{gas}}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\haltstatefull{\sigma}{g}{\epsilon}{\eta}}{\callstack}}}
\end{mathpar}
The $\textsf{SELFDESTRUCT}$ instruction deletes the currently executing account. It takes one argument from the stack: the address $a_{\textit{ben}}$ of the beneficiary that should receive the balance of the self-destructing account.
We distinguish the case where the beneficiary is an existing account from the one where it still needs to be created. In the latter case an additional fee is charged.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} = \textsf{SELFDESTRUCT} \\
\access{\mu}{\textsf{s}} = \cons{a_\textit{ben}}{s} \\
a = a_\textit{ben} \mod 2^{160} \\
\sigma(a) \neq \bot \\
\simvalid{\access{\mu}{\textsf{gas}}}{5000}{\size{s}} \\
g = \access{\mu}{\textsf{gas}} - 5000 \\
\sigma' = \updategstate{\updategstate{\sigma}{\access{\iota}{\textsf{actor}}}{\update{\sigma(\access{\iota}{\textsf{actor}})}{\textsf{balance}}{0}}}{a}{\inc{\sigma(a)}{\textsf{balance}}{\access{\sigma(\access{\iota}{\textsf{actor}})}{\textsf{balance}}}} \\
r = \cond{(\access{\iota}{\textsf{actor}} \in \access{\eta}{\textsf{S}_\dagger})}{0}{24000} \\
\eta' = \inc{\update{\eta}{\textsf{S}_{\dagger}}{\access{\eta}{\textsf{S}_{\dagger}} \cup \{ \access{\iota}{\textsf{actor}} \} }}{\textsf{balance}}{r}
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\haltstatefull{\sigma'}{g}{\epsilon}{\eta'}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} = \textsf{SELFDESTRUCT} \\
\access{\mu}{\textsf{s}} = \cons{a_\textit{ben}}{s} \\
a = a_\textit{ben} \mod 2^{160} \\
\sigma(a) = \bot \\
\simvalid{\access{\mu}{\textsf{gas}}}{37000}{\size{s}} \\
g = \access{\mu}{\textsf{gas}} - 37000 \\
\sigma' = \updategstate{\updategstate{\sigma}{\access{\iota}{\textsf{actor}}}{\update{\sigma(\access{\iota}{\textsf{actor}})}{\textsf{balance}}{0}}}{a}{\account{0}{\access{\sigma(\access{\iota}{\textsf{actor}})}{\textsf{balance}}}{\lam{x}{0}}{\epsilon}} \\
r = \cond{(\access{\iota}{\textsf{actor}} \in \access{\eta}{\textsf{S}_\dagger})}{0}{24000} \\
\eta' = \inc{\update{\eta}{\textsf{S}_{\dagger}}{\access{\eta}{\textsf{S}_{\dagger}} \cup \{ \access{\iota}{\textsf{actor}} \} }}{\textsf{balance}}{r}
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\haltstatefull{\sigma'}{g}{\epsilon}{\eta'}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{SELFDESTRUCT} \\
\access{\mu}{\textsf{s}} = \cons{a_\textit{ben}}{s} \\
a = a_\textit{ben} \mod 2^{160} \\
\textit{c} = \cond{(\sigma(a) = \bot)}{37000}{5000} \\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{\textit{c}}{\size{s}}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{SELFDESTRUCT} \\
\size{\access{\mu}{\textsf{s}}} < 1}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
There is a designated invalid instruction that always causes an exception:
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}= \textsf{INVALID}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\paragraph{Calling}
The $\textsf{CALL}$ command initiates the execution of a (potentially different) account's code. To this end, it gets as parameters the gas $g$ to be spent on the execution, the address $\textit{to}$ of the destination account, and the value $\textit{va}$ to be transferred to the destination account. Additionally, a fragment of the local memory containing input data for the called code is specified (by $\textit{io}$ and $\textit{is}$), and another fragment where the return values of the call are expected (specified by $\textit{oo}$ and $\textit{os}$).
If the recipient $\textit{to}$ exists, the balance of the calling account $\access{\iota}{\textsf{actor}}$ is sufficient to transfer $\textit{va}$, and the call stack limit is not yet reached, the recipient $\textit{to}$ gets the value $\textit{va}$ transferred from the calling account $\access{\iota}{\textsf{actor}}$. The input data $\textsf{input}$ to the call is read from the local memory and written to the execution environment. Additionally, the execution environment is updated with the information on the originator $\textsf{sender}$, the owner of the currently executed code $\textsf{actor}$, and the code to be executed (that is, the code of the called account).
The execution of the called code then starts in the updated execution environment and with an empty machine state.
We introduce some helper functions to simplify the cost calculations.
First, we introduce a function that calculates the base costs for executing a $\textsf{CALL}$ command (not including the costs for memory consumption and the amount of gas given to the callee).
\begin{align*}
\basecosts{\textit{va}}{\textit{flag}} &= 700 + (\cond{\textit{va} = 0}{0}{6500}) +(\cond{\textit{flag} = 0}{25000}{0})
\end{align*}
The base costs include a fixed amount ($700$ units of gas) for calling, and additional fees depending on whether ether is transferred and whether a new account needs to be created.
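The base-cost function can be sketched in Python as follows; this is a direct transcription of the definition above, where $\textit{flag}$ is $1$ if the callee account already exists and $0$ if it needs to be created:

```python
# Sketch of the base-cost function for CALL, transcribed from the definition
# above. All constants are in units of gas; va is the value to be transferred,
# flag is 1 if the callee account exists and 0 if it must be created.
def base_costs(va, flag):
    return 700 + (0 if va == 0 else 6500) + (25000 if flag == 0 else 0)
```

Note that the $6500$ here is $2300$ less than the value-transfer fee used in the gas-capacity computation below, reflecting the stipend that is added to the gas given to the callee.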
Next, we introduce the function computing the amount of gas given to a call. This value depends on the amount of ether transferred during the call, on the amount of gas specified on the stack that should be given to the call, on the amount of local gas still available to the caller, and on whether a new account needs to be created or not.
\begin{align*}
\gascapacity{\textit{va}}{\textit{flag}}{g}{\textit{gas}} &= \\
& \textit{let} \; c_\textit{ex} = 700 + (\cond{\textit{va} = 0}{0}{9000}) + (\cond{\textit{flag} = 0}{25000}{0}) \\
& \textit{in} \; (\cond{c_{\textit{ex}} > \textit{gas}}{g}{\mini{g}{\funL{\textit{gas} - c_{\textit{ex}}}}}) + (\cond{\textit{va}=0}{0}{2300})
\end{align*}
The information on the transferred value and the existence of the called account determines the fixed costs the caller needs to pay for the call, independently of the execution of the callee contract.
In principle, the amount of gas specified on the stack should be given to the callee, but if the caller's local gas runs too low (namely, if the fixed costs already consume too much of the caller's local gas), only a predefined fraction of the remaining local gas is given to the call instead.
We distinguish the case where a new account needs to be created (because the called address does not belong to an existing account) from the one where the called account already exists.
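Assuming that the helper $L$ computes "all but one 64th", i.e.\ $L(n) = n - \lfloor n/64 \rfloor$ (an assumption about the macro $\funL{\cdot}$, which is not defined in this excerpt), the gas-capacity computation can be sketched as:

```python
# Sketch of the gas-capacity computation for CALL. The helper L(n) = n - n//64
# ("all but one 64th") is an assumption about the L function used above.
def L(n):
    return n - n // 64

def gas_cap(va, flag, g, gas):
    # Fixed costs the caller pays in any case, independent of the callee.
    c_ex = 700 + (0 if va == 0 else 9000) + (25000 if flag == 0 else 0)
    # If the remaining local gas cannot cover the fixed costs, the full
    # requested amount g is taken; otherwise the callee receives at most
    # all but one 64th of the gas remaining after the fixed costs.
    base = g if c_ex > gas else min(g, L(gas - c_ex))
    # A stipend of 2300 is added whenever value is transferred.
    return base + (0 if va == 0 else 2300)
```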
First we consider the case where the called account already exists:
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}=\textsf{CALL} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}}} \\
\textit{to}_a = \textit{to} \mod 2^{160} \\
\sigma (\textit{to}_a) \neq \bot\\
\size{A} + 1 \leq 1024 \\
\access{\getaccount{\sigma}{\access{\iota}{\textsf{actor}}}}{\textsf{b}} \geq \textit{va} \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{\textit{va}}{1}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{\textit{va}}{1} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s} + 1} \\
\sigma' = \updategstate{\updategstate{\sigma}{\textit{to}_a}{\inc{\getaccount{\sigma}{\textit{to}_a}}{\textsf{b}}{\textit{va}}}}{\access{\iota}{\textsf{actor}}}{\dec{\getaccount{\sigma}{\access{\iota}{\textsf{actor}}}}{\textsf{b}}{\textit{va}}} \\
d =\getinterval{\access{\mu}{\textsf{m}}}{\textit{io}}{\textit{io} + \textit{is} -1} \\
\mu' =\smstate{c_{\textit{call}}}{0}{\lam{x}{0}}{0}{\epsilon} \\
\iota' =\update{\update{\update{\update{\update{\iota}{\textsf{sender}}{\access{\iota}{\textsf{actor}}}}{\textsf{actor}}{\textit{to}_a}}{\textsf{value}}{\textit{va}}}{\textsf{input}}{d}}{\textsf{code}}{\access{\getaccount{\sigma}{\textit{to}_a}}{\textsf{code}}}\\
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota'}{\sigma'}{\eta}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}}
\end{mathpar}
Next, we consider the case where the called account does not exist. In this case, an account with the called address (and empty code) gets created, and the empty code is executed.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}=\textsf{CALL} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}}} \\
\textit{to}_a = \textit{to} \mod 2^{160} \\
\getaccount{\sigma}{\textit{to}_a} = \bot\\
\size{A} + 1 \leq 1024 \\
\access{\getaccount{\sigma}{\access{\iota}{\textsf{actor}}}}{\textsf{b}} \geq \textit{va} \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{\textit{va}}{0}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{\textit{va}}{0} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s} + 1} \\
\sigma' = \updategstate{\updategstate{\sigma}{\textit{to}_a}{\account{0}{\textit{va}}{\lam{x}{0}}{\epsilon}}}{\access{\iota}{\textsf{actor}}}{\dec{\getaccount{\sigma}{\access{\iota}{\textsf{actor}}}}{\textsf{b}}{\textit{va}}} \\
d =\getinterval{\access{\mu}{\textsf{m}}}{\textit{io}}{\textit{io} + \textit{is} -1} \\
\mu' =\smstate{c_{\textit{call}}}{0}{\lam{x}{0}}{0}{\epsilon} \\
\iota' =\update{\update{\update{\update{\update{\iota}{\textsf{sender}}{\access{\iota}{\textsf{actor}}}}{\textsf{actor}}{\textit{to}_a}}{\textsf{value}}{\textit{va}}}{\textsf{input}}{d}}{\textsf{code}}{\epsilon}\\
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota'}{\sigma'}{\eta}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}}
\end{mathpar}
If the executing account $\access{\iota}{\textsf{actor}}$ does not hold the amount of wei specified to be transferred by the $\textsf{CALL}$ instruction ($\textit{va}$), or if the call stack limit of $1024$ would be exceeded by performing the call, the call does not get executed. In the small-step semantics this is modelled by throwing an exception on the callee level.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{CALL} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}}} \\
\textit{to}_a = \textit{to} \mod 2^{160} \\
\textit{flag} = \cond{(\getaccount{\sigma}{\textit{to}_a} = \bot)}{0}{1} \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{\textit{va}}{\textit{flag}}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{\textit{va}}{\textit{flag}} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s} + 1} \\
(\textit{va}> \access{\getaccount{\sigma}{(\access{\iota}{\textsf{actor}})}}{\textsf{balance}} \lor \size{A} +1 > 1024)}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}}
\end{mathpar}
If the execution runs out of gas or the stack limit is exceeded, an exception is thrown:
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{CALL} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}}} \\
\textit{to}_a = \textit{to} \mod 2^{160} \\
\textit{flag} = \cond{(\getaccount{\sigma}{\textit{to}_a} = \bot)}{0}{1} \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{\textit{va}}{\textit{flag}}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{\textit{va}}{\textit{flag}} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{\access{\mu}{\textsf{s}}} -6} \\
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\infer{
(\curropcode{\mu}{\iota} =\textsf{CALL} \lor \curropcode{\mu}{\iota} =\textsf{CALLCODE}) \\
\size{\access{\mu}{\textsf{s}}} < 7 \\
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
For returning from a call, there are several options:
\begin{enumerate}
\item The execution of the called code ends with $\textsf{RETURN}$. In this case the call was successful. The callee's stack specifies the fragment of the callee's local memory that contains the return value. The return value is copied to the caller's local memory as specified on the caller's stack, and the execution proceeds in the global state left by the callee. The caller gets the remaining gas of the callee's execution refunded. To indicate success, $1$ is written to the caller's stack.
\item The execution of the called code ends with $\textsf{STOP}$ or $\textsf{SELFDESTRUCT}$. In this case the return value of the execution is the empty data $\epsilon$ that is written to the local memory. This essentially means that nothing is written to the caller's local memory.
\item The execution of the called code ends with an exception. In this case the remaining arguments are removed from the caller's stack and instead $0$ is written to the caller's stack. The caller does not get the remaining gas refunded.
\end{enumerate}
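The effect of these cases on the caller's machine state can be sketched as follows; the dictionary-based machine state and the function name are illustrative assumptions, not the formal record type used in the rules:

```python
# Sketch of the effect of returning from a call on the caller's machine
# state mu (a simplified dict, not the formal record type). cost is the
# total call cost c; gas_returned and data are taken from the halting state.
def return_from_call(mu, cost, outcome, gas_returned=0, data=b"", oo=0, os_=0):
    mu = dict(mu)
    mu["pc"] += 1
    if outcome == "exception":
        # Case 3: push 0; the whole cost, including the gas given to the
        # callee, is lost.
        mu["s"] = [0] + mu["s"]
        mu["gas"] -= cost
    else:
        # Cases 1 and 2: push 1, refund the callee's unused gas, and copy
        # the (possibly empty) return data into the output fragment.
        mu["s"] = [1] + mu["s"]
        mu["gas"] += gas_returned - cost
        out = data[:os_]
        mu["m"] = mu["m"][:oo] + out + mu["m"][oo + len(out):]
    return mu
```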
As the first two cases can be treated analogously, we just need two rules for returning from a call.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{CALL} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}}} \\
\textit{to}_a = \textit{to} \mod 2^{160} \\
\textit{flag} = \cond{\getaccount{\sigma}{\textit{to}_a} = \bot}{0}{1} \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{\textit{va}}{\textit{flag}}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{\textit{va}}{\textit{flag}} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\mu' =\update{\inc{\inc{\update{\update{\mu}{\textsf{i}}{\textit{aw}}}{\textsf{s}}{\cons{1}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{\textit{gas} - c}}{\textsf{m}}{\updateinterval{\access{\mu}{\textsf{m}}}{\textit{oo}}{\textit{oo} + \textit{os} -1}{d}}
}
{\sstep{\transenv}{\cons{\haltstatefull{\sigma'}{\eta'}{\textit{gas}}{d}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}{\cons{\regstatefull{\mu'}{\iota}{\sigma'}{\eta'}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{CALL} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}}} \\
\textit{to}_a = \textit{to} \mod 2^{160} \\
\textit{flag} = \cond{\getaccount{\sigma}{\textit{to}_a} = \bot}{0}{1} \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{\textit{va}}{\textit{flag}}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{\textit{va}}{\textit{flag}} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\mu' =\dec{\inc{\update{\update{\mu}{\textsf{i}}{\textit{aw}}}{\textsf{s}}{\cons{0}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{c}
}
{\sstep{\transenv}{\cons{\textit{EXC}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
The two other instructions for calling ($\textsf{CALLCODE}$ and $\textsf{DELEGATECALL}$) are similar to $\textsf{CALL}$.
The $\textsf{CALLCODE}$ instruction only differs in that the control flow is not handed over to the called contract; instead, its code is executed in the environment of the calling account. This means in particular that the amount of money to be transferred is only relevant as a guard for the call, but does not actually need to be transferred. In addition, in case the account whose code should be executed does not exist, this account is not created; instead, only the empty code is run. Still, the amount of ether specified on the stack influences the execution cost.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}=\textsf{CALLCODE} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}}} \\
\textit{to}_a = \textit{to} \mod 2^{160} \\
\sigma (\textit{to}_a) \neq \bot\\
\size{A} + 1 \leq 1024 \\
\access{\getaccount{\sigma}{\access{\iota}{\textsf{actor}}}}{\textsf{b}} \geq \textit{va} \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{\textit{va}}{1}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{\textit{va}}{1} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s} + 1} \\
d =\getinterval{\access{\mu}{\textsf{m}}}{\textit{io}}{\textit{io} + \textit{is} -1} \\
\mu' =\smstate{c_{\textit{call}}}{0}{\lam{x}{0}}{0}{\epsilon} \\
\iota' =\update{\update{\update{\update{\iota}{\textsf{sender}}{\access{\iota}{\textsf{actor}}}}{\textsf{value}}{\textit{va}}}{\textsf{input}}{d}}{\textsf{code}}{\access{\getaccount{\sigma}{\textit{to}_a}}{\textsf{code}}}\\
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota'}{\sigma}{\eta}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}=\textsf{CALLCODE} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}}} \\
\textit{to}_a = \textit{to} \mod 2^{160} \\
\getaccount{\sigma}{\textit{to}_a} = \bot\\
\size{A} + 1 \leq 1024 \\
\access{\getaccount{\sigma}{\access{\iota}{\textsf{actor}}}}{\textsf{b}} \geq \textit{va} \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{\textit{va}}{1}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{\textit{va}}{1} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s} + 1} \\
d =\getinterval{\access{\mu}{\textsf{m}}}{\textit{io}}{\textit{io} + \textit{is} -1} \\
\mu' =\smstate{c_{\textit{call}}}{0}{\lam{x}{0}}{0}{\epsilon} \\
\iota' =\update{\update{\update{\update{\iota}{\textsf{sender}}{\access{\iota}{\textsf{actor}}}}{\textsf{value}}{\textit{va}}}{\textsf{input}}{d}}{\textsf{code}}{\epsilon}\\
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota'}{\sigma}{\eta}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{CALLCODE} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}}} \\
\textit{to}_a = \textit{to} \mod 2^{160} \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{\textit{va}}{1}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{\textit{va}}{1} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s} + 1} \\
(\textit{va}> \access{\getaccount{\sigma}{(\access{\iota}{\textsf{actor}})}}{\textsf{balance}} \lor \size{A} +1 > 1024)}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{CALLCODE} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}}} \\
\textit{to}_a = \textit{to} \mod 2^{160} \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{\textit{va}}{1}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{\textit{va}}{1} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{\access{\mu}{\textsf{s}}} -6} \\
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{CALLCODE} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}}} \\
\textit{to}_a = \textit{to} \mod 2^{160} \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{\textit{va}}{1}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{\textit{va}}{1} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\mu' =\update{\inc{\inc{\update{\update{\mu}{\textsf{i}}{\textit{aw}}}{\textsf{s}}{\cons{1}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{\textit{gas} - c}}{\textsf{m}}{\updateinterval{\access{\mu}{\textsf{m}}}{\textit{oo}}{\textit{oo} + \textit{os} -1}{d}}
}
{\sstep{\transenv}{\cons{\haltstatefull{\sigma'}{\eta'}{\textit{gas}}{d}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}{\cons{\regstatefull{\mu'}{\iota}{\sigma'}{\eta'}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{CALLCODE} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}}} \\
\textit{to}_a = \textit{to} \mod 2^{160} \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{\textit{va}}{1}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{\textit{va}}{1} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\mu' =\dec{\inc{\update{\update{\mu}{\textsf{i}}{\textit{aw}}}{\textsf{s}}{\cons{0}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{c}
}
{\sstep{\transenv}{\cons{\textit{EXC}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
The $\textsf{DELEGATECALL}$ instruction keeps not only the executing account of the current call, but also the transferred value and the sender information.
For this reason, the value to be transferred is not specified as an argument in this case. Because of this, and because the cost calculation differs (the cost functions are applied to value $0$, as no transfer takes place), all rules for $\textsf{CALL}$ need to be replicated. Still, the general idea is very similar.
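The difference between the three call variants can be summarized by which execution-environment fields each instruction overwrites; the following encoding as a Python dictionary is purely illustrative and is read off the $\iota'$ definitions in the respective rules:

```python
# Which execution-environment fields each call type overwrites, read off
# the iota' definitions in the rules; fields not listed are inherited
# unchanged from the caller's environment.
ENV_UPDATES = {
    "CALL":         {"sender", "actor", "value", "input", "code"},
    "CALLCODE":     {"sender", "value", "input", "code"},  # actor is kept
    "DELEGATECALL": {"input", "code"},  # sender, actor and value are kept
}
```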
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}=\textsf{DELEGATECALL} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}} \\
\textit{to}_a = \textit{to} \mod 2^{160} \\
\sigma (\textit{to}_a) \neq \bot\\
\size{A} + 1 \leq 1024 \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{0}{1}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{0}{1} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s} + 1} \\
d =\getinterval{\access{\mu}{\textsf{m}}}{\textit{io}}{\textit{io} + \textit{is} -1} \\
\mu' =\smstate{c_{\textit{call}}}{0}{\lam{x}{0}}{0}{\epsilon} \\
\iota' =\update{\update{\iota}{\textsf{input}}{d}}{\textsf{code}}{\access{\getaccount{\sigma}{\textit{to}_a}}{\textsf{code}}}
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota'}{\sigma}{\eta}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota}=\textsf{DELEGATECALL} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}} \\
\textit{to}_a = \textit{to} \mod 2^{160} \\
\getaccount{\sigma}{\textit{to}_a} = \bot\\
\size{A} + 1 \leq 1024 \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{0}{1}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{0}{1} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s} + 1} \\
d =\getinterval{\access{\mu}{\textsf{m}}}{\textit{io}}{\textit{io} + \textit{is} -1} \\
\mu' =\smstate{c_{\textit{call}}}{0}{\lam{x}{0}}{0}{\epsilon} \\
\iota' =\update{\update{\iota}{\textsf{input}}{d}}{\textsf{code}}{\epsilon}\\
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota'}{\sigma}{\eta}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{DELEGATECALL} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}} \\
\textit{to}_a = \textit{to} \mod 2^{160} \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{0}{1}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{0}{1} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s} + 1} \\
\size{A} +1 > 1024}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{DELEGATECALL} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}} \\
\textit{to}_a = \textit{to} \mod 2^{160} \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{0}{1}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{0}{1} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{\access{\mu}{\textsf{s}}} -5} \\
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\infer{
\curropcode{\mu}{\iota} =\textsf{DELEGATECALL} \\
\size{\access{\mu}{\textsf{s}}} < 6 \\
}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{DELEGATECALL} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}} \\
\textit{to}_a = \textit{to} \mod 2^{160} \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{0}{1}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{0}{1} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\mu' =\update{\inc{\inc{\update{\update{\mu}{\textsf{i}}{\textit{aw}}}{\textsf{s}}{\cons{1}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{\textit{gas} - c}}{\textsf{m}}{\updateinterval{\access{\mu}{\textsf{m}}}{\textit{oo}}{\textit{oo} + \textit{os} -1}{d}}
}
{\sstep{\transenv}{\cons{\haltstatefull{\sigma'}{\eta'}{\textit{gas}}{d}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}{\cons{\regstatefull{\mu'}{\iota}{\sigma'}{\eta'}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{DELEGATECALL} \\
\access{\mu}{\textsf{s}}=\cons{g}{\cons{\textit{to}}{\cons{\textit{io}}{\cons{\textit{is}}{\cons{\textit{oo}}{\cons{\textit{os}}{s}}}}}} \\
\textit{to}_a = \textit{to} \mod 2^{160} \\
\textit{aw} = \memext{\memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}}{\textit{oo}}{\textit{os}} \\
c_{\textit{call}} = \gascapacity{0}{1}{g}{\access{\mu}{\textsf{gas}}} \\
c = \basecosts{0}{1} + \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} +c_{\textit{call}} \\
\mu' =\dec{\inc{\update{\update{\mu}{\textsf{i}}{\textit{aw}}}{\textsf{s}}{\cons{0}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{c}
}
{\sstep{\transenv}{\cons{\textit{EXC}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\paragraph{Contract creation}
The $\textsf{CREATE}$ command initiates the creation of a new contract.
The creation of a new contract is initiated if the call stack limit has not been reached yet and if the value $\textit{va}$ that should be initially transferred to the new account does not exceed the balance of the sender (the account owning the currently executed code).
In this case the address $\rho$ of the new account is computed from the sender's address $\access{\iota}{\textsf{actor}}$ and the sender's current nonce (which is subsequently incremented by one).
If an account with this address already exists, the balance of this account is transferred to the newly created one.
Additionally, the new account gets the specified amount $\textit{va}$ of ether transferred from the sender.
Finally, the execution of the contract starts by executing the initialization code $i$ ($i$ resides in the local memory $\access{\mu}{\textsf{m}}$; its location is specified by the arguments $\textit{io}$ and $\textit{is}$ on the stack). The owner of the initialization code is the newly created account $\rho$. The owner $\access{\iota}{\textsf{actor}}$ of the calling code is recorded as the initiator $\access{\iota}{\textsf{sender}}$ of the initialization code execution.
The value $\textit{va}$ transferred to the new account is given in the environment parameter $\access{\iota}{\textsf{value}}$.
The execution starts in the empty machine state with the program counter and the number of active words set to $0$, in the empty memory $\lam{x}{0}$ (the function mapping each number to $0^{256}$) and the empty stack $\epsilon$.
The original global state $\sigma$ is recorded in the caller state in order to be able to restore it in the case of an exception in the initiation code execution.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{CREATE} \\
\access{\mu}{\textsf{s}} = \cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{s}}} \\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}\\
c= \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} + 32000 \\
\simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s} + 1} \\
\textit{va} \leq \access{\getaccount{\sigma}{\access{\iota}{\textsf{actor}}}}{\textsf{balance}} \\
\size{\callstack} + 1 \leq 1024 \\
\rho=\getfreshaddress{\access{\iota}{\textsf{actor}}}{\access{\getaccount{\sigma}{\access{\iota}{\textsf{actor}}}}{\textsf{nonce}}} \\
\getaccount{\sigma}{\rho} = \bot \\
\sigma' =\updategstate{
\updategstate{
\sigma}
{\rho}
{\accountstate{0}{\textit{va}}{\lam{x}{0}}{\epsilon}}
}
{\access{\iota}{\textsf{actor}}}
{\inc{
\dec{
\getaccount{\sigma}{\access{\iota}{\textsf{actor}}}}
{\textsf{balance}}
{\textit{va}}}
{\textsf{nonce}}
{1}} \\
i=\getinterval{\access{\mu}{\textsf{m}}}{\textit{io}}{\textit{io} + \textit{is} - 1} \\
\iota' = \update{\update{\update{\update{\update{\iota}{\textsf{sender}}{\access{\iota}{\textsf{actor}}}}{\textsf{actor}}{\rho}}{\textsf{value}}{\textit{va}}}{\textsf{code}}{i}}{\textsf{input}}{\epsilon} \\
\mu' = \smstate{\funL{\access{\mu}{\textsf{gas}} - c}}{0}{\lam{x}{0}}{0}{\epsilon}}
{\sstep{\transenv}{\cons{\regstate{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstate{\mu'}{\iota'}{\sigma'}{\eta}}{\cons{\regstate{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}}
\end{mathpar}
In principle it should not happen that the newly created address $\rho$ already exists: by making $\rho$ dependent on the active account's address and its nonce (which can be seen as an internal counter of the number of accounts already created by this account), the resulting address should be unique.
In practice, however, the function $\getfreshaddress{\cdot}{\cdot}$ is realized by a hash function, which makes it necessary to deal with collisions.
For the case where accidentally an existing address is computed, the balance of the corresponding account is carried over to the newly created one.
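A fresh-address computation in the spirit of these rules can be sketched as follows; note that SHA3-256 from the Python standard library is used here as a stand-in for the Keccak-256 hash actually used by Ethereum, and a naive string encoding replaces RLP, so this illustrates only the structure of the derivation (an address derived from creator and nonce, truncated to 160 bits), not the real one:

```python
import hashlib

# Illustrative sketch: the fresh address depends on the creator's address
# and its current nonce. hashlib.sha3_256 is a stand-in for Keccak-256,
# and the f-string encoding is a stand-in for RLP.
def fresh_address(creator, nonce):
    digest = hashlib.sha3_256(f"{creator}:{nonce}".encode()).digest()
    # Truncate the hash to 160 bits, the size of an address.
    return int.from_bytes(digest, "big") % 2**160
```

Because the result is a truncated hash, distinct (creator, nonce) pairs can in principle collide with existing addresses, which is exactly the case handled by the rule below.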
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{CREATE} \\
\access{\mu}{\textsf{s}} = \cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{s}}} \\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}\\
c= \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} + 32000 \\
\simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s} + 1} \\
\textit{va} \leq \access{\getaccount{\sigma}{\access{\iota}{\textsf{actor}}}}{\textsf{balance}} \\
\size{\callstack} + 1 \leq 1024 \\
\rho=\getfreshaddress{\access{\iota}{\textsf{actor}}}{\access{\getaccount{\sigma}{\access{\iota}{\textsf{actor}}}}{\textsf{nonce}}} \\
\getaccount{\sigma}{\rho} \neq \bot \\
b = \access{\getaccount{\sigma}{\rho}}{\textsf{balance}} + \textit{va}\\
\sigma' =\updategstate{
\updategstate{
\sigma}
{\rho}
{\accountstate{0}{b}{\lam{x}{0}}{\epsilon}}
}
{\access{\iota}{\textsf{actor}}}
{\inc{
\dec{
\getaccount{\sigma}{\access{\iota}{\textsf{actor}}}}
{\textsf{balance}}
{\textit{va}}}
{\textsf{nonce}}
{1}} \\
i=\getinterval{\access{\mu}{\textsf{m}}}{\textit{io}}{\textit{io} + \textit{is} - 1} \\
\iota' = \update{\update{\update{\update{\update{\iota}{\textsf{sender}}{\access{\iota}{\textsf{actor}}}}{\textsf{actor}}{\rho}}{\textsf{value}}{\textit{va}}}{\textsf{code}}{i}}{\textsf{input}}{\epsilon} \\
\mu' = \smstate{\funL{\access{\mu}{\textsf{gas}} - c}}{0}{\lam{x}{0}}{0}{\epsilon}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\regstatefull{\mu'}{\iota'}{\sigma'}{\eta}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}}
\end{mathpar}
Similarly to the $\textsf{CALL}$ case, the execution of the $\textsf{CREATE}$ instruction can fail at call time in the case that either the value $\textit{va}$ to be transferred to the newly created account exceeds the calling account's balance or if the call stack limit is reached.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{CREATE} \\
\access{\mu}{\textsf{s}} = \cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{s}}} \\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}\\
c= \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} + 32000 \\
\simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s} + 1} \\
(\textit{va} > \access{\getaccount{\sigma}{\access{\iota}{\textsf{actor}}}}{\textsf{balance}} \lor
\size{\callstack} + 1 > 1024)}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}}
\end{mathpar}
In addition the usual out-of-gas exception and violations of the stack limits need to be considered:
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{CREATE} \\
\access{\mu}{\textsf{s}} = \cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{s}}} \\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}\\
c= \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} + 32000 \\
\neg \simvalid{\access{\mu}{\textsf{gas}}}{c}{\size{s} + 1}}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{CREATE} \\
\size{\access{\mu}{\textsf{s}}} < 3}
{\sstep{\transenv}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
To return from contract creation we need to consider different cases:
\begin{enumerate}
\item
The initialization code ends with a $\textsf{RETURN}$.
In this case contract creation was successful. The return value specifies the code of the new contract. This code will be executed when the contract is called later on.
To indicate success and to make the newly created contract accessible to the caller, the address of the new contract account is written to the stack.
The caller of the contract creation needs to proceed with the remaining gas from the contract creation and additionally needs to pay a final contract creation cost depending on the length of the contract body code.
\item
The initialization code ends with $\textsf{STOP}$ or $\textsf{SELFDESTRUCT}$.
In this case contract creation was theoretically successful, but no practically usable contract was created, as calls to this contract do not cause any code to be executed.
Nevertheless, the final contract creation cost needs to be paid.
\item
The initialization code causes an exception.
In this case the contract creation was not successful. The former global state is restored and therefore all side effects of the contract creation are deleted.
To indicate the failure of the contract creation, the number $0$ is written to the stack of the caller. Additionally, all gas of the caller state is deleted.
\end{enumerate}
Cases one and two result in regular halting of the callee. The command specific changes affecting the global state, the remaining gas and the output data are recorded in the halting state.
In the case of contract creation, a final fee is charged that depends on the size of the return data.
If the gas remaining from the execution of the initialization code is not sufficient to pay the additional fee, an exception occurs.
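The final fee and the corresponding sufficiency check appearing in the next two rules can be summarized in a few lines. The names below are illustrative, not from the formalization; the constant $200$ per byte is taken from the rules themselves.

```python
CODE_DEPOSIT_COST = 200  # charged per byte of the returned contract code

def create_final_fee(return_data):
    # c_final = 200 * |d| from the rule below
    return CODE_DEPOSIT_COST * len(return_data)

def deposit_succeeds(gas_remaining, return_data):
    # An exception occurs if the gas left after running the initialization
    # code cannot cover the final fee.
    return gas_remaining >= create_final_fee(return_data)
```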
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{CREATE} \\
\access{\mu}{\textsf{s}} = \cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{s}}} \\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}\\
c= \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} + 32000 \\
c_{\textit{final}} = 200 \cdot \size{d} \\
\textit{gas} \geq c_{\textit{final}} \\
\rho=\getfreshaddress{\access{\iota}{\textsf{actor}}}{\access{\getaccount{\sigma}{\access{\iota}{\textsf{actor}}}}{\textsf{nonce}}} \\
\mu' =\update{\inc{ \inc{\update{\mu}{\textsf{s}}{\cons{\rho}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{\textit{gas}-c-c_{\textit{final}}}}{\textsf{i}}{\textit{aw}} \\
\sigma'' = \updategstate{\sigma'}{\rho}{\update{\getaccount{\sigma'}{\rho}}{\textsf{code}}{d}}\\
}
{\sstep{\transenv}{\cons{\haltstatefull{\sigma'}{\eta'}{\textit{gas}}{d}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}{\cons{\regstatefull{\mu'}{\iota}{\sigma''}{\eta'}}{\callstack}}}
\end{mathpar}
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{CREATE} \\
c_{\textit{final}} = 200 \cdot \size{d} \\
\textit{gas} < c_{\textit{final}}}
{\sstep{\transenv}{\cons{\haltstatefull{\sigma'}{\eta'}{\textit{gas}}{d}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}{\cons{\textit{EXC}}{\callstack}}}
\end{mathpar}
In the case of exceptional halting of the callee, as in the $\textsf{CALL}$ case, the remaining gas is not refunded and the global state as well as the transaction effects are reverted.
\begin{mathpar}
\infer{
\curropcode{\mu}{\iota} =\textsf{CREATE} \\
\access{\mu}{\textsf{s}} = \cons{\textit{va}}{\cons{\textit{io}}{\cons{\textit{is}}{s}}} \\
\textit{aw} = \memext{\access{\mu}{\textsf{i}}}{\textit{io}}{\textit{is}}\\
c= \costmem{\access{\mu}{\textsf{i}}}{\textit{aw}} + 32000 \\
\mu' =\update{\inc{\dec{\update{\mu}{\textsf{s}}{\cons{0}{s}}}{\textsf{pc}}{1}}{\textsf{gas}}{c}}{\textsf{i}}{\textit{aw}} \\
}
{\sstep{\transenv}{\cons{\textit{EXC}}{\cons{\regstatefull{\mu}{\iota}{\sigma}{\eta}}{\callstack}}}{\cons{\regstatefull{\mu'}{\iota}{\sigma}{\eta}}{\callstack}}}
\end{mathpar}
\subsection{Instrumented semantics}
First, we enrich the original small-step semantics with some additional information needed for the Horn clause generation.
\begin{itemize}
\item Each execution state needs to be annotated with its current global gas value. The global gas value of the bottom element on the call stack is its local gas.
\item In the current formalization of the small step semantics, once code is loaded to the execution environment it cannot be safely tracked which account the code is originating from. So we are annotating the execution states with the it's origin contract. A contract is defined by its address and its code.
In the cases where the code that is executed does not originate from a contract, but is a dynamically specified initialization code, we annotate the corresponding execution state with $\bot$
\item
During contract execution new accounts might be created. We annotate execution states with the set of newly created contracts.
\end{itemize}
Let $\mathcal{C} = \mathcal{A} \times \arrayof{\mathbb{B}^8}$ denote the set of contracts.
We obtain the following syntax for annotated call stacks:
\begin{align*}
A^n &:= A_{\textit{plain}}^n ~|~ \cons{\annotatep{\textit{EXC}}{g}{c}{C_{\textit{up}}}}{A_{\textit{plain}}^n} ~|~ \cons{\annotatep{\haltstate{\sigma}{\eta}{\textit{gas}}{\textsf{d}}}{g}{c}{C_{\textit{up}}}}{A_{\textit{plain}}^n} \\
A_{\textit{plain}}^n &:= \epsilon ~|~ \cons{\annotatep{\regstate{\mu}{\iota}{\sigma}{\eta}}{g}{c}{C_{\textit{up}}}}{A_{\textit{plain}}^n}
\end{align*}
where $g \in \mathbb{N}$, $c \in \mathcal{C}$ and $C_{\textit{up}} \subseteq \mathcal{C}$.
\subsection{Assumptions}
\subsubsection{Codes}
We assume all codes that are potentially called to be determined by a pre-analysis.
In general, we make a distinction between contracts that are either already present in the global state or might get created during execution, and the initialization codes that will get called in order to create new accounts. We will refer to these types of codes as contract codes and initialization codes, respectively.
Instead of directly mapping contract codes to the addresses they will be referred by during the execution, we introduce internal IDs to identify contract codes as well as initialization codes.
In the following we will denote the set of contract codes as $C_{\textit{call}}$ and the set of initialization codes as $C_{\textit{init}}$.
We assume these sets to contain each code and each ID only once, so they describe partial mappings from IDs to codes as well as from codes to IDs. For the sake of better readability, we will denote these mappings in the following by $\getinitcodeforid{\cdot}$, $\getcallcodeforid{\cdot}$, $\getidforinitcode{\cdot}$ and $\getidforcallcode{\cdot}$.
The connection between the IDs and the actual addresses is established by a relation $\getcallablecode{\cdot}{\cdot}$ that maps addresses to IDs.
In addition, we assume a relation $\getinitcode{\cdot}{\cdot}{\cdot}$ that for a code ID $\textit{id}$ and a program counter value $\textit{pc}$ gives the ID of the initialization code that will get created if the instruction at this program counter is $\textsf{CREATE}$. In a similar fashion, we assume a relation $\getresultcode{\cdot}{\cdot}$ that for an initialization code ID gives the ID of the code that will get created (written to memory) when executing this code.
We use relations instead of functions to allow program positions as well as initialization codes to create potentially different codes.
We will later on require the following consistency property on $C_{\textit{call}}$ and $C_{\textit{init}}$:
\begin{definition}[Consistency of code sets]
We call a pair of code sets $(C_{\textit{call}}, C_{\textit{init}})$ consistent with respect to relations $\getcallablecode{\cdot}{\cdot}$, $\getinitcode{\cdot}{\cdot}{\cdot}$ and $\getresultcode{\cdot}{\cdot}$ if the following conditions hold:
\begin{enumerate}
\item $\forall \textit{id}_c. \, \forall \textit{addr}. \, \getcallablecode{\textit{addr}}{\textit{id}_c} \to \exists \textit{code}. \, (\textit{id}_c, \textit{code}) \in C_{\textit{call}}$
\item $\forall (\textit{id}_c, \textit{code}_c) \in C_{\textit{call}} \cup C_{\textit{init}}. \, \forall \textit{id}_i. \, \forall \textit{pc}. \, \getinitcode{\textit{id}_c}{\textit{pc}}{\textit{id}_i} \to \exists \textit{code}_i. \, (\textit{id}_i, \textit{code}_i) \in C_{\textit{init}}$
\item $\forall (\textit{id}_i, \textit{code}_i) \in C_{\textit{init}}. \, \forall \textit{id}_c. \, \getresultcode{\textit{id}_i}{\textit{id}_c} \to \exists \textit{code}_c. \, (\textit{id}_c, \textit{code}_c) \in C_{\textit{call}}$
\end{enumerate}
\end{definition}
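The three consistency conditions are directly checkable on finite sets. The following sketch assumes a concrete encoding of our own choosing: code sets as sets of pairs and the relations as sets of tuples; none of these names appear in the formalization.

```python
def consistent(C_call, C_init, callable_rel, init_rel, result_rel):
    # C_call, C_init: sets of (id, code) pairs; callable_rel: (addr, id)
    # pairs; init_rel: (id_c, pc, id_i) triples; result_rel: (id_i, id_c)
    # pairs.
    call_ids = {i for (i, _) in C_call}
    init_ids = {i for (i, _) in C_init}
    known = call_ids | init_ids
    # 1. every callable ID is backed by a contract code
    cond1 = all(i in call_ids for (_, i) in callable_rel)
    # 2. every initialization code reachable via CREATE is known
    cond2 = all(ii in init_ids for (ic, _, ii) in init_rel if ic in known)
    # 3. every result code of an initialization code is a known contract code
    cond3 = all(ic in call_ids for (ii, ic) in result_rel if ii in init_ids)
    return cond1 and cond2 and cond3
```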
In addition, we will require $C_{\textit{call}}$ and $C_{\textit{init}}$ to be functional.
For the proof, we will restrict the execution to those executing only codes in $C_{\textit{call}}$ and $C_{\textit{init}}$.
This means that we assume the correctness of the pre-analysis.
\subsubsection{Hash functions}
When executing EVM bytecode, several instructions require the computation of hash functions.
\begin{itemize}
\item The $\textsf{SHA3}$ instruction offers native support for computing the Keccak-$256$ hash of a value specified on the stack. We assume the Keccak-$256$ hash function to be given by $\kec{\cdot}$.
\item The $\textsf{BLOCKHASH}$ instruction returns the hash of one of the 256 most recent complete blocks.
The block hash can be computed from the block header $H$ and the specified block number. We assume a corresponding function $\getblockhash{\cdot}{\cdot}$.
\item The $\textsf{CREATE}$ instruction requires computing a fresh address, which consists of the rightmost 160 bits of the Keccak-$256$ hash of the RLP encoding of the address of the active account and the active account's nonce.
The corresponding function is given by $\getfreshaddress{\cdot}{\cdot}$.
\end{itemize}
As implementing hash functions for an SMT solver is cumbersome and inefficient, we assume an approximation of these functions by relations. This allows for a later instantiation of the relations with arbitrarily precise approximations.
\begin{definition}[Approximation]
A relation $R: A \times B$ approximates a function $f: A \to B$ if the following condition holds:
\begin{align*}
\forall a \in A. \, \forall b \in B. \, f \, (a) = b \to R \, (a, b)
\end{align*}
\end{definition}
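Concretely, a relation approximates a function if it contains the function's graph; it is free to relate an input to additional outputs. A minimal sketch, with encodings of our own choosing (a relation as a set of pairs):

```python
def approximates(R, f, domain):
    # R approximates f on `domain` if every graph pair (a, f(a)) is in R;
    # R may relate inputs to extra outputs as well, which makes it a sound
    # over-approximation for the solver.
    return all((a, f(a)) in R for a in domain)
```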
We assume relations $\kecapprox{\cdot}{\cdot}$, $\blockhashapprox{\cdot}{\cdot}{\cdot}$ and $\freshaddressapprox{\cdot}{\cdot}{\cdot}$ that we will require to approximate the functions $\kec{\cdot}$, $\getblockhash{\cdot}{\cdot}$ and $\getfreshaddress{\cdot}{\cdot}$.
\subsubsection{Jump destinations}
In addition to the pre-analysis that determines the codes that potentially get executed, we require an additional analysis that determines the set of valid jump destinations for each $\textsf{JUMP}$ and $\textsf{JUMPI}$ instruction. We assume a function $\jumpdests{\cdot}{\cdot}: \mathbb{N} \times \mathbb{N} \to \setof{\mathbb{N}}$ that maps a program counter and a contract ID (specifying a point in a contract) to a set of potential jump destinations (in the same contract). Similarly to the pre-analysis of contract codes, we will restrict executions to those that only perform jumps according to the $\jumpdests{\cdot}{\cdot}$ function and therefore assume the correctness of the underlying pre-analysis.
\subsection{Representation function}
We define functions that translate the call stack and the contracts that potentially get called to Horn clauses.
We split the translation into three parts:
\begin{enumerate}
\item The translation of the codes in $C_{\textit{call}}$ and $C_{\textit{init}}$.
\item The translation of the initial infrastructure, including the transaction environment, the approximated hash functions and the initial mapping of addresses to internal IDs.
\item The translation of call stacks.
\end{enumerate}
\subsubsection{Translating codes}
For translating codes, in addition to the codes that should get translated, one needs the information on the potential jump destinations $\textit{JD}$, on the initialization codes $\textit{IC}$ and on the result codes of the initialization codes $\textit{RC}$ as these are needed for the correct translation of the $\textsf{JUMP}$, $\textsf{JUMPI}$ and the $\textsf{CREATE}$ commands.
\begin{align*}
&\tohorncodes{C_{\textit{call}}}{C_{\textit{init}}}{\textit{JD}}{\textit{IC}}{\textit{RC}} :=\\
&\{ \tohorn{\textit{inst}}{C_{\textit{call}}}{C_{\textit{init}}}{\textit{JD}}{\textit{IC}}{\textit{RC}}{\textit{id}}{\textit{pc}} ~|~ \exists \textit{code}. \, (\textit{id}, \textit{code}) \in C_{\textit{call}} \cup C_{\textit{init}} \land \textit{inst} = \arraypos{\textit{code}}{\textit{pc}} \} \\
&\cup \{ \pcode{\textit{id}}{\textit{pc}}{\textit{inst}} ~|~ (\textit{id}, \textit{code}) \in C_{\textit{call}} \land \arraypos{\textit{code}}{\textit{pc}} = \textit{inst} \}
\end{align*}
\subsubsection{Translating the initial infrastructure}
The initial infrastructure contains information that stays unchanged during execution. This information gets translated to predicates, as it is represented by not necessarily functional relations that need to be accessed during execution.
\begin{align*}
&\repinit{\transenv}{\textit{Kec}}{\textit{BlockHash}}{\textit{NewAddress}} := \\
&\{\ptransenv{\access{\transenv}{\originator}}{\access{\transenv}{\textit{gasPrize}}}{\access{\access{\transenv}{H}}{\textit{beneficiary}}}{\access{\access{\transenv}{H}}{\textit{difficulty}}}{\access{\access{\transenv}{H}}{\textit{number}}}{\access{\access{\transenv}{H}}{\textit{gaslimit}}}{\access{\access{\transenv}{H}}{\textit{timestamp}}} \} \\
&\cup \{\pkec{v}{h} ~|~ v \in \N{256} \land h \in \N{256} \land \kecapprox{v}{h}\} \\
&\cup \{\pblockhash{n}{h} ~|~ n \in \N{256} \land h \in \mathbb{N} \land \blockhashapprox{\access{\transenv}{H}}{n}{h} \} \\
&\cup \{\pfreshaddress{a}{n}{\rho} ~|~ a \in \N{160} \land n \in \N{160} \land \rho \in \N{160} \land \freshaddressapprox{a}{n}{\rho} \}
\end{align*}
Note that we realize the transaction environment with a predicate even though, in principle, accesses to it could also be hard-coded in the Horn clauses for the code. We do not pursue this option here since, in practice, queries will not be made for concrete values of the transaction environment; only loose restrictions will be put on it. Such restrictions can only be expressed by formulating logical conditions on the values of the predicate in the premise of a Horn clause; the hard-coding approach is only possible if the possible values of the predicate can be enumerated. The same holds for the implementation of the hash functions.
\subsubsection{Translating call stacks}
For translating call stacks, the IDs of the codes that get executed need to be extracted. This is possible as $C_{\textit{call}}$ and $C_{\textit{init}}$ are assumed to be injective.
We define the following function that extracts the internal ID of a contract $c$, given the two sets $C_{\textit{call}}$ and $C_{\textit{init}}$:
\begin{align*}
\getID{c}{C_{\textit{call}}}{C_{\textit{init}}}
:= \textit{let} \, c = (a, \textit{code}) \, \textit{in} \, \cond{(a = \bot)}{\getidforinitcode{\textit{code}}}{\getidforcallcode{\textit{code}}}
\end{align*}
The call stack gets translated by translating the states on the stack. As the abstract semantics does not have an explicit notion of a call stack, the different elements need to be linked by the remaining global gas at the point of calling, and the size of the stack needs to be made explicit. In addition, the code annotations need to be mapped to the internal IDs. This is done by the function $\getID{\cdot}{\cdot}{\cdot}$.
Note that, for better readability, we use here a simpler stack definition that does not enforce halting and exception states to occur only as top elements.
\begin{align*}
\repcallstack{\cons{\annotatep{s}{g}{c}{C_{\textit{up}}}}{\cons{\annotatep{s}{g'}{c'}{C_{\textit{up}}'}}{\callstack}}}{C_{\textit{call}}}{C_{\textit{init}}} &:=
\repexstate{s}{\getID{c}{C_{\textit{call}}}{C_{\textit{init}}}}{g}{g'}{\size{\callstack} + 1}{\composerel{C_{\textit{call}}}{\invrel{C_{\textit{up}}}}} \\
& \cup \repcallstack{\cons{\annotatep{s}{g'}{c'}{C_{\textit{up}}'}}{\callstack}}{C_{\textit{call}}}{C_{\textit{init}}} \\
\repcallstack{\cons{\annotatep{s}{g}{c}{C_{\textit{up}}}}{\epsilon}}{C_{\textit{call}}}{C_{\textit{init}}} &:=
\repexstate{s}{\getID{c}{C_{\textit{call}}}{C_{\textit{init}}}}{g}{\bot}{1}{\composerel{C_{\textit{call}}}{\invrel{C_{\textit{up}}}}} \\
\repcallstack{\epsilon}{C_{\textit{call}}}{C_{\textit{init}}} &:= \emptyset
\end{align*}
For translating the states, the different kinds of states need to be considered:
\begin{align*}
\repexstate{\textit{EXC}}{\textit{id}}{g}{g_c}{n}{C} & := \{\pexception{\textit{id}}{g}{g_c} \} \\
\\
\repexstate{\haltstate{\sigma}{\eta}{\textit{gas}}{\textit{d}}}{\textit{id}}{g}{g_c}{n}{C} &:= \{ \pres{\textit{id}}{g}{g_c}{\textit{pos}}{v} ~|~ \textit{pos} \in [0, \size{\textit{d}} - 1] \land \arraypos{\textit{d}}{\textit{pos}} = v \}\\
& \cup \{\preturn{\textit{id}}{g}{g_c}{\size{\textit{d}}} \} \\
& \cup \{\presgstate{\textit{id}}{g}{g_c}{a}{n}{b} ~|~ \getaccount{\sigma}{a} = \account{n}{b}{\textit{stor}}{\textit{code}} \} \\
&\cup \{\presstor{\textit{id}}{g}{g_c}{a}{\textit{pos}}{v} ~|~ \getaccount{\sigma}{a} = \account{n}{b}{\textit{stor}}{\textit{code}} \land \arraypos{\textit{stor}}{\textit{pos}} = v \} \\
&\cup \{\presaccount{\textit{id}}{g}{g_c}{a}{\some{\textit{id}_a}} ~|~ (a, \textit{id}_a) \in C \} \\
&\cup \{\presaccount{\textit{id}}{g}{g_c}{a}{\bot} ~|~ a \in \mathcal{A} \land \textit{id}_a \in \mathbb{N} \land (a, \textit{id}_a) \not \in C \} \\
\\
\repexstate{\regstate{\mu}{\iota}{\sigma}{\eta}}{\textit{id}}{g}{g_c}{n}{C} &:=
\repgstate{\textit{id}}{g}{\sigma}{\access{\mu}{\textsf{pc}}}{C}
\cup \repexenv{\textit{id}}{g_c}{\iota}{n}
\cup \repmstate{\textit{id}}{g}{g_c}{\mu} \\
\\
\repgstate{\textit{id}}{g}{\sigma}{\textit{pc}}{C} &:= \{\pgstate{(\textit{id}, \textit{pc})}{g}{a}{n}{b} ~|~ \getaccount{\sigma}{a} = \account{n}{b}{\textit{stor}}{\textit{code}} \} \\
& \cup \{\pstor{(\textit{id}, \textit{pc})}{g}{a}{\textit{pos}}{v} ~|~ \getaccount{\sigma}{a} = \account{n}{b}{\textit{stor}}{\textit{code}} \land \arraypos{\textit{stor}}{\textit{pos}} =v \} \\
&\cup \{\paccountexists{(\textit{id}, \textit{pc})}{g}{a}{\some{\textit{id}_a}} ~|~ (a, \textit{id}_a) \in C \} \\
&\cup \{\paccountexists{(\textit{id}, \textit{pc})}{g}{a}{\bot} ~|~ a \in \mathcal{A} \land \textit{id}_a \in \mathbb{N} \land (a, \textit{id}_a) \not \in C \} \\
\\
\repexenv{\textit{id}}{g_c}{\iota}{n} & := \{ \pexenv{\textit{id}}{g_c}{\access{\iota}{\textsf{actor}}}{\access{\iota}{\textsf{sender}}}{n}{\access{\iota}{\textsf{value}}}{\size{\access{\iota}{\textsf{input}}}} \} \\
&\cup \{\pindata{\textit{id}}{g_c}{\textit{pos}}{v} ~|~ \arraypos{\access{\iota}{\textsf{input}}}{\textit{pos}} = v \} \\
\\
\repmstate{\textit{id}}{g}{g_c}{\mu} & := \{ \pmstate{(\textit{id}, \access{\mu}{\textsf{pc}})}{g}{g_c}{\access{\mu}{\textsf{gas}}}{\arrayrep{\access{\mu}{\textsf{s}}}}{\access{\mu}{\textsf{i}}}\} \\
&\cup \{\pmem{(\textit{id}, \access{\mu}{\textsf{pc}})}{g}{\textit{pos}}{v} ~|~ \arraypos{\access{\mu}{\textsf{m}}}{\textit{pos}} = v \}
\end{align*}
For translating the machine state, we use the following auxiliary function that translates stacks into a functional representation.
\begin{align*}
\arrayrep{\epsilon} &:= (0, \lam{x}{0}) \\
\arrayrep{\cons{x}{s}} & := \textsf{let} \, (\textit{size}, \textit{sa}) = \arrayrep{s} \, \textsf{in}\, (\textit{size} + 1, \store{\textit{sa}}{\textit{size}}{x})
\end{align*}
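The recursive definition above translates directly into code. The following sketch represents the stack as a Python list with the top element first and the lookup function as a closure; these representation choices are ours.

```python
def arrayrep(stack):
    # Translate a stack (top element first) into the (size, lookup) pair
    # defined above; positions outside the stack map to 0.
    if not stack:
        return 0, lambda _pos: 0
    top, rest = stack[0], stack[1:]
    size, sa = arrayrep(rest)
    # store the top element at index `size`, so the bottom of the stack
    # ends up at index 0
    return size + 1, lambda pos, s=size, f=sa: top if pos == s else f(pos)
```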
\subsection{Proof}
\subsubsection{Assumptions}
For proving soundness, we need to fix several assumptions on valid execution states.
Our final goal is to restrict runs to those that only
\begin{itemize}
\item call known contracts
\item create known contracts
\item execute known initialization code
\end{itemize}
\begin{definition}[Well-formedness of annotated execution states]
An annotated execution state $\annotatep{s}{g}{(a, \textit{code})}{C_{\textit{up}}}$ is well-formed with respect to the sets $C_{\textit{call}}$ and $C_{\textit{init}}$ and the relation $\getcallablecode{\cdot}{\cdot}$ if the following conditions hold:
\begin{enumerate}
\item For all $a'$, $\textit{id}'$ it holds that if $\getcallablecode{a'}{\textit{id}'}$ holds then there is $\textit{code}'$ s.t. $(a', \textit{code}') \in C_{\textit{up}}$ and $(\textit{id}', \textit{code}') \in C_{\textit{call}}$
\item If $s = \regstate{\mu}{\iota}{\sigma}{\eta}$ then $\textit{code} = \access{\iota}{\textsf{code}}$
\item If $a = \bot$ then there exists $\textit{id}$ s.t. $(\textit{id}, \textit{code}) \in C_{\textit{init}}$
\item If $a \neq \bot$ then there exists $\textit{id}$ s.t. $(\textit{id}, \textit{code}) \in C_{\textit{call}}$
\item $\{\textit{code}' ~|~ (a', \textit{code}') \in C_{\textit{up}} \} \subseteq \{\textit{code}' ~|~ (\textit{id}', \textit{code}') \in C_{\textit{call}} \}$
\end{enumerate}
\end{definition}
\subsubsection{Dependency on miner controlled parameters}
\section{Introduction}
We present a novel stream learning algorithm, Hoeffding Anytime Tree (HATT)\footnote{In order to distinguish it from Hoeffding Adaptive Tree, or HAT \cite{bifet2009adaptive}}. The de facto standard for learning decision trees from streaming data is Hoeffding Tree (HT) \cite{domingos2000mining}, which is used as a base for many state-of-the-art drift learners \cite{hulten2001mining, bifet2009adaptive, brzezinski2014reacting, Santos2014, Barros2016, bifet2009new, hoeglinger2007use}. We improve upon HT by learning more rapidly and guaranteeing convergence to the asymptotic batch decision tree on a stationary distribution.
Our implementation of the Hoeffding Anytime Tree algorithm, the Extremely Fast Decision Tree (EFDT), achieves higher prequential accuracy than the Hoeffding Tree implementation Very Fast Decision Tree (VFDT) on many standard benchmark tasks.
HT constructs a tree incrementally, delaying the selection of a split at a node until it is confident it has identified the best split, and never revisiting that decision. In contrast, HATT seeks to select and deploy a split as soon as it is confident the split is useful, and then revisits that decision, replacing the split if it subsequently becomes evident that a better split is available.
The HT strategy is more efficient computationally, but HATT is more efficient statistically, learning more rapidly from a stationary distribution and eventually learning the asymptotic batch tree if the distribution from which the data are drawn is stationary. Further, false acceptances are inevitable, and since HT never revisits decisions, increasingly greater divergence from the asymptotic batch learner results as the tree size increases (Sec. \ref{sec:relatedwork}).
\begin{figure}[t]
\subfloat[a][VFDT: the current de facto standard for incremental tree learning
]{
\includegraphics[height=1.4in, width=3.6in]{figures/023_3m.png}}
\vspace*{-5pt}\subfloat[b][ EFDT: our more statistically efficient variant]{
\includegraphics[height=1.4in, width=3.6in]{figures/024_3m.png} }
\caption{The evolution of prequential error over the duration of a data stream. For each learner we plot error for 4 different levels of complexity, resulting from varying the number of classes from 2 to 5. The legend includes time in CPU seconds (T) and the total error rate over the entire duration of the stream (E). This illustrates how EFDT learns much more rapidly than VFDT and is less affected by the complexity of the learning task, albeit incurring a modest computational overhead to do so. The data are generated by
MOA RandomTreeGenerator, 5 classes, 5 nominal attributes, 5 values per attribute, 10 stream average.}
\label{fig:intro}
\end{figure}
In Fig. \ref{fig:intro}, we observe VFDT taking longer and longer to learn progressively more difficult concepts obtained by increasing the number of classes. EFDT learns all of the concepts very quickly, and keeps adjusting for potential overfitting as fresh examples are observed.
In Section \ref{sec:performance}, we will see that EFDT continues to retain its advantage even 100 million examples in, and that EFDT achieves significantly lower prequential error relative to VFDT on the majority of benchmark datasets we have tested. VFDT only slightly outperforms EFDT on three synthetic physics simulation datasets---Higgs, SUSY, and Hepmass.
\section{Background}\label{sec:bg}
Domingos and Hulten presented one of the first algorithms for incrementally constructing a decision tree in their widely acclaimed work, ``Mining High-Speed Data Streams'' \cite{domingos2000mining}.
Their algorithm is the Hoeffding Tree (Table \ref{table:vfdt}), which uses the \textit{Hoeffding Bound}. For any given potential split, Hoeffding Tree checks whether the difference of averaged information gains of the top two attributes is likely to have a positive mean---if so, the winning attribute may be picked with a degree of confidence, as is described below.
\begin{table}
\caption{Hoeffding Tree, Domingos \& Hulten (2000)}
\includegraphics[width=78.0mm]{figures/vfdt1.png}
\label{table:vfdt}
\end{table}
\textbf{Hoeffding Bound}: If we have $n$ independent random variables $r_1..r_n$, with range $R$ and mean $\bar{r}$, the Hoeffding bound states that with probability $1-\delta$ the true mean is at least $\bar{r} - \epsilon$ where \cite{domingos2000mining, hoeffding1963probability}:
\begin{equation} \label{eq:1}
\epsilon = \sqrt{\frac{R^2 \ln(1/\delta)}{2n}}
\end{equation}
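Eq. \ref{eq:1} is a one-liner to compute. A minimal sketch (function name is ours):

```python
from math import exp, log, sqrt

def hoeffding_epsilon(R, delta, n):
    # Eq. (1): with probability 1 - delta, the true mean of n i.i.d.
    # observations with range R is within epsilon of the sample mean.
    return sqrt((R ** 2) * log(1.0 / delta) / (2.0 * n))
```

Note how $\epsilon$ shrinks as $O(1/\sqrt{n})$: quadrupling the number of observed examples halves the bound, which is what lets a leaf eventually commit to a split.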
\textit{Hoeffding Tree} is a tree that uses this probabilistic guarantee to test at each leaf whether the computed difference of information gains $\Delta\overline{G} = \overline{G}(X_a) - \overline{G}(X_b)$ between the attributes $X_a$ and $X_b$ with the highest information gains is positive. If, for the specified tolerance $\delta$, we have $\Delta\overline{G} > \epsilon$, then we assert with confidence $1-\delta$ that $X_a$ is the better split.
Note that we are seeking to determine the best split out-of-sample. The above controls the risk that $X_a$ is inferior to $X_b$, but it does not control the risk that $X_a$ is inferior to some other attribute $X_c$. It is increasingly likely that some other split will turn out to be superior as the total number of attributes increases. There is no recourse to alter the tree in such a scenario.
\section{Hoeffding Anytime Tree}
If the objective is to build an incremental learner with good predictive power at any given point in the instance stream, it may be desirable to exploit information as it becomes available, building structure that improves on the current state and making subsequent corrections when further alternatives are found to be even better. In scenarios where the information distribution among attributes is skewed, with some attributes containing more information than others, such a policy can be highly effective because of the limited cost of rebuilding the tree when replacing a higher-level attribute with a more informative one. However, where information is more uniformly distributed among attributes, Hoeffding Tree will struggle to split and might have to resort to using a tie-breaking threshold that depends on the number of random variables, while HATT will pick an attribute to begin with and switch when necessary, leading to faster learning.
In this paper, we describe HATT, and provide an instantiation that we denote Extremely Fast Decision Tree (EFDT).
Hoeffding Anytime Tree is equivalent to Hoeffding tree except that it uses the Hoeffding bound to determine whether the merit of splitting on the best attribute exceeds the merit of not having a split, or the merit of the current split attribute. In practice, if no split attribute exists at a node, rather than splitting only when the top candidate split attribute outperforms the second-best candidate, HATT will split when the information gain due to the top candidate split is non-zero with the required level of confidence. At later stages, HATT will split when the difference in information gain between the current top attribute and the current split attribute is non-zero, assuming this is better than having no split. HATT is presented in Algorithm \ref{table:HATT}, Function \ref{table:attempttosplit}, and Function \ref{table:reevalsplit}.
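The two decision tests described above can be distilled into a pair of predicates. This is an illustrative sketch of the split logic only, not the authors' implementation; the function names are ours.

```python
def should_split_leaf(gain_best, gain_null, epsilon):
    # At a leaf, HATT splits as soon as the best candidate attribute beats
    # the null split (no split) by more than epsilon; unlike HT, it does
    # not wait for the best candidate to beat the runner-up attribute.
    return gain_best - gain_null > epsilon

def should_replace_split(gain_best, gain_current, epsilon):
    # At an internal node, HATT swaps in a new split attribute once its
    # gain exceeds the current split attribute's gain by more than epsilon.
    return gain_best - gain_current > epsilon
```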
\begin{algorithm}
\DontPrintSemicolon
\SetAlgoLined
\KwIn{$S$, a sequence of examples. At time t, the observed sequence is $S^t = ((\vec{x}_1, y_1), (\vec{x}_2, y_2), ... (\vec{x}_t, y_t))$
\\ \Indp\Indp $\mathbf{X} = \{X_1, X_2... X_m\}$, a set of $m$ attributes
\\ $\delta$, the acceptable probability of choosing the wrong split attribute at a given node
\\ G(.), a split evaluation function
}
\KwResult{$HATT^t$, the model at time $t$ constructed from having observed sequence $S^t$.}
\Begin{
Let HATT be a tree with a single leaf, the $root$ \;
Let $\mathbf{X_1} = \mathbf{X} \cup {X_\emptyset}$\;
Let $G_1(X_{\emptyset})$ be the $G$ obtained by predicting the most
frequent class in S\;
\ForEach {class $y_k$}{
\ForEach { value $x_{ij}$ of each attribute $X_i \in \mathbf{X}$}{
Set counter $n_{ijk}(root) = 0$\;
}
}
\ForEach {example $(\vec{x},y)$ in S}{
Sort $(\vec{x},y)$ into a leaf $l$ using $HATT$\;
\ForEach {node in path $(root ... l)$}{
\ForEach {$x_{ij}$ in $\vec{x}$ such that $X_i \in X_{node}$}{
Increment $n_{ijk}(node)$\;
\eIf{$node = l$}{
$AttemptToSplit(l)$
}
{
$ReEvaluateBestSplit(node)$
}
}
}
}
}
\caption{Hoeffding Anytime Tree \label{table:HATT}}
\end{algorithm}
\begin{function}
\DontPrintSemicolon
\SetAlgoLined
\Begin{
Label $l$ with the majority class at $l$\;
\If {all examples at $l$ are not of the same class}{
Compute $\overline{G_l}(X_i)$ for each attribute $X_i \in \mathbf{X}_l - \{X_{\emptyset}\}$ using the counts $n_{ijk}(l)$\;
Let $X_a$ be the attribute with the highest $\overline{G_l}$\;
Let $X_b = X_{\emptyset}$\;
Compute $\epsilon$ using equation \ref{eq:1}\;
\If {$\overline{G_l}(X_a) - \overline{G_l}(X_b) > \epsilon$ and $X_a \neq X_{\emptyset}$}{
Replace $l$ by an internal node that splits on $X_a$\;
\For {each branch of the split}{
Add a new leaf $l_m$ and let $\mathbf{X}_m = \mathbf{X} - X_a$\;
Let $\overline{G_m}(X_{\emptyset})$ be the $G$ obtained by predicting the most frequent class at $l_m$\;
\For {each class $y_k$ and each value $x_{ij}$ of each attribute $X_i \in X_m -$ $\{X_{\emptyset}\}$}{
Let $n_{ijk}(l_m) = 0$.
}
}
}
}
}
\caption{AttemptToSplit(leafNode $l$)\label{table:attempttosplit}}
\end{function}
\begin{function}
\DontPrintSemicolon
\SetAlgoLined
\Begin{
Compute $\overline{G}_{int}(X_i)$ for each attribute $X_i \in \mathbf{X}_{int} - \{X_{\emptyset}\}$ using the counts $n_{ijk}({int})$\;
Let $X_a$ be the attribute with the highest $\overline{G}_{int}$\;
Let $X_{current}$ be the \textit{current} split attribute\;
Compute $\epsilon$ using equation \ref{eq:1}\;
\If {$\overline{G}_{int}(X_a) - \overline{G}_{int}(X_{current}) > \epsilon$}{
\uIf{$X_a = X_{\emptyset}$}{
Replace internal node $int$ with a leaf (kills subtree)\;
}
\ElseIf{$X_a \neq X_{current}$}{
Replace $int$ with an internal node that splits on $X_a$\;
\For {each branch of the split}{
Add a new leaf $l_m$ and let $\mathbf{X}_m = \mathbf{X} - X_a$\;
Let $\overline{G}_m(X_{\emptyset})$ be the $G$ obtained by predicting the most frequent class at $l_m$\;
\For {each class $y_k$ and each value $x_{ij}$ of each attribute $X_i \in X_m -$ $\{X_{\emptyset}\}$}{
Let $n_{ijk}(l_m) = 0$.
}
}
}
}
}
\caption{ReEvaluateBestSplit(internalNode $int$)\label{table:reevalsplit}}
\end{function}
\subsection{Convergence}
Hoeffding Tree offers guarantees on the expected disagreement from a batch tree trained on an infinite dataset (which is denoted $DT_*$ in \cite{domingos2000mining}, a convention we will follow). ``Extensional disagreement'' is defined as the probability that a pair of decision trees will produce different predictions for an example, and intensional disagreement as the probability that the path of an example will differ on the two trees.
The guarantees state that either form of disagreement is bounded by $\frac{\delta}{p}$, where $\delta$ is a tolerance level and $p$ is the leaf probability---the probability that an example will fall into a leaf at a given level. $p$ is assumed to be constant across all levels for simplicity.
Note that the guarantees will weaken significantly as the depth of the tree increases. While the built trees may have good prequential accuracy in practice on many test data streams, increasing the complexity and size of data streams such that a larger tree is required increases the chance that a wrong split is picked.
On the other hand, HATT converges in probability to the batch decision tree; we prove this below.
For our proofs, we will make the following assumption:
\begin{itemize}
\item No two attributes will have identical information gain. This is a simplifying assumption to ensure that we can always split given enough examples, because $\epsilon$ is monotonically decreasing.
\end{itemize}
\begin{lemma}\label{lem:reaches_same_root_split_as_HT}
HATT will have the same split attribute at the root as HT at the time HT splits the root node.
\end{lemma}
\begin{proof}
Let $S$ represent an infinite sequence drawn from a probability space $(\Omega, \mathcal{F}, P)$, where $(\vec{x},y) \in \Omega$ constitute our data points. The components of $\vec{x}$ take values corresponding to attributes $X_1, X_2,$ ... $X_m$, if we have $m$ attributes.
We are interested in attaining confidence $1-\delta$ that $\frac{1}{n}\sum_{i=1}^{n}\Delta G_i - \mu_{\Delta G} \leq \epsilon$. We do not know $\mu_{\Delta G}$, but we would like it to be non-zero, because that would imply the two attributes do not have equal information gain and that one of them is the clear winner. Setting $\mu_{\Delta G}$ to $0$, we want to be confident that $\overline{\Delta G}$ differs from zero by at least $\epsilon$. In other words, we are using a corollary of Hoeffding's Inequality to state with confidence that our random variable $\overline{\Delta G}$ diverges from $0$.
In order for this to happen, we need $\overline{\Delta G}$ to be greater than $\epsilon$. $\epsilon$ is monotonically decreasing, as we can see in equation \ref{eq:1}.
Given the same infinite sequence of examples $S$, both HT and HATT will be presented with the same evidence $S_t(N_0)$ at the root level node $N_0$ for all $t$ (that is, indefinitely). They will always have an identical value of $\epsilon$.
If at a specific time $T$ Hoeffding Tree compares attributes $X_a$ and $X_b$, which correspond to the attributes with the highest and second highest information gains $X^{1:T}$ and $X^{2:T}$ at time $T$ respectively, it follows that since $S_T(N_0)(HT) = S_T(N_0)(HATT)$, that is, since both trees have the same evidence at time $T$, Hoeffding AnyTime Tree will also find $X^{1:T} = X_a$. However, $HATT$ will compare $X_a$ with $X^T$, the current split attribute. There are four possibilities: $X^T=X^{1:T}$, $X^T=X^{2:T}$, $X^T=X^{i:T}, i > 2$ or $X^T$ is the null split. We will see that under all these scenarios, HATT will select (or retain) $X^{1:T}$.
We need to consider the history of $\overline{\Delta G}$, which can be different for HT and HATT. That is, it is possible that for $t \leq T$, $\overline{\Delta G}(HT) \neq \overline{\Delta G}(HATT)$. This is because while HT always compares $X^{1:t}$ and $X^{2:t}$, HATT may compare $X^{1:t}$ with, say, $X^{3:t}$, $X^{4:t}$ or $X_\emptyset$, which may happen to be the current split.
Clearly, at any timestep, $X^{i:t}(N_0)(HT) = X^{i:t}(N_0)(HATT)$. That is, the ranking of the information gains of the potential split attributes is always the same at the root node for both HT and HATT. It should also be obvious that since the observed sequences are identical, $G(X^{i:t}(N_0)(HT)) = G(X^{i:t}(N_0)(HATT))$---the information gains of all of the corresponding attributes at each timestep are equal. So the top split attribute at the root $X^{1:t}(N_0)$ is always the same for both trees. If we decompose $\overline{\Delta G}^t$ as $\overline{G}_{top}^t - \overline{G}_{bot}^t$, we will have $\overline{G}_{top}^t(HT) = \overline{G}_{top}^t(HATT)$, but $\overline{G}_{bot}^t(HT)$ and $\overline{G}_{bot}^t(HATT)$ need not be equal.
Since at any timestep $t$ HT will always choose to compare $G(X^{1:t})$ and $G(X^{2:t})$, while HATT will always compare $G(X^{1:t})$ with $G(X_{currentSplit})$, where $G(X_{currentSplit}) \leq G(X^{2:t})$, we have $\overline{G}_{bot}^t(HATT) \leq \overline{G}_{bot}^t(HT)$ for all $t$.
Because we have $\overline{G}_{bot}^t(HATT) \leq \overline{G}_{bot}^t(HT)$, we will have\\ ${\overline{\Delta G}^T}(HATT) \geq {\overline{\Delta G}^T}(HT)$, and ${\overline{\Delta G}^T}(HT) > \epsilon$ implies ${\overline{\Delta G}^T}(HATT) > \epsilon$, which would cause HATT to split on $X^{1:T}$ if it already does not happen to be the current split attribute simultaneously with HT at time $T$.
\end{proof}
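The monotone decrease of $\epsilon$ that this argument relies on is easy to check numerically. The sketch below assumes the standard Hoeffding bound form $\epsilon = \sqrt{R^2 \ln(1/\delta)/(2n)}$; the values of $R$ and $\delta$ are illustrative only.

```python
import math

def epsilon(n, delta=1e-7, R=1.0):
    # Hoeffding bound: R is the range of the averaged quantity
    # (log2(c) for information gain over c classes), delta the
    # allowed failure probability, n the number of examples seen.
    return math.sqrt(R * R * math.log(1.0 / delta) / (2.0 * n))

# epsilon shrinks monotonically as evidence accumulates, so any
# fixed nonzero gain advantage eventually exceeds it.
eps = [epsilon(n) for n in (100, 1_000, 10_000, 100_000)]
assert all(a > b for a, b in zip(eps, eps[1:]))
```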
\begin{lemma} \label{lem:same_root_split_as_DT_almost_surely}
The split attribute $X_R^{HATT}$ at the root node of HATT converges in probability to the split attribute $X_R^{DT_*}$ used at the root node of $DT_*$. That is, as the number of examples grows large, the probability that HATT will have at the root a split $X_R^{HATT}$ that matches the split $X_R^{DT_*}$ at the root node of $DT_*$ goes to 1.
\end{lemma}
\begin{proof}
Let us denote the attributes available at the root $X_i$ and the information gain of each attribute computed at time $t$ as $G(X_i)^t$, based on the observed sequence of examples $S^t = ((\vec{x}_1,y_1), (\vec{x}_2,y_2) ... (\vec{x}_t,y_t))$.
Now, we are working under the assumption that each $X_i$ has a finite, constant information gain associated with it---$DT_*$ would not converge, and thus any guarantees about $HT$'s deviation from $DT_*$ would not hold without making this assumption. Let us denote this gain $G(X_i)^{\infty}$.
This in turn implies that all pairwise differences in information gain: $\Delta G^{\infty} = G(X_a)^{\infty} - G(X_b)^{\infty}$ for any two attributes $X_a$ and $X_b$ must also be finite and constant over any given infinite dataset (from which we generate a stationary stream).
As $t \rightarrow \infty$, we expect the frequencies of our data $(\vec{x},y)$ to approach their long-term frequencies given by $P$. Consequently, we expect our measured sequences of averaged pairwise differences in information gain $\overline{\Delta G}(X_{ij})^t$ to converge to their respective constant values on the infinite dataset $\Delta G(X_{ij})^{\infty}$, which implies we will effectively have the chosen split attribute for $HATT$ converging in probability to the chosen split attribute for $DT_*$ as $t \rightarrow \infty$.
Why would this convergence only be in probability and not almost surely?
For any finite sequence of examples $S^t = ((\vec{x}_1,y_1),$ $(\vec{x}_2,y_2)$ $...$ $(\vec{x}_t,y_t))$ with frequencies of data that approach those given by $P$, we may observe with nonzero probability a follow-up sequence $((\vec{x}_{t+1}, y_{t+1}), (\vec{x}_{t+2}, y_{t+2}), ... (\vec{x}_{2t}, y_{2t}))$ that results in a distribution over the observations that is unlike $P$. Obviously, we expect the probability of observing such an anomalous sequence to go to $0$ as $t$ grows large---if we didn't, we would not expect the observed frequencies of the instances to ever converge to their long-term frequencies.
Any time we do observe such a sequence, we can expect to see anomalous values of $\overline{\Delta G}(X_{ij})^t$, which means that even if the top attribute has already been established as one that matches the attribute corresponding to $\Delta G(X_{ij})^{\infty}$, it may briefly be replaced by an attribute that is not the top attribute as per $G(X_i)^{\infty}$. We have already reasoned that the probability of observing such anomalous sequences must go to $0$; so we expect that the probability of observing sequences with instance frequencies approaching those given by the measure $P$ must go to $1$. And for a sequence that is distributed as per $P$, we expect our information gain differences $\overline{\Delta G}(X_{ij})^t \rightarrow \Delta G(X_{ij})^{\infty}$.
Remember that we have assumed that the pairwise differences in information gain $\Delta G(X_{ij})^{\infty}$ are nonzero (by implication of no two attributes having identical information gain). Since $\epsilon$ is monotonically decreasing and no two attributes are assumed to have identical gain, as $t$ grows large we will always pick the attribute with the largest information gain, because its advantage over the next best attribute will exceed some fixed $\epsilon$; and this picked top attribute will match, in probability, the one established by $DT_*$.
\end{proof}
\begin{lemma}\label{thm:converges_to_dt}
Hoeffding AnyTime Tree converges to the asymptotic batch tree in probability.
\end{lemma}
\begin{proof}
From Lemma \ref{lem:same_root_split_as_DT_almost_surely}, we have that as $t \rightarrow \infty$, $X_R^{HATT} \xrightarrow{P} X_R^{DT_*}$, meaning that though it is possible at any individual timestep to see $X_R^{HATT} \neq X_R^{DT_*}$, we have convergence in probability in the limit.
Consider the immediate subtrees of the root node, $HATT^1_i$ (denoting that they are rooted at level 1). In all cases where the root split matches $X_R^{DT_*}$, the instances observed at the roots of $HATT^1_i$ will be drawn from the same data distribution that the respective $DT_{*i}^1$ draw their instances from. Do the level 1 split attributes for HATT, $X_{i:L1}^{HATT}$, converge to $X_{i:L1}^{DT_*}$?
We can answer this by using the Law of Total Probability. Let us denote the event that for first level split $i$, $X_{i:L1}^{HATT} = X_{i:L1}^{DT_*}$ by $match_{i:L1}$. Then we have as $t \rightarrow \infty$:
\begin{gather*}
P(X_{i:L1}^{HATT} = X_{i:L1}^{DT_*}) \\
= P(match_{i:L1}) \\
=P(match_{i:L1}|match_{L0}) P(match_{L0}) \\
+ P(match_{i:L1}|{not\_match_{L0}}) P(not\_match_{L0})
\end{gather*}
We know that $P(match_{L0}) \rightarrow 1$ and $P(not\_match_{L0}) \rightarrow 0$ as $t \rightarrow \infty$ from Lemma \ref{lem:same_root_split_as_DT_almost_surely}. So we obtain
$P(X_{i:L1}^{HATT} = X_{i:L1}^{DT_*})^\infty = P(match_{i:L1}|match_{L0})^\infty$.
Effectively, we end up only having to condition on the event $match_{L0}$. In other words, we may safely use a subset of the stream where only $match_{L0}$ has occurred to reason about whether $X_{i:L1}^{HATT} = X_{i:L1}^{DT_*}$ as $t \rightarrow \infty$.
Now, we need to show that $P(match_{i:L1}|match_{L0}) \rightarrow 1$ as $t \rightarrow \infty$ to prove convergence at level 1. This is straightforward. Since we are only considering instances that result in the event $match_{L0}$ occurring, the conditional distributions at level 1 of HATT match the ones at level 1 of $DT_*$. We may extend this argument to any number of levels; thus $HATT$ converges in probability to $DT_*$.
\end{proof}
\subsection{Time and Space Complexity}
\textbf{Space Complexity:} On nominal data with $d$ attributes, $v$ values per attribute, and $c$ classes, HATT requires $O(dvc)$ memory to store node statistics at each node, as does HT \cite{domingos2000mining}. Because the number of nodes increases geometrically with depth, there may be a maximum of $(1-v^d)/(1-v)$ nodes, and so the worst-case space complexity is $O(v^{d-1}dvc)$. Since the worst-case space complexity for HT is given in terms of the current number of leaves $l$ as $O(ldvc)$ \cite{domingos2000mining}, we may write the space complexity for HATT as $O(ndvc)$, where $n$ is the total number of nodes. Note that $l$ is $O(n)$, so space complexity is equivalent for HATT and HT.
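The geometric bound on node count, and the resulting memory estimate, can be made concrete with a few lines of arithmetic; the attribute, value, and class counts below are illustrative choices, not figures from the paper.

```python
def max_nodes(d, v):
    # Geometric-series bound: 1 + v + v^2 + ... + v^(d-1) nodes
    # for a tree over d nominal attributes with v values each.
    return d if v == 1 else (1 - v ** d) // (1 - v)

def worst_case_counts(d, v, c):
    # Each node stores n_ijk statistics: d attributes, v values
    # per attribute, c classes -> O(n * d * v * c) memory overall.
    return max_nodes(d, v) * d * v * c

# e.g. d = 4 attributes, v = 3 values, c = 2 classes (illustrative)
assert max_nodes(4, 3) == 1 + 3 + 9 + 27
assert worst_case_counts(4, 3, 2) == 40 * 4 * 3 * 2
```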
\textbf{Time Complexity:} There are two primary operations associated with learning for HT: (i) incorporating a training example by incrementing leaf statistics and (ii) evaluating potential splits at the leaf reached by an example. The same operations are associated with HATT, but we also increment internal node statistics and evaluate potential splits at internal nodes on the path to the relevant leaf.
At any leaf for HT and at any node for HATT, no more than $d$ attribute evaluations will have to be considered. Each attribute evaluation at a node requires the computation of $v$ information gains. Each information gain computation requires $O(c)$ arithmetic operations, so each split re-evaluation will require $O(dvc)$ arithmetic operations at each node. As for incorporating an example, each node the example passes through will require $dvc$ counts updated and thus $O(dvc)$ associated arithmetic operations. The cost for updating the node statistics for HATT is $O(hdvc)$, where $h$ is the maximum height of the tree, because up to $h$ nodes may be traversed by the example, while it is $O(dvc)$ for HT, because only one set of statistics needs to be updated. Similarly, the worst-case cost of split evaluation at each timestep is $O(dvc)$ for HT and $O(hdvc)$ for HATT, as one leaf and one path respectively have to be evaluated.
\section{Related Work}\label{sec:relatedwork}
A sizable literature adapts HT in sometimes substantial ways \cite{jin2003efficient, Gama:2003:ADT:956750.956813, rutkowski2013decision}, but none of these adaptations, to the best of our knowledge, leads to the same fundamental change in learning premise as does HATT. \cite{rutkowski2013decision} and \cite{jin2003efficient} substitute the Hoeffding Test with McDiarmid's and the ``Normal'' test respectively; \cite{Gama:2003:ADT:956750.956813} adds support for Naive Bayes at leaves. Methods proposed prior to HT are either significantly less tight than HT in their approximation of a batch tree \cite{gratch1996sequential} or unsuitable for noisy streams and prohibitively computationally expensive \cite{utgoff1989incremental}.
The most related other works are techniques that seek to modify a tree through split replacement, usually for concept drift adaptation.
Drift adaptation generally requires explicit forgetting mechanisms in order to update the model so that it is relevant to the most recent data; this usually takes the form of a moving window that forgets older examples or a fading factor that decays the weight of older examples. In addition, when the underlying model is a tree, drift adaptation can involve subtree or split replacement.
Hulten et al.\ \cite{hulten2001mining} follow up on the Hoeffding Tree work with a procedure for drift adaptation (Concept-adapting Very Fast Decision Tree, CVFDT). CVFDT has a moving window that diminishes statistics recorded at a node due to an example that has fallen out of a window at a given time step. The example statistics at each internal node change as the window moves, and existing splits are replaced if the split attribute is no longer the winning attribute and one of a set of alternate subtrees grown by splitting on winning attributes registers greater accuracy.
The idea common to both CVFDT and HATT is that of split re-evaluation. However, the circumstances, objectives, and methods are entirely different. CVFDT is explicitly designed for a drifting scenario; HATT for a stationary one. CVFDT's goal is to reduce prequential error for the current window in the expectation that this is the best way to respond to drift; HATT's goal is to reduce prequential error overall for a stationary stream so that it asymptotically approaches that of a batch learner. CVFDT builds and substitutes alternate subtrees; HATT does not. CVFDT deliberately employs a range of forgetting mechanisms; HATT only forgets as a side effect of replacing splits---when a subtree is discarded, so too are all the historical distributions recorded therein. CVFDT always compares the top attributes, while HATT compares with either the current split attribute or the null split.
However, CVFDT is not incompatible with the core idea of Hoeffding Anytime Tree; it would be interesting to examine whether the idea of comparing with the null split or the current split attribute when applied to CVFDT will boost its performance on concept drifting streams. However, that is beyond the scope of this paper.
In order to avoid confusion, we will also mention the Hoeffding Adaptive Tree (HAT) \cite{bifet2009adaptive}. This method builds a tree that grows alternate subtrees if a subtree is observed to have poorer prequential accuracy on more recent examples, and substitutes an alternate when it has better accuracy than the original subtree. HAT uses an error estimator, such as ADWIN \cite{bifet2007learning} at each node to determine whether the prediction error due to a recent sequence of examples is significantly greater than the prediction error from a longer historical sequence so it can respond to drift. HATT, on the other hand, does not rely on prediction results or error, and does not aim to deliberately replace splits in response to drift.
\section{Performance}\label{sec:performance}
Our EFDT implementation was built by changing the split evaluations of the MOA implementation of VFDT \cite{bifet2010moa}. We compared VFDT and EFDT on all UCI \cite{Lichman:2013} classification data sets with over $200,000$ instances that had an obvious classification target variable, did not require text mining, and did not contain missing values (MOA has limited support for handling missing values). To augment this limited collection of large datasets, we also studied performance on the WISDM dataset \cite{Kwapisz10activityrecognition}. In all, we have 12 benchmark datasets with a mixture of numeric and nominal attributes ranging from a few dimensions to hundreds of dimensions.
Many UCI datasets are ordered. VFDT and EFDT are both designed to converge towards the tree that would be learned by a batch learner if the examples in a stream are drawn i.i.d.{} from a stationary distribution. The ordered UCI datasets do not conform to this scenario, so we also study performance when they are shuffled in order to simulate it. To this end, we shuffled the data 10 times with the Unix \emph{shuf} utility seeded by a reproducible stream of random bytes \cite{randomGNU} to create 10 different streams, averaged our prequential accuracy results over the streams, as well as comparing with performance on the corresponding unshuffled stream.
Our experiments are easily reproducible. Instructions for processing datasets, source code for VFDT and EFDT to be used with MOA, and Python scripts to run the experiments are all available at \url{https://github.com/chaitanya-m/kdd2018.git}.
EFDT attains substantially higher prequential accuracy on most streams (Figs. \ref{fig:kdd98} to \ref{fig:pamap2}), whether shuffled or unshuffled. Where VFDT wins (Figs. \ref{fig:higgs}, \ref{fig:hepmass}, and \ref{fig:susy}), the margin is far smaller than for most of the EFDT wins. While EFDT runtime generally exceeds that of VFDT, we find it rarely requires more than double the time and in some cases, when it learns smaller trees, requires less time. We evaluate leaves every $200$ timesteps and internal nodes every $2000$ timesteps.
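Prequential accuracy, the measure used throughout, interleaves testing and training: each example is first used to test the current model and only then to train it. A minimal sketch of the evaluation loop, assuming a learner object with scikit-style \texttt{predict}/\texttt{partial\_fit} methods (the names are ours, not MOA's):

```python
def prequential_accuracy(stream, model):
    """Test-then-train evaluation: every example is predicted
    before the model is allowed to learn from it."""
    correct = total = 0
    for x, y in stream:
        correct += (model.predict(x) == y)
        model.partial_fit(x, y)
        total += 1
    return correct / total

class Majority:
    """Trivial majority-class baseline, used only to exercise the loop."""
    def __init__(self):
        self.counts = {}
    def predict(self, x):
        return max(self.counts, key=self.counts.get) if self.counts else None
    def partial_fit(self, x, y):
        self.counts[y] = self.counts.get(y, 0) + 1

# the very first prediction is made before any training, so one error
assert prequential_accuracy([(0, "a")] * 10, Majority()) == 0.9
```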
\begin{figure}
\subfloat[a][10 stream shuffled average.]{
\includegraphics[height=1.18in, width=3.6in]{figures/005.png}
}
\vspace*{-5pt}\subfloat[b][Unshuffled.]{
\includegraphics[height=1.18in, width=3.6in]{figures/05a.png} }
\caption{KDD intrusion detection dataset \cite{Lichman:2013}}
\label{fig:kdd98}
\end{figure}
\begin{figure}
\subfloat[a][ 10 stream shuffled average.]{
\includegraphics[height=1.18in, width=3.6in]{figures/008.png} }
\vspace*{-5pt}\subfloat[b][Unshuffled.]{\includegraphics[height=1.18in, width=3.6in]{figures/08a.png} }
\caption{Poker dataset \cite{Lichman:2013}}
\label{fig:poker}
\end{figure}
\begin{figure}
\subfloat[a][10 stream shuffled average.]{
\includegraphics[height=1.18in, width=3.6in]{figures/020.png} }
\vspace*{-5pt}\subfloat[b][Unshuffled.]{
\includegraphics[height=1.18in, width=3.6in]{figures/20a.png} }
\caption{Fonts dataset \cite{Lichman:2013}}
\label{fig:font}
\end{figure}
\begin{figure}
\subfloat[a][10 stream shuffled average.]{
\includegraphics[height=1.18in, width=3.6in]{figures/011.png} }
\vspace*{-5pt}\subfloat[b][Unshuffled.]{
\includegraphics[height=1.18in, width=3.6in]{figures/11a.png} }
\caption{Forest covertype dataset \cite{covtypedataset, Lichman:2013}}
\label{fig:covtype}
\end{figure}
\begin{figure}
\subfloat[a][ 10 stream shuffled average.]{
\includegraphics[height=1.18in, width=3.6in]{figures/018.png} }
\vspace*{-5pt}\subfloat[b][Unshuffled.]{
\includegraphics[height=1.18in, width=3.6in]{figures/18a.png} }
\caption{Skin dataset \cite{datasetskin, Lichman:2013}}
\label{fig:skin}
\end{figure}
\begin{figure}
\subfloat[a][ 10 stream shuffled average.]{
\includegraphics[height=1.18in, width=3.6in]{figures/010.png} }
\vspace*{-5pt}\subfloat[b][Unshuffled.]{
\includegraphics[height=1.18in, width=3.6in]{figures/10a.png} }
\caption{Gas sensor dataset \cite{huerta2016online, Lichman:2013}}
\label{fig:gas}
\end{figure}
\begin{figure}
\subfloat[a][ 10 stream shuffled average.]{
\includegraphics[height=1.18in, width=3.6in]{figures/002.png} }
\vspace*{-5pt}\subfloat[b][Unshuffled.]{
\includegraphics[height=1.18in, width=3.6in]{figures/02a.png} }
\caption{WISDM dataset \cite{Kwapisz10activityrecognition, Lichman:2013}}
\label{fig:wisdm}
\end{figure}
\begin{figure}
\subfloat[a][ 10 stream shuffled average.]{
\includegraphics[height=1.18in, width=3.6in]{figures/007.png} }
\vspace*{-5pt}\subfloat[b][Unshuffled.]{
\includegraphics[height=1.18in, width=3.6in]{figures/07a.png} }
\caption{Human Activity Recognition dataset: Phone, watch accelerometer, and gyrometer data combined. \cite{Stisen:2015:SDD:2809695.2809718, Lichman:2013}}
\label{fig:har}
\end{figure}
\begin{figure}
\subfloat[a][ 10 stream shuffled average.]{
\includegraphics[height=1.18in, width=3.6in]{figures/019.png} }
\vspace*{-5pt}\subfloat[b][Unshuffled.]{
\includegraphics[height=1.18in, width=3.6in]{figures/19a.png} }
\caption{PAMAP2 Activity Recognition dataset (UCI)-- 9 subjects data combined \cite{reiss2012introducing, Lichman:2013}}
\label{fig:pamap2}
\end{figure}
Differences in shuffled and unshuffled performance highlight the amount of order that is present in the unshuffled data. The unshuffled Skin dataset contains B,G,R values and a target variable that indicates whether the input corresponds to skin or not. All positive examples are at the start followed by all negative examples; the net effect is that a learner will replace one extremely simple concept with another (Fig. \ref{fig:skin}). When shuffled, it is necessary to learn a more complex decision boundary, affecting performance for both learners.
A different effect is observed with the higher-dimensional Fonts dataset (Fig. \ref{fig:font}). The goal is to predict which of 153 fonts corresponds to a 19x19 greyscale image, with each pixel able to take 255 intensity values. When instances are sorted by font name alphabetically, each time a new font is encountered VFDT needs to learn the new concept at every leaf of an increasingly complex tree. In contrast, EFDT is able to readjust the model, efficiently discarding outdated splits to achieve an accuracy of around 99.8\%, making it a potentially powerful base learner for methods designed for concept drifting scenarios.
The results on the Poker and Forest-Covertype datasets (Figs. \ref{fig:poker}, \ref{fig:covtype}) reflect both effects: EFDT performs significantly better on ordered data, and performance for both learners deteriorates with shuffled data in comparison with unshuffled data.
Every additional level of a decision tree fragments the input space, slowing down tree growth exponentially. A delay in splitting at one level delays the start of collecting information with respect to the splits for the next level. These delays cascade, greatly delaying splitting at deeper levels of the tree.
Thus, we expect HATT to have an advantage over HT in situations where HT considerably delays splits at each level---such as when the difference in information gain between the top attributes at a node is low enough to require a large number of examples in order to overcome the Hoeffding bound, though the information gains themselves happen to be significant. This would lead to a potentially useful split in HT being delayed, and poor performance in the interim.
Conversely, when the differences in information gain between top attributes as well as the information gains themselves are low, it is possible that HATT chooses a split that would require a large number of examples to readjust. However, since we expect EFDT to keep up with VFDT on the whole, the main source of underperformance for EFDT is likely to be an overfitted model making low-level adjustments. Synthetic data from physics simulations available in the UCI repository (Higgs, Hepmass, SUSY) led to such a scenario.
\begin{figure}
\subfloat[a][ 10 stream shuffled average.]{
\includegraphics[height=1.18in, width=3.6in]{figures/006.png} }
\vspace*{-5pt}\subfloat[b][Unshuffled.]{
\includegraphics[height=1.18in, width=3.6in]{figures/06a.png} }
\caption{Higgs dataset \cite{baldi2014searching,Lichman:2013}}
\label{fig:higgs}
\end{figure}
\begin{figure}
\subfloat[a][ 10 stream shuffled average.]{
\includegraphics[height=1.18in, width=3.6in]{figures/001.png} }
\vspace*{-5pt}\subfloat[b][Unshuffled.]{
\includegraphics[height=1.18in, width=3.6in]{figures/01a.png} }
\caption{Hepmass dataset \cite{baldihepmass,Lichman:2013}}
\label{fig:hepmass}
\end{figure}
\begin{figure}
\subfloat[a][ 10 stream shuffled average.]{
\includegraphics[height=1.18in, width=3.6in]{figures/003.png} }
\vspace*{-5pt}\subfloat[b][Unshuffled.]{
\includegraphics[height=1.18in, width=3.6in]{figures/03a.png} }
\caption{SUSY dataset \cite{baldi2014searching,Lichman:2013}}
\label{fig:susy}
\end{figure}
Fig. \ref{fig:outro} shows us that with the MOA tree generator used in Fig. \ref{fig:intro}, even on a 100 million length stream, EFDT's prequential error is still an order of magnitude lower than that of VFDT.
\begin{figure}[t]
\subfloat[a][VFDT]{
\includegraphics[height=1.18in, width=3.6in]{figures/023_100m.png}}
\vspace*{-5pt}\subfloat[b][EFDT]{
\includegraphics[height=1.18in, width=3.6in]{figures/024_100m.png} }
\caption{A longer term view of the experiments from Fig. \ref{fig:intro} shows us that even 100 million examples in, EFDT maintains a commanding lead on prequential accuracy.}
\label{fig:outro}
\end{figure}
\section{Conclusions}
Hoeffding AnyTime Tree makes a simple change to the current de facto standard for incremental tree learning. The current state-of-the-art Hoeffding Tree aims to split at a node only when it has identified the best possible split, and then to never revisit that decision. In contrast, HATT aims to split as soon as a useful split is identified, and then to replace that split as soon as a better alternative is identified. Our results demonstrate that this strategy is highly effective on benchmark datasets.
Our experiments find that HATT has some inbuilt tolerance to concept drift, though it is not specifically designed as a learner for drift. It is easy to conceive of ensemble, forgetting, decay, or subtree replacement approaches built upon HATT to deal with concept drift, along the lines of approaches that have been proposed for HT.
HT cautiously works toward the asymptotic batch tree, ignoring, and thus not benefiting from potential improvements on the current state of the tree, until it is sufficiently confident that they will not need to be subsequently revised. If an incrementally learned tree is to be deployed to make predictions before fully learned, HATT's strategy of always utilizing the most useful splits identified to date has profound benefit.
\section{Introduction}
As a new technology, Wireless Sensor Networks (WSNs) have a wide
range of applications \cite{Culler-01, Bahl-02, Akyildiz-01}, including
environment monitoring, smart buildings, medical care, industrial and
military applications. Among them, a recent trend is to develop
commercial sensor networks that require pervasive sensing of both
environment and human beings, for example, assisted living
\cite{Akyildiz-02, Harvard-01,CROSSBOW} and smart homes
\cite{Harvard-01, Adya-01,CROSSBOW}.
\begin{quote}
``For these applications, sensor devices are incorporated into human
cloths \cite{Natarajan-01, Zhou-06, Bahl-02, Adya-01} for monitoring
health related information like EKG readings, fall detection, and
voice recognition''.
\end{quote}
While collecting all these multimedia information
\cite{Akyildiz-02} requires a high network throughput, off-the-shelf
sensor devices only provide very limited bandwidth in a single
channel: 19.2\,Kbps in MICA2 \cite{Bahl-02} and 250\,Kbps in MICAz.
In this article, we propose MMSN, an abbreviation for Multifrequency
Media access control for wireless Sensor Networks. The main
contributions of this work can be summarized as follows.
\begin{itemize}
\item To the best of our knowledge, the MMSN protocol is the first
multifrequency MAC protocol especially designed for WSNs, in which
each device is equipped with a single radio transceiver and
the MAC layer packet size is very small.
\item Instead of using pairwise RTS/CTS frequency negotiation
\cite{Adya-01, Culler-01, Tzamaloukas-01, Zhou-06},
we propose lightweight frequency assignments, which are good choices
for many deployed comparatively static WSNs.
\item We develop new toggle transmission and snooping techniques to
enable a single radio transceiver in a sensor device to achieve
scalable performance, avoiding the nonscalable ``one
control channel + multiple data channels'' design \cite{Natarajan-01}.
\end{itemize}
\section{MMSN Protocol}
\subsection{Frequency Assignment}
We propose a suboptimal distribution to be used by each node, which is
easy to compute and does not depend on the number of competing
nodes. A natural candidate is an increasing geometric sequence, in
which
\begin{equation}
\label{eqn:01}
P(t)=\frac{b^{\frac{t+1}{T+1}}-b^{\frac{t}{T+1}}}{b-1},
\end{equation}
where $t=0,{\ldots}\,,T$, and $b$ is a number greater than $1$.
In our algorithm, we use the suboptimal approach for simplicity and
generality. We need to make the distribution of the selected back-off
time slice at each node conform to what is shown in
Equation~\eqref{eqn:01}. It is implemented as follows: First, a random
variable $\alpha$ with a uniform distribution within the interval $(0,
1)$ is generated on each node, then time slice $i$ is selected
according to the following equation:
\[
i=\lfloor(T+1)\log_b[\alpha(b-1)+1]\rfloor.
\]
It can be easily proven that the distribution of $i$ conforms to
Equation~\eqref{eqn:01}.
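The inverse-transform step can also be checked empirically: sampling a uniform $\alpha$ and applying the formula for $i$ should reproduce the slice probabilities of Equation~\eqref{eqn:01}. The sketch below uses illustrative values $T = 7$ and $b = 3$ and a seeded generator; these parameters are our choices, not the protocol's.

```python
import math
import random

def backoff_slice(T, b, rng):
    # Inverse-transform sampling: a uniform alpha in (0,1) is mapped
    # to slice i = floor((T+1) * log_b(alpha*(b-1) + 1)), so i is in 0..T.
    alpha = rng.random()
    return math.floor((T + 1) * math.log(alpha * (b - 1) + 1, b))

def p_theory(t, T, b):
    # P(t) from Equation (1): an increasing geometric sequence.
    return (b ** ((t + 1) / (T + 1)) - b ** (t / (T + 1))) / (b - 1)

# Empirical check with illustrative values T = 7, b = 3.
T, b = 7, 3.0
rng = random.Random(42)
N = 200_000
counts = [0] * (T + 1)
for _ in range(N):
    counts[backoff_slice(T, b, rng)] += 1
for t in range(T + 1):
    assert abs(counts[t] / N - p_theory(t, T, b)) < 0.01
```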
So protocols \cite{Bahl-02, Culler-01,Zhou-06,Adya-01,
Tzamaloukas-01, Akyildiz-01} that use RTS/CTS
controls\footnote{RTS/CTS controls are required to be implemented by
802.11-compliant devices. They can be used as an optional mechanism
to avoid Hidden Terminal Problems in the 802.11 standard and
protocols based on those similar to \cite{Akyildiz-01} and
\cite{Adya-01}.} for frequency negotiation and reservation are not
suitable for WSN applications, even though they exhibit good
performance in general wireless ad-hoc
networks.
\subsubsection{Exclusive Frequency Assignment}
In exclusive frequency assignment, nodes first exchange their IDs
among two communication hops so that each node knows its two-hop
neighbors' IDs. In the second broadcast, each node beacons all
neighbors' IDs it has collected during the first broadcast period.
\paragraph{Eavesdropping}
Even though the even selection scheme leads to even sharing of
available frequencies among any two-hop neighborhood, it involves a
number of two-hop broadcasts. To reduce the communication cost, we
propose a lightweight eavesdropping scheme.
\subsection{Basic Notations}
As Algorithm~\ref{alg:one} states, for each frequency
number, each node calculates a random number (${\textit{Rnd}}_{\alpha}$) for
itself and a random number (${\textit{Rnd}}_{\beta}$) for each of its two-hop
neighbors with the same pseudorandom number generator.
\begin{algorithm}[t]
\SetAlgoNoLine
\KwIn{Node $\alpha$'s ID ($ID_{\alpha}$), and node $\alpha$'s
neighbors' IDs within two communication hops.}
\KwOut{The frequency number ($FreNum_{\alpha}$) node $\alpha$ gets assigned.}
$index$ = 0; $FreNum_{\alpha}$ = -1\;
\Repeat{$FreNum_{\alpha} > -1$}{
$Rnd_{\alpha}$ = Random($ID_{\alpha}$, $index$)\;
$Found$ = $TRUE$\;
\For{each node $\beta$ in $\alpha$'s two communication hops
}{
$Rnd_{\beta}$ = Random($ID_{\beta}$, $index$)\;
\If{($Rnd_{\alpha} < Rnd_{\beta}$) or ($Rnd_{\alpha}$ ==
$Rnd_{\beta}$ and $ID_{\alpha} < ID_{\beta}$)
}{
$Found$ = $FALSE$; break\;
}
}
\eIf{$Found$}{
$FreNum_{\alpha}$ = $index$\;
}{
$index$ ++\;
}
}
\caption{Frequency Number Computation}
\label{alg:one}
\end{algorithm}
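Algorithm~\ref{alg:one} can be transcribed in Python as follows. The helper \texttt{rnd} is a hypothetical stand-in for the shared pseudorandom number generator \texttt{Random(ID, index)}; any deterministic function that all nodes compute identically would do:

```python
import hashlib

def rnd(node_id: int, index: int) -> int:
    # Hypothetical stand-in for the common pseudorandom number generator
    # Random(ID, index): every node derives the same value for a given pair.
    digest = hashlib.sha256(f"{node_id}:{index}".encode()).digest()
    return int.from_bytes(digest[:4], "big")

def frequency_number(my_id: int, two_hop_ids: list) -> int:
    # A node claims frequency number `index` iff its (Rnd, ID) pair is the
    # largest among all nodes within two communication hops; otherwise it
    # retries with the next index, as in Algorithm 1.
    index = 0
    while True:
        mine = (rnd(my_id, index), my_id)
        if all(mine > (rnd(nid, index), nid) for nid in two_hop_ids):
            return index
        index += 1
```

Because only one node in a two-hop neighborhood can hold the maximum pair at any index, mutual neighbors always receive distinct frequency numbers.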
Bus masters are divided into two disjoint sets, $\mathcal{M}_{RT}$
and $\mathcal{M}_{NRT}$.
\begin{description}
\item[RT Masters]
$\mathcal{M}_{RT}=\{ \vec{m}_{1},\dots,\vec{m}_{n}\}$ denotes the
$n$ RT masters issuing real-time constrained requests. To model the
current request issued by an $\vec{m}_{i}$ in $\mathcal{M}_{RT}$,
three parameters---the recurrence time $(r_i)$, the service cycle
$(c_i)$, and the relative deadline $(d_i)$---are used, together with
the relationships among them.
\item[NRT Masters]
$\mathcal{M}_{NRT}=\{ \vec{m}_{n+1},\dots,\vec{m}_{n+m}\}$ is a set
of $m$ masters issuing nonreal-time constrained requests. In our
model, each $\vec{m}_{j}$ in $\mathcal{M}_{NRT}$ needs only one
parameter, the service cycle, to model the current request it
issues.
\end{description}
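For concreteness, the two master types can be modeled as plain records (a minimal sketch; the class and field names are ours, following the parameters above):

```python
from dataclasses import dataclass

@dataclass
class RTMaster:
    # Real-time master: recurrence time, service cycle, relative deadline.
    r: int
    c: int
    d: int

@dataclass
class NRTMaster:
    # Non-real-time master: only the service cycle is needed.
    c: int
```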
Here, a question may arise: since each node has a global ID, why
don't we just map nodes' IDs within two hops into a group of
frequency numbers and assign those numbers to all nodes within two
hops?
\section{Simulator}
\label{sec:sim}
If the model checker requests successors of a state which are not
created yet, the state space uses the simulator to create the
successors on-the-fly. To create successor states the simulator
conducts the following steps.
\begin{enumerate}
\item Load state into microcontroller model.
\item Determine assignments needed for resolving nondeterminism.
\item For each assignment:
\begin{enumerate}
\item either call the interrupt handler or simulate the effect of the next instruction, or
\item evaluate truth values of atomic propositions.
\end{enumerate}
\item Return resulting states.
\end{enumerate}
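The steps above can be sketched with a deliberately tiny toy model. The state layout, the two-instruction program, and the interrupt handler below are our own illustrative assumptions, not the actual microcontroller model:

```python
from dataclasses import dataclass, replace
from typing import List

@dataclass(frozen=True)
class State:
    pc: int      # program counter
    x: int       # a single register
    irq: bool    # interrupt pending?

def step_instruction(s: State) -> State:
    # Toy program: address 0 increments x, address 1 halts.
    return replace(s, pc=1, x=s.x + 1) if s.pc == 0 else s

def run_interrupt_handler(s: State) -> State:
    # Toy handler: acknowledge the interrupt and clear the register.
    return replace(s, irq=False, x=0)

def successors(s: State) -> List[State]:
    # Step 2: here the only nondeterminism is whether a pending
    # interrupt fires before the next instruction.
    assignments = [True, False] if s.irq else [False]
    out = []
    for take_irq in assignments:  # step 3: one branch per assignment
        nxt = run_interrupt_handler(s) if take_irq else step_instruction(s)
        out.append(nxt)
    return out                    # step 4: resulting states
```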
Figure~\ref{fig:one} shows a typical microcontroller C program that
controls an automotive power window lift. The program is one of the
programs used in the case study described in Section~\ref{sec:sim}.
At first sight, the program looks like an ANSI~C program. It
contains function calls, assignments, if clauses, and while loops.
\begin{figure}
\includegraphics{mouse}
\caption{Code before preprocessing.}
\label{fig:one}
\end{figure}
\subsection{Problem Formulation}
The objective of variable coalescence-based offset assignment is to find
both the coalescence scheme and the MWPC on the coalesced graph. We start
with a few definitions and lemmas for variable coalescence.
\begin{definition}[Coalesced Node (C-Node)]A C-node is a set of
live ranges (webs) in the AG or IG that are coalesced. Nodes within the same
C-node cannot interfere with each other on the IG. Before any coalescing is
done, each live range is a C-node by itself.
\end{definition}
\begin{definition}[C-AG (Coalesced Access Graph)]The C-AG is the access
graph after node coalescence, which is composed of all C-nodes and C-edges.
\end{definition}
\begin{lemma}
The C-MWPC problem is NP-complete.
\end{lemma}
\begin{proof} The MWPC problem can be reduced to C-MWPC by assuming a
coalescence graph without any edges or a fully connected interference
graph. In either case, each C-node remains an uncoalesced live range
after value separation, and C-MWPC is equivalent to MWPC. (A fully
connected interference graph arises when all live ranges interfere
with each other.) Since MWPC is NP-complete, the C-MWPC problem is
NP-complete as well.
\end{proof}
\begin{lemma}The solution to the C-MWPC problem is no
worse than the solution to the MWPC problem.
\end{lemma}
\begin{proof}
Any solution to the MWPC is also a solution to the C-MWPC, whereas
some solutions to the C-MWPC may not apply to the MWPC (if any
coalescing was made).
\end{proof}
\section{Performance Evaluation}
During all the experiments, the Geographic Forwarding (GF) routing
protocol by Akyildiz et al.~\shortcite{Akyildiz-01} is used. GF exploits
geographic information of nodes and conducts local data-forwarding to
achieve end-to-end routing. Our simulation is configured according to
the settings in Table~\ref{tab:one}. Each run lasts for 2 minutes and is
repeated 100 times. For each data value we present in the results, we
also give its 90\% confidence interval.
\begin{table}%
\caption{Simulation Configuration}
\label{tab:one}
\begin{minipage}{\columnwidth}
\begin{center}
\begin{tabular}{ll}
\toprule
TERRAIN\footnote{This is a table footnote. This is a
table footnote. This is a table footnote.} & (200m$\times$200m) Square\\
Node Number & 289\\
Node Placement & Uniform\\
Application & Many-to-Many/Gossip CBR Streams\\
Payload Size & 32 bytes\\
Routing Layer & GF\\
MAC Layer & CSMA/MMSN\\
Radio Layer & RADIO-ACCNOISE\\
Radio Bandwidth & 250Kbps\\
Radio Range & 20m--45m\\
\bottomrule
\end{tabular}
\end{center}
\bigskip\centering
\footnotesize\emph{Source:} This is a table
sourcenote. This is a table sourcenote. This is a table
sourcenote.
\emph{Note:} This is a table footnote.
\end{minipage}
\end{table}%
\section{Conclusions}
In this article, we develop the first multifrequency MAC protocol for
WSN applications in which each device adopts a
single radio transceiver. The different MAC design requirements for
WSNs and general wireless ad-hoc networks are
compared, and a complete WSN multifrequency MAC design (MMSN) is
put forth. During the MMSN design, we analyze and evaluate different
choices for frequency assignments and also discuss the nonuniform
back-off algorithms for the slotted media access design.
\section{Typical References in New ACM Reference Format}
A paginated journal article \cite{Abril07}, an enumerated
journal article \cite{Cohen07}, a reference to an entire issue \cite{JCohen96},
a monograph (whole book) \cite{Kosiur01}, a monograph/whole book in a series (see 2a in spec. document)
\cite{Harel79}, a divisible-book such as an anthology or compilation \cite{Editor00}
followed by the same example, however we only output the series if the volume number is given
\cite{Editor00a} (so Editor00a's series should NOT be present since it has no vol. no.),
a chapter in a divisible book \cite{Spector90}, a chapter in a divisible book
in a series \cite{Douglass98}, a multi-volume work as book \cite{Knuth97},
an article in a proceedings (of a conference, symposium, workshop for example)
(paginated proceedings article) \cite{Andler79}, a proceedings article
with all possible elements \cite{Smith10}, an example of an enumerated
proceedings article \cite{VanGundy07},
an informally published work \cite{Harel78}, a doctoral dissertation \cite{Clarkson85},
a master's thesis: \cite{anisi03}, an online document / world wide web
resource \cite{Thornburg01, Ablamowicz07, Poker06}, a video game (Case 1) \cite{Obama08} and (Case 2) \cite{Novak03}
and \cite{Lee05} and (Case 3) a patent \cite{JoeScientist001},
work accepted for publication \cite{rous08}, 'YYYYb'-test for prolific author
\cite{SaeediMEJ10} and \cite{SaeediJETC10}. Other cites might contain
'duplicate' DOI and URLs (some SIAM articles) \cite{Kirschmer:2010:AEI:1958016.1958018}.
Boris / Barbara Beeton: multi-volume works as books
\cite{MR781536} and \cite{MR781537}.
A couple of citations with DOIs: \cite{2004:ITE:1009386.1010128,
Kirschmer:2010:AEI:1958016.1958018}.
Online citations: \cite{TUGInstmem, Thornburg01, CTANacmart}.
\section{Introduction}
This format is to be used for submissions that are published in the
conference publications. We wish to give this volume a consistent,
high-quality appearance. We therefore ask that authors follow some
simple guidelines. In essence, you should format your paper exactly
like this document. The easiest way to do this is to replace the
content with your own material.
\section{ACM Copyrights \& Permission}
Accepted extended abstracts and papers will be distributed in the
Conference Publications. They will also be placed in the ACM Digital
Library, where they will remain accessible to thousands of researchers
and practitioners worldwide. To view the ACM's copyright and
permissions policy, see:
\url{http://www.acm.org/publications/policies/copyright_policy}.
\section{Page Size}
All SIGCHI submissions should be US letter (8.5 $\times$ 11
inches). US Letter is the standard option used by this \LaTeX\
template.
\section{Text Formatting}
Please use an 8.5-point Verdana font, or another sans serif font as
close as possible in appearance to Verdana, in which these guidelines
have been set. Arial 9-point font is a reasonable substitute for
Verdana, as it has a similar x-height. Please use serif or
non-proportional fonts only for special purposes, such as
distinguishing \texttt{source code} text.
\subsubsection{Text styles}
The \LaTeX\ template facilitates text formatting for normal (for body
text); heading 1, heading 2, heading 3; bullet list; numbered list;
caption; annotation (for notes in the narrow left margin); and
references (for bibliographic entries). Additionally, here is an
example of footnoted\footnote{Use footnotes sparingly, if at all.}
text. As stated in the footnote, footnotes should rarely be used.
\begin{table}
\caption{Table captions should be placed above the table. We
recommend table lines be 1 point, 25\% black. Minimize use of
table grid lines.}
\label{tab:table1}
\begin{tabular}{l r r r}
& & \multicolumn{2}{c}{\small{\textbf{Test Conditions}}} \\
\cmidrule(r){3-4}
{\small\textit{Name}}
& {\small \textit{First}}
& {\small \textit{Second}}
& {\small \textit{Final}} \\
\midrule
Marsden & 223.0 & 44 & 432,321 \\
Nass & 22.2 & 16 & 234,333 \\
Borriello & 22.9 & 11 & 93,123 \\
Karat & 34.9 & 2200 & 103,322 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Language, style, and content}
The written and spoken language of SIGCHI is English. Spelling and
punctuation may use any dialect of English (e.g., British, Canadian,
US, etc.) provided this is done consistently. Hyphenation is
optional. To ensure suitability for an international audience, please
pay attention to the following:
\begin{itemize}
\item Write in a straightforward style. Use simple sentence
structure. Try to avoid long sentences and complex sentence
structures. Use semicolons carefully.
\item Use common and basic vocabulary (e.g., use the word ``unusual''
rather than the word ``arcane'').
\item Briefly define or explain all technical terms. The terminology
common to your practice/discipline may be different in other design
practices/disciplines.
\item Spell out all acronyms the first time they are used in your
text. For example, ``World Wide Web (WWW)''.
\item Explain local references (e.g., not everyone knows all city
names in a particular country).
\item Explain ``insider'' comments. Ensure that your whole audience
understands any reference whose meaning you do not describe (e.g.,
do not assume that everyone has used a Macintosh or a particular
application).
\item Explain colloquial language and puns. Understanding phrases like
``red herring'' requires a cultural knowledge of English. Humor and
irony are difficult to translate.
\item Use unambiguous forms for culturally localized concepts, such as
times, dates, currencies, and numbers (e.g., ``1-5-97'' or
``5/1/97'' may mean 5 January or 1 May, and ``seven o'clock'' may
mean 7:00 am or 19:00). For currencies, indicate equivalences:
``Participants were paid {\fontfamily{txr}\selectfont \textwon}
25,000, or roughly US \$22.''
\item Be careful with the use of gender-specific pronouns (he, she)
and other gender-specific words (chairman, manpower,
man-months). Use inclusive language (e.g., she or he, they, chair,
staff, staff-hours, person-years) that is gender-neutral. If
necessary, you may be able to use ``he'' and ``she'' in alternating
sentences, so that the two genders occur equally
often~\cite{Schwartz:1995:GBF}.
\item If possible, use the full (extended) alphabetic character set
for names of persons, institutions, and places (e.g.,
Gr{\o}nb{\ae}k, Lafreni\'ere, S\'anchez, Nguy{\~{\^{e}}}n,
Universit{\"a}t, Wei{\ss}enbach, Z{\"u}llighoven, \r{A}rhus, etc.).
These characters are already included in most versions and variants
of Times, Helvetica, and Arial fonts.
\end{itemize}
\begin{marginfigure}
\includegraphics[width=\marginparwidth]{cats}
\caption{In this image, the cats are tessellated within a square
frame. Images should also have captions and be within the
boundaries of the sidebar on page~\pageref{bar:sidebar}. Photo:
\cczero~jofish on Flickr.}
\label{fig:marginfig}
\end{marginfigure}
\section{Figures}
The examples on this and following pages should help you get a feel
for how screen-shots and other figures should be placed in the
template. Your document may use color figures (see
Figures~\ref{fig:marginfig} and~\ref{fig:cats}), which are included in the page limit; the
figures must be usable when printed in black and white. You can use
the \texttt{marginfigure} environment to insert figures in the (left) margin
of the document (see Figure~\ref{fig:marginfig}). Finally, be sure to
make images large enough so the important details are legible and
clear (see Figure~\ref{fig:cats}).
\begin{figure*}
\includegraphics[width=\fulltextwidth]{map}
\caption{In this image, the map maximizes use of space.
Note that \LaTeX\ tends to render large figures on a
dedicated page. Image: \ccbynd~ayman on Flickr.}~\label{fig:cats}
\end{figure*}
\section{Tables}
\begin{margintable}
\caption{A simple narrow table in the left margin
space.}
\label{tab:table2}
\begin{tabular}{r r l}
& {\small \textbf{First}}
& {\small \textbf{Location}} \\
\toprule
Child & 22.5 & Melbourne \\
Adult & 22.0 & Bogot\'a \\
\midrule
Gene & 22.0 & Palo Alto \\
John & 34.5 & Minneapolis \\
\bottomrule
\end{tabular}
\end{margintable}
You may use tables inline with the text (see Table~\ref{tab:table1})
or within the margin as shown in Table~\ref{tab:table2}. Try to
minimize the use of lines (especially vertical lines). \LaTeX\ will
set the table font and captions sizes correctly; the latter must
remain unchanged.
\section{Accessibility}
The Executive Council of SIGCHI has committed to making SIGCHI
conferences more inclusive for researchers, practitioners, and
educators with disabilities. As a part of this goal, all authors
are asked to work on improving the accessibility of their
submissions. Specifically, we encourage authors to carry out the
following five steps:
\begin{itemize}
\item Add alternative text to all figures
\item Mark table headings
\item Generate a tagged PDF
\item Verify the default language
\item Set the tab order to ``Use Document Structure''
\end{itemize}
For links to instructions and resources, please see:
\url{http://chi2016.acm.org/accessibility}
Unfortunately, good tools do not yet exist to create tagged PDF files
from \LaTeX\ (see the ongoing effort at
\url{http://tug.org/twg/accessibility/}). \LaTeX\ users will need to
carry out all of the above steps in the PDF directly using Adobe
Acrobat, after the PDF has been generated.
\section{Producing and Testing PDF Files}
We recommend that you produce a PDF version of your submission well
before the final deadline. Your PDF file must be ACM DL Compliant and
meet stated requirements,
\url{http://www.sheridanprinting.com/sigchi/ACM-SIG-distilling-settings.htm}.
\begin{sidebar}
So long as you don't type outside the right
margin or bleed into the gutter, it's okay to put annotations over
here on the left. You may need
to manually align the margin paragraphs to your \LaTeX\ floats using
the \texttt{{\textbackslash}vspace{}} command.
\end{sidebar}
Test your PDF file by viewing or printing it with the same software we
will use when we receive it, Adobe Acrobat Reader Version 10. This is
widely available at no cost. Note that most reviewers will use a North
American/European version of Acrobat reader, so please check your PDF
accordingly.
\begin{acks}
We thank all the volunteers, publications support, staff, and
authors who wrote and provided helpful comments on previous versions
of this document. As well authors 1, 2, and 3 gratefully acknowledge
the grant from \grantsponsor{001}{NSF}{}
(\#\grantnum{001}{1234-2222-ABC}). Author 4 for example may want to
acknowledge a supervisor/manager from their original employer. This
whole paragraph is just for example. Some of the references cited in
this paper are included for illustrative purposes only.
\end{acks}
\section{References Format}
Your references should be published materials accessible to the
public. Internal technical reports may be cited only if they are
easily accessible and may be obtained by any reader for a nominal fee.
Proprietary information may not be cited. Private communications
should be acknowledged in the main text, not referenced (e.g.,
[Golovchinsky, personal communication]). References must be the same
font size as other body text. References should be in alphabetical
order by last name of first author. Use a numbered list of references
at the end of the article, ordered alphabetically by last name of
first author, and referenced by numbers in brackets. For papers from
conference proceedings, include the title of the paper and the name of
the conference. Do not include the location of the conference or the
exact date; do include the page numbers if available.
References should be in ACM citation format:
\url{http://www.acm.org/publications/submissions/latex_style}. This
includes citations to Internet
resources~\cite{CHINOSAUR:venue,cavender:writing,psy:gangnam}
according to ACM format, although it is often appropriate to include
URLs directly in the text, as above. Example reference formatting for
individual journal articles~\cite{ethics}, articles in conference
proceedings~\cite{Klemmer:2002:WSC:503376.503378},
books~\cite{Schwartz:1995:GBF}, theses~\cite{sutherland:sketchpad},
book chapters~\cite{winner:politics}, an entire journal
issue~\cite{kaye:puc},
websites~\cite{acm_categories,cavender:writing},
tweets~\cite{CHINOSAUR:venue}, patents~\cite{heilig:sensorama},
games~\cite{supermetroid:snes}, and
online videos~\cite{psy:gangnam} is given here. See the examples of
citations at the end of this document and in the accompanying
\texttt{BibTeX} document. This formatting is an edited version of the
format automatically generated by the ACM Digital Library
(\url{http://dl.acm.org}) as ``ACM Ref''. DOI and/or URL links are
optional but encouraged as are full first names. Note that the
Hyperlink style used throughout this document uses blue links;
however, URLs in the references section may optionally appear in
black.
\section{Introduction}
Player ranking is one of the most studied subjects in sports analytics \cite{Swartz}. In this paper we consider predicting success in the National Hockey League (NHL) from junior league data, with the goal of supporting draft decisions. The publicly available junior league data aggregate a season's performance into a single set of numbers for each player. Our method can be applied to any data of this type, for example to NBA basketball draft data (\url{www.basketball-reference.com/draft/}). Since our goal is to support draft decisions by teams, we ensure that the results of our data analysis method can be easily explained to and interpreted by sports experts.
Previous approaches for analyzing hockey draft data take a regression approach or a similarity-based approach. Regression approaches build a predictive model that takes as input a set of player features, such as demographics (age, height, weight) and junior league performance metrics (goals scored, plus-minus), and output a predicted success metric (e.g. number of games played in the professional league). The current state-of-the-art is a generalized additive model \cite{Schuckers2016}. Cohort-based approaches divide players into groups of comparables and predict future success based on a player’s cohort. For example, the PCS model \cite{PCS} clusters players according to age, height, and scoring rates. One advantage of the cohort model is that predictions can be explained by reference to similar known players, which many domain experts find intuitive. For this reason, several commercial sports analytics systems, such as Sony’s Hawk-Eye system, identify groups of comparables for each player. Our aim in this paper is to describe a new model for draft data that achieves the best of both approaches, regression-based and similarity-based.
Our method uses a model tree \cite{Friedman00, GUIDE}. Each internal node of the tree poses a yes/no question; depending on the answers, a player follows a path from the root to a leaf and is assigned to the group corresponding to that leaf. The tree builds a different regression model for each leaf node. Figure 1 shows an example model tree. A model tree offers several advantages.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=1.0\textwidth]{LMT_tree.png}
\caption{Logistic regression model tree for the 2004--2006 NHL draft cohort. The tree was built using the LogitBoost algorithm implemented in the LMT package of the Weka program \cite{Weka1, Hall2}.}
\end{center}
\end{figure}
\begin{itemize}
\item {Compared to a single regression model, the tree defines an ensemble of regression models, based on non-linear thresholds. This increases the expressive power and predictive accuracy of the model. The tree can represent complex interactions between player features and player groups. For example, if the data indicate that players from different junior leagues are sufficiently different to warrant building distinct models, the tree can introduce a split to distinguish different leagues.}
\item {Compared to a similarity-based model, tree construction learns groups of players from the data, without requiring the analyst to specify a similarity metric. Because tree learning selects splits that increase predictive accuracy, the learned distinctions between the groups are guaranteed to be predictively relevant to future NHL success. Also, the tree creates a model, not a single prediction, for each group, which allows it to differentiate players within the same group.}
\end{itemize}
A natural approach would be to build a linear regression tree to predict NHL success, which could be measured by the number of games a draft pick plays in the NHL. However, only about half the draft picks ever play a game in the NHL \cite{Tingling}. As observed by \cite{Schuckers2016}, this creates a zero-inflation problem that limits the predictive power of linear regression. We propose a novel solution to the zero-inflation problem, which applies logistic regression to predict whether a player will play at least one game in the NHL. We learn a logistic regression model tree, and rank players by the probability that the logistic regression model tree assigns to them playing at least one game. Intuitively, if we can be confident that a player will play at least one NHL game, we can also expect the player to play many NHL games. Empirically, we found that on the NHL draft data, the logistic regression tree produces a much more accurate player ranking than the linear regression tree.
Following \cite{Schuckers2016}, we evaluate the logistic regression ranking by comparing it to ranking players by their future success, measured as the number of NHL games they play after 7 years. The correlation of the logistic regression ranking with future success is competitive with that achieved by the generalized additive model of \cite{Schuckers2016}. We show in case studies that the logistic model tree adds information to the NHL’s Central Scouting Service Rank (CSS). For example, Stanley Cup winner \textit{Kyle Cumiskey} was not ranked by the CSS in his draft year, but was ranked as the third draft prospect in his group by the model tree, just behind \textit{Brad Marchand} and \textit{Mathieu Carle}. Our case studies also show that the feature weights learned from the data can be used to explain the ranking in terms of which player features contribute the most to an above-average ranking. In this way the model tree can be used to highlight exceptional features of a player for scouts and teams to take into account in their evaluation.
\textit{Paper Outline.} After we review related work, we show and discuss the model tree learned from the 2004--2006 draft data. Rank correlations are reported to evaluate predictive accuracy. We discuss in detail how the ensemble of group models represents a rich set of interactions between player features, player categories, and NHL success. Case studies give examples of strong players in different groups and show how the model can be used to highlight exceptional player features.
\section{Related Work}
Different approaches to player ranking are appropriate for different data types. For example, with dynamic play-by-play data, Markov models have been used to rank players \cite{Cervone2014,Thomas2013,Oliver2017,Kaplan2014}. For data that record the presence of players when a goal is scored, regression models have also been applied to extend the classic plus-minus metric \cite{Macdonald2011,Gramacy2013}. In this paper, we utilize player statistics that aggregate a season's performance into a single set of numbers. While this data is much less informative than play-by-play data, it is easier to obtain, interpret, and process.
\textit{Regression Approaches.} To our knowledge, this is the first application of model trees to hockey draft prediction, and the first model for predicting whether a draftee plays any games at all. The closest predecessor to our work is due to Schuckers \cite{Schuckers2016}, who uses a single generalized additive model to predict future NHL game counts from junior league data.
\textit{Similarity-Based Approaches} assume a similarity metric and group similar players to predict performance. A sophisticated example from baseball is the nearest neighbour analysis in the PECOTA system \cite{PECOTA}. In the Prospect Cohort Success (PCS) model for ice hockey \cite{PCS}, cohorts of draftees are defined based on age, height, and scoring rates. Model tree learning provides an automatic method for identifying cohorts with predictive validity. We refer to cohorts as groups to avoid confusion with the PCS concept. Because tree learning is computationally efficient, our model tree is able to take into account a larger set of features than age, height, and scoring rates. Also, it provides a separate predictive model for each group that assigns group-specific weights to different features. In contrast, PCS makes the same prediction for all players in the same cohort. So far, PCS has been applied to predict whether a player will play more than 200 career NHL games. Tree learning can easily be modified to make predictions for any game count threshold.
\section{Dataset}
Our data was obtained from public-domain on-line sources, including \url{nhl.com}, \url{eliteprospects.com}, and \url{draftanalyst.com}. We are also indebted to David Wilson for sharing his NHL performance dataset \cite{Wilson2016}. The full dataset is posted on GitHub (\url{https://github.com/liuyejia/Model_Trees_Full_Dataset}). We consider players drafted into the NHL between 1998 and 2008 (excluding goalies). Following \cite{Schuckers2016}, we took as our dependent variable \textbf{the total number of games $g_i$ played} by a player $i$ after 7 years under an NHL contract. The first seven seasons are chosen because NHL teams have at least seven-year rights to players after they are drafted \cite{Schucker2013}. Our dataset also includes the total time on ice after $7$ years. The results for time on ice were very similar to those for the number of games, so we discuss only the results for the number of games. The independent variables include demographic factors (e.g., age), performance metrics for the year in which a player was drafted (e.g., goals scored), and the rank assigned to a player by the NHL Central Scouting Service (CSS). If a player was not ranked by the CSS, we assigned (1 + the maximum rank for his draft year) as his CSS rank value. Another preprocessing step was to pool all European countries into a single category. If a player played for more than one team in his draft year (e.g., a league team and a national team), we added up the counts from the different teams. Table 1 lists all data columns and their meaning. Figure 2 shows an excerpt from the dataset.
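Two of the preprocessing steps above, imputing CSS ranks for unranked players and pooling European nationalities, can be sketched as follows (the dictionary keys mirror the column names in Table 1; this is an illustrative sketch, not the actual pipeline):

```python
def preprocess(players):
    # players: list of dicts with at least the keys "Country" and "CSS_rank"
    # (None marks a player the CSS did not rank in that draft year).
    ranked = [p["CSS_rank"] for p in players if p["CSS_rank"] is not None]
    unranked_value = 1 + max(ranked)  # 1 + maximum rank for the draft year
    for p in players:
        if p["CSS_rank"] is None:
            p["CSS_rank"] = unranked_value
        if p["Country"] not in ("CAN", "USA"):
            p["Country"] = "EURO"     # pool all European countries
    return players
```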
\begin{table}[!h]
\begin{center}
\begin{tabular}{ | l | p{10cm} |}
\hline
Variable Name & Description \\ \hline
id & nhl.com id for NHL players, otherwise Eliteprospects.com id \\ \hline
DraftAge & Age in Draft Year \\ \hline
Country & Nationality. Canada $\rightarrow$ 'CAN', USA $\rightarrow$ 'USA', countries in Europe $\rightarrow$ 'EURO' \\ \hline
Position & Position in Draft Year. Left Wing $\rightarrow$ 'L', Right Wing $\rightarrow$ 'R', Center $\rightarrow$ 'C', Defencemen $\rightarrow$ 'D' \\ \hline
Overall & Overall pick in NHL Entry Draft \\ \hline
CSS\_rank & Central scouting service ranking in Draft Year \\ \hline
rs\_GP & Games played in regular seasons in Draft Year \\ \hline
rs\_G & Goals in regular seasons in Draft Year \\ \hline
rs\_A & Assists in regular seasons in Draft Year \\ \hline
rs\_P & Points in regular seasons in Draft Year \\ \hline
rs\_PIM & Penalty Minutes in regular seasons in Draft Year \\ \hline
rs\_PlusMinus & Goal Differential in regular seasons in Draft Year\\ \hline
po\_GP & Games played in playoffs in Draft Year \\ \hline
po\_G & Goals in playoffs in Draft Year \\ \hline
po\_A & Assists in playoffs in Draft Year \\ \hline
po\_P & Points in playoffs in Draft Year \\ \hline
po\_PIM & Penalty Minutes in playoffs in Draft Year \\ \hline
po\_PlusMinus & Goal differential in playoffs in Draft Year \\ \hline
sum\_7yr\_GP & Total NHL games played in player's first 7 years of NHL career \\ \hline
sum\_7yr\_TOI & Total NHL Time on Ice in player's first 7 years of NHL career \\ \hline
GP\_7yr\_greater\_than\_0 & Played a game or not in player's first 7 years of NHL career\\ \hline
\end{tabular}
\caption{Player Attributes listed in dataset \textit{(excluding weight and height)}.}
\end{center}
\end{table}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.9\textwidth]{excerp_NHL_datasets.png}
\caption{Sample Player Data for their draft year. rs = regular season. We use the same statistics for the playoffs \textit{(not shown)}.}
\end{center}
\end{figure}
\section{Model Tree Construction}
Model trees are a flexible formalism that can be built for any regression model. An obvious candidate for a regression model would be linear regression; alternatives include a generalized additive model \cite{Schuckers2016}, and a Poisson regression model specially built for predicting counts \cite{Ryder}. We introduce a different approach: a logistic regression model to predict whether a player will play any games at all in the NHL ($g_i>0$). The motivation is that many players in the draft never play any NHL games at all (up to 50\% depending on the draft year) \cite{Tingling}. This poses an extreme zero-inflation problem for any regression model that aims to predict directly the number of games played. In contrast, for the classification problem of predicting whether a player will play any NHL games, zero-inflation means that the data set is balanced between the classes. This classification problem is interesting in itself; for instance, a player agent would be keen to know what chances their client has to participate in the NHL. The logistic regression probabilities $p_i=P(g_i>0)$ can be used not only to predict whether a player will play any NHL games, but also to rank players such that the ranking correlates well with the actual number of games played. Our method is therefore summarized as follows.
\begin{enumerate}
\boxitem{
\item[1.]
Build a tree whose leaves contain a logistic regression model.
\item[2.]
The tree assigns each player $i$ to a unique leaf node $l_i$, with a logistic regression model $m(l_i)$.
\item[3.]
Use $m(l_i)$ to compute a probability $p_i= P(g_i>0)$.
}
\end{enumerate}
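The three steps above can be sketched as follows. This is an illustrative approximation, not the Weka LMT implementation used in our experiments: we grow a shallow decision tree to form the groups and then fit one logistic regression per leaf; the class name and all parameter settings are our own assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

class SimpleModelTree:
    """Illustrative model tree: a shallow tree defines player groups,
    and each leaf carries its own logistic regression model."""

    def __init__(self, max_depth=3):
        self.tree = DecisionTreeClassifier(max_depth=max_depth,
                                           min_samples_leaf=20)
        self.leaf_models = {}

    def fit(self, X, y):
        self.tree.fit(X, y)              # step 1: build the tree
        leaves = self.tree.apply(X)      # step 2: assign players to leaves
        for leaf in np.unique(leaves):
            mask = leaves == leaf
            if len(np.unique(y[mask])) > 1:
                m = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
            else:
                m = None                 # pure leaf: use the constant rate
            self.leaf_models[leaf] = (m, y[mask].mean())
        return self

    def predict_proba(self, X):
        leaves = self.tree.apply(X)
        p = np.empty(len(X))
        for i, leaf in enumerate(leaves):  # step 3: per-leaf P(g_i > 0)
            m, base = self.leaf_models[leaf]
            p[i] = m.predict_proba(X[i:i + 1])[0, 1] if m is not None else base
        return p
```

The returned probabilities can then be used directly to rank players, as described above.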
Figure 1 shows the logistic regression model tree learned for our second cohort by the LogitBoost algorithm. It places CSS rank at the root as the most important attribute. Players ranked better than $12$ form an elite group, of whom almost $82\%$ play at least one NHL game. For players at rank $12$ or worse, the tree next considers their regular season points total. Players with rank worse than $12$ and total points below $12$ form an unpromising group: only $16\%$ of them play an NHL game. Players with rank worse than $12$ but whose points total is $12$ or higher are divided by the tree into three groups according to whether their regular season plus-minus score is positive, negative, or $0$ (a three-way split is represented by two binary splits). If the plus-minus score is negative, the prospects of playing an NHL game are fairly low at about $37\%$. For a neutral plus-minus score, this increases to $61\%$. For players with a positive plus-minus score, the tree uses the number of playoff assists as the next most important attribute. Players with a positive plus-minus score and more than $10$ playoff assists form a small but strong group that is $92\%$ likely to play at least one NHL game.
\section{Results: Predictive Modelling}
Following \cite{Schuckers2016}, we evaluated the predictive accuracy of the LMT model using the Spearman Rank Correlation (SRC) between two player rankings: $i)$ the performance ranking based on the actual number of NHL games that a player played, and $ii)$ the ranking of players based on the probability $p_i$ of playing at least one game (LMT SRC). We also compared it with $iii)$ the ranking of players based on the order in which they were drafted (Draft Order SRC). The draft order can be viewed as the ranking that reflects the judgment of NHL teams. We provide the formula for the Spearman correlation in the Appendix. Table 2 shows the Spearman correlation for different rankings.
\begin{table}[!h]
\centering
\begin{tabular}{|l|c|c|c|r|}
\hline
\begin{tabular}{@{}c@{}} Training Data \\ NHL Draft Years \end{tabular} & \begin{tabular}{@{}c@{}} Out of Sample \\ Draft Years\end{tabular} & \begin{tabular}{@{}c@{}} Draft Order \\ SRC\end{tabular} & \begin{tabular}{@{}c@{}} LMT \\ Classification Accuracy\end{tabular} & \begin{tabular}{@{}c@{}} LMT \\ SRC\end{tabular} \\ \hline
1998, 1999, 2000 & 2001 & 0.43 & 82.27\% & 0.83 \\ \hline
1998, 1999, 2000 & 2002 & 0.30 & 85.79\% & 0.85 \\ \hline
2004, 2005, 2006 & 2007 & 0.46 & 81.23\% & 0.84 \\ \hline
2004, 2005, 2006 & 2008 & 0.51 & 63.56\% & 0.71 \\ \hline
\end{tabular}
\caption{Predictive performance (our Logistic Model Tree vs. overall draft ranking) measured by Spearman Rank Correlation. Bold indicates the best values.}
\end{table}
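The SRC computation used in Table 2 can be sketched with \texttt{scipy.stats.spearmanr}. The toy player numbers below are invented for illustration only and are not drawn from our data set:

```python
from scipy.stats import spearmanr

# Illustrative toy data (not from our data set): six draftees.
games_played = [480, 250, 130, 12, 0, 0]             # actual sum_7yr_GP
draft_order  = [1, 2, 3, 4, 5, 6]                    # pick number (1 = first)
model_prob   = [0.97, 0.80, 0.90, 0.30, 0.40, 0.05]  # p_i = P(g_i > 0)

# An earlier pick should mean a better player, so negate the pick number
# before correlating it with games played.
rho_draft, _ = spearmanr(games_played, [-d for d in draft_order])
rho_model, _ = spearmanr(games_played, model_prob)
print(f"Draft Order SRC: {rho_draft:.2f}, LMT SRC: {rho_model:.2f}")
```

Ties in games played (the many zero-game players) are handled by \texttt{spearmanr} via average ranks, which is why the classification probabilities can still produce a meaningful rank correlation despite zero-inflation.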
\textit{Other Approaches.} We also tried designs based on a linear regression model tree, using the M5P algorithm implemented in the Weka program. The result is a decision stump that splits on CSS rank only, which had substantially worse predictive performance (i.e., a Spearman correlation of only $0.4$ for the $2004$--$2006$ cohort). For the generalized additive model (gam), the reported correlations were $2001: 0.53$, $2002: 0.54$, $2007: 0.69$, $2008: 0.71$ \cite{Schuckers2016}. Our correlations are not directly comparable to the gam model's because of differences in data preparation: the gam model was applied only to drafted players who played at least one NHL game, and the CSS rank was replaced by the Cescin conversion factors: for North American players, multiply CSS rank by $1.35$, and for European players, by $6.27$ \cite{Fyffe}. The Cescin conversion factors represent an interaction between the player's country and the player's CSS rank. A model tree offers another approach to representing such interactions: by splitting on a player location node, the tree can build a different model for each location. Whether the data warrant building different models for different locations is a data-driven decision made by the tree building algorithm. The same point applies to other sources of variability, for example the draft year or the junior league. Including the junior league as a feature has the potential to lead to insights about the differences between leagues, but would make the tree more difficult to interpret; we leave this topic for future work. In the next section we examine the interaction effects captured by the model tree in the different models learned in each leaf node.
\section{Results: Learned Groups and Logistic Regression Models}
We examine the learned group regression models, first in terms of the dependent success variable, then in terms of the player features.
\subsection{Groups and the Dependent Variable}
Figure 3 shows boxplots for the distribution of our dependent variable $g_i$. The strongest groups are, in order, 1, 6, and 4. The other groups show weaker performance on the whole, although in each group some players reach high numbers of games. Most players in Groups 2, 3, 4, and 5 have a GP of zero, while Groups 1 and 6 represent the strongest cohorts in our prediction: over $80\%$ of their players played at least one NHL game. The tree identifies that among the players who do not have a very high CSS rank (worse than $12$), the combination of regular season $Points \geq 12$, $PlusMinus > 0$, and playoff $Assists > 10$ is a strong indicator of playing a substantive number of NHL games (median $g_i = 128$).
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.9\textwidth]{nhl_boxplot.png}
\caption{Boxplots for the dependent variable $g_i$ , the total number of NHL games played after $7$ years under an NHL contract. Each boxplot shows the distribution for one of the groups learned by the logistic regression model tree. The group size is denoted $n$.}
\end{center}
\end{figure}
\subsection{Groups and the Independent Variables}
Figure 4 shows the average statistics by group and for all players. The CSS rank for Group 1 is by far the highest. The data validate the high ranking in that $82\%$ of players in this group went on to play an NHL game. Group 6 in fact attains an even higher proportion of $92\%$. The average statistics of this group are even more impressive than those of Group 1 (e.g., $67$ regular season points in Group 6 vs. $47$ for Group 1). But the average CSS rank is the lowest of all groups. So this group may represent a small set of players ($n = 13$) overlooked by the scouts but identified by the tree. Other than Group 6, the group with the lowest CSS rank on average is Group 2. The data validate the low ranking in that only $16\%$ of players in this group went on to play an NHL game. The group averages are also low (e.g., $6$ regular season points is much lower than in the other groups).
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.9\textwidth]{mean_points_nhl.png}
\caption{Statistics for the average players in each group and all players.}
\end{center}
\end{figure}
\section{Group Models and Variable Interactions}
Figure 5 illustrates the logistic regression weights by group. A positive weight implies that an increase in the covariate value predicts an increase in the probability of playing at least one game, relative to the probability of playing zero games. Conversely, a negative weight implies that an increase in the covariate value decreases the predicted probability of playing at least one game. Bold numbers show the groups for which an attribute is most relevant. The table exhibits many interesting interactions among the independent variables; we discuss only a few. Notice that if the tree splits on an attribute, the attribute is assigned a high-magnitude regression weight by the logistic regression model for the relevant group. Therefore our discussion focuses on the tree attributes.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=1.0\textwidth]{nhl_weights_new.png}
\caption{Logistic regression weights by group. E = Europe, C = Canada, U = USA, rs = Regular Season, po = Playoff. Largest-magnitude weights are in bold. Underlined weights are discussed in the text.}
\end{center}
\end{figure}
At the tree root, \textit{CSS rank} receives a large negative weight of $-17.9$ for identifying the most successful players in Group 1, where all CSS ranks are better than $12$. Figure 6a shows that the proportion of above-zero to zero-game players decreases quickly in Group 1 with worse CSS rank. However, the decrease is not monotonic. Figure 6b is a scatterplot of the original data for Group 1. We see a strong linear correlation ($r = -0.39$), and also a large variance within each rank. The proportion aggregates the individual data points at a given rank, thereby eliminating the variance. This makes the proportion a smoother dependent variable for a regression model than the individual counts.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=1.0\textwidth]{CSS_rank_NHL_plot.png}
\caption{Proportion and scatter plots for CSS\_rank vs. sum\_7yr\_GP in Group 1.}
\end{center}
\end{figure}
Group 5 has the smallest-magnitude CSS-rank coefficient, $-0.65$. Group 5 consists of players whose CSS ranks are worse than $12$, regular season points are $12$ or higher, and plus-minus score is positive. Figure 7a plots CSS rank vs. above-zero proportion for Group 5. As the proportion plot shows, the low weight is due to the fact that the proportion trends downward only at ranks worse than $200$. The scatterplot in Figure 7b shows a similarly weak linear correlation of $-0.12$.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=1.0\textwidth]{Group_5_CSSrank.png}
\caption{Proportion and scatter plots for CSS\_rank vs. sum\_7yr\_GP in Group 5.}
\end{center}
\end{figure}
\textit{Regular season points} are the most important predictor for Group 2, which comprises players with CSS rank worse than $12$ and regular season points below $12$. In the proportion plot (Figure 8), we see a strong relationship between points and the chance of playing more than $0$ games (logistic regression weight $14.2$). In contrast, in Group 4 (overall weight $-1.4$), there is essentially no relationship up to $65$ points; for players with points between $65$ and $85$, the chance of playing more than zero games in fact slightly decreases with increasing points.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=1.0\textwidth]{rs_points_nhl.png}
\caption{Proportion\_of\_Sum\_7yr\_GP\_greater\_than\_0 vs. rs\_P in Group 2\&4.}
\end{center}
\end{figure}
In Group 3, players are ranked at level $12$ or worse, have collected at least $12$ regular season points, and show a negative plus-minus score. The most important feature for Group $3$ is the \textit{regular season plus-minus} score (logistic regression weight $13.16$), which is negative for all players in this group. In this group, the chances of playing an NHL game increase with plus-minus, but not monotonically, as Figure 9 shows.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=1.0\textwidth]{rs_plusminus_nhl.png}
\caption{Proportion and scatter plots for rs\_PlusMinus vs. sum\_7yr\_GP in Group 3.}
\end{center}
\end{figure}
For \textit{regular season goals}, Group 5 assigns a high logistic regression weight of $3.59$. However, Group 2 assigns a surprisingly negative weight of $-2.17$. Group 5 comprises players with CSS rank worse than $12$, regular season points of $12$ or higher, and a positive plus-minus score. About $64.8\%$ of players in this group are offensive players (see Figure 10). The positive weight therefore indicates that successful forwards score many goals, as we would expect.
Group 2 contains mainly defensemen ($61.6\%$; see Figure 10). The typical strong defenseman scores $0$ or $1$ goals in this group. Players with more goals tend to be forwards, who are weaker in this group. In sum, the tree assigns weights to goals that are appropriate for different positions, using statistics that correlate with position (e.g., plus-minus), rather than the position directly.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.9\textwidth]{position_nhl_plot.png}
\caption{Distribution of defensemen vs. forwards in Groups 5 and 2. The group size is denoted $n$.}
\end{center}
\end{figure}
\section{Identifying Exceptional Players}
Teams make drafting decisions not based on player statistics alone, but drawing on all relevant sources of information, and with extensive input from scouts and other experts. As Cameron Lawrence from the Florida Panthers put it, \lq the numbers are often just the start of the discussion\rq \cite{Joyce}. In this section we discuss how the model tree can be applied to support the discussion of individual players by highlighting their special strengths. The idea is that the learned weights can be used to identify which features of a highly-ranked player differentiate him the most from others in his group.
\subsection*{Explaining the Rankings: Identifying Weak Points and Strong Points}
Our method is as follows. For each group, we find the average feature vector of the players in the group, which we denote by $\overline{x_{g1}}, \overline{x_{g2}}, \ldots, \overline{x_{gm}}$ (see Figure 4). We denote the features of player $i$ as $x_{i1}, x_{i2}, \ldots, x_{im}$. Then given a weight vector $(w_1, \ldots, w_m)$ for the logistic regression model of group $g$, the log-odds difference between player $i$ and a random player in the group is given by
\begin{center}
$\sum_{j=1}^{m}w_j(x_{ij} - \overline{x_{gj}})$
\end{center}
We can interpret this sum as a measure of how high the model ranks player $i$ compared to other players in his group. This suggests defining as the player's strongest features the $x_{ij}$ that maximize $w_j(x_{ij} - \overline{x_{gj}})$, and as his weakest features those that minimize $w_j(x_{ij} - \overline{x_{gj}})$. This approach highlights features that are $i$) relevant to predicting future success, as measured by the magnitude of $w_j$, and $ii$) different from the average value in the player's group of comparables, as measured by the magnitude of $x_{ij} - \overline{x_{gj}}$.
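This computation can be sketched directly from the formula above. The feature names, weights, and group averages below are illustrative placeholders, not the learned values from our model:

```python
import numpy as np

def strong_and_weak_points(x_i, x_bar_g, w, names, k=3):
    """Score each feature j by w_j * (x_ij - xbar_gj), the player's
    per-feature contribution to the log-odds difference, and return
    the k largest (strongest) and k smallest (weakest) contributions."""
    contrib = w * (x_i - x_bar_g)
    order = np.argsort(contrib)
    strongest = [(names[j], contrib[j]) for j in order[::-1][:k]]
    weakest = [(names[j], contrib[j]) for j in order[:k]]
    return strongest, weakest

# Illustrative feature names, group weights, and averages (made up).
names = ["CSS_rank", "rs_P", "rs_PlusMinus", "po_A"]
w     = np.array([-0.5, 0.8, 0.6, 1.1])   # group's logistic regression weights
x_i   = np.array([40.0, 70.0, 12.0, 11.0])  # one player's statistics
x_bar = np.array([70.0, 45.0, 4.0, 3.0])    # group average feature vector

strong, weak = strong_and_weak_points(x_i, x_bar, w, names)
```

Note that a better-than-average CSS rank (a smaller number) yields a positive contribution here because the rank weight is negative, matching the interpretation in the text.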
\subsection*{Case Studies}
Figure 11 shows, for each group, the three strongest points for the most highly ranked players in the group. We see that the ranking for individual players is based on different features, even within the same group. The table also illustrates how the model allows us to identify a group of comparables for a given player. We discuss a few selected players and their strong points. The most interesting cases are often those where our ranking differs from the scouts' CSS rank. We therefore discuss the groups with lower rank first.
Among the players who were not ranked by CSS at all, our model ranks \textit{Kyle Cumiskey} at the top. Cumiskey was drafted in place $222$, played $132$ NHL games in his first $7$ years, represented Canada in the World Championship, and won a Stanley Cup in $2015$ with the Blackhawks. His strongest points were being Canadian, and the number of games played (e.g., $27$ playoff games vs. $19$ group average).
In the lowest CSS-rank group 6 (average $107$), our top-ranked player \textit{Brad Marchand} received CSS rank $80$, even below his Boston Bruins teammate Lucic's. Given his Stanley Cup win and success representing Canada, arguably our model was correct to identify him as a strong NHL prospect. The model highlights his superior play-off performance, both in terms of games played and points scored. Group 2 (CSS average $94$) is a much weaker group. \textit{Matt Pelech} is ranked at the top by our model because of his unusual weight, which in this group is unusually predictive of NHL participation. In group 4 (CSS average $86$), \textit{Sami Lepisto} was top-ranked, in part because he did not suffer many penalties although he played a high number of games. In group 3 (CSS average $76$), \textit{Brandon McMillan} is ranked relatively high by our model compared to the CSS. This is because in this group, left-wingers and shorter players are more likely to play in the NHL. In our ranking, \textit{Milan Lucic} tops Group 5 (CSS average $71$). At $58$, his CSS rank is above average in this group, but much below the highest CSS rank player (Legein at $13$). The main factors for the tree model are his high weight and number of play-off games played. Given his future success (Stanley Cup, NHL Young Stars Game), arguably our model correctly identified him as a star in an otherwise weaker group. The top players in Group 1 like \textit{Sidney Crosby} and \textit{Patrick Kane} are obvious stars, who have outstanding statistics even relative to other players in this strong group.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.58\textwidth]{NHL_exceptional_player.png}
\caption{Strongest Statistics for the top players in each group. Underlined players are discussed in the text.}
\end{center}
\end{figure}
\section{Conclusion and Future Work}
We have proposed building a regression model tree for ranking draftees in the NHL, or other sports, based on a list of player features and performance statistics. The model tree groups players according to the values of discrete features, or learned thresholds for continuous performance statistics. Each leaf node defines a group of players that is assigned its own regression model. Tree models combine the strengths of both regression and cohort-based approaches, where player performance is predicted with reference to comparable players. An obvious approach is to use a linear regression tree for predicting our dependent variable, the number of NHL games played by a player within $7$ NHL years. However, we found that a linear regression tree performs poorly due to the zero-inflation problem (many draft picks never play any NHL game). Instead, we introduced the idea of using a logistic regression tree to predict whether a player plays any NHL game within $7$ years. Players are ranked according to the model tree probability that they play at least $1$ game.
Key findings include the following. 1) The model tree ranking correlates well with the actual success ranking according to the actual number of games played: better than draft order and competitive with the state-of-the-art generalized additive model \cite{Schuckers2016}. 2) The model predictions complement the Central Scouting Service (CSS) rank. For example, the tree identifies a group whose average CSS rank is only $107$, but whose median number of games played after $7$ years is $128$, including several Stanley Cup winners. 3) The model tree can highlight the exceptionally strong and weak points of draftees that make them stand out compared to the other players in their group.
Tree models are flexible and can be applied to other prediction problems to discover groups of comparable players as well as predictive models. For example, we can predict future NHL success from past NHL success, similar to Wilson \cite{Wilson2016} who used machine learning models to predict whether a player will play more than $160$ games in the NHL after $7$ years. Another direction is to apply the model to other sports, for example drafting for the National Basketball Association.
\bibliographystyle{alpha}
\section{Introduction}
\label{section_1}
\IEEEPARstart{M}{any-objective} Optimization Problems (MaOPs) refer to optimization tasks involving $m$ (i.e., $m>3$) conflicting objectives to be optimized concurrently~\cite{khare2003performance}. Generally, an MaOP takes the mathematical form in~(\ref{equ_maop})
\begin{equation}
\label{equ_maop}
\left\{
\begin{array}{l}
\textbf{y}=\textbf{f}(\textbf{x})=(f_1(\textbf{x}),\cdots,f_m(\textbf{x})) \\
s.t.~~~\textbf{x}\in \Omega
\end{array}
\right.
\end{equation}
where $\Omega \subseteq \mathbb{R}^n$ is the feasible search space for the decision variables $\textbf{x}=(x_1,\cdots,x_n)^T$, and $\textbf{f}: \Omega \rightarrow \Theta\subseteq \mathbb{R}^m$ is the corresponding objective vector comprising $m$ objectives, which maps the $n$-dimensional decision space $\Omega$ to the $m$-dimensional objective space $\Theta$. Without loss of generality, $\textbf{f}(\textbf{x})$ is assumed to be minimized, since maximization problems can be transformed into minimization problems by the duality principle. Because of the conflicting nature of the objective functions, there is no single perfect solution for $\textbf{f}(\textbf{x})$, but a set of tradeoff solutions which form the Pareto Set (PS) in the decision space and the corresponding Pareto Front (PF) in the objective space.
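For concreteness, the Pareto-dominance relation underlying the PS and PF can be sketched as follows. This is an illustrative helper under the minimization assumption above, not part of any algorithm discussed later:

```python
import numpy as np

def dominates(a, b):
    """True iff objective vector a Pareto-dominates b under minimization:
    a is no worse than b in every objective and strictly better in one."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return bool(np.all(a <= b) and np.any(a < b))

def non_dominated(F):
    """Indices of the non-dominated rows of the N x m objective matrix F;
    for a Pareto-optimal set, their images form the (approximated) PF."""
    F = np.asarray(F, float)
    return [i for i in range(len(F))
            if not any(dominates(F[j], F[i]) for j in range(len(F)) if j != i)]
```

For example, with $m=2$, the vectors $(1,2)$ and $(2,1)$ are mutually non-dominated tradeoffs, while $(2,2)$ is dominated by $(1,2)$.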
Optimization algorithms for addressing an MaOP aim at searching for a set of uniformly distributed solutions which closely approximate the PF. Because MaOPs widely exist in diverse real-world applications, such as policy management in land exploitation with $14$ objectives~\cite{chikumbo2012approximating} and calibration of automotive engines with $10$ objectives~\cite{lygoe2013real}, to name a few, various algorithms for solving MaOPs have been developed. Among these algorithms, the evolutionary paradigms are considerably preferable due to their population-based meta-heuristic characteristics, which yield a set of quality solutions in a single run.
During the past decades, various Multi-Objective Evolutionary Algorithms (MOEAs), such as elitist Non-dominated Sorting Genetic Algorithm (NSGA-II)~\cite{deb2002fast}, advanced version of Strength Pareto Evolutionary Algorithm (SPEA2)~\cite{zitzler2001spea2}, among others, have been proposed to effectively solve Multi-Objective Optimization Problems~(MOPs). Unfortunately, these MOEAs do not scale well with the increasing number of objectives, mainly due to the loss of selection pressure. To be specific, the number of non-dominated solutions in MaOPs accounts for a large proportion of the current population because of the dominance resistance phenomenon~\cite{fonseca1998multiobjective} caused by the curse of dimensionality~\cite{purshouse2007evolutionary}, so that the traditional elitism mechanism based on Pareto-domination cannot effectively differentiate which solutions should survive into the next generation. As a result, the density-based diversity promotion mechanism is considered the sole mechanism for mating and environmental selections~\cite{li2015many}. However, the solutions with good diversity in MaOPs are generally not only distant from each other but also away from the PF. Consequently, the evolution with the solutions generated by the activated diversity promotion is stagnant or even far away from the PF~\cite{wagner2007pareto}. To this end, various Many-Objective Evolutionary Algorithms (MaOEAs) specifically designed for addressing MaOPs have been proposed in recent years.
Generally, these MaOEAs can be divided into four different categories. The first category covers the algorithms employing reference priors to enhance diversity promotion, which in turn improves convergence. For example, the MaOEA using a reference-point-based non-dominated sorting approach (NSGA-III)~\cite{deb2014evolutionary} employs a set of reference vectors to assist the algorithm in selecting solutions which are close to these reference vectors. Yuan \textit{et al.}~\cite{yuan2016new} proposed a reference line-based algorithm which not only adopted a diversity improvement mechanism like that in NSGA-III but also introduced a convergence enhancement scheme by measuring the distance from the origin to the solution projections on the corresponding reference line. In addition, a reference line-based estimation of distribution algorithm was introduced in~\cite{sun2017reference} for explicitly promoting the diversity of an MaOEA. Furthermore, an approach (RVEA) was presented in~\cite{cheng2016reference} to adaptively revise the reference vector positions based on the scales of the objective functions to balance diversity and convergence.
The second category refers to the decomposition-based algorithms, which decompose an MaOP into several single-objective optimization problems, such as the MOEA based on Decomposition (MOEA/D)~\cite{zhang2007moea}, which was initially proposed for solving MOPs but scales well to MaOPs. Specifically, MOEA/D transforms the original MOP/MaOP with $m$ objectives into a group of single-objective optimization problems, and each sub-problem is solved within a neighborhood constrained by its corresponding reference vector. Recently, diverse variants~\cite{yuan2016balancing, wang2014replacement,li2014stable, li2015interrelationship, asafuddoula2015decomposition, gee2015online} of MOEA/D have been proposed to improve the performance further.
The third category is known as the convergence enhancement-based approaches. More specifically, the traditional Pareto dominance comparison methods widely utilized in MOEAs are not effective in discriminating populations with good proximity in MaOPs. A natural way is to modify this comparison principle to promote the selection mechanism. For example, the $\epsilon$-dominance method~\cite{laumanns2002combining} employed a relaxed factor $\epsilon$ to compare the dominance relation between solutions; Pierro \textit{et al}.~\cite{di2007investigation} proposed the preference order ranking approach to replace the traditional non-dominated sorting. Furthermore, the fuzzy dominance methods~\cite{wang2007fuzzy,he2014fuzzy} studied the fuzzification of the Pareto-dominance relation to design the ranking scheme to select promising solutions; the $L$-optimality paradigm was proposed in~\cite{zou2008new} to pick up solutions whose objectives were with the same importance by considering their objective value improvements. In addition, Yang \textit{et al.}~\cite{yang2013grid} proposed the grid-based approach to select the solutions that have the higher priority of dominance, and control the proportion of Pareto-optimal solutions by adjusting the grid size. Meanwhile, Antonio \textit{et al.}~\cite{l2013alternative} alternated the achievement function and the $\epsilon$-indicator method to improve the performance of MOEA in solving MaOPs. In~\cite{li2014shift}, a modification of density estimation, termed as shift-based density estimation, was proposed to make the dominance comparison better suited for solving MaOPs. Furthermore, the favorable convergence scheme was proposed in~\cite{cheng2015many} to improve the selection pressure in mating and environmental selections. Recently, a knee point-based algorithm (KnEA)~\cite{zhang2015knee} was presented as a secondary selection scheme to enhance the selection pressure. 
In summary, these algorithms introduced new comparison methods, designed effective selection mechanisms, or relaxed the original comparison approach to improve the selection pressure in addressing MaOPs.
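As an illustration of the relaxed-comparison idea, an additive variant of the $\epsilon$-dominance relation~\cite{laumanns2002combining} can be sketched as follows. The additive formulation and the default value of $\epsilon$ are our own illustrative choices:

```python
import numpy as np

def eps_dominates(a, b, eps=0.1):
    """Additive epsilon-dominance (minimization): a eps-dominates b when
    shifting a by eps makes it no worse than b in every objective and
    strictly better in at least one. Relaxing dominance this way lets an
    algorithm discard near-duplicates that plain Pareto dominance cannot,
    restoring selection pressure among many incomparable solutions."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return bool(np.all(a - eps <= b) and np.any(a - eps < b))
```

For instance, $(1.0, 1.05)$ does not Pareto-dominate $(1.0, 1.0)$, but it does $\epsilon$-dominate it for $\epsilon = 0.1$, so one of the two nearly identical solutions can be pruned.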
The fourth category is known as the indicator-based methods. For instance, several MOEAs based on the hypervolume (HV) were proposed in~\cite{emmerich2005emo,igel2007covariance,brockhoff2007improving}; however, their major disadvantage was the costly overhead in calculating the HV values, especially when solving MaOPs. To this end, Bader and Zitzler proposed the HypE method with Monte Carlo simulation~\cite{bader2011hype} to estimate the HV value. Consequently, the computational cost was largely lowered compared to its predecessors, whose HV values were calculated exactly. In~\cite{gerstl2011finding}, a $\Delta_p$ indicator-based algorithm ($\Delta_p$-EMOA) was proposed for solving bi-objective optimization problems, and then extended further for tri-objective problems~\cite{trautmann2013finding}. Furthermore, Villalobos and Coello~\cite{rodriguez2012new} integrated the $\Delta_p$ indicator with differential evolution~\cite{storn1995differential} to solve MaOPs with up to $10$ objectives. Recently, an Inverse Generational Distance Plus (IGD$^+$)~\cite{ishibuchi2015modified} indicator-based evolutionary algorithm (IGD$^+$-EMOA) was proposed in~\cite{lopez2016igd+} for addressing MaOPs with no more than $8$ objectives. Basically, the IGD$^+$ indicator is viewed as a variant of the Inverse Generational Distance (IGD) indicator.
Although the MaOEAs mentioned above have experimentally demonstrated promising performance, major issues are easily identified when solving real-world applications. For example, it is difficult to choose the decomposition strategy for the MaOEAs from the second category, which has motivated multiple further variants~\cite{yuan2016balancing,sun2016manifold,li2014stable,sun2017global,asafuddoula2015decomposition, gee2015online}. In addition, the MaOEAs from the first and third categories only highlight one of the two desired characteristics in their designs (i.e., only diversity promotion is explicitly addressed by the MaOEAs from the first category, and only convergence by the third category). However, both diversity and convergence are concurrently desired by MaOEAs. In this regard, performance indicators that are capable of simultaneously measuring diversity and convergence, such as the HV and IGD indicators, are preferred for designing MaOEAs. However, the major issue of HV is its high computational complexity. Although Monte Carlo simulation has been employed to mitigate this adverse impact, the calculation is still impracticable when the number of objectives exceeds $10$~\cite{bader2011hype}, while the calculation of IGD is scalable and free of these deficiencies.
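The scalability of IGD can be seen from a minimal sketch of the indicator: it averages, over a set of reference points sampled from the true PF, the distance to the nearest obtained solution, at a cost linear in the number of objectives. This illustrative implementation uses Euclidean distance:

```python
import numpy as np

def igd(reference_points, solutions):
    """Inverse Generational Distance: mean distance from each reference
    point sampled on the true PF to its nearest obtained solution.
    Cost is O(|R| * |S| * m), so it remains tractable for large m,
    unlike exact hypervolume computation."""
    R = np.asarray(reference_points, float)  # |R| x m points on the true PF
    S = np.asarray(solutions, float)         # |S| x m obtained objective vectors
    # Pairwise Euclidean distances via broadcasting, then the nearest
    # solution for each reference point.
    d = np.linalg.norm(R[:, None, :] - S[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```

A smaller IGD value indicates both better convergence (solutions close to the PF) and better diversity (every PF region has a nearby solution), which is exactly the dual measurement motivating its use here.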
In this paper, an IGD indicator-based Many-Objective Evolutionary Algorithm (MaOEA/IGD) has been proposed for effectively addressing MaOPs, and the contributions are outlined as follows:
\begin{enumerate}
\item A Decomposition-based Nadir Point Estimation method (DNPE) has been presented to estimate the nadir points to facilitate the calculation of IGD indicator. In DNPE, the estimation focuses only on the extreme point areas and transforms the computation of an $m$-objective optimization problem into $m$ single-objective optimization problems. Therefore, less computational cost is required compared to its peer competitors (experiments are demonstrated in Subsection~\ref{section_experiment_nadir}).
\item A comparison scheme for the non-dominated sorting has been designed for improving the convergence of the proposed algorithm. In this scheme, the dominance relations are not obtained by comparisons among all the solutions, but by comparing the solutions to the reference points. Therefore, the computational complexity of the presented comparison scheme is significantly reduced compared to the traditional comparison means, because the number of reference points is generally much smaller than the population size.
\item Three types of proximity distance assignment mechanisms are proposed for the solutions according to their PF rank values, which give solutions with good convergence in the same front higher chances of being selected. Furthermore, these assignment mechanisms collectively assure that the proposed IGD indicator is Pareto compliant.
\item Based on the proposed dominance comparison scheme and the proximity distance assignments, the selection mechanism which is employed for the mating selection and the environmental selection is proposed to concurrently facilitate the convergence and the diversity.
\end{enumerate}
The remainder of this paper is organized as follows. First, related works are reviewed, and the motivation of the proposed DNPE is presented in Section~\ref{section_2}. Then the details of the proposed algorithm are documented in Section~\ref{section_3}. To evaluate the performance of the proposed algorithm in addressing MaOPs, a series of experiments over scalable benchmark test suites is performed against state-of-the-art MaOEAs, and their results are measured by commonly chosen performance metrics and then analyzed in Section~\ref{section_4}. In addition, the performance of the proposed MaOEA/IGD is also demonstrated by solving a real-world application, and the performance of the proposed DNPE in nadir point estimation is investigated against its peer competitors. Finally, the proposed algorithm is concluded and future works are outlined in Section~\ref{section_5}.
\section{Related Works and Motivation}
\label{section_2}
The literature related to nadir point estimation and IGD indicator-based EAs is reviewed in this section. Specifically, the Worst Crowded NSGA-II (WC-NSGA-II)~\cite{deb2006towards} and the Pareto Corner Search Evolutionary Algorithm (PCSEA)~\cite{singh2011pareto} are reviewed and criticized in detail, because the insightful observations of the deficiencies of these two approaches naturally lead to the motivation of the proposed DNPE design. In addition, the IGD$^+$-EMOA is reviewed as well to highlight the utilization of the IGD$^+$ indicator and the sampling of reference points for the calculation of IGD in the proposed MaOEA/IGD. Please note that all the discussions in this section are within the context of the problem formulation in~(\ref{equ_maop}).
\subsection{Nadir Point Estimation Methods}
\label{section_2_1}
According to the literature~\cite{wang2015nadir,deb2008review}, the approaches for estimating the nadir point can be divided into three categories: the surface-to-nadir, edge-to-nadir, and extreme-point-to-nadir schemes. In the surface-to-nadir scheme, the nadir points are constructed from the current Pareto-optimal solutions and updated as the corresponding algorithms evolve towards the PF. MOEAs in~\cite{deb2002fast,ke2013moea,ma2016multiobjective,chen2015evolutionary,li2015interrelationship} and MaOEAs in~\cite{deb2014evolutionary,yuan2015balancing,cheng2016reference} belong to this category. However, these MOEAs have been shown to perform poorly on MaOPs due to the curse of dimensionality~\cite{praditwong2007well}. In addition, the MaOEA-based methods are not suitable for the proposed algorithm because they obtain the nadir point only after the MaOP has been solved, whereas in this paper the nadir point is required beforehand for addressing the MaOP.
The edge-to-nadir scheme covers Marcin and Andrzej's approach~\cite{szczepanski2003application}, the Extremized Crowded NSGA-II (EC-NSGA-II)~\cite{deb2006towards}, and the recently proposed Emphasized Critical Region (ECR) approach~\cite{wang2015nadir}. Specifically, Marcin and Andrzej's approach decomposed an $m$-objective problem into $C^2_m$ sub-problems to estimate the nadir point from the $C^2_m$ edges; its major issues were the poor quality of the found nadir point and the impractical computational complexity beyond three objectives~\cite{wang2015nadir}. EC-NSGA-II modified the crowding distance of NSGA-II by assigning large rank values to the solutions with the minimum or maximum objective values. The ECR emphasized the solutions lying at the edges of the PF (i.e., the critical regions) with the adopted MOEAs. Although EC-NSGA-II and ECR have been reported to be capable of estimating the nadir points of MaOPs, they require a significantly large number of function evaluations~\cite{wang2015nadir}.
The extreme-point-to-nadir approaches employ a direct means to estimate the extreme points, from which the nadir points are derived. One example is the Worst Crowded NSGA-II (WC-NSGA-II)~\cite{deb2006towards}, in which the worst crowded solutions (extreme points) were preferred by ranking their crowding distances with large values. In WC-NSGA-II, it was hoped that the extreme points would be obtained when the evolution terminated. However, emphasizing the extreme points easily caused WC-NSGA-II to lose diversity, which inadvertently affected the convergence in turn. In addition, Singh \textit{et al.} proposed the Pareto Corner Search Evolutionary Algorithm (PCSEA)~\cite{singh2011pareto} to search for the nadir points with the corner-sort ranking method, for MaOPs whose objective values were required to have identical scales.
In addition, there are also various methods not falling into the above categories. For example, Benayoun \textit{et al.} estimated the nadir points with the payoff table~\cite{benayoun1971linear}, in which the $j$-th row denoted the objective values of the solution having the minimum on the $j$-th objective. Other related works were suggested in~\cite{dessouky1986estimates,isermann1988computational,korhonen1997heuristic} for problems assuming a linear relationship between the objectives and the variables. However, most real-world applications are nonlinear in nature.
Because the nadir point estimation is a critical part of the proposed algorithm for solving MaOPs, an approach with a high computational complexity is certainly not preferable. Furthermore, the nadir point is employed for constructing the Utopian PF, from which the reference points of IGD are sampled rather than from the true PF. As a consequence, a highly accurate nadir point is not strictly necessary. Considering the balance between the computational complexity and the estimation accuracy, we propose in this paper a Decomposition-based Nadir Point Estimation method (DNPE) that transforms an $m$-objective MaOP into $m$ single-objective optimization problems to search for the respective $m$ extreme points, from which the nadir point is then derived.
Specifically, because the proposed nadir point estimation method (i.e., the DNPE) is based on the extreme-point-to-nadir scheme, the WC-NSGA-II and the PCSEA, which follow a similar scheme, are detailed further. For the convenience of reviewing the related nadir point estimation methods, several fundamental concepts of MaOPs are given first. Then the WC-NSGA-II and PCSEA are discussed.
\begin{figure}[htp]
\centering
\includegraphics[width=0.8\columnwidth]{all_points}\\
\caption{An example with bi-objective optimization problem to illustrate the ideal point, extreme point, worst point, nadir point, and the PF.}\label{fig_all_points}
\end{figure}
\begin{de}
Generally, there are $m$ extreme points, denoted as $\textbf{y}^{ext}_1$, $\cdots$, $\textbf{y}^{ext}_m$, in an $m$-objective optimization problem, where
$\textbf{y}^{ext}_i=\textbf{f}(\textbf{x}^{ext}_i)$ and $\textbf{x}^{ext}_i = \argmax_{\textbf{x}}~f_i(\textbf{x})$, with $\textbf{x}\in \text{PS}$ and $i \in \{1,\cdots,m\}$.
\end{de}
\begin{de}
\label{definition_nadir_point}
The nadir point is defined as $\textbf{z}^{nad}=\{z_1^{nad}, \cdots, z_m^{nad}\}$, where $z_i^{nad}=f_i(\textbf{x}^{ext}_i)$.
\end{de}
\begin{de}
The worst point is defined as $\textbf{z}^{w}=\{z_1^{w},\cdots,z_m^{w}\}$, where $z_i^w = \text{max}~f_i(\textbf{x})$ and $\textbf{x} \in \Omega$.
\end{de}
\begin{de}
\label{definition_ideal_point}
The ideal point is defined as $\textbf{z}^*=\{z_1^*, \cdots, z_m^*\}$, where $z_i^*=\text{min}~f_i(\textbf{x})$ and $\textbf{x} \in \Omega$.
\end{de}
Furthermore, the ideal point, extreme points, worst point, nadir point, and the PF of a bi-objective optimization problem are plotted in Fig.~\ref{fig_all_points} for an intuitive understanding of their meanings. With these fundamental definitions, two nadir point estimation algorithms of the extreme-point-to-nadir scheme, WC-NSGA-II and PCSEA, are discussed as follows.
WC-NSGA-II was designed based on NSGA-II by modifying its crowding distance assignment. According to the definition of the nadir point in Definition~\ref{definition_nadir_point}, WC-NSGA-II naturally emphasized the solutions with maximal objectives front-wise. Specifically, solutions on a particular non-dominated front were sorted in increasing order of their fitness, and rank values equal to their positions in the ordered list were assigned. Then the solutions with larger rank values were preferred in each generation during the evolution. By this emphasis mechanism, it was hoped that the nadir point would be obtained when the evolution of WC-NSGA-II terminated. However, one major deficiency is that over-emphasizing the solutions with maximal fitness leads to a lack of diversity, which in turn affects the convergence of the generated extreme points, i.e., the generated extreme points in WC-NSGA-II are not necessarily Pareto-optimal.
PCSEA employed corner-sorting to focus on the extreme points during the evolution. Specifically, $2m$ ascending-order lists were maintained during the execution of PCSEA. The first $m$ lists sorted the solutions by each of the $m$ objectives, while the other $m$ lists sorted them by the excluded squared $L_2$ norm associated with each objective, where the excluded squared $L_2$ norm for the $j$-th objective is $\sum_{i=1, i\neq j}^{m}f_i (\textbf{x})^2$. From these $2m$ lists, solutions with smaller rank values, which were equal to their positions in the lists, were selected until no slot was available. Experimental results have shown that PCSEA performs well on MaOPs because it employs corner-sorting instead of the non-dominated sorting that easily leads to the loss of selection pressure. However, corner-sorting can be viewed as minimizing the squared $L_2$ norm of all the objectives, which deteriorates the performance of PCSEA on problems with different objective value scales and non-concave PF shapes. For example, in the bi-objective optimization problem illustrated in Fig.~\ref{fig_pcsea_problem_1}, the arc $AB$ denotes the PF, and points $A$ and $B$ have objective values of different scales. It is clearly observed that if the minimization of the $L_2$ norm over $f_1({\textbf{x}})$ and $f_2({\textbf{x}})$ is emphasized, only the extreme point $A$ would be obtained while the other one (point $B$) would be missed. This deficiency can also be seen in problems with non-concave PF shapes.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{pcsea_different_scale}\\
\caption{A bi-objective example to illustrate the deficiency of Pareto Corner Search Evolutionary Algorithm in addressing the problem with different objective value scales.}\label{fig_pcsea_problem_1}
\end{figure}
Briefly, the major concerns in these two nadir point estimation algorithms are summarized as follows: 1) over-emphasizing the extreme points leads to the loss of diversity, which in turn deteriorates the convergence of the found nadir points, and 2) simultaneously minimizing the objectives does not scale to problems with different objective value scales and non-concave PF shapes. To this end, a natural approach is to 1) decompose the problem to be solved into several single-objective optimization problems, in which diversity is not required, and 2) assign different weights to the objectives. In the proposed DNPE, the $m$ respective extreme points are estimated by decomposing the $m$-objective MaOP into $m$ single-objective problems associated with different weights. Specifically, the $i$-th extreme point estimation takes the form of~(\ref{equ_extrme_point})
\begin{equation}
\label{equ_extrme_point}
\text{min}~|f_i(\textbf{x})| + \lambda \sum_{j=1, j\neq i}^{m}(f_j(\textbf{x}))^2
\end{equation}
where $\lambda$ is a factor greater than $1$ that raises the priority of solving its associated term. In order to better justify our motivation, a bi-objective example is plotted in Fig.~\ref{fig_motivation_extreme_point}, in which $A,B$ are the extreme points, $C,D$ are the worst points, and the shaded region denotes the feasible objective space. To obtain the extreme point on the $f_1$ objective, it is required to minimize $|f_1(\textbf{x})| + \lambda (f_2(\textbf{x}))^2$ according to~(\ref{equ_extrme_point}). Because $\lambda$ is greater than $1$, the term ${(f_2(\textbf{x}))^2}$ is optimized with a higher priority. Consequently, solutions lying on line $BC$ are obtained, among which $|f_1(\textbf{x})|$ is minimized further, and then the extreme point $B$ is obtained.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{our_motivation_extreme_point}\\
\caption{An example of a bi-objective optimization problem, where $A,B$ are the extreme points and $C,D$ are the worst points. The shaded region denotes the feasible objective space.}\label{fig_motivation_extreme_point}
\end{figure}
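To make the priority mechanism in~(\ref{equ_extrme_point}) concrete, the following minimal Python sketch minimizes the scalarized objective on a hypothetical linear bi-objective problem of our own choosing ($f_1=x$, $f_2=1-x$ on $[0,1]$, not taken from the paper); because $\lambda>1$, the $\lambda f_2^2$ term is driven down first and the minimizer lands near the extreme point of $f_1$.

```python
import numpy as np

def extreme_point_objective(f, i, x, lam=10.0):
    """Scalarized objective for the i-th extreme point estimation:
    |f_i(x)| + lambda * sum_{j != i} f_j(x)^2, with lambda > 1."""
    y = np.asarray(f(x), dtype=float)
    other = np.delete(y, i)
    return abs(y[i]) + lam * np.sum(other ** 2)

# Hypothetical bi-objective problem: f1 = x, f2 = 1 - x on [0, 1];
# the extreme point of f1 lies at x = 1, where f2 is minimal.
f = lambda x: (x, 1.0 - x)
xs = np.linspace(0.0, 1.0, 1001)
best = min(xs, key=lambda x: extreme_point_objective(f, 0, x))
# With lam = 10 the trade-off between |f1| and lam * f2^2 places the
# grid minimizer at x = 0.95; increasing lam pushes it towards x = 1.
```

This also illustrates a property worth noting: with finite $\lambda$ the minimizer only approximates the extreme point, and the approximation tightens as $\lambda$ grows.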
\subsection{IGD$^{+}$-EMOA}
Prior to the introduction of IGD$^{+}$-EMOA, it is necessary to clarify the differences between the IGD and IGD$^+$ indicators. For this purpose, we first list their respective mathematical formulations. Then the superiority of the IGD$^+$ indicator is highlighted. Finally, the IGD$^+$-EMOA is discussed in more detail.
Basically, the IGD indicator takes the form of~(\ref{equ_igd_indicator})
\begin{equation}
\label{equ_igd_indicator}
\mathrm{IGD} = \frac{\sum_{p \in p^*}\text{dist}(p,PF)}{|p^*|},
\end{equation}
where $p^*$ denotes a set of reference points for the calculation of IGD, $PF$ denotes the non-dominated solutions generated by the algorithm, $\text{dist}(p,PF)$ denotes the nearest distance from $p$ to the solutions in $PF$, and the distance from $p$ to a solution $y$ in $PF$ is calculated by $d(p,y) = \sqrt{\sum_{j=1}^m(p_j-y_j)^2}$. It has been pointed out in~\cite{ishibuchi2015modified} that IGD cannot differentiate the quality of the generated solutions when they are non-dominated to the solutions in $p^*$; the IGD$^+$ indicator addresses this by changing the calculation of $d(p,y)$ to $\sqrt{\sum_{j=1}^m \text{max}(y_j-p_j, 0)^2}$.
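The difference between the two distance definitions can be sketched in a few lines of NumPy; the data points below are hypothetical, chosen only to illustrate the non-dominated case.

```python
import numpy as np

def igd(ref, approx, plus=False):
    """IGD of approximation set `approx` (n x m) against reference set
    `ref` (|p*| x m); with plus=True, the IGD+ distance
    max(y_j - p_j, 0) replaces the plain coordinate difference."""
    diff = approx[None, :, :] - ref[:, None, :]   # shape (|p*|, n, m)
    if plus:
        diff = np.maximum(diff, 0.0)              # penalize only excess
    d = np.sqrt((diff ** 2).sum(axis=2))          # pairwise distances
    return d.min(axis=1).mean()                   # nearest, then average

# Hypothetical data: one reference point, two candidate solutions.
p_star = np.array([[1.0, 1.0]])
A = np.array([[0.5, 2.0]])   # non-dominated to the reference point
B = np.array([[2.0, 2.0]])   # dominated by the reference point
# Plain IGD for A is sqrt(1.25); IGD+ for A is 1.0, since only the
# excess on f2 counts. For the dominated B, both distances coincide.
```

The `plus=True` branch rewards a solution for being better than the reference point on some objectives, which is exactly why IGD$^+$ can discriminate among mutually non-dominated solutions.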
IGD$^+$-EMOA employed the IGD$^+$ indicator as its selection mechanism. In addition, the $p^*$ in IGD$^+$-EMOA is sampled from an approximate PF. Specifically, it is supposed that the PF can be obtained by solving $y_1^r+\cdots+y_m^r=1$, where $\textbf{y}=\{y_1,\cdots,y_m\}$ is taken from the non-dominated solutions of the current population. However, this approximation of the PF performs badly on MaOPs where multiple local Pareto-optimal fronts exist, which limits IGD$^+$-EMOA to solving MaOPs with no more than $8$ objectives~\cite{lopez2016igd+}.
In summary, the proposed DNPE is motivated by the insightful observations of the deficiencies of WC-NSGA-II and PCSEA. With the help of the estimated nadir point, the Utopian PF is constructed, and the reference points for the calculation in the proposed MaOEA/IGD are sampled from it. Compared to the approximation of the PF in IGD$^+$-EMOA, our proposed MaOEA/IGD is capable of solving problems with many more objectives. Considering the superiority of IGD$^+$, its design principle is employed in the proposed MaOEA/IGD when the generated solutions are non-dominated to the sampled reference points.
\section{Proposed algorithm}
\label{section_3}
In this section, the proposed Inverted Generational Distance indicator-based evolutionary algorithm for addressing many-objective optimization problems (MaOEA/IGD for short) is presented. To be specific, the framework of the proposed algorithm is outlined first. Then the details of each step in the framework are documented. Next, the computational complexity of the proposed algorithm is analyzed. Finally, the mechanisms for promoting the diversity and the convergence of the proposed algorithm are discussed. Note that the proposed algorithm is described within the context formulated by~(\ref{equ_maop}).
\subsection{Framework of the Proposed Algorithm}
\label{section_3_1}
Because the proposed algorithm is based on the IGD indicator, a set of uniformly distributed points generated from the PF is required. However, exact points are difficult to obtain because the analytical form of the PF of a given real-world application is unknown. In the proposed algorithm, a set of reference points, denoted by $p^*$, which are evenly distributed on the Utopian PF, is generated first (Subsection~\ref{section_3_2}). Then the population with the predefined size is randomly initialized in the feasible space, and its fitness is evaluated. Next, the population evolves in pursuit of the optima until the stopping conditions are satisfied. When the evolution terminates, a set of promising solutions, which are expected to be uniformly distributed on the PF with good proximity, is obtained. In order to remedy the shortage caused by using $p^*$ in place of Pareto-optimal solutions with promising diversity for the IGD indicator, the rank of each solution as well as its proximity distances to all the points in $p^*$ are assigned first (Subsection~\ref{section_3_3}) during each generation of the evolution. Then new offspring are generated from their parents, which are selected based on comparisons of their rank values and proximity distances (Subsection~\ref{section_3_4}). Next, the fitness, the ranks, and the proximity distances of the generated offspring to the solutions in $p^*$ are calculated. Finally, a limited number of individuals is selected from the current population to survive into the next generation by the environmental selection (Subsection~\ref{section_3_5}). In summary, the details of the framework are listed in Algorithm~\ref{alg_the_proposed_algorithm}.
\begin{algorithm}
\caption{Framework of the Proposed Algorithm}
\label{alg_the_proposed_algorithm}
$p^* \leftarrow$ Uniformly generate reference points for IGD indicator;\\
\label{alg_framework_line_1}
$P_0 \leftarrow$ Randomly initialize the population;\\
\label{alg_framework_line_2}
Fitness evaluation on $P_0$;\\
\label{alg_framework_line_3}
$t\leftarrow 0$;\\
\While{stopping criteria are not satisfied}
{\label{alg_framework_start_while}
Assign the ranks and the proximity distances for individuals in $P_t$;\\
\label{alg_framework_line_4}
$Q_t \leftarrow$ Generate offspring from $P_t$;\\
\label{alg_framework_line_5}
Fitness evaluation on $Q_t$;\\
\label{alg_framework_line_6}
Assign the rank and the proximity distance for each individual in $ Q_t$;\\
\label{alg_framework_line_7}
$P_{t+1}\leftarrow$ Environmental selection from $Q_t\cup P_t$;\\
\label{alg_framework_line_8}
$t\leftarrow t+1$;\\
}
\label{alg_framework_end_while}
\textbf{Return} $P_{t}$.
\end{algorithm}
\subsection{Uniformly Generating Reference Points}
\label{section_3_2}
In order to obtain $p^*$, the extreme points of the problem $\textbf{f}(\textbf{x})$ to be optimized are calculated first. Then the ideal point and the nadir point are extracted. Next, a set of solutions is uniformly sampled from the constrained $(m-1)$-dimensional hyperplane. Finally, based on the ideal point and the nadir point, these solutions are transformed into the Utopian PF for the usage of the IGD indicator. These steps are listed in Algorithm~\ref{alg_obtain_reference_points}, and the details of obtaining $p^*$ are illustrated as follows.
\begin{algorithm}
\caption{Uniformly Generate $p^*$ for IGD Indicator}
\label{alg_obtain_reference_points}
\KwIn{Optimization problem $\textbf{f}(\textbf{x})=(f_1(\textbf{x}),\cdots,f_m(\textbf{x}))$; the size $k$ of $p^*$.}
\KwOut{$p^*$.}
Estimate the extreme points of $\textbf{f}(\textbf{x})$ with Algorithm~\ref{alg_estimate_extreme_points};\\
$\textbf{z}^*=\{z_1^*, \cdots, z_m^*\} \leftarrow$ Extract the ideal point;\\
$\textbf{z}^{nad}=\{z_1^{nad}, \cdots, z_m^{nad}\} \leftarrow$ Extract the nadir point;\\
$p^* \leftarrow$ Uniformly generate $k$ points from the constrained hyperplane;\\
\label{alg_generate_pf_in_unit_hyperplane}
\For{$i\leftarrow 1$ \rm{\textbf{to}} $k$}{
\label{alg_begin_to_transform_pf}
\For{$j\leftarrow 1$ \rm{\textbf{to}} $m$} {
$(p^*)^i_j$ = $(p^*)^i_j\times (z_j^{nad}-z_j^*) + z_j^*$
}
}
\label{alg_end_to_transform_pf}
\textbf{Return} $p^*$.
\end{algorithm}
To estimate the extreme points, the motivation mentioned in Subsection~\ref{section_2_1} is implemented, and the details are presented in Algorithm~\ref{alg_estimate_extreme_points}. Specifically, the $m$ extreme points are estimated individually based on the $m$ objectives of the optimization problem. Furthermore, to estimate the $i$-th extreme point $\textbf{y}_i^{ext}$, the squared $L_2$ norm of $\{f_k(\textbf{x})|k = 1,\cdots, i-1, i+1,\cdots, m\}$ is calculated first. Then the absolute value of $f_i(\textbf{x})$ is calculated. Mathematically, these two steps are formulated by $f_{l2}(\textbf{x})=\sum_{k=1,k\neq i}^m\|f_k(\textbf{x})\|_2^2$ and $f_{l1}(\textbf{x})=|f_i(\textbf{x})|$, respectively, where $\|\cdot\|_{2}$ is the $L_2$ norm operator and $|\cdot|$ is the absolute value operator. Finally, the extreme point $\textbf{y}_i^{ext}$ is obtained by line~\ref{alg_estimate_extreme_points_optimization_step} of Algorithm~\ref{alg_estimate_extreme_points}, where $\lambda$ is a factor greater than $1$ that highlights the weight of the corresponding term in the optimization. When all the extreme points have been estimated, the nadir point and the ideal point are extracted from them based on Definitions~\ref{definition_nadir_point} and~\ref{definition_ideal_point}, respectively. This is followed by generating a set of uniformly distributed reference points, denoted by $p^*$, from the $(m-1)$-dimensional constrained hyperplane with unit intercepts in the positive orthant. Note that Das and Dennis's method~\cite{das1998normal}, which is widely used by state-of-the-art MaOEAs such as MOEA/D~\cite{zhang2007moea} and NSGA-III~\cite{deb2014evolutionary}, is employed for the generation of $p^*$ (line~\ref{alg_generate_pf_in_unit_hyperplane} of Algorithm~\ref{alg_obtain_reference_points}).
Ultimately, all the points in $p^*$ are transformed into the Utopian PF, which are detailed in lines~\ref{alg_begin_to_transform_pf}-\ref{alg_end_to_transform_pf} of Algorithm~\ref{alg_obtain_reference_points}.
\begin{algorithm}
\caption{Estimate Extreme Points}
\label{alg_estimate_extreme_points}
\KwIn{Optimization problem $\textbf{f}(\textbf{x})=(f_1(\textbf{x}),\cdots,f_m(\textbf{x}))$.}
\KwOut{Extreme points $\{\textbf{y}_1^{ext}, \cdots, \textbf{y}_m^{ext}\}$.}
$\Upsilon\leftarrow\emptyset$;\\
\For{$i\leftarrow 1$ \rm{\textbf{to}} $m$}{
$\Gamma\leftarrow\emptyset$;\\
\For{$k\leftarrow 1$ \rm{\textbf{to}} $m$}{
\If{$k\neq i$}{
$\Gamma\leftarrow$ $\Gamma\cup f_k(\textbf{x})$;
}
}
$\textbf{x}_i^{ext} = \argmin_{\textbf{x}}~\lambda \|\Gamma\|_2^2 + |f_i(\textbf{x})|$;\\
$\textbf{y}_i^{ext}\leftarrow f_i(\textbf{x}_i^{ext})$;\\
\label{alg_estimate_extreme_points_optimization_step}
$\Upsilon\leftarrow$ $\Upsilon\cup \textbf{y}_i^{ext}$;
}
\textbf{Return} $\Upsilon$.
\end{algorithm}
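The reference-point generation and rescaling steps of Algorithm~\ref{alg_obtain_reference_points} can be sketched compactly in Python; the ideal and nadir points below are hypothetical placeholders, and the simplex-lattice construction follows the standard stars-and-bars formulation of Das and Dennis's design.

```python
import numpy as np
from itertools import combinations

def das_dennis(m, H):
    """Das and Dennis's simplex-lattice design: all points whose
    coordinates are multiples of 1/H and sum to 1 (stars and bars)."""
    pts = []
    for c in combinations(range(H + m - 1), m - 1):
        c = (-1,) + c + (H + m - 1,)
        pts.append([(c[i + 1] - c[i] - 1) / H for i in range(m)])
    return np.array(pts)

def to_utopian_pf(p_star, z_ideal, z_nadir):
    """Rescale unit-simplex points between the ideal and nadir points,
    as in the inner loop of Algorithm 2: p <- p * (z_nad - z*) + z*."""
    z_ideal, z_nadir = np.asarray(z_ideal), np.asarray(z_nadir)
    return p_star * (z_nadir - z_ideal) + z_ideal

p = das_dennis(3, 4)   # C(4 + 3 - 1, 3 - 1) = 15 reference points
q = to_utopian_pf(p, [0.0, 0.0, 0.0], [2.0, 4.0, 6.0])
```

With $m$ objectives and $H$ divisions, the design yields $\binom{H+m-1}{m-1}$ points, which is why the number of reference points is controlled jointly by $m$ and $H$.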
\subsection{Assigning Ranks and Proximity Distances}
\label{section_3_3}
When $p^*$ has been generated, the population is randomly initialized in the feasible search space, and then the fitness of each individual is evaluated. Next, the rank value and the proximity distances of each individual are assigned. Note that the rank values are used to distinguish the proximity of the solutions to the Utopian PF from the viewpoint of the reference points, while the proximity distances are utilized to indicate which individuals have better convergence and diversity within the sub-population of solutions sharing the same rank value. More details are discussed in Subsection~\ref{section_3_7}.
Particularly, three rank values, denoted by $r_1$, $r_2$, and $r_3$, exist in the proposed algorithm. Specifically, an individual $s$ is ranked according to the following definitions.
\begin{de}
\label{definition_r1}
Individual $s$ is ranked as $r_1$, if it dominates at least one solution in $p^*$.
\end{de}
\begin{de}
\label{definition_r2}
Individual $s$ is ranked as $r_2$, if it is non-dominated to all the solutions in $p^*$.
\end{de}
\begin{de}
\label{definition_r3}
Individual $s$ is ranked as $r_3$, if it is dominated by all the solutions in $p^*$, or dominated by a part of solutions in $p^*$ but non-dominated to the remaining solutions.
\end{de}
With Definitions~\ref{definition_r1}, \ref{definition_r2}, and \ref{definition_r3}, it can be concluded that the Pareto-optimal solutions all have rank value $r_1$, $r_2$, or $r_3$ when the PF is convex, a hyperplane, or concave, respectively\footnote{This is considered in the context of a minimization problem with a continuous PF, and the extreme points are excluded from the Pareto-optimal solutions.}. To be specific, if the PF of a minimization problem is a hyperplane, the Utopian PF is obviously equivalent to the PF. Consequently, the Pareto-optimal solutions lying on the PF are all non-dominated to the reference points sampled from the Utopian PF. Based on Definition~\ref{definition_r2}, these Pareto-optimal solutions are ranked $r_2$. The same reasoning holds for the Pareto-optimal solutions ranked $r_1$ when the PF is convex and those ranked $r_3$ when the PF is concave.
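Under Definitions~\ref{definition_r1}--\ref{definition_r3}, the rank of an objective vector follows directly from pairwise dominance checks against $p^*$; a minimal sketch for a minimization problem, with a hypothetical single reference point, is shown below.

```python
import numpy as np

def dominates(a, b):
    """Pareto dominance for minimization: a dominates b."""
    return bool(np.all(a <= b) and np.any(a < b))

def rank_of(y, p_star):
    """Rank per the three definitions: 1 (r1) if y dominates at least
    one reference point, 2 (r2) if non-dominated to all of them,
    3 (r3) otherwise."""
    if any(dominates(y, p) for p in p_star):
        return 1
    if any(dominates(p, y) for p in p_star):
        return 3
    return 2

p_ref = np.array([[1.0, 1.0]])   # hypothetical reference point
```

Since reference points sampled from the Utopian PF are mutually non-dominated, a solution cannot simultaneously dominate one of them and be dominated by another, so checking $r_1$ before $r_3$ is safe.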
Based on the conclusion mentioned above, the proximity distances of each individual in the population are calculated according to its rank. For convenience, it is assumed that there are $k$ solutions in $p^*$ and $q$ individuals in the current population; consequently, there are $q\times k$ proximity distances. Let $d^i_j$ denote the proximity distance of $\textbf{f}(\textbf{x}^i)=(f_1(\textbf{x}^i),\cdots,f_m(\textbf{x}^i))$ ($\textbf{x}^i$ refers to the $i$-th individual) to the $j$-th point $(\textbf{p}^*)^j=\left((p^*)^j_1,\cdots,(p^*)^j_m\right)$, where $i\in\{1,\cdots,q\}$ and $j\in\{1,\cdots, k\}$. Because each individual has multiple proximity distances, the corresponding minimal proximity distance is employed when two individuals are compared by their proximity distances. The proximity distance assignment is used to differentiate the convergence of individuals with the same rank value when prior knowledge of the PF is unavailable; hence, the rank $r^i$ of $\textbf{x}^i$ is confirmed first in order to calculate $d^i_j$. Particularly, the proximity distance assignment is designed as follows. If $r^i$ is equal to $r_3$, $d^i_j$ is set to the Euclidean distance between $\textbf{f}(\textbf{x}^i)$ and $(\textbf{p}^*)^j$; if $r^i$ is equal to $r_1$, $d^i_j$ is set to the negative of that Euclidean distance; if $r^i$ is equal to $r_2$, $d^i_j$ is calculated by~(\ref{equ_calculate_proximity_distance_r2}).
\begin{equation}
\label{equ_calculate_proximity_distance_r2}
d^i_j = \sqrt{\sum_{l=1}^m \text{max}(f_l(\textbf{x}^i)-(p^*)^j_l, 0)^2}
\end{equation}
For an intuitive understanding, an example is illustrated in Fig.~\ref{fig_igd_plus_comparison} to present the motivation of the proximity distance assignment for individuals ranked $r_2$. In Fig.~\ref{fig_igd_plus_comparison}, the black solid circles refer to the reference points, and the black triangles marked by $1$, $2$, and $3$ refer to the individuals $\textbf{x}^1$, $\textbf{x}^2$, and $\textbf{x}^3$, which have rank $r_2$ (these three individuals are non-dominated to the reference points). In this situation, it is clearly shown that the individual $\textbf{x}^1$ has the smallest minimal proximity distance if the Euclidean distance metric is employed (the minimal proximity distances of individuals $\textbf{x}^1$, $\textbf{x}^2$, and $\textbf{x}^3$ are $2.2361$, $2.5495$, and $2.8284$, respectively). Consequently, the Euclidean distance measurement used for the individuals with ranks $r_1$ and $r_3$ cannot be utilized in this situation to select the desirable individual $\textbf{x}^2$, which has the most promising convergence to the PF.
However, if the proximity distance quantified by~(\ref{equ_calculate_proximity_distance_r2}) is employed, it is clear that the individual $\textbf{x}^2$ has the smallest minimal proximity distance (the minimal proximity distances of individuals $\textbf{x}^1$, $\textbf{x}^2$, and $\textbf{x}^3$ are $2$, $0.5$, and $2$, respectively), which satisfies the motivation of the proximity distance assignment that a smaller value implies better convergence.
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth]{igd_plus_comparison}\\
\caption{A bi-objective optimization problem is illustrated to show the motivation of the proximity distance assignment for the individuals with rank value $r_2$.}\label{fig_igd_plus_comparison}
\end{figure}
Note that the proximity distance assignment for the individuals with rank $r_2$ is also employed in the IGD$^+$ indicator~\cite{ishibuchi2015modified}.
In summary, a smaller proximity distance reveals better proximity when the exact PF of the problem to be optimized is unknown. Furthermore, the proximity distance assignment is presented in Algorithm~\ref{alg_assign_proximity_distance}.
\begin{algorithm}
\caption{Assign Proximity Distance}
\label{alg_assign_proximity_distance}
\KwIn{Current population $P_t$ with size $q$; reference points $p^*$ with size $k$.}
\KwOut{Proximity distances matrix $d$.}
\For{$i\leftarrow1$ \rm{\textbf{to}} $q$}{
$\textbf{x}^i\leftarrow$ $P_t^i$;\\
$r\leftarrow$ Calculate the rank of $\textbf{x}^i$;\\
\For{$j\leftarrow1$ \rm{\textbf{to}} $k$}{
\uIf{$r = r_1$}{
$d^i_j\leftarrow$ $-\sqrt{\sum_{l=1}^{m}(f_l(\textbf{x}^i)-(p^*)^j_l)^2}$\;
}
\uElseIf{$r = r_2$}{
$d^i_j\leftarrow$ $\sqrt{\sum_{l=1}^{m}\text{max}(f_l(\textbf{x}^i)-(p^*)^j_l, 0)^2}$\;
}
\Else{
$d^i_j\leftarrow$ $\sqrt{\sum_{l=1}^{m}(f_l(\textbf{x}^i)-(p^*)^j_l)^2}$\;
}
}
}
\textbf{Return} $d$.
\end{algorithm}
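The three branches of Algorithm~\ref{alg_assign_proximity_distance} reduce to one small function; the example values in the comment are illustrative and not taken from Fig.~\ref{fig_igd_plus_comparison}.

```python
import numpy as np

def proximity_distance(y, p, rank):
    """Proximity distance of objective vector y to reference point p
    (Algorithm 4): negative Euclidean distance for rank r1 (=1), the
    IGD+-style distance for rank r2 (=2), plain Euclidean for r3 (=3)."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    if rank == 2:
        return float(np.sqrt(np.sum(np.maximum(y - p, 0.0) ** 2)))
    d = float(np.sqrt(np.sum((y - p) ** 2)))
    return -d if rank == 1 else d

# Example: for y = (1.5, 1.0) and p = (1, 1), the r2 distance is 0.5
# (only the excess on f1 counts); for y = (0.5, 1.0), which dominates
# p, the r1 distance is -0.5, so dominating individuals rank lowest.
```

The sign flip for $r_1$ is what makes a single "smaller is better" comparison valid across all three rank classes.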
\subsection{Generating Offspring}
\label{section_3_4}
The process of generating offspring in the proposed algorithm is similar to that in genetic algorithms, except for the selection of the individuals that fill up the gene pool from which the parent solutions are drawn. In this subsection, the process of generating offspring is elaborated in Steps~\ref{step_generate_offspting_1}-\ref{step_generate_offspting_5}. Then, the details of filling up the gene pool are presented in Algorithm~\ref{alg_filling_up_gene_pool}.
\begin{enumerate}[Step 1:]
\item Select solutions from the current population to fill up the gene pool, until it is full.
\label{step_generate_offspting_1}
\item Select two parent solutions from the gene pool and remove them from the gene pool.
\label{step_generate_offspting_2}
\item Employ the simulated binary crossover (SBX) operator to generate offspring from the selected parent solutions.
\item Employ the polynomial mutation operator to mutate the generated offspring.
\label{step_generate_offspting_4}
\item Repeat Steps~\ref{step_generate_offspting_2}-\ref{step_generate_offspting_4} until the gene pool is empty.
\label{step_generate_offspting_5}
\end{enumerate}
\begin{algorithm}[htp]
\caption{Filling Up the Gene Pool}
\label{alg_filling_up_gene_pool}
\KwIn{Current population $P_t$; Gene pool size $g$.}
\KwOut{Gene pool $G$.}
$G\leftarrow$ $\emptyset$;\\
\While{the size of $G$ is less than $g$}{
$\{\textbf{x}^1, \textbf{x}^2\} \leftarrow$ Randomly select two individuals from $P_t$;\\
\label{alg_filling_up_gene_pool_line1}
$r_{\textbf{x}^1} \leftarrow$ Obtain the rank of ${\textbf{x}^1}$;\\
\label{alg_filling_up_gene_pool_line2}
$r_{\textbf{x}^2} \leftarrow$ Obtain the rank of ${\textbf{x}^2}$;\\
$\textbf{d}^1 \leftarrow$ Obtain the proximity distances of ${\textbf{x}^1}$;\\
$\textbf{d}^2 \leftarrow$ Obtain the proximity distances of ${\textbf{x}^2}$;\\
\label{alg_filling_up_gene_pool_line5}
\uIf{$r_{\textbf{x}^1}<r_{\textbf{x}^2}$}{
\label{alg_filling_up_gene_pool_line6}
$G\leftarrow$ $G\cup\textbf{x}^1$;\\
}\uElseIf{$r_{\textbf{x}^1}>r_{\textbf{x}^2}$}{
$G\leftarrow$ $G\cup\textbf{x}^2$;\\
\label{alg_filling_up_gene_pool_line8}
}\Else{
\uIf{$min(\textbf{d}^1)<min(\textbf{d}^2)$}{
\label{alg_filling_up_gene_pool_line9}
$G\leftarrow$ $G\cup\textbf{x}^1$;\\
}\uElseIf{$min(\textbf{d}^1)>min(\textbf{d}^2)$}{
$G\leftarrow$ $G\cup\textbf{x}^2$;\\
\label{alg_filling_up_gene_pool_line12}
}\Else{
\label{alg_filling_up_gene_pool_line13}
$\textbf{x} \leftarrow$ Randomly select one individual from $\{\textbf{x}^1, \textbf{x}^2\}$;\\
$G\leftarrow$ $G\cup\textbf{x}$;\\
\label{alg_filling_up_gene_pool_line15}
}
}
}
\textbf{Return} $G$.
\end{algorithm}
The binary tournament selection~\cite{miller1995genetic} approach is employed in Algorithm~\ref{alg_filling_up_gene_pool} to select individuals from the current population to fill up the gene pool. Specifically, two individuals, denoted by $\textbf{x}^1$ and $\textbf{x}^2$, are randomly selected from the current population first (line~\ref{alg_filling_up_gene_pool_line1}). Then, their ranks and proximity distances are obtained (lines~\ref{alg_filling_up_gene_pool_line2}-\ref{alg_filling_up_gene_pool_line5}). Next, the individual with the smaller rank value is copied into the gene pool (lines~\ref{alg_filling_up_gene_pool_line6}-\ref{alg_filling_up_gene_pool_line8}). If $\textbf{x}^1$ and $\textbf{x}^2$ have the same rank value, the individual with the smaller minimal proximity distance is selected (lines~\ref{alg_filling_up_gene_pool_line9}-\ref{alg_filling_up_gene_pool_line12}). Otherwise, one of $\textbf{x}^1$ and $\textbf{x}^2$ is randomly selected as a potential parent solution and put into the gene pool (lines~\ref{alg_filling_up_gene_pool_line13}-\ref{alg_filling_up_gene_pool_line15}). When the gene pool is full, two parent solutions are repeatedly selected at random from the gene pool to generate offspring and then removed from it, until the gene pool is empty. Note that the SBX~\cite{deb1994simulated} and polynomial mutation~\cite{deb2001multi} operators are employed for the crossover and mutation operations in the proposed algorithm. It has been reported that two parent solutions selected from a large search space do not necessarily generate promising offspring~\cite{purshouse2007evolutionary,adra2011diversity}. Generally, two ways can be employed to address this problem.
One is the mating restriction method, which limits offspring to be generated by neighboring solutions~\cite{deb1989investigation}. The other is to use SBX with a large distribution index~\cite{deb2014evolutionary}. In the proposed algorithm, the latter is adopted due to its simplicity.
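The tournament procedure above can be sketched as follows. This is a minimal Python sketch; the \texttt{rank} and \texttt{min\_proximity} lookups are hypothetical stand-ins for the values computed in the corresponding lines of Algorithm~\ref{alg_filling_up_gene_pool}:

```python
import random

def binary_tournament(population, rank, min_proximity):
    """Pick one parent: the smaller rank wins; ties are broken by the
    smaller minimal proximity distance; remaining ties are broken at
    random, mirroring lines 6-15 of the gene-pool algorithm."""
    a, b = random.sample(range(len(population)), 2)
    if rank[a] != rank[b]:
        winner = a if rank[a] < rank[b] else b
    elif min_proximity[a] != min_proximity[b]:
        winner = a if min_proximity[a] < min_proximity[b] else b
    else:
        winner = random.choice([a, b])
    return population[winner]

def fill_gene_pool(population, rank, min_proximity, pool_size):
    # Repeat the tournament until the gene pool is full.
    return [binary_tournament(population, rank, min_proximity)
            for _ in range(pool_size)]
```

Repeating the tournament $N$ times fills a gene pool of size $N$, matching the loop structure of Algorithm~\ref{alg_filling_up_gene_pool}.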
\subsection{Environmental Selection}
\label{section_3_5}
When the offspring have been generated, the size of the current population is greater than the number of available slots. As a consequence, the environmental selection takes effect to select a set of representatives to survive to the next generation. In summary, the individuals are selected from the current population according to their assigned rank values and proximity distances. For convenience, it is assumed that there are $N$ available slots, that the selected individuals are stored in $P_{t+1}$, and that the individuals with ranks $r_1$, $r_2$, and $r_3$ are grouped into the non-dominated fronts $F_{r_1}$, $F_{r_2}$, and $F_{r_3}$, respectively. To be specific, the counter $i$ is increased by one until $\sum_{j=1}^i|F_{r_j}| > N$, where $|\cdot|$ denotes the cardinality of a set. If $\sum_{j=1}^{i-1}|F_{r_j}|$ is equal to $N$, the individuals in $F_{r_1},\cdots, F_{r_{i-1}}$ are copied into $P_{t+1}$ and the environmental selection terminates. Otherwise, the individuals in $F_{r_1},\cdots, F_{r_{i-1}}$ are copied into $P_{t+1}$ first, and then $A=N-\sum_{j=1}^{i-1}|F_{r_j}|$ individuals are selected from $F_{r_i}$.
In summary, the details of the environmental selection are presented in Algorithm~\ref{alg_environment_selection}. Furthermore, line~\ref{alg_select_A_individuals} is implemented by finding the $A$ individuals that have the minimal total proximity distance to the $A$ reference points $r$ (line~\ref{alg_select_A_reference_points}), which is a linear assignment problem (LAP). In the proposed algorithm, the Hungarian method~\cite{jonker1986improving} is employed to solve this LAP.
\begin{algorithm}
\caption{Environmental selection}
\label{alg_environment_selection}
\KwIn{$F_{r_1}$, $F_{r_2}$, and $F_{r_3}$; Available slots size $N$.}
\KwOut{$P_{t+1}$.}
$P_{t+1}\leftarrow$ $\emptyset$;\\
$i\leftarrow$ $1$;\\
\While{$|P_{t+1}| + |F_{r_i}| < N$}
{
$P_{t+1}\leftarrow$ $P_{t+1} \cup F_{r_i}$;\\
$i\leftarrow$ $i+1$;\\
}
\uIf{$|P_{t+1}| + |F_{r_i}| = N$}{
$P_{t+1}\leftarrow$ $P_{t+1} \cup F_{r_i}$;\\
}\Else{
$r\leftarrow$ Uniformly select $A=N-\sum_{j=1}^{i-1}|F_{r_j}|$ reference points from $p^*$;\\
\label{alg_select_A_reference_points}
$R\leftarrow$ Select $A$ individuals from $F_{r_i}$;\\
\label{alg_select_A_individuals}
$P_{t+1}\leftarrow$ $P_{t+1}\cup R$;
}
\textbf{Return} $P_{t+1}$.
\end{algorithm}
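Selecting the $A$ individuals in line~\ref{alg_select_A_individuals} is a linear assignment problem. For illustration only, the following sketch solves a tiny instance by brute-force enumeration of assignments; the Hungarian method employed in the proposed algorithm obtains the same optimum in $O(A^3)$ time:

```python
from itertools import permutations

def assign_min_total(dist):
    """Solve a tiny linear assignment problem by brute force.
    dist[i][j] is the proximity distance from candidate i in the
    critical front to reference point j; returns, for each of the A
    reference points, the candidate assigned to it, minimising the
    total distance (one distinct candidate per reference point)."""
    n_cand, n_ref = len(dist), len(dist[0])
    best, best_cost = None, float("inf")
    for perm in permutations(range(n_cand), n_ref):
        cost = sum(dist[i][j] for j, i in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return list(best), best_cost
```

Brute force is exponential and only serves to make the objective explicit; in practice a polynomial LAP solver such as the Hungarian method is required.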
\subsection{Computational Complexity}
\label{section_3_6}
In this subsection, the computational complexity of the proposed algorithm is analyzed. For convenience, it is assumed that the problem to be optimized has $m$ objectives and $n$ decision variables, that $N$ solutions are desired by the decision-makers, and that the complexity is analyzed in the context of Algorithm~\ref{alg_the_proposed_algorithm}. To estimate each extreme point, a genetic algorithm is employed, with SBX and polynomial mutation as the genetic operators. Furthermore, it is assumed that the population size for estimating extreme points is set to $N$ and the number of generations to $t_1$. Consequently, the total computational cost of uniformly generating reference points for the IGD indicator (line~\ref{alg_framework_line_1}) is $O(t_1m^2N)$. Furthermore, lines~\ref{alg_framework_line_2} and~\ref{alg_framework_line_3} require $O(nN)$ and $O(mN)$ computations, respectively. Because the number of reference points is equal to that of the desired solutions, the computational complexities of assigning ranks and proximity distances in line~\ref{alg_framework_line_4} are $O(mN^2)$ and $O(nN^2)$, respectively. Furthermore, generating offspring (line~\ref{alg_framework_line_5}) needs $O(\frac{N}{2}(n+n))$ computations because the size of the gene pool is set to $N$. Since only the fitness, ranks, and proximity distances of the generated offspring need to be calculated, lines~\ref{alg_framework_line_6} and~\ref{alg_framework_line_7} consume $O(\frac{N}{2}m)$ and $O(\frac{N}{2}Nm)$ + $O(\frac{N}{2}Nn)$, respectively. In the environmental selection, the best-case computational complexity is $O(N)$, while the worst case is $O(N^3)$, given that $N$ individuals are linearly assigned to the reference points. Furthermore, it is commonly the case that $N$ is greater than $n$, and that $N \gg m$ in MaOPs.
Therefore, lines~\ref{alg_framework_start_while}-\ref{alg_framework_end_while} overall need $O(tN^3)$ computations, where $t$ denotes the number of generations. In summary, the computational complexity of the proposed algorithm is $O(tN^3)$, where $t$ is the number of generations and $N$ is the number of solutions.
\subsection{Discussions}
\label{section_3_7}
Loss of selection pressure is a major issue preventing traditional MOEAs from effectively solving MaOPs, because traditional domination comparisons between individuals yield a large proportion of non-dominated solutions. In the proposed algorithm, the dominance relations of all the individuals are instead determined with respect to the reference points employed for the calculation of the IGD indicator. However, exact reference points uniformly distributed on the PF are difficult to obtain. For this purpose, a set of points evenly distributed on the Utopian PF is sampled. Furthermore, in order to address the inefficiency introduced by these approximated reference points, three proximity distances are designed according to the dominance relations of the individuals to the approximated reference points. The intent is that a smaller proximity distance indicates better proximity of the corresponding individual. Specifically, if the solutions with rank $r_2$ shared the distance calculation used for those with ranks $r_1$ or $r_3$, the convergence pressure of the proposed algorithm would be lost~\cite{ishibuchi2015modified}.
When the number of solutions to be selected is larger than the number of available slots, the representatives are chosen from a global view in the proposed algorithm. For convenience of understanding, assume first that $a$ representatives need to be selected from $b$ solutions, where $b>a$. Then the selection of the $a$ representatives is considered simultaneously through the calculation of the IGD indicator, as opposed to choosing them one by one. Simultaneously selecting $a$ representatives involves a linear assignment problem. Through this linear assignment, each selected reference point is associated with one distinct individual, which improves the diversity and the convergence simultaneously; this conclusion can also be found in the literature~\cite{berenguer2015evolutionary,lopez2016igd+,rodriguez2012new}. If the individuals were instead selected by repeatedly finding the individual with the least distance to the reference points, the diversity would not necessarily be guaranteed.
\section{Experiments}
\label{section_4}
To evaluate the performance of the proposed algorithm in solving MaOPs, a series of experiments is performed. Particularly, NSGA-III~\cite{deb2014evolutionary}, MOEA/D~\cite{zhang2007moea}, HypE~\cite{bader2011hype}, RVEA~\cite{cheng2016reference}, and KnEA~\cite{zhang2015knee} are selected as the state-of-the-art peer competitors. Although IGD$^{+}$-EMOA can be viewed as a peer algorithm based on the IGD indicator, it is only capable of solving MaOPs with no more than $8$ objectives. As a consequence, IGD$^{+}$-EMOA is excluded from the list of peer competitors in our experiments.
The remainder of this section is organized as follows. First, the selected benchmark problems used in the experiments are introduced. Then, the chosen performance metric is given to measure the quality of the approximate Pareto-optimal solutions generated by the competing algorithms. Next, the parameter settings employed in all the compared algorithms are listed, and the experimental results measured by the considered performance metric are presented and analyzed. Finally, the performance of the proposed algorithm in solving a real-world MaOP is shown (in Section III of the Supplemental Materials), and the performance of the proposed DNPE in estimating the nadir point is empirically investigated.
\subsection{Benchmark Test Problems}
The widely used scalable test problems DTLZ1-DTLZ7 from the DTLZ benchmark test suite~\cite{deb2005scalable} and WFG1-WFG9 from the WFG benchmark test suite~\cite{huband2006review} are employed in our experiments. Specifically, each objective function in a given $m$-objective DTLZ test problem has $n=k+m-1$ decision variables, where $k$ is set to $5$ for DTLZ1, $10$ for DTLZ2-DTLZ6, and $20$ for DTLZ7. Moreover, each objective function of a given problem in the WFG test suite has $n=k+l$ decision variables, where $k$ is set to $(m-1)$ and $l$ is set to $20$ following the suggestion in~\cite{huband2006review}.
\subsection{Performance Metric}
The widely used hypervolume (HV)~\cite{zitzler1999multiobjective}, which simultaneously measures the convergence and diversity of MaOEAs, is selected as the performance metric in these experiments. Specifically, the reference points for the calculation of HV are set to $\{1,\cdots,1\}$ for DTLZ1, $\{2,\cdots,2\}$ for DTLZ2-DTLZ6, and $\{3,\cdots,2m+1\}$ for DTLZ7 as well as WFG1-WFG9. Please note that solutions dominated by the predefined reference points are discarded from the calculation of HV, as they contribute no volume. Because the computational cost increases significantly as the number of objectives grows, Monte Carlo simulation~\cite{bader2011hype}\footnote{The source code is available at:~\url{http://www.tik.ee.ethz.ch/sop/download/supplementary/hype/}.} is applied for the calculation when $m\geq 10$; otherwise, the exact approach proposed in~\cite{while2012fast} is utilized\footnote{The source code is available at:~\url{http://www.wfg.csse.uwa.edu.au/hypervolume/}.}. In our experiments, all the HV values are normalized to $[0,1]$ by dividing by the HV value of the hyperbox spanned by the origin and the corresponding reference point. Moreover, a higher HV value indicates a better performance of the corresponding MaOEA.
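As a rough illustration of the Monte Carlo route (a sketch under the assumptions of minimization and an ideal point at the origin, not the cited implementation): the normalized HV is the fraction of the box spanned by the origin and the reference point that is dominated by the solution set, so sampling uniformly in that box and counting dominated samples yields a value in $[0,1]$ directly.

```python
import random

def hv_monte_carlo(front, ref, n_samples=100_000, seed=1):
    """Estimate the normalised hypervolume of `front` (minimisation)
    with respect to reference point `ref`: sample points uniformly in
    the box [0, ref] and return the fraction dominated by some solution.
    Dividing by the box volume is implicit, so the result is in [0, 1];
    solutions that do not dominate `ref` contribute nothing."""
    rng = random.Random(seed)
    m = len(ref)
    hits = 0
    for _ in range(n_samples):
        p = [rng.uniform(0, r) for r in ref]
        if any(all(s[i] <= p[i] for i in range(m)) for s in front):
            hits += 1
    return hits / n_samples
```

The accuracy grows only as $O(1/\sqrt{\text{n\_samples}})$, which is why the number of sampling points matters for HV-driven algorithms such as HypE.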
\subsection{Parameter Settings}
In this subsection, the baseline parameter settings which are adopted by all the compared MaOEAs are declared first. Then the special parameter settings required by each MaOEA are provided.
\subsubsection{Number of Objectives} Test problems with $8$, $15$, and $20$ objectives are considered in the experiments because the proposed algorithm aims specifically at effectively solving MaOPs.
\subsubsection{Number of Function Evaluations and Stop Criterion} All compared algorithms are individually executed for $30$ independent runs. The maximum number of function evaluations for each compared MaOEA in one independent run is set to $2.3\times 10^6$, $4.3\times 10^6$, and $5.5\times 10^6$ for $8$, $15$, and $20$ objectives, respectively, which is employed as the termination criterion. Note that these settings follow the convention that the maximum number of generations for MaOEAs on problems with more than $10$ objectives is generally in the order of $10^3$ (the number of generations set here is approximately $1,200$). Because the proposed algorithm includes both the nadir point estimation phase and the optimization phase for MaOPs, the function evaluations specified here are shared by these two phases for a fair comparison.
\subsubsection{Statistical Approach} Because of the stochastic nature of the peer evolutionary algorithms, all the results, measured by the performance metric over $30$ independent runs for each competing algorithm, are statistically evaluated. In this experiment, the Mann-Whitney-Wilcoxon rank-sum test~\cite{steel1997principles} with a $5\%$ significance level is employed for this purpose.
\begin{table}[ht]
\caption{Configurations of the two-layer setting.}
\label{two_layers_setting}
\begin{center}
\begin{tabular}{p{0.05\columnwidth}<{\centering}|p{0.3\columnwidth}<{\centering}|p{0.4\columnwidth}<{\centering}}
\hline
$m$ & \# of divisions & population size \\
\hline
8 & 3,3 & 240\\
15 & 2,2 & 240\\
20 & 2,1 & 230\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Population Size} In principle, the population size can be arbitrarily assigned. However, the population sizes of the proposed algorithm, NSGA-III, MOEA/D, and RVEA depend on the number of associated reference points or reference vectors. For a fair comparison, the population sizes of HypE and KnEA are set to be the same as those of the others. In the experiments, the reference points and reference vectors are sampled with the two-layer method~\cite{deb2014evolutionary}, and the configurations are listed in Table~\ref{two_layers_setting}.
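The entries of Table~\ref{two_layers_setting} follow from the Das-Dennis construction underlying the two-layer method: one layer with $H$ divisions on an $m$-objective simplex contains $\binom{H+m-1}{m-1}$ points. A minimal sketch (the enumeration is the standard stars-and-bars one, not the implementation used in the experiments):

```python
from math import comb
from itertools import combinations

def das_dennis_count(m, H):
    """Number of Das-Dennis reference points on the m-objective simplex
    with H divisions: C(H + m - 1, m - 1)."""
    return comb(H + m - 1, m - 1)

def two_layer_size(m, H_boundary, H_inside):
    """Population size implied by two-layer sampling: one boundary
    layer plus one inside layer."""
    return das_dennis_count(m, H_boundary) + das_dennis_count(m, H_inside)

def das_dennis_points(m, H):
    """Enumerate one layer's points via stars and bars; every point has
    non-negative coordinates that sum to one."""
    pts = []
    for bars in combinations(range(H + m - 1), m - 1):
        prev, coords = -1, []
        for b in bars:
            coords.append((b - prev - 1) / H)
            prev = b
        coords.append((H + m - 2 - prev) / H)
        pts.append(coords)
    return pts
```

For instance, $m=8$ with divisions $(3,3)$ gives $\binom{10}{7}+\binom{10}{7}=120+120=240$, matching the first row of Table~\ref{two_layers_setting}.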
\begin{table*}[!htb]
\caption{HV results of MaOEA/IGD against NSGA-III, MOEA/D, HypE, RVEA, and KnEA over DTLZ1-DTLZ7 with $8$-, $15$-, and $20$-objective.}
\label{hv_results_on_dtlz1-4}
\begin{center}
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\hline
& & &MaOEA/IGD&NSGA-III&MOEA/D&HypE&RVEA&KnEA\\
\hline
\multirow{6}{*}{\rotatebox{90}{\textbf{Linear}}}&\multirow{3}{*}{DTLZ1}
&8&\textbf{0.9998(2.93E-4)}&0.9964(6.12E-4)(+)&0.9996(3.52E-5)(+)&0.7213(4.31E-1)(+)&0.9992(2.15E-4)(+)&0.9921(4.21E-5)(+)\\
\cline{3-9}
& &15&0.9990(3.18E-3)&0.9984(7.23E-4)(+)&0.9987(3.20E-4)(+)&0.6922(5.45E-1)(+)&0.9992(3.99E-3)(-)&\textbf{0.9994(2.91E-4)(-)}\\
\cline{3-9}
& &20&\textbf{0.9990(2.32E-3)}&0.9983(3.82E-4)(+)&0.9977(7.24E-4)(+)&0.7672(3.88E-1)(+)&0.9989(3.68E-4)(=)&0.9890(3.80E-4)(+)\\
\cline{2-9}
&\multirow{3}{*}{DTLZ7}
&8&\textbf{0.6999(8.97E-3)}&0.6959(2.72E-5)(=)&0.5439(6.37E-5)(+)&0.2122(4.82E-2)(+)&0.6894(4.24E-4)(=)&0.5466(8.12E-2)(+)\\
\cline{3-9}
& &15&0.3592(1.20E-2)&0.2769(2.33E-5)(+)&0.2119(2.42E-2)(+)&0.1999(7.68E-5)(+)&\textbf{0.4070(9.50E-4)(-)}&0.2804(2.11E-2)(+)\\
\cline{3-9}
& &20&\textbf{0.5261(7.46E-4)}&0.2348(3.29E-4)(+)&0.4325(1.12E-4)(+)&0.0986(5.31E-4)(+)&0.5206(7.58E-2)(+)&0.5166(8.24E-5)(+)\\
\hline
\multirow{15}{*}{\rotatebox{90}{\textbf{Concave}}}&\multirow{3}{*}{DTLZ2}
&8&0.7174(3.96E-3)&\textbf{0.8132(2.78E-3)(-)}&0.5221(3.83E-3)(+)&0.1121(3.34E-2)(+)&0.6821(3.68E-3)(+)&0.7320(3.66E-3)(+)\\
\cline{3-9}
& &15&\textbf{0.9268(2.62E-3)}&0.8832(9.11E-3)(+)&0.3329(1.73E-2)(+)&0.0892(4.12E-2)(+)&0.9020(3.48E-3)(+)&0.8599(6.82E-3)(+)\\
\cline{3-9}
& &20&0.8905(6.80E-3)&\textbf{0.9660(3.23E-3)(-)}&0.3298(2.10E-2)(+)&0.0633(5.32E-2)(+)&0.9443(3.19E-3)(-)&0.9307(4.38E-3)(-)\\
\cline{2-9}
&\multirow{3}{*}{DTLZ3}
&8&0.4664(9.25E-2)&0.0055(3.80E-4)(+)&\textbf{0.5169(5.68E-3)(-)}&0.0085(0.76E-5)(+)&0.4572(0.54E-3)(=)&0.3537(5.31E-4)(+)\\
\cline{3-9}
& &15&0.6984(6.68E-2)&0.0091(0.78E-5)(+)&0.3030(4.43E-3)(+)&0.0133(1.07E-5)(+)&\textbf{0.7183(9.62E-2)(-)}&0.5961(0.05E-2)(+)\\
\cline{3-9}
& &20&\textbf{0.7476(7.52E-2)}&0.0002(6.48E-4)(+)&0.2162(4.51E-4)(+)&0.0065(5.47E-4)(+)&0.6491(2.96E-2)(+)&0.7317(7.45E-4)(+)\\
\cline{2-9}
&\multirow{3}{*}{DTLZ4}
&8&\textbf{0.8338(3.31E-3)}&0.8187(6.22E-4)(+)&0.5322(5.87E-2)(+)&0.2537(2.08E-4)(+)&0.8159(3.01E-4)(+)&0.8302(4.71E-3)(=)\\
\cline{3-9}
& &15&\textbf{0.9548(1.66E-3)}&0.9537(4.24E-4)(=)&0.3150(5.08E-3)(+)&0.1957(0.86E-4)(+)&0.9267(2.62E-5)(+)&0.9188(8.01E-3)(+)\\
\cline{3-9}
& &20&0.9824(1.33E-3)&\textbf{0.9947(1.37E-3)(-)}&0.2755(7.21E-5)(+)&0.2101(1.07E-4)(+)&0.9854(6.54E-2)(-)&0.9797(4.94E-3)(+)\\
\cline{2-9}
&\multirow{3}{*}{DTLZ5}
&8&0.4190(0.64E-3)&0.3908(7.67E-3)(+)&0.3174(7.15E-5)(+)&0.0451(2.24E-5)(+)&0.3474(0.88E-3)(+)&\textbf{0.6401(2.65E-4)(-)}\\
\cline{3-9}
& &15&\textbf{0.2677(9.71E-3)}&0.2178(5.34E-5)(+)&0.1821(8.85E-2)(+)&0.0418(8.99E-5)(+)&0.1379(7.84E-4)(+)&0.1257(3.46E-3)(+)\\
\cline{3-9}
& &20&0.2101(5.57E-3)&0.3390(0.44E-2)(-)&0.1790(0.99E-4)(+)&0.0423(4.90E-2)(+)&0.3606(5.52E-2)(-)&\textbf{0.4139(5.49E-2)(-)}\\
\cline{2-9}
&\multirow{3}{*}{DTLZ6}
&8&\textbf{0.7202(8.94E-4)}&0.2866(0.22E-5)(+)&0.3037(7.95E-3)(+)&0.0548(7.36E-4)(+)&0.2467(9.72E-2)(+)&0.4547(4.17E-3)(+)\\
\cline{3-9}
& &15&\textbf{0.7756(8.36E-5)}&0.4385(7.14E-3)(+)&0.6748(4.56E-2)(+)&0.1957(0.86E-4)(+)&0.4918(9.97E-2)(+)&0.7314(1.56E-2)(+)\\
\cline{3-9}
& &20&0.8639(0.20E-3)&\textbf{0.9081(8.32E-2)(-)}&0.5170(5.20E-4)(+)&0.1080(6.17E-2)(+)&0.4438(2.39E-4)(+)&0.3002(5.79E-5)(+)\\
\hline
\multicolumn{4}{c|}{+/=/-}&14/2/5&20/0/1&21/0/0&12/3/6&16/1/4\\
\hline
\end{tabular}
\end{center}
\end{table*}
\subsubsection{Genetic Operators}
The SBX~\cite{deb1994simulated} and polynomial mutation~\cite{deb1996combined} are employed as the genetic operators. Moreover, the probabilities of crossover and mutation are set to $1$ and $1/n$, respectively. The distribution indexes of mutation and crossover are set to $20$, except for NSGA-III, whose crossover distribution index is specifically set to $30$ based on the recommendation of its developers~\cite{deb2014evolutionary}.
An evolutionary algorithm is employed in the proposed nadir point estimation method for constructing the Utopian PF. To be specific, SBX and polynomial mutation, whose distribution indexes are both set to $20$ and whose probabilities are set to $0.9$ and $1/n$, respectively, are utilized as the genetic operators. In addition, the population sizes are set to be the same as those in Table~\ref{two_layers_setting}, and the number of generations is set to $1,000$ in all cases. Besides, the balance parameter $\lambda$ in~(\ref{equ_extrme_point}) is specified as $100$.
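For reference, both operators can be sketched as follows; this is a minimal textbook version (per-variable application without the usual $0.5$ swap probability or crossover-rate gating), not the exact implementation used in the experiments:

```python
import random

def sbx(p1, p2, eta=20.0):
    """Simulated binary crossover (unbounded sketch). A larger eta
    concentrates children near their parents, which is how a large
    distribution index restricts mating without an explicit
    neighbourhood. Each child pair preserves the parents' mean."""
    c1, c2 = [], []
    for x1, x2 in zip(p1, p2):
        u = random.random()
        beta = (2 * u) ** (1 / (eta + 1)) if u <= 0.5 \
            else (1 / (2 * (1 - u))) ** (1 / (eta + 1))
        c1.append(0.5 * ((1 + beta) * x1 + (1 - beta) * x2))
        c2.append(0.5 * ((1 - beta) * x1 + (1 + beta) * x2))
    return c1, c2

def polynomial_mutation(x, low, high, eta=20.0, pm=None):
    """Polynomial mutation with per-variable probability pm
    (defaulting to the 1/n rate used above), clipped to the bounds."""
    pm = pm if pm is not None else 1.0 / len(x)
    y = []
    for xi, lo, hi in zip(x, low, high):
        if random.random() < pm:
            u = random.random()
            delta = (2 * u) ** (1 / (eta + 1)) - 1 if u < 0.5 \
                else 1 - (2 * (1 - u)) ** (1 / (eta + 1))
            xi = min(max(xi + delta * (hi - lo), lo), hi)
        y.append(xi)
    return y
```

Note the mean-preserving property of SBX: for every variable, the two children satisfy $c_1 + c_2 = x_1 + x_2$ regardless of the drawn spread factor $\beta$.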
\subsection{Experimental Results and Analysis}
In this subsection, the results generated by the competing algorithms over the considered test problems with the specified objective numbers, measured by the selected performance metric, are presented and analyzed to highlight the superiority of the proposed algorithm in addressing MaOPs. Specifically, the mean values and standard deviations of the HV results over the DTLZ1-DTLZ7 and WFG1-WFG9 test problems are listed in Tables~\ref{hv_results_on_dtlz1-4} and~\ref{hv_results_on_wfg1-9}, respectively. The boldface numbers indicate the best mean values over the corresponding test problem with a given objective number (the second and third columns in Tables~\ref{hv_results_on_dtlz1-4} and \ref{hv_results_on_wfg1-9}) among all compared algorithms. Moreover, the symbols ``+,'' ``-,'' and ``='' indicate that the proposed algorithm performs significantly better than, significantly worse than, or statistically equivalently to the corresponding peer competitor, as determined by the considered rank-sum test at the $5\%$ significance level. In addition, the last rows of Tables~\ref{hv_results_on_dtlz1-4} and \ref{hv_results_on_wfg1-9} summarize how many times the proposed algorithm performs better than, equal to, and worse than the chosen peer competitor, respectively. In order to conveniently relate the experimental results to the well-designed proximity distance assignments in the proposed MaOEA/IGD and the conclusion in Section~\ref{section_3_3}, the test problems are grouped into ``Convex,'' ``Linear,'' and ``Concave'' categories based on their respective features, as displayed in the first columns of Tables~\ref{hv_results_on_dtlz1-4} and \ref{hv_results_on_wfg1-9}. Note that, although the PFs of DTLZ7 and WFG1 are mixed, they are classified into the ``Linear'' category because their PF shapes are closer to linear.
From the HV results on the DTLZ1-DTLZ7 test problems (Table~\ref{hv_results_on_dtlz1-4}), it is clearly shown that MaOEA/IGD achieves the best performance among its peer competitors upon the $8$- and $20$-objective DTLZ1 and DTLZ7, while it is slightly outperformed upon the $15$-objective DTLZ1 by KnEA and upon the $15$-objective DTLZ7 by RVEA. Furthermore, MaOEA/IGD also achieves the best scores on the $8$- and $15$-objective DTLZ4 and DTLZ6, but is defeated by NSGA-III upon these two problems with $20$ objectives. Although NSGA-III and KnEA show better performance upon the $8$- and $20$-objective DTLZ2 and DTLZ5, MaOEA/IGD is the winner upon the $15$-objective DTLZ2 and DTLZ5. In addition, MaOEA/IGD achieves the best score upon the $20$-objective DTLZ3.
\begin{table*}[!htb]
\caption{HV results of MaOEA/IGD against NSGA-III, MOEA/D, HypE, RVEA, and KnEA over WFG1-WFG9 with $8$-, $15$-, and $20$-objective.}
\label{hv_results_on_wfg1-9}
\begin{center}
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\hline
& & &MaOEA/IGD&NSGA-III&MOEA/D&HypE&RVEA&KnEA\\
\hline
\multirow{3}{*}{\rotatebox{90}{\textbf{Convex}}}&\multirow{3}{*}{WFG2}
&8&\textbf{0.9839(2.00E-2)}&0.9587(9.10E-3)(+)&0.9474(9.09E-2)(+)&0.9514(5.92E-4)(+)&0.9380(3.33E-3)(+)&0.9686(8.53E-2)(+)\\
\cline{3-9}
& &15&0.9362(7.76E-3)&\textbf{0.9672(3.16E-2)(-)}&0.9402(7.00E-4)(-)&0.6216(6.25E-3)(+)&0.9475(9.71E-4)(-)&0.9360(8.37E-3)(=)\\
\cline{3-9}
& &20&\textbf{0.9782(2.52E-2)}&0.9624(0.11E-2)(+)&0.9460(5.73E-2)(+)&0.8354(7.90E-4)(+)&0.9657(4.04E-2)(+)&0.8106(5.11E-2)(+)\\
\hline
\multirow{6}{*}{\rotatebox{90}{\textbf{Linear}}}&\multirow{3}{*}{WFG1}
&8&\textbf{0.9578(1.30E-1)}&0.9255(9.48E-2)(+)&0.9454(0.61E-4)(+)&0.6528(5.85E-4)(+)&0.8383(2.85E-4)(+)&0.5847(8.28E-2)(+)\\
\cline{3-9}
& &15&0.9354(5.02E-3)&0.9536(3.29E-3)(-)&0.9405(6.50E-2)(=)&0.6395(9.75E-3)(+)&\textbf{0.9538(1.84E-3)(-)}&0.9373(4.98E-2)(+)\\
\cline{3-9}
& &20&\textbf{0.9806(2.64E-2)}&0.9405(8.01E-2)(+)&0.9593(1.43E-2)(+)&0.6340(4.78E-2)(+)&0.9020(5.43E-2)(+)&0.9374(8.84E-2)(+)\\
\cline{2-9}
& \multirow{3}{*}{WFG3}
&8&0.8615(1.08E-1)&0.8423(7.55E-2)(+)&0.8521(7.42E-4)(=)&0.5660(8.31E-3)(+)&0.8457(2.34E-2)(+)&\textbf{0.8618(1.57E-2)(=)}\\
\cline{3-9}
& &15&0.3346(4.58E-3)&0.5091(4.10E-2)(-)&0.3393(1.32E-2)(=)&0.2852(5.41E-3)(+)&\textbf{0.5188(2.43E-2)(-)}&0.5076(8.26E-4)(-)\\
\cline{3-9}
& &20&\textbf{0.5750(3.13E-2)}&0.5313(3.89E-2)(+)&0.4863(4.29E-2)(+)&0.2918(9.56E-3)(+)&0.3425(5.73E-2)(+)&0.4579(8.50E-2)(+)\\
\hline
\multirow{18}{*}{\rotatebox{90}{\textbf{Concave}}}&\multirow{3}{*}{WFG4}
&8&0.7800(2.64E-2)&\textbf{0.7877(7.02E-2)(=)}&0.7507(3.75E-4)(+)&0.7229(9.74E-3)(+)&0.7648(7.29E-2)(+)&0.7715(1.74E-4)(+)\\
\cline{3-9}
& &15&0.8314(5.76E-3)&0.6315(0.01E-2)(+)&0.8414(0.03E-2)(-)&0.4801(0.87E-2)(+)&0.8154(2.61E-3)(+)&\textbf{0.8690(0.23E-2)(-)}\\
\cline{3-9}
& &20&\textbf{0.8108(3.86E-2)}&0.7885(4.68E-3)(+)&0.7707(8.61E-2)(+)&0.7309(4.67E-2)(+)&0.8094(8.35E-3)(=)&0.4794(7.43E-2)(+)\\
\cline{2-9}
& \multirow{3}{*}{WFG5}
&8&0.8653(1.10E-1)&0.7993(4.57E-4)(+)&0.4429(6.68E-2)(+)&0.4915(6.99E-3)(+)&0.8638(6.51E-3)(+)&\textbf{0.8741(3.09E-2)(-)}\\
\cline{3-9}
& &15&\textbf{0.8335(6.31E-3)}&0.6408(1.69E-2)(+)&0.3397(0.01E-2)(+)&0.4351(4.18E-2)(+)&0.7553(4.88E-2)(+)&0.8276(1.60E-3)(+)\\
\cline{3-9}
& &20&\textbf{0.8905(6.28E-3)}&0.8022(9.87E-4)(+)&0.4964(0.84E-2)(+)&0.3125(2.50E-2)(+)&0.7946(9.13E-4)(+)&0.7258(6.64E-3)(+)\\
\cline{2-9}
& \multirow{3}{*}{WFG6}
&8&0.9785(3.12E-2)&0.9918(8.77E-2)(-)&0.9488(8.06E-4)(+)&0.9261(4.61E-2)(+)&\textbf{0.9976(6.97E-4)(-)}&0.9876(3.68E-2)(-)\\
\cline{3-9}
& &15&\textbf{0.9357(8.22E-3)}&0.8756(5.13E-3)(+)&0.8327(2.41E-4)(+)&0.8406(2.60E-3)(+)&0.8538(0.21E-2)(+)&0.9223(8.21E-2)(+)\\
\cline{3-9}
& &20&\textbf{0.8854(1.82E-2)}&0.8189(2.62E-2)(+)&0.8489(5.80E-2)(+)&0.7633(8.78E-4)(+)&0.7968(5.83E-2)(+)&0.7502(5.00E-3)(+)\\
\cline{2-9}
& \multirow{3}{*}{WFG7}
&8&\textbf{0.8858(5.43E-3)}&0.8118(7.25E-2)(+)&0.7430(8.58E-4)(+)&0.7416(3.48E-2)(+)&0.8192(2.51E-2)(+)&0.7635(5.82E-2)(+)\\
\cline{3-9}
& &15&0.8352(6.48E-3)&\textbf{0.8780(3.82E-2)(-)}&0.7343(7.92E-4)(+)&0.4030(8.39E-3)(+)&0.6366(1.79E-4)(+)&0.5463(1.70E-3)(+)\\
\cline{3-9}
& &20&0.7919(3.89E-3)&0.8482(1.35E-4)(-)&0.4844(9.14E-3)(+)&0.6418(6.41E-3)(+)&\textbf{0.8706(5.71E-1)(-)}&0.8116(9.03E-3)(-)\\
\cline{2-9}
& \multirow{3}{*}{WFG8}
&8&0.6839(1.78E-2)&\textbf{0.6869(1.39E-2)(-)}&0.4405(3.49E-3)(+)&0.3155(1.51E-3)(+)&0.5908(5.04E-3)(+)&0.6850(5.72E-3)(-)\\
\cline{3-9}
& &15&\textbf{0.7340(7.08E-3)}&0.5470(5.14E-3)(+)&0.3412(8.14E-4)(+)&0.2065(0.97E-2)(+)&0.6455(5.90E-4)(+)&0.6246(1.24E-2)(+)\\
\cline{3-9}
& &20&0.7831(2.08E-2)&0.6855(5.75E-3)(+)&0.4928(9.16E-2)(+)&0.1117(4.95E-4)(+)&\textbf{0.7844(8.87E-3)(=)}&0.6027(4.21E-2)(+)\\
\cline{2-9}
& \multirow{3}{*}{WFG9}
&8&\textbf{0.7694(8.69E-0)}&0.7328(8.73E-3)(+)&0.4488(0.55E-3)(+)&0.3030(5.00E-2)(+)&0.7444(3.41E-2)(+)&0.7528(4.91E-2)(+)\\
\cline{3-9}
& &15&\textbf{0.8329(8.16E-3)}&0.6105(0.13E-2)(+)&0.3359(7.18E-4)(+)&0.2176(3.91E-2)(+)&0.7294(0.34E-3)(+)&0.6595(4.06E-2)(+)\\
\cline{3-9}
& &20&\textbf{0.7829(2.79E-2)}&0.7299(5.76E-4)(+)&0.4881(8.07E-4)(+)&0.1923(6.55E-1)(+)&0.7128(8.78E-3)(+)&0.7824(5.36E-2)(=)\\
\hline
\multicolumn{4}{c|}{+/=/-}&19/1/7&22/3/2&27/0/0&20/2/5&18/3/6\\
\hline
\end{tabular}
\end{center}
\end{table*}
The HV results on the WFG1-WFG9 test problems generated by the competing algorithms are listed in Table~\ref{hv_results_on_wfg1-9}. For the $8$-objective WFG test problems, MaOEA/IGD shows a better performance on WFG1, WFG2, WFG7, and WFG9 than its peer competitors, and performs slightly worse than KnEA on WFG5, RVEA on WFG6, and NSGA-III on WFG8. Although MaOEA/IGD does not obtain the best scores on WFG3 and WFG4, it obtains statistically similar results to the respective winners (i.e., KnEA and NSGA-III). For the $15$-objective test problems, MaOEA/IGD shows a better performance on WFG5, WFG6, WFG8, and WFG9 than the competing algorithms, while it performs worse than RVEA on WFG2 and WFG3, NSGA-III on WFG2, and KnEA on WFG4. Although NSGA-III performs better than MaOEA/IGD on WFG7, MaOEA/IGD performs better than all the other peer competitors. In addition, MaOEA/IGD wins over NSGA-III, MOEA/D, HypE, RVEA, and KnEA on the $20$-objective WFG1, WFG2, WFG3, WFG4, WFG5, WFG6, and WFG9, but underperforms on WFG7 and WFG8, on which RVEA performs better.
Briefly, MaOEA/IGD wins 9 out of the 12 comparisons upon the test problems whose PF shapes are linear (i.e., DTLZ1, DTLZ7, WFG1, and WFG3), which can be interpreted as follows: owing to the linear feature of the PF, the reference points sampled from the Utopian PF are themselves Pareto-optimal solutions, so the proximity distance assignment for the solutions with rank value $r_2$ takes effect. Furthermore, MaOEA/IGD shows competitive performance on the WFG2 test problem, whose PF is convex. Because the sampled reference points on the Utopian PF are all non-dominated with respect to the Pareto-optimal solutions, the proximity distances for solutions with rank $r_1$ in MaOEA/IGD take effect in this situation. In addition, it is not surprising that MaOEA/IGD obtains better results on most of the other test problems, whose PFs are concave, because the reference points utilized to maintain the diversity and convergence of the proposed algorithm dominate the solutions uniformly generated from the PF. In summary, the proposed algorithm shows considerable competitiveness against the considered competing algorithms in addressing the selected MaOPs, as measured by the HV performance metric.
Theoretically, the major shortcoming of the HV indicator compared with IGD is its much higher computational complexity. However, note from Tables~\ref{hv_results_on_dtlz1-4} and~\ref{hv_results_on_wfg1-9} that the proposed algorithm, which is designed based on the IGD indicator, outperforms HypE, which is motivated by the HV indicator, upon all test problems with the selected numbers of objectives, although the number of function evaluations for HypE is set to a much larger value. The deficiencies of HypE in this regard are explained as follows. First, it has been reported in~\cite{auger2009theory,ishibuchi2010many,yuan2016balancing,yuan2016new} that the HV result is largely affected by the nadir point of the problem to be optimized. In HypE, the nadir point is determined as the evolution continues. In this way, the obtained nadir point can be inaccurate during the early stage of the evolution process (the reasons have been discussed in the review of nadir point estimation approaches in Section~\ref{section_2}), which leads to the worse performance of HypE. Second, the HV values in HypE for solving MaOPs are estimated by Monte Carlo simulation, and the number of sampling points in the Monte Carlo simulation is critical to its success~\cite{bader2011hype}. In practice, that number is unknown, and its unavailability may lead to poor performance.
\subsection{Investigation on Nadir Point Estimation}
\label{section_experiment_nadir}
In this subsection, we investigate the performance of the proposed DNPE in estimating the nadir point. To be specific, two peer competitors, WC-NSGA-II and PCSEA, which have been discussed in Section~\ref{section_2}, are utilized for comparisons on the selected test problems. In these comparisons, the number of function evaluations for each compared algorithm is counted until 1) the metric $E$ formulated by~(\ref{equ_nadir_metric}) satisfies $E \leq 0.01$,
\begin{equation}
\label{equ_nadir_metric}
E=\sqrt{\sum_{i=1}^m(z^{nad}_i-z_i)^2/(z^{nad}_i-z^*_i)^2}
\end{equation}
where $z_i$ denotes the $i$-th element of the estimated nadir point derived from the extreme points generated by the compared algorithm, or 2) the maximum number of function evaluations, $100,000$, is met. The experimental results for DTLZ1, DTLZ2, and WFG2 with $8$, $10$, $15$, and $20$ objectives are plotted in Fig.~\ref{fig_nadir_point_comparison}. Please note that these three test problems are chosen because they cover various PF shapes (i.e., DTLZ1, DTLZ2, and WFG2 have linear, concave, and convex PFs, respectively) and different characteristics of objective value scales (i.e., DTLZ1 and DTLZ2 have identical objective value scales while WFG2 does not). Specifically, the ideal points of DTLZ1, DTLZ2, and WFG2 are $\{0,\cdots,0\}$, and the nadir points are $\{0.5,\cdots, 0.5\}$, $\{1,\cdots,1\}$, and $\{2, 4, \cdots, 2m\}$, respectively. In addition, the population size is specified as $200$, the probabilities of SBX and polynomial mutation are set to $0.9$ and $1/n$, and both distribution indexes are set to $20$. Because the proposed DNPE estimates the nadir point based on decomposition, $E \leq 0.01/m$ and a maximum function evaluation number of $100,000/m$ are set as the stopping criteria for estimating each extreme point.
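The metric in~(\ref{equ_nadir_metric}) can be computed directly; the helper below is a straightforward transcription:

```python
from math import sqrt

def nadir_error(z_est, z_nad, z_star):
    """E from the metric above: the distance of the estimated nadir
    point z_est from the true nadir z_nad, with each objective scaled
    by the nadir-to-ideal range (z_nad - z_star)."""
    return sqrt(sum((zn - z) ** 2 / (zn - zs) ** 2
                    for z, zn, zs in zip(z_est, z_nad, z_star)))
```

For example, with $z^{nad}=\{1,1\}$ and $z^*=\{0,0\}$, an estimate of $\{0.9,1.0\}$ gives $E=0.1$, which exceeds the $0.01$ threshold above.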
\begin{figure}[htp]
\begin{center}
\subfloat[DTLZ1]{\includegraphics[width=0.75\columnwidth]{nadir_dtlz1}%
\label{fig_nadir_dtlz1}}
\hfil
\subfloat[DTLZ2]{\includegraphics[width=0.75\columnwidth]{nadir_dtlz2}%
\label{fig_nadir_dtlz2}}
\hfil
\subfloat[WFG2]{\includegraphics[width=0.75\columnwidth]{nadir_wfg2}%
\label{fig_nadir_wfg2}}
\caption{The numbers of function evaluations performed by WC-NSGA-II, PCSEA, and DNPE on DTLZ1, DTLZ2, and WFG2 with $8$-, $10$-, $15$-, and $20$-objective.}
\label{fig_nadir_point_comparison}
\end{center}
\end{figure}
The results obtained by the compared nadir point estimation methods on the $8$-, $10$-, $15$-, and $20$-objective DTLZ1, DTLZ2, and WFG2 are illustrated in Figs.~\ref{fig_nadir_dtlz1},~\ref{fig_nadir_dtlz2}, and~\ref{fig_nadir_wfg2}, respectively. It is clearly shown in Fig.~\ref{fig_nadir_dtlz1} that all the compared algorithms find satisfactory nadir points of DTLZ1, whose PF is linear, within the predefined maximum number of function evaluations, and that the proposed DNPE requires the fewest function evaluations over the four considered objective numbers. Moreover, WC-NSGA-II cannot find the nadir point of the $10$-, $15$-, and $20$-objective DTLZ2 (with a concave PF) and WFG2 (with a convex PF), and PCSEA cannot find the nadir point of WFG2, whose objective value scales differ, while the proposed DNPE performs well on both test problems with all considered objective numbers. In addition, as can be seen from Figs.~\ref{fig_nadir_dtlz1} and~\ref{fig_nadir_dtlz2}, the proposed DNPE is scalable with the number of objectives in estimating the nadir points of MaOPs. In summary, the proposed DNPE shows promising performance in estimating the nadir points of MaOPs with different PF features and objective scales.
\section{Conclusion and Future Works}
\label{section_5}
In this paper, an IGD indicator-based evolutionary algorithm is proposed for solving many-objective optimization problems. To obtain a set of uniformly distributed reference points for computing the IGD indicator, a decomposition-based nadir point estimation method is designed to construct a Utopian PF from which the reference points can be easily sampled. To compensate for the Utopian PF standing in for the true PF when sampling the reference points, a rank assignment mechanism is proposed that compares the dominance relations between solutions and reference points; on this basis, three types of proximity distance assignments are designed to distinguish the quality of solutions sharing the same front rank. In addition, the linear assignment principle is used as the selection mechanism to choose representatives, simultaneously promoting the convergence and diversity of the proposed algorithm. In summary, building on the proposed nadir point estimation method, the dominance comparison approach, the rank and proximity distance assignments, and the selection mechanism collectively drive the evolution of the proposed algorithm towards the PF with promising diversity. To assess the performance of the proposed algorithm, a series of experiments is performed on two widely used benchmark test suites with $8$, $15$, and $20$ objectives; the results, measured by the selected performance metric, indicate that the proposed algorithm is highly competitive in solving many-objective optimization problems. In addition, the proposed algorithm is applied to a real-world many-objective optimization problem, where the satisfactory results further demonstrate its superiority.
Moreover, the proposed decomposition-based nadir point estimation method is compared against two competitors on three representative test problems (DTLZ1, DTLZ2, and WFG2) with challenging PF shapes and objective value scales, and the experimental results confirm its effectiveness. In the near future, we will focus our efforts on two essential aspects: 1) constructing a more accurate PF with the limited information available prior to obtaining the Pareto-optimal solutions, so as to support indicator-based algorithms that require uniformly distributed reference points, and 2) extending the proposed algorithm to constrained many-objective optimization problems.
\IEEEpeerreviewmaketitle
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Architectures}\label{app:architectures}
\input{tables/architecture.tex}
\newpage
\section{Hyperparameters}\label{app:hyperparameters}
We observed that extensive hyperparameter tuning is not necessary to achieve state-of-the-art performance. To demonstrate this, we restrict our hyperparameter search for each task to $\lambda_d = \sset{0, 10^{-2}}, \lambda_s = \sset{0, 1}, \lambda_t = \sset{10^{-2}, 10^{-1}}$ in all experiments with instance-normalized inputs. We fixed $\beta = 10^{-2}$. Note that the decision to turn $(\lambda_d, \lambda_s)$ on or off can often be made \emph{a priori} based on prior belief regarding the extent of covariate shift. In the absence of such prior belief, a reliable choice is $(\lambda_d = 10^{-2}, \lambda_s = 1, \lambda_t = 10^{-2}, \beta = 10^{-2})$.
\input{tables/hyperparameters.tex}
When the target domain is MNIST/MNIST-M, the task is sufficiently simple that we only allocate $B = 500$ iterations to each optimization problem in \cref{eq:dirtt}. In all other cases, we set the refinement interval $B = 5000$. We apply the Adam optimizer (learning rate $= 0.001$, $\beta_1 = 0.5$, $\beta_2 = 0.999$) with Polyak averaging (more precisely, we apply an exponential moving average with momentum $=0.998$ to the parameter trajectory). VADA was trained for $80000$ iterations; DIRT-T takes VADA as initialization and was trained for $\set{20000, 40000, 60000, 80000}$ iterations, with the number of iterations chosen as a hyperparameter.
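The Polyak-style averaging above can be sketched as a simple exponential moving average over the parameter trajectory (our own minimal illustration; the toy momentum is exaggerated so the effect is visible, whereas the experiments use $0.998$):

```python
import numpy as np

def ema_update(shadow, params, momentum=0.998):
    """One exponential-moving-average step over the parameter trajectory;
    the shadow parameters are what is used at evaluation time."""
    return momentum * shadow + (1.0 - momentum) * params

# Toy trajectory with an exaggerated momentum so the effect is visible.
shadow = np.zeros(2)
for params in [np.ones(2), np.ones(2)]:
    shadow = ema_update(shadow, params, momentum=0.5)
# shadow is now [0.75, 0.75]: 0.5 * (0.5 * 0 + 0.5 * 1) + 0.5 * 1
```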
\section{Replacing Gradient Reversal}\label{app:gradientreversal}
We note from \cite{goodfellow2014gan} that the gradient $\nabla_\theta\ln (1 - D(f_\theta(x)))$ tends to have smaller norm than $-\nabla_\theta\ln D(f_\theta(x))$ during initial training, since the latter rescales the gradient by $1 / D(f_\theta(x))$. Following this observation, we replace the gradient reversal procedure with alternating minimization of
\begin{align}
&\min_D
-\Expect_{x \sim \D_s} \brac{\ln D(f_\theta(x))} -
\Expect_{x \sim \D_t} \brac{\ln (1 - D(f_\theta(x)))} \nonumber\\
&\min_\theta
-\Expect_{x \sim \D_t} \brac{\ln D(f_\theta(x))} -
\Expect_{x \sim \D_s} \brac{\ln (1 - D(f_\theta(x)))}. \nonumber
\end{align}
The choice of using gradient reversal versus alternating minimization reflects a difference in choice of approximating the mini-max using saturating versus non-saturating optimization \citep{fedus2017many}. In some of our initial experiments, we observed the replacement of gradient reversal with alternating minimization stabilizes domain adversarial training. However, we encourage practitioners to try either optimization strategy when applying VADA.
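The alternating objectives above can be written down directly. The sketch below (our own, evaluating the two losses for a fixed discriminator output rather than a trained network) illustrates why the non-saturating encoder loss gives a stronger signal early in training:

```python
import numpy as np

def disc_loss(d_src, d_tgt):
    """Discriminator objective: label source features 1, target features 0."""
    return -np.mean(np.log(d_src)) - np.mean(np.log(1.0 - d_tgt))

def enc_loss(d_src, d_tgt):
    """Encoder objective (non-saturating): swap the domain labels instead of
    ascending the discriminator's loss, as gradient reversal would."""
    return -np.mean(np.log(d_tgt)) - np.mean(np.log(1.0 - d_src))

# Early in training the discriminator separates the domains easily:
d_src, d_tgt = np.array([0.9]), np.array([0.1])
l_d = disc_loss(d_src, d_tgt)   # ~0.211: discriminator is doing well
l_e = enc_loss(d_src, d_tgt)    # ~4.605: encoder still gets a large loss
```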
\section{Instance Normalization for Domain Adaptation}\label{app:instancenorm}
\Cref{thm:adaptation} suggests that we should identify ways of constraining the hypothesis space without hurting the global optimal classifier for the joint task. We propose to further constrain our model by introducing instance normalization as an image pre-processing step for the input data. Instance normalization was proposed for style transfer \cite{ulyanov2016instance} and applies the operation
\begin{align}
\ell(x^{(i)}) = \frac{x^{(i)} - \mu(x^{(i)})}{\sigma(x^{(i)})},
\end{align}
where $x^{(i)} \in \mathbb{R}^{H \times W \times C}$ denotes the $i\textsuperscript{th}$ sample with $(H, W, C)$ corresponding to the height, width, and channel dimensions, and where $\mu, \sigma: \mathbb{R}^{H \times W \times C} \to \mathbb{R}^C$ are functions that compute the mean and standard deviation across the spatial dimensions. A notable property of instance normalization is that it is invariant to channel-wide scaling and shifting of the input elements. Formally, consider scaling and shift variables $\gamma, \beta \in \mathbb{R}^C$. If $\gamma \succ 0$ and $\sigma(x^{(i)}) \succ 0$, then
\begin{align}
\ell(x^{(i)}) = \ell(\gamma x^{(i)} + \beta).
\end{align}
For visual data, applying instance normalization to the input layer makes the classifier invariant to channel-wide shifts and scaling of the pixel intensities. For most visual tasks, sensitivity to channel-wide pixel intensity changes is not critical to the success of the classifier. As such, instance normalization of the input may help reduce $d_\HDH$ without hurting the globally optimal classifier. Interestingly, \Cref{fig:inorm} shows that input instance normalization is not equivalent to gray-scaling, since color is partially preserved. To test the effect of instance normalization, we report results both with and without the use of instance-normalized inputs.
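A minimal sketch of this pre-processing step and its claimed invariance (our own illustration, assuming a channel-last array layout; not the exact implementation used in the experiments):

```python
import numpy as np

def instance_norm(x, eps=0.0):
    """Normalize each channel of a single image across the spatial dims."""
    mu = x.mean(axis=(0, 1), keepdims=True)     # per-channel mean, shape (1, 1, C)
    sigma = x.std(axis=(0, 1), keepdims=True)   # per-channel std, shape (1, 1, C)
    return (x - mu) / (sigma + eps)

rng = np.random.default_rng(0)
x = rng.random((8, 8, 3))               # one H x W x C image
gamma = np.array([2.0, 0.5, 3.0])       # channel-wise scale, gamma > 0
beta = np.array([1.0, -1.0, 0.3])       # channel-wise shift
# Invariance: ell(x) == ell(gamma * x + beta)
same = np.allclose(instance_norm(x), instance_norm(gamma * x + beta))
```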
\section{Limitation of Domain Adversarial Training}\label{app:limitation}
We denote the source and target distributions respectively as $p_s(x, y)$ and $p_t(x, y)$. Let the source covariate distribution $p_s(x)$ define the random variable $X_s$ with support $\supp(X_s) = \X_s$, and let $(X_t, \X_t)$ be analogously defined for the target domain. Both $\X_s$ and $\X_t$ are subsets of $\R^n$. Let $p_s(y)$ and $p_t(y)$ define probabilities over the support $\Y = \sset{1, \ldots, K}$. We consider any embedding function $f: \R^n \to \R^m$, where $\R^m$ is the embedding space, and any embedding classifier $g: \R^m \to \C$, where $\C$ is the $(K-1)$-simplex. We denote by $h = g \circ f$ the composite of an embedding function and an embedding classifier.
For simplicity, we restrict our analysis to the simple case where $K = 2$, i.e. where $\Y = \sset{0, 1}$. Furthermore, we assume that for any $\delta \in [0, 1]$, there exists a subset $\Omega \subseteq \R^n$ where $p_s(x \in \Omega) = \delta$. We impose a similar condition on $p_t(x)$.
For a joint distribution $p(x, y)$, we denote the generalization error of a classifier as
\begin{align}
\eps_p(h) = \Expect_{p(x, y)} \abs{y - h(x)}.
\end{align}
Note that for a given classifier $h: \R^n \to [0, 1]$, the corresponding hard classifier is $k(x) = \1{h(x) > 0.5}$. We further define the set $\Omega \subseteq \R^n$ such that
\begin{align}
\Omega = \set{x \in \R^n \giv k(x) = 1} \iff k(x) = \1{x \in \Omega}.
\end{align}
In a slight abuse of notation, we define the generalization error $\eps(\Omega)$ with respect to $\Omega$ as
\begin{align}
\eps_p(\Omega) = \Expect_{p(x, y)} \abs{y - \1{x \in \Omega}} = \eps_p(k).
\end{align}
An optimal $\Omega_p^*$ is a partitioning of $\R^n$
\begin{align}
\eps_p(\Omega_p^*) = \min_{\Omega \subseteq \R^n} \eps_p(\Omega)
\end{align}
such that generalization error under the distribution $p(x, y)$ is minimized.
\subsection{Good Target-Domain Accuracy is not Guaranteed}\label{app:dannmain}
Domain adversarial training seeks to find a single classifier $h$ used for both the source $p_s$ and target $p_t$ distributions. To do so, domain adversarial training sets up the objective
\begin{align}
\min_{f \in \F, g\in \G} ~& \eps_{p_s}(g \circ f) \\
\text{s.t.} ~& g(X_s) = g(X_t),
\end{align}
where $\F$ and $\G$ are the hypothesis spaces for the embedding function and embedding classifier. Intuitively, domain adversarial training operates under the hypothesis that good source generalization error in conjunction with source-target feature matching implies good target generalization error. We shall see, however, that if $\X_s \cap \X_t = \varnothing$ and $\F$ is sufficiently complex, this implication does not necessarily hold.
Let $\F$ contain all functions mapping $\R^n \to \R^m$, i.e. $\F$ has infinite capacity. Suppose $\G$ contains the function $g(z) = \1{z = \one_m}$ and $\X_s \cap \X_t = \varnothing$. We consider the set
\begin{align}
\H^* = \set{g \circ f \giv \exists g \in \G, f \in \F \text{ s.t. } \eps_{p_s}(g \circ f) \le \eps_{p_s}(\Omega^*_{p_s}), f(X_s) = f(X_t)}.
\end{align}
Such a set of classifiers satisfies the feature-matching constraint while achieving source generalization error no worse than the optimal source-domain hard classifier. It suffices to show that $\H^*$ includes hypotheses that perform poorly in the target domain.
We first show $\H^*$ is not an empty set by constructing an element of this set. Choose a partitioning $\Omega$ where
\begin{align}
p_t(x \in \Omega) = p_s(x \in \Omega^*_{p_s}).
\end{align}
Consider the embedding function
\begin{align}
f_\Omega(x) = \begin{cases}
\one_m &\text{if } (x \in \X_s \cap \Omega^*_{p_s}) \vee (x \in \X_t \cap \Omega)\\
\0_m &\text{otherwise}.
\end{cases}
\end{align}
Let $g(z) = \1{z = \one_m}$. It follows that the composite classifier $h_\Omega = g \circ f_\Omega$ is an element of $\H^*$.
Next, we show that a classifier $h \in \H^*$ does not necessarily achieve good target generalization error. Consider the partitioning $\hat{\Omega}$ which solves the following optimization problem
\begin{align}
\max_{\Omega \subseteq \R^n} &~ \eps_{p_t}(\Omega) \\
\text{s.t.} &~ p_t(x \in \Omega) = p_s(x \in \Omega^*_{p_s}).
\end{align}
Such a partitioning $\hat{\Omega}$ is the worst-case partitioning subject to the probability mass constraint. It follows that the worst-case $h' \in \H^*$ has generalization error
\begin{align}
\eps_{p_t}(h') = \max_{h \in \H^*} \eps_{p_t}(h) \ge \eps_{p_t}(h_{\hat{\Omega}}).
\end{align}
To provide intuition that $\eps_{p_t}(h')$ is potentially very large, consider hypothetical source and target domains where $\X_s \cap \X_t = \varnothing$ and $p_t(x \in \Omega^*_{p_t}) = p_s(x \in \Omega^*_{p_s}) = 0.5$. The worst-case partitioning subject to the probability mass constraint is simply $\hat{\Omega} = \R^n \setminus \Omega^*_{p_t}$ (which flips the labels) and consequently, $\H^*$ contains solutions
\begin{align}
\max_{h \in \H^*} \eps_{p_t}(h) \ge 1 - \eps_{p_t}(\Omega^*_{p_t})
\end{align}
no better than the worst-case partitioning of the target domain.
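The construction above can be instantiated numerically. In the hypothetical 1-D example below (our own illustration), the source and target supports are disjoint, the embedding matches the feature distributions exactly, the source error is zero, and yet the target error is maximal because the embedding pairs target points with the wrong source class:

```python
import numpy as np

# Disjoint supports: source in [0, 1], target in [2, 3].
xs = np.array([0.0, 0.25, 0.5, 0.75]); ys = np.array([0, 0, 1, 1])
xt = np.array([2.0, 2.25, 2.5, 2.75]); yt = np.array([0, 0, 1, 1])

def f(x):
    """Infinite-capacity embedding: maps the source class-1 region to 1,
    but maps the *label-flipped* half of the target support to 1."""
    src = (x < 2.0)
    return np.where(src, (x >= 0.5), (x < 2.5)).astype(float)

g = lambda z: (z == 1.0).astype(int)       # embedding classifier
h = lambda x: g(f(x))                      # composite classifier

src_err = np.mean(h(xs) != ys)             # 0.0: optimal on the source
tgt_err = np.mean(h(xt) != yt)             # 1.0: worst case on the target
matched = sorted(f(xs)) == sorted(f(xt))   # feature distributions coincide
```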
\subsection{Connection to \cref{thm:adaptation}}
Let $\F$ contain all functions mapping $\R^n \to \R^m$, i.e. $\F$ has infinite capacity. Suppose $\G$ contains the function $g(z) = \1{z = \one_m}$ and $\X_s \cap \X_t = \varnothing$. We consider the sets
\begin{align}
\H &= \set{g \circ f \giv g \in \G, f \in \F} \\
\bar{\H} &= \set{g \circ f \giv \exists g \in \G, f \in \F \text{ s.t. } f(X_s) = f(X_t)}.
\end{align}
A justification for domain adversarial training is that the $\BDB$-divergence term is smaller than the $\HDH$-divergence, thus yielding a tighter upper bound for \cref{thm:adaptation}. However, we shall see that the $\BDB$-divergence term is in fact maximal.
Choose partitionings $\Omega_s, \Omega_t \subseteq \R^n$ such that
\begin{align}
p_s(x \in \Omega_s) = p_t(x \in \Omega_t) = 0.5.
\end{align}
Define the embedding functions
\begin{align}
f(x) &= \begin{cases}
\one_m &\text{if } (x \in \X_s \cap \Omega_s) \vee (x \in \X_t \cap \Omega_t)\\
\0_m &\text{otherwise}.
\end{cases} \\
f'(x) &= \begin{cases}
\one_m &\text{if } (x \in \X_s \cap \Omega_s) \vee (x \in \X_t \cap (\R^n \setminus \Omega_t))\\
\0_m &\text{otherwise}.
\end{cases}
\end{align}
Let $g'(z) = g(z) = \1{z = \one_m}$. It follows that the composite classifiers $h = g \circ f$ and $h' = g' \circ f'$ are elements of $\bar{\H}$.
From the definition of $d_\HDH$, we see that
\begin{align}
d_\BDB &\ge 2 \abs{
\Expect_{x \sim X_s} \brac{h(x) \neq h'(x)} -
\Expect_{x \sim X_t} \brac{h(x) \neq h'(x)}
} \\
&= 2 \cdot \abs{0 - 1} = 2.
\end{align}
The $\BDB$-divergence thus achieves the maximum value of $2$.
\subsection{Implications}
Our analysis assumes infinite-capacity embedding functions and the ability to solve optimization problems exactly. The empirical success of domain adversarial training suggests that the use of finite-capacity convolutional neural networks combined with stochastic gradient-based optimization provides the necessary regularization for domain adversarial training to work. The theoretical characterization of domain adversarial training in the case of finite-capacity convolutional neural networks and gradient-based learning remains a challenging but important open research problem.
\section{Non-Visual Domain Adaptation Task}\label{app:non-visual}
To evaluate the performance of our models on a non-visual domain adaptation task, we applied VADA and DIRT-T to the Wi-Fi Activity Recognition Dataset \citep{yousefi2017wifi}. The Wi-Fi Activity Recognition Dataset is a classification task that takes the Wi-Fi Channel State Information (CSI) data stream as input $x$ and predicts the motion activity within an indoor area as output $y$. The dataset contains CSI data stream samples associated with seven activities, denoted as ``bed'', ``fall'', ``walk'', ``pick up'', ``run'', ``sit down'', and ``stand up''.
However, the joint distribution over the CSI data stream and motion activity changes depending on the room in which the data was collected. Since the data was collected for multiple rooms, we selected two rooms (denoted here as Room A and Room B) and constructed the unsupervised domain adaptation task by using Room A as the source domain and Room B as the target domain. We compare the performance of DANN, VADA, and DIRT-T on the Wi-Fi domain adaptation task in \Cref{table:wifitable}, using the hyperparameters $(\lambda_d = 0, \lambda_s = 0, \lambda_t = 10^{-2}, \beta = 10^{-2})$.
\Cref{table:wifitable} shows that VADA significantly improves classification accuracy compared to Source-Only and DANN. However, DIRT-T does not lead to further improvements on this dataset. We believe this is attributable to VADA successfully pushing the decision boundary away from data-dense regions in the target domain. As a result, further application of DIRT-T would not lead to better decision boundaries. To validate this hypothesis, we visualize the t-SNE embeddings for VADA and DIRT-T in \Cref{fig:wifitsne} and show that VADA is already capable of yielding strong clustering in the target domain. To verify that the decision boundary indeed did not change significantly, we additionally provide the confusion matrix between the VADA and DIRT-T predictions in the target domain (\cref{fig:confmat}).
\input{figures/wifitsne.tex}
\input{figures/confmat.tex}
\section{Introduction}
The development of deep neural networks has enabled impressive performance in a wide variety of machine learning tasks. However, these advancements often rely on the existence of a large amount of labeled training data. In many cases, direct access to vast quantities of labeled data for the task of interest (the target domain) is either costly or otherwise absent, but labels are readily available for related training sets (the source domain). A notable example of this scenario occurs when the source domain consists of richly-annotated synthetic or semi-synthetic data, but the target domain consists of unannotated real-world data \citep{sun2014virtual,vazquez2014virtual}. However, the source data distribution is often dissimilar to the target data distribution, and the resulting significant covariate shift is detrimental to the performance of the source-trained model when applied to the target domain \citep{shimodaira2000reweight}.
Solving the covariate shift problem of this nature is an instance of domain adaptation \citep{david2010impossibility}. In this paper, we consider a challenging setting of domain adaptation where 1) we are provided with fully-labeled source samples and completely-unlabeled target samples, and 2) the existence of a classifier in the hypothesis space with low generalization error in both source and target domains is not guaranteed. Loosely borrowing the terminology from \cite{david2010impossibility}, we refer to this setting as unsupervised, \emph{non-conservative} domain adaptation. We note that this is in contrast to \emph{conservative} domain adaptation, where we assume our hypothesis space contains a classifier that performs well in both the source and target domains.
To tackle unsupervised domain adaptation, \cite{ganin2015gradientreversal} proposed to constrain the classifier to only rely on domain-invariant features. This is achieved by training the classifier to perform well on the source domain while minimizing the divergence between features extracted from the source versus target domains. To achieve divergence minimization, \cite{ganin2015gradientreversal} employ domain adversarial training. We highlight two issues with this approach: 1) when the feature function has high-capacity and the source-target supports are disjoint, the domain-invariance constraint is potentially very weak (see \cref{sec:dann_limitation}), and 2) good generalization on the source domain hurts target performance in the non-conservative setting.
\cite{saito2017att} addressed these issues by replacing domain adversarial training with asymmetric tri-training (ATT), which relies on the assumption that target samples that are labeled by a source-trained classifier with high confidence \emph{are} correctly labeled by the source classifier. In this paper, we consider an orthogonal assumption: the cluster assumption \citep{chapelle2005cluster}, that the input distribution contains separated data clusters and that data samples in the same cluster share the same class label. This assumption introduces an additional bias where we seek decision boundaries that do not go through high-density regions. Based on this intuition, we propose two novel models: 1) the Virtual Adversarial Domain Adaptation (VADA) model which incorporates an additional virtual adversarial training \citep{miyato2017vat} and conditional entropy loss to push the decision boundaries away from the empirical data, and 2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model which uses natural gradients to further refine the output of the VADA model while focusing purely on the target domain. We demonstrate that
\begin{enumerate}
\item In conservative domain adaptation, where the classifier is trained to perform well on the source domain, VADA can be used to further constrain the hypothesis space by penalizing violations of the cluster assumption, thereby improving domain adversarial training.
\item In non-conservative domain adaptation, where we account for the mismatch between the source and target optimal classifiers, DIRT-T allows us to transition from a joint (source and target) classifier (VADA) to a better target domain classifier. Interestingly, we demonstrate the advantage of natural gradients in DIRT-T refinement steps.
\end{enumerate}
We report results for domain adaptation in digits classification (MNIST-M, MNIST, SYN DIGITS, SVHN), traffic sign classification (SYN SIGNS, GTSRB), general object classification (STL-10, CIFAR-10), and Wi-Fi activity recognition \citep{yousefi2017wifi}. We show that, in nearly all experiments, VADA improves upon previous methods and that DIRT-T improves upon VADA, setting new state-of-the-art performances across a wide range of domain adaptation benchmarks. In adapting MNIST $\to$ SVHN, a very challenging task, we outperform ATT by over $20\%$.
\section{Related Work}\label{sec:relatedwork}
Given the extensive literature on domain adaptation, we highlight several works most relevant to our paper. \cite{shimodaira2000reweight,mansour2009reweight} proposed to correct for covariate shift by re-weighting the source samples such that the discrepancy between the target distribution and re-weighted source distribution is minimized. Such a procedure is problematic, however, if the source and target distributions do not contain sufficient overlap. \cite{huang2007correctingbias,long2015mmd,ganin2015gradientreversal} proposed to instead project both distributions into some feature space and encourage distribution matching in the feature space. \cite{ganin2015gradientreversal} in particular encouraged feature matching via domain adversarial training, which corresponds approximately to Jensen-Shannon divergence minimization \citep{goodfellow2014gan}. To better perform non-conservative domain adaptation, \cite{saito2017att} proposed to modify tri-training \citep{zhou2005tritrain} for domain adaptation, leveraging the assumption that highly-confident predictions are correct predictions \citep{zhu2005semisupsurvey}. Several of the aforementioned methods are based on \cite{ben2010theory}'s theoretical analysis of domain adaptation, which states the following,
\begin{theorem}\label{thm:adaptation}
\citep{ben2010theory} Let $\H$ be the hypothesis space and let $(X_s, \eps_s)$ and $(X_t, \eps_t)$ be the two domains and their corresponding generalization error functions. Then for any $h \in \H$,
\begin{align}
\eps_t(h) \le \frac{1}{2} d_\HDH(X_s, X_t) + \eps_s(h) + \min_{h' \in \H} \brac{\eps_t(h') + \eps_s(h')},
\label{eq:adaptation_bound}
\end{align}
where $d_\HDH$ denotes the $\HDH$-distance between the domains $X_s$ and $X_t$,
\begin{align}
d_\HDH = 2 \sup_{h, h' \in \H} \abs{
\Expect_{x \sim X_s} \brac{h(x) \neq h'(x)} -
\Expect_{x \sim X_t} \brac{h(x) \neq h'(x)}
}.
\label{eq:adaptation_complexity}
\end{align}
\end{theorem}
Intuitively, $d_\HDH$ measures the extent to which small changes to the hypothesis in the source domain can lead to large changes in the target domain. It is evident that $d_\HDH$ relates intimately to the complexity of the hypothesis space and the divergence between the source and target domains. For infinite-capacity models and domains with disjoint supports, $d_\HDH$ is maximal.
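As a sanity check of the last claim, the sketch below (our own toy computation, not part of the original analysis) estimates $d_{\mathcal{H}\Delta\mathcal{H}}$ empirically for 1-D threshold classifiers over two disjoint-support samples; with a threshold lying between the supports, the divergence attains its maximal value of $2$:

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 50)    # source samples
xt = np.linspace(2.0, 3.0, 50)    # target samples: disjoint support

thresholds = np.linspace(-1.0, 4.0, 101)
h = lambda t: (lambda x: x > t)   # hypothesis class: 1-D threshold classifiers

def dhdh(xs, xt, thresholds):
    """Empirical H-delta-H distance: sup over pairs (h, h') of the gap in
    disagreement probability between the two samples, times 2."""
    best = 0.0
    for t1 in thresholds:
        for t2 in thresholds:
            ds = np.mean(h(t1)(xs) != h(t2)(xs))
            dt = np.mean(h(t1)(xt) != h(t2)(xt))
            best = max(best, 2.0 * abs(ds - dt))
    return best

d = dhdh(xs, xt, thresholds)      # 2.0: maximal, since the supports are disjoint
```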
A critical component to our paper is the cluster assumption, which states that decision boundaries should not cross high-density regions \citep{chapelle2005cluster}. This assumption has been extensively studied and leveraged for semi-supervised learning, leading to proposals such as conditional entropy minimization \citep{grandvalet2005entropymin} and pseudo-labeling \citep{lee2013pseudo}. More recently, the cluster assumption has led to many successful deep semi-supervised learning algorithms such as semi-supervised generative adversarial networks \citep{dai2017badgan}, virtual adversarial training \citep{miyato2017vat}, and self/temporal-ensembling \citep{laine2016temporal,tarvainen2017mean}. Given the success of the cluster assumption in semi-supervised learning, it is natural to consider its application to domain adaptation. Indeed, \cite{ben2014domain} formalized the cluster assumption through the lens of probabilistic Lipschitzness and proposed a nearest-neighbors model for domain adaptation. Our work extends this line of research by showing that the cluster assumption can be applied to deep neural networks to solve complex, high-dimensional domain adaptation problems. Independently of our work, \cite{french2017selfensembling} demonstrated the application of self-ensembling to domain adaptation. However, our work additionally considers the application of the cluster assumption to non-conservative domain adaptation.
\section{Limitation of Domain Adversarial Training}\label{sec:dann_limitation}
Before describing our model, we first highlight that domain adversarial training may not be sufficient for domain adaptation if the feature extraction function has high capacity. Consider a classifier $h_\theta$, parameterized by $\theta$, that maps inputs to the $(K-1)$-simplex (denoted $\mathcal{C}$), where $K$ is the number of classes. Suppose the classifier $h_\theta = g_\theta \circ f_\theta$ can be decomposed as the composite of an embedding function $f_\theta: \X \to \Z$ and an embedding classifier $g_\theta: \Z \to \mathcal{C}$. For the source domain, let $\D_s$ be the joint distribution over input $x$ and one-hot label $y$ and let $X_s$ be the marginal input distribution. $(\D_t, X_t)$ are analogously defined for the target domain. Let $(\L_y, \L_d)$ be the loss functions
\begin{align}
\L_y(\theta; \D_s) &= -\Expect_{x, y \sim \D_s} \brac{ y^\top \ln h_\theta(x)} \\
\L_d(\theta; \D_s, \D_t) &= \sup_D
\Expect_{x \sim \D_s} \brac{\ln D(f_\theta(x))} +
\Expect_{x \sim \D_t} \brac{\ln (1 - D(f_\theta(x)))},
\end{align}
where the supremum ranges over discriminators $D: \Z \to (0,1)$. Then $\L_y$ is the cross-entropy objective and $D$ is a domain discriminator. Domain adversarial training minimizes the objective
\begin{align}
\minimize_\theta \L_y(\theta; \D_s) + \lambda_d \L_d(\theta; \D_s, \D_t),
\end{align}
where $\lambda_d$ is a weighting factor. Minimization of $\L_d$ encourages the learning of a feature extractor $f$ for which the Jensen-Shannon divergence between $f(X_s)$ and $f(X_t)$ is small.\footnote{In practice, the minimization of $\L_d$ requires solving a mini-max optimization problem. We discuss this in more detail in \cref{app:gradientreversal}.} \cite{ganin2015gradientreversal} suggest that successful adaptation tends to occur when the source generalization error and feature divergence are both small.
It is easy, however, to construct situations where this suggestion fails. In particular, if $f$ has infinite-capacity and the source-target supports are disjoint, then $f$ can employ arbitrary transformations to the target domain so as to match the source feature distribution (see \cref{app:limitation} for formalization). We verify empirically that, for sufficiently deep layers, jointly achieving small source generalization error and feature divergence does not imply high accuracy on the target task (\cref{table:layer_ablation}). Given the limitations of domain adversarial training, we wish to identify additional constraints that one can place on the model to achieve better, more reliable domain adaptation.
\section{Constraining via Conditional Entropy Minimization}
\input{figures/vada.tex}
In this paper, we apply the cluster assumption to domain adaptation. The cluster assumption states that the input distribution $X$ contains clusters and that points in the same cluster come from the same class. This assumption has been extensively studied and applied successfully to a wide range of classification tasks (see \cref{sec:relatedwork}). If the cluster assumption holds, the optimal decision boundaries should occur far away from data-dense regions in the space of $\X$ \citep{chapelle2005cluster}. Following \cite{grandvalet2005entropymin}, we achieve this behavior via minimization of the conditional entropy with respect to the target distribution,
\begin{align}
\L_c(\theta; \D_t) = -\Expect_{x \sim \D_t} \brac{h_\theta(x)^\top \ln h_\theta(x)}.
\end{align}
Intuitively, minimizing the conditional entropy forces the classifier to be confident on the unlabeled target data, thus driving the classifier's decision boundaries away from the target data \citep{grandvalet2005entropymin}. In practice, the conditional entropy must be empirically estimated using the available data. However, \cite{grandvalet2005entropymin} note that this approximation breaks down if the classifier $h$ is not locally Lipschitz. Without the locally-Lipschitz constraint, the classifier is allowed to change its prediction abruptly in the vicinity of the training data points, which 1) results in an unreliable empirical estimate of the conditional entropy and 2) allows the decision boundaries to be placed close to the training samples even when the empirical conditional entropy is minimized. To prevent this, we propose to explicitly incorporate the locally-Lipschitz constraint via virtual adversarial training \citep{miyato2017vat} and add to the objective function the additional term
\begin{align}
\L_v(\theta; \D) = \Expect_{x \sim \D} \brac{\max_{\| r\| \le \epsilon} \KL(h_{\theta}(x) \| h_\theta(x + r))},
\label{eq:vat}
\end{align}
which enforces classifier consistency within the norm-ball neighborhood of each sample $x$. Note that virtual adversarial training can be applied with respect to either the target or source distributions. We can combine the conditional entropy minimization objective and domain adversarial training to yield
\begin{align}
\minimize_\theta \L_y(\theta; \D_s) +
\lambda_d \L_d(\theta; \D_s, \D_t) +
\lambda_s\L_v(\theta; \D_s) +
\lambda_t\brac{\L_v(\theta; \D_t) + \L_c(\theta; \D_t)},
\label{eq:vada}
\end{align}
a basic combination of domain adversarial training and semi-supervised training objectives. We refer to this as the Virtual Adversarial Domain Adaptation (VADA) model. Empirically, we observed that the hyperparameters $(\lambda_d, \lambda_s, \lambda_t)$ are easy to choose and work well across multiple tasks (\cref{app:hyperparameters}).
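The target-side conditional entropy term can be sketched directly from its definition (a minimal illustration assuming softmax outputs; the virtual adversarial term is omitted here since it requires model gradients):

```python
import numpy as np

def conditional_entropy(probs, eps=1e-12):
    """L_c: average entropy of the classifier's predictive distributions;
    small when the classifier is confident on the (unlabeled) target data."""
    return -np.mean(np.sum(probs * np.log(probs + eps), axis=1))

uniform = np.full((4, 2), 0.5)              # maximally uncertain predictions
confident = np.array([[0.999, 0.001]] * 4)  # decision boundary far from the data
h_u = conditional_entropy(uniform)          # ~ln 2, about 0.693
h_c = conditional_entropy(confident)        # ~0.008
```

Minimizing this quantity over the target samples is what pushes the decision boundaries into low-density regions.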
\textbf{$\HDH$-Distance Minimization}. VADA aligns well with the theory of domain adaptation provided in \cref{thm:adaptation}. Let the loss,
\begin{align}
\L_t(\theta) = \L_v(\theta; \D_t) + \L_c(\theta; \D_t),
\label{eq:proxy}
\end{align}
denote the degree to which the target-side cluster assumption is violated. Modulating $\lambda_t$ enables VADA to trade-off between hypotheses with low target-side cluster assumption violation and hypotheses with low source-side generalization error. Setting $\lambda_t > 0$ allows rejection of hypotheses with high target-side cluster assumption violation. By rejecting such hypotheses from the hypothesis space $\H$, VADA reduces $d_\HDH$ and yields a tighter bound on the target generalization error. We verify empirically that VADA achieves significant improvements over existing models on multiple domain adaptation benchmarks (\cref{table:accuracy}).
\section{Decision-boundary Iterative Refinement Training}
\input{figures/dirtt.tex}
In non-conservative domain adaptation, we assume the following inequality,
\begin{align}
\min_{h \in \H} \eps_t(h) < \eps_t(h^a) \text{ where } h^a = \argmin_{h \in \H} \eps_s(h) + \eps_t(h),
\label{eq:gap}
\end{align}
where $(\eps_s, \eps_t)$ are generalization error functions for the source and target domains. This means that, for a given hypothesis class $\H$, the optimal classifier in the source domain does not coincide with the optimal classifier in the target domain.
We assume that the optimality gap in \cref{eq:gap} results from violation of the cluster assumption. In other words, we suppose that any source-optimal classifier drawn from our hypothesis space \emph{necessarily} violates the cluster assumption in the target domain. Insofar as VADA is trained on the source domain, we hypothesize that a better hypothesis is achievable by introducing a secondary training phase that solely minimizes the target-side cluster assumption violation.
Under this assumption, the natural solution is to initialize with the VADA model and then further minimize the cluster assumption violation in the target domain. In particular, we first use VADA to learn an initial classifier $h_{\theta_0}$. Next, we incrementally push the classifier's decision boundaries away from data-dense regions by minimizing the target-side cluster assumption violation loss $\L_t$ in \cref{eq:proxy}. We denote this procedure Decision-boundary Iterative Refinement Training (DIRT).
\subsection{Decision-boundary Iterative Refinement Training with a Teacher}
Stochastic gradient descent minimizes the loss $\L_t$ by selecting gradient steps $\Delta \theta$ according to the following objective,
\begin{align}
\minimize_{\Delta \theta} &~ \L_t(\theta + \Delta \theta) \nonumber \\
\text{s.t.} &~ \| \Delta \theta \| \le \epsilon,
\label{eq:gradient}
\end{align}
which defines the neighborhood in the parameter space. This notion of neighborhood is sensitive to the parameterization of the model; depending on the parameterization, a seemingly small step $\Delta \theta$ may result in a vastly different classifier. This contradicts our intention of incrementally and locally pushing the decision boundaries to a local conditional entropy minimum, which requires that the decision boundaries of $h_{\theta+\Delta\theta}$ stay close to those of $h_\theta$. It is therefore important to define a neighborhood that is parameterization-invariant. Following \cite{pascanu2013revisit}, we instead select $\Delta \theta$ using the following objective,
\begin{align}
\minimize_{\Delta \theta} &~ \L_t(\theta + \Delta \theta) \nonumber \\
\text{s.t.} &~ \Expect_{x \sim \D_t} \brac{\KL(h_\theta(x) \| h_{\theta+\Delta\theta}(x))} \le \epsilon.
\end{align}
Each optimization step now solves for a gradient step $\Delta \theta$ that minimizes the conditional entropy, subject to the constraint that the Kullback-Leibler divergence between $h_\theta(x)$ and $h_{\theta+\Delta \theta}(x)$ is small for $x \sim \X_t$. The corresponding Lagrangian suggests that one can instead minimize a sequence of optimization problems
\begin{align}
\minimize_{\theta_n} &~ \lambda_t \L_t(\theta_n) + \beta\Expect_{x \sim \D_t}\brac{\KL(h_{\theta_{n-1}}(x) \| h_{\theta_n}(x))},
\label{eq:dirtt}
\end{align}
that approximates the application of a series of natural gradient steps.
In practice, each of the optimization problems in \cref{eq:dirtt} can be solved approximately via a finite number of stochastic gradient descent steps. We denote the number of steps taken to be the refinement interval $B$. Similar to \cite{tarvainen2017mean}, we use the Adam Optimizer with Polyak averaging \citep{polyak1992average}. We interpret $h_{\theta_{n-1}}$ as a (sub-optimal) teacher for the student model $h_{\theta_n}$, which is trained to stay close to the teacher model while seeking to reduce the cluster assumption violation. As a result, we denote this model as Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T).
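The refinement procedure can be sketched as follows in Python. This is a toy illustration, not our implementation: $\L_t$ is reduced here to conditional entropy alone (the VAT term is omitted), the model is a linear-softmax classifier, and gradients are taken by finite differences rather than backprop with Adam and Polyak averaging; all data and hyperparameter values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def h(theta, x):
    """Toy linear-softmax classifier standing in for h_theta."""
    return softmax(x @ theta)

def cond_entropy(theta, x, eps=1e-12):
    p = h(theta, x)
    return -np.mean(np.sum(p * np.log(p + eps), axis=1))

def dirtt_objective(theta_student, theta_teacher, x,
                    lam_t=1e-2, beta=1e-2, eps=1e-12):
    """One problem in the DIRT-T sequence: target-side cluster-assumption
    violation (here just conditional entropy) plus a KL penalty keeping
    the student close to the frozen teacher."""
    p_t = h(theta_teacher, x)
    p_s = h(theta_student, x)
    kl = np.mean(np.sum(p_t * (np.log(p_t + eps) - np.log(p_s + eps)), axis=1))
    return lam_t * cond_entropy(theta_student, x) + beta * kl

def num_grad(f, theta, step=1e-5):
    """Finite-difference gradient -- for illustration only."""
    g = np.zeros_like(theta)
    for idx in np.ndindex(*theta.shape):
        e = np.zeros_like(theta)
        e[idx] = step
        g[idx] = (f(theta + e) - f(theta - e)) / (2 * step)
    return g

xt = rng.normal(size=(64, 5))               # unlabeled target samples
theta = rng.normal(scale=0.1, size=(5, 3))  # e.g. the VADA solution
ent_start = cond_entropy(theta, xt)

B = 20   # refinement interval: gradient steps per teacher update
for n in range(3):             # a few outer DIRT-T iterations
    teacher = theta.copy()     # freeze the previous iterate as the teacher
    for _ in range(B):
        theta = theta - num_grad(lambda t: dirtt_objective(t, teacher, xt),
                                 theta)

ent_end = cond_entropy(theta, xt)
```

Within each outer iteration the KL penalty starts at zero (student equals teacher), so any decrease of the objective must reduce the cluster-assumption violation while the student drifts only gradually from the teacher.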
\textbf{Weakly-Supervised Learning}. This sequence of optimization problems has a natural interpretation that exposes a connection to weakly-supervised learning. In each optimization problem, the teacher model $h_{\theta_{n-1}}$ pseudo-labels the target samples with noisy labels. Rather than naively training the student model $h_{\theta_n}$ on the noisy labels, the additional training signal $\L_t$ allows the student model to place its decision boundaries further from the data. If the cluster assumption holds and the initial noisy labels are sufficiently similar to the true labels, conditional entropy minimization can improve the placement of the decision boundaries \citep{reed2014bootstrap}.
\textbf{Domain Adaptation}. An alternative interpretation is that DIRT-T is the \emph{recursive} extension of VADA, where the act of pseudo-labeling the target distribution constructs a new ``source'' domain (i.e. target distribution $X_t$ with pseudo-labels). The sequence of optimization problems can then be seen as a sequence of non-conservative domain adaptation problems in which $X_s = X_t$ but $p_s(y \giv x) \neq p_t(y \giv x)$, where $p_s(y \giv x) = h_{\theta_{n-1}}(x)$ and $p_t(y \giv x)$ is the true conditional label distribution in the target domain. Since $d_\HDH$ is strictly zero in this sequence of optimization problems, domain adversarial training is no longer necessary. Furthermore, if $\L_t$ minimization does improve the student classifier, then the gap in \cref{eq:gap} should get smaller each time the source domain is updated.
\section{Experiments}
In principle, our method can be applied to any domain adaptation task so long as one can define a reasonable notion of neighborhood for virtual adversarial training \citep{miyato2016vattext}. For comparison against \cite{saito2017att} and \cite{french2017selfensembling}, we focus on visual domain adaptation and evaluate on MNIST, MNIST-M, Street View House Numbers (SVHN), Synthetic Digits (SYN DIGITS), Synthetic Traffic Signs (SYN SIGNS), the German Traffic Signs Recognition Benchmark (GTSRB), CIFAR-10, and STL-10. For non-visual domain adaptation, we evaluate on Wi-Fi activity recognition.
\subsection{Implementation Detail}
\textbf{Architecture}. We use a small CNN for the digits, traffic sign, and Wi-Fi domain adaptation experiments, and a larger CNN for domain adaptation between CIFAR-10 and STL-10. Both architectures are available in \cref{app:architectures}. For fair comparison, we additionally report the performance of source-only baseline models and demonstrate that the significant improvements are attributable to our proposed method.
\textbf{Replacing gradient reversal}. In contrast to \cite{ganin2015gradientreversal}, which proposed to implement domain adversarial training via gradient reversal, we follow \cite{goodfellow2014gan} and instead optimize via alternating updates to the discriminator and encoder (see \cref{app:gradientreversal}).
\textbf{Instance normalization}. We explored the application of instance normalization as an image pre-processing step. This procedure makes the classifier invariant to channel-wide shifts and rescaling of pixel intensities. A discussion of instance normalization for domain adaptation is provided in \cref{app:instancenorm}. We show in \Cref{fig:inorm} the effect of applying instance normalization to the input image.
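The pre-processing step can be sketched in a few lines of Python. This is a minimal illustration of per-image, per-channel standardization, which is what we take instance normalization of the input to mean here; the stabilizing epsilon and channel-last layout are assumptions of this sketch rather than details from the appendix.

```python
import numpy as np

def instance_norm(x, eps=1e-6):
    """Standardize each image independently, per channel, so the classifier
    becomes invariant to channel-wide shifts and rescalings of intensity.
    x: batch of images with shape (N, H, W, C)."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    return (x - mean) / (std + eps)

rng = np.random.default_rng(0)
batch = 5.0 + 3.0 * rng.random((8, 32, 32, 3))  # arbitrary intensity scale
out = instance_norm(batch)
```

Note that because normalization statistics are computed per image rather than per batch, the transform requires no training-set statistics and can be applied identically to source and target inputs.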
\input{figures/inorm.tex}
\textbf{Hyperparameters}. For each task, we tuned the four hyperparameters $(\lambda_d, \lambda_s, \lambda_t, \beta)$ by randomly selecting $1000$ labeled target samples from the training set and using that as our validation set. We observed that extensive hyperparameter-tuning is not necessary to achieve state-of-the-art performance. In all experiments with instance-normalized inputs, we restrict our hyperparameter search for each task to $\lambda_d = \sset{0, 10^{-2}}, \lambda_s = \sset{0, 1}, \lambda_t = \sset{10^{-2}, 10^{-1}}$. We fixed $\beta = 10^{-2}$. Note that the decision to turn $(\lambda_d, \lambda_s)$ on or off can often be determined \emph{a priori}. A complete list of the hyperparameters is provided in \cref{app:hyperparameters}.
\subsection{Model Evaluation}
\input{tables/accuracy.tex}
\textbf{MNIST $\to$ MNIST-M}. We first evaluate the adaptation from MNIST to MNIST-M. MNIST-M is constructed by blending MNIST digits with random color patches from the BSDS500 dataset.
\textbf{MNIST $\leftrightarrow$ SVHN}. The distribution shift is exacerbated when adapting between MNIST and SVHN. Whereas MNIST consists of black-and-white handwritten digits, SVHN consists of crops of colored street house numbers. Because MNIST has a significantly lower intrinsic dimensionality than SVHN, the adaptation from MNIST $\to$ SVHN is especially challenging when the input is not pre-processed via instance normalization. When instance normalization is applied, we achieve a strong state-of-the-art performance of $76.5\%$ and an equally impressive margin-of-improvement over source-only of $35.6\%$. Interestingly, by reducing the refinement interval $B$ and taking noisier natural gradient steps, we were occasionally able to achieve accuracies as high as $87\%$. However, due to the high variance associated with this configuration, we omit it from \cref{table:accuracy}.
\textbf{SYN DIGITS $\to$ SVHN}. The adaptation from SYN DIGITS $\to$ SVHN reflects a common adaptation problem of transferring from synthetic images to real images. The SYN DIGITS dataset consists of $500{,}000$ images generated from Windows fonts by varying the text, positioning, orientation, background, stroke color, and the amount of blur.
\textbf{SYN SIGNS $\to$ GTSRB}. This setting provides an additional demonstration of adapting from synthetic images to real images. Unlike SYN DIGITS $\to$ SVHN, SYN SIGNS $\to$ GTSRB contains 43 classes instead of 10.
\textbf{STL $\leftrightarrow$ CIFAR}. Both STL-10 and CIFAR-10 are 10-class image datasets. These two datasets contain nine overlapping classes. Following the procedure in \cite{french2017selfensembling}, we removed the non-overlapping classes (``frog'' and ``monkey''), reducing the task to a 9-class classification problem. We achieve state-of-the-art performance in both adaptation directions. In STL $\to$ CIFAR, we achieve an $11.7\%$ margin-of-improvement and a performance accuracy of $73.3\%$. Note that because STL-10 contains a very small training set, it is difficult to estimate the conditional entropy, thus making DIRT-T unreliable for CIFAR $\to$ STL.
\input{tables/wifitable.tex}
\textbf{Wi-Fi Activity Recognition}. To evaluate the performance of our models on a non-visual domain adaptation task, we applied VADA and DIRT-T to the Wi-Fi Activity Recognition Dataset \citep{yousefi2017wifi}. The Wi-Fi Activity Recognition Dataset is a classification task that takes the Wi-Fi Channel State Information (CSI) data stream as input $x$ to predict motion activity within an indoor area as output $y$. Domain adaptation is necessary when the training and testing data are collected from different rooms, which we denote as Rooms A and B. \Cref{table:wifitable} shows that VADA significantly improves classification accuracy compared to Source-Only and DANN by $17.3\%$ and $15\%$, respectively. However, DIRT-T does not lead to further improvements on this dataset. We perform experiments in \cref{app:non-visual} which suggest that VADA already achieves strong clustering in the target domain for this dataset, and therefore DIRT-T is not expected to yield further performance improvement.
\input{tables/delta.tex}
\textbf{Overall}. We achieve state-of-the-art results across all tasks. For a fairer comparison against ATT and the $\Pi$-model, \Cref{table:delta} provides the improvement margin over the respective source-only performance reported in each paper. In four of the tasks (MNIST $\to$ MNIST-M, SVHN $\to$ MNIST, MNIST $\to$ SVHN, STL $\to$ CIFAR), we achieve a substantial margin of improvement compared to previous models. In the remaining three tasks, our improvement margin over the source-only model is competitive against previous models. Our closest competitor is the $\Pi$-model. However, unlike the $\Pi$-model, we do not perform data augmentation.
It is worth noting that DIRT-T consistently improves upon VADA. Since DIRT-T operates by incrementally pushing the decision boundaries away from the target domain data, it relies heavily on the cluster assumption. DIRT-T's empirical success therefore demonstrates the effectiveness of leveraging the cluster assumption in unsupervised domain adaptation with deep neural networks.
\subsection{Analysis of VADA and DIRT-T}
\subsubsection{Role of Virtual Adversarial Training}\label{sec:ablation}
To study the relative contribution of virtual adversarial training to the VADA and DIRT-T objectives (\cref{eq:vada} and \cref{eq:dirtt} respectively), we perform an extensive ablation analysis in \cref{table:accuracy_ablation}. The removal of the virtual adversarial training component is denoted by the ``no-vat'' subscript. Our results show that VADA$_\text{no-vat}$ is sufficient for out-performing DANN in all but one task. The further ability of DIRT-T$_\text{no-vat}$ to improve upon VADA$_\text{no-vat}$ demonstrates the effectiveness of conditional entropy minimization. Ultimately, in six of the seven tasks, both virtual adversarial training and conditional entropy minimization are essential for achieving the best performance. The empirical importance of incorporating virtual adversarial training shows that the locally-Lipschitz constraint is beneficial for pushing the classifier decision boundaries away from data.
\input{tables/accuracy_ablation.tex}
\subsubsection{Role of Teacher Model in DIRT-T}
\input{figures/kl.tex}
When considering \cref{eq:dirtt}, it is natural to ask whether defining the neighborhood with respect to the classifier is truly necessary. In \Cref{fig:kl}, we demonstrate in SVHN $\to$ MNIST and STL $\to$ CIFAR that removal of the KL-term negatively impacts the model. Since the MNIST data manifold is low-dimensional and contains easily identifiable clusters, applying naive gradient descent (\cref{eq:gradient}) can also boost the test accuracy during initial training. However, without the KL constraint, the classifier can sometimes deviate significantly from the neighborhood of the previous classifier, and the resulting spikes in the KL-term correspond to sharp drops in target test accuracy. In STL $\to$ CIFAR, where the data manifold is much more complex and contains less obvious clusters, naive gradient descent causes immediate decline in the target test accuracy.
\subsubsection{Visualization of Representation}
\input{figures/tsne.tex}
We further analyze the behavior of VADA and DIRT-T by showing t-SNE embeddings of the last hidden layer of the model trained to adapt from MNIST $\to$ SVHN. In \Cref{fig:tsne}, source-only training shows strong clustering of the MNIST samples (blue) and performs poorly on SVHN (red). VADA offers significant improvement and exhibits signs of clustering on SVHN. DIRT-T begins with the VADA initialization and further enhances the clustering, resulting in the best performance on MNIST $\to$ SVHN.
\subsection{Domain Adversarial Training: Layer Ablation}
\input{tables/layer_ablation.tex}
In \Cref{table:layer_ablation}, we applied domain adversarial training to various layers of a Domain Adversarial Neural Network \citep{ganin2015gradientreversal} trained to adapt MNIST $\to$ SVHN. With the exception of layers $L - 2$ and $L - 0$, which experienced training instability, the general observation is that as the layer gets deeper, the additional capacity of the corresponding embedding function allows better matching of the source and target distributions without hurting source generalization accuracy. This demonstrates that the combination of low divergence and high source accuracy does not imply better adaptation to the target domain. Interestingly, when the classifier is regularized to be locally-Lipschitz via VADA, the combination of low divergence and high source accuracy appears to correlate more strongly with better adaptation.
\section{Conclusion}
In this paper, we presented two novel models for domain adaptation inspired by the cluster assumption. Our first model, VADA, performs domain adversarial training with an added term that penalizes violations of the cluster assumption. Our second model, DIRT-T, is an extension of VADA that recursively refines the VADA classifier by untethering the model from the source training signal and applying approximate natural gradients to further minimize the cluster assumption violation. Our experiments demonstrate the effectiveness of the cluster assumption: VADA achieves strong performance across several domain adaptation benchmarks, and DIRT-T further improves VADA performance. Our proposed models open up several possibilities for future work. One possibility is to apply DIRT-T to weakly supervised learning; another is to improve the natural gradient approximation via K-FAC \citep{martens2015kfac} and PPO \citep{schulman2017ppo}. Given the strong performance of our models, we also recommend them for other downstream domain adaptation applications.
\subsubsection*{Acknowledgments}
We gratefully acknowledge funding from Adobe, NSF (grants \#1651565, \#1522054, \#1733686), Toyota Research Institute, Future of Life Institute, and Intel. We also thank Daniel Levy, Shengjia Zhao, and Jiaming Song for insightful discussions, and the anonymous reviewers for their helpful comments and suggestions.